If you’re writing a pixelated game that performs large magnification of textures, you’re probably using the nearest texture filter, so your game will look like the image on the right instead of the one on the left.

The problem with nearest texel sampling is that it’s susceptible to aliasing if the texels are not aligned with the screen pixels, which can happen if you apply transformations such as rotation and shearing to your textured polygons. Ideally, we would like to have smooth transitions between neighboring texels in the final image, as shown in the figure below:

## Manual texture filtering

One way to achieve this result is by performing linear interpolation between texels on the edges of each texel in the fragment shader, but sampling the nearest texel everywhere else. A simple way to achieve this is by activating WebGL’s linear filtering, and playing with UV coordinates so that the graphics card will perform the actual interpolation between texels for you.

We know that the texture coordinates **t’** under a nearest filter can be calculated by:

**t’** = (floor(**t** · **<w, h>**) + **<0.5, 0.5>**) / **<w, h>**,

where *w* and *h* are the texture width and height, respectively. The **<0.5, 0.5>** offset makes our fragment shader sample at the center of each texel, which is important since we have enabled linear filtering. In order to have smooth transitions between texels, this offset should be replaced by a function that increases linearly at the margin of the texel, remains constant at its “blocky” region (with a value of 0.5) and then increases to 1.0 on the opposite margin of the texel, like this:

By doing this with the UV coordinates, the video card will automatically interpolate your texels whenever they are sampled at their margins, effectively producing an anti-aliased blocky effect like the knight shown above.
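Before moving to the smooth version, the plain nearest-filter adjustment is worth sketching on the CPU. This is a minimal per-axis Python sketch of the idea above (the function name is mine, not part of any shader):

```python
import math

def nearest_uv(t, size):
    """Per-axis nearest-filter coordinate: snap t (in [0, 1]) to the
    center of the texel containing it, where size is the texture width
    (or height) in texels."""
    return (math.floor(t * size) + 0.5) / size

# With a 4-texel-wide texture, every coordinate inside a texel maps to
# that texel's center, so even a linear sampler returns an unblended texel:
print(nearest_uv(0.10, 4))  # center of texel 0: 0.125
print(nearest_uv(0.30, 4))  # center of texel 1: 0.375
```

Because every coordinate inside a texel collapses to the same center, the linear filter never gets a chance to blend; the smooth offset function below reintroduces blending only near texel margins.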

## Closed formula for the offset function

The offset function displayed in the plot above could easily be implemented with a bunch of conditional logic, but I personally steer clear of conditional statements in GLSL programs for performance and legibility reasons. Having said that, our offset function can be formulated as the sum of two clamped linear functions, illustrated below:

Here, *x* is the fractional part of the texture coordinate *u* after it is scaled from **[0, 1]** to **[0, w]**. That is, *x* = fract(*u* · *w*). The same logic also applies to the texture coordinate *v*, which leads to the following formula:

offset(*x*) = clamp(*x* / (2α), 0, 0.5) + clamp((*x* − 1) / (2α) + 0.5, 0, 0.5)
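This is the same clamp arithmetic the fragment shader uses at the end of the article. As a sanity check, here is a CPU-side Python sketch of the offset function (helper names are mine):

```python
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def offset(x, alpha):
    """Sum of two clamped linear ramps; x = fract(u * w), 0 < alpha < 0.5."""
    return (clamp(0.5 / alpha * x, 0.0, 0.5)
            + clamp(0.5 / alpha * (x - 1.0) + 0.5, 0.0, 0.5))

# With alpha = 0.1 the function ramps from 0 to 0.5 over [0, 0.1],
# stays at 0.5 over the "blocky" region [0.1, 0.9], and ramps from
# 0.5 to 1.0 over [0.9, 1.0]:
print(offset(0.05, 0.1))  # halfway up the left ramp (~0.25)
print(offset(0.50, 0.1))  # flat region: 0.5
print(offset(0.95, 0.1))  # halfway up the right ramp (~0.75)
```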

## Meaning of the α parameter

The value of α determines how smooth the transition between texels will be, and it must be in the range **]0, 0.5[**. For α=0, the transition between texels would be crisp, since such a value leaves no room for linear interpolation; that is, the final result would be equivalent to the nearest filter. For α=0.5, every coordinate inside the texels will be subject to linear interpolation, which is equivalent to just using a linear filter. The ideal value for α really depends on how stretched your textures will be: the larger the stretching, the smaller your α should be.
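Both limiting cases are easy to verify numerically. Below is a small Python sketch (the offset function is reproduced so the snippet runs standalone; note that α = 0 itself would divide by zero, so the nearest-filter limit is approached with a tiny α):

```python
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def offset(x, alpha):
    return (clamp(0.5 / alpha * x, 0.0, 0.5)
            + clamp(0.5 / alpha * (x - 1.0) + 0.5, 0.0, 0.5))

# alpha = 0.5: the offset is the identity, so the GPU's linear
# interpolation applies everywhere -- a plain linear filter.
for x in (0.1, 0.25, 0.7, 0.9):
    assert abs(offset(x, 0.5) - x) < 1e-12

# alpha -> 0: almost every coordinate snaps to the texel center (0.5),
# reproducing the nearest filter.
assert abs(offset(0.3, 1e-6) - 0.5) < 1e-12
assert abs(offset(0.7, 1e-6) - 0.5) < 1e-12
```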

Ideally, your program should automatically determine the best value of α given the depth of the fragment and your camera parameters (including the canvas size), but that’s something I’ll talk about in the future.

## Putting it all together

The final equation that gives us **uv** coordinates that smooth a magnified, pixelated texture is as follows:

**uv’** = (floor(**uv** · **<w, h>**) + offset(**x**)) / **<w, h>**,

where **x** = fract(**uv** · **<w, h>**) and the offset function is applied componentwise.

The term **uv** · **<w, h>** can be computed on the vertex shader. If you’re on OpenGL, you could also try to use the **flat** modifier to disable interpolating it, and see if that gives any performance boost. In WebGL GLSL, the vertex and fragment shaders for our filter are as follows:

```glsl
varying vec2 vUv;

void main() {
    const float w = 32.0, h = 64.0;
    vUv = uv * vec2(w, h);
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
```

```glsl
precision highp float;

varying vec2 vUv;
uniform sampler2D texSampler;

void main(void) {
    // I chose alpha=0.1 because it looked nice in my demo
    const float w = 32.0, h = 64.0, alpha = 0.1;

    vec2 x = fract(vUv);
    vec2 x_ = clamp(0.5 / alpha * x, 0.0, 0.5) +
              clamp(0.5 / alpha * (x - 1.0) + 0.5, 0.0, 0.5);

    gl_FragColor = texture2D(texSampler, (floor(vUv) + x_) / vec2(w, h));
}
```

Notice that some attributes and uniforms are not declared in my code. That’s because I used THREE.JS to make my demo.
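For reference, the fragment shader’s arithmetic can be replicated on the CPU, which is handy for testing outside the GPU. Below is a small Python port (the function name and default parameters are my choices, mirroring the constants in the shader above):

```python
import math

def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def smooth_uv(u, v, w=32.0, h=64.0, alpha=0.1):
    """Per-fragment coordinate adjustment: returns (u', v') to feed a
    linearly-filtered sampler."""
    def one_axis(t, size):
        s = t * size                  # vUv = uv * vec2(w, h)
        x = s - math.floor(s)         # fract(vUv)
        x_ = (clamp(0.5 / alpha * x, 0.0, 0.5)
              + clamp(0.5 / alpha * (x - 1.0) + 0.5, 0.0, 0.5))
        return (math.floor(s) + x_) / size
    return one_axis(u, w), one_axis(v, h)

# Deep inside a texel the adjusted coordinate is the texel center,
# exactly what a nearest filter would sample:
print(smooth_uv(0.5 / 32.0, 0.5 / 64.0))  # (0.015625, 0.0078125)
```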

## Final thoughts

The discussed method is recommended if your scene meets the following two criteria:

- **You perform large magnification of your texture**: The knight texture used in the header of this article has 32×64 pixels, but was scaled up to cover a rectangle of size 264×413 pixels (before the rotation was applied). That’s the equivalent of taking each texel and making it more than 8 times bigger. In cases like this, linear filtering will just make the texture blurry, and nearest filtering might introduce unwanted aliasing.
- **Your objects undergo rotation or shearing**: There’s a reason why I rotated the knight shown in the header of this article: if the texels were aligned with the screen pixels, then there would be no aliasing at all and a nearest filter would suffice for my purpose.

### Update

This discussion has been extended here, where I talk about a way to automatically compute the best α independently of the polygon position.

Hey, thanks for your article! But I have a problem. I use transparency a lot, and on all the edges from a transparent pixel to a full color pixel I get these white stripes. You can see it here: http://i.imgur.com/cRvEGgc.gif

I guess it’s because the transparent pixel is white, but has alpha = 0.

I tried floor(alpha), but then I get this: http://i.imgur.com/Pzf6qRL.gif

Do you have any idea how to solve it?

Looks like your game won’t have any perspective distortion. If that’s the case, have you experimented with a smaller transition region (the alpha value described in my post)?

Have you come up with a method to calculate α depending on the ratio of the number of texels to fragments? Can we somehow know this sampling ratio in the shader?

Thanks so much for writing this! I’m working on a 2D pixel art game in Unity, and for a long time, I’ve been trying to figure out how to scale the pixel art without making it look bad. It’s surprisingly hard to find the solution to this online – most places just say it’s not possible to make it look good unless you scale it to an integer multiple (2x, 3x, etc.). I tried your approach, and it looks really good at various resolutions!