Loading a skeletal animation? Linearize it!

When I first studied the principles behind skeletal animation, I decided to implement it by interpolating between poses on the CPU and doing the skinning in the vertex shader, for performance reasons. I was thrilled to see some of the computational weight being transferred from the CPU to the GPU, where it could be handled with heavy parallelization.

As performance was my main concern, I also tried to squeeze the most out of my CPU in the pose interpolation process: avoiding conditional logic (which, in turn, avoids branch mispredictions), interpolating quaternions with LERP instead of SLERP (the difference in the animation is not noticeable if you have a large number of samples), normalizing vectors with an invsqrt routine, and using function pointers to discriminate bones that require interpolation from those that do not. That last idea, I eventually realized, was not such a good one: although a direct procedure call can be resolved early in the pipeline, an indirect call depends on an address that might not be in cache, which can stall the pipeline on a (very slow) memory fetch.
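As a concrete illustration of the LERP-plus-renormalization idea, here is a minimal sketch in JavaScript; the function name, the [x, y, z, w] layout and the shorter-arc handling are my own choices, not code from the original demo. The final normalization is where a fast inverse square root would plug in:

```javascript
// Sketch: normalized LERP (NLERP) between two unit quaternions stored as
// [x, y, z, w] arrays. Names and layout are illustrative, not the demo's code.
function nlerp(q0, q1, t, out) {
  // Take the shorter arc: if the quaternions lie in opposite hemispheres,
  // negate one of them before interpolating.
  const dot = q0[0] * q1[0] + q0[1] * q1[1] + q0[2] * q1[2] + q0[3] * q1[3];
  const sign = dot >= 0.0 ? 1.0 : -1.0;

  for (let i = 0; i < 4; ++i) {
    out[i] = (1.0 - t) * q0[i] + t * sign * q1[i];
  }

  // LERP leaves the unit sphere, so renormalize. With densely sampled
  // animations the angular error versus SLERP is negligible.
  const len = Math.sqrt(out[0] * out[0] + out[1] * out[1] +
                        out[2] * out[2] + out[3] * out[3]);
  const invLen = 1.0 / len;
  for (let i = 0; i < 4; ++i) {
    out[i] *= invLen;
  }
  return out;
}
```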

When I wrote my first implementation, I found out that there is a myriad of possible interpolation methods, chosen by the designer in the modelling software: linear, constant, Bézier curves and so on. It didn’t sound like a good idea to implement all of those methods, as this would clutter my code with features that fit very well in a large game engine, but not in my time-constrained demo. Also, implementing some of these techniques would defeat my purpose of having a blazing-fast skeletal animation demo.
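The title hints at the direction the full post takes: rather than supporting every curve type at runtime, resample whatever the designer authored into densely spaced keyframes at export time, so that playback only ever needs linear interpolation. The sketch below shows one way such a baking step could look; channel.evaluate(t) is a hypothetical stand-in for the exporter's curve evaluation, and none of the names come from the original code.

```javascript
// Sketch: bake an animation channel into uniformly spaced samples so the
// runtime only ever interpolates linearly between two neighbors.
// `channel.evaluate(t)` is a placeholder for whatever curve evaluation the
// exporter already performs (constant, Bézier, ...); all names are mine.
function bakeChannel(channel, duration, sampleRate) {
  const step = 1 / sampleRate;
  const count = Math.ceil(duration * sampleRate) + 1;
  const samples = [];
  for (let k = 0; k < count; ++k) {
    samples.push(channel.evaluate(Math.min(k * step, duration)));
  }
  return { step, samples };
}

// Runtime playback of a rotation channel reduces to an index plus one NLERP
// (nlerp() as sketched earlier; samples are assumed to be quaternions here).
function sampleRotation(baked, t) {
  const i = Math.min(Math.floor(t / baked.step), baked.samples.length - 2);
  const localT = t / baked.step - i;
  return nlerp(baked.samples[i], baked.samples[i + 1], localT, [0, 0, 0, 0]);
}
```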

Continue reading


Superior texture filter for dynamic pixel art upscaling

This article is an extension of a previously discussed topic: how to perform large texture magnification for pixelated games without aliasing between texels. My first post described a method to achieve this under the assumption that your polygons remain at approximately the same on-screen size the whole time.

From left to right, the knight is rendered at: close distance, with roughly 15k pixels² per texel; medium distance, with 49 pixels² per texel; far distance, with 9 pixels² per texel. In all frames, α=0.1.

Continue reading

Manual texture filtering for pixelated games in WebGL

Left: linear filtering; middle: nearest-neighbor sampling; right: custom texture filter. The knight is part of the Wesnoth Frankenpack. Click here to see the texture filter demo in action.

If you’re writing a pixelated game that performs large magnification of textures, you’re probably using the nearest texture filter, so your game looks like the knight in the middle instead of the one on the left.

The problem with nearest texel sampling is that it’s susceptible to aliasing if the texels are not aligned with the screen pixels, which can happen if you apply transformations such as rotation and shearing to your textured polygons. Ideally, we would like to have smooth transitions between neighboring texels in the final image, like the knight on the right in the image above.
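To sketch the general idea of such a filter (the uniform names below are my own, and the texture is assumed to be bound with LINEAR filtering and a pass-through vertex shader that provides vUv), one can sample at texel centers almost everywhere and let the hardware bilinear filter blend neighbors only inside a thin band around each texel boundary:

```javascript
// Sketch of a "smoothed nearest" fragment shader, as a GLSL string for WebGL 1.
// Uniform names are illustrative; the bound texture must use LINEAR filtering.
const fragmentShaderSource = /* glsl */ `
precision mediump float;

uniform sampler2D uTexture;   // the pixel-art texture
uniform vec2 uTextureSize;    // texture dimensions in texels
uniform float uAlpha;         // width of the smoothing band on each side of a
                              // texel boundary, in texels (e.g. 0.1)
varying vec2 vUv;             // interpolated UVs from the vertex shader

void main() {
  vec2 texel = vUv * uTextureSize;
  vec2 f = fract(texel);
  // Pin the sample to the texel center over most of its area (like NEAREST),
  // but ramp across the boundary over a band of width uAlpha so rotated or
  // sheared edges blend smoothly instead of aliasing.
  vec2 offset = clamp(0.5 / uAlpha * f, 0.0, 0.5)
              + clamp(0.5 / uAlpha * (f - 1.0) + 0.5, 0.0, 0.5);
  gl_FragColor = texture2D(uTexture, (floor(texel) + offset) / uTextureSize);
}
`;
```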

Continue reading

Custom shaders with Three.JS: Uniforms, textures and lighting

If you’re familiar with WebGL and GLSL programming and have started using three.js, you’ll eventually run into a situation where you want to write your own shader while still using the resources this library provides. In this post, I’ll show you how to set up a custom shader with a three.js geometry, pass it your own uniforms, bind a texture to a particular uniform, and receive all the lights you’ve added to the scene.
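To give a flavor of the setup side, here is a rough sketch against the three.js API of the time; the old-style typed uniforms, ImageUtils, GLSL sources in script tags and the usual scene boilerplate are all my assumptions, so double-check the calls against your three.js version rather than treating this as the post’s code.

```javascript
// Sketch: a THREE.ShaderMaterial with a custom uniform, a texture uniform and
// the scene light uniforms merged in. Written against an older three.js API
// (typed uniforms, ImageUtils); check the calls against your three.js version.
// The GLSL sources are assumed to live in <script> tags, a common idiom then.
var vertexShaderSource = document.getElementById('vertexShader').textContent;
var fragmentShaderSource = document.getElementById('fragmentShader').textContent;

var uniforms = THREE.UniformsUtils.merge([
  THREE.UniformsLib['lights'],            // declares the scene light uniforms
  {
    uTime:    { type: 'f', value: 0.0 },  // a custom scalar uniform
    uTexture: { type: 't', value: null }  // a custom sampler uniform
  }
]);

// Bind the texture after merging, since merge() clones the uniform values.
uniforms.uTexture.value = THREE.ImageUtils.loadTexture('myTexture.png');

var material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: vertexShaderSource,
  fragmentShader: fragmentShaderSource,
  lights: true                            // feed the scene lights to the shader
});

scene.add(new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material));
```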

Continue reading

Hello, world!

I spent the last few months working on a game project that proved to be too ambitious for the manpower I had available. So we decided to make small games, try to learn as much as possible from the experience, and bring our findings to the public.

In this post I’ll start by revealing the final-year project I conducted as a computer science student at the Universidade Federal de Minas Gerais (UFMG) back in 2010. Although not related to games, the idea was a system very similar to the Oculus Rift, in which you would control a robot instead of a game character.

What the video shows is a pair of cameras attached to a rig with two servo motors, which allowed the cameras to rotate in pitch and yaw. The user would wear a headset (not shown in the video) fitted with an IMU (a sensor that can estimate orientation in space), and the cameras would move to match the user’s head orientation.