2D Face Modeling with JavaScript


One common algorithm for Face Morphing requires fitting a polygonal mesh on top of all input face images. This must be done in a way such that each vertex converges to the same physical spot on every face – such as the eye contour, nose tip, or chin. Having such a mesh aligned to a face is also useful for other purposes, from caricaturing to beautification, by performing non-rigid modifications to the original face.
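To make the correspondence requirement concrete, here is a minimal sketch of the interpolation it enables (the flat-array layout and the names srcMesh and dstMesh are my own illustration, not the tool's actual data structures): once vertex i marks the same feature in both meshes, any in-between face is plain per-vertex linear interpolation.

```javascript
// Blend two aligned face meshes. Both are flat arrays [x0, y0, x1, y1, ...]
// where index i refers to the same facial feature in both meshes.
// t = 0 yields the source mesh, t = 1 the destination mesh.
function interpolateMesh(srcMesh, dstMesh, t) {
  const out = new Float32Array(srcMesh.length);
  for (let i = 0; i < srcMesh.length; i++) {
    out[i] = (1 - t) * srcMesh[i] + t * dstMesh[i];
  }
  return out;
}
```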

In this post, I’ll describe how I implemented a simple face distortion tool using only JavaScript and HTML5.
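To give a flavour of the canvas side, here is a generic sketch of the standard piecewise-affine warp technique (my own illustration, not the post's actual code): each triangle of the mesh is clipped and redrawn under the affine transform that maps its source position onto its distorted destination.

```javascript
// Draw one triangle of `img` warped from the source triangle `src`
// onto the destination triangle `dst` (each is [[x, y], [x, y], [x, y]]).
function drawWarpedTriangle(ctx, img, src, dst) {
  const [[x0, y0], [x1, y1], [x2, y2]] = src;
  const [[u0, v0], [u1, v1], [u2, v2]] = dst;

  // Solve for the affine matrix M such that M * (xi, yi, 1) = (ui, vi).
  const den = x0 * (y1 - y2) + x1 * (y2 - y0) + x2 * (y0 - y1);
  const a = (u0 * (y1 - y2) + u1 * (y2 - y0) + u2 * (y0 - y1)) / den;
  const b = (v0 * (y1 - y2) + v1 * (y2 - y0) + v2 * (y0 - y1)) / den;
  const c = (u0 * (x2 - x1) + u1 * (x0 - x2) + u2 * (x1 - x0)) / den;
  const d = (v0 * (x2 - x1) + v1 * (x0 - x2) + v2 * (x1 - x0)) / den;
  const e = (u0 * (x1 * y2 - x2 * y1) + u1 * (x2 * y0 - x0 * y2) + u2 * (x0 * y1 - x1 * y0)) / den;
  const f = (v0 * (x1 * y2 - x2 * y1) + v1 * (x2 * y0 - x0 * y2) + v2 * (x0 * y1 - x1 * y0)) / den;

  ctx.save();
  // Clip to the destination triangle so neighbouring triangles don't overlap.
  ctx.beginPath();
  ctx.moveTo(u0, v0);
  ctx.lineTo(u1, v1);
  ctx.lineTo(u2, v2);
  ctx.closePath();
  ctx.clip();
  // Only the clipped triangle of the transformed image survives.
  ctx.transform(a, b, c, d, e, f);
  ctx.drawImage(img, 0, 0);
  ctx.restore();
}
```

Repeating this for every triangle of the fitted mesh produces the distorted face; the distortion itself is then just a matter of moving the destination vertices.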


2D omnidirectional shadow mapping with Three.js

A 2D scene viewed by a camera placed on top of it. Black squares represent obstacles that light cannot pass through. In this scene, we have three light sources.

In this post, I will talk about how I implemented a 2D omnidirectional shadow mapping algorithm: given a 2D world with obstacles defined by polygons and a set of point lights, render that world taking into account the shadows produced by the interaction between the lights and the obstacles. You can check out a live demo here (WebGL support is required to run it).
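The geometric core of any such renderer is a single primitive: casting a ray from a light against the obstacle edges. A minimal sketch, with names of my own choosing:

```javascript
// Distance along the ray (ox, oy) + t * (dx, dy), t >= 0, to the segment
// A-B, or Infinity on a miss. Pass a normalized direction so that t is a
// true Euclidean distance.
function raySegmentDistance(ox, oy, dx, dy, ax, ay, bx, by) {
  const ex = bx - ax, ey = by - ay;          // segment direction
  const den = dx * ey - dy * ex;
  if (Math.abs(den) < 1e-9) return Infinity; // parallel: no hit
  const t = ((ax - ox) * ey - (ay - oy) * ex) / den; // position along ray
  const s = ((ax - ox) * dy - (ay - oy) * dx) / den; // position along segment
  return (t >= 0 && s >= 0 && s <= 1) ? t : Infinity;
}
```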

Although I have implemented 3D shadow maps for spot lights in the past, the present case opens up exciting new ways of approaching the problem. As far as I know, Three.js's shadow implementations are designed specifically for 3D scenes. Moreover, I haven't seen any references to omnidirectional shadow maps in this library (which doesn't mean they aren't there; they might just not be documented). Therefore, instead of using the default shadow map implementation, I decided to code my own and explore the possibilities I had in mind, including the use of a 1D depth map.
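The 1D depth map idea can be sketched as follows (shown here on the CPU purely for clarity, with illustrative names; the actual demo renders on the GPU): each texel covers one angular slice around a light and stores the distance to the nearest obstacle in that direction, so shading a pixel reduces to a distance comparison. This reuses raySegmentDistance from the previous sketch.

```javascript
const MAP_SIZE = 1024; // angular resolution of the 1D depth map

// One pass over the obstacle segments per light: depth[i] holds the
// distance from the light to the nearest obstacle at angle i.
function buildDepthMap(light, segments) {
  const depth = new Float32Array(MAP_SIZE).fill(Infinity);
  for (let i = 0; i < MAP_SIZE; i++) {
    const angle = (i / MAP_SIZE) * 2 * Math.PI - Math.PI;
    const dx = Math.cos(angle), dy = Math.sin(angle);
    for (const [ax, ay, bx, by] of segments) {
      const t = raySegmentDistance(light.x, light.y, dx, dy, ax, ay, bx, by);
      if (t < depth[i]) depth[i] = t;
    }
  }
  return depth;
}

// A point is lit if it is closer to the light than the stored depth for
// its angle (a small bias avoids self-shadowing artifacts).
function isLit(light, depth, px, py) {
  const angle = Math.atan2(py - light.y, px - light.x);
  const i = Math.floor(((angle + Math.PI) / (2 * Math.PI)) * MAP_SIZE) % MAP_SIZE;
  return Math.hypot(px - light.x, py - light.y) <= depth[i] + 0.01;
}
```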


Loading a skeletal animation? Linearize it!

When I first studied the principles behind skeletal animation, I decided to implement it by interpolating between poses on the CPU and performing the skinning in the vertex shader for performance reasons. I was thrilled to see some computational weight being transferred from the CPU to the GPU, where it could be handled with heavy parallelization.
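For reference, the skinning step itself is a weighted blend of bone matrices per vertex. Below is the same math written in plain JavaScript purely as illustration; in the actual implementation it runs in the vertex shader, and the column-major matrix layout here is an assumption of mine.

```javascript
// Linear blend skinning for one vertex position [x, y, z]. Each entry of
// boneMatrices is a column-major 4x4 Float32Array with the inverse bind
// pose already folded in; boneIndices/boneWeights list the influences.
function skinVertex(pos, boneIndices, boneWeights, boneMatrices) {
  const out = [0, 0, 0];
  for (let k = 0; k < boneIndices.length; k++) {
    const m = boneMatrices[boneIndices[k]];
    const w = boneWeights[k];
    // Accumulate w * (m * [pos, 1]).
    out[0] += w * (m[0] * pos[0] + m[4] * pos[1] + m[8]  * pos[2] + m[12]);
    out[1] += w * (m[1] * pos[0] + m[5] * pos[1] + m[9]  * pos[2] + m[13]);
    out[2] += w * (m[2] * pos[0] + m[6] * pos[1] + m[10] * pos[2] + m[14]);
  }
  return out;
}
```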

As performance was my main concern, I also tried to squeeze the most out of my CPU in the pose interpolation process: avoiding conditional logic (which, in turn, avoids branch mispredictions), interpolating quaternions with LERP instead of SLERP (the difference in the animation is not noticeable if you have a large number of samples), normalizing vectors with the invsqrt procedure, and using function pointers to discriminate bones that require interpolation from those that do not. That last idea, I eventually realized, was not such a good one: although a direct procedure call can be resolved as early as the fetch stage of the pipeline, an indirect call depends on an address that might not be available in cache, which can stall the pipeline on a (very slow) memory fetch.
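The LERP-instead-of-SLERP trick looks roughly like this: a minimal sketch, not the demo's source, assuming quaternions stored as [x, y, z, w] arrays.

```javascript
// Normalized LERP between quaternions a and b. With densely sampled
// keyframes the angular error versus SLERP is negligible, and there is
// no trigonometry; the only branch left is the short-arc sign fix.
function nlerp(a, b, t, out) {
  // Flip b if the quaternions lie on opposite hemispheres, so that we
  // interpolate along the shorter arc.
  const sign = (a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3]) < 0 ? -1 : 1;
  const x = a[0] + t * (sign * b[0] - a[0]);
  const y = a[1] + t * (sign * b[1] - a[1]);
  const z = a[2] + t * (sign * b[2] - a[2]);
  const w = a[3] + t * (sign * b[3] - a[3]);
  const inv = 1 / Math.sqrt(x * x + y * y + z * z + w * w); // renormalize
  out[0] = x * inv; out[1] = y * inv; out[2] = z * inv; out[3] = w * inv;
  return out;
}
```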

When I wrote my first implementation, I found out that there is a myriad of possible interpolation methods the designer may have chosen in the modelling software: linear, constant, Bézier curves, and so on. Implementing all of those methods didn't sound like a good idea, as it would clutter my code with features that fit very well in a large game engine, but not in my time-constrained demo. Also, implementing some of these techniques would defeat my purpose of having a blazing-fast skeletal animation demo.
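The way out, and the point of the title, is to bake whatever curves the exporter produced into uniformly spaced samples at load time, so the runtime only ever needs LERP. A hedged sketch, where evalCurve stands in for whatever curve evaluator the importer provides:

```javascript
// Resample one animation channel into uniformly spaced keys at load time.
// evalCurve(t) evaluates the authored curve (constant, Bezier, ...) at
// time t — an assumed callback, e.g. backed by the importing library.
function linearizeChannel(evalCurve, duration, samplesPerSecond) {
  const count = Math.ceil(duration * samplesPerSecond) + 1;
  const samples = new Array(count);
  for (let i = 0; i < count; i++) {
    samples[i] = evalCurve(Math.min(i / samplesPerSecond, duration));
  }
  return samples; // runtime: LERP between samples[floor(t * rate)] and the next
}
```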
