Between 2005 and 2010 I undertook a post-doctoral research fellowship at the University of Surrey's Centre for Vision, Speech and Signal Processing, under Professor Adrian Hilton. The work extended my PhD study but focused more on computer vision techniques for capturing facial performance. In particular, it involved using surface capture technologies to record actors speaking, and using that data to synthesise speech.
This research touched on the areas of capture, modelling/representation, and animation. Capture used a scanner from 3dMD, which recovered raw 3D scans of an actor at 60fps. Features and markers on the actor's face are tracked, allowing the model to be registered over time. From the tracked scans a statistical model of speech movements can be constructed, which allows for the analysis of lip motion during speech. The work on tracking the face and animating a final model later fed into a technology transfer position with Framestore, who were in the initial stages of working on the Alfonso Cuarón film Gravity.
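As a rough illustration of what such a statistical model might look like, the sketch below builds a linear (PCA-style) model from registered scan data: each frame is a flattened vector of tracked vertex positions, and the principal modes capture the dominant patterns of lip motion. This is an assumed formulation for illustration, not the original pipeline; the function names, array shapes, and synthetic data are all hypothetical.

```python
import numpy as np

def build_speech_model(frames, n_modes=10):
    """Fit a linear statistical model of face motion.

    frames: (n_frames, n_vertices * 3) array of registered vertex positions.
    Returns the mean shape, the top principal modes, and per-frame weights.
    """
    mean = frames.mean(axis=0)
    centred = frames - mean
    # SVD of the centred data yields orthonormal modes of variation
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    modes = vt[:n_modes]              # (n_modes, n_vertices * 3)
    weights = centred @ modes.T       # per-frame mode activations
    return mean, modes, weights

def reconstruct(mean, modes, w):
    """Synthesise a face shape from a vector of mode weights."""
    return mean + w @ modes

# Hypothetical usage with synthetic data: 100 frames of 50 vertices
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 150))
mean, modes, weights = build_speech_model(frames, n_modes=5)
shape0 = reconstruct(mean, modes, weights[0])
```

Analysing or resynthesising speech motion then reduces to working with the low-dimensional weight trajectories rather than raw vertex data.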
The video below demonstrates the 3D tracking of human face movements. The underlying data is from a 3dMD scanner. The white dots track painted blue markers on the skin, while the points in between attempt to lock onto skin features.
This next video shows the animation of an artist-generated mesh using the tracked data. The motion and the 3D mesh are unwrapped into a common 2D domain; once this has been done, applying the motion to the surface becomes a straightforward interpolation task.
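To illustrate the idea, the sketch below resamples motion in a shared 2D (UV) domain: displacements known at the source mesh's UV coordinates are interpolated at the target mesh's UV coordinates. A simple inverse-distance weighting stands in for whatever interpolation scheme was actually used; the names and scheme here are assumptions for illustration only.

```python
import numpy as np

def interpolate_motion(src_uv, src_motion, dst_uv, eps=1e-8):
    """Resample motion from one mesh to another via a shared 2D domain.

    src_uv:     (n, 2) UV coordinates of the source (tracked) vertices.
    src_motion: (n, 3) 3D displacements measured at those vertices.
    dst_uv:     (m, 2) UV coordinates of the target (artist) mesh.
    Returns (m, 3) interpolated displacements for the target mesh.
    """
    # Pairwise distances in the 2D domain between target and source points
    d = np.linalg.norm(dst_uv[:, None, :] - src_uv[None, :, :], axis=2)
    # Inverse-distance weights, normalised per target vertex
    w = 1.0 / (d + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ src_motion

# Hypothetical usage: four source points at the UV unit square corners
src_uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
src_motion = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
# Sampling at the source coordinates should return the source motion
out = interpolate_motion(src_uv, src_motion, src_uv)
```

Because both meshes live in the same 2D parameterisation, the interpolation is independent of the meshes' differing 3D topology.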
This final video demonstrates some initial work I did at Framestore on tracking eye movements. The tracker is very simple: a 3D model pivots to track the dark iris against the white of the eye, with pixels classified as either iris or non-iris using a Gaussian classifier. The tracker requires an initial track of eyelid movements, which is passed on from the previously described surface tracker. This was just a proof of concept; further work would be required to make it robust.