    Compositing CG into Live Footage pt. 1

    I have long had an interest in the workflow for integrating 3D animation into live footage, especially since many of the tools are becoming increasingly accessible to anyone with a decent computer. There are quite a few steps to get from footage through to a composited scene, and whilst there are tutorials available online I thought it would be useful to write the steps down if only to remind myself in the future.

    This workflow uses Maya for 3D rendering, and NukeX for compositing and tracking. There are other tools available, such as PFTrack and After Effects, but this is the way I have learnt to work and is easiest for me to understand. The objective is to record a shot with a free moving camera, to track the camera, render a 3D object into the scene, and composite the result together. The 3D object should integrate into the scene naturally, and exhibit effects such as depth of field and motion blur.

    Camera Tracking in NukeX:

    There are several tools available for camera tracking, such as PFTrack and Boujou, but even ignoring cost (and Boujou in particular is beyond the average user) they can be a bit obscure and difficult to use. NukeX, whilst primarily a compositing tool, has improved dramatically over the past few iterations in its treatment of 3D, and this includes the CameraTracker node. The footage I used is shown below; it was recorded handheld with a Blackmagic Cinema Camera 2.5K and consists of a short sequence of the camera rotating around a point on my desk. Tracking will only work well if there is enough texture within the frame to track; beware also that large movements and shiny surfaces can make life difficult for the underlying algorithms.

    Within NukeX, read in the sequence and connect up a CameraTracker node. The node settings are shown below:

    [Image: CameraTracker_node]
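
    As an aside, the same Read/CameraTracker setup can be built with Nuke's Python API rather than in the node graph. A minimal sketch, where the file path and frame range are placeholders and the node class name may vary between Nuke versions:

        import nuke

        # Read in the footage. The path and frame range are placeholders;
        # point them at your own sequence.
        read = nuke.nodes.Read(file="footage/desk_shot.####.dpx", first=1, last=100)

        # Wire a CameraTracker downstream of the Read node. The class name
        # may differ in older versions of NukeX.
        tracker = nuke.nodes.CameraTracker()
        tracker.setInput(0, read)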

    There are a lot of settings here, so I will only go over the most important (a scripted sketch follows these descriptions):

    Camera Motion – this is how the camera is moving in the shot. In this case it is a free camera, as the camera is handheld. If the camera were on a tripod it would be rotation only. Linear and planar are for other types of constrained camera movement.

    Lens Distortion – all real-world lenses distort the image to a greater or lesser extent. This setting tells the system to estimate the lens distortion and use it to undistort the image. I don’t use this setting, as the LensDistortion node can do the same job, and can also be plugged in later to re-distort your CG layers to fit in with the original footage.

    Focal length – the focal length of the lens. In this case I am using a 28mm f/2.8 lens with a 0.71x speedbooster, which gives an effective focal length of 28 × 0.71 = 19.88mm.

    Film Back Preset – presets for the film back size. I’m using a Blackmagic Cinema Camera 2.5K, which has a film back size of 15.81mm x 8.88mm. If your camera isn’t in the presets, just type the dimensions in.

    There are a number of settings in the other tabs: UserTracks lets you specify good user-defined feature points to guide the process; AutoTracks lets you filter out poor tracks from the automatic solve; and the Settings, Scene, and Output tabs control the internal workings of the node.
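
    For completeness, here is what setting those values through Python might look like. The knob names below are assumptions for illustration; hover over a control in the CameraTracker properties panel to see its actual knob name.

        import nuke

        tracker = nuke.toNode("CameraTracker1")

        # Knob names here are assumptions; confirm them by hovering over
        # the controls in the properties panel.
        tracker["focalLength"].setValue(19.88)    # 28mm lens x 0.71 speedbooster
        tracker["filmBackSizeX"].setValue(15.81)  # BMCC 2.5K sensor width (mm)
        tracker["filmBackSizeY"].setValue(8.88)   # BMCC 2.5K sensor height (mm)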

    In this case the defaults will serve well. Press the Track button and the node will track forwards and backwards to find good feature points that define the camera motion. Once tracking has completed you will be able to see the tracked points in the viewer (so long as it is attached). Playing the sequence through a couple of times will give you an idea of whether there are erroneous tracks; if so, just select them in the viewer and delete them.

    Now click Solve and the node will attempt to create a camera. The tracks will change colour; this is the result of projecting/unprojecting the tracks through the derived camera, with low-error tracks shown in green and high-error tracks in red. Hopefully there are only a few red tracks; if not, you may have a shot which needs a more in-depth treatment.
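
    That colour coding is driven by reprojection error: each solved 3D point is projected back through the derived camera and compared with the 2D position the tracker actually measured. As a rough plain-Python illustration of such a metric (hypothetical points, not Nuke's exact formula):

        import math

        def mean_reprojection_error(tracked_2d, reprojected_2d):
            # Average pixel distance between where the tracker saw each
            # feature and where the solved camera reprojects it.
            total = 0.0
            for (tx, ty), (rx, ry) in zip(tracked_2d, reprojected_2d):
                total += math.hypot(tx - rx, ty - ry)
            return total / len(tracked_2d)

        # Hypothetical tracks, each off by around a pixel; prints the
        # mean error in pixels.
        print(mean_reprojection_error(
            [(100.0, 50.0), (320.0, 240.0), (640.0, 360.0)],
            [(100.6, 50.2), (319.1, 240.4), (640.9, 359.5)]))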

    [Image: CameraTracker_tracks]

    The Error value is the accumulated projection error across the image in pixels; lower is obviously better. Given that the error is low, we can create a camera and use it for the rest of the process. First select some points in the viewer which lie on the ground, then right-click and choose ground-plane > set to selected. Now select Camera in the dropdown box and click Create. The node graph should look something like this:

    [Image: CameraTracker_graph1]

    The new Camera node is part of the 3D framework in NukeX, signified by the fact that it is rounded. Now we need to check that the generated camera is good.

    Create a ScanlineRender node; connect the Camera into the cam input, the image sequence into the bg input, and a new Cube node into the obj/scn input. If you connect the ScanlineRender node to the viewer you should now see the cube superimposed on the image (if you can’t see it, it’s probably too big – scale the cube down). Unfortunately it is black, because it has no texture or shaders attached. Connect a CheckerBoard as the img input to the Cube and it will display a texture across its surface. The node graph should look like this:

    [Image: CameraTracker_graph2]
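
    Scripted, that wiring might look like the sketch below. It assumes the node names Nuke generated earlier (Read1, Camera1), and my guess at the ScanlineRender input order; check the input labels in the node graph if the wiring comes out wrong.

        import nuke

        read = nuke.toNode("Read1")      # the footage
        camera = nuke.toNode("Camera1")  # camera created by the CameraTracker

        # A Cube with a CheckerBoard wired into it so it isn't black.
        checker = nuke.nodes.CheckerBoard2()
        cube = nuke.nodes.Cube()
        cube.setInput(0, checker)        # assumes input 0 is the img input

        render = nuke.nodes.ScanlineRender()
        # Input order is an assumption (obj/scn, cam, bg); verify in the DAG.
        render.setInput(0, cube)
        render.setInput(1, camera)
        render.setInput(2, read)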

    Play through the sequence and the cube should stick to the background footage. You can place the cube by entering the 3D viewer (press Tab) and dragging the cube in the view.

    The track looks good, so the last step of the process is to export the camera in a format which Maya can read. NukeX has a WriteGeo node which will do what we want, but it needs a Scene node to export. Create a Scene node and attach the camera to it. This is all we need, but it’s a good idea to include some rough geometry as well, so that we know where to put the CG object. Connect the Cube too, then connect the Scene to a WriteGeo node.

    [Image: CameraTracker_graph3]

    In the WriteGeo node specify the output file; you want to use the .fbx format because it supports camera export. Click export and you will have a file which holds your tracked camera.
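
    The export stage can also be scripted; a sketch, where the output path and frame range are placeholders and I am assuming the WriteGeo output knob is named file, as it is on the Write node:

        import nuke

        # Scene collecting the tracked camera plus the rough Cube geometry.
        scene = nuke.nodes.Scene()
        scene.setInput(0, nuke.toNode("Camera1"))
        scene.setInput(1, nuke.toNode("Cube1"))

        geo = nuke.nodes.WriteGeo()
        geo.setInput(0, scene)
        # The .fbx extension selects the format; the path is a placeholder.
        geo["file"].setValue("export/tracked_camera.fbx")

        # Execute over the shot's frame range (placeholder range).
        nuke.execute(geo, 1, 100)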

    In the next part I will render an object in Maya using the tracked camera.

