

    Compositing CG into Live Footage pt. 2

    From the first part we have an FBX file containing the tracked camera and some roughed-in geometry so that we know where to place our model. Within Maya we can import this file, and our render camera will be set up.

    In the Outliner the camera will be called Camera1. Before proceeding, select the camera and lock all its attributes in the Channel Box – this is important because when you look through the camera in the viewport it is very easy to edit it unintentionally. Also, go to the Camera1Shape tab and set Environment > Background Color to black – otherwise the shadows will look murky.
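    If you prefer to script those two steps, here is a minimal sketch in Maya Python (assuming the camera really did import as Camera1):

        import maya.cmds as cmds

        # Assumes the tracked camera imported with the transform name "Camera1".
        cam = "Camera1"
        cam_shape = cmds.listRelatives(cam, shapes=True)[0]

        # Lock translate/rotate/scale so looking through the camera in the
        # viewport cannot nudge the solved track.
        for attr in ("translate", "rotate", "scale"):
            for axis in "XYZ":
                cmds.setAttr("{0}.{1}{2}".format(cam, attr, axis), lock=True)

        # Black background so the shadow pass doesn't pick up the default grey.
        cmds.setAttr(cam_shape + ".backgroundColor", 0, 0, 0, type="double3")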

    Now we need to add in our geometry. Add a polygon plane aligned with the XZ plane, which will be our ground, and assign it a Use Background (useBackground) shader. This shader is useful for rendering either shadows or reflections on an object which has no primary visibility. In this case we just want to catch shadows, so go into the Attribute Editor and set the Specular Color to black, the Reflectivity to 0.0, and the Reflection Limit to 0.

    [Image: Use Background shader attributes]
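    The same setup can be scripted. A sketch, assuming a 50-unit ground plane (scale it to cover the tracked floor):

        import maya.cmds as cmds

        # Ground plane on XZ to catch the model's shadow.
        ground = cmds.polyPlane(name="shadowCatcher", width=50, height=50)[0]

        # Use Background shader in its own shading group.
        shader = cmds.shadingNode("useBackground", asShader=True, name="shadowCatcherMat")
        sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=shader + "SG")
        cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader")
        cmds.sets(ground, edit=True, forceElement=sg)

        # Shadows only: remove the specular/reflection contribution.
        cmds.setAttr(shader + ".specularColor", 0, 0, 0, type="double3")
        cmds.setAttr(shader + ".reflectivity", 0)
        cmds.setAttr(shader + ".reflectionLimit", 0)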

    Now add in the geometry – in my case a Buddha model I found on the internet – and place it using the cube imported with the camera as a guide, then delete the cube. I'm going to use mental ray for rendering, so I'll use a mia_material_x_passes material for the geometry.

    We can do a test render to see the object in the scene. Set the duration of the animation to the length of the video we tracked (391 frames). Open the Render Settings and load one of the presets (I use FinalFrameEXR). Set the resolution to match the video (1080p), a file name prefix for the output, the animation extension (name_#.ext), the end frame for the animation, and – importantly – the renderable camera. Now run a batch render and you should get a reasonably quick result.

    [Image: Render Settings]
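    These settings live on the render globals nodes, so they can also be set in a script. A sketch (the file prefix is a made-up example, and the exact frame-name separator may need the Render Settings menu or the periodInExt attribute depending on your Maya version):

        import maya.cmds as cmds

        # Match the tracked plate: 391 frames at 1080p.
        cmds.playbackOptions(minTime=1, maxTime=391)
        cmds.setAttr("defaultResolution.width", 1920)
        cmds.setAttr("defaultResolution.height", 1080)

        # Output naming and frame range.
        cmds.setAttr("defaultRenderGlobals.imageFilePrefix", "buddha_render", type="string")
        cmds.setAttr("defaultRenderGlobals.animation", 1)          # render a sequence
        cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)  # frame number before the extension
        cmds.setAttr("defaultRenderGlobals.startFrame", 1)
        cmds.setAttr("defaultRenderGlobals.endFrame", 391)

        # Make the tracked camera the only renderable one.
        cmds.setAttr("Camera1Shape.renderable", 1)
        cmds.setAttr("perspShape.renderable", 0)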

    Open up NukeX again and you can do a quick comp with the rendered result; the node graph below will do the job. One thing to note is that the LensDistortion node is set to re-distort the rendered result so that it appears to have been shot with the real camera.

    [Image: comp node graph]
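    For reference, the same graph can be built from Nuke's Python console. The file paths are placeholders, and knob and node names (particularly on LensDistortion) vary between NukeX versions, so treat this as a sketch of the wiring rather than a drop-in script:

        import nuke

        plate = nuke.nodes.Read(file="plate.####.exr")            # original footage
        render = nuke.nodes.Read(file="buddha_render.####.exr")   # Maya render

        # Re-apply the lens distortion solved in part 1 to the CG, which was
        # rendered through an ideal (undistorted) camera. Switch the node's
        # mode knob to redistort in the UI (knob name varies by version).
        redistort = nuke.nodes.LensDistortion(inputs=[render])

        # A over B: CG over the plate.
        comp = nuke.nodes.Merge2(inputs=[plate, redistort], operation="over")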

    Below is the result of this comp:

    So the Maya render is meshing well with the original footage, but obviously some work is needed to make it look integrated with the scene. To do this I'll use image-based lighting to recreate the lighting of the original environment.

    Image-Based Lighting (IBL) uses an environment map in the lighting calculations so that the object is, in effect, lit by all the surfaces which surround it. There are many ways to capture a map (e.g. using a light probe), but I use a Nodal Ninja panoramic head to create a high-dynamic-range 360-degree spherical map of the environment. A non-HDR version of the one used in this example is shown below.

    [Image: 360-degree spherical panorama of the environment]

    We need to do a little work for this to be useful, however. For a start, HDR panoramas tend to be very large and can make Maya unstable. There is also more detail in this map than we need: we will be using final gather to provide indirect illumination of the model, and final gather sends out rays to determine the colour of the surface. If the map is very detailed, each ray sent out may find a very different value in the environment, which can cause flickering in animation. To fix both issues we blur the map and resize it to something a bit more manageable.

    [Image: blurred, downsized panorama]
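    Any image tool will do for the blur and resize; as one example, here is a sketch using OpenCV's Radiance HDR support (file names and sizes are placeholders – the 2:1 width/height keeps the lat-long aspect ratio):

        import cv2

        # Load the full-resolution HDR panorama as float data.
        pano = cv2.imread("pano_full.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)

        # Downsize first (INTER_AREA averages, which already smooths),
        # then blur so final gather rays see gentle gradients.
        small = cv2.resize(pano, (512, 256), interpolation=cv2.INTER_AREA)
        blurred = cv2.GaussianBlur(small, (15, 15), 0)

        cv2.imwrite("pano_blur_256.hdr", blurred)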

    For the rendering I use the Environment Lighting Mode, which is not exposed in Maya by default. To add it to the interface, download the scripts here:

    https://code.google.com/p/maya-render-settings-mental-ray/wiki/ProjectDescription

    Once installed, they update the Render Settings window in a number of ways. Importantly, the Indirect Lighting tab should now include the Environment Lighting Mode section. Set the Environment Lighting Mode to on.

    [Image: Environment Lighting Mode section in Render Settings]
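    Under the hood these scripts are just exposing mental ray string options, so the same switch can be flipped directly. The option name and value below follow the mental ray documentation rather than the linked scripts, so treat them as assumptions:

        import maya.cmds as cmds

        # miDefaultOptions exists once mental ray is the active renderer.
        cmds.loadPlugin("Mayatomr", quiet=True)

        opts = "miDefaultOptions.stringOptions"
        idx = cmds.getAttr(opts, size=True)  # next free slot in the multi-attribute
        cmds.setAttr("{0}[{1}].name".format(opts, idx), "environment lighting mode", type="string")
        cmds.setAttr("{0}[{1}].value".format(opts, idx), "light", type="string")
        cmds.setAttr("{0}[{1}].type".format(opts, idx), "string", type="string")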

    To enable IBL in mental ray, go to the Indirect Lighting tab in Render Settings and click the Image Based Lighting > Create button. This will create an IBL node, which should open in the Attribute Editor. First of all set the image name (the location of the environment map); the image will be texture-mapped onto the sphere which appears in the viewport. Also, go to Render Stats and turn off primary visibility (we don't want to see the environment map in our renders). Turn on textures in your viewport and you can move and rotate the sphere so that it is correctly aligned with your scene orientation – get this as accurate as possible or the light will be coming from the wrong place.
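    Scripted, the Create button boils down to something like this (the globals connection name is taken from the Mayatomr node documentation – verify it in your version):

        import maya.cmds as cmds

        # Create the IBL shape and register it with the mental ray globals.
        ibl = cmds.createNode("mentalrayIblShape", name="iblShape1")
        cmds.connectAttr(ibl + ".message", "mentalrayGlobals.imageBasedLighting", force=True)

        # Point it at the blurred environment map and hide it from renders.
        cmds.setAttr(ibl + ".texture", "pano_blur_256.hdr", type="string")
        cmds.setAttr(ibl + ".primaryVisibility", 0)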

    Finally, a couple of Render Settings need to be changed. Turn off the default lights in Common > Render Options, and enable Indirect Lighting > Final Gathering. Now you should be ready to try a render. It may appear dark, as in the image below; if this is the case, go to the IBL node's attributes and boost the Color Gain value until it looks good. Below shows before and after tuning the Color Gain, and the alpha channel of the image. It's starting to look a bit better, but not quite there yet.

    [Image: final gather render before and after Color Gain tuning, with alpha channel]
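    The script equivalents, continuing from the iblShape1 node created above (the gain value is just a starting point to tune by eye):

        import maya.cmds as cmds

        cmds.setAttr("defaultRenderGlobals.enableDefaultLight", 0)  # no default lights
        cmds.setAttr("miDefaultOptions.finalGather", 1)             # enable final gather
        cmds.setAttr("iblShape1.colorGain", 2.0, 2.0, 2.0, type="double3")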

    Notice that we now have a bit of shadowing on the ground plane which wasn't there before – but this isn't enough; we need a Maya light in the scene to give a proper shadow, as though the model were backlit as in the video. By creating an area light and positioning it behind and above the model we can create a more realistic shadow – just make sure that this light doesn't blow out the IBL; it is there for one specific purpose.
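    A sketch of that light – the position, rotation, and intensity are placeholders to be tuned against your plate:

        import maya.cmds as cmds

        # Area light behind and above the model for the backlit shadow.
        shape = cmds.createNode("areaLight", name="keyShadowLightShape")
        cmds.sets(shape, add="defaultLightSet")  # createNode doesn't add lights to the light set
        xform = cmds.listRelatives(shape, parent=True)[0]

        cmds.setAttr(xform + ".translate", 0, 8, -6, type="double3")  # behind and above
        cmds.setAttr(xform + ".rotateX", -125)                        # aim down at the model
        cmds.setAttr(shape + ".intensity", 0.4)  # low enough not to blow out the IBL
        cmds.setAttr(shape + ".useRayTraceShadows", 1)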

    Now you can push the render settings to something a bit more adventurous and get to your final render quality. Comp this using the script from before and you should get something like this:

    Finally, it is starting to look a bit more integrated into the shot. In the last part I’ll address depth-of-field and motion blur to get the final shot.

