    Compositing CG into Live Footage with Renderman

    Recently Pixar released Renderman with a non-commercial licence, so I thought I’d have a quick look into it and see how it compares to Mental Ray.

    The previous posts (part 1, part 2) demonstrate the general setup of the scene, so I won’t go through that again. The main differences are that you use an RMS Env Light, which is the Renderman environment light, and PxrDisney material shaders.
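    If you prefer to set this up in script, here is a minimal sketch using maya.cmds. It assumes the RenderMan for Maya plugin is already loaded, that the node type names RMSEnvLight and PxrDisney match your install, and that the object name cgObject is just a placeholder.

        import maya.cmds as cmds

        # Assumption: RenderMan for Maya is loaded and registers the
        # RMSEnvLight and PxrDisney node types under these names.
        env_light = cmds.shadingNode('RMSEnvLight', asLight=True, name='envLight')

        # Create a PxrDisney shader and assign it to a placeholder CG object.
        shader = cmds.shadingNode('PxrDisney', asShader=True, name='cg_mtl')
        cmds.select('cgObject')         # 'cgObject' is a placeholder name
        cmds.hyperShade(assign=shader)  # builds the shading group and assignment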

    There is some complexity to setting up the ground plane for rendering shadows and reflections, because the standard Maya use_background shader doesn’t work with Renderman. The shadowcollector and reflectioncollector passes can be added to get the indirect components from the ground plane, and a PxrMatteID output to get the alpha for foreground objects. Below are frames for the main render, shadow collector and ID outputs.

    [Image: renderman_out – main render, shadow collector and ID outputs]
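    These extra outputs can also be added via script. The sketch below is only indicative: it assumes RenderMan for Maya exposes the rmanAddOutput MEL command, and the pass and channel names (including MatteID0 for the PxrMatteID output) are assumptions that may need adjusting for your version.

        import maya.mel as mel

        # Assumption: the rmanAddOutput MEL command and these pass/channel
        # names exist in your RenderMan for Maya version.
        for channel in ('shadowcollector', 'reflectioncollector', 'MatteID0'):
            mel.eval('rmanAddOutput "rmanFinalPass" "%s";' % channel)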

    The Nuke graph for compositing this into the live footage is a little more complex than the Mental Ray example, mainly because I am dealing with multiple inputs. I needed to add some erode/dilate nodes to adjust the ID matte slightly, but otherwise it is fairly simple to understand.

    [Image: nuke_rman – Nuke compositing graph]
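    For reference, here is a rough Nuke Python sketch of the same idea. The file paths, channel names and merge operations are placeholders and assumptions rather than the exact graph above; in particular, how the shadow pass is combined with the plate depends on how it comes out of Renderman.

        import nuke

        # Placeholder file paths for the plate and the three render outputs.
        plate  = nuke.nodes.Read(file='plate/plate.####.jpg')
        beauty = nuke.nodes.Read(file='renders/beauty.####.exr')
        shadow = nuke.nodes.Read(file='renders/shadowcollector.####.exr')
        matte  = nuke.nodes.Read(file='renders/matteid.####.exr')

        # Tighten the ID matte slightly; adjust the size/sign to taste.
        erode = nuke.nodes.FilterErode(size=1.5)
        erode.setInput(0, matte)

        # Darken the plate where the shadow collector pass indicates shadow
        # (a multiply is one simple option; the right operation depends on the pass).
        shadow_merge = nuke.nodes.Merge2(operation='multiply')
        shadow_merge.setInput(0, plate)   # B input: live plate
        shadow_merge.setInput(1, shadow)  # A input: shadow pass

        # Copy the eroded ID matte into the beauty's alpha, premultiply,
        # and put the CG over the shadowed plate.
        copy_alpha = nuke.nodes.Copy(from0='rgba.red', to0='rgba.alpha')
        copy_alpha.setInput(0, beauty)    # B input: destination stream
        copy_alpha.setInput(1, erode)     # A input: channel source

        premult = nuke.nodes.Premult()
        premult.setInput(0, copy_alpha)

        comp = nuke.nodes.Merge2(operation='over')
        comp.setInput(0, shadow_merge)    # B: plate with shadows
        comp.setInput(1, premult)         # A: premultiplied CG

        out = nuke.nodes.Write(file='comp/comp.####.exr')
        out.setInput(0, comp)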

    Here is the final output; I only rendered the first 100 frames because of the slow rendering performance:

    There is some documentation on the Renderman website explaining the process in more detail here.

    Quality-wise I think the Renderman results are exceptionally good, and it is much easier to get clean results than with Mental Ray. However, from what I can see there are fewer ways to optimise the speed of the render. There is a document on this here, and the main factor appears to be resolution (which you can’t change if you’re rendering for compositing into live footage). From my limited tests the Shading Rate parameter has very little effect on render speed, and I can’t even find the Pixel Samples parameter it talks about. Furthermore, to use Tractor for distributed rendering you need a floating licence, and the non-commercial licence is node-locked, so I can’t take advantage of any extra computers I have lying around. In conclusion, for my purposes Renderman has the following benefits and weaknesses:

    Benefits:

    • Nice clean images
    • Small number of parameters to tweak compared with Mental Ray
    • Clean UI in Maya

    Weaknesses:

    • Slow, and few tweakable parameters to optimise render speed.
    • Can’t distribute rendering with the non-commercial licence.