CG & VR Part 3 – Spherical Compositing in Maya

Looks like it’s time for Part 3! In this tutorial, we’re going to go over the basics of compositing elements from CG renders into your mono-spherical footage using Maya and Premiere. I’ll be using the 3D model of the eleVR office as a reference to create special effects that can be overlaid onto footage of our physical office!

For the uninitiated, CG compositing is the process of taking visuals rendered from a 3D animation or special-effects package and incorporating them into a separate piece of media, such as video or images. You can learn more about it on Wikipedia.

Hopefully you’ve already completed Parts 1 & 2 of this series, where I covered basic spherical rendering in Maya. If you have not already read through them, I would suggest you do so, as they contain fundamental information you’ll need to complete this tutorial.

CG & VR Part 1 – Rendering Challenges
CG & VR Part 2 – VR Rendering in Maya

Furthermore, this tutorial assumes you already have knowledge concerning batch rendering frames from Maya, and overlaying the image sequence in your favored video editor.

Elijah Butterfield – Intern

 

Software:
Maya 2016 – Modeling (There’s a Free 30-Day Trial)
Mental Ray for Maya – Rendering
Domemaster3D Maya Plugin – Spherical Camera Creator

Hardware:
A Mono-Spherical Camera – (I’ll be using a Ricoh Theta S)

 

Step 1 – Getting our Footage

 

For the sake of simplicity, I’ll be recording my spherical footage from a fixed point. While it is possible to composite cg elements into spherical footage where the camera is moving, it would be difficult to get perfect results, as there isn’t any dedicated software package for tracking motion in spherical video yet.

With my spherical camera mounted on a tripod and in the position I want to record from, I’m going to measure its position in relation to the environment around it, i.e., the distance from the walls and how high the lens is off of the ground.

The reason I’m measuring the camera’s location relative to the room is so I can use that information to match a virtual camera to the same relative position in a virtual room in Maya.
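To make the bookkeeping concrete, here’s a minimal sketch (plain Python, hypothetical numbers) of turning tape-measure readings into Maya world coordinates. It assumes the room corner is treated as the world origin, and that Maya’s defaults apply: Y is up and units are centimeters.

```python
# Convert tape-measure readings into a Maya world-space position.
# Assumes the room corner is the origin, Maya is Y-up, units are cm.

def camera_position(dist_from_wall_x, dist_from_wall_z, lens_height):
    """Return an (x, y, z) position for the virtual camera."""
    # Wall distances map to the horizontal X/Z axes; lens height maps to Y.
    return (dist_from_wall_x, lens_height, dist_from_wall_z)

# Hypothetical measurements: 120 cm and 80 cm from the two walls,
# lens 140 cm off the ground.
pos = camera_position(120.0, 80.0, 140.0)
print(pos)  # (120.0, 140.0, 80.0)
```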

 

Output from the Ricoh Theta

Camera Location in Relation to Room

Step 2 – Creating & Positioning the Virtual Camera

 

Now that we have our measurements, we’re going to move into Maya and create a mono-spherical camera. This will be the virtual equivalent of our physical camera.

In the Rendering Menu, select Domemaster3D > Dome Cameras > LatLong Camera

Figure 2.0

 

Before we make any changes to the camera we’ve just created, let’s first set up a place to put it. Using the measurements we took of the physical camera’s location, we’re going to create a cube with the same dimensions as those measurements, and then use that cube as a reference object to place our camera.

Figure 2.1

 

Since we’ve already gone through all the trouble of measuring everything out in the physical world, we want to be as accurate as possible when we place our camera. To ensure we place our camera exactly at the corner of our measurement cube, we’re going to use the Snap-to-Point tool.

  • Select the LatLong Camera that we made previously.
  • Activate the Move Tool by pressing the W key.
  • Release the W key, and hold down the V Key to activate Snap to Point mode.
  • With the V key held down, hold down the Middle Mouse Button, and drag your cursor over the vertex where you want your camera to be snapped to.
  • Now that we have our virtual camera where we want it, we can safely delete the measurement cube we made earlier.
Figure 2.2
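If you’d rather script this than middle-mouse drag, the same cube-and-snap idea can be sketched with Maya’s Python API (`maya.cmds`). The measurements and object names below are hypothetical, and the Maya-specific calls only run inside Maya’s Script Editor:

```python
# Scripted sketch of the steps above: build the measurement cube, move the
# camera to its corner, then delete the cube. Maya-only; harmless elsewhere.
try:
    import maya.cmds as cmds
except ImportError:
    cmds = None  # not running inside Maya

# Hypothetical measurements in centimeters (Maya's default unit).
WIDTH, HEIGHT, DEPTH = 120.0, 140.0, 80.0

def corner_position(width, height, depth):
    # polyCube is created centered at the origin, so the upper corner
    # sits at half of each dimension along each axis.
    return (width / 2.0, height / 2.0, depth / 2.0)

if cmds is not None:
    cube = cmds.polyCube(width=WIDTH, height=HEIGHT, depth=DEPTH)[0]
    # "latlong_Camera" is a hypothetical name; use whatever your
    # Domemaster3D LatLong camera's transform is actually called.
    cmds.xform("latlong_Camera",
               translation=corner_position(WIDTH, HEIGHT, DEPTH),
               worldSpace=True)
    cmds.delete(cube)
```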

 

Now that we have our camera in position, we need to orient it so it’s facing in the same direction as our physical camera. Now, you’re probably thinking something along the lines of: “Why would a spherical camera need to be rotated in a certain direction? It’s taking footage in every direction, after all.”

Good question. Because spherical footage and effects are saved/rendered in the flattened-out equirectangular format, the cameras they are shot with need to be facing the same direction so the vertical stitch lines will align with each other. In essence, this is the same approach used in creating effects for traditional flat videos.
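To see why the yaw matters, here’s a small, self-contained sketch (plain Python; the resolution is a hypothetical example) of how a view direction maps to a pixel in an equirectangular image. Adding a yaw offset to the camera shifts every pixel horizontally and moves the wrap-around seam, which is exactly the mismatch we’re trying to avoid:

```python
import math

def direction_to_equirect(x, y, z, width, height, yaw_offset=0.0):
    """Map a 3D view direction to pixel coordinates in a width x height
    equirectangular image. A camera yaw offset shifts every pixel
    horizontally, which is why both cameras must face the same way."""
    lon = math.atan2(x, z) + yaw_offset        # rotation around the vertical axis
    lat = math.atan2(y, math.hypot(x, z))      # -pi/2 (down) .. +pi/2 (up)
    u = (lon / (2 * math.pi) + 0.5) % 1.0      # wrap horizontally
    v = 0.5 - lat / math.pi
    return (u * width, v * height)

# Straight ahead (+Z) lands at the image center:
print(direction_to_equirect(0, 0, 1, 1920, 960))  # (960.0, 480.0)
# A 90-degree yaw error pushes the same point a quarter-frame sideways:
print(direction_to_equirect(0, 0, 1, 1920, 960, yaw_offset=math.pi / 2))
```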

In the case of the Ricoh Theta S, the ‘front’ of the camera, i.e., the side that doesn’t make the vertical stitch line, is the side without the shutter button. This is the direction we want our LatLong Camera in Maya to face. See Figure 2.3 (Right).

Now that we have our cameras aligned with each other and rotated correctly, we can render out a test image. You’ll likely have to fiddle a little bit with the position of your LatLong Camera in Maya to get a 100% accurate match, but let’s take a look at my results. See Figure 2.4 (Below).

 

 

Figure 2.3
Figure 2.4

As you can see, my rendered image resembles the physical room pretty closely, but isn’t perfect due to some slight inaccuracies in my 3D model. However, for our purposes, this is going to work just fine. Let’s move on to making some special effects.

Step 3 – Creating Some Basic Effects

For the sake of fast rendering times, I’m not going to get involved in any super-intensive or technical special effects. Instead, I’m going to use some simple animated polygons. Here’s what I’ll be working with.

Figure 2.5

 

So far we’ve matched up our physical and virtual cameras in their respective rooms, and added some basic animated effects. Now, how do we render out just the effects we created without the model of the room getting in the way?

Of course, we could just delete the room model and only render the objects we want to see, but in doing so we would be limiting the scope of immersive effects we could create. We would lose shadows, reflections, and refractions caused by the objects we want to composite. However, in the case of my scene, I’m not going to worry too much about shadows at the moment for the sake of this tutorial’s brevity.

 

Now, what we’re going to do is create a material for our room which will make it invisible in renders, but will still allow it to catch shadows and reflections from the objects we’re compositing in. This leaves us with images we can incorporate into our footage, minus the shadows cast off the composite objects.

To achieve this, the first thing we’re going to want to do is select all of our stand-in geometry, i.e., the objects that we don’t want to render, and assign a useBackground material to it. In regard to this scene, that’s going to be all the walls, floors, etc.

1.

  • Open the Hypershade Editor by selecting the Hypershade/Persp Panel Layout on the left-hand side of the screen.
  • In the ‘Create’ window of the Hypershade Editor, select Maya > Surface.
  • Select the ‘Use Background’ shader.

2.

  • With your stand-in geometry selected, hold down the Right Mouse Button.
  • In the pop-up menu, select Assign Existing Material > useBackground.
Figure 2.6
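The two numbered steps above can also be done in one go from the Script Editor. Here’s a minimal sketch, assuming hypothetical names for the stand-in geometry; it only does real work inside Maya:

```python
# Scripted version of the Hypershade steps: create a useBackground shader,
# give it a shading group, and assign the stand-in geometry to it.
try:
    import maya.cmds as cmds
except ImportError:
    cmds = None  # not running inside Maya

# Hypothetical stand-in objects (walls, floors, etc.).
STAND_IN = ["room_walls", "room_floor"]

def assign_use_background(objects):
    if cmds is None:
        return None
    # Create the shader node and an empty shading group for it.
    shader = cmds.shadingNode("useBackground", asShader=True)
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                   name=shader + "SG")
    cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader")
    # Force-assign the geometry to the new shading group.
    cmds.sets(objects, edit=True, forceElement=sg)
    return shader

assign_use_background(STAND_IN)
```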

 

 

Now that we have our UseBackground shader assigned to our stand-in geometry, we need to tell the geometry not to Receive any Final Gather from the Mental Ray Renderer. This is so we can isolate the composite objects in our scene without any global illumination shading being cast on our stand-in geometry. For more information on Final Gather, you can read about it here.

 

  • Select your stand-in geometry.
  • Navigate to the Attribute Editor, and select the Shape Node attached to your object.
  • Open the Mental Ray menu, and uncheck Final Gather Receive.
Figure 2.7
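For many stand-in objects, clicking through each Shape node gets tedious, so here’s a scripted sketch of the same step. The attribute name `miFinalGatherReceive` is the per-shape mental ray flag behind that checkbox; this assumes the mental ray plugin is loaded, and it only does real work inside Maya:

```python
# Uncheck "Final Gather Receive" on the shape node of each stand-in object.
try:
    import maya.cmds as cmds
except ImportError:
    cmds = None  # not running inside Maya

def disable_fg_receive(transforms):
    """Turn off Final Gather Receive on every shape under the given
    transforms; returns how many shapes were changed."""
    if cmds is None:
        return 0
    count = 0
    for shape in cmds.listRelatives(transforms, shapes=True, fullPath=True) or []:
        # The mental ray attribute only exists when the plugin is loaded.
        if cmds.attributeQuery("miFinalGatherReceive", node=shape, exists=True):
            cmds.setAttr(shape + ".miFinalGatherReceive", 0)
            count += 1
    return count

# Hypothetical stand-in geometry names:
disable_fg_receive(["room_walls", "room_floor"])
```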

If we do a quick test render now, our result should consist of our composite geometry with shading from the environment, and a transparent background where our stand-in geometry is located. Note that shadows are not being cast off any of the composite objects.

Figure 2.7.1

 

Optional Step

If you’re using a Mental Ray Physical Sky to light your scene, you might have noticed that your render previews still have the Physical Sky horizon in them. See Figure 2.8 (Right).

This horizon won’t always be rendered in your final images depending on your settings, but it could interact with the way some reflections and final gather simulations appear.

 

To fix this, we’re going to enable the UseBackground setting on the Physical Sky.

  • With your Physical Sky selected, open the Attribute Editor.
  • Navigate to the mia_physicalskyX tab, and check the UseBackground checkbox.
Figure 2.8
Figure 2.8.1
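This optional step can be scripted too. The node type and attribute name below are assumptions based on the mia_physicalsky shader, so double-check them against your scene; the snippet only does real work inside Maya:

```python
# Enable "Use Background" on every Physical Sky node in the scene.
try:
    import maya.cmds as cmds
except ImportError:
    cmds = None  # not running inside Maya

def enable_sky_use_background():
    """Check the UseBackground box on each mia_physicalsky node;
    returns the list of nodes that were changed."""
    if cmds is None:
        return []
    skies = cmds.ls(type="mia_physicalsky") or []
    for sky in skies:
        cmds.setAttr(sky + ".useBackground", 1)
    return skies

enable_sky_use_background()
```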

Step 4 – Composited Footage

At this point, we’ve covered how to align our virtual and physical cameras, how to create some basic effects, and the first steps needed to render those effects so we can put them over our spherical footage. From here, all that’s left is to render out our effects and put them together with the footage in Premiere. I’ve taken the liberty of doing that, and here are my results.

Composite animation

Now, it’s time to make something incredible! I can’t wait to see what kind of creative masterpiece you’ll come up with.