CG & VR Part 1 – Rendering Challenges


Hello, World. My name is Elijah Butterfield, and I am eleVR’s very first intern! I am a tech instructor with a passion for mobile app & game development, and I am also a VR enthusiast. With a background in 3D modeling & animation, video game design, and CG environment creation, I recently published an educational history VR Google Cardboard app on the Play Store.


The purpose of this blog post and its subsequent parts is to give a brief overview of a few ways to take your computer-generated environments and render them in a VR format. I’ll be briefly covering VR rendering for cinematic use with Autodesk Maya in the form of short tutorials.

Elijah Butterfield – Intern


Stereo Rigs

Before we launch into any hands-on stuff, we’re going to explore what makes pre-rendering CG scenes in a stereo-spherical format for cinematic use with a software package like Maya a bit more challenging than rendering them in a mono-spherical format. To start with, mono-spherical images are relatively simple to produce in a digital environment, as they are shot under the same principle as in the physical world: with a single camera rotating around a fixed pivot point. See Figures 1.0 & 2.0.


Figure 1.0
Figure 2.0 – Mono-Spherical Render in Equirectangular Format
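A quick note on the format: in an equirectangular image like Figure 2.0, the horizontal axis spans the full 360° of longitude (yaw) and the vertical axis spans 180° of latitude (pitch). As a minimal sketch (assuming a y-up coordinate system with the camera looking down −z; your package’s convention may differ), mapping a view direction to a pixel looks like this:

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a unit view direction to pixel coordinates in an
    equirectangular image: longitude -> x, latitude -> y."""
    yaw = math.atan2(dx, -dz)   # longitude, -pi..pi (0 = straight ahead)
    pitch = math.asin(dy)       # latitude, -pi/2..pi/2 (0 = horizon)
    u = (yaw / (2 * math.pi) + 0.5) * width
    v = (0.5 - pitch / math.pi) * height
    return u, v

# Straight ahead lands dead center of a 4096x2048 panorama:
print(direction_to_equirect(0.0, 0.0, -1.0, 4096, 2048))  # (2048.0, 1024.0)
```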


Stereo-spherical content is a bit trickier though, as it needs to be shot/rendered with two cameras, each one capturing different footage for each eye. Initially, this might seem simple. All we have to do is take two mono-spherical images (like Figure 2.0) that were taken side-by-side and use one for each eye, right?


Well, almost.


Because a mono-spherical image is shot with a camera rotating around its own pivot point, its view is that of a single eye looking in all directions. If we were to apply this to the cameras on a stereo rig, it would be the equivalent of our eyes spinning around in their sockets, which is clearly not how they behave. The result would be an image where we see a stereo-3D effect in the direction the cameras were initially facing, and a cross-eyed effect in the opposite direction due to the cameras’ viewpoints being swapped when they’re turned 180 degrees. See Figures 3.0 & 4.0.


Figure 3.0
Figure 4.0
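If it isn’t obvious why the viewpoints swap, here’s a toy 2D calculation (purely illustrative, nothing Maya-specific) that tracks the sideways offset of each fixed camera relative to a viewer turning their head. A negative offset means the camera sits to the viewer’s left:

```python
import math

# Two mono-spherical cameras shot side by side, fixed in the scene,
# 6.5 cm apart along the x axis (an illustrative eye separation).
EYE_SEP = 0.065
left_cam = (-EYE_SEP / 2, 0.0)
right_cam = (EYE_SEP / 2, 0.0)

def lateral_offset(cam, yaw_deg):
    """Sideways offset of a fixed camera in the frame of a viewer at the
    origin looking in direction yaw_deg (0 = the rig's initial facing)."""
    yaw = math.radians(yaw_deg)
    return cam[0] * math.cos(yaw) - cam[1] * math.sin(yaw)

for yaw in (0, 90, 180):
    print(yaw, lateral_offset(left_cam, yaw), lateral_offset(right_cam, yaw))

# At 0 degrees the 'left' image really is to the viewer's left (-0.0325).
# At 180 degrees the signs flip: each eye receives the other eye's
# viewpoint, hence the cross-eyed effect. At 90 degrees both offsets
# collapse to ~0, so the stereo effect vanishes entirely.
```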


To get around this headache-inducing effect, we need to create a camera rig that behaves the same way our eyes do in relation to our heads. This means our two cameras need to rotate around a single shared pivot point, with our preferred eye separation as the distance between the cameras. See Figure 5.0.


Figure 5.0
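In Maya this kind of rig takes only a few lines of Python. Here’s a minimal sketch (the node names and the 6.5-unit separation are illustrative choices, and Maya also ships with its own ready-made stereo camera rig):

```python
import maya.cmds as cmds

EYE_SEP = 6.5  # preferred eye separation, in scene units

# Two cameras, offset half the eye separation to either side.
left_cam = cmds.camera(name='leftEye')[0]
right_cam = cmds.camera(name='rightEye')[0]
cmds.setAttr(left_cam + '.translateX', -EYE_SEP / 2)
cmds.setAttr(right_cam + '.translateX', EYE_SEP / 2)

# Parent both under one group whose pivot sits midway between them,
# so the pair rotates around a single shared point, like eyes in a head.
rig = cmds.group(left_cam, right_cam, name='stereoRig')
cmds.xform(rig, pivots=(0, 0, 0), worldSpace=True)

# Rotating the group now orbits both cameras around the shared pivot.
cmds.setAttr(rig + '.rotateY', 45)
```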


Camera/Stereo Configurations

Now that we’ve covered the basics of how stereo camera rigs are created and how they move, let’s take a look at the stereo configurations we could apply to the cameras themselves.
The three primary configurations are Converged, Parallel, and Off-Axis.


Converged

A converged stereo rig consists of two cameras toed-in to converge on a single plane in space known as the convergence plane, or Zero Parallax plane. This type of stereo configuration might seem intuitively correct, as it behaves the way our eyes do.

However, when the two angled views from a converged stereo rig are displayed on a flat surface, you’ll notice what’s called Vertical Parallax, or keystone distortion, in the projections of each eye.

This is caused by trying to project the offset perspective of each camera onto a single screen that is not perpendicular to either of the cameras.

This method can cause eye-strain due to the distortion and objects not converging seamlessly.

Converged

Converged View
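For a feel for the numbers: the toe-in angle of each camera follows directly from the eye separation and the distance to the Zero Parallax plane. A minimal sketch (any units work, as long as the two distances match):

```python
import math

def toe_in_angle(eye_separation, zero_parallax_distance):
    """Inward rotation (in degrees) of each camera in a converged rig so
    that the two optical axes cross at the zero parallax distance."""
    return math.degrees(math.atan((eye_separation / 2) / zero_parallax_distance))

# 6.5 cm eye separation converging 2 m in front of the rig:
print(toe_in_angle(6.5, 200.0))  # ~0.93 degrees per camera
```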


Parallel

As you may have guessed from the name, Parallel stereo rigs consist of two cameras that are parallel to each other.

This configuration gets rid of our distortion/keystone issue, but it immediately introduces the problem of our Zero Parallax plane being stuck at infinity. This means everything in our scene will appear to pop out of the screen.

We can fix this in post by artificially adding a convergence point using a technique called Horizontal Image Translation (HIT), but this involves cropping down our images and is a time-consuming process.
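As a rough illustration of what HIT does (a deliberately simplified sketch that operates on flat image arrays rather than equirectangular ones), we slide the two images toward each other and keep only the overlap:

```python
import numpy as np

def horizontal_image_translation(left, right, shift_px):
    """Post-hoc convergence for a parallel rig: crop shift_px columns
    from the left edge of the left image and the right edge of the
    right image. Features with a disparity of shift_px end up at zero
    parallax; the price is a frame that is shift_px pixels narrower."""
    assert left.shape == right.shape and shift_px > 0
    return left[:, shift_px:], right[:, :-shift_px]

# Toy single-row 'images' whose pixel values are their column indices:
left = np.arange(8).reshape(1, 8)
right = np.arange(8).reshape(1, 8)
new_left, new_right = horizontal_image_translation(left, right, 2)
print(new_left[0])   # [2 3 4 5 6 7] -> left content shifted 2 px left
print(new_right[0])  # [0 1 2 3 4 5] -> relative disparity reduced by 2
```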

We’re better off avoiding this altogether and using a different configuration.


Parallel

Parallel View


Off-Axis

Off-Axis rigs are the most commonly used stereo camera rigs, as they provide the best of both worlds from the Converged and Parallel set-ups:

They consist of two parallel cameras, which eliminates any headache-inducing Vertical Parallax (keystoning), and the cameras have asymmetrical frustums, which allow us to control our Zero Parallax plane.

One drawback of this method is that, because the cameras are parallel, objects at infinity will have the same disparity as the rig’s interpupillary distance. This means that objects in the very far distance won’t fuse when looked at.

Despite this, Off-Axis stereo rigs typically provide the best overall stereo viewing experience.


Off-Axis

Off-Axis View
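Under the hood, the asymmetric frustum is typically implemented as a small horizontal film-back offset on each of the two parallel cameras. Maya’s built-in stereo rig computes this for you, but the underlying math is roughly the following sketch (the sign convention depends on which eye you’re offsetting):

```python
def film_offset_inches(interaxial, zero_parallax, focal_length_mm):
    """Horizontal film-back offset that re-centers a parallel camera's
    frustum on the Zero Parallax plane, yielding an off-axis frustum.
    interaxial and zero_parallax must share units; Maya measures film
    backs in inches, hence the conversion from millimeters."""
    offset_mm = (interaxial / 2.0) * focal_length_mm / zero_parallax
    return offset_mm / 25.4

# 6.5 cm interaxial, zero parallax 2 m away, 35 mm focal length:
print(film_offset_inches(6.5, 200.0, 35.0))  # ~0.0224; +/- per eye
```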


For more detailed and in-depth information on these stereo configurations, take a look at these resources:

vfxIO – Parallel vs. Converged
Paul Bourke – Calculating Stereo Pairs
Stereoscopy.co – Depth Positioning


Rendering

Now that we have our basic stereo camera rig configuration all figured out, we can move on to rendering out some stereo-spherical images. However, this is a slightly complicated process: as we rotate our stereo rig, our cameras are no longer in a fixed position. How are we supposed to generate a single still image when our cameras are moving around?


For the software I’ll be using, we have two options. The first is the ‘traditional’ way of rendering CG scenes into stereo-spherical images, a fun process called Strip Rendering. This is where we rotate our stereo rig in increments of 1° and, for each eye, render out a 1°-wide by 180°-tall strip of the scene on each subsequent 1° rotation. At the end of the render, this leaves us with 360 slivers of pixels from each camera that we then have to stitch together into our left and right images. While this is a viable option for single frames, it can potentially make any sort of animation project unfeasible due to how labor-intensive it is. For more information on this method, I would recommend taking a look at this article from Paul Bourke.
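Schematically, the loop looks like the sketch below. Note that render_strip() is a hypothetical placeholder for whatever region-render call your renderer actually provides; the point is just the rotate/render/stitch structure:

```python
import numpy as np

STRIP_W, STRIP_H = 4, 720  # pixels per 1-degree-wide strip (illustrative)

def render_strip(eye, yaw_deg):
    """Placeholder: render the 1-degree-wide, 180-degree-tall sliver this
    eye sees straight ahead with the rig rotated to yaw_deg. A real
    implementation would rotate the rig and do a region render here."""
    return np.zeros((STRIP_H, STRIP_W, 3))

strips = {'left': [], 'right': []}
for yaw in range(360):  # rotate the rig in 1-degree increments
    for eye in ('left', 'right'):
        strips[eye].append(render_strip(eye, yaw))

# Stitch the 360 slivers from each camera into the final stereo pair:
left_image = np.hstack(strips['left'])    # 1440 x 720 pixels here
right_image = np.hstack(strips['right'])
```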


Our second option is an awesome plugin that lets us render out stereo-spherical images without (most of) the hassle! Thanks to visual effects artist Andrew Hazelden and his Domemaster3D plugin for Autodesk Maya, 3DS Max, and Softimage, we can render equirectangular stereo-spherical images in an animation/time-friendly way. This is the method I’ll be using in Part 2 of this post, where I’ll cover basic stereo-spherical rendering in Autodesk Maya.


CG & VR Part 2 – VR Rendering in Maya