Tossing and Turning

Tossing and Turning is something between a reading experience and a watching one. It is an experiment in using written instructions to guide the viewer through a series of movements while experiencing an immersive video. If you are going to play along, I highly recommend printing the instructions out, or at least having them on a separate screen from the one you use to watch the video. Here are the instructions and the video source:
[Instruction sheets: Tossing-and-turning-page-1, Tossing-and-turning-page-2]

(The instructions are not perfectly optimized; you can get stuck in at least one infinite loop depending on how you answer the questions, so use them creatively.)

The video was made using Domemaster 3D in Maya and a collage of models made using TiltBrush, 3D scanning with the iPad-mounted Skanect, and Anyland. The TiltBrush models were made the same way I made Self Frottage with Light, only instead of rubbing my own body with the controller, I got my very-patient-and-not-easily-weirded-out lab mates to model for me. Vi was my floor model and Elizabeth, our office manager, was my couch model. The 3D scans of my body were selfies I made way back in 2015 when I was stuck in bed with an ear infection in Baltimore on a work trip to a conference there. The scan was made in two parts: the head and the body. This kludge came about because the project started as an experiment in storyboarding using Anyland. I have been a little obsessed with beds in my work lately, so I started making models of beds using Anyland (an activity that itself spun off into Making the Bed). There I could make simple models, arrange them, scale them, and then use my head as a camera to experiment with fly-thru patterns I would later implement in Maya.
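
To make that last step a little more concrete, here is a minimal sketch of what blocking in a fly-through in Maya from hand-noted waypoints can look like, using Maya's Python API (maya.cmds). The camera name and the waypoint numbers are hypothetical placeholders for positions you might jot down while flying through an Anyland storyboard; they are not values from the piece.

```python
# A minimal sketch (Maya Python, maya.cmds) of blocking out a fly-through
# from hand-noted waypoints. All names and numbers are hypothetical placeholders.
import maya.cmds as cmds

# (frame, translation (x, y, z), rotation (x, y, z)) noted while flying
# through the storyboard scene with my head as the camera.
waypoints = [
    (1,   (0.0, 1.6, 8.0), (0.0,   0.0, 0.0)),
    (60,  (1.5, 1.2, 4.0), (-5.0, 20.0, 0.0)),
    (120, (0.5, 0.8, 1.5), (-15.0, 45.0, 0.0)),
]

# Create a camera and give it a readable name.
cam = cmds.rename(cmds.camera()[0], "flythroughCam")

for frame, pos, rot in waypoints:
    cmds.currentTime(frame)
    cmds.xform(cam, translation=pos, rotation=rot, worldSpace=True)
    cmds.setKeyframe(
        cam,
        attribute=["translateX", "translateY", "translateZ",
                   "rotateX", "rotateY", "rotateZ"],
        time=frame,
    )

# Maya interpolates smoothly between these sparse keys, which is exactly why
# the result tends to feel tool-made rather than body-made.
```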

The sensation of these fly-thrus was something between being a cameraperson and being a viewer of the spherical end product. From any position I could look around to see what views were possible, then plan my next move based on which of those perspectives I wanted to emphasize, or move with a certain feeling I thought was important to communicate.

It feels so direct, even in comparison to real-world capture with a handheld Theta. When the camera is in my hand I am manipulating the view the way I would a tool, but when the camera is my head I am embodying the viewer. That subtle difference gives an emotional richness to the camera movement that I am just not able to get with camera animation in Maya. It’s similar to the difference between shooting with a shoulder-mounted rig vs. a crane. Footage shot with a shoulder-mounted rig is bumpier, more idiosyncratic, while the crane is perfectly smooth. Or the difference between mocap animation and straight tool-based animation. We can tell the difference when a body is involved.

But unfortunately the final video wasn’t able to live up to that embodied-camera promise: it’s achingly smooth. To get that feeling into the camera motion, my next step is to sketch the basic layout of a scene in TiltBrush, marking where I want the various kinds of models to go, and then draw dynamic camera paths by tracing behind my own head as I move. I can then export all of that information and use it as an armature in Maya to see if I can get a more emotive camera path.
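
As a rough illustration of that export-and-armature step, here is a minimal sketch in Maya's Python API (maya.cmds), assuming the traced head path can be exported as a plain list of points. The file path, JSON layout, frame range, and camera name are all assumptions for the sketch, not the actual TiltBrush export format or the workflow used in the piece.

```python
# A minimal sketch (Maya Python, maya.cmds): turn a hand-traced head path into
# a curve and drive a camera along it, keeping the hand-drawn wobble.
# The file path and JSON layout are hypothetical assumptions for this sketch.
import json
import maya.cmds as cmds

# Hypothetical export: a list of {"x": ..., "y": ..., "z": ...} samples.
with open("/path/to/traced_head_path.json") as f:
    points = [(p["x"], p["y"], p["z"]) for p in json.load(f)]

# A degree-1 (linear) curve preserves every bump of the traced motion;
# a higher degree would start smoothing the body back out.
path_curve = cmds.curve(point=points, degree=1)

cam = cmds.rename(cmds.camera()[0], "embodiedCam")

# Attach the camera to the traced path over the shot's frame range.
cmds.pathAnimation(
    cam,
    curve=path_curve,
    fractionMode=True,   # travel by fraction of curve length, not parameter
    follow=True,         # orient the camera along the path's direction
    startTimeU=1,
    endTimeU=240,
)
```

The deliberately low curve degree is the design choice that matters here: resampling or smoothing the traced path would iron out exactly the bumpiness that makes the motion read as a body.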

The piece will appear in the upcoming Living Room Light exchange publication STATE\CHANGE.