This is what Science sounds like

posted in: Camera Research

In order to have great stereo in one part of a static spherical video, you’ll have to have not-so-great stereo in other parts, so finding the best ways to mess up the stereo without making it seem that messed up is pretty important. We made this video to test different angles and distances between eyes in stereo video, filming with two cameras and moving them live as we filmed. You can download it, watch it yourself, and see what you notice, though be careful not to strain your eyes trying to get unnatural views to work. Here’s what we’ve learned from it so far.

Free-viewing vs Oculus: I (Vi) can get much wider ranges to align in proper stereo when I’m free-viewing, watching a small version of the video on my screen and going wall-eyed, than when viewing in the Oculus. I suspect this is because when the video is small on the screen, tiny adjustments of my eyes can have large effects, while in the Oculus the video is right in front of your eyes and you can’t just cross your eyes a little more to dramatically shift the distance between the images.

I (Andrea) suspect that when you free-view, you are already doing tricks with letting your eyes focus weirdly in order to get 3D, whereas when using the Oculus you are, for the most part, just viewing normally and the stereo happens. If you are already free-viewing, adjusting the free-view is probably easier than suddenly having to make your eyes misalign mid-video because the stereo is way off. One interesting difference for me between free-viewing and viewing in the Oculus is that when the stereo falls apart free-viewing, you just “lose it”, whereas when the stereo falls apart in the Oculus, the sensation is much less sudden; rather, you notice it through doubling, or the 3D effect becoming less strong. This is presumably because as soon as I can’t free-view, my eyes snap back to normal viewing, whereas they never need to go weird for the Oculus to begin with, so you don’t get that sudden “I lost it” sensation.

Free-viewing having a larger effective range than headset viewing suggests that if you can free-view it in stereo but it doesn’t line up in the Oculus, there’s a way to edit it into being correct (maybe the footage isn’t lined up correctly, zoomed the right amount, or placed the correct distance apart), whereas if you are an adept free-viewer and can’t get the stereo to work free-viewing, there’s not much point in wasting time trying to get it to work in editing. We will definitely have to experiment more and see to what extent this is actually the case.

Interpupillary distance: The distance between the cameras doesn’t need to closely match the distance between your eyes for the stereo to work. We barely noticed entire centimeters of change. This is good news for stereo video, as interpupillary distance can differ by whole centimeters from person to person.

This is in contrast to our tiny-eyes 360 3D experiments, in which smaller interpupillary distances did seem to make the world larger. Maybe the difference is in the relationship between the stereo and the field of view of the world, because in Approaching Spherical 3D the field of view is in the context of a full sphere of vision. In “This is what science sounds like”, the field of view does not really change, but perhaps by zooming the video in or out along with changing interpupillary distance you could get some really cool and maybe interestingly subtle effects.
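
To build intuition for that tiny-eyes effect, here’s a toy triangulation sketch (the simple converging-ray geometry, the function, and all the numbers are our own assumptions, not measurements from the footage). If disparities recorded with a small camera baseline are replayed to normally-spaced eyes, everything triangulates proportionally farther away, which reads as a bigger world:

```python
import math

def perceived_distance(true_dist_m, camera_ipd_m, viewer_ipd_m):
    """Toy model: the distance a viewer triangulates when footage shot
    with one baseline is viewed with a different interpupillary
    distance (assumes simple converging rays, no lens warp)."""
    # Vergence angle the camera pair recorded for a point straight ahead.
    recorded_vergence = 2 * math.atan((camera_ipd_m / 2) / true_dist_m)
    # The viewer's eyes reproduce that angle with their own baseline,
    # so the point appears at this distance instead:
    return (viewer_ipd_m / 2) / math.tan(recorded_vergence / 2)

# A 3 cm camera baseline viewed with 6.4 cm eyes puts a 2 m object
# at about 4.3 m -- small eyes, big world.
print(perceived_distance(2.0, 0.03, 0.064))
```

In this model only the ratio of viewer to camera interpupillary distance matters, which hints at why pairing a zoom with the baseline change might cancel or exaggerate the effect in interestingly subtle ways.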

Angle of convergence: Unsurprisingly, this seems similar to what’s easy to focus on in real life. When you’re looking at something extremely close to your face, it gets harder and harder to focus on, with accompanying eye strain, until you simply can’t do it. At the time of filming it seemed to make sense that when you angle the cameras toward each other, you’re focusing on something close, but it’s clear upon watching and thinking about it that the inward angle is what becomes too great too fast, while far-away objects can still be seen in stereo that is perhaps a bit more emphasized. So: tilt in to focus out, tilt out to focus in.
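
For a sense of why the inward angle becomes too great too fast, here’s a back-of-the-envelope sketch of plain two-camera geometry (the 6.4 cm baseline and the example distances are our own numbers, not from the shoot):

```python
import math

def toe_in_deg(baseline_m, convergence_dist_m):
    """Inward tilt per camera needed for both optical axes to cross
    at a point convergence_dist_m straight ahead."""
    return math.degrees(math.atan((baseline_m / 2) / convergence_dist_m))

# The required toe-in stays tiny at a distance but blows up close-in:
for d in (3.0, 1.0, 0.3, 0.1):
    print(f"converge at {d} m: {toe_in_deg(0.064, d):.2f} deg per camera")
# roughly 0.61, 1.83, 6.09, and 17.74 degrees respectively
```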

When the cameras point outward at a slightly divergent angle, you can still focus on what’s right in front of you, because the wide-angle lenses still capture overlapping footage at a converging angle. What surprised me is how much my face in particular warps when this happens. Maybe something in our brain’s specialized facial-construction software is sensitive to this angle, or maybe it’s the GoPro’s wide-angle warping changing as my face gets closer to the edge of each camera’s field of view. Needs more experiments.

Rotation and vertical shift: Even a tiny amount of this makes it very difficult to get the images to align, for all of us. This will be a big limitation on which panoramic twists will work for stereo spherical video, and it’s the same as the problem of head-tilt: when you tilt your head in a spherical stereo video, it goes out of alignment pretty fast.
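
Here’s a rough sketch of why head-tilt breaks things so fast (the 64 mm and 5 degrees are our own toy numbers): the footage bakes in purely horizontal disparities, but rolling your head vertically separates your eyes, and that vertical component has nowhere to go:

```python
import math

def vertical_eye_offset_mm(ipd_mm, roll_deg):
    """Vertical separation between the eyes after rolling the head,
    while the stereo pair still assumes a level interocular axis."""
    return ipd_mm * math.sin(math.radians(roll_deg))

# A 5-degree head tilt already separates 64 mm eyes vertically by
# ~5.6 mm -- a vertical disparity the footage can't supply.
print(vertical_eye_offset_mm(64, 5))
```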

Vertical tilt: With one eye tilted upward a little more than the other, the video still lines up pretty well for me (Vi) when free-viewing with my head tilted a bit to the side to compensate for the vertical shift. There’s the same sort of slight warping of the face as with the angle of convergence. I didn’t really expect differently-tilted images to align so well, so we definitely need more tests! It’s probably ideal to change the panoramic twist subtly through the video in a way that always makes faces look good, concentrating the greater divergences on non-face things.

The vertical tilt doesn’t align so well in the Oculus with this particular video, because the way the camera is tilted makes one image higher than the other, and it’s harder to tilt your head in relation to the image when the image is strapped to your face. We could compensate for the vertical shift in post-production, now that we know this is a thing. We definitely want to experiment with what happens when you edit the same footage different ways next time.
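
A minimal sketch of what that post-production fix might look like, assuming frames come in as numpy arrays (the function and the example offset are hypothetical, not our actual pipeline):

```python
import numpy as np

def shift_vertically(frame, pixels):
    """Shift a frame (an H x W x 3 array) down by `pixels` rows
    (up if negative), padding the exposed edge with black."""
    shifted = np.zeros_like(frame)
    if pixels > 0:
        shifted[pixels:] = frame[:-pixels]
    elif pixels < 0:
        shifted[:pixels] = frame[-pixels:]
    else:
        shifted[:] = frame
    return shifted

# e.g. nudge one eye's footage up by an offset found by eye, per shot,
# before packing the side-by-side stereo frame:
# right_eye = shift_vertically(right_eye, -12)
```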

Non-aligning images: We found the layering of not-even-close-to-aligning images so interesting that we stuck a bit of video on the end where one eye is completely upside-down, and the two overlap to create a rotationally-symmetric cool-looking thing. Unlike simply layering video, each image is itself crisp, and the eyes and brain can choose which parts of which to “layer” over the other eye’s image. We also played with this idea in #3D Selfie: when we turn the cameras completely out of alignment, you no longer bother trying to see stereo, and instead get a layered story of different faces, which then converge again on the other side.

The effect of non-aligning images isn’t one that I (Andrea) can really see with free-viewing. But it’s totally possible for me (Vi), though it’s not automatic like in the Oculus. It takes work and deciding what I want to see, and already knowing how it looks in the Oculus gives my brain a goal to go for when free-viewing.

That’s what we’ve noticed so far! These observations aren’t exactly super scientific, but they’re great preliminary results to point us in interesting directions.

-eleVR