eleVRant: My brain plays tricks on me

posted in: Camera Research

I have a background in neuroscience, and one thing that every neuroscientist knows is that your brain is playing tricks on you all of the time. There is actually an entire research community devoted to the science of illusions. There is even an annual competition for the creator of the year’s best perceptual illusion.

The thing about perceptual illusions is that even though they seem baffling, as though our brains were making crucial mistakes, they actually tell us really important things about how we perceive the world. Generally, what they tell us is that our brain takes what ought to be insufficient information and processes it, using some fairly reasonable assumptions about what our world is really like, to arrive at a good estimate of what is probably happening. Sure, we can trick our brains, but only by feeding them something so unusual that it would rarely come up in the real world.

One of the things that we can learn from optical illusions is how much we really want to see the world in 3D. Our brains have optimized for this so drastically, that even when looking at flat images on a screen, we adjust for a world of light and depth and color.

Take this illusion from Ted Adelson at MIT. You’re probably entirely convinced that tile A and tile B are different colors. They’re not, of course, or it wouldn’t be an optical illusion. But, why?


As it turns out, our brain takes far more cues into account when deciding whether something is 3-dimensional than just the stereo cue that everyone and all the 3D movies are so keen on. We live in a world where light tends to shine from above, where shadows alter colors and shades, where things get smaller in the distance. And our brains use all of those cues to process everything we see, even if it's on a screen.

A and B look like different colors to us because we see B as being in shadow, and our brain has made an automatic adjustment for the fact that shadows make colors darker. That's how the real 3D world works.
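That automatic adjustment is sometimes described as "discounting the illuminant": estimate a surface's reflectance by dividing the luminance you measure by the illumination you assume. Here's a toy sketch of that idea in Python (the pixel and illumination numbers are made up for illustration, not taken from the actual image):

```python
def inferred_reflectance(pixel_luminance, assumed_illumination):
    """Toy model of lightness constancy: estimate surface
    reflectance by discounting the assumed illuminant,
    roughly reflectance ~ luminance / illumination."""
    return pixel_luminance / assumed_illumination

# The same on-screen pixel value reads as a darker surface in full
# light and as a lighter surface when assumed to be in shadow.
same_pixel = 120  # hypothetical identical value for tiles A and B
tile_a = inferred_reflectance(same_pixel, assumed_illumination=255)
tile_b = inferred_reflectance(same_pixel, assumed_illumination=150)
print(tile_b > tile_a)  # prints True: the shadowed tile looks lighter
```

The point isn't the particular numbers; it's that one measured value maps to two different perceived surfaces once the brain factors in lighting.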

Optical illusions usually aren’t examples of our brains being stupid. They’re examples of our brains being clever.

Computer vision is a difficult problem because the world we see has already been cleverly parsed by our brain to make sense of shadows and light and edges. For example, we have an uncanny ability to recognize someone as the same person after they have rotated a few degrees.

To do really completely authentically realistic 3D 360 virtual reality video we need an actual 3D map of the world at all times that is rendered differently depending on your angle of gaze and your movement. Otherwise, we’ll be ignoring parallax and screwing up stereo half the time and getting all kinds of visual cues wrong.
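One back-of-the-envelope way to see why a flat image can't stand in for that 3D map: the stereo disparity between your two eyes depends on depth. A quick sketch, assuming a typical 64 mm interpupillary distance (the specific numbers are mine, not from any particular headset):

```python
import math

def disparity_deg(depth_m, ipd_m=0.064):
    """Angular disparity (vergence angle) for a point straight
    ahead at the given depth, seen by eyes separated by ipd_m."""
    return math.degrees(2 * math.atan((ipd_m / 2) / depth_m))

# Nearby points produce much larger disparity than distant ones --
# exactly the depth information a single flat panorama throws away.
near = disparity_deg(0.5)  # roughly 7.3 degrees at half a meter
far = disparity_deg(5.0)   # roughly 0.7 degrees at five meters
```

Because disparity varies continuously with depth, getting it right for every object in the scene really does require per-viewpoint rendering, not a fixed pair of images.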

My logical, math-y side that understands how real 3D images change is completely convinced that there is no way that generating 2 static 360 panoramas (one for each eye) should work to give us a feeling of stereo 3D. Frankly, it only works because we don’t have real 360 point cameras and our stitching algorithms stitch together two slightly different worlds centered around the same point.
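Here's a rough sketch of one way those stitched panoramas fall short: a stereo panorama pair bakes in a horizontal baseline, so the effective eye separation shrinks as your gaze tilts away from the horizon and collapses entirely at the poles. The cosine model below is my own simplification for illustration, not an exact camera model:

```python
import math

def effective_baseline_mm(elevation_deg, ipd_mm=64.0):
    """Simplified model: the baked-in horizontal baseline of a
    stereo panorama pair, projected onto a gaze direction tilted
    elevation_deg above the horizon."""
    return ipd_mm * math.cos(math.radians(elevation_deg))

horizontal = effective_baseline_mm(0)   # full 64 mm: stereo looks right
looking_up = effective_baseline_mm(90)  # ~0 mm: stereo collapses overhead
```

So the stereo effect is only approximately right near the horizon, which is where we spend most of our time looking anyway.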

But, it does work. Because my brain is playing tricks on me. Or, more accurately, because my brain is so good at seeing the world as it really is in its wonderful, gorgeous, light and shadow filled 3D glory, that it will happily work around the imperfect cues that we give it to create a world that is convincingly 3-dimensional.

In the end, I want VR video to be perfect, but my brain is happy with good enough.