The Year of Dailies


Annnnnd done! I just finished a year-long project of publishing a spherical video every weekday. That came out to 262 videos, and since the theoretical maximum number of weekdays in a year is also 262, I guess I win! You can sift through the entire heap, or you can trust that I know what I'm talking about and just watch the cream of the crop in this handy dandy playlist.

When I was only a few weeks into it, Vi interviewed me about the project here at eleVR, so we thought we'd stick with the format. Questions activate!

One year is a long time in the VR industry right now. What changed around you, and how did this affect your practice?

Since last year the attention and fervor around VR as an industry have definitely increased. With hardware finally coming out to consumers and more people getting exposed to immersive projects, both makers and audiences are pouring into the medium. This is both frustrating and fantastic. More people making more immersive work means that the brave and curious will explore and expand the edges of the medium, drawing in new mountain ranges and sea monsters and bringing back stories of their travels. But it also means that the center becomes trampled and the paper worn thin with obvious tropes, and, as is often the case in other mediums, that which is obvious to make is easy to watch. In our attention economy, that means it accumulates views and apparent value.

Because the weight of this infant industry drives, and you might argue must drive, toward this center if it is going to make money, I now realize that I personally, and our team as well, have a responsibility to the future of immersive media to be an evocative and effective counterweight. I have to use my platform and opportunity, my art practice, my voice, to speak to another path: a more abstract, rigorous, weird, experimental way.

What did you do to keep yourself interested in this project for an entire year? Are you glad that it’s over? Will you miss it?

I am really happy it's over, not because I was miserable but because it feels like a huge accomplishment. I already miss it though. Having such a clear record of my time, both the mounting number of videos and the lifelogging aspects, was extremely satisfying. I kept myself interested by using my super special Emily skill: try every single thing I can think of and if it turns out crap just shrug and move on to the next idea.

Tell us about some of the projects-within-the-project, like the alphabet series and the cinematography experiments.

Ah yes, my still unfinished alphabet series. After 6 months of ladling out my life I started to realize how intimate and particular the videos are to my identity and circumstance. I am a 31-year-old white American artist living in San Francisco, and I wanted to reflect a self-consciousness of my position in a set of videos. Thus was born the alphabet series, consisting of one video for every letter in the English alphabet. A is, of course, for art, while H is for husband and L for laundry. Each video was of a simple thing from my everyday life, and my idea was that someone else's alphabet of daily life would be very different. Even if we both did L is for Laundry, what doing laundry means would change based on their identity and circumstance. I am still missing videos for K, O, R, U, V, X, and Z.

I have also written a few previous posts relating to the videos in this series: Spherical Cinematography 102: Texture, if you are looking for more detailed information on making good spherical shots; First forays: Multi-camera Spherical, for experiments with shooting with multiple spherical cameras simultaneously; Performance at LRLE, to read about an experiment in mixing live and spherically recorded site-specific performance with an audience using their phones as distributed immersion; and Play/Room, for a study in creating a physical library/database/mess of spherical videos.

Let’s talk cameras. Midway through the project you upgraded from the Ricoh Theta M15 to a Ricoh Theta S. Did this change anything? How do you think this particular style of camera affected the project, and ideally what would you change about it?

So much changed! The Theta M15 had a three-minute video capture limit, so when the project started I called it 3 Minutes of Yesterday and it consisted mostly of moments of my daily life. The video that best illustrates this phase is H is for Husband. It's simple and sweet and cuts off mid-word.


The agency for each video was partly mine (the start) and partly the hardware's (the stop). When I later switched over to the Theta S, I got a 25-minute one-time record limit. At first that felt like an ocean of time, but eventually, playing with overlapping time lapses and performance events, even 25 minutes felt too short. Ideally the camera would shoot for over an hour at 8K resolution and 60 fps in the same form factor, but would also have a malleable depth of field so I could dynamically adjust, using a handheld remote, between different ranges of focus.

Some of the videos look like simple unedited captures of your life, and some are heavily composed and edited, and it’s not always easy to tell which is which, or whether the difference really matters. I was most of the way through “Touch me” before I realized what you had done with it. I had to go back to “TFW you realize” several times before I finally “realized”. How do you feel about these different types of videos, and about the level of engagement required of the audience to appreciate what’s going on with this entire project?

I totally agree that this project asks a lot from viewers, and that it would be ridiculous of me to expect people to engage closely enough with the project to "get" both the concept driving every individual video and the project as an experimental whole. It was most importantly a tool for me to gain better intimacy with the medium so that I could, as Ansel Adams would put it, practice it in terms of its inherent qualities and thus reveal endless horizons of meaning. For it to be easily understood as a branching exploration of the medium's qualities would likely require a more curated and informational setting than just a YouTube channel.

I asked this in the last interview, but seriously how do you manage to do this in addition to all your other projects and are you some kind of magic human or something?

When it comes to my art practice, once I make a rule I will go all out to stick with it. Even if that means publishing an utter flop of a video like Almost, in which I repeatedly try and fail to form a coherent thought to record. So I can do it on top of the rest of my work because I prioritize one kind of failure over another.


Green Screening video on the web – a ThreeJS Extension


A while ago, I posted about "10 fun and easy things that anyone can make for VR". The tenth item on that list suggested green screening a flat video onto a panoramic photo to create a spherical video.

This is easy to do in pre-processing with video editing software on your computer. For example, Emily helped me take a test video in front of a green screen that looked like this:



And using Adobe Premiere (and more Emily video editing help), I was able to insert myself a number of times on top of the “Kirby Cove Panorama” fairly easily. You can check out the original panorama on the eleVR Picture Player – it’s the second image.



But, for a really interesting experience, I wanted the green screened video to have the potential to be more interactive: for example, click a button to start the video on top of the current scene, or move the character in the video around. For that, I really wanted to have the compositing be done live on the web rather than pre-processed.

My first thought was to take advantage of the alpha channel. Some video types, including WebM, a standard web video format, can encode alpha transparency. Creating a WebM with alpha transparency was a bit more annoying than I would have liked, but the effect looked great… on Chrome, which has supported WebM transparency since 2013. What I hadn't realized at the time is that Firefox doesn't currently have any support for WebM transparency. So, as soon as I checked out my project on Firefox, I saw my video in an ugly black rectangle instead of properly composited. Since Firefox is the WebVR browser of choice for a lot of people, this was unacceptable.

I had to go with a different option, and my next thought was to basically do the green screening “live” on the web. Someone, I figured, has probably written a ThreeJS extension to do just that. And, well, there were a few people who had created live green-screened videos with ThreeJS and shared their code, but the projects were all one-offs and not easily “droppable” into other projects. That said, by this point I had seen enough that I was pretty sure that I could use the various projects I’d seen to create my own ThreeJS extension for this purpose.

With the THREEx.chromakey extension, available on GitHub, all you need to do is include one JavaScript file in your project and replace your usual material with the THREEx.ChromaKey material. Then, in your animate loop, you need to update your material to get the latest frame. I actually think it's easier to use than the standard way of doing video in ThreeJS.
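For a sense of what the extension is doing under the hood, the core of chroma keying is just a per-pixel distance test against the key color. Here is a rough sketch of that test in plain JavaScript; the function name, default key color, and threshold are hypothetical illustrations, and the actual extension presumably does this per-fragment in a shader on the GPU:

```javascript
// Returns the alpha for one RGB pixel: 0 if the pixel is within
// `threshold` of the key color in RGB space (keyed out), 1 otherwise.
// The default key color and threshold here are illustrative guesses.
function chromaKeyAlpha(r, g, b, key = { r: 0, g: 255, b: 0 }, threshold = 100) {
  const dr = r - key.r, dg = g - key.g, db = b - key.b;
  const distance = Math.sqrt(dr * dr + dg * dg + db * db);
  return distance < threshold ? 0 : 1;
}

console.log(chromaKeyAlpha(0, 250, 10));    // near-pure green screen pixel: 0
console.log(chromaKeyAlpha(220, 180, 150)); // skin-tone pixel: 1
```

A real shader would typically also soften the edge, fading alpha near the threshold, to avoid a hard green fringe around the subject.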

You can check out the video above composited into another project by pressing the space bar once, and then using the WASD keys to look around and find the video (sorry, this was an experimental project with no polish). I'm hiding over by my desk.



Perhaps more usefully, Emily ended up using the THREEx.chromakey extension in one of her HyperLinkSpace projects, water. In that project (also navigable with WASD), all of the "animated gifs" are actually green screened videos, as the animated GIF format is not at all supported by WebGL.



THREEx.chromakey has been a nice tool to have handy around the eleVR office as we've done different experiments. We're finally releasing it separately on GitHub today in the hopes that other people might also find it useful for their projects. We look forward to seeing how other people use it.


VR and the Philosophy of Color


You’ve probably heard the old philosophical question: is my red your red? Might what I see as red be what you see as green? Is the color we experience a real property of objects, or something that exists only in the human mind?

With VR getting better every day, soon we'll be able to provide some experimental insight into an entire class of ontological questions that were previously unapproachable as anything but thought experiments.

Specifically, today we’ll be talking about: 1. The “inverted spectrum” thought experiment, 2. How we might actually make it real using virtual reality, and 3. What possible results might mean according to various schools of thought in the philosophy of color.[1 “Schools”]

Before we dive in though, I’d like you to take a look at this picture I just took of our office fruit bowl.


When I look at this, I see colorful fruit piercing through the obviously blue-tinted photo. Bright orange oranges, green apples, yellow bananas. Just how yellow do those bananas look to you?

I was inspired to make the above picture by a study about color memory that showed that when adjusting the color of a banana-shaped object to match a completely grayscale background, people will go past the gray of the background to a slightly blue tint, suggesting that they still see bananas as yellow even when they’re perfectly grey.[2 Hansen] Do you see the above bananas as yellow? Maybe with a bit of green on the ends, maybe with the further bunch and shadowed parts being more towards a yellowy-orange or red?

If you look at the actual color of the pixels on the screen, you'll find that the bananas are, in fact, blue. Not grey, not negligibly blue, but quite blue, from periwinkle where it's brightest to cyan at the tops to purply-blue in the shadowed regions. The orangey-looking bananas in the background range from a royal purple to lavender. At least, those are the colors straight out of a color picker. But regardless of the wavelengths, the color I experience in my brain when I look at the brightest part of those bananas is quite definitely yellow.

So, keeping in mind our brains’ ability to experience a color that is the exact opposite of the color traditionally associated with the incoming wavelengths, let us continue to the inverted spectrum experiment.

This time, the yellowest part is actually yellow.

1. History

The question of the “inverted spectrum” was introduced by Locke in 1690, though Locke himself quickly dismissed it.[3 Locke] Locke uses color frequently in his writings, as examples of self-evident bits of logic like “blue is not yellow.” Color in general has been used as an example of self-evident truth for longer than that, and many times since.[4 Russel][5 Berkeley] If I see green, I see green. No supporting evidence is needed, no contrary evidence could change my mind. I might be wrong about what’s green, why it’s green, whether it appears green to others, and whether the green exists in the object or in my mind, but in any case, I’m the authority on whether I’m experiencing greenness or not.[6 Johnston]

In 2016, and to one who has been researching color for the past year, the flaws of philosophical arguments that ignore color science are many and obvious.[7 Hume][8 Wittgenstein] Even something as simple as "blue is not yellow" no longer seems self-evident or meaningful when I know from experience that a blue banana can in fact be a yellow banana. And if you and I are color-inverted opposites and my blue is your yellow, or if neither blue nor yellow exists as a property of objects, simple logical-sounding quips about colors can no longer be counted on as simple illustrations of logical types or excluded middles. Thus Locke's thought experiment about the inverted spectrum becomes grounds for expansion and controversy.

The idea of the inverted spectrum seems simple at first. Imagine a person who saw the opposite of every color: green when looking at red things, red when looking at green things. We call this person “Invert”, and their normally-sighted counterpart is named “Nonvert”.[9 “Nonvert”] A whole host of questions can be asked about this: is it possible? Would you be able to tell? Would a green object falsely appear red to Invert, or would it be that a usually-green object is red to Invert?

Hilary Putnam, in “Reason, Truth, and History”, proposes a variation.[10 Putnam] Imagine that the above spectrum inversion happens suddenly to a person. They remember what colors looked like before that, and now, suddenly everything switches. Over time, we might imagine the person would get used to it enough that they can navigate the world just as they once did, and refer to colors according to how other people see them instead of how they see them. But would they get completely acclimated to the switch and start actually seeing red as green?

What do you think? Personally I don’t know, but luckily I don’t have to figure it out using logic alone. The experiment Putnam proposes is one which we are on the verge of being able to actually do, using VR/AR, and even the details involved in thinking about how to design this experiment lend some insight to the question itself.

2. Inverting the Spectrum in VR

Inverted video passthrough [11 “Wearality”]
Most of those I know in VR research believe, as I do, that in ten years many people will have access to high quality VR/AR (which will be the same thing, same device) that they can wear wherever they go, a piece of hardware that will sense and transform the world they perceive. There are some hard questions to solve with regards to computer vision, detecting spaces and overlaying realistic objects on them, and so on. But eventually, spectrum inversion will be as simple as taking the incoming visuals and shifting the hue before showing it to the viewer. We will soon be able to change what color people see attached to red objects, in a fairly unobtrusive way.
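As a concrete sketch of that hue shift, here is a 180-degree hue rotation of a single pixel in plain JavaScript, using the standard RGB/HSL conversion formulas (the helper names are mine, not from any particular library):

```javascript
// Convert an 8-bit RGB pixel to hue/saturation/lightness in [0, 1].
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b);
  const l = (max + min) / 2;
  if (max === min) return [0, 0, l]; // achromatic: hue undefined, use 0
  const d = max - min;
  const s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
  let h;
  if (max === r) h = ((g - b) / d + (g < b ? 6 : 0)) / 6;
  else if (max === g) h = ((b - r) / d + 2) / 6;
  else h = ((r - g) / d + 4) / 6;
  return [h, s, l];
}

// Convert hue/saturation/lightness back to 8-bit RGB.
function hslToRgb(h, s, l) {
  const hue2rgb = (p, q, t) => {
    if (t < 0) t += 1;
    if (t > 1) t -= 1;
    if (t < 1 / 6) return p + (q - p) * 6 * t;
    if (t < 1 / 2) return q;
    if (t < 2 / 3) return p + (q - p) * (2 / 3 - t) * 6;
    return p;
  };
  if (s === 0) { const v = Math.round(l * 255); return [v, v, v]; }
  const q = l < 0.5 ? l * (1 + s) : l + s - l * s;
  const p = 2 * l - q;
  return [hue2rgb(p, q, h + 1 / 3), hue2rgb(p, q, h), hue2rgb(p, q, h - 1 / 3)]
    .map(v => Math.round(v * 255));
}

// "Invert the spectrum" of one pixel: rotate hue by half the color wheel,
// leaving saturation and lightness alone.
function invertHue(r, g, b) {
  const [h, s, l] = rgbToHsl(r, g, b);
  return hslToRgb((h + 0.5) % 1, s, l);
}

console.log(invertHue(255, 0, 0)); // red -> cyan: [0, 255, 255]
```

A passthrough inverter would apply this to every pixel of every frame, ideally on the GPU.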

Of course, there are subtleties, both on the tech end and the human end. First, even the best cameras, displaying images on the best screens, don't do justice to the human eye's capabilities. If anything changes this, it will be the piles of upcoming VR tech, but for now we're stuck with RGB screens that have quite a small range of brightness and not enough resolution for a full field of view's worth of crisp visuals, coming in with a delay and a framerate that leaves many people feeling sick. Before we can invert reality, we need to be able to display reality, as it is, in front of the viewer at that very moment.

The first approximation of this is video passthrough, which is when a VR headset (which usually blocks vision of the outside world) shows camera input from a camera on the headset. Of headsets on the market right now, Gear VR comes with this capability ready to use.[12 “Gear VR”] At the moment, video passthrough does not simulate looking at reality very well… it’s good enough to keep track of what’s going on in the “outside” world while you’re in headset, but you wouldn’t want to walk around or do even the simplest of “outside” tasks using it. We’re not far, though, from being able to make video passthrough good enough that you could use it day-to-day if you had to, never leaving the headset at all, and it is already good enough for some simple outside interactions.[13 “Passthrough”]

This brings its own host of ontological questions and moral quandaries. Even before VR went commercial, there were those who wondered whether those who experience life through a camera lens are actually experiencing life. Is what we see real, or just pixels? Real, or just incoming light? Real, or just a construct of a fundamentally lonely brain-in-a-vat?

The mug is orange, the pretzel is brown. Right?

Whether what I see is the thing or merely represents a thing, I can't say for certain. But whether I see my orange coffee mug through the intermediary of my cell phone or just through my eyes as usual, we've decided that the image, the mug, and the orangeness have something to do with reality as we live it, even though a single pixel of orange is the same as a single pixel of brown and a single atom of mug doesn't really have any color properties at all.[14 Lotto]

Another subtlety is that human color perception varies much more between individuals than most people realize, even among those with "normal" color vision. My eyes and yours probably think a slightly different frequency is pure green, but the manufacturers of screens can only pick one. When they mix this green with this red to try and simulate a very specific yellow, my eyes might think the red+green yellow perfectly matches a certain corresponding real-life yellow composed of an entirely different mix of wavelengths, while you might think they don't match at all.[15 "Metamers"]

Assuming your screen’s RGB perfectly matches your eye’s RGB, we still have to consider our video capture methods if we intend to manipulate the video passthrough. Regular cameras only sense RGB values, and the normal color inversion functions found in photo editing software can only change the already-captured RGB values. This is different from inverting the original frequencies. Cameras and screens also can only display and show an extremely limited range of brightness compared to what the human eye can detect, and these differences in brightness play a large role in the more subtle bits of color perception. The same mix of green+red, at the same brightness, looks brown or orange depending on sometimes quite subtle differences in the brightness of the colors around it.

And then there’s other fun with human eyes… you might notice that this pure green is harder to read than this pure red. It’s not that the green is actually brighter than the red, but in how our eyes sense those wavelengths. And so, if you were to just switch red for green, you might think that you wouldn’t get used to it, because no matter how long you spent with inverted vision you’d look at the above text and think this pure green is in fact much easier to read than this pure red. Brightest-appearing of all is yellow, even if it’s actually no brighter than magenta.

(For fun times, try taking a picture of this sentence with your cell phone and putting a black+white filter on it)
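The unequal apparent brightness of equally "bright" pure colors can be sketched with luma coefficients, which weight each channel by roughly how much it contributes to perceived brightness. This sketch uses the Rec. 601 weights, one common model among several (my choice of model, not from the post):

```javascript
// Approximate perceived brightness of an 8-bit RGB pixel
// using the Rec. 601 luma coefficients.
function luma(r, g, b) {
  return 0.299 * r + 0.587 * g + 0.114 * b;
}

console.log(luma(255, 0, 0));   // pure red:   ~76
console.log(luma(0, 255, 0));   // pure green: ~150
console.log(luma(255, 0, 255)); // magenta:    ~105
console.log(luma(255, 255, 0)); // yellow:     ~226
```

This is consistent with the black-and-white photo trick above: yellow grays out much lighter than magenta, even though both are at full channel intensity.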

No amount of time in color-inversion VR would change the biology of your eyes to be able to read yellow text as easily as magenta text, which is Hardin’s argument against the idea that inverts and nonverts might mingle unknowingly.[16 Hardin]

The pretzel and mug are the same color, at least to nonvert.

The counter-argument would be that a true color invert would see a rich dark yellow in place of magenta and an ultra bright hard-to-read magenta in place of yellow. A true invert would see a difference between green and green-that-is-darker-only-in-context, like what a nonvert sees between orange and brown, and have trouble telling orange and brown apart. The way our eyes detect color is inherently asymmetric, and switching out wavelengths doesn’t simulate what it’d be like for our eyes to actually be wired differently.

And it isn’t so unbelievable that maybe we someday could create these different perceptions. Certain illusions can create an effect of a yellow that is darker and more supersaturated than any yellow seen under normal circumstances. Look at the center of the blue flower below for 30 seconds, then look at the fully-saturated yellow square and notice the afterimage. It’s a superyellow flower![17 superyellow]

Stare at the center of the flower for 30 seconds, then look at the yellow square to see a superyellow afterimage.
This points towards the idea that maybe our brains have the ability to perceive colors outside the standard range even if our eyes can’t, and that inverting how we see colors should actually change the set of colors we see rather than just rearranging the current ones.

All this is good news for fans of complicated ontological arguments, but bad news for those of us who would like to try simulating color inversion using current VR technology (though maybe eventually we can bypass the hardware of the eyes and get some properly color-inverted VR straight to the brain). Screens already can’t show us anywhere near the range of colors we see in real life, much less the ones beyond the normal range. But in 10 years, who knows?

And finally, after all that, we’d have to figure out what it means to invert the spectrum given the color space of a particular piece of technology, and make sure we design experiments that understand that whatever technology and inversion we use is just one choice of many possibilities, in a world of colors so much more complex than simple pixels.

A and B are the same “color”. [18 Adelson]
3. Placing our Bets

I don’t know what the answer will be, but I do know that we should place our bets now, design good experiments, and consider the implications of different possible results, before it can actually be done. Otherwise we’ll be susceptible to the scientistic phenomenon where “experiments” become merely demonstrations, force-fit into our previous worldview no matter the results.

Many commonly held philosophical ideas about color don’t hold up even to current science. Hume’s “missing shade of blue” misses how dependent our color perception is on context. Most people are familiar with some optical illusions of color, such as Ted Adelson’s Checker Shadow Illusion,[18 Adelson] which we’ve talked about before.[19 Hawksley] The recent meme “The Dress” is another striking example of how color perception is about so much more than photons.[20 “The Dress”]

Sydney Shoemaker argues that color might be a relational concept.[21 Shoemaker] There’s no contradiction in having something be on my right but on your left, and neither is it just a reflection of our individual opinions and preferences. Likewise, maybe The Dress could be black and gold to me and white and blue to you, in a way that is precise and correct rather than some vaguely-defined concept of individual perceptual differences.

There is some evidence that differences in culture and surroundings may change the perception of color, and certainly there are large linguistic differences between how different groups name colors (as with the well-known work by Berlin and Kay[22 B+K]). But while these linguistic differences may change people's memory and meaning of color, they do not seem to change the in-the-moment physical ability to differentiate subtle shades. So in any experiment on changing color vision using VR, one should be careful to make sure that there's no confusion between whether the experiment is directly testing how a person sees, or whether it's testing the memory of their experience.

White and gold, or black and blue? [20 “The Dress”]
There have been some surprising examples of visual adaptation, such as the finding that those who wear lenses that invert up and down will, after a couple of weeks, get completely used to the switch; down becomes up and the world can be navigated normally again. When the glasses are removed, the world looks inverted again for a while.[23 Stratton] But for color, Putnam assumes that a sudden Invert would never get used to the inversion, and that even after years of inverted colors a de-inverted person would think of the de-inverted colors as switching back to normal. This seems easy to test, though if it's similar to things like language, in that children get used to it while adults don't, it might be difficult to test ethically.

Iris Murdoch, in “The Idea of Perfection”, says “‘Red’ cannot be the name of something private. The structure of the concept is its public structure, which is established by coinciding procedures in public situations.”[24 Murdoch] Channelling Hampshire, she says: “This is really red if several people agree about the description, indeed this is what being really red means.” Someone who has been in color-inverted VR their whole life, then, would agree on what is red with most other people, until they de-invert.

A relevant fact is that it is not unusual for those with certain types of colorblindness to reach adulthood before finding out that they are “missing” any sense data, because they make such fluent linguistic use of the public concepts of color. Even fully blind people learn the concepts and grammar of color well enough for casual conversation—everyone knows roses are red and the sky is blue, that red is loud/angry/passionate while blue is calm/depressing/relaxing, whether they’ve seen the color or not. One could thus argue that blind and colorblind people have complete knowledge of red, that direct personal experience of a color isn’t necessary for understanding it, just as direct personal experience isn’t always necessary to gain a good understanding of many non-sight-related concepts. In this case, someone who has never seen colors, but has full cultural knowledge of them, understands what colors are better than a normally-sighted baby who has only experienced colors without context or interpretation. So too would a color invert.

We have many vision tests which can differentiate between different types of vision, and some would argue those tests are an important part of color discourse that differentiate what blind and colorblind people mean by “blue” from what their normally-sighted counterparts do.[25 Harrison] One could design tests to differentiate whether someone is a certain type of color invert as well, such as how easily they can read yellow text vs magenta text. On the other hand, if we are to rely on subtle case-specific differences in discourse as with colorblind tests, we must also believe the differences in discourse created by phenomena like The Dress, as well as other subtle differences between individual color experiences.

An Ishihara test for certain types of color blindness [29 Ishihara]
Dimitria Electra Gatzia suggests that much of color discourse is said in a sort of fiction, where we know that what we say is an approximation useful for communication rather than a statement of fact about the world.[26 Gatzia] Certainly if I were to wear color inversion glasses and say “the lemon is blue”, I would know I really mean “the lemon looks blue through these glasses but I know it is really yellow, by which I mean I know that under usual conditions I would experience it as yellow through my individual human eyebrain, etc”. Also compelling is Gatzia’s work exploring the unusual color experiences of synesthetes, including colorblind synesthetes who nonetheless experience colors through other means. It’s interesting to have examples of those who don’t share the same public visual experiences as others, and thus can have both the public concept of color and individual color experience without connecting the two together.

I’m fascinated by the idea that “red” is a public context-dependent thing because it implies that color is associated with context, that if someone grew up with colors randomly assigned to objects in a constantly-shifting way, we would have no concept of “red” at all. That if all the world were seen in black and white except for abstract splashes of colors in art, auroras, and abstract screensavers, the individual colors in these things would not point to any concept whatsoever, and would be neither differentiable from each other in memory nor nameable. I imagine that there might actually exist many kinds of random-seeming things that we are able to sense but that are conceptually invisible to us for similar reasons.

I’m most compelled by the idea that color is a statistical concept.[27 J+W] By collecting enough data from ourselves and others, we can increase our chances of successfully understanding and communicating about colors, but we can only ever say what color something is with a certain amount of accuracy. We shouldn’t, in this view, expect a VR color inversion experiment to give us 100% clear results, but would expect the experience to add new data which we might learn to accept as reality once it becomes statistically significant to our brain compared with our previous color experience.

With all this in mind, we begin to see the difficulties of using VR for the inverted spectrum experiment, and the things that must be defined before it begins. What counts as “getting used to”? What makes a real invert, and what counts as true color inversion? If it’s more than just being able to talk about what you see as if you weren’t inverted, what is that and how do you test it? If the experiment fails with one kind of color inverted glasses, do we accept the result, or do we blame the limitations of current technology? Are we even asking the right questions, or is the very idea of inverting color vision fundamentally broken? And what else could virtual reality help us learn about the way we experience the world?

We could go much, much deeper with this, but I think that is enough for this time.[30 Hart]

Vi Hart


Notes and References:

[1 schools] At the moment, most literature in the philosophy of color will tell you that the schools of thought include eliminativism, dispositionalism, physicalism, primitivism, and perhaps others, maybe divided into the two schools of color realism and color fictionalism, or some other set depending on what intro to what paper/book you're reading. There's a long description of various schools in the intro to [28 B+H], if you want to get the idea. For our purposes, I think these divisions into schools are less helpful than just referencing individual philosophers' views, which is basically the same thing but without the extra layer of nomenclature.

[2 Hansen] “Memory Modulates Color Appearance”, by Hansen, Olkkonen, Walter, and Gegenfurtner, 2006.

[3 Locke] John Locke, An Essay Concerning Human Understanding, Book 2, Chapter XXXII, section 15.

[4 Russel] In Bertrand Russell’s “The Problems of Philosophy”, from the very first problem (Appearance and Reality), he uses color as a frequent example, perhaps taking Hume’s lead. Like many philosophers, Russell chose color because it seems so obviously a real property of objects, not just created in the mind, and thus seems to refute Berkeley’s account of vision. He takes a more scientific approach than most, but we’ve learned a lot about color since 1912.

[5 Berkeley] George Berkeley’s “An Essay Towards a New Theory of Vision” (1709) argues that we cannot see distance, but rather interpret it from what we see. We can be tricked (as in stereo VR), and a blind person can form a perfectly real conception of distance, shape, and size. He says colour and light are the only “immediate objects” of sight (section 129), following Locke. He barely touches on the topic of colour, mostly using it as a contrast to less-immediate qualities. Like Locke, the existence of colour as an immediate and perceivable truth is unquestioned, the bar to which other things are compared.

[6 Johnston] Mark Johnston in “How To Speak of the Colors” puts forth the idea of “revelation”, that seeing a color is the same as knowing it despite anything science can say, and that to choose this view is the ethical choice. Either we know what blue is and that a thing is blue because we see it is blue and experience blueness and this is unquestionable, or we deny human perception and cannot speak of anything at all. We may not be able to prove the second option scientifically wrong, but we can decide it’s ethically wrong, and choose the best alternative.

Johnston’s paper appears in [28 B+H].

[7 Hume] See Hume’s missing shade of blue, a thought experiment in the first part of A Treatise of Human Nature.

In his context of seeing a smooth change in shades of blue, the “missing” shade is apparent. But color vision depends on context and comparisons; no one could, given a shade of blue, answer the question of whether they had seen that exact shade before or not, and one could not see a shade of blue and answer whether it was the missing shade unless they saw it side-by-side with the other blues.

[8 Wittgenstein] Wittgenstein’s “Remarks on Colour” contains many fun little thoughts in classic Wittgenstein style. While his thought process is interesting, many of his questions would have been answered if he’d just looked at the science on vision that was available at the time, and much of what wasn’t already obsolete then is certainly obsolete now.

[9 Nonvert] See introduction to The Philosophy of Color, volume 1 of Readings on Color, a collection edited by Byrne and Hilbert.

[10 Putnam] Hilary Putnam: Reason, Truth and History (see chapter 4, on page 80 in the edition I own).

[11 Wearality] The lenses shown here are a Wearality prototype, designed to use with certain smart phones. The photo, in this case, is faked—it’s not actually doing video passthrough at the moment, as you might be able to tell by the odd angle of the apple. But you get the idea.

[12 Gear VR] Gear VR is Oculus and Samsung’s VR headset, that uses a Samsung smartphone in a special holder.

[13 Passthrough] The first time I experienced video passthrough was at an Oculus event, trying out their latest Gear VR stuff. It took a moment to orient what I was seeing in the passthrough with where I was in actual space. Then I spied the event photographer, closing in for what she thought would be a candid shot of an unsuspecting, totally-immersed subject. Non-consenting promotional photos of unknowing in-VR subjects are a pet peeve of mine, so what followed was an amusing couple of minutes of me striking fancy “whoah VR!” poses, waiting for her to set up the shot, and turning quickly away before she could take it. I managed to get an audibly frustrated sigh out of my unwitting victim before I took off the headset and told her of the wonders of video passthrough.

[14 Lotto] Lotto and Purves’ Rubik’s Cube Color Illusion is a lovely example of brown and orange coming from the same wavelength, or color pixels, depending on context. There’s a fun collection of color illusions on the Lotto Labs website.

[15 Metamers] I think people often notice these differences and attribute them to sensitivity rather than perceptual differences. If I can’t see the difference between the color of my shoes and my handbag, but you think they obviously aren’t the same shade, chances are there’s another set of clothing that I can see the difference between but you can’t. The frequencies of light are many, and the kinds of sensors in our eyes are few. Tons of actual data on this in [16 Hardin] and many other places, but the best thing would be to see for yourself if you’re near the Exploratorium and if the “Disagreeing about Color” exhibit is still up.

[16 Hardin] C. L. Hardin, in “Color for Philosophers,” finally introduces known science about color vision to the world of philosophy, after many frustrating years of philosophers continuing to argue whether color is something that exists as a property of objects rather than human perception, why there’s no “reddish green” or “brown light”, and other questions that had long since been answered by science.

His answer to the inverted spectrum problem, taking after Harrison [25 Harrison] is that you’d be able to tell, due to perceptual asymmetries. Our eyes can perceive more subtleties in the red part of the spectrum (such as how pink and brown seem so perceptually different that they are their own colors), while the yellows are both brighter and less unique from each other due to the physical way eyes perceive color (light yellow, yellow, and dark yellow just don’t have the same perceptual shape as pink, red, and brown, and couldn’t be inverted one into the other).

[17 Superyellow] I made this one myself, feel free to reuse.

[18 Adelson] Ted Adelson: Checker Shadow Illusion

[19 Hawksley] Andrea Hawksley’s eleVR post: My Brain Plays Tricks on Me

[20 Dress] The Dress, which went viral in 2015. Many people tried to “solve” the problem—or prove the correctness of their own view—by looking at the RGB value of the pixels, but that’s not how color works.

The Dress is particularly striking because there are two colors involved, and the pair perceived by some is the exact opposite of the pair perceived by others. Before seeing the image itself, when I heard the controversy was between “black and blue” and “white and gold” I thought: “how is that even possible? Black is the opposite of white, and blue is the opposite of gold.” But of course it’s not a confusion of white with black or blue with gold, but white with blue and black with gold. Take an ambiguous muddle of whitish-blue and goldish-black in bad lighting. Subtract overblown bright yellowish lighting to get black and blue, or autocorrect a dark blue evening lighting to brighten up to white and gold. “The Dress” was at that perfectly balanced place where everyone’s eyes sense that the lighting is wrong, but many people autocorrect in the “wrong” direction.

The photons they saw were the same, but the perception of color was different. Just as it is normal for me to think the shirt I’m wearing right now stays the same color whether I’m in the office, out in the sun, or in a dark closet, even though the photons bouncing off it are very different. Just as we believe that a known object still has its known color even when it is in complete darkness.

[21 Shoemaker] Sydney Shoemaker, in “Phenomenal Character” (found in [28 B+H]), puts forth the idea that color is a relational property, like “left” and “right,” that only exists in context and in reference to the observer. Therefore an object can be red to me and green to you, just as it might be to the left of me and to the right of you.

[22 B+K] Berlin and Kay’s “Basic Color Terms: Their Universality and Evolution” is the well-cited work that introduced to the world the idea that there are many cultures with different numbers of color words, some with only two or three named colors, and that all languages add the same color words in the same order (first black and white, then red, then yellow or green, then green or yellow, then blue, then brown, then pink or grey or purple or orange). There are some things to criticize about their methods and just how universal this really is, but it’s definitely mind-opening research that has had a positive influence on the philosophical dialogue.

[23 Stratton] Stratton’s famous vision inversion experiment from the 1890s, described in “Some Preliminary Experiments on Vision Without Inversion of the Perceptual Image”. I was a bit surprised to see how few attempts at replication have been made, given how famous the results are and that this study involved a single subject who was also the experimenter. H. Dolezal describes similar results in “Living in a World Transformed”.

[24 Murdoch] Iris Murdoch: The Sovereignty of Good, The Idea of Perfection. I find the distinction between public concepts and private concepts to be a very useful one, and Murdoch makes good use of it.

[25 Harrison] Bernard Harrison, in “Form and Content”, might be behind the times when it comes to color science, but he’s on point when it comes to naming ideas. Natural nameables are things that any set of language-creating humans would want to name—“brown” is a natural nameable, while “darkish yellow” is not so much. Something is discourse neutral if you wouldn’t be able to tell, with any amount of talking, whether, for example, my red is your green, even if my red really was your green.

Harrison argues the red/green spectrum switch would not be discourse neutral, because while the spectrum of possible colors is symmetric, the naturally nameable ones aren’t; we’ve got subtleties on the reddish side, like orange and pink and brown, that create an asymmetry in the linguistic form of our color names. He calls this the “semantic topology”, and backs it up with Berlin and Kay’s well-known studies on color language.[22 B+K]

And while many colorblind people make fluent use of color words up until they accidentally come across a color vision test in their adult life, the difference in their response to the test is enough to make their difference in vision not discourse neutral after all.

[26 Gatzia] Dimitria Electra Gatzia’s “Color Fictionalism” explores how we can talk about color properties of objects even if we don’t think they exist, and “Martian Colors” takes a look at nontraditional color experiences (such as certain kinds of synesthesia) and their implications for the philosophy of color.

[27 J+W] “Colors as Properties of the Special Sciences”, by Kent Johnson and Wayne Wright, argues that we shouldn’t think of colors as real or as fiction, but as “high-level statistical constructs built out of correlations between color experiences and other phenomena.”

[28 B+H] The Philosophy of Color, volume 1 of Readings on Color, a collection edited by Byrne and Hilbert.

[29 Ishihara] Ishihara tests use dots in red/green shades to test for red/green color blindness. It’s a specific artificial circumstance under which many people learn for the first time that their color vision is different from the majority. Some look at tests like this as proof of the non-discourse-neutral nature of color blindness, though keep in mind that tests could be designed that differentiate pretty much any two people’s color perception from each other.

[30 Hart] You may be interested in the previous work in our series on philosophy and VR, “Are We Living in a Virtual Reality?”

CG & VR Part 3 – Spherical Compositing in Maya

posted in: Uncategorized | 0

Looks like it’s time for Part 3! In this tutorial, we’re going to go over the basics of how to composite elements from cg renders into your mono-spherical footage using Maya and Premiere. I’m going to use the 3D model of the eleVR office as reference to create special effects that can be overlaid onto footage of our physical office!

For the uninitiated, cg compositing is the process of taking visuals rendered from a 3D animation or special effects package, and incorporating them into a separate piece of media, such as video or images. You can learn much about it on Wikipedia.

Hopefully you’ve already completed Parts 1 & 2 of this series, where I covered basic spherical rendering in Maya. If you have not already read through them, I would suggest you do so, as they contain fundamental information you’ll need to complete this tutorial.

CG & VR Part 1 – Rendering Challenges
CG & VR Part 2 – VR Rendering in Maya

Furthermore, this tutorial assumes you already have knowledge concerning batch rendering frames from Maya, and overlaying the image sequence in your favored video editor.

Elijah Butterfield – Intern


Maya 2016 – Modeling (There’s a Free 30-Day Trial)
Mental Ray for Maya – Rendering
Domemaster3D Maya Plugin – Spherical Camera Creator

A Mono-Spherical Camera – (I’ll be using a Ricoh Theta S)


Step 1 – Getting our Footage


For the sake of simplicity, I’ll be recording my spherical footage from a fixed point. While it is possible to composite cg elements into spherical footage where the camera is moving, it would be difficult to get perfect results, as there isn’t any dedicated software package for tracking motion in spherical video yet.

With my spherical camera mounted on a tripod and in the position I want to record, I’m going to measure its position in relation to the environment around it, i.e., the distance from the walls and how high the lens is off of the ground.

I’m measuring the camera’s location in relation to the room so I can use that information to place a virtual camera at the same relative position in a virtual room in Maya.
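To make the bookkeeping concrete, here’s a tiny hypothetical sketch of the idea: treat the room corner as the origin and turn the tape-measure readings directly into translate values for the virtual camera. The axis convention (Y-up, walls along X and Z) is my assumption to match Maya’s default, not something prescribed by this tutorial.

```python
# Hypothetical helper: turn tape-measure readings into translate values
# for the virtual camera. Assumes the room corner is the scene origin,
# one wall runs along +X, the other along +Z, and +Y is up (Maya's
# default Y-up convention). All distances share one unit.

def camera_translate(dist_along_x, dist_along_z, lens_height):
    """Return (tx, ty, tz) for the virtual camera."""
    return (dist_along_x, lens_height, dist_along_z)

# e.g. 1.2 units from one wall, 0.8 from the other, lens 1.5 up:
camera_translate(1.2, 0.8, 1.5)  # -> (1.2, 1.5, 0.8)
```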


Output from the Ricoh Theta





Camera Location in Relation to Room

Step 2 – Creating & Positioning the Virtual Camera


Now that we have our measurements, we’re going to move into Maya and create a mono-spherical camera. This will be the virtual equivalent of our physical camera.

In the Rendering Menu, select Domemaster3D > Dome Cameras > LatLong Camera

Figure 2.0


Before we make any changes to the camera we’ve just created, let’s first set up a place to put it. Using the measurements we took of the physical camera’s location, we’re going to create a cube with those measurements as its dimensions, and then use that cube as a reference object to place our camera.

Figure 2.1


Since we’ve already gone through all the trouble of measuring everything out in the physical world, we want to be as accurate as possible when we place our camera. To ensure we place the camera exactly at the corner of our measurement cube, we’re going to use the Snap-to-Point tool.

  • Select the LatLong Camera that we made previously.
  • Activate the Move Tool by pressing the W key.
  • Release the W key, and hold down the V Key to activate Snap to Point mode.
  • With the V key held down, hold down the Middle Mouse Button, and drag your cursor over the vertex where you want your camera to be snapped to.
  • Now that we have our virtual camera where we want it, we can safely delete the measurement cube we made earlier.
Figure 2.2


Now that we have our camera in position, we need to orient it so it’s facing in the same direction as our physical camera. Now, you’re probably thinking something along the lines of: “Why would a spherical camera need to be rotated in a certain direction? It’s taking footage in every direction, after all.”

Good question. Because spherical footage and effects are saved/rendered in the flattened-out equirectangular format, the cameras they are shot with need to be facing the same direction so the vertical stitch lines will align with each other. In essence, this is the same approach used in creating effects for traditional flat videos.
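To see why the facing direction matters, here’s a small Python sketch (my own illustration, not part of the tutorial’s toolchain) that maps a 3D view direction to pixel coordinates in an equirectangular image. A yaw mismatch between the two cameras shows up as a horizontal shift of the entire frame, which is exactly what would break the alignment between render and footage.

```python
import math

def dir_to_equirect(x, y, z, width, height, yaw_offset=0.0):
    """Map a view direction (x, y, z) to (u, v) pixel coordinates in an
    equirectangular image. yaw_offset (radians) models a camera whose
    'front' is rotated; note that it only shifts u, the horizontal axis."""
    lon = math.atan2(x, z) + yaw_offset                    # angle around 'front'
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z))  # elevation
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# The 'front' direction lands at the center of a 1920x960 frame:
dir_to_equirect(0, 0, 1, 1920, 960)  # -> (960.0, 480.0)
```

Rotating one camera a quarter turn (`yaw_offset=math.pi/2`) moves that same direction a quarter of the frame width sideways, so every feature in the render slides relative to the footage.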

In the case of the Ricoh Theta S, the ‘front’ of the camera, i.e., the side that doesn’t make the vertical stitch line, is the side without the shutter button. This is the direction we want our LatLong Camera in Maya to face. See Figure 2.3 (Right).

Now that we have our cameras aligned with each other and rotated correctly, we can render out a test image. You’ll likely have to fiddle a little bit with the position of your LatLong Camera in Maya to get a 100% accurate match, but let’s take a look at my results. See Figure 2.4 (Below).



Figure 2.3
Figure 2.4

As you can see, my rendered image resembles the physical room pretty closely, but isn’t perfect due to some slight inaccuracies in my 3D model. However, for our purposes, this is going to work just fine. Let’s move on to making some special effects.

Step 3 – Creating Some Basic Effects

For the sake of fast rendering times, I’m not going to get involved in any super intensive or technical special effects. Instead, I’m going to use some simple animated polygons. Here’s what I’ll be working with.

Figure 2.5


So far we’ve matched up our physical and virtual cameras in their respective rooms, and added some basic animated effects. Now, how do we render out just the effects we created without the model of the room getting in the way?

Of course, we could just delete the room model and only render the objects we want to see, but in doing so we would be limiting the scope of immersive effects we could create. We would lose shadows, reflections, and refractions caused by the objects we want to composite. However, in the case of my scene, I’m not going to worry too much about shadows at the moment for the sake of this tutorial’s brevity.


Now, what we’re going to do is create a material for our room which will make it invisible in renders, but will still allow it to catch shadows and reflections from the objects we’re compositing in, leaving us with images we can incorporate into our footage (minus the shadows, which I’m skipping here for brevity).

To achieve this, the first thing we’re going to want to do is select all of our stand-in geometry, i.e., the objects that we don’t want to render, and assign a useBackground material to it. In this scene, that’s going to be all the walls, floors, etc.


  • Open the Hypershade Editor by selecting the Hypershade/Persp Panel Layout on the left-hand side of the screen.
  • In the ‘Create’ window of the Hypershade Editor, select Maya > Surface.
  • Select the ‘Use Background’ shader.


  • With your stand-in geometry selected, hold down the Right Mouse Button.
  • In the pop-up menu, select Assign Existing Material > useBackground
Figure 2.6



Now that we have our UseBackground shader assigned to our stand-in geometry, we need to tell the geometry not to Receive any Final Gather from the Mental Ray Renderer. This is so we can isolate the composite objects in our scene without any global illumination shading being cast on our stand-in geometry. For more information on Final Gather, you can read about it here.


  • Select your stand-in geometry.
  • Navigate to the Attribute Editor, and select the Shape Node attached to your object.
  • Open the Mental Ray menu, and uncheck Final Gather Receive.
Figure 2.7

If we do a quick test render now, our result should consist of our composite geometry with shading from the environment, and a transparent background where our stand-in geometry is located. Note that shadows are not being cast off any of the composite objects.

Figure 2.7.1


Optional Step

If you’re using a Mental Ray Physical Sky to light your scene, you might have noticed that your render previews still have the Physical Sky horizon in them. See Figure 2.8 (Right).

This horizon won’t always be rendered in your final images depending on your settings, but it could interact with the way some reflections and final gather simulations appear.


To fix this, we’re going to enable the UseBackground setting on the Physical Sky.

  • With your Physical Sky selected, open the Attribute Editor.
  • Navigate to the mia_physicalskyX tab, and check the UseBackground checkbox.
Figure 2.8
Figure 2.8.1

Step 4 – Composited Footage

At this point, we’ve covered how to align our virtual and physical cameras, how to create some basic effects, and the first steps needed to render our effects so that we can put them over our spherical footage. From here, all that’s left is to render out our effects and put them together in Premiere. I’ve taken the liberty of doing that, and here are my results.
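For the curious, the math Premiere is doing when it lays the rendered RGBA frames over the footage is the standard alpha “over” operator. Here’s a minimal per-pixel sketch in Python (straight alpha, components in 0..1), just to show how a transparent background lets the footage through untouched:

```python
def over(fg, bg):
    """Composite a straight-alpha foreground pixel over a background pixel.
    Pixels are (r, g, b, a) tuples with components in 0..1."""
    fr, fg_, fb, fa = fg
    br, bg_, bb, ba = bg
    out_a = fa + ba * (1.0 - fa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)  # nothing visible from either layer
    mix = lambda f, b: (f * fa + b * ba * (1.0 - fa)) / out_a
    return (mix(fr, br), mix(fg_, bg_), mix(fb, bb), out_a)

# Where the useBackground geometry rendered transparent (alpha 0),
# the footage pixel survives unchanged:
over((0.0, 0.0, 0.0, 0.0), (0.2, 0.4, 0.6, 1.0))  # -> (0.2, 0.4, 0.6, 1.0)
```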




Now, it’s time to make something incredible! I can’t wait to see what kind of creative masterpiece you’ll come up with.

Performance at LRLE

posted in: Uncategorized | 0

Emily here. Recently I was invited to give a talk at the Living Room Light Exchange, a monthly salon here in the Bay Area which curates talks from artists working in new media forms. (If you go looking and can’t find me, that’s because my artist name is BlinkPopShift, or BlinkPop if we’re being informal.) The Light Exchange asks artists to talk about 1 or 2 recent projects, but of course I am ornery and resistant to formula, so instead of giving a traditional talk with a slide show and orderly speaker’s notes I decided to try a mixed reality live performance. Before the event began, I recorded half of the performance in the space allotted for the talk. Once edited and uploaded, it was accessible by any device, so at the beginning of my turn I had everyone go to the video on their phone using the YouTube app. Here is video documentation of the performance so you can get a better idea of what happened.



Here are a few of the factors I considered when developing the piece.


Performances which include both live artists and video often take a similar form: the video half of the performance is shown larger than life on a wall or screen, and the artist performs in front of or beside the recorded image. When the two components are equally powerful, the audience divides their attention between the two by looking back and forth, but if the live and recorded portions are unequal in grip, the audience’s attention is drawn, and often locked, to one component over the other. To avoid these conventions, in my piece the video component’s display surface was distributed while maintaining communal audio.


In order to achieve the goal of multiple scattered BlinkPops I had to utilize the audience’s own phones, which meant confronting the culture and stigmas around our little pocket devices. Immersive media and phones are often hit with similar criticism: flagrant escapism. When a friend is looking at their phone during dinner, we perceive their mind to have leaned through that glowing window to the land of elsewhere. Similarly, phones curry no favor in live performance settings because in their unaddressed state they are a means of distancing the mind while the body remains present. When the audience is otherwise a sea of black, that escape is especially highlighted. But phones are intimate objects. Even when asleep so as not to distract others from a live event, they have not left our sides. Assuming they don’t exist and have no effect on an assembled crowd is disregarding what an audience actually is today: a group of both humans and computers, usually paired one to one.


So instead of accepting all that escapist stigma, I create performance that includes your intimate device. Your phone then reinforces and elaborates on the physical world interaction instead of being a severing or interruptive means of distance.

The roughest spot in this instance of the piece was my level of ease with the emergent audio. While I did count down so each audience member could start their instance of the video roughly in sync, that was more goal than reality. I did have the chance to practice the performance once before the Light Exchange with a small group of fellow researchers at the lab but I was not prepared for just how disorienting it would be to have so many distributed copies of myself talking at staggered intervals all around me. In the practice talk we came up with the very helpful plan to have one phone connected to a speaker so that I at least had one voice to talk to and with practice that will likely go more smoothly.


I am looking forward to reperforming this piece more in the future and seeing what new insights bubble up!  

VR World Nav

posted in: Uncategorized | 0

As we slowly transition to the WebVR 1.0 API, this seems like a good time to talk about various projects and experiments that are going to be obsoleted by the new API.

A number of these are experiments in different ways to natively navigate the web in VR. In this blog post, I’m going to talk about the first of several ideas that we mocked up as a potential solution to this problem. This mock up is particularly good for navigating between eye catching scenes, like spherical videos.

In the regular web, navigation is mostly done by clicking links, which then jump you to a new page. In VR, “jumping” to a new page can make people feel sick, so we were particularly interested in ways to introduce a new scene without causing nausea in the end user. Additionally, current VR navigation isn’t great for selecting small targets, and, on many systems, you effectively don’t have hand controls other than a “big red button” (which we approximate with the space bar). All of this was even more true a couple years ago when I put this experimental interface together.

VR World Nav imagines web pages as interactive spherical environments and makes the “links” to them be the actual spheres of the linked page. This is nice as it also gives you a bit of a “preview” of what you’re going to see, like a Youtube thumbnail.


When a link is selected (by “looking” at the desired sphere and hitting the big red button/space bar), the selected sphere gradually grows to fill the whole scene.
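The “looking at” test can be sketched as an angle check between the view direction and the direction to the sphere’s center. This is my own simplified illustration of the idea, not the actual VR World Nav code, and the 10-degree threshold is an arbitrary placeholder:

```python
import math

def is_gazed_at(view_dir, sphere_center, threshold_deg=10.0):
    """True if the angle between the gaze direction and the direction
    to the link sphere's center is within the selection threshold."""
    vx, vy, vz = view_dir
    sx, sy, sz = sphere_center
    dot = vx * sx + vy * sy + vz * sz
    mags = math.sqrt(vx**2 + vy**2 + vz**2) * math.sqrt(sx**2 + sy**2 + sz**2)
    # clamp to guard against floating-point drift outside [-1, 1]
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / mags))))
    return angle <= threshold_deg

is_gazed_at((0, 0, 1), (0.0, 0.5, 10.0))  # sphere nearly dead ahead -> True
```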


For this experiment, the link spheres and the environments are really dummy pages, although the links do actually work and can show you entirely different websites while staying in VR mode. This was an interesting experimental work-around, as that behavior is currently somewhat unsupported (VR mode generally has to be entered on a per-website basis). The real question here was how people would interact with this kind of navigation mechanism and whether this kind of transition between pages was more or less nauseating than simply jumping to another page would be.

I experimented most notably with different times for the transitions. When the sphere grew too fast (which, I believe, is actually the current setting, sorry), it tended to make people as sick as or sicker than “jumping”. When the sphere grew too slowly, waiting for the transition when moving between pages quickly became aggravating. We don’t really have a large enough sample size in the office to determine exactly what the optimal speed is, but there does seem to be a “sweet spot” for the transition that makes it smooth and non-sickening without becoming frustrating to navigate between pages. I don’t get sick quite as easily as some people, so I believe that what it is set to right now is a speed that worked for me, but, experimentally, it’s not a good choice to force on everyone. Since everyone is different, one interesting idea might be to let the speed and other settings for link transitions be set in the browser by the user, much as default zoom size might be larger for people with vision difficulties, or how mousing speed is customizable on your desktop.
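As a sketch of what such a user-configurable transition might look like (this is a guess at a comfortable profile, not the code actually running in the demo), the growth can be driven by a smoothstep curve over a duration the user chooses:

```python
def sphere_radius(t, duration, start_radius, end_radius):
    """Radius of the growing link sphere at time t seconds into a
    transition lasting `duration` seconds. Smoothstep easing starts
    and ends the growth gently instead of popping."""
    p = min(max(t / duration, 0.0), 1.0)  # clamp progress to [0, 1]
    eased = p * p * (3.0 - 2.0 * p)       # smoothstep
    return start_radius + (end_radius - start_radius) * eased

# Halfway through a 2-second transition from radius 1 to radius 100:
sphere_radius(1.0, 2.0, 1.0, 100.0)  # -> 50.5
```

A per-user `duration` setting here is exactly the kind of browser preference suggested above: the easing shape can stay fixed while each viewer picks the speed that keeps them comfortable.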

If you are interested in trying out VR World Nav, you can do so here:

It works with WASD/QE for rotation navigation and space bar to enter links. You can also hit space bar to exit links. If you have a compatible VR device and a browser that still supports pre-1.0 webVR, then you can also try this experiment out in the browser as originally designed.

The code is available on Github here:





eleVR joins YCR

posted in: Uncategorized | 0

It’s big announcement day everyone! eleVR is joining a new organization called HARC, Human Advancement Research Community, which is the newest project of Y-Combinator Research.

Lots of things will be the same: we will still be posting all our findings here, and we will still be releasing all of our work open source and Creative Commons, but now we are better set up to support a longer-term and more ambitious vision, as part of a 501(c)(3) nonprofit.

For the past two years, our group has worked on understanding what virtual reality is and wants to be, with a particular focus on self-expression and mathematical visualization. This fits really well into the larger vision of HARC, which is as follows:

Our mission is to ensure human wisdom exceeds human power, by inventing and freely sharing technology that allows all humans to see further and understand more deeply.

In our increasingly interconnected world, every individual’s actions can affect billions of others in complex and invisible ways. We believe every individual must have access to technologies that allow them to build their own understanding of the world and its systems in order to act conscientiously, responsibly, and effectively, both as individuals and in collaboration with others.

HARC researches technology in its broadest context, which includes: technology for communication (from the invention of spoken language to modern data graphics), intellectual tools (such as the scientific method and computer simulation), media (from cave painting to video games), and social systems (including democracy and public education). We’re focusing on areas where we believe the structures created today will have the most impact on the future, and that can most benefit from having dedicated resources outside the for-profit world. At the moment, these areas include programming languages, interfaces, education, and virtual reality.

Our shared vision of technology combines an expansive long-term view with a strong moral sense. We look to the distant past as well as the far future. We reject the artificial boundaries created between the humanities, arts, and sciences. We don’t always agree on what is good or evil, right or wrong, but we use these words seriously and are driven by them. We seek to guide human technologies in thoughtful and ethical directions, with a deep sensitivity to the relationship between technology and the human condition, and the difference between what a piece of technology is intended to be and how it impacts humanity in reality.

First forays: Multi-camera Spherical

posted in: Uncategorized | 0


As part of my ongoing quest to discover the whats and hows of spherical cinematography, I (this is Emily, by the way) have been experimenting with shooting the same scene simultaneously with multiple spherical cameras. Over in flatland, multi-camera setups have been around for about 65 years; they have been a mainstay of TV since the early 1950s. Often filmed in front of a live studio audience, multi-camera shows are captured concurrently from multiple angles, usually 3 or 4. The method is most often employed in television because, while the director gets less control over any one shot, it is faster and cheaper: it is designed to capture events live and in order instead of repeating scenes multiple times.

So what happens when you take that idea, simultaneous capture from multiple viewpoints, and let it loose on immersive capture? Here are my results so far, by type.




“Between me” was shot on a sunny afternoon in Dolores Park. I set up one camera on each side of me with about a meter total distance between the two. In editing I blended the two viewpoints into one. It looks pretty convincing until passers-by drift over the seam. Watching it, I tend to look back and forth between the two Emilys, never quite catching the same movement from both sides. I become a gateway, a set of twin pillars which you can neither fully pass through nor retreat from.




“Toward” was made by attaching a camera to both my husband’s bicycle and mine. The footage was simply cropped to the width of each bike’s handlebars and oriented so we were facing each other, fingers close enough to touch, with no effort made toward an illusion of singularity. I made it to see what would happen if the time slice each camera captured were the same but the spaces were separated. It is hard to tell this piece was shot simultaneously unless you watch the cars in the background carefully.




“From Both Sides” is a movement study shot on my deck. The two cameras were about 2.5 meters from one another at the same height. One shot is simply overlaid on the other and realigned. When I was playing with the footage alignment in post I discovered that the semi-transparent overlap of two slightly misaligned copies of the same space was really visually compelling. I like the vague strain of my brain trying to snap the two together as I pass in and out of the space between the cameras.




“It takes such a long time” is my favorite of the bunch. It’s easily the most interesting video I have made in four months. Similar to “Between me”, the cameras were placed only a meter apart, but this time one was half a meter above the other as well. Because my body, and most of the rest of the scene, is above the lower of the two cameras, as opposed to more eye level with it like the higher camera, the lower section of footage seems to perch looking down over the shoulder of the higher camera. The higher the camera, the lower in the sphere the bulk of the action will sit and, at least using this editing technique, the lower it will sink in the visual stack of footage. In this piece I chose to align the basic cardinal directions of both shots to keep the space consistent, but breaking that rule, as in “From Both Sides” above, also garners interesting, if more cognitively taxing, results. One thing to note when watching “It takes such a long time” is that a piece of this duration is ill suited to watching on a phone. Where possible I would choose to show it to you on a small full-dome screen suited to house 2 or 3 audience members at a time.


As more patterns emerge I will report back to share what I find.

The State of webVR

posted in: Uncategorized | 0

There is a big (and exciting!) change to the webVR API in the latest Chromium builds (and coming at some point to Firefox) that breaks most of our current webVR stuff.

This is being done to improve the API and make it better match what VR and AR have become since webVR was introduced to browsers. Major overhauls like this are hard to do once something becomes too popular, so this change is being made early, when it only affects a small number of developers, rather than later, when the whole web is dependent on it and everyone feels stuck with all of the problems forever. Brandon Jones (the Chromium webVR guy) has a more detailed blog post about the reasons for the API changes in webVR 1.0 here, and you can learn more about webVR generally here.

Separately, Chromium with webVR capability is now only available for Windows, which, unfortunately, is understandable given that both the HTC Vive and the Oculus headsets are now Windows only.

The new API is here:

The old API is here:

For the most part, this will require a major update to our boilerplate and then moving that boilerplate out into all of our projects. We have already started moving some of our more recent and any actively developed projects over to the new API. This includes things like “Float”, “Peachy Rings”, and our drawing and scanning projects that we have been showing recently.

On the other hand, we may never get around to updating all of our older and more experimental projects. Additionally, due to the currently mixed support amongst browsers for the new API, it’s hard to know when, or whether, there is a good time to change our code so that things break for the smallest number of people. Technically it is possible to go out of our way to support both specs, but, realistically, maintaining that kind of backwards compatibility for something being obsoleted is not a good idea.

To this end, we will soon be publishing a completely *separate* webVR boilerplate that supports the new webVR 1.0 API (and not the old one). The current boilerplate will remain available here, but will be considered obsolete.


Creating a VR first-person shooter, live

posted in: Uncategorized | 0

Now that hand controls have been added to the webVR API (thanks, Brandon Jones!) and we have them working in our framework (Andrea magic!), we’ve been trying them out with the two easiest and most obvious applications:

  1. A drawing program (see previous post)
  2. A first person shooter in which you shoot peachy rings that bounce across a giant jello floor and into a cake

While we’ve livestreamed most of the creation of both, I decided to edit down the stream for the second one to make it a really watchable demonstration of programming a VR first person shooter. It covers a tiny bit of vector math (by drawing in our own VR drawing program) and some simple physics (the peachy rings are affected by gravity, air friction, and bouncing on the floor), and is generally just a fun time.
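The physics involved really is just a per-frame integration step. Here’s a minimal sketch in Python for illustration (the actual project is JavaScript, and all the constants and names here are made up, not taken from our code):

```python
# Minimal per-frame physics for a thrown peachy ring: gravity,
# air friction (drag), and an inelastic bounce off the floor at y = 0.
# Constants are illustrative, not the values from the project.

GRAVITY = -9.8   # m/s^2, pulls along -y
DRAG = 0.1       # fraction of velocity lost to air friction per second
BOUNCE = 0.8     # fraction of speed kept after hitting the floor

def step(pos, vel, dt):
    """Advance one frame. pos and vel are [x, y, z] lists."""
    vel = [v * (1 - DRAG * dt) for v in vel]      # air friction
    vel[1] += GRAVITY * dt                        # gravity
    pos = [p + v * dt for p, v in zip(pos, vel)]  # move
    if pos[1] < 0:                                # hit the floor
        pos[1] = -pos[1] * BOUNCE                 # reflect back above it
        vel[1] = -vel[1] * BOUNCE                 # and lose some energy
    return pos, vel
```

Run `step` once per animation frame and the ring traces the parabolic, gradually-dying bounces you see in the video.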

So why peachy rings? Well, the previous post explains a little. And once we had webVR peachy rings, I needed to make a Multi-Sensory Peachy Ring Experience for the office lunch table, so I modified the original peachy dance to have more peachy rings that float around with random vectors, set them up on a couple of phones in Wearality headsets (try it on your phone), and bought a 5-pound bag of peachy rings:

multisensory peachy ring experience

Once you have peachy rings moving along random vectors, you kinda start to think about how easy it would be to make those vectors not random. I figured I could modify it into a peachy ring shooter within a single Twitch stream, so I did.

As you can tell from the video, making a first person shooter isn’t exactly my goal, but it’s an easy demo/test, and our peachy ring shooter does have some advantages for VR over a standard shooter:

  1. The peachy rings are big and slow enough to see them in VR with depth and head tracking
  2. Peachy rings bounce off of floors, highlighting the virtual floor which aligns with the real floor
  3. The parabolic arcs of gravity-affected peachy rings encourage tracking them with your head
  4. The first instinct of many people, when seeing small floating VR objects in abundance, is to eat them, and peachy rings are representational of an edible thing
  5. There is a cake. You can shoot peachy rings into the cake. This is important


The edited stream is on YouTube (or in the header above). The full original stream is on Twitch, if you want to dig in further. Look out for future streams there.

The webVR site is here. You can look around it on a phone or in a normal browser, but you’ll need a Vive plus a special chromium webVR build if you want to shoot peachy rings.

Code is on github.

Vi Hart

Drawing in WebVR

posted in: Uncategorized | 0

Vi here, and it’s my turn to do a quick update.

We’re currently working on a website with a basic 3d drawing tool (and livestreaming the process, as seen in the gif below). If you’ve got a Vive, get webVR working and try it yourself.

drawing in webVR

If you’ve done any sort of drawing/sculpting in VR, you know just how incredibly magical it is. I feel like a child scribbling, mesmerized by the power of turning simple hand motions into a semi-permanent visual object.

Besides being so compelling in VR, a simple drawing program is super easy to make, and you can even see us making it live on Twitch. What we’ve got isn’t fancy, but the ratio between how fun it is to use and how long it took to make is ridiculous.

Those would be great reasons to make a drawing program if we were looking to create a product or make the next hit VR thing (as many companies have figured out already), and as far as I’m aware no one has done one on the web yet. But we’re researchers, so that’s not the reason we’re working on it.

Clem and Toto website

We’re working on this tool so that we can use it for other projects.

Specifically, I recently found a website my cousin and I made in early 2000 and forgot about a long time ago. It’s a treasure-trove of my own early attempts at self-expression on the web, participation in memes, and attempts at communication with the outside world. It is so honest, so free of social media glut and modern expectations, so pure!

As we’re reinventing the web again with webVR, I decided as a design exercise to try making a webVR version of this website as if I were still that carefree 11-year-old having fun with my cousin.

First, I took the same art assets and created a simple mock-up of the index: a cubic room wallpapered with that same cloud wallpaper, and floating clouds that are obviously just pretending to be cloud-shaped but are actually cubes:

original on the left, webVR on the right


For the Peachy Ring Dance, I made simple bouncing toruses textured with something I drew in Paint using the same default colors and aesthetics as the original.

peachy ring dances

Original on the left, webVR on the right

But when I tried to think of how to remake other pages, I ran into the problem that the original has all these great drawings made in Paint or KidPix, and I wanted VR drawings in that style, which meant I needed the VR equivalent of Paint, which does not exist.

Oh, there are a dozen VR drawing tools out there that one might compare to Paint, but the thing about Paint is that it is not a closed environment for playing in—it’s a tool that lets you create drawings and then use them elsewhere. I tried to find an existing tool for the Vive that would let me export my 3d drawing as an obj, rather than storing it in a special format that can only be viewed within that same drawing program, and I couldn’t. If one exists, do let me know, but I figured it would be easier to make our own.

Andrea has been adding hand controls to our webVR framework (which hopefully she’ll talk about next week), so with her help, it was easy to get basic drawing working. Add some colors and brush sizes, and I’m ready to create a website header just as gorgeous as the original from 2000!

Screenshot from livestream


The drawing tool and website remake both use the same webVR framework, so as a first approximation of save functionality all I had to do was log the state of the paint-blobs and copy them over to the other website.

Clem and Toto VR header

The beautiful thing about this whole process is that we created a tool to be used, not to be marketed or sold, and so we didn’t have to guess about what we think people would want in a product or worry about beating the competition. One of the aesthetics of our group is building our own tools, which both gives us control over the tool we need and makes the tool more useful than it would be if it were designed for an imaginary audience.

During one of the live programming sessions there was this great moment when I started explaining to the viewers how I wanted to make a cart full of paint buckets and tools, and usually I would have gone to another program to sketch it out in 2d, but I realized I could sketch it out in proper 3d right in the very drawing program we were working on! I know this is going to come in handy for future streams on other projects, as well.

sketching the tool cart during live stream


I can’t wait to be able to stick drawing functionality into all of our webVR pages so that we can mark them up, sketch out ideas, and generally just follow the aesthetic of working as directly as possible with the thing we’re making.

And it’s already finding secondary applications with performance art motion capture dance by Emily:

We weren’t intending to make a product, but it’s compelling enough that we might try to polish it up a little to make it more useable to other humans. Or you can fork it on github and do it yourself!

Vi Hart

Scanning all the things

posted in: Uncategorized | 0

In an effort to keep our blog up to date instead of just focused on epic treatises that take months to write and edit, I, Emily, am here to give y’all an update. All of my time in recent weeks has been spent converting physical things into virtual things.

These sculptures are called the Unscannables. They are hollow forms made from found paper and lots and lots of glue. I have been building them slowly over the past 8 months, and now I have enough of them to do a room-scale project.


Now I am converting them to 3D models using a Structure Sensor, Skanect software, and Maya. The sculptures used to live in my studio at home, but now they live in the little corner office in the lab.




As I scan them, Vi has been placing them into a Vive-enabled webVR site. Here is the site so far. (If you don’t have a Vive, just use the arrow keys and WASD to get around.)




Eventually we will have a physical room and a virtual room with the same objects in similar orientations, which will let us play with techniques for amalgamating a physical and a virtual object into a single physi/vitu object in your memory. One idea for how to do that is to have users repeat pathways in both the physical and virtual spaces several times to activate similar place cells in the hippocampi and thus help solidify the two versions of the space into one. I’m excited to see what happens.




Update achieved! If you want to see everything as we make it, come find me on Twitter or Instagram @blinkpopshift.

Humans Are Reasonable

posted in: Uncategorized | 0

A lot has been happening around here (more coming soon), but we hope to get back to our regularly scheduled research in the next couple of months. In the meantime, here’s something that fell through the cracks:

I forgot I made a VR remix video called “Humans Are Reasonable” in 2014, and also that, a year later, I put it along with another pile of our stereo spherical videos into this stereo spherical video playlist.

I posted before about the process behind our first VR remix video, “The Process By Which Repeated Opinion Becomes Fact“. It explores vertical lines of symmetry, which can’t be used unedited in stereo video because the apparent views of the right and left eyes become swapped. So I cropped and swapped the mirrored versions.

Humans Are Reasonable, with its horizontal lines of symmetry, keeps the right and left eye in proper orientation, and is in that sense much more natural for VR video.

On the other hand, it flips gravity—creating a beautiful contrast between the cavelike spaces of floors on the ceiling with all the furniture hanging down, and the open chasm of a sky reflected below you.

I just love that view of the Bay Bridge, reflected and breathing like that. It’s very calming to watch in real time and at full resolution (we have the entire sunrise, maybe for a future video…).

A successful experiment, I’d say! And now that it’s 2016 and we have much better stereo spherical footage, maybe one worth repeating.

If you have a Google Cardboard or Wearality or other phone-based VR headset, plus a recent phone, you too can view a pile of our stereo spherical videos on YouTube using the YouTube app.

Stay tuned.


Spherical Cinematography 102: Texture

posted in: Uncategorized | 0

Welcome to another edition of “What the heck is Spherical Cinematography!” This time we are looking at what texture is, how to use it, and why. There are two preceding posts in this exploration, Spherical Cinematography 101: Scale and its companion piece The Choreography of Attention, if you want to learn more, but for now let’s focus on texture and using it to create good immersion.

While immersive media is not new, sporting a steady track record back to panoramic paintings like la chambre du cerf and even cave paintings, spherical video capture of the real world is brand spanking new. So it’s not that surprising that most of the work lauded as exceptional is pretty bland. I mean, the NYT Magazine piece Take Flight by Daniel Askill is best described as famous people floating around. That’s it. And they never even get that close; it’s all out of arm’s reach.

So it’s new to makers, but it’s also new to viewers. Audiences in general have never seen the medium at all before, so all the flabbergasted oohing and aahing is understandable, but once the novelty wears off, and it does really fast, there needs to be style and composition and authorship in the wings to carry viewers along. So let’s figure out how to do that: how to make great spherical video that will keep viewers interested long after “Wait, what?” and “Holy shit, really?”. In other words, how to texture.

First I’ll cover what texture is, then how to organize it over an entire sphere, and finally I’ll use the concepts laid out here to analyze the recent spherical video piece Waves of Grace.


What is Texture?


In film school the primary question we were taught to answer when going from script to screen was: What is the best viewpoint from which to see this event? Great vistas? Intimate conversation? Looking down the mouth of a valley snaked with train tracks? Up at the face of a deliberating judge? All of these position the audience in relationship to the material of the film creating tension, emotional context, foreshadowing, power relationships, et cetera.

Many of the techniques used in flat film still work in spherical: using a lower camera height to make figures loom powerfully over the viewer; big open spaces to establish settings; using a close-up shot to enhance details by portraying small-scale portions of a scene at larger-than-life scale. We have names for these recurring composition techniques: Close Up, Medium Shot, Two Shot, Long Shot, Extreme Long Shot, High, Low, and Level Angle. Any one of them can be styled either subjectively—meaning the camera is situated or addressed as a present entity in the scene, either verbally or with eye contact—or objectively. (If you are totes unfamiliar with these terms, check out the reference section for some cinematography reading material.)

The difference between flat and spherical then lies in how these basic components are combined. Flat cinema relies on sequencing, either through camera movement or editing, to combine shot types. Spherical combines those same component types simultaneously, all in one shot, and then sequentially combines those aggregates.

I need to take a few lines here to lay down a shift in vocabulary. Instead of “shot types”, as they were previously labeled, I will now call them “region types”, since you can have many different regions in one spherical shot. As in: “Look at that close-up region, the details, so pixel.” Texture, then, is how region types are mixed in one spherical shot. It is the juxtaposition of objects and actions at a variety of distances from the camera.


Cropped or Uncropped?


“Yeah, but how can I tell if it’s a close up or not?! It’s spherical!”

-Hypothetical human who asks convenient transitional questions

All the shot types listed above are determined by the ratio of the figure size to the size of the whole image. For example, when the figure is larger than, and thus cropped by, the frame, the shot is a medium or a close-up. Since in spherical there is no frame doing any cropping of close figures, we need to find a different metric: one that maintains the figure-to-ground relationship but requires no frame.

There are two basic states an object or person can occupy within a spherical video: cropped or uncropped. Uncropped things can be seen all at once from at least one viewer position. Cropped objects or figures can’t; the viewer has to look around a bit to see them completely. How much movement the viewer has to do to see the entire figure depends on two factors: how many degrees of the visual field the figure covers in the footage, and how many degrees of that same field the player used to play the video displays.

This will also vary by player, but the standard YouTube player, as an example, displays 60 degrees vertically and 105 degrees horizontally in mono mode (meaning the video is displayed full screen with no headset), while in stereo mode (with the headset) it displays 90 degrees vertically and 75 horizontally. Yup, not even close to the same. This is why it pays to aim at a particular viewing platform. My daily videos are shot with the lower-immersion mono mode in mind, since my findings from Play/Room showed that most viewers preferred that watch method. Suki Sleeping, however, was framed for the stereo ratio, since the stereo depth was crucial to the design of the animation.

I will use the 60° x 105° field of view for the examples in this post. In order to know if a figure is cropped at a certain distance we need to do some basic measurements. These measurements are for a figure at each distance; the sky’s gonna cover as many degrees as it gosh darn pleases, and object coverage at these distances is too variable by size to be a good guide. For this test video the camera was placed around 1 meter up. This is approximately half the height of Vi, who was around 2 meters tall in that day’s shoes, which kept her centered pole to pole for easier measurement. Each square in the grid is 15° on a side.



These distance-to-degree measurements are going to vary based on the specs of the camera you are using. I’m using a Ricoh Theta S. It would behoove you to do a few experiments and figure out what the ratios are for your setup… even if you never actually measure a scene before recording. But enough disclaiming: for my setup, at 0.5 meters Vi occupies 135 degrees vertically. Since our example player displays 60 degrees at any viewpoint, we know the viewer will need to move over twice the screen height to view her fully. In fact, she does not become fully visible in one viewing position until she is 2m from the camera.
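If you would rather estimate than measure, an idealized pinhole model gets you in the right ballpark. This is a hypothetical sketch (my own function names, and it ignores lens distortion and stitching, which is presumably why it predicts roughly 127° rather than my measured 135° at half a meter):

```python
import math

def vertical_coverage_degrees(figure_height, camera_height, distance):
    """Degrees subtended vertically by a standing figure, as seen from a
    camera mounted camera_height meters up, distance meters away."""
    up = math.atan((figure_height - camera_height) / distance)    # to head
    down = math.atan(camera_height / distance)                    # to feet
    return math.degrees(up + down)

def is_cropped(coverage_deg, player_vertical_fov_deg=60):
    """Cropped if the figure covers more than one screenful vertically."""
    return coverage_deg > player_vertical_fov_deg
```

Reassuringly, the same model says a 2-meter figure seen by a 1-meter-high camera stops being cropped, for a 60° player, somewhere between 1.5 and 2 meters out, matching the measurement above.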




The point of all of this is not that the viewer will carefully scan her head to toe before moving to the next section of the screen, but that, if the first time you see a character her whole outfit needs to make an impression, she needs to be at least two meters from the camera. I am not far enough along in the research to make definitive statements about how cropped and uncropped figures differ emotionally for the viewer, but it points toward things around intimacy and personal space. Whether that’s welcomed or violating will depend on how the shot and the scene are constructed.




Okay, now that we have covered region types, vertical coverage, and cropped versus uncropped, let’s work on how to organize them around a sphere, starting simple and stepping up to more advanced techniques.


Beginner: The Baseball rule


Since engaging immersive shots are ones with texture, even the simplest shot should follow at least this rule. Think of your shot like the skin of a baseball, with two wide curving regions that meet edge to edge. Each of these two regions needs a different region type, with objects and actions at a different distance from the camera. My video J is for Jaggery is a good, simple example. The close-up region includes the ceiling as well as the microwave and the pot of chai, which are both very close. (Lower quality and visibility of the details due to poor resolution should just be ignored at this point; cameras will catch up eventually.) The other region includes the cutting board, door, table, and all of the action Steve and I do in the scene. This is a medium-depth region.



Even a drone-enabled shot from high above a city would benefit from the baseball rule. The addition of a flock of migrating geese, for example, which fly close to the camera but not between it and the scene below, a fast-moving cloud bank overhead, or even a plane taking off from the city’s airport and flying above and away, would awaken the rest of the shot. (Again, this is limited by current camera resolutions and by the fact that we only have one kind of best-at-medium-range camera at the moment, but near- and long-range-capable spherical cameras are an eventuality we should stylistically plan for.)


Intermediate: Finger Framing

Are you a fan of the good old-fashioned finger framing method, using the thumb and index finger of both hands to make a rectangle to look through for mocking up images? It’s a silly yet tactile way to get your brain thinking about frames… so here’s a silly yet tactile way to get your brain thinking about spherical shots:




Make a C shape with each hand and hold them so they interlock, one hand running zenith to nadir while the other wraps three quarters of the equator. It’s like holding an invisible ball between your palms, fingers wrapped around on all sides. You now have an imaginary spherical video with six general regions: right palm, left palm, right fingers, left fingers, right thumb, left thumb.



Yes, I know you can’t look through it to make a frame like the old way, and yes, I know this won’t work for my digitally disabled brethren, but use your imaginations, people! I prefer this to thinking of the video as a clock face, with a detail at 5 o’clock, action at 7, and vista from 8 through 4, because the clock face is so flat. It keeps you thinking about the horizon and organizing things along it, instead of the actual, contiguous, immersive space of spherical capture.

Okay, so you’ve got your six regions. Not every region needs its own treatment, but this technique can get you thinking about how you organize different textural elements around the sphere. The baseball rule, by the way, would be one texture for the right hand and one for the left, but there are lots of other ways too. Maybe the thumbs and palms are an extreme long region, the right fingers a medium region, and the left a close-up.


Advanced: Strategic Visual Obstacles


Spherical video has been denigrated by some as “not real VR” because, as Will Smith put it in his Wired piece Stop Calling Google Cardboard’s 360-Degree Videos ‘VR’, “360 video is inherently limited… you won’t be able to get up and walk around in a 360 video. The cameras just can’t capture the data required to allow that.” This is supposed to be the earth-shattering insult, the last nail in the coffin, for why spherical (*cough* I will never call it 360 *cough*) video is lesser and should be banished from the happy, perfect fairyland of ‘real’ VR forever. But what if we didn’t jump to conclusions? What if we looked at the qualities of the medium without assuming we know better, to see what it wants to be?

In spherical video the viewer can’t move spatially within the scene. How can we use this technical reality stylistically? Voila! Strategic Visual Obstacles! Just because you can place the camera out in the open, at an average eye height, where everything is visible for meters around, doesn’t mean it’s a good idea. Blocking the viewer’s access to a particular part of the scene, whether partially or completely, builds tension. And conveniently, you know for a fact that viewers can’t just lean an inch to the right and mess up all your careful planning.

Let’s look at The light on a pot and a petal as a super-simple example.



There are three separate regions in this video: a close-up region, which includes the flower pot, drinking glass, and me drawing; the threesome to my left chatting over coffee in a medium region; and the background of people ordering food and chatting in a medium-to-long region. But then, what about the woman clearly visible between the drinking glass and the pot? She is sitting with her friends. She is completely visible, if a bit pixelated, but the person sitting across from her, to whom she is talking, is completely blocked by the glass. If the glass were not blocking the view I would call this a medium region, but since it is, I’ll call it a layered region. Layering a close-up over a medium shot increases the depth of field of the region, but it can also increase suspense by adding mystery.


Denying a viewer access to some part of the scene is really powerful in an immersive piece where the viewer has a built-in expectation of agency over what they look at. One of my personal favorite implementations of this technique so far is attaching the camera to the back of my head. This camera position means the viewer can see everything around except what I am looking at. My gaze blocks the gaze of the viewers.




Waves of Grace




Now let’s use our newfangled tools to look at a few shots from the 2015 piece Waves of Grace by Gabo Arora and Chris Milk. I want to take a look at two different shots, the first of which starts at 2:02. Here a group of people prepare to bury a recently deceased Ebola victim by suiting up in full-body bio-safety gear. The camera is placed in the center of the tarp-roofed shelter with people prepping on all sides. In the mostly obscured background you can see a field of white crosses that mark the graves of previous victims and another shelter a few meters away. The shot feels busy, cluttered, and at emotional odds with the context of funeral preparations because the camera placement has no tension. It’s all one medium region with glimpses of other region types but no clear layering. Many figures are within the cropping boundary, but here that doesn’t serve to foster a sense of intimacy with the people around, but rather a feeling of being overwhelmed and disconnected, like a strange crowd pushing past you in a narrow place.

Instead of shooting from the dead center of activity, which gives the scene even texture on all sides, the camera could have been placed to one side of the open shelter, juxtaposing the diligent efforts of the workers with the field of completely still, high-contrast white crosses. In traditional flat documentary footage, you would normally shoot the two pieces separately and leave it to the editor to reveal the visual tension between the two, but in spherical the two scenes can coexist. That’s the best part about spherical video: it can capture the wild contrasts and strange proximities and heart-mending meetings that are part of all lives.

The second shot we’ll look at directly precedes the funeral preparations scene, starting at 1:45. This shot has better texture. It combines a long region, a field with milling people, with a medium region, a man chopping wood next to two girls chatting. A long row of risers stretches out from where the girls sit, cutting a beautiful perspective line toward the field and giving that region great depth. As you look around, notice that no object or figure is within the cropping boundary. Even the large pile of chopped wood is uncropped. This gives the scene a sense of distance. Neither the people nor the world confront you proximally, and you are free to look from vignette to vignette unobstructed.


Annnnnnnnd done!


We covered what texture is and why it’s important for creating engaging immersion; how flat shot types can be retooled into spherical region types; how to measure your own spherical video setup for vertical coverage and cropping boundaries; a few guidelines on how to organize regions around the sphere; and then we used all that sweet, sweet know-how to take a more in-depth look at two shots from Waves of Grace.

If you have a specific piece you would like me to analyze or questions about these techniques feel free to find me on Twitter, @emilyeifler. Happy sphering everyone!





References

Cinematography, Theory and Practice. By Blain Brown. 2002.

The Five C’s of Cinematography, Motion Picture Filming Techniques. By Joseph V. Mascelli. 1965.

Video Art. By Michael Rush. 2003.

Virtual Art, From Illusion to Immersion. By Oliver Grau. 2003. (Translated from German by Gloria Custance)

Spherical video editing effects with Möbius transformations

posted in: Uncategorized | 0

Hello eleVR blog readers! This is Henry Segerman, guest blogging about a research project I’ve been working on with eleVR. I’m a mathematician and mathematical artist, and I also worked with eleVR on Hypernom and Monkeys.

First of all, go watch this discussion and tech demo spherical video:

If you want to jump straight into the code, here it is on Github.

Almost all current devices and platforms use an equirectangular projection of spherical video data for storage and streaming. This converts a sphere of data into a 2×1 rectangle of data, which fits in nicely with current infrastructure for video. What doesn’t currently work very nicely is editing spherical video in equirectangular format. As Emily outlined in her talk at Vidcon, what you can do in ordinary rectangular video editing software is quite limited. For example, if you want to rotate the video around anything other than a vertical axis, you’re out of luck.

It seems obvious that future spherical video editing software should include the ability to rotate all or part of a scene around any axis, the equivalent of translating and rotating flat video. What about other effects, for example scaling content to be larger or smaller (or equivalently, performing digital zoom)?

Möbius transformations

Möbius transformations are transformations of the sphere that include ordinary rotations of the sphere, as well as very natural zoom-like transformations and many other interesting effects, as we show in the video. The code we used to apply the transformations to the video above is available on Github. It is written in Python and modifies individual PNG files, very slowly, taking around a minute for each frame at 1920×960 resolution. (But fast enough for research purposes. Someone implement this in proper video editing software, please!) The code should hopefully be easy to follow, but I’ll also outline the process here.

We have a sequence of transformations:

Pixel coordinates (say on a 1920×960 rectangle) → (0, 2π) × (-π/2, π/2) → unit sphere in R³ → CP¹

The first transformation just scales the pixel coordinates to be angles, and the second is the inverse of equirectangular projection. Next, I’ll describe CP¹ (one-dimensional complex projective space), and the third transformation. CP¹ is the set of pairs of complex numbers, (z,w), where we say that (z,w) is the same as (λz,λw) for any non-zero complex number λ. In other words, we can scale both coordinates and it doesn’t change the point in CP¹. It’s useful to think about a pair (z,w) as the single complex number z/w. Of course if you scale both the numerator and the denominator of a fraction by the same number, it doesn’t change the number that you get. So, CP¹ is almost the same as the set of complex numbers. The difference is that in CP¹ we can talk about the pair (1,0), while 1/0 doesn’t make sense as a complex number. So CP¹ is just the complex plane, with a single point added “at infinity”. The plane, plus a point at infinity, is topologically the same as a sphere. The third transformation in the sequence above, from the unit sphere in R³ to CP¹, is just realizing this topological fact, using stereographic projection. Here’s a photo of a model illustrating stereographic projection as a map from the sphere to the plane (you follow where the light rays go to see what the map does).
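Here’s a minimal sketch of those first three transformations in code, with the stereographic projection taken from the north pole (the function names and conventions are mine, not necessarily those of the eleVR repo):

```python
import math

def pixel_to_angles(x, y, width=1920, height=960):
    """Scale pixel coordinates to angles: longitude in (0, 2*pi),
    latitude in (-pi/2, pi/2)."""
    return x / width * 2 * math.pi, (y / height - 0.5) * math.pi

def angles_to_sphere(theta, phi):
    """Inverse equirectangular projection onto the unit sphere in R^3."""
    return (math.cos(phi) * math.cos(theta),
            math.cos(phi) * math.sin(theta),
            math.sin(phi))

def sphere_to_cp1(p):
    """Stereographic projection from the north pole, landing in CP^1 as
    the pair (z, w); the north pole itself goes to (1, 0), i.e. infinity."""
    x, y, z = p
    if abs(z - 1.0) < 1e-12:
        return (1 + 0j, 0j)
    return (x + 1j * y, (1 - z) + 0j)
```

Reading the pair (z, w) as the single complex number z/w, the equator lands on the unit circle and the south pole lands at 0.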


The north pole of the sphere doesn’t map anywhere on the plane, it gets mapped to a point at infinity, which corresponds to (1,0) in CP¹.

Ok, so now we are in CP¹. Möbius transformations are what you get by applying 2×2 complex matrices to each point (z,w), viewed as a two-dimensional complex vector. So, for example,


\begin{pmatrix}2&0\\ 0&1\end{pmatrix}\begin{pmatrix}z\\ w\end{pmatrix}=\begin{pmatrix}2z\\ w\end{pmatrix}


Thinking again of points of CP¹ as single complex numbers, this converts z/w into 2z/w. In other words, this scales everything away from zero by a factor of two. On the sphere however, it scales things away from the south pole, and towards the north pole. If we do this to a spherical video (that is, mapping all the way from pixel coordinates to CP¹, applying the matrix and then mapping all the way back to pixel coordinates), it looks like we are zooming in on the south pole, and zooming out from the north pole. Here’s a test equirectangular image, and the result of doing this kind of “zoom by a factor of two”, this time between two opposite points on the equator:



Notice that close to the center of these images, you really do get a zoom by a factor of two, and on the opposite side of the sphere (at the midpoints of the left and right edges), you zoom out by a factor of two.
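To see the zoom concretely, here’s a small sketch (my own helper names, using the north-pole stereographic projection) that pushes a single point of the sphere through the “zoom by two” matrix above:

```python
def sphere_to_c(p):
    """Stereographic projection from the north pole to a complex number."""
    x, y, z = p
    return (x + 1j * y) / (1 - z)

def c_to_sphere(c):
    """Inverse stereographic projection back onto the unit sphere."""
    d = 1 + abs(c) ** 2
    return (2 * c.real / d, 2 * c.imag / d, (d - 2) / d)

def mobius(m, c):
    """Apply a 2x2 complex matrix [[a, b], [c, d]] to c = z/w,
    giving (a*z + b*w) / (c*z + d*w)."""
    (a, b), (cc, d) = m
    return (a * c + b) / (cc * c + d)

zoom = [[2, 0], [0, 1]]  # the zoom-by-two matrix from the post
# A point near the south pole (which projects to a small complex number)
# gets pushed toward the equator: zooming in at the south pole.
p = c_to_sphere(mobius(zoom, sphere_to_c((0.6, 0.0, -0.8))))
```

Running this, the point 80% of the way down to the south pole ends up on the same meridian but much closer to the equator, matching the picture of content expanding away from the zoom-in pole.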

If you want to rotate using a 2×2 complex matrix, you would multiply by the appropriate complex number. So for example to rotate by 90 degrees, you want to multiply by i, so you would use the matrix

\begin{pmatrix}i&0\\ 0&1\end{pmatrix}

Using other kinds of matrices (functions for which are included in the code on Github) you can rotate about any two points of the sphere, by whatever angle you want, or zoom from any point of the sphere towards any other point.
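For instance, a rotation about an arbitrary axis can be built by conjugation: move the axis’s two fixed points to 0 and infinity, spin with a diagonal matrix, and move them back. A sketch (my own helper names, not the functions from the repo):

```python
import cmath

def mat_mul(m, n):
    """Multiply two 2x2 complex matrices."""
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return [[a * e + b * g, a * f + b * h],
            [c * e + d * g, c * f + d * h]]

def mat_inv(m):
    """Invert a 2x2 complex matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def apply(m, c):
    """Apply a 2x2 matrix as a Mobius transformation of a complex number."""
    (a, b), (cc, d) = m
    return (a * c + b) / (cc * c + d)

def rotate_about(p, q, angle):
    """Matrix rotating the sphere about the axis through the two points
    that stereographically project to p and q: conjugate the diagonal
    rotation about 0 and infinity by the map sending p -> 0, q -> infinity."""
    send = [[1, -p], [1, -q]]
    spin = [[cmath.exp(1j * angle), 0], [0, 1]]
    return mat_mul(mat_inv(send), mat_mul(spin, send))
```

The two chosen points stay fixed, and everything else circles around the axis through them.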


So, apart from the math being nice, why else are Möbius transformations a good idea for video editing? As we mention in the video above, they are all conformal, meaning that they don’t change angles. In ordinary flat video or still image editing, zooming and rotating don’t change the angle at which two lines meet, while if you shear an image then angles do change. If you look closely at just a small part of an image, it will look pretty much the same after a conformal mapping, while if you shear an image you can tell by looking at any tiny part of it that something is distorted. Of course things can change drastically on the larger scale, but at least each small part of the scene looks reasonable on its own.

If you have any questions about the theory, or the implementation, or make something cool using Möbius transformations, please tweet me at @henryseg!




P. S. I was inspired to start thinking about Möbius transformations applied to spherical imagery when I read The Mercator Redemption, a paper by Sébastien Pérez-Duarte and David Swart.

P. P. S. There is an awesome video, Moebius transformations revealed, which shows visually how various Möbius transformations of the complex plane can be interpreted as motions of a sphere. This sphere is close to, but is not really the same as the sphere of our spherical video!

WebVR Programming, Live!

posted in: Uncategorized | 0

I’ve been doing some live webVR programming on Twitch this past month, and am planning to do more in the future. All past broadcasts are still archived on my channel here:

I started with a line-by-line walkthrough of “Float,” the webVR game I talked about in my last post. Currently I’m working on this year’s webVR holiday project (last year’s was “Child“), which we started from the beginning on the stream, even including setting up the eleVR boilerplate from Andrea’s github and getting confused about whether I had the right file and how to github (git is always the hardest part of programming).

In-between those projects, I streamed the process of making Child compatible with the Vive headset in webVR (If you have a Vive and want to play Child, see the post on Float for info on getting Vive webVR working). The old version for normal browsers and Oculus Rift is at, and the Vive version is at

To make the player able to move to areas beyond the limits of the walls, we created a new experiment in VR movement design: the Sled. Sit on the sled to activate it, then sled in whatever direction you’re leaning, with the speed scaled according to how far you’re leaning. It feels really natural, and the whole process of doing the vector math was captured on stream.

It’s fun live because people in the chat are good at noticing bugs as I type them and answering my javascript syntax questions. If I’m going to do a stream, I usually tweet an hour or two beforehand, and again when I start, so pay attention to @vihartvihart if you’re interested in catching a stream live, or follow me on Twitch.


livestream screenshot of trees

Suki Sleeping: A new technique for immersive 2D Animation

posted in: Uncategorized | 0

I love cartoons. My Little Pony, Gravity Falls, Adventure Time, Bob’s Burgers, Steven Universe, Legend of Korra: all delicious. So because I am a cartoon nerd and because every single immersive animation I have seen is 3D, I developed a technique to make 2D animation as immersive video.

First go watch (and listen to) the cutest little short about dreaming ever: Suki Sleeping. This is a stereo video (that’s kinda the whole point), so if you can, watch it on your handy dandy headset.


I know, that is some high potency adorableness right? Just let it all soak in. It’s good for you, cause science.

Ok, so first I’ll walk you step by step through how Suki Sleeping was made and how you can do this with your next animation project, then we’ll talk about a tool that will need to be built to facilitate this kind of project on a larger scale.

How to:

First build up your materials.

Animate a flat character doing a thing. Draw/collage an awesome place for your character to live. Just keep each piece, each chair or tree or door or character, separate from the others and on transparent backgrounds.


Make a sequence in Premiere that is the full size of the finished product. I went with 4K x 2K because it makes the arithmetic easier later on. Keep in mind that if you choose a 2 x 1 sequence like I did, you will need to divide the height of each picture element (like a tree or a character) by 2 while keeping the same width. This is because we’ll be squishing it into half the frame. Alternatively you can make your sequence 1 x 1 and skip this step. It just depends on how big you want your file sizes to be.

Now you need a depth measuring grid, because we want the final product to have stereo depth. This is essentially just an arbitrary but evenly spaced set of lines, numbered lowest to highest from top to bottom. The grid helps you organize objects at different depths. The top line should be placed at the horizon, and the height, but not the width, should be adjusted to fit your scene. Here’s the one I used:


Now in the bottom half of the sequence you are going to arrange all of your elements. It is really useful to do the layout phase with the measuring grid visible like this:


Since you are doing this in a spherically unaware editor: avoid the poles. The closer you place things to the top or bottom the more distorted they will be. One method to get around this problem can be seen in the veranda scene in Suki Sleeping. I traced over an equirectangular photograph so my drawing would be in the correct projection. This is an easy stop gap technique until spherically aware animation software is a thing.

veranda scene


Once you have a whole scene arranged and all the character movements in the scene animated, it’s time for the tedious part. Select everything in the scene and duplicate it. Your timeline should look like the one below, with both copies of the stereo measure at the very top and two identical chunks of clips below. It is fine to instead nest your scene and work with two copies of that nest, but I found this to be more annoying than just having everything in the same sequence.

The sequence stack

The first step is to move all of the copied clips and images to the top half of the video. This is where that easy arithmetic lives. Under video effects, subtract 1/2 of the total height of the video from the y position of each clip.
bottom position

top position

Then add the stereo measure to the x position of the same clip. For example, take the pineapple in the image below. The bottom edge of the pineapple is on the 30 line.

pineapple positioning

So we need to add 30 to the top pineapple’s x position.

bottom pineapple

top pineapple

Simple! Now just do that for every single thing you put in the scene! You should end up with something like this. You can get even fancier by animating the disparity between the two copies to make something move forward or backward in the scene.

with stereo measure
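The arithmetic for each duplicated clip boils down to a couple of lines (a sketch assuming the 2:1 sequence from this post; the example position numbers are made up, not Suki’s actual clip values):

```python
HEIGHT = 2048  # total height of a 4096 x 2048 (2:1) sequence

def top_copy_position(x, y, stereo_measure):
    """Position of a clip's duplicate in the top half of the frame:
    subtract half the sequence height from y, and add the clip's
    stereo-measure line number to x to create the disparity."""
    return x + stereo_measure, y - HEIGHT / 2

# A clip sitting on the 30 line (like the pineapple) at a made-up position:
x_top, y_top = top_copy_position(1000.0, 1500.0, 30)
```
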


This is a super time consuming process. Suki Sleeping took weeks to make, and while I would be much faster next time, as I wouldn’t need to also invent the entire process from scratch, it makes more sense to just have a better tool. There are tons of great 2D animation tools out there: Synfig if you want to go open source, Toon Boom and Anime Studio if not, but none of these do either spherical or stereo.

A real immersive stereo 2D animation tool would have users do composition work not on a screen but in a head mounted display, so the stereo can be seen as they go. In the headset the animator would have a bin of materials to composite, a bucket of trees, characters, etc. to place. In the center of the virtual working space would be a stage on which these elements could be placed. The animator would have the ability to see from either the center of the stage, the spot from which the video will be rendered, or be able to walk around the stage seeing things from the outside. They would be backstage. Once a tree, for example, is placed, the animator would be able to bring it closer or farther away with minimal effort.

This would remove the entire duplication phase and instead generate the stereo effect with a combination of built-in stereo rules and depth drawing tools that let users manipulate those rules. Stereo rules would be things the software automatically handles, like: farther away objects have reduced disparity, a tall object’s disparity changes with height, and an object moving from one depth to another has tweening disparity, etc. Each image/clip/piece of the animation would have one float included in its standard effects called distance. This would set the base distance for any clip and could be animated over time.
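As a sketch of what those built-in rules might compute (entirely hypothetical numbers and names; no such tool exists yet):

```python
def disparity(distance, near=1.0, near_disparity=30.0):
    """Hypothetical rule: farther-away objects get reduced disparity,
    falling off as 1/distance, so something at the horizon gets
    (nearly) zero offset between the two copies."""
    return near_disparity * near / distance

def tween_disparity(d0, d1, t):
    """An object moving from one depth to another gets tweened
    disparity, with t running from 0 to 1 over the move."""
    return (1 - t) * d0 + t * d1
```

The animator would only ever touch the distance float; the tool would derive the per-copy pixel offsets from rules like these.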

As for drawing-type tools: the tool would need intra-plane distance definitions; for example, the veranda in the above image is actually three images that each have their own stereo disparity. This wouldn’t be necessary in a system that simply allows you to change the disparity of an image using close and far point pins or hot spots, eventually even animating this disparity so a character could seem to reach back for a mug behind them.

This is just a sketch of a tool to bring 2D animation into immersive platforms, but it’s pointing in the right direction. If you try out this animation technique or dive into the animation software making business, I’d love to hear about your adventures. Find me on Twitter @emilyeifler and happy stereoing!



Hybrid City

posted in: Uncategorized | 0

For the week of November 9th, I joined Paula Te and Michael Nagle, fellow researchers at CDG, as teaching staff at Parts and Crafts in Somerville, Massachusetts to try out a hybrid reality idea.

We came with a simple plan: build a physical model of a city with the kids, a special city in which anything could be linked to any other component or anything online. The links were inspected with iPads running an iOS app Paula wrote. The app allowed us to pre-generate all of the codes, with numbers associated to rows in a spreadsheet, instead of having to print a code every time a kid wanted to link something. It was originally designed that way because we figured on-demand printing would be slow enough to break engagement; however, building the system this way also gave us the freedom to manipulate the links as a storytelling device later on. The project was divided into two general phases: the building phase on the first two days and the story phase on the last two.
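The pre-generated scheme can be sketched like this (a hypothetical table with stand-in URLs, not Paula’s actual app code): each printed code carries only a row number, and the spreadsheet row holds the link target, which is what later made a mass link takeover possible.

```python
# Row number -> current link target (stand-ins for the spreadsheet rows).
links = {
    1: "https://example.com/leaf-fight-video",
    2: "https://example.com/krabby-patty.gif",
}

def resolve(code_number):
    """What a scanned code currently points at (None if unassigned)."""
    return links.get(code_number)

def takeover(new_target):
    """Re-point every existing link at once, as in the story phase."""
    for row in links:
        links[row] = new_target
```

Because the printed codes never change, editing the rows silently redirects every object in the city.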

For me this project grew directly out of the Play/Room project: the linking, the spatial organization of virtual media, the hybrid physical digital thing. Here’s my recent Play/Room post if you want more context.


Building phase

Ometropolis was set 100 years in the future and had a population of 100 people. We used a scale of 2 feet to 1 inch and it was built on two 6 foot tables side by side. It was loosely based on Doreen Nelson’s original city building curriculum, now called design-based learning methods. We started the city off with a seed object, a city park covered in fall leaves built inside a small wooden box linked to a spherical video of Paula and I having a leaf fight at MIT. It was our way to show the kids what we meant by a hybrid city without too much explanation.


The first two days were focused exclusively on building and linking. The kids built homes, restaurants, a beach, a power plant connected to the city by a grid of power lines, a hospital, a school and many other things. I helped by building some of the city’s basic infrastructure: a multi story apartment building, roads, public transportation (the city’s only bus), and the central park.

Because we didn’t strictly regulate what kinds of things were allowed to be linked, the linking started off very casually. Kids made objects they liked and linked them to favorite digital media without seriously considering relevance to a bigger picture: a burger linked to a gif of a Krabby Patty, a TV to the opening of the TV show Community, a person to the Kim Possible theme song. This haphazard linking quickly evolved into more complicated and meaningful additions to the city with clearly related linked media. The aquarium connected to a video of a tide pool, a shark was associated with a gif of a great white eating a seal, and a yarn sun linked to a digital sun.


Paula helped kids make their own linked media by acting as information collector. One example was a diagram of kinds of cats, as dictated by one kid, that was associated with their model of a lion.

On the first day an older girl, 13, built a detailed house with furniture for each room. As she built the model she explained that she wanted to be an interior decorator. She linked the coffee table to a picture of a living room design she found online. The dresser was linked to the inside of a beautiful and perfectly organized closet. The television to the opening theme of her favorite show. The little chair to one from a designer she liked. The floor plan acted as both a mockup and mood board.



Story Phase

On the morning of day 3 we added a twist. In order to make the city feel more cohesive and alive, we decided there would be a disaster. Instead of introducing a cookie cutter natural disaster, we built on the story the kids had already been telling about the city. Ometropolis was a coastal city with a thriving population of vampire squids living just off shore. The squids were so plentiful, in fact, that they were both a common food source and a popular research subject. So before the kids arrived for another day of building and linking, we usurped all the links the kids had already made, replacing their gifs and pictures and personal media with gifs and videos of vampire squids. When the kids discovered this, we told them the city’s network had been hacked by the squids. This idea took hold very strongly. Their suspension of disbelief was awesome. They never questioned how it worked or thought we were messing with them, just that their city had been attacked and they had to do something right now.



The vote

Once the kids realized the city was being threatened we had them take a vote. Should we stay, putting all our resources into defending the city and defeating the squids, or should we run, pouring our energy into building a fleet of spaceships to wing our tiny population to safety? They decided to stay, unanimously.

One of my favorite details from the vote was one girl, about 8, who publicly agreed during the vote that we should stay and protect the city. She was adamant the city needed saving, but when the other kids were off building laser fences and commanding police raids on the squids, she built a tiny, one-woman, two-cat spaceship to get her character Violet off-planet just in case. It even had two-factor authentication: a keypad and a laser fence tuned to only allow Violet’s DNA.

What happened

This is the story of the invasion:

Ometropolitans were used to vampire squids. Their coastal waters were brimming with the cephalopods. Any common visitor to the science center could see one up close in the research tanks. Any common hunger could be sated with squid pizza over at Joe’s.


But one morning as the Ometropolitans woke to go about their business of running hospitals and making pizza and doing science and driving busses and building space boats, they discovered the city’s entire network had been hacked. The entire Internet, pages of cute gifs and scientific research alike, had been replaced with vampire squids. Gifs of them swooping about menacingly in the dark water, snagging fish and generally taunting the citizens of Ometropolis with their various arms and tentacles.


Shocking as this cyber invasion was, soon the Ometropolitans discovered that they had much more to worry about. The hackers had wormed their way through the science center’s firewall and released the Kraken! *Cough* Sorry I mean: released the squid they had been studying. There was a video of the break-out and at least one scientist was killed during the incident. Local news was soon on the scene and once the Ometropolitans had been informed of the developing crisis a vote was taken. Stay and defend the city or flee, putting all resources into space-based escape pods. The citizens chose to defend their home despite their terror.



Three methods were hatched to retake the city and its network:

The cyber security group, led by Violet, a researcher from the science center, developed the CPT (cute and pretty things) anti-virus. Due to her previous work on the now escaped squid she knew that vampire squids hated both cute and pretty things and theorized that together they would be an effective ward against further cyber attacks.

The developers retooled the city’s factory to produce DNA-reading laser fences, which would zap any squid-based lifeforms trying to enter along the coast while leaving humans and other non-cephalopods untouched. The factory managed to erect laser fences along the city’s entire coastline, as well as backup systems to protect the hospital and space port. It was an ingenious and Herculean effort but, little did they know, the vampire squids had a trick up their 8 sleeves.

The Ometropolis police force engaged in both active hacking and ground operations. The city was in near chaos when the vampire squids broke over the fences, and many people, both soldiers and civilians, were lost in the attack, including the leader of the police force, but in the end the Ometropolitans managed to defeat the invaders. This attack changed the city forever, inspiring the planning and building of Omicrom Prime, a space station which would act as a lifeboat in case of subsequent attacks.

All of the videos and drawings and stories the kids made as part of the battle for Ometropolis refilled the network and drove away the virus. And though many were killed, in the end the city prevailed. Those lost were honored with a new graveyard erected in central park.


Spherical video

Much of this story was created as a collaborative improvisation for camera. We used a pair of Ricoh Theta M15s, which proved very hard for the kids to get into video mode, but once the cameras worked the kids were hooked. It went like this. The kids would take one of the two cameras to shoot some new idea they had for a video. Meanwhile I would pull all the footage off the other camera, stitch it, edit it if needed, and get it ready for the kids to see when they came back. Then we would switch cameras and repeat. The kids took to calling this process “Taking it to the shop,” which I suppose made me the video mechanic. They made 24 videos in 2 days.

But while they made a ton of videos and were super engaged, spherical video was a harder concept for them to grasp than I was initially expecting. After we were done shooting and were looking back through the videos, several kids reprimanded each other, saying “Don’t look there, you can see the person carrying the camera.” Or, in a few cases in which a video was staged within the model city itself, some kids were upset to find that when you turned the viewpoint they were in the video too.

When we try this again I would change how I teach the idea from the start. Spherical video is closer to having an active audience member watching a play than controlled cinematography, so I would have the kids pretend to be the watcher. Before they shoot a video have someone play the camera as the others rehearse a scene. Where will the camera be? Put yourself there. Look all around you. What do you see? What do you miss from here? Do you want to see this scene from under the table, from the ceiling, from the hallway outside? How do you feel watching the scene from those places?

Another really helpful activity would have been watching each video all together, with people taking turns controlling the POV. Ideally one person would be watching on a handheld screen, looking around with swipe and rotation, while their screen is mirrored and projected big on the wall. In theory this kind of communal watching would help kids get a more concrete understanding of spherical video. They could see how others look around the video differently, missing or catching different things, letting them design their next scene with those things in mind.


Guided Tours

Here is a layout of the city from above at the end of the week:


At the end of the 4 days the kids gave guided tours of the city to their teachers and parents. Using the iPads they went around scanning links and showing off the spherical videos they’d made and retelling the story of the great battle. It was a fantastic way of closing out the project with a moment of cohesion and reflection. Kelly Taylor, one of the full time teachers at Parts and Crafts, said the project “engaged kids across age groups, social groups, academic & intellectual interests… [and] allowed groups to intersect that often never do (despite being in the same building at the same time).”

We plan to build on this hybrid reality teaching tool/activity and will bring you more insights as they come!


Float: a puzzle-platformer for VR

posted in: Uncategorized | 0

If you’ve got a Vive devkit, set up Vive webVR and try out our new game Float.

Things to know:

If you’re reading this after 2015 probably everything has changed. Just tweet me @vihartvihart if you have a Vive and wanna get this running, either way. June 2016 updates in pink!

Press “f” or “enter” to go fullscreen. You’ll need to be fullscreen for it to work in VR. Or, with the new webVR 1.0 api, press “v” to go into vr mode.

It might work to some extent in your normal browser using arrow keys / wasd, or with other headsets in webVR, but it won’t be a puzzle game anymore. Don’t touch WASD if you’re using the Vive in webVR 1.0; it’ll make it track REAL weird.

Press “p” to stop the initial narration, and space to start it. We use that when we show webVR to people in our space, so that we can make sure they’re settled into the headset before we start the game.

Use IJKL U/O to move the entire scene, to get the starting area aligned with your VR play space and the ground aligned with the floor. See the image at the bottom of this post for how the game should start in the room. Hopefully future webVR will allow us to get enough of your room info to align it automatically.

You can change the height that triggers the platform movement by going into the debugger and modifying “crouchHeight”. 0 is floor, 15 is default. We’re a research group and pretty much only developers have Vives right now, so we don’t feel too guilty about using in-browser debugging as our menu options system. Well, maybe a little guilty, but we’ve got other research to do!

Code is on github here:

Soundtrack is on Soundcloud:

Transcript of above video:

Float is a short virtual reality puzzle game where you visit floating islands to make them come alive again. In theory, it’s not much of a puzzle—there’s nothing in the programming of the game that stops you from simply walking over the air to get to all the islands. The only barriers are the physical walls of the VR-supported room you’re playing in. The puzzle comes from how to use the moving platforms to manipulate what is within the bounds of those physical walls.

Most people aren’t used to thinking about layered spaces, especially when they move relative to each other. Winning this game requires thinking about space in a way that might be unfamiliar.

The idea for the game fell quite naturally out of the research we’ve been doing on VR movement design. Given only a small room, how do you allow someone to feel like they’re traveling through a large space? We’ve come up with a bunch of different answers to this question and this is one of them. Emily Eifler created a simple moving platform prototype in Unity a couple months ago as a proof of concept, and then I put that concept into the webVR framework that Andrea Hawksley built for us, using these 3D-modeled platforms created by Elijah Butterfield.

The biggest challenge in designing this thing was that it took me a bit to really wrap my head around the consequences of this kind of movement. We prototyped it using physical paper islands and cutout room boundaries. Once we understood that, I was ready to put it into code.

I’m not going to go too deep into the process of programming all this, but basically everything is sine waves. The rolling hills are just a pile of sine waves. The rolling clouds are sine waves. The platform movement is sine waves, the birds flap in sine waves, the butterflies flutter in sine waves. When the bloomflowers open, that’s because I’m moving each vertex by a pile of sine waves.
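Float itself is JavaScript, but the sine-wave stacking is easy to sketch in a few lines of Python (made-up coefficients and names, not the ones in Float’s source):

```python
import math

def hill_height(x, z, t):
    """A rolling hill as a pile of sine waves; t slowly animates the roll."""
    return (math.sin(0.05 * x + 0.3 * t)
            + 0.5 * math.sin(0.13 * x + 0.07 * z)
            + 0.25 * math.sin(0.21 * z - 0.2 * t))
```

Each terrain vertex would get its height from a function like this; the clouds, wings, and bloomflower petals just add more waves per vertex.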

The first version had the platforms move along their sine-wave path just whenever you’re standing over them. Turned out this was a bad idea. One moment you’d be looking out at the landscape, the next moment the world would be sliding sideways because you’d triggered the platform, and… that’s exactly the sort of thing that gives people VR sickness.

The two problems with the old movement system were lack of visual context and lack of control over the movement. In a different game with a different style, we may have decided to put some sort of handrail or cage around the moving platform that would stay constant as you move, but that wouldn’t work for the natural setting of Float. So we decided to try something new: move only if you’re kneeling down or sitting on the platform. During the process of kneeling people naturally look down, so the platform is in their visual field providing steady visual context when the movement begins, plus kneeling down is a deliberate act so you can’t trigger the movement by accident. We added a bright green glow as a visual cue to make it even clearer that yes, you’re moving now, and that’s ok.
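The crouch-gated trigger can be sketched like so (Python for brevity and with my own names; Float is JavaScript and exposes crouchHeight through the browser debugger as described above):

```python
import math

CROUCH_HEIGHT = 15.0  # 0 is the floor; 15 is the default from the post

def update_platform(phase, head_height, dt, speed=1.0):
    """Advance the platform along its sine-wave path only while the
    player's head is below the crouch threshold; standing up pauses it."""
    if head_height < CROUCH_HEIGHT:
        phase += speed * dt
    return phase, math.sin(phase)  # (new phase, platform offset)
```

Because movement only happens while the player deliberately kneels, the platform is in view when it starts, and the motion can never be triggered by accident.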

There are also sound cues for when the platform starts and stops, which serve an important purpose beyond just fun and polish: they make it instantly clear to the brain whether the world is supposed to be moving around you, even if you’re not looking at the platform.

The music is also more than just atmosphere. The four major island-groups each have their own little piano theme. As you travel through the virtual space and bring islands together, you hear the intersections of these themes. New themes draw you to new islands and give the different spaces their own identities. Also when you win all the themes play at the same time to give a sense of cohesion and completion.

Float is available right now for the Vive headset in webVR at It requires about 3×3 meters of spatially-tracked play space, oriented such that the initial island sits in the space something like this:


If we were a game company and not a research group we’d have menu options for initial layout and crouch height and sound stuff and this would be just one level of many, but this is a research experiment so if you want to mess with that stuff you’ll probably just have to fork it on github, it’s on If you do have a Vive and want to try it out, probably tweet me @vihartvihart or find us at


posted in: Uncategorized | 0

Hyperlinks as object videos

“There is no end to the game.” — Player

Play/Room is a library, a database, an art installation, a YouTube channel, a memory palace, and an artist’s studio. It is a room-scale mixed reality installation in which every physical object is linked to a spherical video. The links are printed and attached to each object and can be inspected using a handheld screen; each tag also has the title of its video printed below the code. When an object’s code is inspected by a player, the associated spherical video shows up on screen. The viewpoint can be changed both by moving the screen and by swiping with one finger.
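Underneath, the whole installation is really just a table from printed codes to videos. A minimal sketch of that link table — the tag codes and URLs here are made up; only the titles come from the piece:

```javascript
// Hypothetical Play/Room link table: each printed tag code maps to a
// spherical video and the title printed below the code.
const links = new Map([
  ["tag-001", { title: "Ken's Ceramic Studio", url: "https://example.com/ceramic.mp4" }],
  ["tag-002", { title: "Sausage legs at Vidcon", url: "https://example.com/vidcon.mp4" }],
]);

// Called when the handheld scans a code; returns the video to load,
// or null for an unknown tag.
function inspect(code) {
  return links.get(code) ?? null;
}
```

The interesting part of Play/Room is everything this table doesn't capture: which object carries which tag, and where it sits in the room.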

This is an example of mixed or hybrid reality, meaning partially simulated: two overlapping, as opposed to adjacent, places. The seam, the API, between these two poorly compatible worlds is a clunky three-tap code reader on one side and a web-based spherical video player on the other.

Play/Room started with a simple idea: when the web is spatial, when we are used to navigating networked VR, where do my videos want to live? Over the course of its one-month existence I got to watch 13 individual play-throughs and one 3-person multiplayer run.



Here’s everything I learned making, thinking, and researching for this project. First I will cover some current models for interacting with groups of content online. With that context, I will then talk about the specifics of Play/Room’s structure: the types of object/video relationships that appeared in the piece, the types of interactions observed during playtesting (including trace leaving and the effects of visible titles on player choice), and the three main places conflict and frustration arose for players. All the videos from the setup and playtesting phases can be seen on the Play/Room playlist.


The Past

Search, surf, feed and playlist

“This is nothing like links on a screen.” — Player

I am bored with search and surf, feed and playlist. So bored. Homogeneity of interface styles is like literally the worst. Google, Baidu, Yahoo, Amazon, Wikipedia, and QQ are primarily search based at the start of interactions (google “red panda,” click on more images) then switch to surfing, winding your way through different pages via direct hyperlinking (256 pictures of adorable red pandas later…). That was all there was before RSS: a search field, a link, and a huge pile of bookmarks of places you have already been and know you need to get back to. Then came today’s giants: Facebook, YouTube, and Twitter. These behemoths have some of that old-school search-and-link quality about them but also rely heavily on feeds: reverse-chronological lists of available content derived from good old-fashioned mail bags, newest on top. Feeds, first as RSS feeds, developed online to give users seamless access to frequently updated content without having to check all their favorite sites individually. They meant you wouldn’t miss out on something great, cause FOMO am I right.

But that built-in anxiety, the catered-to fear of the missed opportunity, of being out of touch, of being left behind, has led to a boredom of its own. I sit on the train every day watching people swipe up, swipe up, swipe up, swipe up, swipe up, swipe up. Their feeds fill with tiny rectangular images and strips of text: a brief glimpse of an acquaintance’s birthday party, above a mood gif from a favorite band, above a podcast-episode listen request from a prominent new writer, above a new hashtag campaign, above a baby picture from a total stranger, above a musing from a celebrity, above a breaking news story that might be related to that hashtag from before, above a video from a producer you followed once but rarely watch, and on and on. Each creeps up to the top of the little handheld screen then disappears, slipping below the surface and out of sight. Depending on how many outputs you follow, the pile often grows unmanageable and is added to so quickly it is inexhaustible.

But you are always missing out on something. Get over it.

The one externally curated model currently in wide use today is the playlist. Collected by an artist or user generated, played sequentially or shuffled, playlists remove newness as the primary selection factor but still limit interaction to ordered lists.

So what comes after search and surf, feed and playlist? Same as what came before: Places.


Organizing physical stuff in physical places.

“How do places work again?” — Me, just now.

Take books. Books are great. People love organizing books. They make information browseable and thus non-linear and then we stick them on bookshelves which increases the density of information even further and facilitates inter-book browsing.  Then we stick those in libraries: stacks of bookshelves, which are stacks of books. You can pull them down and read them and stick them back somewhere new and everything works great until you have lots and lots of books and everyone is putting them back all higgledy piggledy. Then we have to make up book putting back science like the Universal Decimal Classification. It is a fantastic organization system if your goals are consistency and coverage across all branches of human knowledge. Classes of knowledge are categorized by number with longer numbers being associated with rising specificity of the classification. Meaning 5 is Mathematics and Natural Science while 539.120.2 is Symmetries in Quantum Physics.
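The longer-number-means-more-specific idea can be sketched as a prefix hierarchy. Of the codes below, only “5” and “539.120.2” come from the example above; treat the mechanics as a simplified model of UDC, not its full rules:

```javascript
// Sketch of UDC-style prefix classification: a code's ancestors are all
// of its shorter numeric prefixes, so longer codes are strictly more
// specific. Dots are treated as punctuation, not structure.
function ancestors(code) {
  const digits = code.replace(/\./g, "");
  const out = [];
  for (let i = 1; i < digits.length; i++) out.push(digits.slice(0, i));
  return out;
}

// A book shelved at `code` belongs to every class whose digits prefix it:
// 539.120.2 (Symmetries in Quantum Physics) falls under 5 (Mathematics
// and Natural Science).
function isWithin(code, classDigits) {
  return code.replace(/\./g, "").startsWith(classDigits);
}
```

This prefix property is what makes the shelving decidable: any two books can be compared digit by digit, so every book has exactly one place on the shelf.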

All of this spatial organizing increases the findableness of the books but also fixes them in static relationships to one another. This is kind of ok though, because books are unfortunately already restricted by their utilitarian form. If I could make you books that bounced like a soccer ball, or yipped like a tiny dog on a short leash, or hung in the air like soap bubbles until you popped them and their pages coalesced from the shattered iridescent skin, I would totally do that, but alas the physical world is kinda a stickler for physics and consistency and shit. Libraries are fantastic homes for books. The strictness in the spatial organization of books fits their form, but without that restriction how we organize space loosens up.


Object/video relationships

“Everything’s a ladleful of spacetime in here.” — Player


Now that we have ourselves some context, back to Play/Room. While unplanned, the object/video pairs conformed to four general categories, listed here from the most concrete, realistic, and recognizable to the most abstract and conceptual.

  • Literal: A plastic skull, a toy train car, a pink shawl and Tumblr hat, a laser cut wood mesh, all objects that were in the video they linked to. This was the most obvious type of connection.
  • Diagrammatic: The main example was a scale model of my house with tags positioned where the videos had actually been shot. It diagrammed a specific systematic, in this case geographical, relationship between the videos linked.
  • Representational: Similar to the diagrammatic if more metaphorical, these objects included a pair of teddy bears that stored a pair of videos of my husband and me, a t-shirt from a conference where a group of videos were shot, printouts of gallery websites linked to videos of visiting those galleries, etc. These objects represented what they contained but did not appear in the video directly, nor diagram any systematic relationship.
  • Conceptual: Time-intensive paper sculptures linked to time-intensive capital-A art videos. Flimsy paper sculptures linked to less time-intensive, partially successful video experiments. Crumpled printouts of video screenshots tossed in a wastepaper basket linked to the videos that were utter failures. This category focused on quality similarities across different mediums.

The combination of all these different types of object video relationships was part of what made Play/Room so flexible for lots of different kinds of exploration. Let’s talk about that next.




Modes of interaction

“I want to do chaos to it.” — Player

The most surprising thing about watching my playtesters was the sheer variety of approaches players had to the room. Turns out spatial arrangements are super flexible.

While watching videos players sat on the floor or the pouf, knelt, stood, paced, lay on the floor, danced around, and even played along on the ukulele that linked to several music videos. Several players set down the handheld and wandered the room on their own first, stacking the paper sculptures in various piles, pouring out the trash, or putting on the clothes before engaging with the links. But it wasn’t just posture or trajectory that varied widely from player to player; organization methods were all over the place. One player took special care, after inspecting an object’s link, to line it up with the others she’d finished in a straight line beside her on the floor, like her own physical playlist. Another simply spread things out haphazardly as he moved from object to object. One refused to move anything, calling it meaningless, while the next tossed all seen object videos in a pile in the corner. Another player posed various still lifes around the room with object videos she felt went together: the paper fish head with the skull; the glass and the bottle and the spoon and the toy train all together under the stool; the bears with the gallery printout. As the playtests proceeded I learned that the more visibly organized the room was, the more players picked delicately through the objects, touching as little as possible, while visibly messy arrangements gave players permission to contribute to the mess.


Players often multitasked, propping the handheld up next to them and occasionally swiping to a new viewpoint while they let their hands wander: picking up and feeling various objects, playing with playdoh, putting hats on bears, balancing piles of sculptures, posing still lifes of emoji cards and dishes and dresses and the screen itself. Making and playing in a relaxed, unfocused, multi-modal state. The objects gave tactile feedback to wandering fingers while eyes and ears were trained on a video environment. The ceramic island object, linked to a video tour of a friend’s studio called “Ken’s Ceramic Studio,” was particularly enticing for fingers with nothing to do while other senses were occupied. In this case the object and the video environment have a literal link and together give the player two simultaneous entry points to the same world.

Two players made a game of discovering hidden tags. The tag on the bottom of a glass, the one on a tucked-away power outlet, and the one inside the stool leg were all particularly engaging to these players. Three of the players took selfies, some with the handheld itself and some with the spherical camera that was included as an object video. One highlight from the multi-user test was that each player had their own handheld; they would each scan a different object, then lay the screens on the floor, collaging them together and using swipe to navigate to different views.

The most consistent note from all the playtests was that players rarely chose to use the higher-immersion clip-on headset option, a Wearality Sky, when watching videos. Though initially interested, players quickly abandoned it once they realized it slowed down the already clunky process of scanning the tags. At least half the players did return to it once, when they found a video they wanted to experience more immersively, then set it aside again.

Despite all this variance, two fuzzy categories did seem to emerge: long watchers of few videos vs. short watchers of many. Whereas long watchers tended to make up their own decision criteria for navigating from object to object, short watchers tended to want more structure, more guidance toward a “good” trajectory through the installation. I could see these players enjoying a guided tour if a similar piece were installed in a museum context.



Effect of titles

“Wait, they have titles?” — Player

Titles were an unexpectedly polarizing aspect of Play/Room. For one player in particular, the titles more than the objects themselves caught his eye and led him to choose which tag to scan. The exact opposite was true for at least three other players, each of whom commented during their playtest that they hadn’t even noticed the tags had titles at all. In the case of the VidCon shirt, which was plastered with many tags, the tag titled “Sausage legs at Vidcon” was the one players inspected most often.

For abstract and representational style links the titles helped players ground the objects. A title is a top-down classifying force and informational guide, and while the titles seemed to support object-oriented discovery for those who did not easily engage with the objects directly, in some cases the presence of language completely overrode the communication being done by the objects themselves, rendering the room just an awkward unordered list. But even with this downside, the titles ultimately contributed to interactive flexibility.


Leaving Traces

“I want to leave a thing in the world for others to find and I want to find secrets they leave for me.” — Player

Players wanted to contribute little notes or their own paper sculptures for others to find, or to leave traces that would let another player know what a certain video had made them think of. These types of contributions work well in the lab, but I am unsure how the behavior would change outside our little sheltered cove. Text-based online trace leaving quickly devolves into bullying and all manner of discrimination, but there is evidence for harmless object-based trace leaving in art contexts. Sarah Sze, an American artist who makes large-scale installations from ordinary objects, mentioned in a 2012 interview with PBS’s Art21 that museum visitors often left small objects from their pockets, like paper clips and coins, in her installations. The left-behind objects may have been anonymous and even entirely invisible to other visitors as not part of the original piece, but people still felt compelled to leave them. I will be exploring socially constructive ways of engaging this impulse in future projects.


Failures and Frustrations

“When an object is not literally linked to the video I think I am missing something.” — Player

There were three main pain points for players: Play/Room didn’t effectively teach people how to play with it, some players were disconnected from touching or moving the objects at all, and despite my expectations, many players struggled with Play/Room’s lack of clear structure.

Let’s take these in turn.

Players were given very few instructions upon entering, other than a brief tutorial on how to use the scanning software. This meant it was not obvious to players what was and was not allowed. Many of the objects seemed fragile and were thus not considered play objects. In a completely virtual environment players would likely explore freely, knowing they couldn’t actually damage anything, but in a mixed reality context players have the understandable fear that they will do it wrong and cause irreversible damage to clearly unique objects. I’m not sure this is avoidable, unless I used only objects that could easily be replaced if damaged, or, if installed in a public context, placed all tags in plain view so nothing is actually touched in the process of viewing. But eww, all looking and no touching is just eww.

The second hurdle to seamless use, which several players mentioned, centered on the meaning of moving an object. “I would move things around but it doesn’t seem to mean anything,” said one player, who also later mentioned a desire to see his trajectory laid out behind him showing which videos he had already watched. To my consternation, this player did not see the ability to move things as equivalent to that desired functionality. This type of spatial arrangement may never work seamlessly for some players, or it may just be an audience behavior that needs to be taught more directly. I don’t know yet.




Lastly, about a third of players wanted more structure than was provided. This came in two forms: first, players who wanted all object/video relationships to be of the literal type, and second, players who wanted the organization of the room and the meaning of the videos to have an overarching, cohesive narrative structure. Dissatisfaction with the link types, born of the feeling of missing something, was justified: players who focused on literal-type links were missing something. One player framed it like this: “With books, I can tell what I am about to read when I see the cover.” And while I disagree with judging books by their covers, I think it points to a framing problem that could be solved with different environmental expectation setting. If installed in an art museum, for example, players would likely be better primed for a spectrum of link types. It should be noted that for these players the model of my house was the most satisfying object to inspect.

The latter group were less frustrated by any individual link than by the lack of an underlying discoverable narrative. The games Her Story and Gone Home were both mentioned as what players expected Play/Room to be like. (Her Story is a game in which you sift through an unordered library of old police interview video files to discover a sordid tale of your family’s past. Gone Home is a game in which you wander your family’s unoccupied house examining ordinary household objects to figure out what’s been going on since you left.) Sure, these players were happy to wander the room watching videos and examining objects, but they wanted it all to mean something, to have some clear purpose, something to give their explorations focus. This is a need I hadn’t even considered when building Play/Room. It was designed to be a growable, library-esque place, but there’s no reason an installation structurally identical to Play/Room couldn’t scratch this itch. Layering time would also be a very effective use of Play/Room-style linking: immersive theater, perhaps, in which actors inhabit the space at one point in time while the space lets you look backward and forward to other scenes that happen in the same place.



Now what?

“It gives you privileged access to a private space.” — Player

I’m starting to realize that spherical videos are somewhere between a thing and a place. Like globes, spherical videos let you see context instead of cropped portions of a dissected world. So where should they live? Where would they feel at home? Somewhere you could touch them like globes, somewhere you could feel the ridges of their mountains. Metaphorically or whatever.

A place you feel comfortable picking things up, touching things, going slow; a place like home. Clothes in the closet, pans in the cupboards, books on the shelves, and creative messes everywhere (at least at my house), with the things most often or most recently used migrating naturally to the front. Being invited into someone’s home is super intimate. You learn something close and peculiar about a person by what’s in their space: a shelf packed with vintage perfume bottles, a collection of gregarious high heels, piles of board games or religious paraphernalia. Play/Room successfully imparted this kind of personality to the players. Many mentioned feeling like they were visiting my artist’s studio or even poking around inside my brain. One player even described Play/Room as a physical memory palace. It’s like going through your grandmother’s attic or your dad’s self-storage unit while they are still around to tell you the stories of the objects, and of your family with them: hand-sewn quilts with butterfly motifs made by great-great-great-etc. grandmothers, tchotchkes bought on trips to faraway places, the last piece of unbroken crockery from a long-lost set, each with a story to tell. This response was strongest with the most documentary-style videos, but while players could look back, dipping into a stream of my memories, the objects also activated their own.

“With objects thumbnails could totally remember which ones I’ve seen and which I haven’t.” — Player

All of this touching stuff and memory talk has got me thinking and reading more about embodied knowledge and how our bodies and their sensory motor capacities understand stuff and how we can make better mixed physical virtual interfaces for them. So that’s where I am headed in the next post. Check back soon!




posted in: Uncategorized | 0

On June 16th, I decided to make a spherical video every day. You can watch them all on BlinkPopShift. Vi asked me a bunch of questions about the project to help me write this post, so here’s an inter-eleVR interview! I am currently only 55 or so videos into this adventure, so let’s talk about early insights.

Why daily videos?

Because quality vs quantity is a false dichotomy. Sure, optimizing for quantity in the long run gives you billions served, but just letting yourself make a giant pile of probably not very good drawings or gifs or stories or spherical videos in order to learn what works leads to some sweet skillz. Or at least that’s the reason I give when “I wanted to be the person who has made the most spherical videos” seems like a less than lofty research/artistic pursuit.

What have you learned so far?

Turns out flat video lends itself to talking-head presentational styles in a way that spherical video does not. First-person web video has developed a style in which the maker/performer insinuates themselves into the scene of the video while still controlling the frame. There is an edifice to them; they are about something: a topic, a song, a prank. But the more I try to shoehorn that style into a spherical format, the more it resists. Many of the daily spherical videos so far feel more like ‘Come hang out with me while I do this thing I would be doing anyway,’ with the largest number falling into the ‘I am making art and the camera is running’ category, which reveals a lot about how I spend my life, something I never let happen when I was making flat video. In flat video I kept video Emily and RL Emily very separate, but not so in spherical. It feels like documentation without translation into textual or verbal language, and the non-framed, non-presentational, all-seeing eye of the camera makes me feel more relaxed, more open, and frankly just more adventurous and laissez-faire about what I can shoot with it. Nothing gets cropped out, nothing gets left behind.

Do you feel like you’re better at making spherical videos than you were at the beginning?

Extra yes. The speed of iteration means it finally feels like I’m getting good at shooting and editing spherical video instead of just getting good at finding workarounds for fussy software or stitching or building camera rigs. All of that background research has been crucial to my understanding of and approach to the medium, but I also feel liberated from the time-consuming quagmire of hardware and software battles, free to try things I am pretty sure will fail.

What’s your favourite video you’ve made for this project?

I love a small handful, including 3 mins o’ y’stad’y: Climbable drawing, because it is exactly what I wanted: a stark, high-contrast, graphical image, a physical line drawing coated onto the sphere. 3M0Y: This is a song about Steve, because it was totally unexpected and sweet and could only have happened that way in a spherical video. #MOY: Dance Dance with a Teen Turtle at VidCon, because I set out to capture one thing, me trying to learn how Dance Dance Revolution works, but the camera saw more than I was aware was even happening at the time.

What’s your least favourite?

There are several videos I don’t like simply because I am pointing out things you can’t see because of the resolution, or I’m trying to show something in a room that’s too dark, or they are just boring, but my actual least favorite is 0:03 yesterday: I hate that word, deconstruction. It feels forced and artificial and flat. It’s not that you can’t talk to a spherical camera the way you might talk to a flat camera; I’m sure that is much of how this technology will be used, in fact. It’s just that I know I can find better, more interesting things to make with it than leaning on that particular crutch.

How often are you surprised by the results of your video? Which was most surprising?

The ease that comes from the lack of a frame makes spherical shooting far lower impact than shooting flat, both socially and cinematographically. Missing the action is pretty hard. It allows for greater serendipity, flexibility, and creative course correction mid-recording. All of the most surprising results have come from this tendency toward the fortuitous: tiny dance parties with Steve, unexpected appearances of Teenage Mutant Ninja Turtles, sudden downpours, and playing with dancers.

How do other people react when you’re filming in spherical in public?

Socially it’s lower impact because, without the feedback of a screen, the camera melts into the background. I ask for consent to record people before I whip out the camera, but I have noticed that because of both the Theta’s size and the fact that I am not pointing it at anyone, it quickly becomes invisible to those around me. Flat video adheres to that good ol’ uncertainty principle, which states we cannot observe something without changing it. People put on their best faces, angle themselves to the camera, get just the right light, fuss with their hair and clothes, and look at themselves in the screen if one is facing them. But a tiny handheld spherical camera elicits none of those learned camera-is-present behaviors. This of course may change as spherical recorders become commonplace and we develop new behaviors to deal with them, but I will enjoy the uninhibitedness in the meantime.

Do you ever go back and watch your old daily spherical videos? What’s your reaction?

I have gone back and watched a bunch of the old daily spheres. Once I have a few weeks’ separation I can see them more as art and less as things that just happened in my life. It reminds me of a piece I read recently by Joan Jonas, a visual artist and early pioneer of video and performance art in America, titled Transmission, in Women, Art & Technology, edited by Judy Malloy. In the introduction she talked about her switch from studying “how illusions are created within a frame” in paintings and film to her study of performance. “Performance,” she wrote, “is not a space separate from ongoing activities of daily life.” Much of her early work was made improvising, just playing around in the city with the camera, and I want to try more of that as this project continues.

What camera do you use? What camera do you wish you could use?

This project is currently being shot on a Ricoh Theta M15, which shoots video at 15 frames per second and 1920×960, which sounds close to HD until you realize the resolution is stretched over an entire spherical field of view. I am itching for the day when they fit 4K at 30 fps or higher into that same sleek handheld form factor.
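Some rough arithmetic makes the gap concrete. The 1920 horizontal pixels come from the Theta’s spec above; the ~100° headset field of view is my own ballpark assumption, not a figure from any particular HMD:

```javascript
// How many of the 1920 horizontal pixels actually land in a headset's
// view when they are spread over a full 360 degrees?
const sphereWidth = 1920;          // Theta M15 equirectangular width
const degreesAroundEquator = 360;
const headsetFovDegrees = 100;     // rough assumption for consumer HMDs

const pixelsPerDegree = sphereWidth / degreesAroundEquator;           // ~5.3 px/degree
const pixelsInView = Math.round(pixelsPerDegree * headsetFovDegrees); // ~533 px

// ~533 px across the whole visible field: closer to sub-SD than to HD.
```

By the same arithmetic, even a 4K (3840-wide) sphere only puts about 1067 pixels across a 100° view, which is why spherical resolution demands keep outpacing flat video.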

Do you prefer to watch them in VR, or in a flat spherical player like the YouTube player?

I prefer watching them using the Wearality Sky HMD, but only if the camera was steady during the shoot; otherwise a mobile web player with gyroscope and swipe tracking is my favorite.

What reactions have you gotten? How do you feel about them?

One viewer, Hussien Salama (@hussiens), particularly liked ‘#MOY: Sausage Legs at VidCon’ because it maximized viewer choice. “Usually a lot of the time the action of the moment is just in one spot at a time, but here I was able choose to concentrate on the conversation that you had with Betty (articulationsvlog) or just watching maliaauparis concentrating on the baby.” That video is really busy, but it’s interesting that instead of being overwhelmed, Hussien felt free to focus on what interested him instead of trying to catch it all.

What is fulfilling about this project?

  1. I now have a much clearer understanding of the tools that need to exist to make better videos: what would make the camera better, the stitching, editing, compositing, effects, web compression, the playback, all of it.
  2. I love that there’s not much to go on. I like the open-ended puzzle of it, the try-it-and-see-what-happens, the weird, unformed, liminal sprawl of it all.

How long do you think you’ll keep doing this?

For a while at least. Maybe when I have made 1000 videos I will know what the heck spherical video really is.

How do you manage to do this in addition to all your other projects? Are you some kind of magic human or something?

Shh, don’t tell nobody.



A Place for Flat Video in VR

posted in: Uncategorized | 0

I talked a bit in the post ‘No Video is an Island’ about using VR to liberate video from the ubiquitous embeddable players that web video has become so dependent on. Little consideration can be put into where a video is watched when you are simply publishing to a common video site. Your audience could be tucked under blankets on their phones trying not to disturb their husband inches away, on the train with tinny earphones in, or tabbed away to a work document, listening more than watching. Online video can be watched anywhere, which gives it flexibility, sure, but VR gives us the opportunity to see what happens when the watching environment is integrated with the footage.


I thought I’d try out an integrated footage-and-watching environment, and 7_Water is the result. This place was inspired in part by the Desktop Monument episode of The Art Assignment, in which host Sarah Green and guest artist Lee Boroson discuss creating desktop-sized artworks that reference the feeling of being in a natural scene you may never have seen in real life. I went with rain in California, a rare thing to see these days. 7_Water is an art toy that lets you immerse yourself in a city storm whenever you get the urge. Just hit that raindrop button, by pressing the spacebar while you are looking directly at it, and down come the drops.

I love city rain. I’m a city kid. Not much for camping or fishing or whatever it is outdoorsy people get up to in their quick-drying, ultralight, double-protection thermals. In my opinion there is no natural world, there is only the anthropocene, and my city habitat is just further down the spectrum of intensity of human intervention. So natural? Check. What about never seen in real life? Here I deviated from the assignment: instead of picking a place I have never experienced, I chose one I may never experience again: a San Francisco rain shower. They’re my favorite kind of storm, actually, now that I think of it. These are not the thundering stampedes of rain that ruled the western Rockies just across the range from my hometown in Colorado. In those eastern, rain-shadowed foothills, raindrops came regularly at 4pm after school every day but, with little staying power, were replaced immediately by sunny puddles. By the time the rain got to us the mountains had already sucked the storms dry and we were left with the tender edge. From the eastern Rockies to the Mississippi, the land slopes downward away from the clawing peaks, tumbling down into a grassland half a continent wide.

But here in San Francisco, tucked away in the city, the rain was once an outfit we wore threadbare for the love of it. Rain that dressed more for moods than for seasons. But the drought is here to stay. I have to get used to it, to scrimping water, to brown medians, to no more rain. Ok ok you get the point. I miss the rain. Blah blah blah, prairies and childhood and I want my rain back, pout etc.

cubic comic 1

7_Water was drawn on six sheets of square paper, each scanned and then reassembled on a cube using Three.js. WASD controls let you look around if you are exploring the environment without a headset, and space activates the ‘Make it rain’ button when you are looking at it. The video component, a shot of me looking in on the scene from the outside, reduces the scale of the room, and you with it, back down to its original desktop size. All the code is available directly from the site if you want to make your own mini monument!
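The “press space while looking at the button” check can be approximated with a dot product between the view direction and the direction to the button. This is a sketch under my own assumptions, not the site’s actual code (the real version presumably uses a Three.js raycaster), and all names and the cone angle are invented:

```javascript
// Gaze check: the spacebar only triggers "Make it rain" if the camera's
// forward vector points within a small cone around the button direction.
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function normalize(v) {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

function isLookingAt(viewDir, buttonDir, coneDegrees = 10) {
  // Two unit vectors are within coneDegrees of each other exactly when
  // their dot product is at least cos(coneDegrees).
  const cosThreshold = Math.cos((coneDegrees * Math.PI) / 180);
  return dot(normalize(viewDir), normalize(buttonDir)) >= cosThreshold;
}

function onSpacebar(viewDir, buttonDir, startRain) {
  if (isLookingAt(viewDir, buttonDir)) startRain();
}
```

The cone-angle test is cheaper than a full raycast and is forgiving of slightly off-center gazes, which matters when the “cursor” is your head.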


Making VR Video with Kodak PixPros

posted in: Uncategorized | 0

We’ve been experimenting with multiple Kodak PixPros!

The PixPro is a single camera with a single super-wide-angle lens. It’s advertised as “360” in that it captures a full great circle at the edges of its field of view, but it does not capture anything close to a full sphere. One camera can, however, cover the complete field of view of a human eye, and a much greater field of view than any VR headset currently on the market, with just the default lens and default settings.

To get a full sphere, you can use two at once, and stitch the footage together. We taped two PixPros back-to-back and stitched them using Kolor Autopano Pro, shown in comparison with footage from the Ricoh Theta in our “Back-to-Back PixPros vs Ricoh Theta” test. As you can see, the resolution of two PixPros is slightly higher than that of a single Theta, but the stitching distance is much greater (and stitching in Autopano is harder than using the Theta’s automatic stitching software).


If you want to use just one PixPro to capture a section of the sphere and view it in VR, you can use the PixPro app to “unwrap” the footage into equirectangular footage (the app calls it “YouTube Format”), though at the moment the app doesn’t do a very good job of it. There’s a significant amount of distortion around the edges of the lenses. The app also currently can only “unwrap” with the assumption that the center of the footage is facing straight up or down, so you’ll have to manipulate the footage in another program if you want to change where the horizon is (such as by stitching the equirectangular footage to nothing in Autopano Pro, then changing the horizon before export).
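For the curious, the “unwrapping” the app performs is at its core a change of projection. Here is a rough sketch of the underlying math, assuming an idealized equidistant fisheye pointed straight up; the field-of-view default below is an illustrative assumption, and the real lens profile deviates from the ideal model, which is part of why the app’s output is distorted around the edges:

```python
import math

def equirect_to_fisheye(lon_deg, lat_deg, image_size, fov_deg=235.0):
    """Map an equirectangular direction (longitude, latitude in degrees)
    to pixel coordinates on an upward-facing fisheye image.

    Assumes the fisheye's optical axis points at the zenith (lat = +90),
    as in the app's straight-up "unwrap" mode, and an ideal equidistant
    projection (radius proportional to the angle off the optical axis).
    """
    # Angle between the viewing direction and the optical axis (zenith).
    theta = math.radians(90.0 - lat_deg)        # 0 at zenith, grows toward horizon
    # Equidistant fisheye: radius grows linearly with the off-axis angle.
    r = theta / math.radians(fov_deg / 2.0)     # 0..1 across the half field of view
    cx = cy = image_size / 2.0
    x = cx + r * cx * math.cos(math.radians(lon_deg))
    y = cy + r * cy * math.sin(math.radians(lon_deg))
    return x, y

# The zenith maps to the centre of the fisheye image.
print(equirect_to_fisheye(0.0, 90.0, 1024))   # → (512.0, 512.0)
```

A real unwarp runs this mapping in reverse for every output pixel and resamples; the distortion the app shows comes from the lens not matching the assumed ideal profile.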

The PixPro and the PixPro app are both quite finicky, so you’ll want to find the tutorials on YouTube that explain some of their quirks.

I recommend Kodak’s own “James” tutorials for things like using the camera and changing its settings:

“MarkHawkCam” has some on the unfolding software and its quirks:

Luckily, while there’s a lot of distortion in the final equirectangular format video, the distortion is at least consistent, which means two cameras side-by-side can create working stereo that fills an entire human field of view! So we put velcro on the back of each of our cameras, and started sticking them to everything.

We’ve been wanting to experiment with two ultra-wide-angle side-by-side cameras for semi-spherical stereo for a while, and the PixPro provided the perfect opportunity. We started with two cameras on the floor facing up, meant to be watched while lying down in a particular orientation. We were happy to find that the two cameras’ views do indeed mesh (similar experiments with the Ricoh Theta failed, due to the two Thetas’ stitching distortion being slightly different, possibly due to inconsistencies in the gyroscopes).

We show example footage, as well as explaining some of our initial findings, in “Stereo Spherical Experiments: Side-By-Side PixPros in the Dome“:


The “unwrapping” done by PixPro’s app is visibly stretched too tall around the edges of the camera’s vision, something easy to see when inside a geodesic dome. For stereo, this can create problems with exaggerating the stereo and making it un-meshable. I recommend keeping the cameras as close together as possible if you want to be able to mesh stereo that’s within a few feet of the camera.

Because the PixPro software can easily do a projection assuming the camera is facing straight up or down (though I’ve heard there will be more options soon), we wanted to try both. We did a test called “Ceiling Person“, and found that downward-facing semi-spherical stereo is really effective. It feels natural to view something that you’re looking down at, whether you’re sitting or standing. Conversely, looking up for long periods of time is really only comfortable when you’re lying down, and definitely not great when sitting in front of a computer.

In the first half of Ceiling Person, we simply velcro the cameras to the ceiling of our sound room and hang out a bit. In the second half, we try widening the distance between the two cameras. Everything close to the cameras can’t be meshed, but things further away are visible in hyperstereo, like we’re tiny people in a box rather than a room.

If you do watch this in stereo, don’t strain your eyes trying to mesh unmeshable things in the experimental footage in the second half.


In general we’ve found the absolute minimum meshable distance viewers are capable of is somewhere between 2 and 5 times the distance between cameras, depending on the person, and it’s not comfortable for long periods of time (though our tests are pretty informal). The industry standard for regular stereo film is a factor of 30 or more as the minimum distance (about 6 feet). I think for now we’ll see minimum distance standards for 3D 360 film settling in at more like a factor of 6 to 10 as absolute minimum (a compromise between the limits of current spherical camera technology, the unique effectiveness of close objects in VR, and the strain of viewing), but the usual distances of things-you’re-supposed-to-be-looking-at should be much further. Maybe we’ll go down that rabbit hole in a future post.
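The rule of thumb above is just a multiplier on the camera separation, which makes it easy to sanity-check a rig before shooting. A tiny helper (the factors are the informal ones from our tests and from flat-film practice, not a formal standard):

```python
def min_meshable_distance(camera_separation, factor=5.0):
    """Rule-of-thumb minimum distance at which viewers can fuse stereo.

    'factor' is the multiplier discussed above: roughly 2-5 for the
    absolute limit in our informal tests, around 30 for the traditional
    flat-film standard, and maybe 6-10 as a likely standard for 3D 360.
    """
    return camera_separation * factor

# Two side-by-side cameras about 0.2 ft apart: with the conservative
# factor of 5, nothing closer than about a foot will fuse.
print(min_meshable_distance(0.2, 5.0))   # → 1.0
```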

Both the above stereo tests require a specific orientation for the stereo to work, different from that of most current “3D 360” videos (stereo when oriented upright for a full turn, but mono when looking up or down). These tests definitely have a specific forward direction, and for some films and audience expectations, keeping one expected direction will be the thing.

Sitting in a non-swivel chair, facing one way and looking down to see crisp perfect stereo below you, is nice. Lying down and looking up and seeing crisp perfect stereo above you is also nice. And being able to look behind you is sometimes nice, but not always necessary. All these different expected stereo orientations will have their place in the near future of VR video.

The PixPro was perfect for these tests. Potentially it could be inexpensive and easy enough to use that we could recommend using them rather than better cameras with non-default lenses and 3rd party software, but first Kodak needs to put a lot more resources into their software.


VidCon Workshop Files Now Online!

posted in: Uncategorized | 0

Recently at the 6th annual VidCon down in Anaheim, we introduced an audience of online video creators to making spherical video. We had a lot of fun prepping for this workshop. American Beauty was reenacted with a pile of Google cardboards…

We broke out the quaddecapus charging brick and turned our hotel room into a camera charging nursery…

We partnered with Google to give away nearly 300 Google Cardboards, and with Ricoh, who generously loaned us 30 Theta M15’s so our workshop participants could get started making spherical video right away. Kodak donated a PixPro SP360 camera to one lucky workshop participant, and five more participants got to take a Ricoh Theta home.

We have a playlist of all the workshop videos here. Unfortunately, a few of the groups only took photos so if you don’t see something from your group that is probably what happened.

If you missed the workshop in person, don’t worry, we have you covered. Our friend Malia Moss generously documented the whole workshop in lovely flat video.


So many awesome creative ideas came out of the workshop. This video of one group playing camp games from childhood around the camera puts me in mind of summer campfires and mosquitos.


One group took our web video assignment to heart and saw how many cute internet videos they could watch at once!


Or this group that created a great practical effect with a plastic garbage bag.


These videos, as well as all of our other content, are Creative Commons with attribution, so please download stuff and start editing. We would love to see what you come up with. Share your findings with us on twitter (@vihartvihart, @andreahawksley, and @emilyeifler) using #eleVR.

Happy sphering!


Hypernom

posted in: Uncategorized | 0

Hypernom is kind of like 4-dimensional pacman, for VR on your phone or browser.

After many months of it sitting around mostly finished, we’ve finally put the final touches on Hypernom, finished the paper, and updated the documentation on github. It made its debut in the BRIDGES 2015 art exhibition, and the corresponding talk is now available on YouTube. Hypernom works awesomely on a variety of phones, browsers, and VR headsets, so go eat some four dimensional shapes!


It kind of all started with a non-VR project Henry Segerman and I did a couple years ago, regarding four-dimensional symmetry groups and monkeys (see paper).


We brought some of this work to VR last year in video form, with 4Dmonkey.gif (see related blog post). Obviously the next step was to properly embed the viewer in the 3-sphere, by doing some proper 4D graphics. So we collaborated with Marc ten Bosch, who wrote a 4D shader that we could use in our VR framework. Add some Andrea Hawksley magic, and we even had something that ran reasonably fast, and on a variety of devices! Monkeys was born, and remains one of our most popular works (also on github). And then… well, it would be a shame to have this great 4D graphics shader and not also do all the regular polychora (4D analogs to the platonic solids), and, well, as long as VR headsets give their orientation data as a quaternion, it just seems obvious that one should map this data to the 3-sphere (because it maps so nicely as a double-cover) to do movement, and of course as you move around you should carve out the cells to be able to see more of the space and see how much of the orientation space you’ve covered…

So you see, it was all very obvious and natural.

The piece in the art exhibition for Monkeys, “Monkey See, Monkey Do,” included 3d printed sculptures, the VR 4D monkey experience, and an interface to change which symmetry group you see in VR, made using laser-cut polyhedra with capacitive sensors.

The piece for Hypernom included projected images and a big red button that you could hit to cycle through the 6 different regular 4-dimensional polytopes.

There is a nice article about it on The Aperiodical.

Emily filmed the talk in spherical (below). It is also available as a regular rectangle. You can find Henry’s talk slides here.

The end!


The Complete Theta Tutorial

posted in: Uncategorized | 0

The Ricoh Theta was designed as a still camera, but it remains the best consumer product on the market for making spherical video. We want to make sure anyone can learn!

So here’s shooting, stitching, editing, and viewing 360 video using the Ricoh Theta camera, all in one complete tutorial. The video above and the text below cover basically the same content, so take your pick.


1. Turn on the camera in video mode

Hold down the wifi button and keep it held down while you press the power button once. Wait until you see that the power indicator light is blinking slowly in blue. Then let go of the wifi button.

Always check that it’s actually in video mode! We’ve all messed that up at some point and come home from a shoot with just a pile of photos. If it’s not in video mode, turn it off and try again.

2. Start/stop recording

The big button on the front of the camera starts recording. There’s an indicator light above it that will be blue when it is not recording, and turn off when it starts recording (counterintuitive, but keeps the light from showing up in the video).

There are two ways to stop recording: press the button again, or wait three minutes and it will stop automatically. There’s a three minute limit on this first version of the camera, after which the camera will quietly stop recording and the light will come back on.

3. Stitching line

The camera records a full sphere, with the view of the two lenses overlapping. The stitching of the two halves can be quite good, especially if the exposure is even and everything on the overlap is far away. But you probably want to orient the camera so the important stuff is in full view of just one lens.

Right now, the stitched footage puts the seam of the equirectangular video right down the middle of the front-facing lens (the one with the button), which is not ideal for certain players that don’t wrap perfectly or that process the video. So consider either offsetting during editing, or just face the opposite lens towards the subject of your video to get it to be in the center.
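If you’d rather move the seam in a script than in an editor, the operation is just a horizontal shift with wrap-around. A minimal sketch, treating a frame as a list of pixel rows (a real pipeline would run this over decoded video frames with an image library):

```python
def offset_equirect(frame, degrees):
    """Rotate an equirectangular frame about the vertical axis by
    shifting each row horizontally with wrap-around -- the same thing
    Premiere's "offset" effect does when moved only horizontally.

    'frame' is a list of rows (each row a list of pixels); the frame
    width spans the full 360 degrees of longitude.
    """
    width = len(frame[0])
    shift = int(round(degrees / 360.0 * width)) % width
    return [row[-shift:] + row[:-shift] if shift else row[:] for row in frame]

# A 4-pixel-wide "frame": a 90-degree offset moves every pixel one
# column to the right, wrapping the last column around to the front.
tiny = [[0, 1, 2, 3]]
print(offset_equirect(tiny, 90))   # → [[3, 0, 1, 2]]
```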

4. Gyroscope

The Theta has an internal gyroscope that tells it which way is down. So if you mount the camera upside-down, or hold it sideways, your stitched video will still come out with the horizon where it should be! You can even move and rotate the camera mid-shoot, but be aware the gyroscope can be slow to catch up, so don’t reorient the camera too quickly.

5. Exposure

Try to get even exposure on the two lenses. If there’s a bright source of light like a window, don’t face one lens towards the window and one towards the dark of the room; turn it sideways so both lenses get a bit of each.

6. Sound

The microphone on the camera isn’t great, and tends to click and pop and do weird stuff, especially in noisy situations. If sound is important and your shoot involves anything beyond simple talking in a low-noise environment, I recommend recording sound externally with a different device.

7. Storage and Battery life

The Theta has internal memory only, and not a lot of it. You can get maybe 40 minutes of footage before it runs out. The battery life is much longer than the potential shooting time.

8. Troubleshooting

If the front light is red, it probably means the camera is processing. If it doesn’t switch back immediately on its own, turn the camera off and on again.

Other patterns involving red lights on the front and power buttons can mean that it’s out of space or out of battery. If it doesn’t start working again after turning the camera off and on again, chances are you need to charge up and empty out your footage.


1. Get your footage on to your computer!

PC: Connect via USB and use like any other external drive. If it’s not showing up, try switching to a USB port on the motherboard, rather than an external USB hub or front-facing fancy USB thingy.

Mac: “iPhoto” or the built-in “Photos” app only, at the moment. Import and check “delete after import” (there’s no other way to empty space on the camera). Some versions will let you drag your videos out of your photo library into a more reasonable location, but in Yosemite you’ll have to go to “Pictures”, right click on “Photos Library.photoslibrary” and select “Show Package Contents” to start seeing the folders inside it. Somewhere in there, arranged by date, will be your videos in a folder.

Alternatively, after importing with “Photos” you can right click on the video and choose “get info” to get the file name, and then search for it in an app like “EasyFinder” that will let you search through hidden folders and packaged content to find where your Mac is hiding your files, and then drag it out of there.

2. Stitch with Ricoh app

This is the easiest part. Just drag your raw .mov videos into the app and press convert. It will stitch automatically, and can even do large batches of files.

Download the app off their website:

Your videos will come out in equirectangular format, as mp4s. At this point, they are playable as 360 videos in the Theta app or the eleVR player, and can be uploaded to YouTube and will show up there as spherical videos as well.

Or maybe, you want to…


1. Make sure your sequence settings are right

In Premiere: when you drag your footage into your sequence, let the sequence take its settings from your footage (this will work if your Theta footage is the first thing you drag into your sequence). This will either happen automatically or there will be a dialog box asking if you want it to change settings to match your footage (you do).

In iMovie: As long as your Theta footage is the first thing you use, it should match settings.

2. Edit basically as normal

You can hard-cut, cross-fade, change exposure, basically do almost everything you do when editing flat video. Just don’t crop and resize your video to zoom in, or move your spherical footage vertically up and down.

The most useful tool to know about in Premiere is “offset”, which lets you rotate the sphere of footage, wrapping around from one side to the other. Use it to keep things oriented about the same way from one shot to another, to center points of interest, and to create visual continuity between objects in one shot and objects in the next.

3. Compositing and Titles

If you want to get fancier, you can do some amount of compositing, though right now no video editing software understands how to warp things to work in equirectangular format. The best way is to take things already in equirectangular format and composite pieces of those, without moving anything vertically up or down.

Perfect equirectangular images can be created from other programs such as Maya, ready to layer on to your video. Or, you can record footage of the thing you want in the proper place, layer it on, and cut out the rest.

If you layer a regular image or title onto the equirectangular video near the middle, it will look ok. As you move up or down it gets more and more warped. Also be aware that things layered onto the middle of the video tend to appear larger than you expect them to. Your title may look tiny when layered on to the equirectangular format video, but when it’s wrapped back into a sphere it might be too big to read. Experiment!
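One way to reason about overlay size before experimenting: on an equirectangular frame, horizontal pixels map linearly to longitude, so a quick calculation tells you how many degrees of the viewer’s surroundings your title will span once wrapped. This is a rough sketch that ignores the vertical warping away from the middle:

```python
def overlay_angular_size(overlay_px, frame_width_px):
    """Horizontal angle (in degrees) an overlay subtends once the
    equirectangular frame is wrapped back onto the sphere.

    Each pixel of width covers 360/frame_width degrees of longitude
    (measured at the equator), which is why even a modest title eats
    a surprising slice of the viewer's field of view.
    """
    return overlay_px / frame_width_px * 360.0

# A 480-pixel-wide title on a 1920-wide equirectangular frame spans
# 90 degrees of longitude: a quarter of the way around the viewer.
print(overlay_angular_size(480, 1920))   # → 90.0
```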

4. Export

In Premiere: Uncheck “match sequence settings”. Use H.264 encoding, and try “match source”. Make sure the output settings match the source settings, with a ratio of 1 by 2 (no black bars!). If they don’t match, go to the Video tab and change stuff ’til it does.

In iMovie and Final Cut: The default settings when you “share” as a file should work. Don’t use the YouTube option even if you want to put your video on YouTube, because we need to add metadata to our file before uploading.

Special YouTube Instructions

1. Metadata

Right now, YouTube has no user-accessible checkbox or setting on its platform to let it know that your video is spherical. So if you want it to show up in the spherical player, you have to tell it “I’m spherical!” by adding some metadata to the video file itself.

You’ll have to download the special 360 metadata app. When you open it, a window to select your video will open automatically. Select, then hit “inject and save.” We recommend naming it something specific with “data” or “metadata” in the name, so as not to confuse it with the original. The app will then create a new file almost instantly, that is the same as the old one except for the metadata and name.

Follow the instructions here:

They also provide a python script that supposedly inserts the metadata, but that hasn’t worked for us on either Mac or Windows.

2. Upload and wait

Upload your metadata-injected video as normal.

Unfortunately, YouTube can take a while to put it in spherical mode, with it just showing up and going out to subscribers as a squished equirectangular video for a while first. Make your video private and wait to share it until you can confirm it is working in the spherical player, if you care about that kind of thing. Also don’t use any of the fancy YouTube tools that change the video after it’s already uploaded.

View Your Video

1. YouTube on Desktop

Click and drag, click on the grey circle with arrows in the upper left corner, or use the WASD keys. Note that you’ll have to click in the video for WASD to work.

2. YouTube app on Phones

Physically look around using the native YouTube app on modern smartphones. Android phones may have a google cardboard option to see it in VR.

3. Ricoh Theta App

The arrow key controls are nice for looking around your video. No VR options, but good for a quick check of what things look like, as long as it’s already open from stitching.

4. eleVR Player

For viewing in VR on a computer with a headset like the Oculus Rift, use in a webVR browser (see if you have a Rift but no webVR).

Also works with stereo video in VR, or without a headset in a normal browser, no need to download any special viewing app. Just go to the link and click on the folder icon in the bottom right to pick your video, then use WASD to move your view and E/Q to rotate.


Make and share many spherical videos! There’s so much ground to cover and so many things to try. Anything you do will add to the creation of this new medium, whether it works or not, so go ahead and experiment and share your results.

Examples of our own Theta results can be found in our vlog playlist on YouTube, or on our Downloads Page.



CG & VR Part 2 – VR Rendering in Maya

posted in: Uncategorized | 0
Finally, Part 2 has arrived! It’s time to start learning how to render CG scenes as equirectangular stereo-spherical images using Maya!

The purpose of the following tutorial is to give you some of the basic knowledge to get up and running with results in the least amount of time. While there are other methods for rendering stereo-spherical content, I chose the tools I’m using because they currently seem to provide the most turn-key solution.

Hopefully you’ve taken a look at Part 1 to get a general idea on how stereoscopic-spherical cameras work in the realm of CG, but you’ll be able to follow this tutorial without any trouble if you haven’t.

Elijah Butterfield – Intern


Maya 2016 – Modeling (There’s a Free 30-Day Trial)
Mental Ray for Maya – Rendering
Domemaster3D Maya Plugin – Stereo-Spherical Camera Creator
Photoshop – Combining Rendered Images into Over-Under format (Again, there’s a Free 30-Day Trial)
Firefox Nightly – VR Enabled Browser to view our Renders

Photoshop Over Under Formatter Action


Once you have these resources downloaded and set up, you’ll be ready to start!

If you already have a scene put together, feel free to use that. In my case, I’ll be using a scale model (accurate down to the quarter inch, as they say) that I made of our office space.

Maya Tut 1
The Office – WIP



Step 1 – Camera

Our first step is to create our stereo-spherical camera rig.

To do this, select the Rendering Menu, then select Domemaster3D > Dome Cameras > LatLong Stereo Camera

Maya Tut 2



You’ll now see that a set of three cameras has appeared in the center of your grid: this is our stereo-spherical camera rig. Notice that the cameras appear very small in relation to my scene; this is because I have my scene’s units set to Feet instead of the default Centimeters. If you encounter this, you can increase the size of your cameras without altering the camera separation by increasing the Cam Locator Scale setting in the Channel Box.

Maya Tut 3


Step 2 – Camera Settings

Now let’s open up the Attribute Editor and take a look at some of the basic settings we can tweak on our camera rig.

With the camera rig still selected, open the Attribute Editor, navigate to the Center_LatLong_Stereo tab, and select the LatLong Stereo Shader section.


Field of View: These sliders control what range of the scene the camera will render in the X (Horizontal) and Y (Vertical) axis. I’m going to leave these at default, because a 360° x 180° image is what we’re trying to achieve.

Camera Separation: This slider determines the distance between the left and right cameras. In most circumstances you want this to be an average interpupillary distance of about 2.5 inches (in my case, 0.212 decimal feet).

Zero Parallax Distance: The Zero Parallax Distance is located where the cameras’ lines of sight converge, aka the focal distance.

Objects located at the Zero Parallax Distance will not have any 3D effect, while objects located in front of the Zero Parallax Distance have Negative Parallax and will appear to pop out of the screen, and objects behind the Zero Parallax Distance have Positive Parallax and will appear to be further away.

I have this attribute set to 20 Maya Units, which equates to 20 Feet with my current settings.

Zenith Mode: Leave this box unchecked for now. This setting allows our camera rig to work in either a horizontal (unchecked) or vertical orientation (checked).



Maya Tut 3.5

For full documentation of the Domemaster 3D plugin, please take a look at the Domemaster 3D Maya GitHub Wiki pages.

Now that we have our camera rig all set up, it’s time to see the fruits of our labor. Let’s configure our render settings!


Step 3 – Rendering

The first thing we need to do is tell Maya which camera we want to render with.
In the Rendering Menu, select Render > Render Settings.

Maya Tut 4



In the Render Settings Window, select Renderable Cameras > Renderable Camera > LatLongStereoCameraX (Stereo Pair).



Now we need to set the resolution for the images that are going to be output.
The standard aspect ratio for stereo-spherical images is 2:1. However, we’re going to render each eye in a 4:1 aspect ratio instead and then stack them into a 2:1 image in post. Rendering in 4:1 instead of 2:1 cuts our rendering time in half and saves us from having to scale the images later on, which is typically a lossy process.

Here are a few 4:1 resolutions to try:

1k Image – 1024 x 256 per eye
2k Image – 2048 x 512 per eye
4k Image – 4096 x 1024 per eye
8k Image – 8192 x 2048 per eye


In the Render Settings Window, select Image Size and enter your desired resolution in the Width and Height dialogue boxes.

Maya Tut 6


In the Rendering Menu, select Render > Batch Render.

Maya Tut 7


Once the render is complete, your left and right images will be in two separate folders located in the images directory of your Maya Project folder.
In Windows, the default location is C:\Users\UserName\Documents\maya\projects\default\images

You should have two images like the ones below.

Left Image
Right Image



Step 4 – Combining Images

Now we’re going to move into Photoshop to combine the two images into the Over Under format. Keep in mind that Photoshop is not required for this step, and you can use other image manipulators such as Gimp to achieve a similar outcome.
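If you’d rather script this step than use an action, the core operation is just stacking the rows of the two renders. A minimal pure-Python sketch of the idea (a real pipeline would use an image library; the left-on-top ordering here is an assumption, so check which eye your player expects on top):

```python
def over_under(left_eye, right_eye):
    """Stack two equal-width eye images into the Over Under format,
    left eye on top and right eye on the bottom.

    Each image is a list of pixel rows; stacking two 4:1 renders this
    way yields the 2:1 frame described above.
    """
    if len(left_eye[0]) != len(right_eye[0]):
        raise ValueError("eye images must have the same width")
    return left_eye + right_eye

# Two one-row-high "renders" become a two-row over-under frame.
frame = over_under([["L", "L"]], [["R", "R"]])
print(frame)   # → [['L', 'L'], ['R', 'R']]
```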

To simplify the Over Under formatting process, I’ve put together a quick Photoshop Action that will format and save our image for us.
You can download it here if you haven’t already.


With Photoshop open, press Alt+F9 to open the Actions Pane, click the pane’s Menu Button, and then select Load Actions.

Maya Tut 8


Navigate to where you saved the Action file (VR Stuff.atn), select it, and click Load.

You’ll now see that there is a folder named VR Stuff in the Actions Pane, and inside that folder is an action named Over Under.


Select the Over Under Formatter action and click the Play Button to start it.

Maya Tut 9


You will now be prompted to open an image file. Navigate to where your Maya renders are saved, select the left image, and click open. Once you’ve done this, you will be prompted again. Select the right image and click open.

After a few moments, you’ll see the images have been placed in the Over Under format. Save your image.




Congrats, you’ve just (hopefully) rendered out your first stereo-spherical image! Now go grab your HMD, open up Firefox Nightly (or just regular Firefox if you’re using a phone and Google Cardboard), and navigate to the eleVR Picture Player so you can take a look at your render in all of its stereo goodness.


Once you’ve made your way to the Picture Player, click on the Folder icon in the bottom left corner, navigate to the image you saved from Photoshop, and click Open.

Maya Tut 10


Click the Equirectangular menu and select Equirectangular 3D from the list.

Maya Tut 11


If you have your HMD already set up, you should now be able to bask in the glorious spectacle of what you have just achieved.


CG & VR Part 3 – Spherical Compositing in Maya

CG & VR Part 1 – Rendering Challenges

posted in: Uncategorized | 0


Hello, World. My name is Elijah Butterfield, and I am eleVR’s very first intern! I am a tech instructor with a passion for mobile app & game development, and I am also a VR enthusiast. With a background in 3D modeling & animation, video game design, and CG environment creation, I recently published an educational history VR Google Cardboard app on the Play Store.


The purpose of this blog post and its subsequent parts is to give a brief overview of a few ways to take your computer generated environments and render them in a VR format. I’ll be briefly covering VR rendering for cinematic use with Autodesk Maya in the form of short tutorials.

Elijah Butterfield – Intern



Stereo Rigs

Before we launch into any hands-on stuff, we’re going to explore what makes pre-rendering CG scenes in a stereo-spherical format for cinematic use with a software package like Maya a bit more challenging than rendering them in a mono-spherical format. To start with, mono-spherical images are relatively simple to produce in a digital environment, as they are shot under the same principle as in the physical world: with a single camera rotating around a fixed pivot point.
See Figures 1.0 & 2.0.


Figure 1
Figure 1.0
Figure 2.0
Figure 2.0 – Mono-Spherical Render in Equirectangular Format



Stereo-spherical content is a bit tricky though, as it needs to be shot/rendered with two cameras, each one shooting different footage for each eye. Initially, this might seem simple. All we have to do is take two mono-spherical images (like Figure 2.0) that were taken side-by-side and use one for each eye, right?


Well, almost.


Because a mono-spherical image is shot by a camera rotating around its own pivot point, its view is that of a single eye looking in all directions. If we were to apply this to the cameras on a stereo rig, it would be the equivalent of our eyes spinning around in their sockets, which is clearly not how they behave. The result would be an image where we see a stereo-3D effect in the direction the cameras were initially facing, and a cross-eyed effect in the opposite direction due to the cameras’ viewpoints being swapped when they’re turned 180 degrees. See Figures 3.0 & 4.0.


Figure 1
Figure 3.0
Figure 4.0
Figure 4.0



To get around this headache-inducing effect, we need to create a camera rig that behaves the same way our eyes do in relationship to our heads. This means our two cameras need to rotate around a single shared pivot point with our preferred eye separation as the distance between the cameras. See Figure 5.0.


Figure 5.0
Figure 5.0
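The shared-pivot idea can be sketched numerically: as the rig yaws, the two cameras orbit the pivot while the baseline between them stays perpendicular to the view direction. A minimal sketch (the coordinate convention and the 6.4 cm separation are illustrative assumptions):

```python
import math

def stereo_rig_positions(yaw_deg, separation):
    """Positions of the left and right cameras of a stereo rig that
    rotates about a single shared pivot at the origin.

    At yaw 0 the rig looks down +Y; the cameras sit half the
    separation to either side, and the baseline rotates with the view
    direction. This is what keeps the stereo from going cross-eyed
    when you look behind you.
    """
    half = separation / 2.0
    yaw = math.radians(yaw_deg)
    # Direction of the baseline (always perpendicular to the view).
    bx, by = math.cos(yaw), -math.sin(yaw)
    left = (-half * bx, -half * by)
    right = (half * bx, half * by)
    return left, right

# After a 180-degree turn the two cameras have exactly swapped places,
# which is why two fixed mono-spherical captures go cross-eyed behind you.
print(stereo_rig_positions(180.0, 0.064))
```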



Camera/Stereo Configurations

Now that we’ve covered the basics of how stereo camera rigs are created and how they move, lets take a look at stereo configurations that we could apply to the cameras themselves.
The three primary configurations are Converged, Parallel, and Off-Axis.



A converged stereo rig consists of two cameras toed-in to focus on a single plane in space known as the Zero Parallax Plane, located at the rig’s focal distance.

However, when the two angled views from a converged stereo rig are displayed on a flat surface, you’ll notice what’s called Vertical Parallax, or keystone effect, in the projections of each eye.

This is caused by trying to display the offset perspective of each camera onto a single screen that is not perpendicular to either of the cameras.

This method can cause eye-strain due to the distortion and objects not converging seamlessly.


Converged View





As you may have guessed from the name, Parallel stereo rigs consist of two cameras mounted parallel to each other.

This configuration may get rid of our distortion/keystone issue, but it immediately introduces the problem of our Zero Parallax being stuck at infinity. This means everything in our scene will appear to pop out of the screen.

We can fix this in post by artificially adding a convergence point using a technique called Horizontal Image Translation, but this involves cropping down our images and is a time consuming process.

We’re better off avoiding this altogether and using a different configuration.



Parallel View



Off-Axis rigs are generally the most commonly used stereo camera rigs, as they provide the best of both worlds from the Converged and Parallel set-ups:

They consist of two parallel cameras, eliminating any headache-inducing Vertical Parallax (keystoning), and the cameras have asymmetrical frustums, which lets us control our Zero Parallax plane.

One drawback of this method is that, because the cameras are parallel, objects at infinity will have the same disparity as the rig’s interpupillary distance. This means that objects in the very far distance won’t fuse when looked at.

Despite this, Off-Axis stereo rigs typically provide the best overall stereo viewing experience.
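For the curious, here’s a sketch of the usual off-axis math (following the construction Paul Bourke describes; the variable names are mine): two parallel cameras whose view frustums are sheared so that both exactly cover the screen at the zero parallax distance.

```python
import math

def off_axis_frustum(eye, fov_deg, aspect, near, convergence, ipd=0.064):
    """Asymmetric near-plane bounds (left, right, top, bottom) for one eye
    of an off-axis stereo rig. eye is -1 for the left camera, +1 for the
    right; fov_deg is the vertical field of view."""
    top = near * math.tan(math.radians(fov_deg) / 2.0)
    bottom = -top
    half_width = top * aspect
    # Shear each frustum toward the other eye so the two frustums coincide
    # on the zero parallax (convergence) plane.
    shift = (ipd / 2.0) * (near / convergence)
    left = -half_width - eye * shift
    right = half_width - eye * shift
    return left, right, top, bottom
```

These bounds plug straight into an asymmetric projection (glFrustum-style); the two eyes’ frustums come out as mirror images of each other.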


Off-Axis View



For more detailed and in-depth information on these stereo configurations, take a look at these resources:

vfxIO – Parallel vs. Converged
Paul Bourke – Calculating Stereo Pairs – Depth Positioning



Now that we have our basic stereo camera rig configuration all figured out, we can move into rendering out some stereo-spherical images. However, this is a slightly complicated process, because now as we rotate our stereo rig, our cameras aren’t in a fixed position anymore. How are we supposed to generate a single still image when our cameras are moving around?


For the software I’ll be using, we have two options. The first is the ‘traditional’ way of rendering CG scenes into stereo-spherical images, a fun process called Strip Rendering. We rotate our stereo rig in increments of 1° and, for each eye, render out a 1°-wide by 180°-tall strip of the scene at each step. At the end of the render, this leaves us with 360 slivers of pixels from each camera that we then have to stitch together into our left and right images. While this is a viable option for single frames, it can make any sort of animation project unfeasible due to how labor-intensive it is. For more information on this method, I recommend taking a look at this article from Paul Bourke.
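The schedule of render jobs for strip rendering can be sketched like this (a simplification; a real renderer also needs the per-strip camera offsets and the final stitch):

```python
def strip_render_schedule(strip_deg=1):
    """Yield (yaw_degrees, eye) render jobs for strip rendering: rotate the
    stereo rig in strip_deg increments and render one strip_deg-wide,
    180-degree-tall strip per eye at each step. The 360 // strip_deg
    slivers per eye are then stitched into left and right
    equirectangular images."""
    for step in range(360 // strip_deg):
        yaw = step * strip_deg
        yield yaw, "left"
        yield yaw, "right"
```

At the default 1° strips that’s 720 renders per frame, which is why the method becomes unworkable for animation.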


Our second option is an awesome plugin that lets us render out stereo spherical images without (most of) the hassle! Thanks to visual effects artist Andrew Hazelden and his Domemaster 3D plugin for Autodesk Maya, 3DS Max, and SoftImage, we can render equirectangular stereo-spherical images in an animation/time friendly way. This is the method I’ll be using in Part 2 of this post, where I’ll cover basic stereo-spherical rendering in Autodesk Maya.


CG & VR Part 2 – VR Rendering in Maya


Spherical Cinematography 101: Scale

posted in: Uncategorized | 0

This is the second post in a series on VR editing. The first was ‘The Choreography of Attention’.

Let’s start this off by looking over the video piece I will be referencing here. “A Journey of Self Discovery” is a recent video, available on our downloads page for those with headsets and embedded above. Play it first, then read on.

In the last post I argued that VR is a somatic medium and that any wearable medium but especially head mounted immersive experiences should inherently take the body and its knowledge as the axis of focus. But how exactly are we supposed to do that? This post is going to discuss one way: building off the earlier and by now well established vocabulary of filmmaking to think about how body positions that are simulated with particular camera positions and framing can be instead realized physically for the VR viewer. In particular we will be focusing on scale.


Scale Technique 1: The Up and Down


Scale is no stranger to traditional filmmaking. Even though every frame of a film is exactly the same size, both camera distance (extreme close-up to establishing shot) and camera inclination (high- and low-angle shots) are used in flat films to simulate the sensation of scale changes. Take this frame from one of my childhood favorites: Matilda (1996).



Matilda (Film)

From this vantage point the older woman looms over the small girl, giving her an air of authority and power. Theater too uses inclination to denote power as in this still from the musical version of Matilda set for the stage. The theater audience’s relationship to the overall image does not shift as it does in the film example but here we can project ourselves in the place of either character to similar effect.



Matilda (Play)

So instead of thinking like a cinematographer positioning cameras, imagine yourself in the place of the camera or the girl on stage. You are 3 to 4 feet tall, and the other character’s position in space requires that, to see them, you must look up, tilting your head back, elongating the front and shortening the back of your neck. That’s a lot of words to try to elicit a simple sensation, and both the proscenium theater production and the flat film are similar attempts. Words and images can only describe the action of tilting your head, or show an image that might result from that position; in VR the body must actually assume the required position. Jumping jacks to fetal position, when watching a flat film the viewer’s body position does not affect what is seen on screen. In VR, however, instead of predetermining a viewer’s gaze with fixed, framed shots, the viewer is allowed the experience of actually looking up to (or down on) someone. Your neck muscles actually have to do a thing.

This is a very simple technique. If you place the camera at eye level with a subject, or at an average human height, the viewer will look around the world feeling like a basically average adult human. But if the camera is centered above or below the eye line of a subject, the placement introduces a feeling of power dynamics to the viewer. In VR, the camera’s height can simply and effectively communicate the power dynamics usually associated with high and low camera angles. No need to actually frame the old lady; users will find the action themselves.

And that’s exciting, not scary! Creating moments of immersion that hold within them positions of serendipity, allowing users to ‘happen’ upon a vista, will prove far more engrossing than the most beautiful establishing shot in flat filmmaking. Mass participation has already become the new normal in the rising creator economy, so why not take that notion to the obvious content-oriented conclusion and allow users not only to co-create the framing of an experience but also to learn from all those bodies wearing our content? There is meaning in the tilt of a head, in a look over a shoulder, in a lean, and not just in watching an actor make these movements on screen, but in the feeling of them in your own body. In VR these kinds of shots take the viewer’s body and its knowledge seriously. What does it mean to look up to someone? Is it a gaze of admiration and respect, or of looming threat?

For an example, near the end of the video above, enjoy a tiny motorized model carnival that lives in the bakery window display down the street from my house. What positions do you find yourself in while watching?


Scale Technique 2: The In and Out

Remember rear projection? Big screens with roads flying by behind handsome actors in fancy cars pretending to drive? This example is from the 1962 Bond film ‘Dr. No’.




Rear projection, before we had green screens and mountains of computer graphics, was the effect of choice for guys playing Bond while driving cars. Look for the scale shift, the impossible perspective: the hulking black roof of the chasing car with its wide front wheels framed perfectly, shoulder to shoulder, completing a menacing black halo around Mr. Bond. This shot layers mismatched scales for effect. Now let’s try that idea in VR.

Say, for example, your camera is tiny and you want to take advantage of that fact, because for a year now you have been working with a camera that weighed more than most infants. So you put your camera in a cupboard in an overpriced vintage furniture store, for example, or in the refrigerator between shelves of eggs and homemade ketchup. But once you get the footage into the editing suite, you realize there is a better use for this ability than just a feeling of being reduced down to ‘Honey, I Shrunk the Kids’ proportions. You can also layer scale.


So, much like the Bond chase scene, you end up with two disparate stratified scales: the tiny cupboard in the foreground and the swaying leaves outside. What to call the inner and outer Russian dolls? I don’t know; the ship and the sea have a nice mental image to them, but metaphor usually loses out to utility when it comes to naming techniques.


Two bits of practice now one bit of theory

As I mentioned, the ability to layer scale is afforded by the recently shrunken camera size. These kinds of tandem advancements, one part hardware, one part aesthetic practice, will continue to be common in the VR lineage, but it’s valuable to keep in mind that the mass production of representational technology cannot be conflated with the development of a nuanced medium.

Take, for example, the legend of the Paris Cafe from film history. The story goes: the Lumière Brothers’ first public film screening was held on 28 December 1895 in a Paris cafe. The pair showed ‘L’Arrivée d’un train en gare de La Ciotat,’ a one-shot film of a train coming into a station. The images so overwhelmed the uninitiated audience that they ran screaming to the back of the room to escape the crushing train. And of course this urban legend is often dug up to connect early film with early VR by comparing it to the dozens of videos online of people losing it while wearing an HMD. As Janet Murray wrote in her book Hamlet on the Holodeck (a book I highly recommend to those interested in VR media theory): “The legend of the Paris Cafe is satisfying to us now because it falsely conflates the arrival of the representational technology with the arrival of the artistic medium, as if the manufacture of the camera alone gave us the movies….In the first three decades of the twentieth century, filmmakers collectively invented the medium by inventing all the major elements of filmic storytelling, including the close-up, the chase scene, and the standard feature length. The key to this development was seizing on the unique physical properties of film: the way the camera could be moved, the way the lens could open, close, and change focus, the way the celluloid processes light, the way the strips could then be cut and reassembled.”

Which is all just to say: in inventing VR editing we need only focus on the unique physical properties of VR, try everything, and be satisfied with baby steps.




The Choreography of Attention

posted in: Uncategorized | 0

At nearly every VR-related conference I’ve been to, someone either on stage or in discussion steadfastly claims: you cannot edit in VR. Usually this is followed by a quip about teleporting audience members from place to place and how ‘just not real’ that is. I’m not sure how this strange notion got started, but let me assure you: it’s all lies. Lies, I tell you!

Seriously, people. You have to stop saying this. Not only is it not true, but people believe it. Not knowing how to do a thing is not the same as the thing being impossible. Stop biasing such a young medium. Recent decisions by some in the field to stick close to hardcore gamers, a timid if unsurprising move to snag the only audience thought willing to spend $600 on launch day, are bias enough. Let’s take this apart piece by piece.



Editing is a thing

When an expert says that you can’t edit in VR, what they usually mean is that the standard film language for shot types (establishing shot, close-up, etc.) has no easy analog in VR. Sure, we don’t have a codified language for the VR video editing canon yet, but can you give us a minute? There are already plenty of videos hanging around with plenty of editing in them, but it took years and hundreds of films to develop from the blockbusters of the 1890s: static one-takes of trains arriving and people walking, full of onscreen movement, but during which the camera never moved. In fact, the first film that strung together more than one take was Robert W. Paul’s Come Along, Do! (1898). Shot 1: Two people eat lunch while waiting to visit a gallery. Shot 2: They go inside and look at art. Thrilling!






VR recorders are not cameras, like mobiles are not phones

Now that I have thoroughly convinced you that VR editing is a thing, let’s talk turkey. To translate from the image above: ‘This is not a camera’ and ‘This is not a phone.’ The frame, like making a call on a telephone, has up to now been a fundamental property of this technological species. The name comes from its Latin ancestor, the camera obscura. It meant “dark chamber,” and it used a tiny pinhole of light to project an image of the world outside onto a flat surface opposite the hole. That flat frame and its ability to crop the world into two-dimensional chunks has been with the camera ever since. If this is true, at least in so far as it can be trusted axiomatically in this argument, it means that VR video recording devices are fundamentally not cameras. But if not cameras, then what?



VR is a somatic medium

You might notice that nearly every filmic shot type is defined by its relationship to the main subject being recorded. Close-up, medium shot, low angle, long shot: they all generally define how much of the subject’s body is visible in frame. But VR pieces, in much the same way participatory theater works depend on the unique but guided actions of an audience member, take as their core mechanic the action and position of the body. The audience is not a viewer but a wearer; the content is not seen but worn. VR is choreography over camera work.

We know things with our guts. We encode emotion with the sensation of our bodies. Our body language and the volume of space our positions occupy change not only how others see us but also how confident or anxious we feel. There’s content in there. Dance has until now been the primary artistic medium of somatic information, but with wearables and head-mounted displays feeding back an ever-increasing stream of body-specific data, creators can now consider the body of the audience, not just the body of the performer.

This makes the editing and cinematography of VR less the manipulation of cameras and frames and more the choreography of attention. We understand at this point that what happens ‘on screen’ happens to the viewer. When I reach down and grab the camera in a piece shot on the Ricoh Theta, at least some of the audience will feel that I just grabbed them.

And there are roadmaps for teaching and assessing physical information from somatic educators of all kinds: dance and fight choreographers, basketball coaches, music teachers, yoga instructors; they all have their systems and orders. Let’s take yoga as an example. In a well-taught class there is an opening or scene-setting phase in which participants mentally transition from the world outside to the yoga mat, usually involving brief storytelling or participatory music. Next comes the warm-up, in which participants physically transition. Bodies curled inward from a day spent hunched in the driver’s seat of a public bus, or lopsided from the weight of a child carried on one hip, are slowly reminded of the existence of joints and spines. As heat is built, this phase slowly ramps toward higher and higher intensity movements and positions, breaks are had, intervals of high and low are alternated through, and then, when the time is nearly up, the ramp flows back toward slow cooling stretches and breathing. The entire format is often closed with a reference to the start: a communal sound, a thank you, and a moment of silence before returning to the outside world.

This format is all about guiding people through an immersive experience, body and mind. The participants need not know any of this structure is even happening. They just follow along and trust that the teacher knows what they are doing. I’m not claiming every VR experience needs to follow this exact format, unless you know you want to make VR yoga classes, but viewers should be able to follow along with any immersive experience with the knowledge that their somatic experience has been considered and the content knows what it’s doing. Because as a former professional dancer I can tell you: somatic mediums are hard on the body if approached unconsciously.
I don’t have a shot list for you yet. I haven’t found every nook and cranny of this inside-out way of thinking about editing and cinematography, but in the next few VR editing posts we will explore further the implications and techniques of this choreography-over-camera-work idea and give you some examples to chew on.

Relaxatron 2: I didn’t want you to miss the bells

posted in: Uncategorized | 0

If you have been following eleVR for a while now, you might remember a video from way back: The Relaxatron. We had seen a lot of roller coaster demos by that point and were ready for a little break in our happy place. It received high praise in this BoingBoing article on VR as a tool for therapy. And now, well, we are back for more. Come hang out in the park, enjoy a little people watching, and listen to the far-off sounds of a Sunday trumpeter in a duet with the glittering of church bells. If it’s not your happy place, it will be soon. Also available on our downloads page.

Don’t Look Down

posted in: Uncategorized | 0

“Down” seems to be a recurring problem in VR film. We’ve talked extensively about the theoretical problems, from stereo to stitching distance, so today’s post looks at “down” from a content perspective, surveying what people have done so far.

Here are 11 ways to deal with down!

1. Embrace the tripod

In most of our live-captured videos, you can look down and see our tripod, in all its too-close stitch-artifact glory.


This is not exactly the most professional of options, but we made an early decision to focus on content and research rather than production quality. Especially in our talk shows, which are about VR in VR, we simply leave in all our production equipment. Laptops, mics, everything, all visible when you look down.

But there might be an unexpected benefit to leaving in the tripod. Some viewers, especially those new to VR, find it disturbing to look down and not “exist”. We’ve had many people look down in our videos and comment “I am a tripod”, which perhaps is better than “I am nothing at all”.

Then there are our new Ricoh Theta cameras, which are tiny and stitch almost around themselves in the down direction, leaving a stitch line and a tiny sliver of color. They’re small and light enough that they can easily be mounted from any angle, or hand-held, with the internal gyroscope correcting the final footage to always keep down down. They’re great for very informal low-production vlogging, so often the “tripod” is an arm and hand.


It wouldn’t be hard to mask out this tiny sliver of color. We also got a tip from Jim Watters that you can stick reflective tape on that part of the camera so that it absorbs the environment colors instead of showing a bright yellow sliver. But for informal purposes, I kind of love the bright yellow sliver, and for vlogging there’s something appealing and authentic to me about it being hand-held.

For non-fiction content, it’s easy to embrace the reality of production. But this attitude does not work for all things!

2. Mask out tripod

In our stop-motion animation library.gif, the “down” direction is just a grey rug with nothing going on, so it was easy to do a quick hacky job masking over the tripod using nearby rug footage. This gif is meant to be seen while sitting down on a similar rug (we did a couple installations of it including a rug and library objects), and we didn’t want the tripod to get in the way of the viewer feeling the real rug, or remind the viewer of the reality behind the whimsically moving objects.


Ideally, library.gif would have had fun stop-motion stuff going on straight down as well as everywhere else, but because the camera is set so low to the ground, it’s just too close to get good stitching with a multi-camera setup (this was done with 14 GoPros).

There is also the question of mono vs stereo. All around you, the space is animated in stereo. But down would have to be mono (or else have terrible disparity problems), which might be jarringly different if there were actual content there. Some people barely notice stereo vs mono, or don’t have stereo vision at all. But the occasional person is very sensitive, and will look down and report that it looks like the ground drops infinitely away from them as it goes to mono. The rug is fuzzy and smooth enough that you can’t really focus on it or get distracted by it, avoiding all the problems.

Google street view also masks out their tripod using footage taken of the ground under the car in nearby frames. Of course, in their case the tripod is an entire car and the process is automatic, leaving plenty of artifacts, but it works well enough that most people don’t even think about the fact that there should be a car when they look down. The car’s shadow is often there though, following you in a ghostly manner. Google street camel cam is especially strange, with a trail of footprints leading to nowhere.

Screenshot of Google Street View

3. Offset tripod

In “Don’t Look Down!”, Emily uses a “selfie stick” (in this case also an everywhere-elsie stick) to hold the camera off the edge of a cliff. The “bottom” of our spherical camera is, then, off to the side, not blocking the down direction.


There’s a lot of VR out there that plays with people’s fear of heights, putting you on the edge of a cliff or dropping the floor out from under you. To get that effect, you need very embodied VR, where the person feels they are in a persistent space and the only changes in view are those made by their own motion. In the above video, the camera moves around, giving a very third-person camera feeling from the beginning. Being in black and white adds to that.

JauntVR‘s cliff footage for North Face also offsets the bottom of their camera, which in their case is much more imperative since their camera does not film a full sphere. This lets them save on number of cameras and avoid stereo/mono stitching problems, but you can see why their next camera is reportedly going to capture a full sphere.

Screenshot of The North Face: Climb

But let’s take a detour and ask: if you can’t film a full sphere, what do you do with your empty bottom space?

4. Brand Your Bottom

In the above screenshot, you can see that JauntVR has branded their empty space with, in this case, the North Face logo (as well as their own). All of their video content I’ve seen has this sort of branding, which I find distracting, but makes sense for JauntVR given that most of their work is with and for brands.

Our Giroptic 360cam also does not film quite a full sphere, and the footage comes out with a branded bottom that also marks it as a development kit:


Because the Giroptic camera is a consumer camera and we can do what we like with our own footage, we can just remove their branding and put something else there (or nothing at all). So what do we do with that empty space?

5. Do art to it

In 9:72, filmed with the Giroptic 360 developer camera, we reflected the rest of the sphere into the empty space at the bottom. If you look down, you get a bubble view of everything.


Relatedly, though the opposite problem, when we filmed our talk chat show thing episode 3 the upwards-facing camera did not function, leaving us with a hole in the top of our video. Emily fixed this by cutting out the ceiling entirely and using footage we’d taken in a bamboo forest.


6. Put Equipment There

If you’re going to leave a hole at the bottom of the video anyway, consider just how much equipment you could hide down there, including the camera itself.

The larger the distance between cameras, the harder it is to stitch the footage. There’s a limit to how small you can make the radius if you want stereo, but for mono, ideally all the lenses would be in the exact same spot.
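A back-of-the-envelope way to see why (illustrative only; real stitching error depends on lens geometry and the stitcher): the parallax a stitcher has to hide between two lenses offset from the rig’s center grows as the subject gets closer.

```python
import math

def stitch_parallax_deg(lens_offset_m, subject_distance_m):
    """Approximate angular disagreement, in degrees, between two lenses each
    offset lens_offset_m from the rig center when viewing a subject
    subject_distance_m away. The bigger this angle, the worse the stitch."""
    return math.degrees(2.0 * math.atan(lens_offset_m / subject_distance_m))
```

A GoPro-sized rig with lenses roughly 3 cm off center sees over a degree of parallax on a subject 3 m away, while the same subject at 30 m gives about a tenth of that.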

Omnicam360

The Omnicam360, developed by research organization Fraunhofer, uses mirrors so that each camera, while virtually in the exact center of the panorama, is actually arranged vertically below the mirrors.

 It’s fun to imagine using a similar technique to film simultaneously with many RED cameras, which are otherwise too bulky to fit close together.

7. Put Player Controls There

Some players assume that “down” is a dead zone, and use looking down to control things like exiting the video. This makes sense as a short-term hack to get around the limited controls of things like the Google Cardboard. For some content this doesn’t work, as it prevents you from looking down in videos where there is actually something to look at, but for certain types of content it might be nice to reserve space for things outside the content, such as player controls, links, metadata, or whatever else.

8. Hang your camera

Pretty much all of what we’re talking about is live-captured 360 content. There may be post-production, but the sphere of images is captured simultaneously, which will necessarily include any visible equipment. If there’s no hole in your footage, that’s going to include whatever’s holding up your camera.

Chris Milk and Beck’s “Hello Again” minimizes this by hanging the camera with what looks like wire or fishing line.

Screenshot from “Hello Again”

Doing this requires multiple lines and anchors in different places so that the camera doesn’t rotate, and it’s certainly not the most convenient or flexible option for all cases, but there are many situations where I think this is ideal.

9. Live-Rendered Hybrids

Another way to go is non-simultaneous capture: shoot each direction one at a time and composite later. This solves a lot of problems and seems ideal when filming anything with scripted action, or where the important action happens in one direction only. You can hide all your equipment, lighting, tripod, etc., behind the camera’s field of view just as in traditional filming. For the purpose of this post, it wouldn’t make for a very interesting research question, but it does get interesting when you go further than simple non-simultaneous capture and actually live-render objects in the view.

One great example of non-simultaneously captured content plus live-rendered fixing of “down” is Felix and Paul’s Strangers with Patrick Watson.

Felix and Paul: “Strangers”

The video itself is beautifully stitched, non-simultaneously captured video using two REDs, and definitely involved quite a lot of post-production. But the most novel part to me was that, in the GearVR version, when you look down you find that you are on a rendered futon, composited on top live in the player, that responds to head motion to create a parallax effect.

I think it’s a brilliant way to solve both the problem of covering the tripod and the panoramic twist problems of getting down to look good without stereo disparity. It’s also very subtle; probably most people don’t notice it.

In Emily’s piece, video and live-rendered objects are combined in a more obvious and content-driven way. There’s a dome of video above, while an entire half-sphere of “down” is replaced with a live-rendered ground. There are also 3d-modeled creatures wandering around, and you can use the arrow keys or a gamepad to wander around the space.


This is an art piece that is not trying to simulate the “down” direction as it appears in the filmed space, but it hints at how one might go about solving the problem for regular spherical video. Flat floors are very easy to render live with proper stereo, as are many other static objects, and it’s interesting to think of the possibilities of combining video with rendered objects, even things as simple as a flat wooden floor.

10. Fake Body

3d games avoid all the problems of capture and stitching, so “down” can be anything. “Presence” is the buzzword, and many games strive to make you feel that you are not a disembodied camera or 3rd-person viewer, but a person embodied in the scene. I’ve seen many VR games where, when you look down, you see the static neck-down body of a generic man, sitting perfectly still.

Right now the only fake body I’ve seen in video is in Kite & Lightning’s Insurgent VR experience. Kite & Lightning are doing really interesting things with combining video and 3d rendered environments in both live-rendered experiences and rendered out as video. Insurgent VR was created in Unreal Engine 4 using live captured video as well as 3d models, then rendered out as a spherical video from the location where the head of this headless body would be.

Screenshot from Kite & Lightning’s Insurgent VR on Android

We’ve idly considered how fun it would be to use a headless mannequin as a tripod, but haven’t taken any steps in that direction. Some people like having a fake static body. Personally, I feel less embodied when I look down and see a static body different from my own than when I see no body at all. But I’ve also seen very convincing demos where my motions are tracked and move a VR body, making it “my” body whether it looks like me or not, which will definitely be the thing for embodied games! Perhaps the VR video players of the future will allow you to import an avatar and live-render your body below you, on top of the video.

11. Real Body

So far, we’ve seen several different people’s home-made VR camera helmets and head-mounts, so that their own body is seen in the down direction. Now there are several companies starting to produce these sorts of cameras, though as far as I know none are available yet.

This sort of head-mounted camera seems ideal for personal embodied experiences, and we’ve seen it used in vlogging, action shots, and erotica, as with Natacha Merritt’s VR work.

from upcoming work by Natacha Merritt (link may contain nudity)

Real-body first person VR has a lot of potential for empathy, because you are seeing the body of someone you know is a person, sharing their real experiences. The most well known thing in this space right now is probably The Machine to Be Another. This project by BeAnotherLab, along with MIT, is doing some interesting research as well as art, with switching the video feed between one body and another.

Screenshot of The Machine to Be Another

The Machine to Be Another is done live, with one “performer” who copies the movements of the “user” to allow the user to feel they are experiencing being in another body. So it’s not exactly in the category of video questions this post is about, but is perhaps one of the more interesting answers to what should happen when you look down.


Genres emerge and separate through a feedback loop: content creators set up expectations, audiences come to hold those expectations, and content creators in turn wish to fulfill them in order to communicate effectively (further reinforcing the expectations). For VR video, one of those sets of expectation/content interactions will involve the treatment of “down”.

It will be interesting to see how viewer expectations change. Right now, many people don’t bother to look down at all. Some people, when first introduced to VR, face forward the entire time and need to be taught to look around, and whether they learn to keep looking around depends on whether there are things to look at. Others find their embodiment, or lack thereof, to be an important sticking point.

People will continue to try all sorts of things and I don’t think any one technique or set of expectations will win out, but I expect different genres of VR film will emerge with different standards. For some artists, having complete control over “down” will be essential. For some types of content, a player that lets you import your own seating and avatar seems like the right thing. Some types of content will gravitate towards embodied viewing, others will gravitate towards 3rd-person disembodied viewing.

I’m looking forward to seeing the many more creative techniques people will come up with, as an increasing number of people gain access to the tools required to make VR video.


P.S. all our own videos referenced are available on our downloads page and YouTube.

Are We Living in a Virtual Reality?

posted in: Uncategorized | 0

I woke up in the middle of the night and could not sleep.

Got up to work, figured I’d do some writing. Late at night is a good time for writing.

I went through our blog, reread some old posts, and thought, I am tired of writing tech posts about shallow effervescences of new technology! What I really want to do is draw from my other life as a philosopher and talk about some of the deep and interesting questions virtual reality confronts us with.

Yes, late at night, when there are no distractions, is the perfect time to consider the fundamental truths of the universe, and questions of what is real.

And so, I started a post titled:

Are We Living in a Virtual Reality?

And I began to think:

There are different interpretations of what this question might be asking.

1. Are We Plugged Into “The Matrix”?

Are we in The Matrix?[1] A virtual world, embedded in a real world that is very much like this one? Could it be that our “real” brains and bodies, which are very much like our virtual bodies, have had their sensory input subverted or bypassed by humans or humanoid aliens or a human-created artificial intelligence?

Given that we are human-like creatures who are starting to create virtual worlds very much like our own world, meant to subvert our real brain and body’s senses, this possibility suddenly seems much less far fetched than it used to.

One of the more unrealistic things about “The Matrix” is that the simulated reality is so close to actual reality. Human brains are malleable enough that there is no reason this should be true, unless you need backstory for a power fantasy movie plot.

We are still far from being able to hook up a grown human, raised in our culture and with the usual array of senses, to a VR device and have them not know the difference. But even now it is within our technological reach (though not our ethical one) to hook up a human baby to a virtual world in such a way that they would never, ever, think to question their reality as they grow up.

It would be a reality significantly different from ours; we might have to mostly paralyze them, for example, and sustain their bodies through a feeding tube and intravenous fluids, as in The Matrix. But unlike The Matrix, with our current VR technology there can be no sensation of eating, no taste or smell or touch. We’d have to drug them into unconsciousness often, to do VR headset maintenance. A human growing up this way would not question any of this, and they would grow up to be a very different human than anyone our culture has produced.

We’d have to be a cruel society to do it, but it is possible, and if it’s possible, how can we discount that it might be happening to us? The human brain is so malleable. Who says reality isn’t fundamentally different from this, our simulated reality? What difficult-to-simulate senses might we be missing? What but a cruel society would, out of everything the human brain can be, choose this virtual reality for us?

In the Matrix interpretation our perceived world is virtual, but we have an existence outside of this virtual world, real bodies in the “real” world where we actually are. These real bodies need to be physically sustained in order for our virtual selves to survive.

In the related Brain In A Vat[2] thought experiment, there is still a real world with your real brain, but your brain is no longer part of your body. It is a human brain that once was part of your real human body, and it is that body being simulated, by hooking up your brain to false input (run by a computer, in this case).

In either case, it seems unnecessarily complex that we should need to eat, or breathe, or any other number of virtual actions that in theory should not affect our physical bodies. In the movie The Matrix, if you die in the matrix you die in real life, but the only reason for that “rule” is to up the stakes for the sake of drama.

This question, and idea, existed well before computers, though. In Descartes’s version,[3] the virtual world you live in comes not from an evil scientist outside your virtual world, but from an evil demon outside your virtual world. Even earlier, Plato’s allegory of the cave[4] does not require any outer forces of science fiction or fantasy, but is a virtual reality created by humans, different and limited compared to reality (much like our technology today). An important distinction for Plato’s purposes is that the shadows on the wall are of real objects.

Lastly, there’s the Holodeck variation,[5] in which our outer non-body senses are being subverted by a simulated reality, but not bypassed. The body we sense is our actual body, which is really there; the photons we see are actually hitting our eyes; the forces we feel are actual forces. There are a few different ways Holodecks “can” work (matter replication, organizing molecules with force beams, magnetic bubbles created by holo-emitters), but in all of them, the virtual part is less about whether things are physical, and more about the fact that their physical existence relies on things outside themselves.

The Matrix variations avoid matters of consciousness. We know our “real” human brains have qualia, conscious experiences of the qualities of things, so as long as our hypotheticals involve a real human brain, we don’t have to ask ourselves hard questions of consciousness.


2. Are We Sims?

Perhaps not only is the world simulated, but we ourselves exist purely as simulations as well. In this case, there is an outside world, but we don’t exist there; we have no real selves to wake up to, or bodies to be reconnected with.

The argument goes something like: “We are capable of making computer simulations of virtual universes, and future humans will be even better. If future humans make simulations of the past, wouldn’t it be an incredible coincidence if we ourselves happened to exist in the real universe, not some higher universe’s simulation?”[6]

In this case, not only are we pure computational objects with no outer existence, but our virtual existence depends on hardware in the real universe (and the simulated simulations between us and them). If the real simulation stops running, we’re gone.[7] But we can also be saved, stored, reset, and run again.
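
The simulation argument sketched above can be made crudely quantitative. This is my own toy illustration, not Bostrom's formal treatment: if each real civilization runs some number of ancestor simulations of comparable population, most minds end up being simulated ones.

```python
# Toy arithmetic for the simulation argument (an illustrative sketch,
# not Bostrom's actual formalism). Assume one real civilization and
# n_sims ancestor simulations with equal populations of minds.

def simulated_fraction(n_sims: int) -> float:
    """Fraction of all minds that are simulated, under the toy
    assumption of one real population plus n_sims equal copies."""
    return n_sims / (n_sims + 1)

for n in (1, 10, 1000):
    print(f"{n} simulations -> {simulated_fraction(n):.4f} of minds simulated")
```

Even a single simulation per real civilization already puts your odds of being real at one in two; the more simulations are run, the more the fraction crowds toward one.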

This assumes that it is possible to simulate the human mind,[8] that the brain is itself like a computer.[9] Computationalists are not sure just how it would work, but are pretty sure that features like consciousness just come along with the computations as some sort of macro effect. All we need to do is simulate brain computations, the inputs and outputs, to create an artificial intelligence that will automatically be self-aware, conscious, have qualia, etc.

Computationalism is a fashionable belief these days, probably partly because computers and computational thinking are so new, powerful, and exciting, and partly due to the severe lack of decent scientific conjectures about consciousness.

There’s also plenty of counterarguments.[10] Programming a perfect simulation of behavior does not necessarily imply that the simulation will experience and understand the world. If I were a simulation, it would not be necessary for me to “understand” anything, and yet I do. We currently have no idea how to create artificial consciousness, or what it means to understand something, and there’s no reason to think that our minds are capable of figuring out how to simulate minds anything like ours.

Generally this train of thought has an outside real world that decides to simulate us specifically, as human minds or historical figures from the past. We are created, non-accidental, and exist as individual artificial intelligences. But if forming minds like ours is really hard, that lowers the chance of us being simulated, as well as the chance that we exist in the first place.


3. Is The Universe A Computer?

It is not a terribly uncommon view among scientists and mathematicians, especially those involved with quantum physics, that the substance of the world is information, or that the universe is purely computational.

In the computational interpretation of the VR question, we still are “simulations,” in a sense, but there is no outer universe running ours. We’re as real as it gets. Or if there does happen to be an outer universe simulating ours, that universe is also just as computational, and turtles all the way down, so there’s not much point making a distinction.

Given what we know about physics, this starts to make sense. We can’t actually find any “stuff,” and the universe doesn’t behave the way we intuitively think “stuff” should.

Some believe the universe is itself a Turing machine, perhaps a giant cellular automaton.[11] Some believe the information is organized differently than in a traditional computer, perhaps containing random elements or continuous things or quantum things. But the common thread is believing there’s nothing else, no “real” reality, just information running itself.
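
To make the cellular automaton idea concrete, here's a minimal sketch (my own illustration, not from Wolfram) of an elementary cellular automaton. Rule 110 is known to be Turing-complete, so even this one-line update rule can, in principle, compute anything computable:

```python
# A minimal elementary cellular automaton: one row of cells, each 0 or 1,
# where a cell's next state depends only on itself and its two neighbors.
# Rule 110 is a proven-universal example of "information running itself."

def step(cells, rule=110):
    """One update of an elementary CA with wraparound edges.

    Each 3-cell neighborhood is read as a number 0-7, and that number
    indexes a bit of `rule` to pick the cell's next state.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 31   # a single live cell in the middle
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Run it and a surprisingly intricate triangle of structure grows out of one live cell, which is the whole point: very simple local rules, no outside "stuff," and complexity just happens.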

In this view, we are not specifically programmed as artificial intelligences, we just happened to happen. We spiralled into being as an organized subroutine of the computational universe, but don’t have any fundamentally separate self. There is no “real” version of the body, or of the mind.

But if the universe is a giant computer program, what is it running on?[12]

In the formalist view, a computer program has no inherent meaning. When the human mind is outside, writing and running a program, one might say that the human mind creates and attaches meaning, but if no one is outside it, meaning cannot exist (so says the formalist). The realist, on the other hand, might say that the computational universe, like all true mathematical objects, exists in the world independently of the human mind.

I like this question because of its connections to philosophy of math. If the universe is a computer and we experience meaning within it, then mathematical objects exist and have inherent meaning, hooray! But on the other hand, it is no longer fashionable in philosophy of math to view mathematics as a pure thing that is fundamentally this or that (and I count theoretical computer science as a type of mathematics); mathematics is much more empirical (or quasi-empirical) than we like to imagine.[13]

Then there’s consciousness. In the second scenario, we thought maybe it would be difficult for a mind to develop a strong AI of itself. But in this case, the Strong AI needs to not only be able to exist purely computationally, but it also must happen by accident as the universe tumbles along its course.

Luckily, even if it’s impossible to simulate consciousness with computation, we can still save the simulation hypothesis. I have witnessed computer scientists follow this logical train of thought to the conclusion that they themselves must not be conscious after all.


4. It’s All A Dream

I’ve written extensively about dreaming’s connection to virtual reality before.[14] We are all capable of running extremely realistic virtual worlds in our heads, and we have no idea how we do this.

Are our brains doing something like a computation, a virtual world program? Just what kind of computation are our brains doing in the first place, even when awake? If what they’re doing can be called “computation” at all?

This is almost the exact opposite of the last scenario. Instead of the world simulating us, we simulate the world. There’s no doubt that there’s a strong sense in which we do indeed experience only a simulated world, just, we often assume that this simulation is based on an outer reality that our senses interpret accurately.

Experiences in dreams can be as real as any real life experience, and the perceptions can be just as real (in fact, unconstrained by reality, things in dreams can seem even realer than reality). Many philosophers have attempted to make distinctions between real life and dreams: that pain is lessened,[15] or that dreams are more absurd,[16] or that dreams don’t actually exist,[17] or that dreams are not actually experienced,[18] and thus far, science has shut down every objection.[19][20]

It is not enough, for reality-distinguishing purposes, to say dreams can be like this or tend to be like that. If there is a true distinction between dreams and reality, it needs to distinguish in all cases.

It makes sense that dreams can simulate reality so well, given just how simulated our brain’s version of reality is in the first place. It seems incredibly unlikely that we could tell which is which, and yet we often can, even when dreams are logical or reality is absurd, and we have no idea how.[21]

We have some notion of “reality” and mysteriously believe that dreams do not fit the definition, for reasons which I suspect include causality and permanence and all the other fun things that make some quantum physicists suspicious of “reality”. If it’s impossible to tell whether the reality/dream dichotomy is true, at least we could hope that it’s consistent.

We understand very little about dreaming, but for now the idea of all the world being a dream is not incompatible with the idea that it’s also a computer program. Unlike the computational VR theory, if we are the ones running our own dream program, perhaps we have a real self to wake up to after all.

No matter what the case, this is an area where empirical science has had something to say in response to what was once pure philosophy, which is a good sign that perhaps this is a fruitful area of inquiry, rather than just a philosophical trap.[22]


5. Dream Without Dreaming

If we should consider the world as a computation without an outside computer, perhaps we should consider the world as a dream without an outside dreamer. Or perhaps we’re in a dream, but we’re not the one dreaming it. Either way we’re trapped in a dream but have no self to wake up to.

The idea of a dream without a dreamer doesn’t make much sense to me, as part of my personal definition of “dream” includes that it’s a thing you can wake up from, but I’d like to consider it for the sake of symmetry.

If the universe is computational, then things within our universe such as our minds and dreams are computational. Just as a computational mind can be simulated by a sufficiently powerful computational universe, a computational universe can be simulated by a sufficiently powerful computational mind. Anyone who accepts the possibility of a purely computational universe must also accept the possibility that we are all living in some other sentient being’s dream.

This seems more likely to me than the idea that our minds are being simulated by a standard computer; our own dreams can simulate much more convincing beings than our computers can. Why should we think hypothetical “real” superbeings are any different?

Or perhaps dreams are not virtual worlds separately running in separate heads (whether our own or hypothetical others’), but something else altogether. Perhaps we switch between dreaming and awakening without dreams being contained within a “real world”. Perhaps the perceived difference between reality and dreams is a false hierarchy, and there is no more difference between reality and dreams than there is between a dream and a dream-within-a-dream.

Perhaps Zhuang Zhou was both a butterfly dreaming he was a man and a man dreaming he was a butterfly.[23]


6. All The World’s A Stage

“Real” does not refer only to a technical state of matter or to the perception of physical objects, but also refers to an interpretation of objects or events.

Imagine you’re in a giant reality show where everyone else is an actor and all the objects are props.[24] It’s not that the existence of the world is not real, but that the meaning of the world is not real.

We have an idea of what it means for people to be “genuine” or “just acting”, though it’s not clear how hard this distinction really is, or whether all human behavior isn’t, in some way, a performance.

It is fun to imagine that a couch, as part of a set of a living room on stage, is merely acting like a couch. In a systems view, one might view the definition of objects as relying on their context in the greater “living room” and “household” system, rather than the structure of their physical (or informational) matter.

This argument might require meaning and reality to come from the human mind, and if there’s more in the world than the human mind (as would be the case if we’re living in a computational world rather than the dreamer of the world) it is extra difficult to account for where that meaning comes from.

But truth is fuzzier than we like to pretend, even when meant in a mathematical sense.[25] Somehow it seems progress in human knowledge increases our understanding, without ever once increasing what we know for certain.


Also relevant to this post are things like solipsism, the Buddhist concept of Maya, and the common religious idea that there is a realer world waiting for us once we leave this one, but it’s about time we got to the answer.


The Answer

So, back up to the outer story, in which I cannot sleep and begin writing this post (It’s the same story, of course, just as all stories-within-stories are plot devices more than true structural differences. You can exit Hamlet without separately first exiting the Mousetrap, just as you can wake up directly from a dream-within-a-dream-within-a-dream without having to wake up separately through each member of the stack).

It’s one thing to understand a theoretical possibility, but it’s another to immerse your brain in a worldview where this theoretical possibility becomes an actual possibility.

And so I tried to do just that. I leaned back and tried to really grok it, the way that if we were in a virtual world then the real world might be nothing like this one, that our bodies and perceptions might be nothing like they are here, that in fact the harder reality is to simulate the more likely it is we’re trapped within whatever’s easy. I tried to believe in the technology we’re creating, and will continue to create, that suddenly makes all this seem possible, in the same way that Mars rovers and SpaceX launches make the old science fiction of humans on Mars suddenly seem not just potentially plausible, but a definite part of the future of humanity.

And after a few minutes, I started to get it. Really get it. I could feel it.

I asked myself: Are we living in a virtual reality?

And to my vast surprise, a great conglomerate of robotic voices speaking directly into my head answered:


In the first split second when my brain disconnected from my body, all I could think was that somehow, I must have grokked unreality hard enough that the universe had answered by pulling me back to reality, and that was exciting!

Sight went first, then every other sense. I could not move my body or feel my body because I no longer had a body.

I was being pulled.

My formless self switched to wordless thoughts; strange how much my ability to think in English words seems to happen in my mouth (which I no longer had) rather than my mind (which, at this point, I could feel separating from the fabric of the universe with a growing vibration).

In the second split second, excitement turned to apprehension. Whatever was happening, it felt potentially irrevocable, and perhaps I should not tear my mind out of the fabric of the universe just because some robotic conglomerate voice was still echoing in my head?

Which, I realized, could not possibly be real. In fact, this sensation, of being entirely disconnected from all senses and physical form, was not entirely unfamiliar to me; I had felt it in dreams before. Which, quite obviously now, is what this was. I must have fallen asleep when I’d laid back to grok. It made sense, that I’d been thinking so hard about all this stuff, and then fell asleep and dreamed about it.

And so I woke up, still in the void, and did what I did in all those dreams long ago: wait in the void to get my body back. And then, once I did, wait some more in sleep paralysis before being able to move it.

The eyes open first.

I realized, in those moments of waiting, that I hadn’t just fallen asleep while trying to grok the potential virtuality of the world, I’d been dreaming the whole time. I dreamed I couldn’t sleep. I dreamed I got up and read old blog posts. And I was dreaming when I typed out the title:

Are we living in a virtual reality?

The truth, the actual real answer to my question, was that the world is indeed not real, and that the manner in which it isn’t real is option 4, “it’s all a dream.”

You may not be satisfied with this as being the true answer to the original question. I am not satisfied with how, even though my understanding of the question and all it refers to has not changed, even though I am capable of self-awareness and logical thought while dreaming, even though my dream was perfectly coherent and realistic (up to a point), the answer to the question should change independent of my understanding of it.

It would be nice to have a question and answer that doesn’t change based on tautology (“I was in a dream therefore the answer was I was in a dream, but now I am in reality so the answer changed to being it’s real”). If the answer has changed, the question is an empirical one, and pure reasoning alone is not enough to answer it. If the answer has not actually changed, either I was originally right and am still in a dream, or I was wrong about that part and perhaps actually was being pulled back into reality by my computational overlords, and in my moment of fear I missed my chance.

No matter the case, it’s experiences like that that remind us not to take reality for granted.

Personally, I suspect that, as with many things, our understanding of the question is wrong to begin with. Perhaps, just as we perceive there to be dreams-within-dreams and plays-within-plays where there are only plays and dreams, perhaps we perceive a virtual reality where there is only the same old meat brain struggling wildly to turn a vastly complicated universe into this organized story that we call “reality”.

Vi Hart


1. “The Matrix”, as in the 1999 movie. Relevant to this discussion is that people’s real bodies are just like their virtual ones, and the virtual reality is just like the real world of 1999.

2. See Hilary Putnam’s “Brains in a Vat” chapter in Reason, Truth, and History for more Brain in a Vat fun! Putnam argues we cannot be “brains in a vat” because we are only capable of referring to things in our perceived universe, therefore “brain in a vat”, in the only sense that could refer to an outer universe, fails to refer.

3. René Descartes, Meditations on First Philosophy. Translated as “Evil Genius” in the linked version. Descartes’ famous “I think therefore I am” comes from the Evil Demon thought experiment. The more metaphorically inspirational version:

“He can never cause me to be nothing so long as I think that I am something.” -Descartes

4. Plato’s allegory of the cave, in Book VII of The Republic. This is what “Cave Automatic Virtual Environment” (VR using a room full of projectors) references.

5. The distinction between “real” matter and matter that is made out of, like, particles and forces, kind of implies that in the Star Trek universe matter is really real and not itself a simulation or computational object. Also you can apparently have a holodeck within a holodeck. See Joshua Bell’s Holodeck FAQ for everything you ever wanted to know about holodecks.

6. Nick Bostrom has a more rigorous treatment of this question in “Are You Living in a Computer Simulation?”

7. Based on the above, Phil Torres argues that with so many simulations going on, there’s quite a high likelihood we’re about to wink out of existence. This assumes that there’s not just a higher society simulating us, but stacks of simulated societies simulating societies.

8. John Searle calls AI with full consciousness and qualia “Strong AI”, and AI that behaves convincingly human but without self-awareness “Weak AI”.

9. Hilary Putnam is one of the originators of the Computational Theory of Mind, which states that the mind works essentially like a big fancy computer. See “Mind, Language, and Reality”. Putnam is also one of its major disputers, but the idea has caught on.

10. See John Searle’s “Chinese Room” thought experiment, originally published in “Minds, Brains, and Programs”.

11. Stephen Wolfram’s “A New Kind of Science” is a beautiful book on cellular automata, which also contains a thorough treatment of the view that the universe is purely computational in a discrete, Turing-machine-like fashion.

12. Brian Whitworth uses this argument to say that if the universe is simulated, there is an outside simulator. In “The Emergence of the Physical World from Information Processing” he argues that if the universe were simulated, it would explain many strange quantum effects (like, the speed of light is simply due to the refresh rate). By turtle-avoidance, this would imply that the outside universe has nice simple intuitive physics, which does have its appeal.

13. “New Directions in the Philosophy of Mathematics”, ed. Thomas Tymoczko, is a great diverse collection of papers that quickly get past the old paradigms of foundationalism to fun new stuff. (Make sure you get the expanded edition!)

14. “Lucid Dreams: The Original Virtual Reality” by Vi Hart. <— that’s me! 😀

15. Locke argued that pain cannot be as sharp in dreams as in real life. Turns out he was wrong, but his legacy lives on as, every day, people pinch themselves in real life for no scientific reason.

16. Hobbes thought the difference is that real life is not absurd. There may be a strong correlation, but that’s not the thing that differentiates them. Dreams can be extremely coherent, and as for life not being absurd, I’d like Hobbes to get together with Camus on that…

17. Norman Malcolm’s “Dreaming” and “Dreaming and Skepticism” have some very interesting ideas about dreaming, including that you cannot verify that a person is ever dreaming because dreamers cannot communicate to the outside world, dreams do not actually exist, and all dream reports are false memories made upon waking. Now we have scientific data on all that and know he was wrong, but it was a fascinating way to try to circumvent Descartes’ Evil Demon. Fight skepticism with skepticism!

18. Daniel Dennett, following Malcolm, laid out his own version of skepticism against dreaming in “Are Dreams Experiences?” with the answer “No”. The argument has many interesting things to say about the nature of memory and experience. It is true that one can “remember” things that one did not actually experience, and that this happens more often in dreams than waking life, but contra Dennett, it is not true for all dreams. Those practiced in dream recall call these “false memories,” and are fairly good at distinguishing them from the experienced part of dreams. It’s quite an interesting phenomenon and well worth both philosophical and scientific attention.

19. Zadra et al, The Nature and Prevalence of Pain in Dreams. Basically, pain in dreams often mirrors what you’d expect in real life. Not always, but definitely not a hard separator between dreams and reality.

20. Stephen LaBerge has done a number of interesting experiments involving tracking eye movements (which can be controlled in dreams) to allow lucid dreamers to communicate out from dreams as they are having them. “Lucid Dreaming: Evidence that REM Sleep Can Support Unimpaired Cognitive Function and a Methodology for Studying the Psychophysiology of Dreaming” shows some examples of people carrying out experiments in their dreams, and communicating out through EEG signals. This doesn’t prove that cognitive function is entirely unimpaired, but it does provide evidence consistent with reported experiences, and a methodology for learning more.

21. I mean, we call it “Lucidity,” and we know that it happens, but not how it happens. I find my own sense of lucidity to be completely dependable at distinguishing between dreams and reality, when I bother to employ that sense. But I have no idea how that sense brings back the correct answer. I could guess, though, that something in my brain knows whether it’s creating a world with or without the help of sense data, and that someday we’ll know more.

22. Bertrand Russell, in “The Problems of Philosophy”, immediately capitulates to the dream argument in one sense, saying that sure, there’s no way to tell if life is not a dream, and then concludes that therefore there’s no particularly good reason to complicate the world by thinking it is, so, Occam’s Razor it and get on with your life.

23. Zhuang Zhou’s butterfly dream:

Once Zhuangzi dreamt he was a butterfly, a butterfly flitting and fluttering around, happy with himself and doing as he pleased. He didn’t know he was Zhuangzi. Suddenly he woke up and there he was, solid and unmistakable Zhuangzi. But he didn’t know if he was Zhuangzi who had dreamt he was a butterfly, or a butterfly dreaming he was Zhuangzi.

24. I’m thinking of the 1998 movie The Truman Show, where Truman finds out his life is “fake”, a reality show created for the benefit of its viewers. It’s not uncommon for children to imagine that perhaps there are hidden cameras, or aliens, or otherwise they are being watched all the time, and that everything that happens is planned for the benefit of the story. Sometimes this persists in adults, and in the case of reality shows, is informally called the Truman Show delusion.

25. Penelope Maddy has this great concept of “The Second Philosopher” (in opposition to Descartes’ “Meditations on First Philosophy”), a cross-discipline thinker who approaches knowledge from an outside empirical perspective, often to arrive at the same truths we imagine are gained rigorously. Quite fittingly, she explains this worldview by giving examples of the behavior of the Second Philosopher, rather than trying to define it. “Second Philosophy” (this links to a short paper, not to be confused with her later book of the same title) describes this viewpoint, but those with a background in mathematics and logic will enjoy her more technical papers.

Talk Chat Show Thing: A Commotion of Cameras

posted in: Uncategorized | 0

The new season of Talk Chat Show Thing is here! Come hang out with us on the park playground. Andrea seriously rocks the monkey bars and we chat about some new consumer-level spherical cameras. We test the 360cam from the Giroptic Kickstarter, a nearly spherical camera that is missing the down direction, plus a new technique for getting spherical video with just your phone and a pair of clip-on lenses, and our team’s new favorite, the Ricoh Theta.

You can try it out on YouTube’s now-functional spherical player by mousing around or using the WASD keys to rotate the video, or as always, download it from our torrent and play it in your favorite spherical player (you know it’s eleVR player, admit it).

eleVR Casual

posted in: Uncategorized | 0
Ricoh Theta camera

So, we decided to start vlogging at each other with the easy-to-use Ricoh Theta cameras. They are available as torrents in full quality, or on YouTube (which won’t play them as spheres, yet, but someday soon).

We’re in love with these little cameras. The quick and dirty production makes it easy to share thoughts and experiences in real time, as well as rapidly iterate through different design and camera placement. Automatic stitching means we can try EVERYTHING, no more hours of stitching work every time you move the camera, no need to carry bulky equipment, no setup time.

In other words, perfect for researching spherical film as a medium for communication.

In eleVR casual 001, Emily takes us to a variety of places and places us at a variety of heights. I like how the automatic gyroscope stabilization keeps the landscape level while filming a drive up a hill, and how the camera captures things that were never intentionally framed and could never have been predicted, such as the truck that startles us by speeding by. I particularly love being under the wire spooler near the end. The camera fits in spaces I could never fit in!

In eleVR casual 002, I (Vi) reply to Emily’s video. In contrast to Emily’s whirlwind of locations, I focus on two locations (the Vibrary and a drum practice room), and play with editing between them.

In eleVR casual 003, Andrea takes a turn. She really wanted to film a concert, but, of course, filming was not allowed, so…

A benefit of spherical for casual filming is that framing your scene is no longer necessary. You know everything’s gonna be in frame; no turning the camera back and forth. It’s a sign that there’s an art and skill that’s NOT happening, but it also makes things easier when the purpose is more communication and less visual art.

Still, I’m starting to get a sense of how VR video can have lots of skilled set design and composition techniques; at one point I had the hi-hat of the drum room overlaid right where the lid of the blueberry jar is in the Vibrary for visual matching, which was the slightest hint of just how much could be done with consonance between locations.

You can torrent the videos and then open your local copy using the eleVR web player, or any other spherical player. Check out our downloads page!

Updates: WebVR, PhoneVR, Wearality Kickstarter, etc

posted in: Uncategorized | 0

A bunch of quick related updates on headsets, webVR and phoneVR stuff, and new cameras.

1. Wearality Sky prototype and kickstarter

Wearality is kickstarting their first commercial headset, the Wearality Sky. We expect it will have no problem reaching the funding goal, so this isn’t a plea to help them out. But if you do phoneVR or are interested in VR and have a 5 or 6 inch phone, I highly recommend preordering a headset on the kickstarter.

We’ve gushed about Wearality’s beautiful optics in previous posts, and David Smith guest starred on Talk Chat Show Thing episode 3. We’ve been working with the new design for the last couple months and are excited we can finally talk about it! Basically, I prefer a Galaxy Note 4 inside the Wearality holder over any other thing you can buy right now (including the gearVR and Oculus DK2).

Like Wearality’s previous prototypes, the field of view is amazing. After seeing David’s work, it’s hard to imagine how other headset designers settle for such a narrow field of view. Unlike previous prototypes, it folds up and fits in my pocket. And it really does. It’s a functional item.

Mostly though, I love Wearality’s approach to open VR and the web. Their vision of what VR can be, who will use it, and who will make things for it, is, much like their lenses, open and flexible to many viewpoints and many kinds of faces. (Rather than the metaphorical narrow exit pupil of companies with narrow expectations of what VR is and who it is for.)

Also, GLASSES. If you wear glasses, you will very much appreciate this design.

At first I thought the open design was a concession to foldability, but after working with it I found it to be a huge asset. The optics make the field of view wide enough that there is less need to block out the sides and no need to put a divider between the two halves of the screen. The openness makes it feel good: instead of being trapped in a black box, you’re still in the world. This is extra good for sharing, and is less intimidating to those new to VR. You can see what people are looking at, or if you’re the one looking, actually hold a conversation and talk with shared context. Also it feels cleaner and can be used even on a hot day.

Talking to David Smith, we learned that the open design was actually based on a body of research about what makes people feel sick, that having a stable world in your peripheral vision is important to avoiding nausea. I’ll note that it’s still possible to get nauseous using the Wearality Sky (especially if you’re in love with roller coaster apps), and I’m not the best person to ask for comparison because I don’t easily get nauseous in either case.

The best part about it being open, though, is that you can access the screen. Finally, actual interactivity and controls in a wireless device, beyond the inconsistent single-button magnet of a google cardboard or peripheral-heavy gearVR!

We immediately needed to experiment with this, which brings us to:

2. webVR experiments, now on mobile (and Firefox Nightly).

For a while everything was broken everywhere, and then Andrea did Andrea magic. Maybe she’ll post more about that later, but if your webVR stuff is all broken (right now even Mozilla’s own demos are broken), you might want to check out Andrea’s new version of VREffect.js on github.

Also, we now have a lot of our old webVR stuff working on mobile. It works best in Chrome and needs a phone that can run WebGL. Tap to go fullscreen.

There’s still a lot to experiment with, but I wanted to try tap-to-move controls for use with open phoneVR holders like Wearality’s.

“Child” (which I blogged about earlier) now has tap-to-move, where a tap sets your forward speed depending on how high you tapped: near the bottom to stop, near the top to go fastest, or anywhere in between. Already I can see where I might want to improve or add things, but it’s a good starting point.

“Monkeys” is probably our most popular demo, because it uses awesome monkey models (by Will Segerman) transformed into various 4-dimensional symmetry groups and then projected back down (done in collaboration with Henry Segerman and Marc ten Bosch). Also, the 4d-normal rainbow shading is very pretty. Tap to change symmetry groups. More on this and related 4d Virtual unReality soon.

“underConstruction” is a very important example of web technology. Tap to change between three choices of obnoxious background color. I aspire to add more tap-to-do-thing technology, as well as a page visit counter, but we’ll see.

All my old webVR experiments on my github also work (though they cannot access movement and keyboard options), our other stuff will be working soon, and many better experiments will work in the future.

3. New Cameras!

All our new camera equipment arrived yesterday in a giant pile of awesome. Here’s some initial thoughts.

3a: Giroptic 360cam dev kit

It’s pretty cool, auto-stitching footage from three lenses into almost a full sphere (no bottom). It’s not very high resolution and, as a dev kit, it still has some bugs to be worked out, but it’s very awesome to have an easy-to-use mostly-spherical camera that autostitches (not perfectly, but it does it!). It will be very exciting assuming we can get the gyroscope and livestreaming working, but we’re not there yet. We’ll talk more about it and share footage soon.

3b: Ricoh Theta

We are extremely impressed with it. Almost never in VR does a thing actually do what the marketing claims it can do, but the Theta delivers. A truly consumer-ready device, takes real pretty fully spherical photos with good automatic stitching, has an app that lets you access all sorts of real camera photo options like ISO and exposure. The video option is low-res and limited to 3 minutes, but the ease of use makes it very appealing. Think YouTube 2005. Very excited about it, also will share lots of stuff soon.

3c: CamRah 235-degree clip-on phone lenses

We got clip-on super fisheyes for our phones, and are pretty surprised at how easy and cheap it can be to make a video that, while it might not exactly be spherical or have good resolution, is effective at sharing an experience in an immersive way, with just a phone and an inexpensive fisheye. We’ll share that soon too.


Finally, lots of easy-to-use camera options for actually making content! After a year of hacking together GoPros and hours of frustrating stitching, the future looks light and breezy. It is such a weight lifted.

Combined with upcoming headsets and increased support of webVR by Mozilla and Google, I’m pretty optimistic! Everything is so much easier now!

We’ll cover all this stuff in more detail, including footage of and from various devices, in a Talk Chat Show Thing coming soon.


eleVRant: 360 Stereo Consumer Cameras?

posted in: Uncategorized | 0

There’s a bunch of mono spherical cameras coming to the consumer market soon, with automated stitching and easy workflow.

But what about stereo? Why are there no consumer stereo spherical cameras, and what might we look forward to seeing once they do exist? What will be the standard for consumer VR video capture?

Spoiler: the ultimate conclusion is that 1. stereo is hard, 2. sacrifices must be made, and 3. it will look less like this…

stereo pentagon
There are cameras in my camera!

…and more like this:

Pretend the googly eyes are ultra-wide lenses.

But this is research, so we need to go at it from the direction of most efficient camera setup based on geometry and technology and algorithms, no matter how much I am tempted by an a priori “everything is phones, everything will be phones, phones.”

1. Stereo is Hard

First, let me reiterate that mono is easy. For mono, there’s a “correct” answer. Two camera views, taken from the same spot but in different directions, have a correct stitch that you can aim for (and achieve perfectly, in the best case). The idea of a consumer camera for spherical VR video, one that automatically stitches from all the lenses and outputs a regular mp4, doesn’t warrant very much skepticism.

With stereo, you cannot simply make the perfect calibration for your camera setup. The same thing that makes stereo vision work, that things only match up one depth at a time, is exactly what makes the perfect stitch impossible if there’s distance between cameras (and to get stereo all around, you need to have distance between not just the left and right eye footage, but footage within one eye).

A stitching calibration can only ever be correct for one depth at a time, and only in the case of mono video does everything appear at the same depth and thus perfect stitching is theoretically possible.
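To put toy numbers on that (my own illustration with made-up figures, not any real rig or stitching pipeline): for two parallel cameras a baseline apart, the pixel shift needed to align an object falls off with its depth, so a calibration that is exact at one depth is necessarily wrong at every other.

```python
# Toy model of why one stitching calibration can't fit all depths.
# All numbers are hypothetical; this is not any real camera's geometry.

def stitch_shift(focal_px, baseline_m, depth_m):
    """Approximate pixel disparity between two parallel cameras a
    baseline apart, for an object at the given depth (small-angle model)."""
    return focal_px * baseline_m / depth_m

# Hypothetical 800 px focal length, 6 cm between adjacent camera centers:
for depth in (0.5, 2.0, 10.0):
    print(f"{depth} m away -> shift of {stitch_shift(800, 0.06, depth):.1f} px")
```

A seam calibrated for the 2 m object is off by tens of pixels for the 0.5 m one, which is exactly the doubling the text describes.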

In the stereo pair below, you can focus on the pink, orange, or yellow highlighter, each at a different depth, exactly because they appear in opposite orders from one eye to the other. But how would you stitch that?

Stereo pair. Back away from your monitor for best results.

Some cases are theoretically possible to deal with algorithmically. When someone is sitting on the stitch seam close to the camera, you could detect that by building a 3d model of the environment, and stitch for that depth. That calibration will not work for the stuff behind them if the person moves out of view, but with really good software (that may or may not someday exist), the stitching could dynamically change to stitch at the new depth at the seam, constantly building and updating a depth map of the space to always stitch the closest objects correctly (which can hopefully be reasonably rendered just from the footage, or maybe by adding an infrared camera or something).

But even this process for “perfect” stitching based on computed depth only does a correct stitch for that one depth, and if your scene has visible objects at more than one depth, objects behind the closest object will appear doubled. And most objects are not perfectly flat and oriented directly facing the camera, so even your most perfect stitching of an object will be a little off, which will be noticeable for things like human faces. There is no such thing as a perfect stitch that avoids this, because while stereo video pretends it can be mapped flat onto a sphere, that’s a convenient hack rather than a mathematical truth.

The ideal automatic stitching software might be able to detect and stitch optimizing for the pink highlighter, or the orange, or the yellow, or the background, but how would it choose?


You must make choices, and there’s no right answer. When stitched by hand and filmed with the limitations of stereo in mind, it is possible to set things up in ways that avoid hard choices. You can decide what the focus of the scene is, what the right depth is, whether doubling some things is better than losing others.

If there’s enough overlap between footage, you can make the stitch lines avoid hard areas. You can do piles of post-production to paint out doubled objects and other errors. The overlap can look similar enough that it can be warped into looking smooth, and you can take advantage of that with good set design and blocking, with avoiding having multiple things at multiple depths near the seams.

You can film with lots of cameras and try to spread the inevitable distortion evenly across an entire sphere with as many cameras as possible, or you can collect it into convenient corners.

You can leave the realm of computation and enter the realm of artistry.

Which is not exactly a good answer for a consumer product, and might leave you wondering if it’s worth even trying.

If you have the means to do piles of production and postproduction, it’s not clear that simultaneously captured stereo spherical video is the right choice (except for live events); you could shoot asynchronously and composite the pieces, green-screen actors onto 3d rendered backgrounds, or just do completely 3d-modeled stuff that you can actually move around inside of. But given the expense of both creation and playback (user-created VR needs to be playable on phones, and video will outpace 3d models in graphical realism for a while), that stuff is going to be out of regular consumers’ hands for a long time.

But consumers are also creators, and whatever the future brings, humans will still be driven to capture and share their own experiences in whatever medium can express them, perfect or not. The only question is what form consumer VR video might take, and what sacrifices must be made to the perfect ideal in order to make it inexpensive and automatic.


2. Sacrifices must be made

Our most consumer-like stereo spherical camera is our Hippo prototype [above], with pairs of cameras facing out in four directions, plus top and bottom.

It is not very consumer-like at all.

If the GoPro camera could capture as tall as it could capture wide, we would only need one camera on top and one on bottom (top and bottom aren’t actually in stereo because math), or if they could actually capture over 180 degrees vertically, we could eliminate the top and bottom cameras altogether.

Bringing the number of cameras down to 8 would still not solve any of the real problems keeping it from being usable by the average person:

  • It must be a connected piece of hardware where the different cameras talk to each other to sync timing, exposure, white balance, etc.
  • The hardware and lenses need to be super precise and rigid, so that a stock stitching calibration can look good at least in the places that don’t have stitch lines (no having the right eye slightly tilted from the left eye).
  • The stitching software, besides automatically stitching the pile of footage, needs to be user friendly, reliable, and efficient enough to run on a normal laptop.
  • The footage must be organized and stored on a single accessible SD card, which can be transferred or USB’d over to a computer where it can be automatically stitched using a standard calibration.

It would be expensive and have huge stitching errors, but there’s no technical reason it couldn’t exist today, and it would at least be usable.

(This is your regular reminder that right now a pile of GoPros does not have any of those necessary features, currently isn’t even close to a consumer VR camera, and is NOT a good choice for those who want to actually make stuff rather than research and innovate and frustrate.)

But as long as large errors along stitch lines can’t be avoided with automatic stitching, I think your best bet is to use as few cameras as possible and avoid putting stuff in error-prone areas. As long as the cameras are very precisely aligned, you can at least get the non-stitched sections looking good, and put the burden of avoiding stitch lines on the person filming, not the person producing.

We can make this burden as light as possible by making our stitch-safe zones as wide as possible, and assume people will get used to having a few terrible stitchy areas, just as signs of amateur production are usually assumed and ignored in other amateur-created content.

Such content turns a technology into a medium. All the pictures in this post were taken with my phone and are objectively terrible as photography, but it doesn’t matter because the point is not the photo itself but what the photo is of.

Googly eyes represent wide angle lenses. Such content! Wow!

Instead of eight cameras in a square, you could do six in a triangle (plus top and bottom). Stereo pairs would have a sharp stitching angle with a very far stitching distance at the three corners, but you’d also have three wide stitch-safe zones perfect for filming selfie-style or with a friend.

The errors might be pretty bad, but they’d be clumped together into three avoidable stripes of error from floor to ceiling. With some creative set design and heavy post-production, a wide-angle stereo triangle could probably do pretty well for capturing live events. Plus, it’s still theoretically possible to get the corners to stitch into actual stereo video with post-production work by hand.

Going lower than 6 cameras and still getting theoretical stereo all around is difficult. 5 cameras couldn’t work in stereo pairs, but could work using virtual cameras and panoramic twist techniques.


Unfortunately the stitching tradeoff is extreme. Five ultra wide angle lenses (like 180 degrees), used as 10 virtual cameras, means ten stitching errors. That’s not acceptable for consumer cameras with automatic stitching.

I’m also not really sure how noticeably the image would warp, as you go down to sharper and sharper angles between cameras. Could you do 4? The field of view would have to be at least 200 degrees to get the 8 virtual camera views you need for stereo all around.

Three cameras, and you’d need each lens to be like 280 degrees, wider than any lens I’ve seen. That’s the minimum if you want each point in space to be seen by two cameras with parallax, and there’s a nice geometric effect where the result would be quite a lot like the 6-camera stereo triangle. It doesn’t help the stitching because it’s still 6 virtual cameras, but the stitch lines would overlap, for 3 stitchy areas rather than 6.
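A back-of-envelope sanity check on those camera counts (my own rough bound, not a lens spec): if every direction must be covered by at least two lenses to get parallax everywhere, then N identical lenses must jointly cover 2 × 360 degrees, so each needs at least 720/N degrees. Real rigs need extra width on top of that for the twist offset, which is why the estimates above run higher than this floor.

```python
# Rough lower bound on per-lens field of view for all-around stereo from
# N outward-facing lenses: every direction must be double-covered, so the
# lenses must jointly span 2 * 360 degrees. This ignores the extra width
# needed for the panoramic-twist offset, so it's a floor, not a spec.

def min_fov_deg(n_cameras):
    return 720 / n_cameras

for n in (3, 4, 5, 6, 8):
    print(f"{n} cameras -> at least {min_fov_deg(n):.0f} degrees per lens")
```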

At this point, the smaller number of cameras can be good or bad considering the drastic field of view and resolution changes, but there are possible worlds in which the costs fall in favor of either one. Right now I think 6 is better, and the only reason to use 6 in a polygon rather than 8 in a square is cost of stitching, not cost of cameras. I’m not an expert on production costs for various lenses and resolutions and other hardware bits, but I’m happy to provide theoretical setups and let others decide what’s optimal to produce using current hardware. Maybe 280-degree lenses are easy and have just been waiting for an application.

Resolution is important, though, and a few more stitching errors (as in a higher-order stereo polygon with top and bottom cameras) might be worth it if that’s what you need to do to get enough resolution in. Bad resolution is a bottleneck keeping stereo spherical video’s capabilities from really shining right now. Resolution not only highlights the awesomeness of live-captured real actual things over rendered objects, but has a drastic impact on the stereo effect. Stereo vision relies on differences between two images, and sometimes those differences are subtle.

Anyway, maybe the highest priority is not aspiring to get full spherical stereo, or even just 360 stereo. Maybe a hybrid stereo camera, a video sphere that’s mostly mono with a stereo section at the focus, is the right choice for consumer VR capture. For example, three 200-degree cameras could be arranged as a stereo pair in the front, both stitched to a single mono camera in the back, for a full sphere with stereo in one direction.

6 gopros rubberbanded together
experimenting with a six camera mono/stereo hybrid

We’ve tried some experiments with combining stereo and mono, and concluded they can live happily together. Right now most stereo spherical video is mono in the top and bottom sections, and most people don’t notice that it’s not all stereo, or that there’s a transition where it moves from stereo to mono (the occasional person is sensitive to it).

As long as the focus of the camera and viewer are both on something in the stereo section, nothing is lost by having other parts be mono, and perhaps it would even help to subtly focus attention, as with normal flat-video focus techniques.

We cobbled together a 6-camera setup [right] to test out using front-facing right and left lenses that both get stitched to the same side-facing lenses, for stereo front and mono sides. The intent was to imitate the way that when facing forwards there is no stereo vision at the right and left edges of our vision (the nose gets in the way, though in VR we can see through our nose; someday we’ll make Nose Simulator for a more realistic experience). In the resulting video, there’s no harsh transition. The brain just deals with it.

In the 2nd episode of our talk show we switched from stereo to mono briefly, and even though we announced when it was happening, I still didn’t see it happen. I had to pause the video and double-check. Yet, starting in mono and moving to stereo, suddenly everything changes, because my brain is getting more information, not less.

This needs further experimentation, but I’m pretty sure there’s lots of ways that stereo and mono video can work together to trick the brain. Stereo for still shots to give a sense of space, mono for moving shots that already give you depth information from movement parallax (and moving shots are a stitching nightmare in stereo so they should be mono anyway). You can use stereo for an important focal point, and mono for unimportant areas. If you look at a part of the room once with the stereo portion of your hybrid camera, it may be that your brain is happy with it being in mono from then on.

But if you’re going to have only sections of stereo, what about the good ol’ stereo polygon design, but with only four wide-angle cameras on a 2-sided polygon?


You’d get a 2-gon, which seems like the minimal stereo polygon and thus simplest choice, though unlike stereo polygons with volume it’s not stereo all around. There’d be a giant completely mono section around the stitch line in a circle from floor to ceiling. It’d be spherical video with stereo in just the front and back directions.

And I think that’s awesome and perfect, especially for standard consumer use. I’m definitely ready for a camera that you can point at yourself while you vlog in 3D (and know you won’t get a stitching line in your face), while simultaneously also pointing the camera at a thing or scene or building or mountain that you want to talk about or just have in the background. Right now many videos have a lot of turning the camera back and forth, but if you could film both directions at once? Awesome!

Ricoh Theta

The Ricoh Theta [right] is a consumer mono spherical still camera (it technically has a video function but it’s super low res and low framerate), and it’s a cool proof of concept that back-to-back ultra wide lenses can stitch for mono.

We’ve had people ask if you could use two and get stereo, and the answer is that out of the box it won’t work, and it can never be full 360 stereo, but in theory the lens arrangement could do front/back partial stereo, and the information captured is sufficient, provided the Theta lets you access the raw footage so you can stitch it by hand.

(We just bought one so we’ll get back to you with more info after it arrives.)

You’d need a completely different stitching technique than what the Theta uses, because for good stereo, it’s not enough to stitch two spheres separately; you need a calibration that aligns the two spheres as well, plus more fanciness if you want to stitch so that you don’t see the cameras themselves.

But assuming someone makes the stitching software and a version is made with high enough resolution, it’s a viable lens type and arrangement for consumer VR video.

And you know what shape that camera lens setup would fit perfectly on? A rectangle that’s about the size of, oh, a phone.

3. Everything is phones

Modern phones already have front/back cameras and are already used with VR headsets. Front/back stereo spherical pictures and video captured on the same phone you use to view and share them? Seems obvious to me.


Right now, I think a simple four lens front/back stereo spherical camera would align pretty well with consumer pricing, workflow, content style, and the shape and capabilities of smartphones already being used for VR. Phones are tools for both capture and consumption of so much other media, and if they’re going to be used for consumption of VR, they’d better be producers of VR too.

The biggest problem is lenses. I like to think that lens technology will get to the point that four ultra-wide lenses could be seamlessly integrated into a smartphone, but right now, not so much. It turns out the camera lenses are the biggest limitation on smartphone thickness, and also a substantial part of the cost. And that’s for normal narrow field of view lenses.

It could be that the four phone cameras have to have narrower field of view lenses, and there’s a clip-on accessory that snaps four wide-angle lenses into place. Clip-on ultra-wide phone lenses already exist and work (though could use some improvements), and I think a phone with four cameras plus a double lens clip designed for it could work pretty well.

Camera placement also might not be able to be ideal; rather than directly back to back, since one camera takes up almost the entire thickness of a phone, they might need to be vertically offset. It’s important that each stereo pair have no vertical offset from each other, but as long as we’re planning on having terrible stitching lines anyway, having the back pair a centimeter lower than the front pair won’t make much difference.

Whatever ends up being the most common consumer VR camera, it will be on a phone, but it’s possible that stereo spherical isn’t gonna be the thing, and so consumer stereo spherical video will be uncommon compared to normal wide-angle forward facing stereo, or regular narrow-angle stereo facing a single direction, or whatever does end up fitting on a phone.

But the efficient and simple 4-lens front/back stereo camera setup just happens to possibly fit on a phone, and that’s too good not to try. Hopefully this is all obvious enough that someone with physical camera hardware skills is already working on it.

In the mean time, we’re excited enough by this design that we just ordered some ultra-wide clip-on lenses that we’ll use to film with multiple Galaxy Note 4 phones arranged as if they were a single phone with double cameras. How will the resulting stereo spherical video look? We don’t know yet, but whatever happens, we’ll post about it and make the result available for download just like all our other stuff.

So stay tuned! Why yes, we do have an RSS feed.



eleVRant: Camera Circles for Stereo Spherical Video

posted in: Uncategorized | 0

Previously, we’ve discussed mono camera balls and stereo polygons. Today’s topic is camera circles, usually a set of eight or more cameras placed symmetrically and directly outward-facing in a circle, that are intended to capture stereo 360 video (possibly with the addition of upward/downward facing cameras for mono completion of the sphere).

The conclusion will be that camera circles can work for stereo if you do it right, but it’s probably better to arrange those same cameras in a stereo polygon instead.


If you want to see in stereo, you’re going to have to have different views for each eye, which means the views you take for each eye can’t both be outward-facing from the center of a camera ball, but that doesn’t mean the cameras themselves can’t be outward-facing.

For stereo vision you need parallel views an inter-pupillary distance apart. But you don’t need actual physical pairs of cameras one IPD across, with one set of cameras for left and one for right (as in stereo polygons). There’s other setups that contain the necessary views, as long as you take care to use the footage correctly, and a circle of cameras can do it as long as the field of view of the cameras is enough to double-cover the scene.

Whether the camera itself faces outward and you take a section of footage centered at an angle, or whether the camera faces on an angle and you take the center of the footage, that doesn’t really matter in a mathematical sense.

If you have eight cameras in a circle, you can point them all outward and then use each camera twice, taking the footage facing to the right for the left eye and the footage on the left side for the right eye. It’s equivalent to having 16 cameras with a smaller field of view that are set up in the stereo polygon configuration.


I’ve talked about panoramic twist as a mathematical thing before, but when it comes to physical camera setups, I like to think of these 16 footage slices as virtual cameras. This is especially useful for considering resolution and field of view. One lens with a 120-degree FoV might become two virtual cameras covering 60 degrees each, or 45 degrees each, or something else, depending on your setup.

The goal is to center the right-eye and left-eye virtual cameras on angles such that the views are parallel, one inter-pupillary distance apart on your camera setup (meaning they’d look like standard stereo pairs). If the size and field of view isn’t right, this might mean leaving out the center section of the footage, which is inefficient, or having overlap between the right and left eye sections, which causes stripes of mono in your final product.
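That bookkeeping can be sketched in a few lines (the rig dimensions here are made up, and the twist formula is the standard construction where each virtual view ray is tangent to a circle of IPD diameter, not a measurement of any specific camera):

```python
import math

# Virtual-camera bookkeeping for an outward-facing ring. Each physical
# camera contributes two slices, one per eye, whose centers are offset
# from the lens axis by a twist angle. Choosing the twist so that paired
# views are parallel one IPD apart means each view ray is tangent to a
# circle of radius IPD/2, giving sin(twist) = (IPD/2) / ring_radius.
# All dimensions below are hypothetical.

def twist_angle_deg(ipd_m, ring_radius_m):
    return math.degrees(math.asin((ipd_m / 2) / ring_radius_m))

N = 8                                   # physical cameras in the circle
twist = twist_angle_deg(0.064, 0.15)    # 64 mm IPD, 15 cm ring radius
for k in range(N):
    axis = 360 * k / N                  # direction the lens points
    # Which sign goes with which eye depends on your angle convention.
    eye_a = (axis - twist) % 360        # slice center for one eye
    eye_b = (axis + twist) % 360        # slice center for the other
    print(f"camera {k}: slices at {eye_a:.1f} and {eye_b:.1f} degrees")
```

Note that a bigger ring (larger radius) means a smaller twist, so the slice centers sit closer to the lens axis.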

While eight cameras divided into 16 virtual cameras does work, at the moment I don’t think it’s better in practice than eight cameras in a square of stereo pairs. Eight outward-facing GoPros means eight pieces of footage stitched together per eye, and that’s eight stitching lines per eye, which means not a lot of room for a person to be filmed without getting a stitching line through their face. While each view is a bit more accurate, I have yet to see the kind of automated stitching that would make it worth it.

JauntVR’s current setup

 In theory, with great algorithms that can smooth over stitching errors by themselves, you’d want as many cameras as possible, leading to a huge amount of needed stitching, but each stitching error would be tiny and fixable algorithmically. This is what I like to imagine companies like JauntVR are going to work towards someday.

JauntVR is one of the only companies serious about trying to figure out stereo video in a simultaneously-captured sphere (or sphere minus bottom, in their case), and they have the best results I’ve seen from anyone with a camera circle setup, so I’m gonna talk about them a bit.

Right now, it looks like Jaunt’s camera setup has 12 gopros around, all angled directly outward (plus two on top for a mono ceiling).

Jaunt is known for frequently saying that they do not use stereo pairs of cameras, that they instead use many outward-facing cameras and then use 3D algorithms. But whatever they brand it as, the end result is a stereo pair of spherical videos that stitches together right and left eye views, if not from stereo pairs of literal cameras, then from stereo pairs of virtual cameras, and personally I think that’s plenty exciting without the extra mystique.

(Jaunt has never divulged their secret algorithms to me, which leaves me free to speculate. Many of their videos are available from their website and it’s fairly easy to see the ring of 12 stitch lines per eye. Whether their stitching of 24 views happened as a result of standard panoramic twist techniques to get virtual stereo pairs, or whether they got a similar-looking result from some more complicated process involving algorithms, I cannot say.)

It’s fun to imagine what one might get if one took those 12 cameras, split them into two sets of six, stitched together two panoramas, and hoped for the best. There’s left and right eye cameras spaced apart, so every point in the scene has different views for left and right eyes, which is what you need for stereo. But there’s no panoramic twist…So would it work? What would happen?

Eight outward-facing cameras in a circle, divided into two sets of four for right and left eyes, does not yield correct stereo.

The proof of failure lies in one word: symmetry. When you split a circle of outward facing cameras into two sets, which set is the left eye and which is the right? Both sets are the same under rotational symmetry, completely achiral, so any choice is arbitrary.

But we know in actual views of the world it matters which eye is left and which is right. The stereo polygon setup breaks that symmetry, yielding a chiral pair that tells you which eye is which.

The actual effect of splitting an outward-facing camera circle into two sets, rather than splitting the footage and using panoramic twist techniques, is that the stereo effect warps back and forth between muted and reversed. Every point around the camera is seen in stereo, but half the time the eyes are reversed.
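You can see the flip with a few lines of code. Below is a toy model (my own sketch, not anyone's actual pipeline) of eight outward-facing cameras split into two arbitrary sets of four; for each viewing direction it checks whether the "left eye" camera actually sits to the viewer's left, and the answer alternates as you sweep around the circle:

```python
import math

R, N = 1.0, 8
angles = [2 * math.pi * k / N for k in range(N)]
cams = [(R * math.cos(a), R * math.sin(a)) for a in angles]
left_set, right_set = cams[0::2], cams[1::2]   # the arbitrary split

def ang_dist(a, b):
    return abs(math.atan2(math.sin(a - b), math.cos(a - b)))

def nearest(cam_set, phi):
    # the camera in the set most nearly facing azimuth phi
    return min(cam_set, key=lambda p: ang_dist(math.atan2(p[1], p[0]), phi))

def stereo_sign(phi):
    l, r = nearest(left_set, phi), nearest(right_set, phi)
    left_dir = (-math.sin(phi), math.cos(phi))   # the viewer's "left"
    lat = lambda p: p[0] * left_dir[0] + p[1] * left_dir[1]
    return 1 if lat(l) > lat(r) else -1          # +1 correct, -1 reversed

signs = [stereo_sign(2 * math.pi * i / 64) for i in range(64)]
# signs contains both +1 and -1: correct stereo in some directions,
# reversed eyes in others.
```

Symmetry guarantees this: whichever set you call "left" is on the correct side for only half the viewing directions.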

We know from experience that subtle wrongness in stereo, from vertical shift to reversed eyes, can easily trick the brain into thinking it’s pretty good, not perfect but not fundamentally broken. But often in VR a little math can show that what you thought only needed a tiny tweak is actually fundamentally wrong!

Anyway, back to doing things the right way, with panoramic twist.

The stitching errors should in theory each be smaller the more cameras you have and thus easier to fix algorithmically, especially if you use all that overlap information to calculate stitching distance (which I hope someday will be completely automated by stitching software; it’s technically feasible but no one’s done it as far as I know). But the direct relationship between more cameras and smaller errors only exists if you’re adding more cameras to a camera ball of constant radius, not if adding cameras means making your camera ball bigger. 12 gopros around makes for a big radius, and the further from center the cameras are, the greater the total stitching error.
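As a rough back-of-the-envelope (my own approximation, not anyone's published numbers), the parallax mismatch at a stitch line scales with the baseline between adjacent cameras divided by the distance to the subject, so a wider ring hurts even when it holds more cameras:

```python
import math

def seam_parallax_deg(ring_radius_m, n_cams, subject_dist_m):
    """Small-angle estimate of the angular disagreement between two
    adjacent cameras viewing a subject near the seam between them:
    roughly (baseline between the cameras) / (distance to subject)."""
    baseline = 2 * ring_radius_m * math.sin(math.pi / n_cams)
    return math.degrees(baseline / subject_dist_m)

# A 12-camera ring with a 15 cm radius has worse seams than an
# 8-camera ring with a 5 cm radius, for a subject 2 m away:
print(seam_parallax_deg(0.15, 12, 2.0), seam_parallax_deg(0.05, 8, 2.0))
```

Adding cameras only shrinks the per-seam error if the radius stays fixed; grow the ring and the baselines grow right back.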

A.J. Raitano and his steadicam are hiding beneath the JauntVR logo in some scenes of “The Mission.”

For purely theoretical realistic stereo capture purposes with ideal tiny high-res wide FoV cameras, there’s no advantage to having a camera ball much bigger than one eye-width in diameter. But in the current landscape where gopros are the best thing on the market for these purposes, you might be willing to sacrifice some radius size if it means you can get more gopros in there and then use a narrower field of view to get higher resolution.

JauntVR’s overly large radius in particular works for them right now for a few reasons: they’re using gopros and need that extra resolution; the greater radius leaves room to hide an entire cameraman in the blank/branded area below their camera (I don’t think hiding cameramen in your nadir is the future of VR film, but there’s something beautiful about it nonetheless); and they currently produce videos in hyper-stereo, which will never be standard but currently serves well to immediately signify to new viewers that they have entered the surreal world of 360 stereo video.

A multi-outward-facing camera setup, with enough overlap, would in theory allow the producer to change the amount of panoramic twist to be more or less stereo-y in post-production. Moving hand-held shots, which have built-in parallax information, could benefit from the easier stitching of low-twist shots, transitioning to deeper stereo and realism when the movement stops and focuses on an actor’s face, for example, and even to hyper-stereo when you want that surreal feel. For gopros right now, filming at a wider angle allows post-production twist flexibility, but at the cost of resolution, and I don’t know that anyone right now has the software or the time to tweak twist in post, but it’s something to think about.

Early Jaunt prototype

While I can understand wanting a camera setup that allows for more horizontal stereoscopy than is realistic, there’s no reason to give stereo camera balls more vertical distance between cameras than is absolutely necessary. Cameras look cool when arranged in a ball, and for mono spherical you’d want them evenly spaced as closely together as possible, but if you absolutely must have more than one camera to cover the vertical field of view they should be as close as possible. Assuming you’re rendering stereo video meant for a viewer looking around with their head level, there should be as little vertical parallax as possible.

Jaunt smartly abandoned their first camera ball prototype, where vertically tilted cameras were spaced apart and not aligned with the center row, in favor of a single disc of cameras with a wide vertical field of view. I’ve seen many camera balls with the kind of camera placement that would be terrible for stereo, but Jaunt’s is the only one I know for sure was intended for stereo. I’ve seen other camera balls where the middle rows of cameras were vertically aligned, and those would work for stereo if the field of view is wide enough. Worst case, the vertical stitching errors are as bad as the horizontal ones, and they get better and better the more you flatten your setup.

Of course if you’re gonna do things right and make your vertical cameras close together, you might as well go all the way and also put them in a stereo polygon. I’m delighted by the design of the Bivrost, which as far as I can tell has 20 cameras in a stereo pentagon, vertical field of view covered by two cameras as close together as possible. I don’t think any physical prototypes have been made yet, but at least whoever’s marketing it is marketing something that could potentially do what they’re claiming it does, so, that’s already better than a lot of VR companies.

And is pretty 😀

But in the end, if one really must capture an entire sphere of high-resolution, really well-produced film, the best way to avoid the many stitching errors of the camera circle is to do it the way actual cinema is filmed today: with great cameras, a huge crew, lighting, sound, and done almost all piece by piece to be composited later.

Whether your footage ultimately gets composited into a rectangle or a sphere, many of the techniques currently used by the film industry will transfer right over. Actors increasingly act alone in minimal environments. Many major motion pictures are almost entirely, if not actually entirely, digital. Film production is ready for VR as soon as we get better spherical compositing tools, and then stitching errors can be avoided entirely.

Especially when it comes to the future of VR cinematography, if we want a close-up shot where we’re on a desk slowly wandering through a giant landscape of crumpled up discarded poetry for the opening scene of our film about writer’s block or whatever, torn and scribbled words towering over us to make us feel the helpless inferiority of the protagonist who comes into view cheek down on the desk, camera finally coming to a rest just out of range of her quivering left eyelash, cue monologue, slowly pan out by growing bigger and leave through the ceiling to find we’re only looking down into a toy universe where our poet’s angst seems exactly as existentially bereft of importance as she thinks it is, we can’t shrink our camera ball to fit between the pages. It’s going to be rendered and composited just like so much of today’s visuals.

(Yeah technology tends to get real small, but given the physics of lenses and light I don’t think we’ll have a quality pea-sized spherical camera anytime soon. Though I like to imagine tiny servos expanding the tiny camera ball to make it hyper-stereo as a robotic arm moves it through the ceiling…)

Good design of spherical cameras isn’t essential for VR film to have a future, but it IS essential for live captured events, streaming, vlogs, and consumer use. Next time, we’ll talk about minimal arrays and hybrid stereo for consumer applications.


Homework question for 3d modelers: when you render out views from highly realistic 3d modeling or architecture software, can you input any vector field to collect virtual light from? I imagine most default to orthogonal projection, but architecture loves wide-angle views, and I don’t know how much work it would be to input the two vector fields for left/right panoramic twist and get out a stereo spherical pair of stitch-free views that you could then use as background for greenscreening etc., or whether these sorts of software packages let you write your own raycasting routine to use with them, or what.
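For anyone who wants to try it, here's roughly what those two vector fields look like, sketched in Python under the usual stereo-panorama approximation (assumed 64 mm IPD; per-pixel rays whose origins sit on a circle one IPD across, displaced sideways from the view direction):

```python
import math

IPD = 0.064  # meters, an assumption

def stereo_ray(u, v, eye):
    """Ray (origin, direction) for an equirectangular output pixel.
    u, v in [0, 1): u maps to longitude, v to latitude.
    Each eye's rays originate on a circle of diameter one IPD,
    offset sideways from the view direction (panoramic twist)."""
    lon = 2 * math.pi * u - math.pi
    lat = math.pi * v - math.pi / 2
    d = (math.cos(lat) * math.cos(lon),
         math.cos(lat) * math.sin(lon),
         math.sin(lat))
    side = 1.0 if eye == 'left' else -1.0
    o = (side * (IPD / 2) * -math.sin(lon),
         side * (IPD / 2) * math.cos(lon),
         0.0)
    return o, d
```

Feed each pixel's ray into a custom raycast and the two resulting equirectangular images form a stitch-free stereo spherical pair.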

No video is an island

posted in: Uncategorized | 0

HyperlinkSpace has a new node and it’s part video, part game, now with controller support! Before we get into the grit of analysis and the branching potential of mixing the media of video with that of modeled environments, go look at the thing>> 6_Escape (or the main page if you need a tutorial). That way the rest of this won’t be spoilers. Here’s a pretty gif to entice you.


Excellent. Welcome back.

We need better words than “video game” for a place made of 3D models that you do stuff in, so I am going to steal a word from Vi and Nicky’s Parable of the Polygons and call this a “playable place.” This place has two primary components. That tiled platform tufted with peeled grass? Let’s call it the table. Like this? Off table, Vi and I read from paper scripts, performing voiceover for a pair of models hovering around acting out the play on table. Off table. On table. Okay? Okay. Now that we have our few definitions, let’s walk the edges of what the heck this type of mixed media thing could become.


The arrow of online education, for example, has been toward presenting the learner with the teacher and the toy, side by side. Online ed still lacks the ability to let us learn with our hands, by playing with the materials of the lesson. So why not set up experiments on table, with guided lessons off table? Hands-on physics and chemistry labs come to mind, but also theatre history, teaching Shakespeare in situ at a reproduced Globe. Educational simulation on table, and an engaging face-to-face teacher off table. Add in a simple behavior tree, and off table teachers could respond to actions taken on table.

But for me at least, this effort in putting you there, bringing users’ minds and bodies together in a world rich with things for them to play with, is more than just giving web video the comfort of continuous reciprocal causation: that simple “you do a thing, I do a thing” dynamic we love about face-to-face interactions and video game characters. It’s more than liberating video from embedded players and siloed platforms. For me, it makes video new and strange again.

Let me explain. A few weeks ago I watched a talk titled Inventing on Principle given by Bret Victor, a fellow researcher here at the Communications Design Group. In it Bret describes the principle around which he centers his design work, closing the distance between creator and creation, and asks others to consider what their own principle might be. Inspired by the call, I wanted to solidify a principle of my own that was simple, clear, and direct. So here goes. *The grand red velvet curtains sweep open as trumpets ring out*

Make chimeras

Two little words (and an obligatory picture of a gato). Make work that is neither one thing nor another; instead, staple together bits of debris from lots of neighboring categories until weird liminal creatures emerge. HyperlinkSpace’s newest node, 6_Escape, is one of those chimeras. In science we have a principle called convergence, which when boiled down means that “evidence from independent, unrelated sources can ‘converge’ to strong conclusions.” I simply claim that convergence applies to cultural forms as well. From proper old school mixed media works that brought together paint and ink and collage all on one canvas, through the remixed quilt of audio samples that is hip hop, to the fledgling genre of playable places: the best stuff, the weirdest stuff, is always in between.


posted in: Uncategorized | 0

A few weeks ago, I posted about spending my Christmas making an atmospheric snowy landscape for webVR. Over the next week, that landscape slowly gained a number of interactive elements until finally it got a win state and became a full-fledged webVR game.

You can play it.

The landscape was created for the soundscape, and most of the interactivity also exists just for the purpose of triggering sound events while having the space change to reflect the appropriate mood.

Having briefly been a professional composer before finding that doing math and tech were much better for things like affording to eat, it’s satisfying to find something I can do that combines the two in a way where it’s not art about sound (unlike some of the videos about music I’ve made), but something where the art is the sound, combined with a digital interactive space in a nontrivial way.

Now that this small vacation project has given me some idea of what can be done and what I can do, I can perhaps be more ambitious with a planned project. WebVR is just too fun to hack around in right now, so we’ve all been kind of sucked into it (have you seen the latest at Emily’s?).

Meanwhile, I’ve been using my fledgling game dev skills to work with Andrea Hawksley and Henry Segerman to create HYPERNOM, coming soon…




TCST Episode 3: David Smith and Wearality

posted in: Uncategorized | 0

After much footage wrangling we finally got episode 3 up and ready to watch!

This time we are chatting with David Smith of Wearality. He showed us one of our favorite pieces of virtual reality hardware so far. Using a pair of large lenses for each eye, Wearality achieves not only an impressive 170 degree field of view but a very wide exit pupil. Exit pupil flexibility means Wearality’s headset, unlike any other VR headsets we’ve tried, easily accommodates the wide array of eye distances and face shapes we have on our team, while keeping the image in focus in both eyes. It’s also not fussy about perfect placement on your face.


You can find it on our downloads page. Episode 4 has already been released but was shot just after episode 3, so we are sticking with shooting order as episode order. This episode, shot way back in the beginning of October, took longer to finish due to some epic camera failures and lost footage. We hope you enjoy the bamboo forest ceiling that generously came to the rescue for that same now-dead camera.

Wearality has some exciting new projects coming down the line, so expect to hear more about them soon!


Hyperlink Space

posted in: Uncategorized | 0

HyperlinkSpace is part VR web comic, part interactive choose your own adventure, part postVR art. Explore neighboring worlds with liberated hyperlinks, no longer links between words on a page, but space travel, opening doors between places. Making this site has got me thinking a lot about postVR art’s new implications for capital “A” Art. One example is how postVR changes the connotations for site specific work, whose access is limited by physical proximity to a geographic space. PostVR is a bridge spanning the gap between a thing and a place.

Thing vs space was a fundamental dichotomy, a central question, in art back in 1979 too, when Rosalind Krauss wrote “Sculpture in the Expanded Field.” Artists had taken to dethroning, or at least deplinthing, sculpture. Chisels were exchanged for bulldozers and shovels, and as Krauss put it, “As the 1960s began to lengthen into the 1970s and “sculpture” began to be piles of thread waste on the floor, or sawed redwood timbers rolled into the gallery, or tons of earth excavated from the desert, or stockades of logs surrounded by firepits, the word sculpture became harder to pronounce.” But these works, teetering somewhere between thing and space, still had the anchor, some would even say the validity, of scarcity. The thing/space had to be pilgrimaged to; it could not come to you. So what does it mean when a space can meet you where you are, when it can be inflated around you at will? Frankly I have no idea yet. We’ll have to wait and see.



The biggest grievance I have with the site right now is the inability to remain in fullscreen VR mode when jumping from space to space. Yes, I know I could use a frame, but a frame would make the spaces less shareable because individual spaces would not have unique urls. While I wait on that, my next steps are to get some tracking cookies working to 1: let the site know where you’ve been so it can generate a map; and 2: allow users to have a persistent inventory from space to space.

W, A, S and D rotate the view for those without a headset (with a bit of Q and E if you get really off kilter) and arrow keys move through space. For the best experience view in the VR-specific FireFox Nightly build with your VR headset, but it works as a flat-site too (at least in Chrome and Firefox, haven’t seen it run in Safari or IE). There isn’t a mobile version mostly because I have only been coding for 6 weeks and I have no idea how to do that yet.

But enough tutorial, go explore HyperlinkSpace and send me tales of your journeys!


A Virtual Holiday

posted in: Uncategorized | 0

Two holiday webVR pages for your face: Child and Twelve.

1. Child

I’m not sentimental about the holidays, but I do love having some free time where the whole world slows down and I can work on fun things in peace. I no longer live somewhere with a real winter, but now I can make my own snowy virtual world, and put it up on the web where anyone with an oculus rift and webVR browser can access it (also works in regular reality in most normal browsers).

The snowdecahedra that make up the landscape represent the dodecaphonic music (writing precise twelve tone music is like a tricky puzzle where the prize for solving it is cool noises). Dodecahedra have twelve sides, while the wireframe icosahedra have twelve vertices. The snowflakes are actually five-sided starflakes, to match the fiveness of the dodecahedral faces.

The music is a massive manipulation of “What Child is This,” with the melody taken from a different 12-tonification of it I wrote, and then in the harmonies at the end the three voices take that tone row and sing the inversion, retrograde, and retrograde inversion, at the same time. So, highly symmetric tone rows and highly symmetric objects.
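If you've never played with twelve tone rows, the symmetry operations are simple enough to sketch in a few lines of Python (the row below is a made-up example, not the actual row from the piece):

```python
def retrograde(row):
    # play the row backwards
    return row[::-1]

def inversion(row):
    # flip every interval upside down around the first pitch, mod 12
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

def retrograde_inversion(row):
    return retrograde(inversion(row))

# pitches numbered 0-11 (0 = C, 1 = C#, ...), each used exactly once
row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]
```

Inversion of a valid row is always another valid row (each pitch still appears exactly once), which is what makes stacking the three transformed voices at the end possible.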


2. Twelve

The Twelve Tones of Christmas is a take on the classic holiday song, but in hyperbolic space and with 12 tone music. It’s part interactive music video, part educational math visualization, and apparently genuinely terrifies some people.

I’ve been working with Henry Segerman on hyperbolic space stuff recently, so as long as we already had a tiling of right-angle dodecahedra working in VR (which happened during a long evening of hacking with Mike Stay), and since I’d just recorded a 12-tone rendition of 12 days, it just made sense to make the dodecahedra have each face be one of the twelve days and then set them to turn on and off in time with the music.

So we pulled some long nights and made it happen, along with a video version and dodecaration craft.

Theoretically, each gift tiles out infinitely in infinitely many hyperbolic planes, and then the 12 sets of intersecting planes form the dodecahedral tiling. It’s cool how it builds up over time, and so amazing to fly around hyperbolic space in VR.


I want to talk more about alternative spaces in VR soon, because we’ve been doing fun stuff in hyperbolic spaces, 4d stuff, etc, with Henry, Mike, Andrea, and Marc ten Bosch, and it’s really what I want VR for.

We’ve made the music and art for both holiday projects public domain, and the source code is on github.


Charge of the Light Field Brigade, a Manifesto

posted in: Uncategorized | 0

A while back, Andrea did a great post about why cube maps are clearly superior to the gluttonous pixel-wasting mess that is the equirectangular projection, but I have a new claim: Projections are like the despotic rulers of a bygone era, powerful, all consuming, and totally pointless. They, like all dictators in movies where the good guys win, will inevitably be overthrown by the revolution of light fields. Let’s dethrone the monarch first, then I’ll write you a manifesto to rival the masters: a Charge of the Light Field Brigade. There will be moving music and everything.

We are screen biased. We are used to getting our digital world from a forward-facing pixel-dense discrete surface, and the effects of that display technology run deep. It biases the construction of all digital content, all playback paradigms, all compression formats, all development environments, all operating systems, everything.

For example, say you stick a bunch of cameras together and make a natively spherical video, and say that video has all the information you need to get some fancy stereo working in your HMD. That’s great, congratulations, no seriously, good work. Unfortunately, the next step, saving your video so you can look at it, or store it, or edit it in any way, requires you to smoosh your fancy spherical video into some arbitrary rectangle. At the moment you are probably stuck with some 1 by 2 proportion, but the layout of the rectangle hardly matters. After all, your screens are flat and your storage format is rectangular and your compression standard is a lazy 2-dimensionally-biased jerk that’s only there as a stopgap. Projection mapping is just an intermediary step while we are stuck working mostly on screens instead of in VR/AR.

Don’t get me wrong, there’s no need to break out an .obj file for video, a pile of discrete triangles. You don’t need to store mountains of 3D information for every frame, just stop trying to put a round peg in a square hole. Flat screens were a bootstrapping technology necessary to interface with the digital until we could bring it out of the metal boxes and glass-fronted handhelds into the world with us. But if we’re not careful, that history of rectilinearity will bias off-screen digital content in the most seemingly innocuous of crevices. Take the pixel:


Before you get all: “HMDs all use LCD screens with square pixels, and in fact so do all projection based AR systems, so you’re wrong, so there,” take a breather and let me explain.

This is a storage issue, not a playback issue. No matter what we talk about here, there will always have to be a conversion step for your favorite viewing technology. A physical display-type pixel currently looks like a glowy red thing really close to a glowy green thing which is really close to a glowy blue thing. Change the brightness of any one of the three and voila, we see colors. This kind of light-emitting pixel isn’t going away anytime soon.

But the kinds of pixels we use to measure digital images (PPI, 1080p, 4K, etc.) are getting pretty frustrating in VR video.

I’m not saying we need to fundamentally change the relationship between pixels in a raster image, essentially just hex color values grouped by your favorite compression, but instead rethink the way VR/AR content is stored. Keeping everything spherical would mean you never resort to flattening. Instead of reusing old formats, spherical-based compression could be optimized for storing the two data points required for spherical coordinates (polar and azimuthal), rather than stretching at the poles and splurging your limited pixel budget in the worst possible places.
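The scale of the waste is easy to estimate (a back-of-the-envelope sketch, not a claim about any particular codec): an equirectangular row near the poles covers a sliver of sphere proportional to the cosine of its latitude, yet gets as many pixels as the equator, so roughly a third of the pixel budget buys no extra coverage:

```python
import math

def equirect_overhead(n_rows=1024):
    """Fraction of an equirectangular image's pixels that are 'extra'
    relative to an equal-area layout: every row stores the same number
    of pixels, but the band of sphere it covers shrinks by cos(latitude)."""
    lats = [math.pi * (i + 0.5) / n_rows - math.pi / 2 for i in range(n_rows)]
    mean_cos = sum(math.cos(l) for l in lats) / n_rows
    return 1 - mean_cos

# roughly 0.36: about a third of the pixels are redundant
print(round(equirect_overhead(), 2))
```

That's the waste an equal-area scheme like HEALPix avoids by construction.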

Spherical pixels with uniform area, from NASA’s HEALPix project

Just this, just eliminating this waste, plus optimizing compression for stereo by educating it on redundancies between the overlapping eyes, increases the quality of stereo spherical video and also improves playback. Sure, actual export and compression times might increase, but I am all for an upfront investment of time if we get fewer wasted pixels and fewer distortion passes, and thus higher playback performance, as a result. It’s a good old time/space tradeoff.

What should we call this native storage format? Perhaps the .ssv or the .s3d or, and this is my personal favorite, the .FLMS (frankly lazy middle step). That’s right, you heard me: a frankly lazy middle step. This whole compression and storage discussion is a little near-future for my tastes. Time for that overthrow I promised you.

A few years out and we can ditch spherical surfaces that need to be awkwardly subdivided into healpix or sphixels (so that each pixel covers the same surface area), and replace it with volumes of light. One way, the way most people are used to thinking about 3D environments on computers, is a wiremesh stage-dressed with geometric primitives coated with different materials and a bunch of lights. But that’s only one way to think about it.

Yes, yes, I hear some of you shouting from the rafters with your ghostly wails, “Liiiight feildsss *spooky noise* Talk about Liiiiight feeelids.” OK, ok:

For those that haven’t encountered these before, the concept of light as a field was first theorized by Michael Faraday in 1846 after years of working with magnetic fields, but the first time anyone called it a “light field” was in a 1936 paper by Andrey Gershun on the radiometric properties of light in three-dimensional space. Don’t stop reading just because that sounds complicated. It’s not that fancy. The basic idea is that light can be measured in every direction from a central point. Think of it like this:


In the above image, iron filings act as a spherical sensor which follows rays of magnetism from a central point. The vectors of the field are captured as well as the foot-candle-esque drop-off in intensity over distance from the central point. The digital life of light fields started way back in 1996, because they, like most things in VR, were theorized by researchers well before mobile screen technology matured enough to give us the snappy HMDs of the current VR revolution.

My favorite paper of the period was written by Marc Levoy and Pat Hanrahan of the Computer Science Department at Stanford University on Light Field Rendering. I’m gonna quote this paper a lot, so maybe just go read it.

Light field rendering is a form of image-based rendering, which is just a method that uses 2D images of a scene to allow for the creation of new views of that scene, no 3D modeling required. A one-directionally-captured light field (currently the only kind that exists) looks a bit like this:



 An array of renderings of a 3D computer model of a buddha statuette (at top) and the transpose of the light field (at bottom)

In the top image the plane V looks out much like the human eye, with a cone of visible light converging on one point, creating an array (what would be in humans a 2-image array) of more or less the entire scene. But this is problematic: changing the view point would require, to quote Levoy and Hanrahan here, “a depth value for each pixel in the environment map, which is [only] easily provided if the environment maps are synthetic images.”

(It is usually at this point in the diatribe that people start with the hot-gluing-kinects-to-my-camera talk, but we will leave that tangent for another post.)

Light field capture and rendering saves us from this segregation of types. Light field rendered images and light field captured images can be effortlessly mixed, and with a bit of simple linear reprocessing, any view, any head position, any tilt or angle can be rendered by the inept-est of graphics cards.
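In the discrete case, that reprocessing can be as dumb as a table lookup. Here's a toy sketch (synthetic radiance, and nearest-neighbor lookup rather than the quadrilinear interpolation Levoy and Hanrahan actually describe) of a two-plane light field L(u, v, s, t) being queried for an arbitrary ray:

```python
import math

U = V = 4   # 4x4 grid of camera positions on the first plane
S = T = 8   # 8x8 "pixel" positions on the second plane

def radiance(u, v, s, t):
    # stand-in for captured or rendered light along ray (u, v, s, t)
    return math.sin(u + 2 * v + 0.5 * s + 0.25 * t)

# the light field itself: one sample per discrete ray
L = [[[[radiance(u, v, s, t) for t in range(T)] for s in range(S)]
      for v in range(V)] for u in range(U)]

def sample(u, v, s, t):
    """Nearest-neighbor lookup for fractional ray coordinates; a new
    view is just many of these lookups, one per output pixel."""
    clamp = lambda x, hi: min(max(round(x), 0), hi - 1)
    return L[clamp(u, U)][clamp(v, V)][clamp(s, S)][clamp(t, T)]
```

Moving the virtual eye just changes which (u, v, s, t) each output pixel asks for; no geometry, no depth, no modeling.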

So that is my now not-so-secret goal for VR: that VR will become a medium of space. Not of video or games or the web, but of spaces created with both capture and generated images, moving and still, seamlessly integrated. And it looks like spherical light field cameras are the way to do it. Viva la revolution.



webVR experiments

posted in: Uncategorized | 0


Making webVR sites is now super easy, thanks to Mozilla’s webVR browser and three.js webVR boilerplate! All you need is to know a little javascript, or, alternatively, have your coworker be webVR pioneer Andrea Hawksley, and make sad puppy faces until she helps you.

I’ve found both those strategies to be extremely successful, and have been posting new webVR experiments on my github every day, learning some three.js basics and sticking them on my face.

Open up the webVR-playing-with index in your webVR browser and hit enter to go fullscreen (be sure to click “accept” on the fullscreen dialog or it will eat your mouse), or navigate in a normal browser using WASD + E/Q.

We’ve had a lot of discussions on what the best theoretical controls are for webVR, but for now we need to design with mouse+keyboard in mind. How do you click links? Mousing around a 3D space with a regular cursor as you also turn your head is extremely awkward, so for these pages I tried using mouse movement/click as an abstract way to interact with the scene, no absolute position in space required.

For links, we like the idea of looking at a thing and pressing space, because space is a big button that’s easy to find by feel, and I like the metaphor of using space to “jump” into a new page. Another possible button would be “enter,” but I like enter for entering VR mode.

Starting from the index in VR you can navigate around without having to exit fullscreen. The individual URLs are index, spindex, compound, wave, and pointland (so far. I’ll probably keep adding things until I get bored. [edit: undefined and particles exist]).

Ideally all these directions would be integrated into the site via an overlay or text or good UI like mozvr has, which I will probably learn how to do eventually.

Partly I’m just messing around to try and figure out how to do very basic things. Making an index that contains mini-versions of larger objects is a great exercise because I have to repeat the same basic idea but hopefully with better understanding so I can re-implement it in a cleaner way.

The other motivation is to test the sorts of interactions that we want to have in the actual webVR version of our own site:


Andrea’s experimental navigation system for eleVR is the start of a more ambitious webVR project. Ideally, when you visit, it would drop you right into our office, where you could look around and see all our latest stuff. For now, you can look around our office and select various floating spheres textured with our 360 pictures/videos, and press space to enter/exit one (though at the moment only one actually has a video behind it).

I’m sure Andrea will post more about it when it’s further along, but it’s very cool and I’m excited.

Emily has also been experimenting, bringing her iconic style to The World’s First WebVR Glitch-Horse:


Start getting your pre-nostalgia for the real world while you can, because webVR is going to change everything.


eleVR tries castAR

posted in: Uncategorized | 0

The other day we got to visit Technical Illusions and try their very cool AR headset, castAR.

Basically, you have projectors mounted to your head, and a headset that tracks your movement and projects the view you’d see in the alternate reality universe onto the real universe. Here’s the really clever part though: they cover parts of the room with an inexpensive retroreflective cloth (similar to shiny biking gear) that reflects your projector’s image crisply and clearly back to your own eyes, but not to anyone else’s. This means multiple people can be projecting onto the same place, and still each see only their own views in a clear and convincing stereo image.

We were very impressed by how well it works. It really feels like you are looking at objects on the table. You can still see your hands and everything else in the ‘real’ world in front of you as well, so it’s easy to want to grab the objects virtually sitting on the table in front of you.

The castAR can also pick up cues from little infrared LEDs embedded in the scene. This means that you can actually move your whole body around and look at the virtual objects from different angles.

The current field of view leaves something to be desired and the example demos so far are uninteresting, but the technology is there. It’s going to be very convincing once some good applications are made that take advantage of having an entire alternate universe sitting on the table in front of you that can be seen and manipulated by multiple people in the room.

Most amazingly, you can look at other humans! This seems basic, but it’s amazing how much it’s been trained out of me, having worked primarily in VR for so long. I had to actively remind myself that I was able to look away from the 3d architectural model sitting on the table and look at people when we talk. Each time, it felt like I was cheating somehow. Magic!

Also magic: because you can see the real world around you, none of us experienced any motion sickness (and some of us are quite sensitive to that in VR).

Technical Illusions has a video where you can see it in action, and there’s also some footage from our visit in the beginning of Emily’s latest vlog.

– The eleVR Team

MozVR launches with eleVR player

posted in: Uncategorized | 0

Today marks the 10th anniversary of Firefox, and as part of the celebrations, the Firefox webVR team has officially launched an amazing VR demo website: MozVR.

At eleVR, we have been fully behind the VR web since before there was any official support for webVR (the first version of our player used a third-party plug-in to get Oculus data). We were amongst the first to have a real functional webVR demo, and were delighted to contribute when Josh Carpenter and the Firefox webVR team asked us if we would be interested in integrating our player into the Firefox webVR experiments. We share the desire to make the VR web as accessible and open as the flat web, and it’s been great working with Mozilla to get our open source video player and creative commons content functioning on MozVR.

The MozVR website is intended to showcase how one might interact with the VR web, allowing you to navigate between a number of VR demos. It currently includes two films that are being played using the eleVR web player (one of which is our very own eleVR talk chat show thing episode 4). It’s a small glimpse into a future where you plug in any headset, open your browser, and have the entire web around you, viewable with no extra plugins or applications.

Mozilla’s webVR-enabled browser (which is just as easy to download, install, and run as regular Firefox) natively supports several versions of Oculus headsets, and will support more types of headset as they come out. The experimental VR Chromium browser also works with the site, and we hope to see more browsers supporting webVR soon.

– The eleVR Team

A Collaborative Work

posted in: Uncategorized | 0

Go ahead. Watch the video before you read a word I have to say. I know you will anyway.

When you get back: This is a painting.


The Natural History Museum, Rackstraw Downes, 1976-1977

Look at the dark detailless bulk of building rendered skyline by backlighting, the spreading fingers of the trees, the sidewalk bent around the warping curve of projection. The field of view requires a nearly 180 degree turn of the artist’s head to capture, and it was this motion Rackstraw Downes was painting as much as the landscape itself. He painted a shifting point of capture.

Once you move from the 3D world…to a flat surface you inherently move to the world of metaphor.

-Rackstraw Downes

Downes’ work approaches landscape painting as a study in looking and translating from the language of volumes to that of flat images, a topic I have been more than a little obsessed by since I started on this whole building-VR-web-video-of-the-future thing. VR is, after all, still a very flat place: flat screens with head tracking on top, flat video codecs, flat image sensors. But a rant on the exact escape velocity required to get us natively 3D throughout the video pipeline is another blog post entirely.

Let’s stick to that shifting POV. Downes said of his work, “Everything changes as you make the minutest motion of your head, and still more when you move your shoulders.” He depicted his experience of the changing gaze in his paintings, but if that same insight is applied to audiences, I can paint with your gaze as well. If the motion of your head, an action made input in VR, changes everything, then the space around you, how you perceive it, and you yourself are all up for grabs in VR’s palette.

Enough stalling, time for the proper post-mortem:


The video above is a portion of a piece titled “A Collaborative Work,” made by Arletta Anderson (dance), Mike Rugnetta (audio), and me, Emily Eifler (video). We worked with curator Ken Becker to create a site-specific work at the Wattis Institute for Art in San Francisco, where it was originally shown alongside live performance, projection, and generated sound. Each participant had a live portion and a recorded portion.

Here’s how the project went for me:



Specification: Depict layered change

First change the space, then the point of view, then you (or at least your position).

Step one: Start with a reproduction of the real space

Step two:  Break it

Things that happened I didn’t expect:

Slowly time shifting one set of walls meant that eventually looking inside a building is also looking back in time



Specification: Act as support vehicle to those traveling


“Your turn.” *soft smile, friendly, welcoming* “Come sit. Have you ever tried a VR headset before?” *listen carefully* “That’s fine, no experience necessary. We will put this part over your eyes and you’ll pull the straps over your head. If you ever feel uncomfortable and want to stop, just tell me; I’m not going anywhere.” *help with headset* “Feel free to look around. I’m going to put the headphones on now.”


*Help take headphones and headset off* *smile*


Things that happened I didn’t expect:

Many people after viewing the video portion made a point to tell me it was emotionally moving in a way they couldn’t quite describe.

For the 4 minutes I sat watching each person experience the piece, watching for reactions, I was free to stare. To look closely at the person’s outfit, or neck muscles, or necklace. To see the little smiles they didn’t think anyone else could see.

Mike and Arletta and Ken were all very helpful in shaping my thinking around making a space-specific work. Now I am off to go bug them into making more art with me in the future. If you are interested in seeing more of our collaboration, check out (soon). You can download the “A Collaborative Work” video from our downloads page. Also, going forward on eleVR, I’ll share the trove of artists out there whose thinking and work have interesting things to say when applied to VR.


Photo Credit: Rosa Jung

Talk Chat Show Thing Episode 4: Mozilla and WebVR

posted in: Uncategorized | 0

Episode 4 is here! (Yes, I know they are out of order.) You can find it on our downloads page.

This time we are chatting with a team from Mozilla about their work on the VR web, but really if we are being honest you want to watch this for the stunning sunrise we got up at 5 in the morning to shoot just for you. Come sit, perched atop the Mozilla building at the foot of the Bay Bridge in San Francisco watching the glittering Bay Lights fade as the sun rises over the east bay hills, and listen to us chat with Josh Carpenter and Diego Marcos, a couple of the people working on bringing VR and the web together. Their team also includes Vladimir Vukićević, a co-creator of webGL and now Director of Engineering for Mozilla. 

The Mozilla webVR team wants to create a browser that allows the wealth of web developers out there to easily create native VR websites without having to manage the specifics of output to every HMD that might grace the market. The web has long been a hardware-agnostic medium and they want to keep it that way for the VR web as well. How will we navigate this new VR web? We discuss ideal abstract user controls and the potential of voice and even EEG as hands-free interaction. What will it look like? That’s up for debate as well, but we do look closely at comfort, usability, and meat-space examples like phones as positionable, customizable HUDs overlaid on the 3D environment of reality.




10 fun and easy things that anyone can make for VR

posted in: Uncategorized | 0

“I want to make VR stuff like you do. How can I get started making content if I can’t afford tons of cameras and equipment?”

People ask us variations of this question all the time. And it makes sense, because being able to build your own thing and own a creation and show it to people and say “I did that” is one of the most empowering things there is. I love things that give me the opportunity to figure things out, expand on that, and then create something of my own. There’s a reason why my favorite video game, Chuck’s Challenge, is a puzzle game with a detailed and easy to use level editor that players can use to create and share their own levels.

At eleVR, we come at video from a “Youtube” background rather than a “Hollywood” background. We want VR to be accessible to lots of people all making their own unique content. For that reason, I’m very excited by the various Kickstarter campaigns for affordable panoramic video cameras like bublcam and 360cam that stitch for you, even though these cameras can’t do stereo video (a much harder problem). But what if you want to get started right now, or don’t want to shell out for a panorama-specific video camera? What can you create right now?

One option is to create a remix video, like Vi blogs about doing here.


I’ve been exploring in an entirely different direction. You might remember that I really like the idea of phone VR. There’s something really compelling about having a single little box that can do everything for you. A magical device that you can use to check your email, watch cat videos, take pictures, view VR content, and maybe even talk on the phone. But, you know, if that magical device is really going to do everything, you had better also be able to make your VR content on it.

And, handily enough, there are already tools for creating VR content on your phone. Still panoramas can easily be made with a variety of phone apps. For example, Google’s Photo Sphere application allows you to quickly and easily capture a fairly high resolution equirectangular still panorama on your phone. In fact, a high enough resolution image that it’s actually too large to be used as a texture by webGL on my computer without downsizing it a bit! You can check the max texture size that is supported by your setup by going to this url: For my computer the max texture size is 4096px, so any image with a larger dimension than that needs to get downsized before it can be used as a webGL texture.
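If you want to check that limit programmatically, a minimal sketch looks like this (the WebGL query itself only works in a browser; the helper is a plain function you can use anywhere):

```javascript
// Pure helper: does an image exceed the device's max texture size?
function needsDownsize(width, height, maxTextureSize) {
  return Math.max(width, height) > maxTextureSize;
}

// Query the actual limit in a browser with WebGL support.
if (typeof document !== 'undefined') {
  const gl = document.createElement('canvas').getContext('webgl');
  if (gl) {
    const max = gl.getParameter(gl.MAX_TEXTURE_SIZE);
    console.log('Max texture size:', max); // e.g. 4096 on my machine
  }
}
```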

I first played with panorama applications a few years ago, but without VR, I didn’t really find them compelling. Mousing around the world and looking at a tiny slice of it on a phone screen made for frustrating ways of looking at pictures, and the raw image with its weirdly distorting equirectangular projection was just not that easy to look at. And, even with VR, it’s not necessarily immediately obvious why mono photo panoramas are particularly compelling, but I’ve actually been having a ton of fun experimenting with them.

To make it easy to see what I’m describing, I’ve also created a fork of the eleVR web player to show panoramic images, with a series of panoramic images that show these ideas. Just like the eleVR video player, the eleVR Picture Player works with the Oculus on webVR enabled browsers. It also works with Google Cardboard on your webGL enabled iPhone or Android device.


Go check out the eleVR Picture Player here to see the example images that I describe below. If you make your own panoramic images you can look at them in the picture player as well, by loading them using the file load button in the bottom right. Just make sure your image resolution doesn’t exceed the maximum texture size for your device!

Here are 10 fun and easy things you can make in VR using phone panoramas as your starting point. You can look at examples of the still panorama ideas by choosing the appropriate image from the image selection dropdown on the eleVR Picture Player. Although I mention some ways to create video and stop motion using still phone panoramas as a starting point below, I’m going to save examples and thoughts on those for later blog posts.

  1. Take and share awesome VR vacation photos. “Kirby Cove” is an example of this. This is the easiest thing you can do (it’s really what phone panorama apps were intended for), and it’s way more fun than just sharing old school flat photos.
  2. Create an “I spy” or “Where’s Waldo” type experience. Hide things around the scene, then take your panorama, or have a friend move around to different places as you take the panorama so that they show up multiple times. Share it with people and try to see if they can find all the places that things or people are hiding. Vi shows up at least a baker’s dozen times in the “Utrecht Canal” panorama. Can you find all of them?
  3. Apply an image filter to turn your image into a surreal universe. For example, I used a “sketchification” tool called “My Sketch” to make the “Mosaic Math Art” panorama look more like a sketched universe. A fun side effect of “sketchifying” the panorama is that it makes the stitching errors look more like artistic decisions than problems.
  4. Combine different parts of multiple panoramas to create a scene that could never exist in the real world. I combined a Mars panorama from NASA with a night sky to make the “Mars with Stars” image. This picture was inspired by one of the demos on the new Samsung/Oculus Gear VR.
  5. Try taking a couple of panoramas from the same point but starting the panoramas just slightly offset from each other, and then combining them one on top of the other. It’s possible to do some adjustments as you line them up as well. This is really hard to do well, although it does work better freehand than you might expect. “phone stereo test” is an example of this. Ideally phone panorama taking software could actually stitch stereo panoramas for you, but nothing can do that right now. Pro tip: You can shift an equirectangular panorama left and right without affecting its ability to get stitched back into a sphere, so you can line up the panoramas as you put them on top of each other.
  6. “Photoshop” in something interesting. I like cats, so in “Denver Botanical Gardens”, I have gratuitously added some cats from my earlier blog post on projections to the scene.
  7. Take a series of panoramas where you always start by looking at the same point, but move slightly each time to create a stop-motion-y VR hyperlapse experience.
  8. Experiment with other ways to create stop motion videos using your phone panorama as a starting point and moving things around in the scene (probably using some photoshopping software).
  9. Make a remix video like Vi’s incorporating panoramas that you have taken yourself.
  10. Combine your panorama with a green screen video to create an easy VR video experience without needing tons of cameras. I love the idea of using a simple pair of stereo cameras to capture video over a green screen and then putting that video into a still panoramic background to easily create a “stereo” VR video experience.
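The horizontal-shift trick from item 5 is easy to do programmatically, too. Here’s an illustrative sketch (not part of the picture player) that rotates an equirectangular image by shifting pixel columns with wrap-around, working on a flat RGBA array like the one a canvas `getImageData` call gives you; it works because the left and right edges of an equirectangular image meet on the sphere:

```javascript
// Shift an equirectangular panorama horizontally by shiftPx columns,
// wrapping pixels that fall off one edge back around to the other.
// pixels: flat RGBA data (4 bytes per pixel), row-major.
function shiftEquirect(pixels, width, height, shiftPx) {
  const out = new Uint8ClampedArray(pixels.length);
  const s = ((shiftPx % width) + width) % width; // normalize, handle negatives
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const srcX = (x + s) % width; // column that lands at x
      const src = (y * width + srcX) * 4;
      const dst = (y * width + x) * 4;
      for (let c = 0; c < 4; c++) out[dst + c] = pixels[src + c];
    }
  }
  return out;
}
```

Shifting by the full image width (a 360° rotation) returns the original image, which is a handy sanity check.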

Obviously, this is just scratching the surface of things that anybody can create with just a smart phone (and sometimes also a computer), but I hope that it gives people who have been wanting to try creating their own content in VR, but feel like they don’t have the skills or money, a starting point to explore from.


The Process By Which Repeated Opinion Becomes Fact

posted in: Uncategorized | 0

A new experiment is available for your face: a VR remix video.

I think it’s the world’s first VR remix video? WORLD’S FIRST VR <enter specific thing here>!!! And you can torrent it here.

Somehow, VR film has already attracted some “common wisdom”: things like that you can’t do hard cuts, that you must aspire to be as realistic as possible, that you must have zero visible stitching, a high frame rate, high resolution, etc etc.

Two weeks ago I saw a draft of Emily’s VR film for “A Collaborative Work”, in which she breaks space by stitching together still footage from moving footage. It was transformative. In that moment VR film changed forever for me, and I decided to throw out everything I thought I knew about VR film and do some hardcore space-breaking.

I’d just made this experimental music piece “The Process By Which Repeated Opinion Becomes Fact,” and it demanded to have an experimental music video.

So I tried out a few things:

1. Remix video, to prove that you don’t need your own fancy 360 camera to make VR video art. We now have many VR videos available under a Creative Commons noncommercial share-alike license on our downloads page, and I’m hoping that as our available collection of footage grows we’ll start seeing more artists take advantage of it.

For this video, I chose a moment that had consistently gotten a reaction from viewers: when I demonstrate how close you can get to the camera without stitching errors in the second episode of our talk show, and tried repeated experiments on it to mirror the form of the music.

2. Non-realistic abstracty stuff, because while virtual reality is cool, virtual unreality is even cooler. It’s tough to invest all the work of filming and stitching on something crazy that might not work, but having pre-existing footage to experiment with makes it worth it. So I tried out a bunch of effects and surreal things to see what works. Turns out, our brains are surprisingly forgiving with many of the effects. Now that we know of some crazy things that do work, we can confidently film with these things in mind.

Stereo video adds some difficulty to the process. If an effect affects the left and right footage differently, you might no longer be able to mesh the stereo. Slight differences between eyes end up being huge differences when high-contrast effects are applied, and the mirrored footage is not actually mirrored, but mirrored with a swap of left and right eye and then blended.

3. A flat cut of a VR video. I used animated camera motions to make a flat version of the full spherical video, and uploaded it to my secret second YouTube channel:

We learned a lot from this, and are already working on the next experimental remix video.

I’m interested in the relationship between the flat cut and the original VR film. Future VR music videos might have a lot going on, and you’ll miss a lot on the first viewing. Short music videos are especially conducive to repeated viewing (this may be more likely if it’s your favourite pop song than an experimental 12-tone piece). I like to imagine people watching flat cuts of a music video that highlight various cool things going on in the VR version, and then enjoying finding the things from the flat cut during repeated viewings of the spherical version.

I can’t wait to see a VR music video directed and filmed from the beginning with this goal in mind. And we’d love to see what other people can do with our creative commons footage.


eleVRant: Stereo Polygons

posted in: Uncategorized | 0

Last time we talked about camera balls for mono spherical video. Today we’re getting into stereo spherical video, starting with a focus on the most common “3d 360” paradigm: the stereo polygon.

Full panoramic stereo video is made by capturing two panoramic videos of opposing panoramic twist (see earlier posts to brush up on how that works). You only need two cameras to cover each place where you want to see in stereo, but those two views need to be level with where you expect the viewer’s eyes to be. This is why fancy polyhedral arrangements, as beautiful as they are, are mostly thrown out in favor of discs of cameras that have additional mono coverage on top and bottom, though there are other possibilities.

The simplest arrangements involve one set of cameras for the left eye and one set of cameras for the right eye. You could think of this as stereo pairs of cameras that get stitched to other stereo pairs of cameras, which most setups make visually obvious, but it’s more accurate to think of it as two panoramas stitched from views with opposite twist.

The Panocam3D setup is a good canonical example of a stereo polygon, with right/left camera pairs easily visible around a hexagon.

Think of what the left eye sees as it looks around at the left eye video for this camera setup. For anything right in front of you, it always sees the view slightly to the left of it, just as it would in real life. Of course, this holds true even when looking to your left. When you turn your head to the right and look out of the left corner of your eye at that same object in the video, in real life you’d now be seeing the view from the right of that object, but in the video you’re still seeing the same thing you were before. You still see the view from the left.

This makes stereo spherical video fundamentally imperfect, but it’s enough to trick your brain, because whenever you’re looking at an object with both eyes, even if the view isn’t quite realistic, it’s still a consistent stereo view. Your left eye sees lefter than the right eye, and thus you can focus on the object and perceive depth.

Jim Watters’ homemade panoramic setup is a little less obvious, but has the same idea. Seven pairs of cameras around an entire 360 panorama, except the pairs overlap to give a star heptagon rather than an obvious polygon like the panocam.

There’s a twisty circle of cameras for the right eye, and an opposite-twisty circle of cameras for the left. The smaller radius of the setup (compared to a non-star heptagon) should make it easier to stitch.

You can see the cameras in this setup as being in pairs, but the important thing is that right and left views one interpupillary distance apart are parallel and level. Wherever there’s a left-eye view, one IPD to the right is a parallel right-eye view.
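That layout is easy to describe in code. Here’s an illustrative sketch (not from any real rig’s software) that computes 2D positions for N outward-facing stereo pairs on a ring, with each left/right camera offset half an interpupillary distance (IPD) perpendicular to its pair’s facing direction, so every pair gives level, parallel views exactly one IPD apart:

```javascript
// Compute camera positions for N stereo pairs arranged around a ring.
// ringRadius and ipd are in the same (arbitrary) length units.
function stereoRing(numPairs, ringRadius, ipd) {
  const pairs = [];
  for (let i = 0; i < numPairs; i++) {
    const theta = (2 * Math.PI * i) / numPairs;
    const facing = { x: Math.cos(theta), y: Math.sin(theta) };
    // Unit vector perpendicular to facing, pointing to the pair's left.
    const left = { x: -facing.y, y: facing.x };
    pairs.push({
      facing, // both cameras in a pair share this direction: parallel views
      leftCam: {
        x: ringRadius * facing.x + (ipd / 2) * left.x,
        y: ringRadius * facing.y + (ipd / 2) * left.y,
      },
      rightCam: {
        x: ringRadius * facing.x - (ipd / 2) * left.x,
        y: ringRadius * facing.y - (ipd / 2) * left.y,
      },
    });
  }
  return pairs;
}
```

By construction, the left and right camera of every pair are parallel (they share `facing`) and separated by exactly `ipd`, which is the property that makes the stitched panoramas mesh in stereo.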

We followed this same double-camera-disk idea for our first stereo spherical camera prototype. Four cameras for left eye and four for right eye complete a 360 stereo panorama.

Our foam-core prototype works surprisingly well, though it doesn’t allow for plugging in the cameras (GoPro battery life is very unreliable), and doesn’t help GoPro’s overheating problems in hot environments. The camera placement works quite well, though, with all the stereo parts being created by level parallel views from an interpupillary distance apart.


The Panocam and Jim Watters’ setup shoot a stereo 360 panorama with no top or bottom of the sphere, but with additional cameras their stereo panorama could be completed with a non-stereo top and bottom. There is nothing jarring about a transition from stereo around to mono looking upwards; most people don’t notice the transition, which is why stereo panoramas with mono completion are standard for stereo spherical video right now.

In our prototype above, after the four pairs in a stereo square, four more cameras cover top and bottom, all of which get stitched to both the right and left eye’s video, for a complete spherical video that is stereo around the middle. If gopros had a field of view as tall as it is wide, we’d only need one camera each on top and bottom.

If you’ve been following our previous posts however, you’ll know that if we want stereo on top and bottom, we run into problems. If we have it so there’s a stereo pair of gopros pointing up that stitch such that you can look up from a forward-facing view, then they’ll go out of sync if you look up while turned to the side. You can plan to have your viewer only face their body forwards, or you can plan to have them spin around, but you’re going to have to sacrifice 3d somewhere.

For an example of what not to do, think of what would happen as you look around a video captured with 360heros’ flagship rig. It’s designed to be a pretty and polyhedral arrangement of cameras, with no thought given to what those cameras actually capture.

When looking at objects in front of you, sometimes your left eye sees them from slightly to its left as it should, but then, as you turn your head, suddenly you’re seeing the view of an object in front of you as if you were seeing it from slightly above. This alone wouldn’t be a huge problem if the right eye saw the object as seen from slightly above and to the right, but instead the right eye sees the object as seen from slightly below. Instead of level left and right views that mesh to give a stereo image, you get two images that don’t mesh at all unless you suddenly tilt your head 90 degrees.

It’s interesting to think about stereo pairs of spheres designed for viewing with a tilted head or smoothly changing head orientation, but that 360heros rig has huge discontinuities in stereo disparity.

Do not buy this camera holder. I’m still hearing from people wondering why they can’t get their $1000 camera holder to output videos that work, and people still considering buying it because it’s so pretty and well marketed.


A pyritohedral arrangement, so optimal for mono spherical video, makes zero sense for stereo and so I dismissed the idea on principle, but Emily and Andrea wanted to try it out anyway just to see what it would actually look like, so they hacked together a setup that covers all the relevant singularities and tried stitching a video out of it.

This turned out to be a good idea, because we learned just how sneaky the resulting video is to our brains. Parts of it mesh, and parts of it just barely don’t, and you’re left with a feeling that it almost works, and that maybe with just a little tweaking in the stitching software you could get it to work.

We know mathematically that the error is in the hardware setup and not our stitching, but if we didn’t, we can see how you might think it’s a viable setup and your own fault if you can’t get it to work. It was a really useful exercise in helping us understand the way the brain tricks us and to guard against making similar mistakes.

360heros also has a couple of stereo polygon holders for gopros that do work. Gopros arranged vertically have a field of view that will stitch with five cameras around, so there are two pentagonal arrangements of ten-camera disks: one with one gopro each on top and bottom, and one with two on top and two on bottom. 360heros claims that the one with two cameras on top does full stereo all the way around, which, as previously mentioned, is not mathematically possible. You could have it stereo all the way around in a horizontal circle, or theoretically all the way around in a vertical circle if you spin upside-down (though you’d have to switch which camera stitches left and which stitches right), but you can’t have both, so I’ve edited their original infographic (left) to reflect what would actually happen if used as suggested (right).


We got this camera holder for use as standard equatorial 360 with mono top/bottom, and I’m going to review it now:

You don’t actually need both top/bottom cameras to stitch a spherical video that’s fully stereo around the equator and mono on top and bottom, but we thought it would be nice to be able to use both on the bottom and stitch around the tripod. It turns out that when the gopros are in the holder they actually cover up the camera holder’s tripod hole. You can mount it at an angle, though the weight makes it difficult to keep level, or leave out one bottom camera. Either way the tripod is quite visible.

I like that the holder holds the cameras more securely than our friction-fit foam core prototype (securely enough that you need to use a screwdriver for leverage when popping the holders open), but it doesn’t hold them extremely accurately, and there’s often a slight tilt between one camera and its stereo partner. All the buttons and slots are accessible while in the holder so you can keep it plugged in all the time (very important, given inconsistent gopro battery life), though the memory card slot is narrow enough that we need to poke it with a screwdriver through the opening to get the card out. It’s got a lot of flaws, but both 360heros’ pentagonal setups work for equatorial stereo if you know how to ignore their marketing and do it right.

Most importantly, it’s currently available, unlike most 3d camera setups which are still in development or being kickstarted or are proprietary. You could definitely make a better holder yourself for less raw cost, or wait until someone else makes a better one, but for our research group time wins over money and we got pretty much what we expected. I’m hoping someone starts selling a better holder soon, designed using actual theory.

One more example of a stereo polygon: NextVR‘s stereo triangular setup using six RED cameras. A fancy camera with interchangeable lenses means you can stick an extremely wide-angle lens on there, capable of capturing the kind of field of view you need for that sharp an angle between camera pairs. And being REDs, they capture the kind of resolution that still looks good even when you stretch it over a large field of vision.

I’d be curious to see how the stitching looks, and if it’s possible to get anywhere near decent stitching with live capture at that sharp an angle and with camera pairs so far apart. But anyone filming with this camera probably has the resources to do some post-production or non-simultaneous capture tricks to smooth over those errors. The stitching distance on the corners is probably far enough you can’t put anything near the foreground, but on the other hand, far enough that you could hide equipment such as mics and lights that get stitched around completely.

I’ve seen other examples of stereo pair polygons, but I hope this gives you a sense of what’s out there in this space.

So are stereo polygons the way to go? How many sides would we want? Why not just make all the cameras face outwards and grab footage with panoramic twist? Why not do something else entirely?

The short answer to these questions is that fewer cameras means fewer but larger stitching errors, and lower resolution. For a small low-res camera with small field of view, you need seven pairs, but three pairs of REDs sounds good to me. Each of the three stitching locations will be more difficult to stitch, especially if you want anything close or moving between pairs, but it’s probably better to have large safe stitch-free zones if you’re going to have anyone close to the camera.
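Those pair counts fall out of a simple wedge model: each of the n stereo pairs is responsible for a 360/n slice of the panorama, so each camera's horizontal FoV must cover its slice plus some stitching overlap. A sketch (the 10 degree overlap is an assumed placeholder, not a measured spec):

```javascript
// Toy wedge model for stereo polygons: n camera pairs around a polygon,
// each pair covering 360/n degrees of the panorama. Each camera's
// horizontal FoV must cover its slice plus some overlap for stitching.
function minStereoPairs(hFovDegrees, overlapDegrees = 10) {
  const usable = hFovDegrees - overlapDegrees; // FoV left after overlap
  return Math.ceil(360 / usable);
}
```

With a wide ~130 degree lens this gives 3 pairs (the RED triangle); a narrow ~62 degree lens gives 7, matching the heptagon of small cameras.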

For video with no forward-facing bias, truly meant to be seen level on the horizon in many directions all around you, stereo polygons plus mono top/bottom are pretty ideal. However, there's no reason a film needs to stick to one stereo vector field the entire time, and there are more flexible setups possible, in addition to entirely different ideas to explore! We'll get to that next time.


eleVR Player with Native Browser Support

posted in: Uncategorized | 0


When I first wrote the eleVR web video player, webVR wasn’t something that people were really doing. Oculus support was through the “functional, but finicky” third-party vr.js plugin, and the whole experience felt less native to the web than I would have liked. In the ideal of webVR, the web developer should not even need to know which HMD a viewer is using – that is taken care of by the browser itself, but that kind of VR support simply didn’t exist when we first released the eleVR player.

Since that release, both Firefox and Chrome have started developing experimental versions of their browsers that "natively" support VR headsets. I love the experience and the idea of native webVR support. It seems much more sensible not to force the developer to develop separately for every possible HMD, and not requiring a plug-in makes it instantly accessible to more people.

Thus, I am thrilled to officially announce that the latest 'major release' of the eleVR Player works out-of-the-box with a webVR-enabled browser and your Oculus DK1 or DK2. You can check out our demo player here, but make sure that you are using a webVR-enabled browser if you want it to work with your Oculus!

eleVR player

I’ve also added some more keyboard controls, so that you can easily access standard player functionality even in VR mode. Hopefully the next version of the player will come with intuitive native VR functionality such that you won’t even have to peek out from under your HMD to press the play button, but we found the keyboard shortcuts to be extremely convenient.


p: play/pause
l: toggle looping
f: full screen webVR mode (with barrel distortion)
g: regular full screen mode (less lag)
w: up
a: left
s: down
d: right
q: rotate left
e: rotate right
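Under the hood a table like this is just a key-to-action lookup. A sketch of how it might be wired up (the action names here are hypothetical, not the player's actual function names):

```javascript
// Hypothetical key map mirroring the table above.
const keyActions = {
  p: 'togglePlay',
  l: 'toggleLooping',
  f: 'fullScreenVR',   // with barrel distortion
  g: 'fullScreenFlat', // less lag
  w: 'up', a: 'left', s: 'down', d: 'right',
  q: 'rotateLeft', e: 'rotateRight',
};

function actionForKey(key) {
  return keyActions[key] || null; // unbound keys do nothing
}

// In a browser you would wire this to keydown events, e.g.:
// document.addEventListener('keydown', (e) => dispatch(actionForKey(e.key)));
```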


Right now, the experimental web browsers introduce substantial lag to the player. The projection of the HMD is so abstracted from the developer as to be part of a "second pass" done by the browser, and not only is that second pass a bit slow, but it's also just sub-optimal to need two passes for a video player. If you want to see just how much lag is introduced for yourself, try comparing the 'f' full screen mode, with the browser-added HMD-appropriate projection, to the 'g' full screen mode without the distortion. That said, the webVR-enabled browsers have only just started being developed, and we expect them to become less laggy and generally improve rapidly over a fairly short time frame.
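If you want numbers rather than a feeling, the usual trick is to collect requestAnimationFrame timestamps in each mode and average the deltas; the averaging itself is just:

```javascript
// Average frame interval (ms) from a list of requestAnimationFrame
// timestamps; compare this number in 'f' mode vs 'g' mode.
function averageFrameInterval(timestamps) {
  if (timestamps.length < 2) return NaN;
  const deltas = [];
  for (let i = 1; i < timestamps.length; i++) {
    deltas.push(timestamps[i] - timestamps[i - 1]);
  }
  return deltas.reduce((a, b) => a + b, 0) / deltas.length;
}

// In a browser you would collect timestamps with something like:
//   const ts = [];
//   function tick(t) { ts.push(t); if (ts.length < 120) requestAnimationFrame(tick); }
//   requestAnimationFrame(tick);
```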

The ideal of having the HMD entirely abstracted away from the web developer may end up having some of the same issues that developing for mobile web has, where, ideally, you shouldn’t need to test that your stuff works with every phone, but you end up wanting to anyways because of small fiddly phone implementation differences. That said, I’ve had no issues when testing with the two different Oculus HMDs that I have at my disposal, so we’re doing pretty well so far.

As always, the eleVR Web Player is open source on GitHub, so please feel free to fork us if you want to experiment and possibly contribute to our project.


eleVRant: Camera Balls for Mono Spherical Video

posted in: Uncategorized | 0

Today we discuss setups for mono spherical video, the case where you simply want to collect all the incoming light from the world as seen by a single point in the center of the sphere. Orthogonal projection, simultaneously in all directions.

There’s a bunch of cameras designed to do full or almost-full spherical capture, from just a few wide-angle lenses to dozens of lenses facing all directions. (Note that I am saying “spherical,” not “360,” because “360” is often used to refer to panoramic cameras that capture all around you in a circle but are missing the bottom and/or top).

The more lenses, the fancier your camera looks. But how many lenses do you really need, depending on what you’re trying to do? What’s the best setup for different potential situations?

bublcam 4-lens consumer camera

For mono spherical, you don’t need cameras placed in a way that captures parallax, and you don’t need to capture 3d information, so you can get away with having a small number of cameras. Fewer cameras means less expense, fewer stitching lines, smaller file size, easier file management, less computational power, and a host of other advantages that makes everything so much easier.

Mono cameras with just a few lenses are where inexpensive consumer-level cameras with an easy end-to-end pipeline are going to be for a while. I like where the bublcam is going, and look forward to when the capabilities of small end-to-end setups start to rival what we currently need more expensive and complicated hardware and software for.

There’s two advantages to having more lenses for mono video. First, you can get slightly more accurate views with less distortion around the edges of the camera’s views, which matters if you want something with really high production quality. The closer the cameras are, the more similar the overlap will look, so with good stitching algorithms the more cameras the more seamless the stitching. But this is only an advantage if you have precise enough hardware that tilt and distortion errors don’t negate the benefits of precise camera placement, and if you have good enough stitching that the seams look seamless, otherwise instead of the occasional stitching error contained to a few locations you’ll have tiny stitching errors all over everything.

VisiSonics 5-lens camera for streaming A/V

Especially for applications where you know what you want to focus on, such as a person's face, it's better for it to be easy to tell where there are large stitch-safe zones so you can put important things there. This is true for both stereo and mono filming, and knowing your actor's face won't have a huge stitching error down the middle is an absolute necessity if you don't have the ability to really tweak the stitching calibration or paint out stitching errors in post-production.

The other advantage of more cameras is resolution. A 4k camera becomes low-res when you slap on a super wide-angle lens and spread the footage across your entire vision. If you’re working with gopros for example, and are willing to sacrifice the annoyance of working with a ton of cameras for higher resolution, it’s better to use more gopros with a narrower field of view just to get the resolution up.
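To put rough numbers on that trade (the sensor width and lens figures below are illustrative, not any camera's spec): a hypothetical 3840-pixel-wide sensor behind a 170 degree lens yields about 23 pixels per degree, while the same sensor behind a 90 degree lens yields about 43.

```javascript
// Angular resolution is just pixels divided by the degrees they cover.
function pixelsPerDegree(horizontalPixels, hFovDegrees) {
  return horizontalPixels / hFovDegrees;
}

// Rough camera count to ring the equator at a target sharpness,
// ignoring stitching overlap (a deliberate simplification).
function camerasForEquator(horizontalPixels, maxHFovDegrees, targetPpd) {
  const fovAtTarget = horizontalPixels / targetPpd; // degrees one camera can serve
  return Math.ceil(360 / Math.min(fovAtTarget, maxHFovDegrees));
}
```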

However, depending on your application, you might actually need to have lower resolution and easier stitching. VisiSonics makes a 5-lens camera that outputs synchronized video and audio in real time, made for streaming live events. Live streaming means no time for complicated stitching algorithms or tweaking things by hand, and internet speeds can’t handle extremely high-res spherical video anyway, so five cameras is probably a good number.


It’s hard to arrange five cameras in a mathematically beautiful way, but I must say, VisiSonics more than made up for the ugly camera placement with the most beautiful mic arrangement I’ve ever seen. I am quite surprised to see that the many tiny mics, which together capture a sound field, are arranged like the vertices of a circumscribed propello-icosahedron! I didn’t even know anyone knew that shape! Whaaat!

Panono still camera, made for throwing

But an ugly camera arrangement is ok when dealing in mono video, because in addition to mono's easier stitching, non-head-tilt problems, fewer necessary lenses, and lower file size, it is very forgiving in camera placement. You can get away with placing cameras in any arbitrary locations on the sphere, as long as you're covering the entire sphere of vision ("arbitrary" appears to be the case for Panono, a camera ball for spherical still images). You can just keep sticking lenses on there and adding them to your finished video, since what you are trying to capture is an orthogonal projection of the sphere, which unlike the problems of stereo capture does have a perfect solution.

Of course, there is a question of efficiency. You want the least amount of camera overlap necessary for getting a good stitch at the distance you’re filming at, because in the end there is no camera overlap, just one mono video. If you have cameras with variable field of view, you can give them narrower field of view for farther stitching distance in higher resolution, or change to a wider field of view for lower resolution but closer stitching distance. Just make sure all cameras have exactly the same field of view, unless you really want to spend some time struggling with your stitching software.
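You can check the FoV-versus-stitching-distance trade with a little 2D geometry: put two adjacent outward-facing cameras on a ring, intersect the edges of their views, and see how far out the overlap begins. This is a toy model (ideal pinhole edges, no lens distortion):

```javascript
// Two adjacent outward-facing cameras on a ring of radius r, separated by
// `spacing` degrees, each with horizontal FoV `fov` degrees. Returns the
// distance from the ring's center at which their views start to overlap
// (the closest stitchable distance), or Infinity if the edges never cross.
function stitchDistance(r, spacingDeg, fovDeg) {
  const rad = (d) => (d * Math.PI) / 180;
  const half = rad(fovDeg) / 2;
  const theta = rad(spacingDeg);
  // Camera A at angle 0; its counterclockwise view edge:
  const a = { x: r, y: 0, dx: Math.cos(half), dy: Math.sin(half) };
  // Camera B at angle theta; its clockwise view edge:
  const b = {
    x: r * Math.cos(theta),
    y: r * Math.sin(theta),
    dx: Math.cos(theta - half),
    dy: Math.sin(theta - half),
  };
  // Solve a + t*aDir = b + s*bDir for t (2x2 linear system, Cramer's rule).
  const det = a.dx * -b.dy - a.dy * -b.dx;
  if (Math.abs(det) < 1e-12) return Infinity; // parallel edges
  const rx = b.x - a.x, ry = b.y - a.y;
  const t = (rx * -b.dy - ry * -b.dx) / det;
  if (t < 0) return Infinity; // edges diverge
  const px = a.x + t * a.dx, py = a.y + t * a.dy;
  return Math.hypot(px, py);
}
```

Narrowing the FoV pushes the crossing point outward, which is exactly the "narrower field of view for farther stitching distance" trade described above.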

Immersive Media’s Dodeca camera

As far as mono camera placement goes, my first instinct is to treat the lenses like points on a sphere with regular placement. The dodecahedral arrangement of Immersive Media‘s Dodeca camera is real pretty, and makes me want an icosahedral setup.

Problem is, right now cameras film rectangles, not pentagons or triangles or circles. What would have been a very efficient circle packing on the sphere is not necessarily a good idea for rectangles, and the pretty picture hides the fact that each lens represents an oriented rectangle.


(Someone let me know how easy it would be to manufacture a camera with a natively circular image sensor. I can't think of any technical barrier that should stop it from being produced the moment a manufacturer decides to do it, and I am looking forward to VR freeing us from the idea that everything has to be rectangles!)

There is one perfect mathematical solution for rectangles. It’s the pyritohedral arrangement you might be familiar with from volleyballs. The arrangement is similar to a cube in that it has six cameras filming six faces, but by carefully orienting them (related to the lovely fact that the vertex graph of the cube is two-colorable so you can have points of rotation that alternate clockwise and counterclockwise, and have I mentioned this is one of my very favourite symmetry groups and I loves it?), you can take full advantage of the rectangular view of the camera. A cube would waste all the pixels outside the center square of footage. A pyritohedral arrangement is optimal.

This is perfect for GoPros, which have exactly the right field of view such that six gopros of alternating orientations will have just enough overlap for good stitching. We used this for our gopro mono spherical rig (made out of foam core), and so have 360heros and Freedom360, and you can 3d print your own holder at home if you have a makerbot, thanks to dtLAB.

duct tape + foam core
dtLAB on thingiverse

Given a bunch of specs for cameras, I’m sure there’s some relevant research on sphere covering that would help you write a program that tells you how many of what type of camera in what setup leads to the best bang for your buck.

For eight cameras, it might be tempting to space them evenly into an octahedron, but cutting an equilateral triangle out of a rectangular FoV means you're wasting at least half of your footage. You'd actually be better off just filming in a pyritohedral arrangement with six cameras.

My best guess for eight cameras is orienting them like in a tetragonal trapezohedron, but I have a suspicion that you’d still be able to cover the sphere with only six of those same cameras filming in the same FoV. If there’s a benefit, it’s probably not worth the hassle and expense.

But someone else can math that one out! I’m more interested in the question of a 12-camera setup.

[Edit: someone took the above challenge! Spherical video creator Jim Watters compared a trapezohedral 8-camera rig using a particular 16:9 field of view to the same cameras in the pyritohedral arrangement, and as you can see below, the pyritohedral arrangement just barely does not cover the whole sphere.]

Eight cameras covering the entire sphere trapezohedrally
Six of the same cameras don’t quite make it pyritohedrally

[I think you’re better off switching to a squarer aspect ratio or otherwise increasing your horizontal FoV by 5 degrees rather than adding the extra two cameras, but at least we know exactly what the tradeoff is now, so thank you Jim for sending that in and letting us use your images 😀 /end edit, back to 12-camera setups]

This is where things get kind of interesting, because there's two fundamentally different highly-symmetric ways to arrange 12 points on a sphere. The regular dodecahedral arrangement seems obvious, and it in fact is very closely related to the pyritohedral arrangement. If cameras filmed circles, it would definitely be the right choice. The other contender is a rhombic dodecahedral arrangement. Rhombic dodecahedra not only tile space but have nice rhombic faces that look closer to being like a rectangle than a pentagon does.

So which is better? Well, if we compare a best-fit of a regular pentagon and a sqrt(2) rhombus on a rectangle with our camera’s aspect ratio, we should get a pretty good idea (the sphericalness complicates things only slightly, in favor of the rhombus). In fact, it turns out the sqrt(2) rhombus always wins, even in a best-fit rectangle snuggled around a pentagon. The pentagonal dodecahedral arrangement is slightly more regular so there will be slightly closer stitching at the edges, while the rhombic dodecahedron has very slightly further stitching at eight locations, but the difference is very small. More significant is that the rhombic dodecahedron only has 24 edges, that’s 24 stitching lines, as compared to the regular dodecahedron’s 30.

And resolution! There’s a very nice way to symmetrically arrange all the cameras on the faces of a rhombic dodecahedron, oriented to have maximum overlap. For GoPros, unlike the 4×3 Wide setting necessary for the pyritohedral arrangement, you can go down to the narrower 16×9 Wide for higher resolution and close stitching, and I believe you can go down as far as the 16×9 Medium setting and still have room to stitch.


The pyritohedron and rhombic dodecahedron have a nice relationship and fourish symmetry that makes them work well, and for 24 cameras the pentagonal icositetrahedron would do nicely, but once the number of cameras grows high enough you might want to abandon those symmetry groups in favor of something icosahedral. I'd want 30 cameras placed and oriented like the faces of a rhombic triacontahedron, but I'm holding out for a pentagonal hexacontahedral setup with 60 cameras (see title image).

One final note: it is possible to get a good spherical video even out of a setup that’s inefficient or that doesn’t have all the cameras quite facing out from the center. Every single one of our stereo videos contains two mono videos stitched from a subset of the cameras, and even though the projection has a panoramic twist it looks fine (though it’s more work to do the stitching).

The beautiful efficient highly-symmetric polyhedral setups perfect for mono recording are, unfortunately, not so appropriate when it comes to filming in stereo. On the bright side, stereo filming has its own set of interesting questions. Stay tuned for part 2!


Are you eleVRanting, or am I projecting?

posted in: Uncategorized | 0

Projections show up everywhere when dealing with VR video. Not psychological projection, of course, but all sorts of graphical and map projections.

They begin as soon as you start filming. Cameras, just like eyes, are lenses that are looking at a 3D scene and projecting it onto a flat surface. The kind of projection that cameras and eyes do is called a perspective projection. A perspective projection seems natural to us because it’s the same projection that our eyes use, but just like every projection from 3 dimensions to 2, information has to be lost.


Some of the information is stuff that we are consciously aware of losing (what's behind that wall?), but much of it is surprising. Our brains are amazing at inferring and reconstructing a 3 dimensional world from flat perspective projections because that's the kind of information that they get from our eyes. To really get a sense of how much reconstruction our brains do to compensate for what is lost in a perspective projection seen from a single viewpoint, it can help to look at optical illusions. Here are just three examples of optical illusions that hint at how our brain reconstructs a 3D scene from a perspective projection.

While we can’t know for sure what’s behind a wall, our brains do make inferences of connectivity and consistency to infer the full shape of things that are partially obscured. When these two figures are obscured, the ‘natural’ underlying shape that our brain fills in is of two straight bars, even though the underlying shapes are actually bent.


Similarly, we find it difficult to believe that these two cats are the same size because our brain uses the perspective lines to infer depth cues, and we know that a cat the same size would look smaller at that depth. Even though we logically know that we are looking at a flat image, we can’t help but perceive this image as showing a perspective projection of 3-dimensional space with two differently sized kitties.


Finally, I highly recommend watching Kokichi Sugihara's videos of "Impossible Motion" to see some amazing and mind-bending examples of how our brain tries to reconstruct the world and of what sorts of information are lost with perspective projection.

If we were just making a single flat video then the perspective projection of the camera would be the only projection that we’d have to worry about, but for VR video the perspective projection generated by each camera is destined to be warped even more.

First, we need to get a full 360 degree view of the world. There don't exist cameras that can film in every direction at once, so we're going to achieve this by filming with a lot of cameras. Each camera captures a flat image of the world, and of the light passing through it, from a position some distance from the center point of all of the cameras. All of these images need to be stitched together to create a panoramic sphere of video.

If you’re thinking that it seems like it should take an infinite number of infinitely small flat faces to get a perfect sphere, then you are absolutely correct. Our spherical videos aren’t really representing the true view around a point. We’re stitching together a bunch of videos to create a large, roughly spherical image with a lot of flat parts, which we, of course, are going to project onto a sphere.

These inaccurate 360 panoramas let us take advantage of panoramic twist to create two different 360 panoramas around a single point and generate an impression of stereo video. But both of those panoramas are actually incorrect; there is only one accurate panorama around any given point. If we had a camera that let us capture exactly that panorama, we wouldn't have a second panorama to show the other eye and trick our brain into perceiving stereo.

Because of our panoramic twist trick, our projection of a stitched many sided polyhedron onto a sphere is actually a win for us. But you might wonder if it would be an advantage to not have to do this messy stitching and projecting onto a sphere step. Wouldn’t it be nice to have a real 360 camera? Certainly, I don’t think any of us would be distraught about not having to use loads of cameras and muddle around with imperfect stitching software.

But a 360 camera wouldn’t fix my projection problem. All of our films and formats and file types are for storing flat video information. This means that even if a single camera could record spherical video all the way around it, it would still have to make the world flat to store that information.

There do exist extremely wide fisheye lenses that can capture 180 degrees of video around a point. The light that enters these lenses is a true hemisphere of light, which we promptly mush flat and distort. If you’ve ever seen pictures from a fisheye camera, you probably already have some idea of how their lenses project the images that they capture onto a flat surface. If you haven’t, go check out some real estate listings. Realtors love fish-eye lenses because they make rooms look bigger.

The projections that we have to use to make spherical video flat cause the most eleVRanting for us. The spherical panoramas that we generate by stitching together several videos face the same problem as our hypothetical 360 cameras of needing to be stored in formats that don't understand spheres. If spherical video becomes more popular, perhaps we might start seeing video information being encoded in entirely different spherical formats, but, for now, we are storing our spherical videos flat.

Everyone who has compared a map to a globe or tried to flatten an orange peel has some real intuition for the fact that there is no way to flatten a sphere without distortion. The question for us then becomes: What is the best projection from a sphere to a plane to use for storing and playing our video?

There are lots of projections to choose from. The one that we have used for our videos thus far is the equirectangular projection. This projection makes the lines of longitude of our sphere into parallel lines of constant spacing. The lines of latitude have the same constant spacing, creating "equirectangular" squares between them. We can think of this projection as cutting a hole in the top and the bottom of a sphere, stretching it into a cylinder while keeping the diameter the same, and then slicing it and rolling it flat. Emily talks more about the specifics of this projection here.


This projection has massive distortion at the poles, as well as way more information there. In a sufficiently high quality video, this shouldn’t make a difference, but in a lower quality one, like the ones that we package with our player, you can really see that the top and bottom of the world (the ‘poles’) are way clearer than the equator.


The equirectangular projection seems to be the standard that most current panoramic video players accept and that most tools for creating panoramic video output. It also looks fairly nice in its flat form, as nearly everything is properly connected. However, I really hope that this does not become the long-term standard. Not only does it have severe singularities and not distribute the video data very evenly, but it's also fairly computationally intensive to turn it back into a sphere. In particular, it is necessary to calculate two arctangents for every pixel in order to get the correct color off of the video texture and onto the projected video. It's incredibly important to get a high frame-rate and low lag when doing virtual reality; needing to perform the fairly expensive arctangent operation so many times per frame really hurts our ability to hit our desired frame rate. Finally, because the equirectangular projection is so distorted, it's particularly difficult to edit the videos in their flat forms, and no software currently exists for enabling them to be edited as a sphere, although we really wish that someone would make some plug-ins for that.
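Concretely, those two arctangents look like this: the sketch below maps a view direction to equirectangular texture coordinates, which is the work a player has to do for every pixel (or bake into a projection mesh):

```javascript
// Map a view direction (unit vector) to equirectangular texture
// coordinates in [0,1] x [0,1]. The two atan2 calls are the per-pixel
// cost discussed above.
function dirToEquirect(x, y, z) {
  const lon = Math.atan2(x, z);                // arctangent #1: longitude
  const lat = Math.atan2(y, Math.hypot(x, z)); // arctangent #2: latitude
  return {
    u: lon / (2 * Math.PI) + 0.5,
    v: lat / Math.PI + 0.5,
  };
}
```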

As you can probably tell, I’m not a big fan of the equirectangular projection. So what do I think would be better? My personal preference would be for us to use the cube projection, which is the easiest projection of the sphere onto a regular polyhedron. Just squish the sphere onto a cube, then unfold the cube into six squares. You can even rearrange the squares to fill a rectangle so that there isn’t random blank space in the flat video that you are storing.


Turning the cube projection back into a ‘sphere’ is easy – just put your square faces together into a cube. Cube mapping is already a standard way of storing graphical environment information for games and panoramas. It is generally preferred because it is far simpler and more computationally efficient. Modern GPUs are designed to be good at cube mapping because this technique is so standard. And, in case you think that putting the faces into a cube rather than something truly spherical might look unconvincing, here is an example cubical ‘skybox’ panorama.
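The cube lookup needs no trig at all: pick the dominant axis of the view direction, then two divides give the texture coordinates. (The face orientation conventions below are one arbitrary choice, not the OpenGL cube map ones.)

```javascript
// Map a view direction to a cube face and (u,v) in [0,1]^2,
// using only comparisons and division, no trig.
function dirToCubeFace(x, y, z) {
  const ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
  if (ax >= ay && ax >= az) {
    return { face: x > 0 ? '+x' : '-x',
             u: ((x > 0 ? -z : z) / ax) * 0.5 + 0.5,
             v: (y / ax) * 0.5 + 0.5 };
  }
  if (ay >= ax && ay >= az) {
    return { face: y > 0 ? '+y' : '-y',
             u: (x / ay) * 0.5 + 0.5,
             v: ((y > 0 ? -z : z) / ay) * 0.5 + 0.5 };
  }
  return { face: z > 0 ? '+z' : '-z',
           u: ((z > 0 ? x : -x) / az) * 0.5 + 0.5,
           v: (y / az) * 0.5 + 0.5 };
}
```

Compare this with the per-pixel arctangents the equirectangular projection requires, and it's easy to see why GPUs like cube maps.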

While it’s impossible to have the spherical video information be completely evenly distributed across a flat rectangle, the cube map stores the graphical information much more evenly and with less distortion at any point, so the resulting panoramas actually tend to look overall a bit better than equirectangular panoramas in my opinion. They don’t look as good in their flattened form, but that’s not really how you should be viewing spherical VR video anyways. Less distortion also means that videos stored as a cube map may be easier to edit.

Of course, there are a great many other possible projections. We could, for example, get less distortion by using an octahedron, icosahedron, buckyball, or really most other spherical polyhedral mappings, but this would be relatively little gain at the cost of being less computationally efficient to display and more difficult to store efficiently in a rectangular format. There are also more exotic projections that haven’t been explored much at this point. Short of actually creating a video format specifically for VR video, I believe that the cube projection is the most sensible.

Let’s summarize where we are so far. First, we took our real 3D world and used perspective projection to turn it into a large number of flat 2D videos. Next, we took those 2D videos and stitched them together into a spherical panorama, where, if you stood in the middle and looked around, it should look fairly similar to how it would look to stand in the real location that we were filming and looked around. We’d love to have stopped there, but we don’t have any way of storing a sphere of movie in playable data, so we have to turn our spherical panorama into something that we can store flat in a video file. Finally, our player takes the projected video and inverts the projection, then shows you part of the world that you can actually see – your “field of view”.

In conclusion, I’m definitely projecting…



Talk-Chat Show Thing, Episode 2: Stitching, Web VR, and DK2 review

posted in: Uncategorized | 0

Episode 2 of the eleVR Talk-Chat Show Thing is here! You can find it on our downloads page.

In this episode we talk about the two kinds of stitching errors you're likely to come across: time and space; we give you a quick tour of the new camera head we are trying from 360 Heros; we introduce you to Andrea Hawksley, the developer on our team, and she and I talk about the state of VR on the web and what part mobile will play. Next we sit Andrea down for a first time look at SightLine's newest VR experience 'The Chair' and do a review of the DK2. To top it all off we muse on the interesting phraseiness of the VR jargon phrase 'chromatic aberration'. That and 'electrolyte,' cause what fun would it be to have a talk show if we didn't indulge ourselves in a few diversions.

Having a second episode released, a second episode that premiered at XOXO over the weekend, makes us the first episodic VR content producers. It's a small but exciting step for the medium, because regularity means production times are coming down, making it more feasible to iterate on content techniques instead of just focusing on improving technical know-how. Also, this is probably the best example of 3D spherical video ever released, at this point.



Editing spherical 3D in Premiere and After Effects, A Design Document

posted in: Uncategorized | 0

We are interested in working with a plug-in developer to create a plug-in for working with virtual reality video in Premiere and After Effects, so I put together this little walkthrough to describe what we need. Here goes the excitement!

The plug-in needs two distinct branches of functionality: an editor and a viewer. Both branches will need to work with two formats: a 2D spherical video or a Top Bottom stereoscopic spherical video (left eye on top). Both of these formats are exclusively 1×2. (We currently use the equirectangular projection for all our videos; it is defined below.)


Typical 2D spherical equirectangular video as seen in standard playback


Typical top bottom stereoscopic spherical equirectangular video as seen in standard playback

Branch time!

The Viewer:

One of the difficult parts of editing video for VR is not being able to see the results until after exporting the video out. The viewer would use Mercury Transmit to make playback visible in the Oculus Rift (and hopefully eventually other HMDs as well). This would mean taking head tracking information from the HMD and integrating it with playback to allow me to look around inside the sphere of video I am working on in AE or Premiere.

The Editor:

Now that I can see the video as a projected sphere instead of just a warped whole, I need to be able to edit titles and effects in the same projection as the videos. Currently a title in Premiere, for example, stays the same size and skew no matter where in the video frame you put it. This works great if your video is flat, but I need a deformation mesh to distort the shape and size of the title as I move it across the frame. Here is the current projection we use and an example of how this would look.


What stuff means time!

Definition: Equirectangular Projection (with pretty pictures!)

An equirectangular projection is a method of mapping a sphere onto a flat plane. It transforms the meridians of the sphere to vertical straight lines with constant spacing, and the sphere's latitudes to horizontal straight lines of constant spacing. This is commonly seen on maps of Earth.


To maintain this alignment the surface of the sphere is warped, maintaining its original aspect ratio near the equator and smearing horizontally as it approaches the poles. The circles on the map below are an example of how the titles, effects, etc. would change depending on their placement.
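The smearing follows a simple rule: at latitude φ the flat frame devotes the same width to a smaller circle of the sphere, so an overlay has to be stretched horizontally by 1/cos(φ) to look undistorted on the sphere. A sketch, assuming the standard parallel is the equator:

```javascript
// Horizontal stretch an equirectangular overlay needs at a given
// latitude (degrees) so it looks undistorted on the sphere.
// 1 at the equator, growing without bound toward the poles.
function horizontalStretch(latitudeDeg) {
  const lat = (latitudeDeg * Math.PI) / 180;
  return 1 / Math.cos(lat);
}
```

This is the scale factor a deformation mesh would apply to a title's width as it moves up or down the frame.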


The deformation can be seen more clearly with larger shapes such as this area of coverage map of an orbiting satellite. As the shape approaches the pole the horizontal stretch is increased.


Definition: Equirectangular Projection (Again, but with math!)

Given a spherical model

x = λ cos φ1

y = φ

where:

λ is the longitude;

φ is the latitude;

φ1 are the standard parallels (north and south of the equator) where the scale of the projection is true;

x is the horizontal position along the map;

y is the vertical position along the map.

The point (0,0) is at the center of the resulting projection.
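The same formulas, directly in code (a sketch, with φ1 defaulting to the equator as in our videos):

```javascript
// Equirectangular forward projection: longitude/latitude (radians)
// to map position, with the standard parallel phi1 at the equator.
function equirectangular(lambda, phi, phi1 = 0) {
  return {
    x: lambda * Math.cos(phi1), // horizontal position along the map
    y: phi,                     // vertical position along the map
  };
}
```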


If you or someone you know is interested in working on this project, shoot us an email! We can compensate you for your time, but you should know that we are a research group, not a production company, so the plug-ins as well as all associated source code will also be released online as open source. All contributors will of course be credited for their super extra awesome contribution!



Three Adventures in Cable Management

posted in: Uncategorized | 0

It's crazy how often working with cutting-edge futuristic technology involves a literal cutting edge.

The same box cutter that Emily used to create our prototype camera head out of foam core has found a new job stripping audio cable.

The past couple posts have been all about designing spaces for viewing VR in a way that gets away from the forward-facing bias, as well as being inviting and low-pressure. We already had setups for sitting on the floor and for lying down, so we designed an experience for standing up, where the VR magically comes down from above.

I had this vision of a minimalist VR environment that you see while standing between two hanging speakers. This not only creates extremely realistic positional audio for the two objects in the environment, but also attracts you to the art installation, gives a cue on where to look once you're in there, and avoids the common problem of how to manage headphones on top of a headset.

Cable Management Adventure #1: Speaker Cables

I had these speakers with a tear-drop shape that I thought would be perfect. Removing the base of the speakers was easy. Trickier was that they were designed to plug into a central sub that had already routed the stereo audio into mono for each speaker. Plug either speaker into a standard stereo jack, and it would play only the right channel.

I wanted one right and one left, so that I could play my two audio tracks in two different speakers, but I didn’t want the cable management nightmare of using the subwoofer hub that not only has to sit on the floor but also requires separate power.

I really love cable puzzles. I love looking at a pile of wires and splitters and converters and having to figure out what convoluted combination might work for getting me what I need. I had a similar puzzle later with trying to extend and fit different hdmi cables into a single mac mini using an assortment of cables and dongles, but audio is fun because the insides of the cables are easily reconfigurable too.

For my speaker puzzle, one option might have been to cut off one speaker's jack, open up the cable to presumably find a single wire, and connect that to the other side of another jack (the speaker's jack was thick molded plastic that seemed difficult to cut into without accidentally destroying it). But I wasn't quite sure what was in there, and didn't have extra jacks I really wanted to cannibalize.

I ended up taking a standard headphone splitter and switching the right and left wires on one of the outputs. Then I could plug the splitter into the headphone jack and use one splitter jack for each speaker. Both speakers think they’re playing only the right ear but one speaker is being fed backwards information. No jacks were destroyed, and the doctored splitter still works as a splitter.

Perfect! Now on to the next problem.

Cable Management Adventure #2: Oculus Rift Cables

Our prototype camera head relied heavily on the use of rubber bands. Our prototype minimalist hanging VR environment relies heavily on bungee cords.

Oculus Rift cables are not meant to be long enough to route around the ceiling! Maybe I should've used USB and HDMI extenders, but for the moment the best solution is to suspend the Mac mini above ground.

The mouse cable is long enough to reach a nearby chair, but the only option for the keyboard is to bungee it to the corner of the frame.

Starting the video in VR currently involves precariously mushing my face into the hanging headset with one eye closed, while using the mouse against the wall, and reaching up to use the keyboard bungeed to the frame.

We are creating the future, here.


Cable Management Adventure #3: GoPro Power

The minimalist art film for the above installation was captured using multiple GoPros, and GoPros are not known for their long battery life. I knew the first shoot went wrong when one of the files transferred really quickly.

Luckily, Emily created a beautiful solution so that we can keep all our cameras plugged in while they are filming. 14 cameras and 14 15-foot mini-USB cables should be a cable management nightmare, but over here at eleVR we know how to braid in a variety of media.

Each one is labeled at each end using Sharpie on white electrical tape. We also have a USB hub that can power all 14 cables.

Cable Management Conclusion:

It’s amazing that VR can transport us to other worlds and take us on wild adventures. It’s even more amazing, to me, that it can take the mundane and make it magical.

I love art that lets us experience invisibly-normal things in new ways, and this little minimalist VR piece is a foray into that genre. Likewise, even if this blog post isn’t a fancy technical post or DIY tutorial or release of open source software, I hope it will reach those kindred spirits who delight in simple things made magical by their context.

I love cable management. I love cable management for virtual reality. That is all.



VR installations with DK2 and Mac mini

posted in: Uncategorized | 0

Three months ago we released The Relaxatron as our first video on our brand new eleVR website. It seems so long ago now and we've learned so much since then. Flat video? No stereo? Soooo three months ago!

We’ve been preparing our stuff for showing at a couple events, and designing inviting spaces to come see VR video in other ways than sitting at a desk. We decided for the Relaxatron we’d use artificial turf and artificial plants, to lure you into an artificial experience of real plants.

Last post we showed the sit-on-a-rug setup we’re using for the VR gif animation. We ran the demo on a Macbook Pro, and were happy to find you can close the screen and it still runs fine for extended periods.

So we decided to see if the DK2 would play video off a Mac mini! Turns out it works great. With looping video and a Mac mini hidden under a box of plants, we can run a fun experiential demo space with no maintenance.

Right now the best spherical player we have for Mac+DK2 is the version of eleVR player Andrea has working in the experimental VR version of Firefox, but if you don’t want to install experimental code for experimental browsers on your machine I recommend KolorEyes for Mac mini installs.

We currently have the Relaxatron set up in a quiet corner of our office. The DK2 has an on/off button for the screen on the headset itself, so we’ve just been leaving it running all day, with instructions on the headset to turn it on.

As you’ve probably noticed in the last few posts, I have finally learned how to do finite Euclidean gifs, and I don’t know if I’m ever going to stop.

The Relaxatron
Virtual Reality .gif

posted in: Uncategorized | 0



So we made a 12-frame spherical stereo VR .gif. It is also available as a video from our downloads page, in case your VR video player does not support .gifs.

The idea was to have as many little gif stories happening around the space as possible, so you could spend any amount of time looking around finding new little things going on around you without fear of missing anything. To this end, we recruited some of the other people in CDG (the research group that eleVR is a sub-group of) to help us create stop-motion animations all around our office library.

Director: Vi Hart

Producer: Emily Eifler

Developer: Andrea Hawksley

Animators: Glen Chiacchieri, Chaim Gingold, Robert Ochshorn, Adrienne Tran, Bret Victor

This .gif is best viewed while sitting on the floor, so your head is the same height as the camera. If you’re viewing on the DK2, with most video players you can unplug the IR camera for better cable management, and it will still work fine.

The gif is also available in a smaller and non-stereo format, in case you don’t have a VR headset and just want to put it somewhere because it looks nice out of VR too!


We made this demo available at the SFVR meetup last night. We wanted to make sure people could sit on the floor and look around without accidentally bumping into anyone or getting trampled on in a crowded space, so for this reason, and for general fun, we set up a demo rug, where you are surrounded by a perimeter of objects from the video.

We were excited to bring this to an event and let people experience VR from a perspective they probably hadn’t tried before. Down with forward-facing bias! The future of VR will not happen sitting in a chair in front of a desk!


It worked pretty well, though along with our two traditional desk demos our demo space required constant maintenance against the forces of crowdedness and garbage. None of us felt it wise to leave our corner of the demo space long enough to try any of the other demos.

Luckily people came to us! It was awesome to find that many people were already familiar with our videos and tech posts, so we could skip the introduction and go straight to discussing the deeper stuff. We’re planning to demo at XOXO and Oculus Connect, so demoing at SFVR was a valuable test run of our experimental setups.

2. On Short Form Looping Media

I think about short form looping media a lot. I think about it again and again.

I make video that is considered long for the internet, things very precisely crafted to have story and all that. I also write music, often long complex things with story and subtlety. I am often annoyed by short looping music that never goes anywhere. Short looping videos that never go anywhere.

Yet there is power to it, there’s something about being able to take a medium that happens in time and remove that tricky time element, loop it short enough that you can really get a mental handle on that slice, yet still have a chain of events.

It never goes anywhere, so there’s nowhere to go but deeper into what’s already there.

There’s power in that someone with zero musical training can, while also holding a conversation and eating, completely grok the music that’s playing. Dancing at the club, you can really get into it, because it’s still there to get into. Through the power of gif you can truly understand, and feel, every aspect of the plight of that adorable kitten in the box. There is greater empathy towards cats in a world where they don’t disappear forever after they jump into the void, but jump again, and again, and again.

Cat from this famous video, gifized by Benjamin Grelle

A great story is even better when we already know how it goes.

One of the mental games I like to play is “what is the gif of ____?”. Anyone who has played “what is the gif of video” was not surprised when Vine became popular (the gif of video is not the gif, obviously). It’s fun to play “what is the gif of games.” I think it’s all the repetitive casual games where the mechanic repeats over and over, and though your actions vary slightly they are all in service of completely wrapping your head around the mechanic. The short loop is in the way things behave, though you may have slightly different input.

Obviously the gif of VR is a gif that happens around you in space. I think the future gif of VR will also have horizontal head tracking and be 3d modeled, perhaps not everything is within view from the starting position. We’d love to try a stop-motion 3d model. Imagine walking around a house where the entire house is on a 12-frame loop.

The point of the VR gif we made is to implement this idea: even though you can't see the entire world at once and there are parts of the story that happen behind you, it's ok! It's on a loop, and you can turn to see them later. You have time to look around and take it all in, without fear of missing something.

We think this came out great in practice, and are definitely convinced to make more, taking more care to have smooth stop motion, perhaps twice as many frames, and to not bump the camera in the middle.

It’s also awesome that stop-motion VR gifs work so well, because we think this is a much more achievable thing for the average person to make right now. You don’t need 14 video cameras and a computer plus software to render stereo spherical film. Two still cameras (for stereo images) can be used asynchronously to capture each direction, and stitched together as still panoramas, as long as you’re careful about what crosses the overlap. Even one still camera could work, if you have an extremely accurate way to switch it back and forth between locations.

The end!


How VR Headsets Could Reduce Lag

posted in: Uncategorized | 0

Frames rendered slower than the gif moves

This is a guest post by Andy Lutomirski, a contributor to eleVR Player.

I’ve played with, and written code for, the Oculus Rift DK1 and DK2. Both of them have a nasty problem: when you turn your head, there’s a very noticeable lag before the view in the headset starts to pan.  I find this to be distracting and to make me think I’d rather watch something on a normal monitor. Other people seem to find it unpleasant or even sickening.

In theory, there’s a nice way that this is supposed to work.  Every frame drawn on the Rift is rendered somewhat in advance.  When a VR program starts to render a frame, it knows when that frame will be displayed, and it asks the Rift driver to estimate the viewer’s future head position corresponding to the time at which the frame will be seen.  Then it draws the frame and sends it off.  Rinse, repeat.

This doesn’t work very well.  If I’m currently holding my head still, then unless the Rift is measuring my brain waves, there’s no possible way that it can know that I’m going to start moving my head before my head actually starts moving.  But it gets worse: computer games almost always try to render frames in advance.  In part, this is because you generally don’t know how long each frame will take to render.  If you take a bit too long to render a frame, then you are forced to keep showing the previous frame for too long, and this results in unpleasant judder *.  Actual games often render several frames ahead to keep all the pipelines full.
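As a sketch, the driver's prediction step amounts to extrapolating the head pose forward by the expected latency. A constant-angular-velocity version (a simplification of whatever filtering the real SDK does; the function is hypothetical) makes the failure mode obvious: while the head is still, the predicted pose equals the current pose, so the start of a turn is never anticipated.

```python
def predict_yaw(yaw_now, yaw_prev, dt_sample, latency):
    """Extrapolate head yaw (radians) to the time the frame will be
    displayed, assuming constant angular velocity between samples."""
    omega = (yaw_now - yaw_prev) / dt_sample   # angular velocity, rad/s
    return yaw_now + omega * latency
```

A still head (yaw_now == yaw_prev) predicts zero motion no matter how far ahead you extrapolate, which is exactly the lag you feel when you begin to turn.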

On the web, performance is especially unpredictable, so programs like the eleVR player really need to start rendering each frame early.

For 3D on a monitor, the only real downside is that your mouse or keyboard input can take a little bit too long to be reflected on the screen.  But, when you’re wearing a VR headset, the whole world lurches every time you move your head.

To add fuel to the fire, the Rift DK2 has an enormous amount of chromatic aberration.  To reduce the huge rainbows in your peripheral vision to merely medium-sized rainbows, the computer needs to render everything full of inverse rainbows, which makes frames take longer to draw, which adds even more lag.

I think that the real problem here is that Oculus is doing it wrong. The Rift contains a regular 2D display behind the lenses.  For some reason, Oculus expects programs to send the Rift exactly the pixels that it will display at exactly the time that it will display them. In other words, each pixel sent to the headset corresponds to a particular direction relative to the viewer’s head.

I think that this is entirely backwards.  Let programs send pixels to the headset that correspond to absolute directions.  In other words, a program should ask the Rift which way it’s pointing, render a frame in roughly that direction with a somewhat wider field of view than the Rift can display, and send that frame to the Rift.  The Rift will, in turn, correct for wherever the viewer’s head has turned in the mean time, correct for distortion and chromatic aberration, and display the result.
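For the yaw axis only, the late correction the headset would do can be sketched like this (a hypothetical helper; angles in radians, and a real implementation would handle full 3d rotation plus lens distortion):

```python
def timewarp_window(frame_yaw, head_yaw, frame_fov, display_fov, frame_width):
    """Given a frame rendered around frame_yaw with a wider-than-display
    field of view, return the horizontal pixel window that covers the
    display after the head has turned to head_yaw."""
    px_per_rad = frame_width / frame_fov
    center = frame_width / 2.0 + (head_yaw - frame_yaw) * px_per_rad
    half = display_fov * px_per_rad / 2.0
    return center - half, center + half
```

If no head motion happened, the window is just the middle of the frame; a quarter-radian turn shifts the window sideways instead of forcing a re-render.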

Taking a piece of larger frames

Let’s put some numbers in.  The Rift display is approximately 2 megapixels.  A decent rotation and distortion transform will sample one filtered texel per channel per output pixel, for 6 megatexels per frame.  At 100 frames per second, that’s only 600 MTex/s and 200 MPix/s, well within the capabilities of even mid-range mobile GPUs.
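Those numbers check out arithmetically (2 megapixels, three texture samples per pixel, 100 frames per second):

```python
pixels_per_frame = 2_000_000      # ~2 MP Rift display
texels_per_pixel = 3              # one filtered texel per channel
fps = 100

megatexels_per_frame = pixels_per_frame * texels_per_pixel / 1e6
mtex_per_second = megatexels_per_frame * fps      # texture bandwidth
mpix_per_second = pixels_per_frame * fps / 1e6    # fill rate
```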

There’s another huge advantage of letting the headset draw the final distortion pass: the headset can draw several frames per game frame. So, if it had a beefier GPU, it could display at 200 or 300 fps even if a game or video is only rendering 30-60 fps.

A better solution wouldn't use a normal GPU for this kind of post-processing rotation and distortion. A normal GPU writes its output into memory, and then another piece of hardware reads those pixels back from memory and scans them out to the display. That means that the GPU needs to be done writing before the frame is scanned out, and it takes some time to scan out the display, all of which is pure nausea-inducing latency. A dedicated chip could generate rotated and distorted pixels as they are scanned out to the physical display, eliminating all of that extra latency and dramatically reducing the amount of video memory bandwidth needed.

If we had a headset like this, it would be easier to program for it, and it would work better.

* nVidia and AMD both have technologies that try to coordinate with a monitor to delay drawing a frame for a little while if the frame isn’t ready.  This is only a partial solution.



Audio for VR film (binaural, ambisonic, 3d, etc)

posted in: Uncategorized | 0

Using a Rhombic Dodecahedral Dummy Head to Create a Binaural Recording

Binaural audio is perfect for VR. Binaural audio recordings, on the other hand, are not. Not at all.

Just as a stereo pair of videos gives us the illusion of a 3d view from exactly one perspective, but does not contain the information to let us know how the world looks if we move our head to the left, so too is a binaural recording a 3d illusion of sound without the information to tell us what the world sounds like from any other ear locations.

In stereo film and binaural recording, all the computation and 3d-ness happens in our brain without the recording having a 3d model or any idea what 3d is. With enough cameras on a camera ball, you could create an actual 3d point cloud of the world within camera view (assuming software that doesn’t quite exist yet). Or you can use a light-field camera to capture how all the light waves look from all the locations within some small space. Both those video options aren’t really viable solutions for VR video just yet, but 360 stereo video is good enough to make my brain happy.

What about sound? What are our options, and what is good enough?

Binaural recording is not good enough, but sound fields are easier to capture than light fields. You can do a decent job with a small tetrahedral mic array, which through some mathematics can model the sound field at that point.

This is known as ambisonics, and it’s a relatively open technology (most patents have expired), yet most people haven’t heard of it just as most people haven’t heard of binaural recording. The information can be stored in just 4 regular audio tracks (or more for higher-order ambisonics), which unlike normal audio formats doesn’t represent the sound that should come out of speakers, but the information for a sound field. This “B-format” audio can be decoded using good ol’ fashioned mathematics to a more standard tracks-that-should-play-out-of-speakers format, or basic stereo, or can turn into an equation for a spherical harmonic series (the place where that series truncates depends on the order of your ambisonics).
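A sketch of that decode step: from the first-order B-format channels you can synthesize a "virtual microphone" pointed in any direction. Gain conventions differ between formats (FuMa vs. AmbiX), so treat the √2 factor, and the function itself, as one illustrative choice rather than the standard:

```python
import math

def virtual_mic(w, x, y, azimuth, pattern=0.5):
    """Synthesize one virtual microphone from horizontal first-order
    B-format samples. pattern: 0 = omni, 0.5 = cardioid, 1 = figure-8.
    Assumes the FuMa convention where W carries a 1/sqrt(2) gain."""
    return (pattern * math.sqrt(2) * w
            + (1 - pattern) * (x * math.cos(azimuth) + y * math.sin(azimuth)))
```

A cardioid pointed at a source straight ahead passes it at full level, while the same mic pointed away rejects it, which is all a basic two-virtual-mic stereo decode needs.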

This technology has been around for a long time, but real-life uses are finicky. In a room full of speakers, the effect is only perfect in one location at the center of those speakers, even assuming you can get your specific speaker setup done properly and decode the sound for it, making it not practical even for home theaters. For pre-VR headphone uses, it has to become stereo anyway, so why bother. But with VR, you’re always at the exact center, and the stereo encoding can change in real time based on head-tracked rotations!

With just 4 small mics, there’s no way the sound field can perfectly simulate exactly what you’d hear as you move your human-sized head around a head-sized sphere. But is it good enough? Is it convincing in VR?

I don’t know, because I haven’t actually tried it in VR yet, which is why I’ve waited so long to post about it. But at some point it’s time to suck it up and write a post, with the promise to get back to you with results later.

Here’s some methods of VR video sound implementation I’ve encountered, from ourselves and others:

1. Do Nothing Because Whatever

We recorded a concert, and assumed that at concerts everyone is used to sound being dislocated from musicians because it comes out of speakers, so we didn’t worry about head tracked audio. We wanted that nice binaural feeling of stuff happening all around you, so we made an ad-hoc dummy head out of a rhombic dodecahedron and modeling clay [right] and put it above the audience. The audience noise is constantly changing with no one specific source anyway, so it works.

We’ve also done videos with voiceovers, where the voice is supposed to float magically anyway, so whatever, head tracking! The locationless voiceover for The Relaxatron is supported by binaural sound clips of birds and stuff, so, it’s all good.

2. Read Your Mind

In our first VR talk show recording, rather than get fancy with audio, we assumed people would mostly be facing the couch and looking slightly back and forth between me and Emily. We simply created a regular stereo track of our vocals, no head tracking or anything, which creates a convincing illusion that we did something fancier, as long as you only behave as expected. We got feedback from someone who thought we were doing head-tracked sound, bwahaha!

3. Render different sound clips in locations in a 3d environment

In our VR Video Bubbles demo in Unity, various spheres textured with video were placed around a 3d environment. The sound for each video came from a virtual speaker placed where the narration was supposed to come from in the video. Unity's integrated Oculus head tracking takes care of the 3d sound from there: walk towards the video bubble, hear the sound grow louder. Turn your head, and hear the sound pan around.

It would be trivial to place speakers in still locations on a video bubble, such as using the VR talk show as a spherical texture and then placing each of our voice recordings where our heads usually are. Our locations are constant enough that this implementation would work well.

But we can do better!

The technology already exists to create motion for speakers in 3d rendered environments, so with some tedious work-by-hand you could give any sound clip a motion that follows the thing you want to be playing that sound clip. This is yet another place where rendered environments are way ahead of captured film, because each entity already exists in a defined location, unlike the mysterious pixels of film that only become objects in your head.

4. Code up a special specific implementation

Total Cinema 360's "Blues" is a demo where a musician multi-tracks on a bunch of electronic instruments and each instrument's sound file is set to that instrument's static location in the video. It's definitely worth checking out, though it's advertised as an example of realistic 3d audio, which it's definitely not. Sound files play and pan and cut out suddenly as you look around, and it's not intuitive to associate the digital sound with its video counterpart. The package is more than a video file, requiring each separate sound file to be programmed to be in a place and turn on when that pixel is in view, or whatever their thing is doing (I didn't dig into their code), so in its current form it's not viable for anyone besides Total Cinema 360 to create something with it.

It’s not realistic or usable yet, but as an example of the potential of VR experiences it’s interesting. Why not have a specific sound suddenly cut in when you look at a thing? I can think of plenty of fun things you could do with a more focused version of that idea. There’s already VR experiences where looking at things affects them and creates sound (I’m thinking especially of exploding asteroids with my mind in SightLine’s The Chair, where audio feedback is key), and I like it. It’d be fun to do something like record video in a museum and hear audio narration about the thing you’re looking at. Definitely looking forward to seeing what Total Cinema 360 does next.

5. Pan between multiple binaural recordings and hope for the best

I love the way 3dio's Free Space Omni-Binaural microphone looks. It's beautiful, and it's extra-beautiful when mounted on a dummy head as part of a performance. Each of the four lovely-looking binaural mic pairs is a good binaural mic, so this beautiful creature can record four good binaural recordings at the same time. That is what it can do. It cannot do more than that.

This mic was developed for Beck and Chris Milk’s “Hello Again,” a cool 360 visual/audio experience. You can pan around the concert video, and the four binaural recordings are panned to match, mixing together the two closest dummy head ears when your ears are between them. The mics, cameras, and stage are constantly moving in circles.

I love the implementation for this piece. I don’t love that now they’re producing and selling this design advertised as actually recording 360 degrees of binaural sound. Humans can localize a forward-facing sound to a precision of a single degree, so I’d be comfortable saying that 3dio’s Free Space Omni records 4 degrees of binaural sound. Our peripheral sound localization skills can be as bad as 15 degrees, so you could say it records 60 degrees of binaural sound if you really stretch it.

It reminds me of this very beautiful but completely unrealistic camera mount. I like 360Heros and we've used their pentagonal stereo mount, which works, but if they're selling that thing (left), I'm guessing they don't understand how their working camera mounts actually work, because math.

Anyway, recording live music that’s all routed through speakers makes it difficult to judge a microphone system. None of the sounds quite come from the thing making the sound, but is that the mic’s fault, or a bug in the video player, or that the speaker playing that sound was somewhere else? Is the interference from the concert speakers, or is it from mixing together two recordings taken six inches apart? When you move a tiny bit and the sound leaps 90 degrees, is that because the recording is weird, or because the actual microphone is moving around during recording?

It’s good enough though, for that particular implementation. It’s great for anything where you need binaural sound full of cool noises in 3d and don’t care that those noises may be a bit distorted and only accurate to within 90 degrees. Mixing binaural recordings doesn’t average the locations of the sounds any more than layering two stereo photos gives you what you’d see if you looked from your nose, but it can create a cool effect and smoothish transition. Still, in the end we just need better image stitching.

6. Wild speculation

When it comes to fading between multiple mics, enough mics on a mic ball might be good enough for 3d sound that has a believable accuracy. The audio interference from mixing together mics placed a couple inches apart is audible, but you probably wouldn’t really notice it. And if your binaural recording lets you localize a noise within 1 degree of accuracy, but your head is slightly between mics so that perception is ten degrees off from where the noise is supposed to come from in the video, it’s probably good enough.

Or, we could skip the gimmicky stuff and use real mathematics! Sound waves and sound fields, each mic being not a representation of the human ear, but another data point making our model of reality more accurate. That's why I'm interested in ambisonics. There's thousands of papers and good cold hard research about it, and if we can do math to it, we can do VR to it.

Ideally, once you've got your sound field, you'd render it based on 3d models of the listener's ears (which you rendered via a few photos of their pinnae) to create for them a true spherical binaural experience. It's your own 3d ear's unique distortion of sound in space that lets our brains turn what should be a 1d amount of information into 3d perception. Dummy-head binaural recording using off-the-shelf ears can sound awesome, but will lead to much less accurate spatial sound perception than using a 3d model of your own head and rendering the sound just for you (Andrea told me about this experiment where they'd put fake ears on people and they'd be bad at sound localization).

(Also ideally, instead of just recording the spherical harmonics around a point, we’d get data around the space of possible ear positions, because spherical harmonics totally generalize to higher dimensions and I bet there’s lots of papers on this and someday soon we’re going to have the best amazingly realistic VR sound YESSSS)

Anyway, pretty much nothing you hear at the theater comes from one single recording that includes actors’ voices, footsteps, and background noise, so in that sense live ambisonic capture probably won’t be the future of high-production VR film. Fancy films record clean separate sound effects, music, and vocals, plus stock sounds, and mix them together later to sound like they’re in the right place (as well as to sound epic and level and clean and all that). Existing video editing tools are pretty good at tracking chosen objects in a film, with minimal work-by-hand, but as far as I know no audio is currently mixed by tracking it to an actual bit of pixels. It’s not necessary. Or at least, it wasn’t.

We could use the same tech that lets us place an explosion effect on a car, tracking the car to make the explosion realistically move with the shot, and use that information instead to track the car sound effect around you in VR.

In the gif to the left, I loaded our latest stereo spherical talk show video into After Effects and simply stuck a motion tracker on my head, and I already have a separate vocal track for my voice because I’m using a wireless lapel mic, so we’ve got all the necessary information. Then you have to get the information out, and into the listener’s ear.

One option would be to do a fancier version of what Total Cinema 360 did with "Blues": have a folder that contains files for the sound effects and a folder of their exported tracking information, then transform and sync it all together in the player itself. This would be a little bit of work, but relatively straightforward. Of course, I don't really want to have a video file plus a folder of trackers and sound effects that can only be assembled by a special program, in a format that may or may not become standard for other players and that has lots of room for separate components becoming misaligned.

As convoluted as it seems, I can totally see people exporting After Effects tracking info to be compatible with Unity, where you can then compile the entire video with all its tracking info and sound effects as a game, and then download and run an entire Unity game to watch a video. Actually I wouldn’t be surprised if you could already port After Effects tracking info to Unity.

Or, even if you didn't originally capture your sound ambisonically, you can still use the ambisonic format to encode your fancily-produced spatial sound information as a sphere of sound instead of a million little clips and trackers, using a program that only the video creator, not the consumer, needs to use. It seems natural and easy for spherical video players to natively support ambisonic sound. A regular video file can store the info as a standard series of audio tracks representing a nice simple sphere of sound. Apply a rotation to match the head tracking, then collapse it into binaural stereo using virtual microphones. Mathematically simple. And anything fancier, such as higher-order ambisonics, is an easy extension of the technology.
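Panning a tracked mono clip into that sphere-of-sound format is just a few trig terms per sample. A sketch of first-order encoding (again assuming FuMa-style gains; the function name is mine):

```python
import math

def encode_mono(sample, azimuth, elevation=0.0):
    """Pan one mono sample into first-order B-format (W, X, Y, Z),
    placing it at the given azimuth/elevation in radians."""
    w = sample / math.sqrt(2)                              # omni channel
    x = sample * math.cos(azimuth) * math.cos(elevation)   # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)   # left-right
    z = sample * math.sin(elevation)                       # up-down
    return w, x, y, z
```

Feed it a motion tracker's angle per frame and you get an ambisonic bed that any player supporting B-format could rotate and decode.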

It seems so easy, perhaps too easy, to implement basic ambisonics that I’m surprised I haven’t seen it done yet. It should be as simple as this:

record with sound field mic -> convert to standard B-Format -> use head-tracking info to apply rotation transform -> collapse to stereo.
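To make the last two steps of that pipeline concrete, here is a minimal sketch of rotating a first-order B-format signal to match head yaw and collapsing it to stereo with virtual cardioid microphones. The function names are mine, and real tools disagree on channel ordering, normalization, and gain conventions, so treat this as illustrative math rather than a reference implementation:

```python
import numpy as np

def rotate_bformat(w, x, y, z, yaw):
    """Rotate a first-order B-format sound field about the vertical axis.

    yaw is the listener's head yaw in radians. W (omni) and Z (vertical)
    are unchanged by a pure yaw rotation; only X and Y mix.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    xr = c * x + s * y
    yr = -s * x + c * y
    return w, xr, yr, z

def virtual_mic(w, x, y, azimuth, pattern=0.5):
    """Point a virtual microphone at `azimuth` radians (0 = straight ahead).

    `pattern` blends omni (1.0) and figure-eight (0.0); 0.5 is a cardioid.
    """
    return pattern * np.sqrt(2) * w + (1 - pattern) * (
        np.cos(azimuth) * x + np.sin(azimuth) * y)

def bformat_to_stereo(w, x, y, z, yaw, spread=np.pi / 3):
    """Collapse a head-rotated B-format field to a stereo pair of
    virtual cardioids angled `spread` radians left and right."""
    w, x, y, z = rotate_bformat(w, x, y, z, yaw)
    left = virtual_mic(w, x, y, +spread)
    right = virtual_mic(w, x, y, -spread)
    return left, right
```

A quick sanity check on the geometry: if you encode a mono source at some azimuth and set the head yaw to that same azimuth, the source ends up dead ahead and the two virtual mics receive identical signals.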

Just how well will this theory work in practice? I don’t know! Perhaps I am making a fundamental error or something! I guess we’ll find out soon enough. I’d appreciate any insight you might have on the topic.


eleVR Talk-Chat Show Thing, episode 1

posted in: Uncategorized | 0

The first episode of our talk show is live! This is our highest-resolution spherical stereo video yet, and you can get it from our downloads page and watch it in spherical 3d on your favourite virtual reality headset using your favourite VR video player.

It’s also on YouTube:

In episode 1, we talk about VR, in VR! Actually that’s going to be every episode, but this first one covers a vague overview of stuff without any special guests or special segments.

We decided a talk show would be ideal as a low-pressure way to try out stereo spherical video things and refine our workflow without worrying about production value or spending time getting all the stitching correct, while simultaneously sharing the latest VR knowledges and showing off different content, hardware, software, people, everything. We think talk shows and vlogs are ideal for taking advantage of current VR video technology and that feeling of “presence”.

It’s very VR-satisfying to look back and forth between me and Emily as we chat, the same motions we’re used to in real conversation. The stereo 3d looks awesome in high resolution on the Oculus DK2, and simple things like talking human faces are so compelling to our human brains.

We had enough fun with this one that we’ll definitely do more. Next time we’ll talk a bit about what we learned while making this episode, perhaps have an interview, show off a clip of another project we’re working on, and do whatever else we feel like. If you have suggestions/comments, tweet us #eleVR.


Wearality and Field of View

posted in: Uncategorized | 0

Today we got to try out David Smith’s latest Wearality prototype.

Like many other virtual reality head-mounted displays we’ve seen, a smartphone/tablet screen gets put into a holder with lenses. Unlike other VR HMDs we’ve seen, the resolution and field of view are amazing! We’d seen some of David’s earlier prototypes, so we know just how compelling and instantly-immersive a wide field of view can be. The moment you put it on, you really feel you’re in the scene, not looking at it through a tube. The lenses are also concave out from the eyes, so unlike the Rift you don’t get that problem where your eyelashes or glasses rub against the lens.


I was surprised when I learned that the Oculus DK2 was going to have such a narrow FoV, and then after I tried it, surprised that the DK2 is so immersive despite the comparatively narrow FoV. The narrower FoV of the Oculus means they can pack more resolution and better graphics performance in the FoV that is there. Real-time head tracking is a huge part of what makes the DK2 so convincing. You may start by feeling like you’re looking through a tube, but it doesn’t take long to get into it.

I think in the early days of VR we may see a split between HMDs designed for gaming, where a narrow field of view helps performance, and HMDs designed for viewing pre-rendered things like videos, where a wide field of view allows for greater immersion without sacrificing performance. On the one hand, today’s smart phones/tablets don’t have the capability to do fancy game graphics with low latency. On the other hand, that means this kind of HMD can focus on optimizing for truly spectacular high-FoV immersive VR video!

We’re really excited to see how our own videos look with a higher FoV, and look forward to adding Wearality to our list of compatible devices.


VR Video Bubbles for DK2

posted in: Uncategorized | 0

(download VR Video Bubbles for Oculus here)

It was clear as soon as we saw the first DK2 demos that VR video wanted to somehow be integrated into a 3d environment with interactivity and head tracking. So in celebration of the DK2’s release, and the frankly convincing way it tracks the subtle side to side motion of a little lean or a curious look under that virtual table, we collaborated on a shiny new interactive video experience with Christopher Hart, game developer and VR bubble builder extraordinaire.

Come see our new world, or at least a place that has lots of worlds, each a planet, a hovering orb of permeable video like the atmosphere of a gas giant. Explore a new style of storytelling in this vast black space spotted with spheres of video. Hunt each new bubble and notice the forced perspective with some bubbles looming huge in front of you or appearing like tiny specks until you cross the threshold and pass inside, only to discover an entire world inside fit just to your size. Other bubbles may look just right from the outside but once you enter your perspective shifts and you rise giant-like through the lush green garden within.

This is our first step toward a large unexplored field of combined spaces that use rendered environments and real-world capture. We’re working towards:

Stories meant to be experienced in disparate chunks, letting people wander through the timeline of a narrative like one might wander through an unknown neighborhood on a foggy evening.

VR experiences with hand controls in which you spoon bulbs of video from your morning cereal, and when you, or at least your avatar, swallows them whole, the captured world swells around you trapping you inside until the drug of that little pill wears off, fading away, and returning you to the real rendered world.

Virtual museums where gazing at an object can bring up a video bubble with actual footage and more information on the history and context of its origins.

Virtual topographic maps you can roam around on, with immersive video bubbles of real footage from specific locations.

And of course, bubble-bubbles, underwater 😀

As always, our little prototype taught us a lot:

The size of the bubble matters surprisingly little, once you are inside of it. What matters is where you are within the bubble. Being at the bottom of a small bubble makes the bubble feel bigger than if you are in the middle of a large bubble. Head-tracking is awesome for letting you change your height and location in an intuitive way.

When you import a video asset into Unity, it looks like it’s crashing, but really you just need to wait half an hour. Also, exporting the video in quicktime format seems to work best.

Bubbles-within-bubbles are super cool, and the layers add to the immersion factor.

We tried a prototype where bubbles are combined with a 3D virtual world, and the combination is really compelling (we can’t actually share it due to use of 3rd party material, but hopefully soonish!).

And of course, alongside our learning we are also running in to yet-unsolved challenges:

Getting the frame rate to work properly without juddering the entire world.

Getting different video textures to render for each eye, for stereo video.

Importing large high-quality h.264-encoded video files into Unity.

We’re gonna have to wait a bit before we work out all our own issues for licensing other people’s stuff and working on more demos, but in the meantime, if you’re developing for VR, we hope you’ll try downloading some of our creative commons spherical videos and putting them in your own virtual world to share. Head over to Chris’ site to download this piece and have fun!


Tutorial: 3D Spherical Camera Head

posted in: Uncategorized | 0


So you want to get into VR video? We want to help! The first step is getting a working 3D spherical camera. One caveat before we get started: this camera head requires 12 GoPros (Hero3+). My hope is that instead of one person having to shell out the 5 grand for that many cameras, a group of people who each own 1 or 2 cameras could pool their resources. With that out of the way, on to the tutorial.




For this project you will need:

Foam Core

Ruler and/or T square and/or other measuring device

Box cutter or other sharp cutting thing

Hot glue and glue gun and electricity and an outlet

Googly eyes (absolutely required for proper 3D)

One long bolt and finger tightenable nut

Rubber bands

Duct or gaffers tape, preferably the same color as your foam core

A marking utensil such as a basilisk fang dipped in the blood of child wizards



**Exact measurements will vary depending on the thickness of your foam core and which version of the GoPro you are using, so while I will give you what my pieces measure, I will trust you to measure things yourself, cause you are cool like that.**
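If you’d rather compute your cuts than re-derive them, here’s a tiny sketch of the formulas the steps below use, taking your own camera length/width/height and foam-core thickness in cm. The function name and example values are mine (a roughly Hero3+-sized camera and 0.5 cm foam), so expect your numbers to differ a little from the ones in my photos:

```python
def cut_sheet(cam_l, cam_w, cam_h, foam):
    """Cut dimensions (cm) for the main 8-camera box, derived from the
    formulas in steps 2 and 3. All arguments are your own measurements:
    camera length, width, height, and foam-core thickness."""
    return {
        # step 2: two outer squares, 2 x camera length + camera width per side
        "square_side": 2 * cam_l + cam_w,
        # step 2: inner square marked camera width + foam thickness in from the edge
        "inset": cam_w + foam,
        # step 3: two longer wall pieces, camera height x 2 x camera length
        "long_wall": (cam_h, 2 * cam_l),
        # step 3: two shorter wall pieces, capped on both ends by the longer ones
        "short_wall": (cam_h, 2 * cam_l - foam),
    }

# e.g. a roughly Hero3+-sized camera with 0.5 cm foam core:
sheet = cut_sheet(cam_l=5.8, cam_w=2.0, cam_h=4.0, foam=0.5)
```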



1. This is how the main 8 cameras are laid out.


2. Cut 2 squares that are 13.8 cm (2 x the length of the camera + the width) on each side. Then mark a square 2.5 cm (the width of the camera + the thickness of the foam core) in from the edge.


3. Cut 2 foam pieces 4 cm (the height of the camera) by 11.6 cm (2 x the length of the camera), as well as 2 pieces 4 cm by 11.1 cm (2 x the length of the camera – the thickness of the foam). Then assemble as shown, using tape or glue, with same-length sides opposite each other and the shorter pieces capped on both ends by the longer ones.


4. Attach this piece to one of the 13.8 cm squares you cut earlier. This is what it should look like with the cameras.


5. Don’t attach the other 13.8 cm square yet. This is what the head looks like for the bottom so far.




6. So you should have something that looks like this. If not, well, either you are just reading along because you like my witty charm and have no intention of building one of these babies or you fail at directions.


7. This is where the bolt comes in. I can neither confirm nor deny that I used a leftover bolt from an Ikea table that was lying around the office. Choose something long and about .5 cm thick.


8. Mark the center of your beautiful square blob and start screwing.

9. Once you have it settled in flush to the foam, pile on the glue. If you have an extra bit of foam core I would recommend gluing it over the top of the bolt. All the weight of 12 cameras will be resting here, so go glue crazy. Alternately, get a bolt that is threaded all the way up, plus some large washers and an extra nut to support the foam from underneath. This is a real good plan.

10. Once you are convinced you have a strong bond, James Bond, pop on the top.

11. and stick some cameras in to check your work.





12. Cut 4 pieces 5.7 cm x 4 cm (length x height of the camera). You’ll need 2 for the top and 2 for the bottom.

13. Tape 2 of these pieces along the long edge so they hinge, then hot glue them, centered, so the other long edges are each 3.5 cm from the center of the 13.8 cm square on the top (7 cm from each other).

14. Cut 4 pieces 2 cm x 5.7 cm (width x length of the camera). You’ll need 2 for the top and 2 for the bottom. With the cameras held against the back pieces you already attached, glue these bottom pieces on at the correct angle.

15. Once you have all that up, put two cameras in and trace along the edge onto another piece of foam core.


16. It should look a bit like this.


17. Cut it out, obviously. Then glue it on.


18. If you are good at tutorials it will look like this from the side


19. and from the top.




20. Use the remaining pieces of foam from step 12 and attach them, also 7 cm apart, along the bottom 13.8 cm square.


21. Hot glue the center of each to the center post.


22. This is another tracing step. The foam core should be about the same width as the bolt, so slide some into the gap, trace out what fits, cut it out, and glue it in for added strength.


23. See like that.


24. Not shown: repeat steps 14-17 to make the bottoms and sides.


25. Then add the googly eyes. This is actually important! The cameras on the top and the bottom can go in with the lenses toward either side, and it will make your life much easier to put them in the same way every time. Hence choosing a front, and hence the googly eyes. Label your camera slots and your cameras so they fit together the same way every time.




26. Get yourself a tripod head. I like this one.


27. Remove the plate.


28. Remove the threaded post (I like this head because it’s easy to remove this part).


29. Put the plate back on the head and thread the bolt into the hole.


30. Tighten the nut and


BLAM! (More on how to use this monstrosity soon!)


Lucid Dreams, the Original Virtual Reality

posted in: Uncategorized | 0

Virtual reality is not a more immersive version of movies and video games; it’s qualitatively different. VR can contain games, and right now there’s a lot of overlap between the tech we need to create virtual worlds and the tech we already have for games, but that’s a tiny part of what VR will be. Virtual reality is not a medium.

I’ve heard people compare even current clunky VR to being on certain drugs, and I’m not the only one with ethical concerns over how we use this potential drug. There’s no reason to think people will escape to VR with any less frequency than people currently escape to TV, games, drugs, etc, but there’s some concern that VR will go beyond capturing the current escapist audience. It’s easy to imagine a VR dystopia where everyone’s living a virtual life at the expense of their job, friends, real life.

I like to think about this question, because if it’s actually a potential problem, it’s more efficient to battle it before it happens. In my experience thus far, the closest comparison to the experience of VR is dreaming, the original virtual reality technology, which is conveniently already available in all of our own heads, so today I’m going to talk about it from my own personal perspective, with plenty of tangents and wild speculation. (Also, turns out all of us here at eleVR are lucid dreamers. Possible correlation between people interested in VR and people interested in lucid dreams?)

Question: Since I can lucid dream, and I can virtually experience literally anything I can think of in all its interactive sensory glory, why don’t I? Why do I get into it sometimes because I know it’s cool intellectually, and then forget about it and move on with my life? Why doesn’t everyone prioritize lucid dreaming above all else, optimizing for frequent REM periods throughout the day in some dystopian polyphasic sleep zombie scenario where all anyone does is escape to the dream world all the time?

Could it be that the potential future where everyone escapes to VR is as unlikely as everyone spending all their life literally dreaming?

I’d done a tiny bit of lucid dreaming as a young child, but only as a tool to redirect or wake up from nightmares, without thinking much about it beyond stopping the chronic nightmares (which it did). Later I found out lucid dreaming is a real thing, there’s all these people out there consciously controlling their dreams just for fun! And that’s when I dove back in. I was pretty intense about it for one summer, years ago.

Lucid dreaming is often lumped in with things like astral projection and ESP, which are not science at all, so I dug into the research to try and separate out what lucid dreaming actually is. It didn’t take long to get a feel for everything that’s been studied on the academic side of things, which is unfortunately not very much, both because of its previous associations with pseudoscience and because it’s hard to get non-self-reported data. As one who lucid dreamed as a kid without even thinking of it as anything weird, I was surprised to find skepticism in the science community that lucid dreams were even possible. It’s like if someone who doesn’t remember their dreams claimed that nobody dreamt at all, they just all make it up.

I could understand skepticism that dreams exist, if you don’t remember yours. I mean, you’re claiming I hallucinate uncontrollably every night and then forget it all? Really?

It turns out you can signal out from your dream in real time through controlled eye movements (REM in lucid dreams follows how you’re moving your eyes in your dream), proving at least that people can become conscious enough while sleeping to remember to do the signal. This and other science lends support to the idea that dreams actually happen when we think they happen, not just as a false memory created later. I hope to see a lot more research on this stuff, because I think lucid dreams are a unique avenue into the nature of perception and consciousness and all that.

Anyway, without a good body of research, I was going to have to learn to have them again, make some direct observations. First, I had to focus on remembering my dreams every morning in the first place, which I wasn’t doing regularly at that point. Think about how nuts it is, that we all hallucinate crazy things every night and for the most part don’t even care. All these realistic yet surreal experiences, with full immersion and interactive exploration, fully tangible, every night, and I hadn’t even been bothering to try to remember them when I woke up in the morning! Why is dreaming a recreational novelty with such a small market?

With a little practice I was at the point where if I did happen to realize I was dreaming while I was dreaming, I would actually remember it when I woke up. At the beginning of this, when I was extremely excited and focused on researching and practicing the whole thing, I could basically will myself to lucid dream and because my entire brain was immersed in this research it invaded my dreams the same way everything I work intensely on makes its way into my dreams. I was focused. As things got less intense, I relied more on the many other different methods people use to induce lucid dreams.

I particularly liked the one where you simply stay aware as you fall asleep, and then you’re asleep, and still conscious, simple as that. I mean, keeping your sense of self through all the crazy hypnagogia that tries to happen to you, that can take a lot of attention. It’s mental effort beyond what consuming popular media requires of people. So, that’s one possible answer to why everyone doesn’t escape to the virtual world of dreams all the time. VR technology will remove that barrier.

But say you do put in the effort and practice, and get a nice crisp lucid dream. Then what? Choices are hard, in the infinite sandbox. Maybe just watch a “let’s play” of someone else’s dream, instead…

Oh, wait, I know. Let’s experiment with the nature of reality and perception!

Lucid dreams inform the relationship between raw sense data and the model of the world we build in our heads. Lucid dreams show me that I can simulate anything I’ve ever experienced, completely realistically, without any incoming sensory data at all. This means that my experience of the world is truly mine, my perception of reality could be 100% flawed and I’d have no idea. We are all capable of hallucinating an entirely realistic world, and the only way I really know whether it’s a dream is that I just know, as if it were its own sense, which maybe it is.

And I can simulate much more in my head than what my real senses can input. I can see beyond my waking field of view, beyond my waking spectrum of light, which means seeing an entire spherical video all at once in hypercolor is definitely on the table once computers can interface directly with the brain (not predicting this in the near future).

I can access deep raw emotions, terror or love or despair or joy. A dream joke can seem super funny but be not funny at all in real life. If there’s a future where direct access to the brain lets us simulate everything that could be done in a dream, that’s the real potential for dystopia (instead of just a laugh track sitcoms could directly hack our brains into thinking they’re funny. Also, much more efficient tear jerkers, instant brand loyalty, and it will probably be categorized as a disorder if you resist doing virtual happiness, like refusing to take medication).

Dreams tell me that my physical body is not hard-coded into my brain. I can have wings, or extra arms. I can be in more than one place at the same time, simultaneously operating separate bodies, other types of bodies, animals, or no body at all. I can be an entire rave of dancers all at once, several household appliances, or all of empty space. When I’m lucid I remember that I am just one individual Vi Hart, existing in an enduring physical reality, and while my ability to change that is quite limited I nonetheless find that I am my favourite thing to be. Reality bias, maybe.

It’s crazy what our brains can do, how much more they can simulate than our usual experience, and I see philosophical implications. I’m not sure I’d believe reality is a thing, if there weren’t this completely open dream world to compare it to. The real world is so remarkably self-consistent.

Then there’s other dream people. It’s common for people to become not-all-the-way-lucid in such a way that you realize it’s a dream but still believe other characters in your dream are actually people, and behave towards them like they’re people. I fell into this trap sometimes when I started, but now when I’m lucid I’m fully aware that the other person is me, everything is me, and the person goes from being a person to being a thing in an instant, and then I understand what extreme sociopaths probably feel like about actual real people and then that’s pretty creepy, and then I wonder whether it’s creepier to treat non-people like real people because you don’t know the difference, or creepier to treat perfectly-simulated people like non-people, whether it be actively treating them like things or simply dropping them from consciousness, winking them out of existence with no regard to who they previously seemed to be.

Nothing I’ve experienced in games or VR has reached that moral uncanny valley where I don’t have a clear mental separation between human being and avatar, whether it be my own self or someone else’s, but we’re going to get there soon. Already with our demo someone mentioned that they felt like they were being rude to virtual Emily when they looked around the room while she was talking; they knew it was just a video but it was real enough to trigger ingrained social rules. Already people instinctively identify what they see happening to their VR avatar as something happening not to an avatar, but actually to them. What happens when you can’t tell whether another person is real or not?

What happens when AR is so good that you’re, say, in the office, and you can’t tell at a glance whether that person-shaped-thing walking down the hall is a physical human being, or the avatar of someone teleconferencing in, or whether it’s one of the virtual non-people your company programmed in because they did some productivity tests and found that filling the office with attractive productive-looking virtual people makes the real employees get more work done?

If it’s legal and makes money, people will exploit it as hard as possible. Imagine the future: not enough oil for people to use cars anymore, not enough housing in the city, so you telecommute in to work on a secure VR device that your company has complete remote control over (though it’s pretty laggy because the way we’re going in the US, in 50 years internet speeds still suck). You go in to your virtual work environment, where they choose what everyone around you looks like, and what you look like to other people, if there’s actually real people. You won’t know whether your real boss is walking by and yelling at you to get back to work, or whether it’s a boss copy, and knowing you don’t know, your real boss has plausible deniability for their real words to you.

In order to talk to your friend on your virtual facespace, you’ll first have to endure an algorithmically optimized attractive objectified maybe-person who wants you to try out this great new virtual product with her, and will give you a guilt trip if you want to skip the ad. Imagine how this kind of stuff could be used “productively” in schools. And wouldn’t it be so much easier to raise young children if they really were the only being in their universe, and those around them actually had no independent existence, thoughts, or feelings?

Maybe in VR there’s lines that should not be crossed, maybe it should be a requirement that there’s no ambiguity as to whether another simulated human is just a simulation or an avatar of an actual human experiencing the other side of the virtual encounter in real time. I don’t know. The idea of it creeps me out, but plenty of purposeful dehumanizing is legal and prevalent even now, and the fabric of society hasn’t entirely torn apart. Still, treating an image of a person like a thing, or seeing an objectified person, is fundamentally different from having to treat actual people and non-people equivalently, yourself, in real time, in your actual life (whether virtual or not). Probably the answer involves extensions of existing things, like workplace guidelines and advertising laws.

Anyway, that’s a separate ethical problem from the escapism and lucid dreams thing, so let’s get back to that.

The thing about lucid dreams, where I can do anything, is that I’m built to want real things. Maybe it’s reality bias, maybe I just burnt out, but I lost the hardcore interest I had. I found myself in a lucid dream and realized there was nothing in particular I wanted to do. I had an entire list, but nothing seemed very compelling at the moment. The things humans desire, all those sensory and intellectual pleasures, evolved for reality. I had lucid dreams where when I thought about what I wanted to be doing right then, the answer involved waking up and then doing the thing in real life.

This was hugely valuable experience, because there’s a lot of things I zombie my way into doing in real life, and I’m a lot more productive when I become lucid in waking life and realize that I don’t actually want to be scrolling through twitter or whatever, that if I could be doing anything right now, possible or impossible, I’d choose to write an article about virtual reality and lucid dreaming.

Maybe this is a way in which lucid dreaming destroys itself, or maybe it’s just me. VR, on the other hand, doesn’t require self-awareness in order to work.

I also had dreams where I’d realize intellectually it was a dream, but then decide against actually becoming lucid because nothing I could consciously do on purpose could possibly compare to the awesome thing my subconscious was coming up with. This, again, is the “Let’s play” effect.

So I’m not addicted to lucid dreaming, and almost no one else seems to be either. I also burnt out on enough terrible grindy games as a kid that I cannot fathom feeling addicted to any of the games people are addicted to now. Maybe savvy audiences will not have VR addiction escapism problems, not once the novelty wears off, and those who do will be on the same scale as those addicted to current games or TV. Maybe.

I’m not sure though. I also have a lot of reasons to believe I don’t quite fit the norm for the things I’m talking about in this particular article.

I’d like to note that many people do have lucid-dream-like experiences that they report are actually real, such as that they are psychically connecting with another real person who is sharing the dream world, or they’re astrally projecting and travel to see real locations, or they’re seeing the future or whatever. Perhaps they just never employed their sense of reality to ask themselves a real solid “hey, am I dreaming?” and instead just went along with the story, or maybe the answer came back “Yes, you’re dreaming, but who says that’s not real,” or maybe the sense of reality I have is not universal. Some people do “reality checks” where they try to trick the dream into breaking in some way and thus proving it’s a dream, but I’ve found that these are not reliable, and even when they do work they’re a great way to go through the motions of thinking you’re becoming lucid without actually becoming lucid, and then you have a normal dream which only has the plot “I am lucid.”

It’s enough to make one reconsider the philosophical game of “what if I’m the only actually sentient being and everyone else is a consciousness-less zombie,” but just as I feel solipsism is less likely because if it were true my brain might as well have as much power as it does in the dream world, I feel that philosophical zombies are made less likely by the fact that people claim varying powers of lucidity. So. It’s not proof that the world is real and full of sentient humans, but it’s enough for me.

I’d also like to take this moment to note that there’s no such thing as a dream-within-a-dream in a technical sense. That nonsense drives me nuts. If you dream you fell asleep and are dreaming, or if you dream you wake up, that’s all just dream level 1. There can be levels to the story of the dream, but not the actual dream. We could make a VR video with the story that you’re going into virtual reality, but that doesn’t actually make it VR within VR. And if it turns out that all of life is a solipsistic dream, I’m gonna be real mad at how inefficient my solipsism is. If we’re already in the matrix, oh man, how our evil overlords must laugh and laugh!

Anyway, I think it’s likely that VR will be much, much more attractive as an escape than lucid dreams are, for these reasons:

-It’s easier and dependable. Doesn’t completely break if you stop paying attention for a moment.

-It’s got better PR than your own dreams. VR will be advertised and packaged to manipulate you into wanting it.

-Entire companies will be created with the goal of optimizing your addiction, for profit. You are the sole author of your dreams, and they’re probably super boring.

-Even though VR isn’t real, it does exist in real life. The other humans involved are actually people. You know you’re not dreaming; the reality bias is not instinctually felt.

-Some of those who wish for escape don’t like themselves very much in the first place, in which case being hyper aware of your own consciousness isn’t a win.

-It’s actually a plus if you don’t have complete control and don’t have to make all the choices. Let someone else make the choices, become their story.

So I see all this potential for manipulating people through VR, and I am an artist experimenting with VR, and I don’t want the things I make to be ethically horrible. I hope that, being self-expression rather than optimized ad-driven microtransaction gamified social experiences, I’ll be able to sleep at night knowing I’ve added to the world instead of taking away. If you escape your self, it won’t be into the void. It will be into my self. And when you get back to your self, I hope you will be more yourself than you were when you left.

There’s something important about story, art, self-expression. That someone is making specific content with purpose. I see stuff about super interactive dynamic stories and games, as if the goal were to simulate the lucid dream and its infinite choices, and I’m like, yeah, sounds interesting in a technical sense, but dreams don’t usually make very compelling stories. Even my dreams make boring stories, and I’m great at dreams.

If your interactivity isn’t there to help you further understand the thing itself, whatever that thing is, story or concept or skill, then it’s junk food. When you take away the empty interactions, what’s left? Plenty of games are all medium and no message. They give you nothing outside of themselves.

I like perfect precise art pieces, stories understood in their entirety, art that is a thing itself, something you can take with you when you go. I don’t want choose-your-own-dynamic-Beethoven-style-generated-sounds, I want just the existence of the entire static piece of music. I don’t want to experience it, I just want to know it. If an efficient way to know it happens to be experiencing a recording through time, or reading through the sheet music, that’s not the point, because while a Beethoven piece can be constructed out of noises or dots of ink, neither of those things really have anything to do with what the piece is.

Similarly, the art of film has nothing to do with TV, even though TVs can show films. TV is junk food, but the analogous healthy food is not art film. The purposes are entirely at odds.

TV is not a junk food version of film, but of sleep.

Mobile games are a junk food version of awake.

Perhaps VR will be a junk food version of death. Total annihilation of the self, just like any good junk food death.

Or perhaps this tells us that people want to spend more of their awake time sleeping, not more of their sleeping time awake. Thus lucid dreaming is unpopular, and VR will creep into that space of things we do to delay the onset of tomorrow.


Next time: nightmares?

Hank Green and the Perfect Strangers, Live in VR

posted in: Uncategorized | 0

A few weeks ago we had our first big outing with our 12-camera spherical 3D rig. We were shooting Hank Green and the Perfect Strangers live in concert at Slim’s, a local hot spot for touring bands. I’ll admit I was nervous. Our fledgling was going on its first big shoot and we hadn’t given it an easy job. I spent the entire concert staring at blinking red lights and fussing over camera remotes like a mother pheasant, obsessed with getting the best footage. While concerts, with their constantly changing lighting and fast moving, excited performers, might not be ideal conditions for perfect stitching, they are fantastic places for great VR video and that is exactly what we got.

I am really proud of this video. There, I said it.

While we shot almost two hours of concert, due to temperamental cameras, it wasn’t all usable. We even managed to break the prototype in the process. Basically everything that could have gone wrong short of total failure managed to find a way to happen. And that’s a good thing. I am learning to live with the glacial pace of progress from shoot to final piece, with the certainty of camera failures, with the imperfections. I am learning to let go of the drive to make the space inside the video an exact copy of the place it was shot. To let it be an ethereal, broken, second place made from but not analogous to its physical progenitor. To let drummers have 5 1/2 arms and calibration errors eat out black holes and duplicate people (because who doesn’t want more Andrew Huangs in the world am I right? #ClonesforHuang). The perfectionist in me certainly tried to make it as seamless as possible, but where that just wasn’t going to happen I let the mistakes live. And man, many of them turned out beautifully.

I learned that there are lots of problems to fix before the software pipeline is ready for the web video crowd, but now that I have a good workflow figured out I am going to spend the next couple of weeks making step-by-step tutorials for everything: building a duct tape and hot glue level camera head; the joys and woes of file management; why Avanti is your best friend; how to trick PTGui into doing things it is clearly not designed for; how to occupy yourself while waiting through never-ending loading bars (otherwise known as stitching); effective exposure management; why you should never trust auto synchronization; and how to get Premiere to cram it all into one 3D frame. The whole kit and caboodle.* That plus designing a new laser-cut camera head and After Effects and Premiere plugins for proper editing in spherical 3D means I’ve got my work cut out for me.




* I realized I have no idea what a caboodle is. Wikipedia informs me that it means “a group, bunch, lot, pack, or collection of things or people” and that it is derived from booty. Which basically means it’s just a piratey way of saying all the things. Arrrrrrrrr, caboodle.


4Dmonkey.gif

posted in: Uncategorized | 0

We’ve just released 4Dmonkey.gif, one of our most abstract stereo spherical video pieces yet, containing many rough experiments in layering video. You can get it on our downloads page to watch on the video player of your choice, and it’s also on YouTube (though YouTube is not compatible with VR headsets yet, so it gets even more abstract).



The video includes some footage we took ourselves, as well as some creative commons background images from tycho, Masakazu Matsumoto, subblue, and Bernd Kronmueller. Most importantly, it’s got 4Dmonkey.gif:




The above gif was created by Henry Segerman, following a paper we wrote on the quaternion symmetry group (Evelyn Lamb’s article on Scientific American is a good intro).

When Henry was in town a couple months ago, we convinced him to create the stereo monkey gif, which we immediately turned into a video to stare at forever and ever and ever. But then we wanted to add a background behind the monkeygif, and maybe some background narration or music. The first draft of the audio included Vi’s voice narrating Henry’s code using Andrea’s looping direction and Emily’s glitch-style editing, and things only got more out of hand from there. You can get just the audio on SoundCloud if you want.



The monkey gif itself shows a projection of a 4-dimensional sculpture with the best symmetry group. In 4d the monkeys are all shaped like normal, everyday monkeys, they’re all the same size, and they don’t warp when the sculpture rotates. But to see this monkey arrangement in 3d, we have to project it down, and in the projection the monkeys warp and grow smaller or larger depending on how close they are to the camera. The symmetry makes the gif appear to loop after the sculpture rotates only 90 degrees in two perpendicular planes simultaneously.
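To make that 90-degree loop concrete, here’s a toy sketch (my own illustration, not Henry’s actual code) of a double rotation in 4d and a perspective projection down to 3d, with eight symmetric points standing in for the monkeys:

```python
import numpy as np

def double_rotation(theta):
    """Rotation of 4d space by angle theta in the xy-plane and the
    zw-plane simultaneously (a 'double rotation', with no fixed axis)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, c, -s],
                     [0,  0, s,  c]])

def project_to_3d(p, camera_w=3.0):
    """Perspective projection from 4d to 3d: points nearer the camera in
    the w direction come out larger, which is the warping and growing
    you see in the projected monkeys."""
    x, y, z, w = p
    return np.array([x, y, z]) / (camera_w - w)

# Eight symmetric stand-in points: the unit vector and its negative
# along each of the four axes.
points = np.array([[ 1, 0, 0, 0], [-1, 0, 0, 0],
                   [ 0, 1, 0, 0], [ 0, -1, 0, 0],
                   [ 0, 0, 1, 0], [ 0, 0, -1, 0],
                   [ 0, 0, 0, 1], [ 0, 0, 0, -1]], dtype=float)

# A quarter turn in both planes at once maps this set onto itself, so
# the projected animation appears to loop after only 90 degrees.
rotated = points @ double_rotation(np.pi / 2).T
```

Each individual point moves to a neighbor’s old position, but the set as a whole, and hence the projected frame, is unchanged: that’s the apparent loop.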

The quaternion monkey sculpture is especially appropriate because quaternions are heavily used in the code for eleVR player. Playing spherical video is all about projecting from flat things to spherical things and back! But Andrea can tell you more about that, in an upcoming post.
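The post doesn’t show the player’s internals, but the core quaternion operation a VR player leans on, rotating a view ray by a head-orientation quaternion, can be sketched like this (illustrative only, not eleVR player’s actual code, which is JavaScript):

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(v, q):
    """Rotate 3d vector v by unit quaternion q via q * (0, v) * q-conjugate.
    In a player, q would be the headset orientation and v a ray being
    looked up in the spherical video."""
    p = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, p), q_conj)[1:]

# Turning the head 90 degrees about the vertical (y) axis:
theta = np.pi / 2
head = np.array([np.cos(theta / 2), 0.0, np.sin(theta / 2), 0.0])
# the 'forward' ray (0, 0, -1) now points along -x.
```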

We also learned a ton about the needs and current limitations when it comes to editing this kind of stuff, so I’m hoping Emily will post about that soon.


eleVR on Android

posted in: Uncategorized | 0


You may have already heard the hubbub surrounding Google’s release of Cardboard, a little box that turns your smartphone into a VR headset.

Since Cardboard came out, we’ve already gotten lots of requests to see our content on it, so we’re delighted to announce that the eleVR Web Player now supports Google Cardboard and other Android-based VR systems (e.g. the Durovis Dive, the folks Cardboard got their lenses from).


Just open the player in your mobile browser (sorry, Android 4.3+ only until the iPhone supports WebGL), and you can start playing our videos on Cardboard immediately!


Tap the bottom of the screen to bring up the detailed controls (including full-screen, which you’ll probably want).

If you decide that you love our player and want to use it more often, you can “Add to HomeScreen” it and it will behave just like a native Android application.



We include two videos with the player (The Relaxatron and Vidcon), but you are welcome to download more from our downloads page and load them up into your phone as well.

That said, unless your phone has an incredible graphics system that probably doesn’t fit in a phone, you will almost certainly need to downscale the videos from our downloads page. We’ve already done that for the ones bundled with the player. Which means, of course, that the quality of the videos currently packaged with our player isn’t amazing; go download the full size versions for vastly better resolution and vastly lower device compatibility.

You can learn more about our player or fork us on Github.

Now, go have some fun!


VidCon reacts to eleVR. Also, social responsibility.

posted in: Uncategorized | 0

At VidCon we had a chance to demo the first VR vlog to a ton of actual vloggers. Being a vlog with actual content, not just a tech demo, meant that people who know the medium could get sucked in right away and see how they themselves might use it.

Many people put their hands in front of their face or tried to touch things they saw. Some said they wanted to stay forever. A couple people said it was weird to look down and see that they are a tripod. Not “there was a tripod,” but “I am a tripod.” And a couple people found their re-entry into the real world to be jarring, surreal.

If that happens in a low-res 2-minute video vlog, I’m concerned about the possible psychological effects in a longer video piece or game. After the 3d movie Avatar came out, there was some news buzz about viewers becoming depressed and suicidal, reportedly because the beautiful images they saw were not real. I don’t know to what extent the movie actually contributed to those feelings beyond triggering them and giving something to point to, but I am concerned about similar reactions to beautiful VR experiences.

These days the video game industry seems mostly focused on making addictive experiences and then capitalizing on that addiction in very predatory ways. We are very, very close to a future where amazing virtual experiences will make the real world seem empty in comparison, and you can bet there are a lot of people already working on how to exploit this as hard as possible. I like to be a socially responsible media creator, so this is something I’m going to be thinking about a lot.

Anyway, the VidCon response was extremely encouraging, and I managed to collect some of the instagrams and twitter responses, embedded below.




eleVR Web Player Press Release

posted in: Uncategorized | 0

eleVR [el-uh-V-R] today released a first of its kind: a 3D fully spherical virtual reality video player for your browser. eleVR Web Player is taking web video next gen.

Much of the chatter around virtual reality has been about hardware.  With the release of the next generation Oculus only weeks away and the constant buzz about virtual reality headsets coming out of Sony and Microsoft, we wanted to share the content side of the VR coin.

Even more than gaming, streaming video will be the killer app for VR headsets.

VR video breaks the control of the static frame and lets viewers choose where to look. This difference, while perhaps not as staggering as seeing the first motion picture, is a major sea change in the future of media. VR video for the web will allow web video creators to share stories from within, bringing their audiences beyond the setting and set dressing of movies to actually being in a place, will give teachers a new way to immerse their students, will spawn whole new genres.

Many people are hacking together rigs to create spherical video, but until now there was no way to view it on the web with a headset. Just like YouTube allowed individuals to share video with the world, now anyone will be able to create fully immersive virtual reality experiences and share them as well. It’s that sharing that cultivates rich media ecosystems online, so we want to make sure VR will be in the hands of anyone who wants to share.

We don’t have to put up with tech demo after tech demo any more, so what video are we spotlighting in our one-of-a-kind player? Why, the first VR vlog of course. It’s not just personal—it’s personal space.

And so in an effort to foster that budding diversity we are going all open source. All of our videos are available for download under Creative Commons licensing and all our code is up on GitHub, so if you have an idea for that next must-have functionality: get forking.

eleVR Web Player works with Chrome, Firefox and Safari on both Mac and Windows. Those without an Oculus can view and navigate the videos using keyboard controls. For headset users, you’ll need the vr.js plugin. If you’d like more technical information regarding our player check out our readme, and our team, Vi Hart, Andrea Hawksley, and Emily Eifler, make regular tech posts on our blog.

We will be at VidCon the 26th – 28th. Contact us if you would like a demo.

eleVR is a project of the Communications Design Group sponsored by SAP.

Important links: The site, the player, the github, the blog


Introducing the eleVR Web Player

posted in: Uncategorized | 0

While the hardware for VR is getting progressively better, the software still tends to be pretty fiddly and difficult to use. This applies not just to the software used to stitch and edit the videos, but also to the software used to play video.

We had been creating VR video for about a week when it became clear that we were going to need to develop our own player – if only so that we could play our videos on our Macs.

At eleVR we believe that both VR video and internet video are the future. When I started developing our player I knew that I wanted it to work on the web. Of course, getting Oculus data into your browser isn’t so easy, but I was fortunate enough to discover the vr.js plugin by @benvanik, an open source browser plugin that lets you do just that with surprisingly little fuss.


The brand new eleVR player lets you watch 360 flat and stereo video on your Oculus Rift from Chrome, Firefox, or Safari and on Windows or Mac. Videos shown in the player can be rotated using keyboard controls (a/d, w/s, and q/e), as well as by the Oculus Rift if the vr.js plugin is installed.

Go check it out now!

The player currently supports spherical video with equirectangular projections and spherical 3D video with top/bottom equirectangular projections. eleVR Player comes bundled with two relatively small *.webm files, one for each projection.
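For the curious, equirectangular just means longitude and latitude unrolled onto a rectangle. Here’s a minimal sketch of the mapping a player has to invert (my own illustration, assuming texture coordinates u, v in [0, 1] and a y-up convention; the player’s actual shader may differ):

```python
import numpy as np

def equirect_to_direction(u, v):
    """Map equirectangular texture coordinates u, v (both in [0, 1]) to a
    unit view direction: u wraps around in longitude, v runs from the
    north pole (v = 0) down to the south pole (v = 1)."""
    lon = (u - 0.5) * 2.0 * np.pi
    lat = (0.5 - v) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def direction_to_equirect(d):
    """Inverse mapping: which pixel of the flat frame a view ray samples.
    For a top/bottom stereo layout, each eye would sample its own half
    of the frame with this same formula."""
    x, y, z = d / np.linalg.norm(d)
    return (np.arctan2(x, z) / (2.0 * np.pi) + 0.5,
            0.5 - np.arcsin(y) / np.pi)
```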


Want to watch a different video? We’ve got you covered. Just open the file using the folder icon at the bottom right of our player.


eleVR is committed to making our content openly available, and we have all of our source code (and documentation!) on GitHub. Please feel free to fork us and make our player better!


Mixing your D’s

posted in: Uncategorized | 0

Now that all our hopes and dreams have been crushed… Wait, yours haven’t been yet? Well, go read all about why perfect spherical stereoscopy in video isn’t possible with a hardware-only solution, cry, and then come back here.

Now then.

We have a new prototype camera head to add to the family. This one got nicknamed The Hippo not for its badass wrestling moves or because I stuck eyes on it to accentuate the cute little ears on top, but because with 12 cameras onboard the thing weighs in at a bit over a ton. And The Hippo isn’t just a heavyweight, it’s got a serious girth problem, but more on that later.

The Hippo approach is a simple solution to the problem of hairy balls. Combing the vectors of visible space smooth in one plane then giving it a buzz cut on either end. What you get is nice convincing stereo around the circumference attached to some good, old fashioned, and so last month, spherical 2D at the zenith and nadir (the fancy words for top and bottom of the sphere). The effect is great. So great in fact that if I didn’t point out that part of the video you’re watching is stereo and part 2D you might not even notice. Your brain sees enough stereo that it’s fine passing over the distinctly flat sections with little protest.

It’s great but just not right. It’s the easy way out. It’s the caveat, the welch. The pitch goes a little something like this: “We, the amazing wizards of geometry that are eleVR, have invented the WORLD’S FIRST 360 3D camera! Bow before us!” Google it. There are lots of carefully worded claims out there. As I was saying: “You should take us very VERY literally. Our camera can see stereoscopically all the way around. Of course that’s only really a circle and not anything like a sphere but, hey, that’s what 360 degrees means people! And, yes, you can’t tilt your head more than a few degrees out of the original orientation of the cameras before you totally break the whole illusion, but, hey, most people don’t tilt! Tilting is for communists or socialists or something equally BAD. Down with tilting! Down with tilting!” In the future a hefty dose of image interpolation and camera heads that look closer to a fly’s compound eye will get us the real stereo we’re looking for, but for the moment it’s less flies and more hippos.

Back to the big belly issue I mentioned before. One clear problem with this prototype is that rig size is directly proportional to stitch length. As we add more cameras to facilitate the stereoscopy the camera head also gets bigger bringing the camera lenses farther apart and increasing the stitch length, the distance from the camera head where reliable inter-camera stitching begins. With a long stitching length faces get blurred, hands get cut off at the fingertips. It’s not a pretty picture. Ideally the whole camera head needs to shrink down to the size of a fist. Oh, and rest on a room temperature superconducting magnet so I can get rid of the monopod. That’s not asking too much, right?

Here’s the Hippo’s very first video: A Vlog for VidCon





This is what Science sounds like

posted in: Uncategorized | 0

In order to have great stereo in part of a static spherical video, you’ll have to have not-so-great stereo in other parts, and finding the best ways to mess up the stereo without making it seem that messed up is pretty important. We made this video to test different angles and distances between eyes in stereo video, by filming with two cameras and moving them live as we filmed. You can download it and watch it yourself and see what you notice, though be careful not to strain your eyes trying to get unnatural views to work. Here’s what we’ve learned from it so far.

Free-viewing vs Oculus: I (Vi) can get much wider ranges to align in proper stereo when I’m free-viewing it, watching a small version of the video on my screen and going wall-eyed, than in the oculus. I suspect this is because when the video is small on the screen, tiny adjustments in my eyes can have large effects, while in the oculus the video is right in front of your eyes and you can’t just cross your eyes a little more to dramatically shift the distance between the images.

I (Andrea) suspect that when you are free-viewing you are already doing tricks with letting your eyes focus weirdly in order to get 3D, whereas when using the oculus, you are, for the most part, just viewing normally and the stereo happens. When you are already free-viewing, adjusting the free-view is probably easier than suddenly having to make your eyes misalign while watching a video because the stereo is way off. One interesting difference for me between free-viewing and viewing in the oculus, is that when the stereo falls apart free-viewing, you just “lose it”, whereas when the stereo falls apart in the oculus, it’s a much less sudden sensation, rather you notice because of doubling, or the 3D effect becoming less strong. This is presumably because as soon as I can’t free-view, my eyes try to bounce back to normal viewing whereas they never need to go weird for the oculus to begin with, so you don’t get that sudden “I lost it” sensation.

Free viewing having a larger effective range than headset viewing suggests that if you can free-view it in stereo but it doesn’t line up in oculus, there’s a way to edit it into being correct (maybe the footage isn’t lined up correctly, or zoomed the right amount, or the correct distance apart), whereas if you are an adept free-viewer and can’t get the stereo to work free-viewing, there’s not much point in wasting time trying to get it to work in editing. We will definitely have to experiment more and see to what extent this is actually the case.

photo 2 (1)

Interpupillary distance: The distance between cameras doesn’t need to closely match that of your eyes for the stereo to work. We barely noticed entire centimeters of change. This is good news for stereo video, as interpupillary distance can be centimeters different for different people.

This is in contrast to our tiny-eyes 360 3d experiments, which did seem to make the world larger with smaller interpupillary distance. Maybe the difference is in the relationship between the stereo and the field of view of the world, because in Approaching Spherical 3d the field of view is in the context of a sphere of vision. In “This is what science sounds like”, the field of view does not really change, but perhaps by zooming the video in or out along with changing interpupillary distance you could get some really cool and maybe interestingly subtle effects.

photo 4 (1)

Angle of convergence: Unsurprisingly, this seems similar to what’s easy to focus on in real life. When you’re looking at something extremely close to your face, it gets harder and harder to focus on, with accompanying eye strain, until you simply can’t do it. At the time of filming it seemed to make sense that when you make the cameras point close together, you’re focusing on something close, but it’s clear upon watching and thinking about it that the inward angle is what becomes too great too fast, while far-away objects can still be seen in stereo that is perhaps a bit more emphasized. So, tilt in to focus out, tilt out to focus in.
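The eye-strain effect of near convergence falls straight out of the geometry. A tiny sketch (my own, assuming a typical 64 mm interpupillary distance; the experiment above didn’t measure exact angles):

```python
import math

def convergence_angle(distance_m, ipd_m=0.064):
    """Total inward angle between the two eyes' view directions when both
    fixate a point distance_m straight ahead. The angle blows up as the
    target nears the face, which is why close focus strains and then
    simply fails."""
    return 2 * math.atan((ipd_m / 2) / distance_m)

# The angle at 10 cm is roughly a hundred times the angle at 10 m:
# near targets demand dramatically larger inward tilt.
```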

When the cameras point outward at a slightly divergent angle, you can still focus on what’s right in front of you, where the wide-angle lenses still capture overlapping footage at a converging angle. What surprised me is how much my face in particular warps when this happens. Maybe something in our brain’s specialized facial construction software is sensitive to this angle, or maybe it’s the GoPro’s wide-angle warping changing as my face goes closer to the edge of each camera’s field of view. Needs more experiment.

Rotation and vertical shift: even a tiny amount of this makes it very difficult to get the images to align, for all of us. This will be a big limitation to which panoramic twists will work for stereo spherical video, and is the same as the problem of head-tilt. When you tilt your head in a spherical stereo video, it goes out of alignment pretty fast.

Vertical tilt: with one eye tilted upward a little more than the other, the video still lines up pretty well for me (Vi) when free-viewing with my head tilted a bit to the side to compensate for the vertical shift. There’s the same sort of slight warping of the face as in the angle of convergence. I didn’t really expect differently-tilted images to align so well, so definitely need more tests! It’s probably ideal to change the panoramic twist subtly through the video in a way that always makes faces look good, concentrating the greater divergences on non-face things.

The vertical tilt doesn’t align so well in the oculus with this particular video, because the way the camera is tilted makes one image higher than the other and it’s harder to tilt your head in relation to the image when the image is strapped to your face. We could compensate for the vertical shift in post-production, now that we know this is a thing. Definitely want to experiment with what happens when you edit the same footage different ways, next time.

photo 1 (1)

Non-aligning images: We found the layering of not-even-close-to-aligning images so interesting that we stuck a bit of video on the end where one eye is completely upside-down, and they overlap to create a rotationally-symmetric cool-looking thing. Unlike just layering video, each image is itself crisp, and the eyes and brain can choose which parts of which to “layer” over the other eye’s image. We also played with this idea in #3D Selfie, where when we turn the cameras completely out of alignment you no longer bother to try seeing stereo but instead create a layered story of different faces, which then converge again on the other side.

The effect of non-aligning images isn’t one that is really possible to see with free-viewing, at least for me (Andrea). But it’s totally possible for me (Vi). It’s not automatic like in the oculus though. It takes work and deciding what I want to see, and already knowing how it looks in the oculus gives my brain a goal to go for when free-viewing.

That’s what we’ve noticed so far! All these observations aren’t exactly super scientific, but they’re great preliminary results to point us in interesting directions.


eleVRant: The Hairy Ball Theorem in VR Video

posted in: Uncategorized | 0

The day has finally come where I looked at a real-life practical problem in my actual work as a video director, and said to myself, “Wait! I can figure this out using the hairy ball theorem.”

1. What’s the Hairy Ball Theorem?

The hairy ball theorem is an abstract century-old result in algebraic topology popularly known for its amusing name and wonderfully intuitive visualization: rather than thinking of continuous tangent vector fields on the 2-sphere, imagine hairs on a ball, and try to comb them all smooth and flat with no whorls or cowlicks. Hairy ball theorem says you can’t.

The theorem works even if the ball isn’t perfect. According to topology a squashed ball or a ball molded into the shape of a cow are just as good as a perfect sphere. But the ball must theoretically have an infinitely fine hair at every point, and also must be complete, all the way around. So if you were thinking of a mostly-round and hair-covered body part that is connected to other more hairless parts of the body, such as how the human head has a lot of hair but is connected to hairless parts of the face and neck, that’s not what this is about.

Hair on the head often whorls around a point. You could also part it down the middle and comb to either side, or comb it inward along a line like a mohawk, or comb it all together to a single point such as a ponytail. In a limited patch of hair like the head, it is possible to comb away all discontinuities and zeroes by either slicking it all straight back or all to one side as in a combover, or almost straight back but with a bit of a wave, or in graceful arcs that gently transition from a combover on top to straight down in the back. These continuous hairstyles rely on the fact that all zeroes can be combed out of your patch of hair and onto the hairless part where they no longer exist, but when you’re an entire hairy ball you can’t avoid them. Every cow has a cowlick. Every tribble has a tuft. Continuous hairstyles seem to be favored in business situations, so this mathematically explains why so few tribbles become CEOs.

Wind speed on the earth follows the hairy ball theorem. Screenshot of by Cameron Beccario @cambecc

The canonical example of this theorem in action is wind speed around the globe. The speed and direction of wind is naturally represented by a vector, and the hairy ball theorem tells us that there will always be somewhere on earth with no horizontal wind speed. Of course the atmosphere is 3d, so there could be vertical wind speed, as in a hurricane where the wind vectors whorl around a horizontally-calm eye, while downdrafts from the upper atmosphere give the eye its distinctive visual clarity.
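A quick numerical illustration of the combing intuition (my own sketch, using the simplest possible comb: a constant combing direction projected onto each tangent plane):

```python
import numpy as np

def combed_field(p, e=np.array([0.0, 0.0, 1.0])):
    """The simplest 'combing' of the unit sphere: project a constant
    combing direction e onto the tangent plane at each point p. The hair
    lies flat and at full length along the equator, but the tangent
    component is forced to zero at the two poles p = +e and p = -e,
    exactly the unavoidable zeros the hairy ball theorem promises."""
    return e - np.dot(e, p) * p
```

Every continuous comb has bald spots somewhere; this one just puts them at the poles.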

Stereo spherical video is full of vectors on spheres, which means plenty of opportunity for the hairy ball theorem to ruin everything. Relevant vectors on spheres include:

-The final pair of spherical videos, where each point has a panoramic twist vector for how 3d space is projected onto it.

-The original sphere of cameras, each of which point in some direction.

-The position and orientation of your eye on the sphere as you view the video, which determines which portion of the spherical video is displayed.

-What point on the sphere of video you are actually looking at, and what direction you are looking at it from.

It’s a good thing the theorem has such a lovable name, otherwise we’d have no choice but to hate it, because the hairy ball theorem sure leads to a lot of inconvenient truths, some of which follow.

2. Hairy Ball Theorem and Panoramic Twist

In my last post we discussed how stereo video is created by panoramic twist, and how the angle of each camera on a circle or ball can be thought of as vectors that map views of 3d space onto a circle or sphere. On a 360-degree circular panorama it is easy to get a nice even panoramic twist all the way around so that the stereo effect is convincing no matter where you turn your head. When you’ve got an entire ball of cameras for full spherical video, things are less clear. How do you do panoramic twist on a sphere?


Whenever you have a vector field on a sphere, you can bet the hairy ball theorem is lurking somewhere close by.

The hairy ball theorem applies to tangent vectors, while panoramic twist vectors should all be directed out from the ball, not combed completely flat. You can get a field of tangent vectors from the panoramic twist by thinking about the horizontal part of that angle projected onto the sphere (the shadows of the sticking-out hairs, or where they’d be if you used heavy hair gel), and then apply the hairy ball theorem to that. This will tell you that there must exist either a discontinuity or a point where the camera view faces directly outward, no matter what, such as the center of the whorl of views you’d get at the poles of the video if you simply twisted the sphere around its axis.

You might think that in the real world these theoretical problems with continuous vector fields don’t exist. In practice, any camera ball will have a discrete number of cameras yielding discrete pieces of footage. You can give an angle to every physical camera or crop every field of view, but even then, the hairy ball theorem cannot be escaped! If the footage is stitched together into a spherical video, then somewhere in the final video exists a place where the point of view is from a ray that sticks straight out, or where the stitching itself does not work because the discontinuity in the fields of view is too great and thus you end up with an unavoidable stitching error in your final video. Because you need an area of good overlap for stereo vision, the effect of a tiny discontinuity actually spreads out.

3. Hairy Ball Theorem and View Orientation

Thinking about the problem from the other direction, there’s another natural way to get a tangent vector on a sphere. For the point reflecting where one eye is, there’s a vector that points to your other eye. This tells you the orientation of your head.

View orientation is a classic problem in computer graphics, where the hairy ball theorem prevents game developers from being able to make a nice continuous function that will always output an orientation of your field of view given a direction you are looking in. You might be familiar with this phenomenon in first person shooters, where when you look straight up or down there’s a point where the view suddenly flips around, and depending on which side of the discontinuity your mouse hits, a matter of one pixel, you might suddenly whirl around to the right, or left, or not whirl at all.
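That flat-game discontinuity is visible in the standard look-at construction, sketched below (a generic version, not any particular engine’s code): orientation is built from the view direction and a fixed world up vector, and the construction degenerates exactly when you look along that up vector.

```python
import numpy as np

def view_orientation(look_dir, world_up=np.array([0.0, 1.0, 0.0])):
    """Generic look-at construction: derive a full orientation (right,
    up, forward) from just a view direction and a fixed world up vector.
    The cross product collapses to zero when you look straight up or
    down, which is where flat games are forced to flip the view."""
    f = look_dir / np.linalg.norm(look_dir)
    right = np.cross(f, world_up)
    n = np.linalg.norm(right)
    if n < 1e-9:
        raise ValueError("looking along the up axis: orientation undefined")
    right = right / n
    return right, np.cross(right, f), f
```

A VR headset sidesteps this entirely because, as above, your body supplies the orientation instead of a formula.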


VR video completely solves this problem for games! In VR, you don’t just have a single point telling the game where you’re looking. You don’t need an algorithm to choose an orientation for the field of view, you already have it! If you decide to do a backbend and see what’s behind you upside-down, your VR headset will (or soon will be able to) detect that, unlike flat games which will flip the field of view right-side-up. Your own body is an engine for creating realistic view orientations. You store your own view-state, in physical positions like backbends and the non-commutative rotations of your head.

The body as a state machine is a beautiful concept that’s worth a bit of a tangent. If you look straight down and want to continue looking back and behind you, you might want to turn your head around yourself to the right and come back up facing up, or turn to the left, or do a handstand and come out with a view that’s upside-down. If you start going to the right but then tilt your head to get a field of view that’s a bit more to the left, the same view you’d have gotten if you’d turned to the left in the first place, your body won’t suddenly flip around as if you’d turned to the left. Your body’s position stores the state of having turned to the right, so the computer doesn’t need to know anything about it.

Games also have the luxury of being able to compute specific views in real time. You can tilt your head however you want, and change the angle from which each eye sees the thing you’re looking at. In a static video, you can’t do this. However a section of the video is shown to one eye, it’s going to stay that way. There’s exactly one panoramic tilt vector for that point in the video, and whatever angle the camera saw it from, that’s the angle that’s shown. Forever. So for stereo to work in static video, you have to have exactly one expected head orientation.

The hairy ball theorem in this case works just like it does in flat 3d games: there’s no continuous way to have expected head orientation. So even if you imagine someday there is an expectation for standard viewing orientations, and that savvy viewers will naturally put their heads only in those orientations while viewing videos, if everything is stereo there will be some point where just one pixel over from the viewer-expected video-approved stereo will be something that appears incredibly misaligned.

Unlike the 3d games case however, zeroes are ok. We need an orientation if we want stereo, but we don’t necessarily need stereo. It’s better to have the very top of a video smooth out to flatness than to have a jarring discontinuity and double-vision. Stereo is a useful effect, but certainly not necessary everywhere all the time. Which is good, because it’s mathematically impossible.

4. So what do we do about it?


On the one hand, the nature of spheres ruins all our hopes and dreams, but on the other hand the hairy ball theorem lets us quickly figure out that certain things are impossible, so now we can focus our efforts on working within these limitations. For example:

-Have panoramic twist with proper stereo around in a circle, with vectors fading to perpendicularity at the top and bottom so that up and down are not stereo, but not misaligned. Definitely wanna try this soon.

-Have panoramic twist optimized for a forward-facing bias, that has good stereo to the right and left, straight up and straight down, but not behind you. There’s a great vector field on the sphere that has only one pole, and it should work really well for this case. If the vectors go perpendicular at the pole but the background behind you is far away, you might not even notice the lack of stereo.

-Future stereo spherical video editing software will want to be able to have a variety of available panoramic twists, and to let you input your own, and be able to apply the twist to your pile of camera footage dynamically, computing vector fields that smoothly change from one to another. This way you can change the twist depending on what’s important in the scene or where you expect the viewer to be looking, and always have great stereo at that point, or even use changes in panoramic twist as a method of moving the viewer’s attention.

-Perhaps smoothly fading or deforming from one static video to another is not too jarring or difficult to do in real time, and if the video player detects that the viewer is looking at misaligned stereo (it would need to know the vector field of expected orientations and compare) the player can fade one eye’s video to be the same as the other eye’s, so it is at least flat instead of misaligned. Perhaps some small number of static videos, stitched and rendered ahead of time, can have a combination of panoramic twists that give really good stereo in all natural head positions.

-We need to do more research on just how much we can misalign or warp images and still get a stereo effect, to find optimal twists. The twists for each eye don’t have to be exact mirror images, and don’t necessarily have to have twist in opposite directions for the two rays to converge, but we’ll have to test to what extent this actually works. We just put our first preliminary 2-camera width and angle test on our downloads page, “This is what science sounds like” (note that the angle of cameras is for the center ray only. Even when the cameras have a divergent angle, much of their field of view may be convergent).

-It is likely that people differ in their tolerances and preferences, so maybe videos will have to come in different amounts of stereo, or maybe viewers will want to input their own biometrics, then render a video that works just for them.

-Stereo isn’t everything. There’s other ways to trick the brain into thinking there’s depth, and they can probably be combined with panoramic twist in effective ways. Stereo can be used where it’s most effective, and is not needed all the time everywhere.
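Several of these ideas boil down to varying stereo strength across the sphere. A minimal sketch of the first one (the cosine falloff and all names here are my own assumptions, not a spec): scale the effective eye separation by the cosine of latitude, so stereo is full-strength around the horizontal circle and fades smoothly to mono, never misaligned, at the poles.

```python
import math

def stereo_strength(latitude_deg):
    """Hypothetical fade: full stereo at the horizon (latitude 0),
    smoothly dropping to mono at the poles (latitude +/-90)."""
    return math.cos(math.radians(latitude_deg))

def eye_offset(latitude_deg, ipd_cm=6.0):
    # effective interocular distance used when stitching this latitude band
    return ipd_cm * stereo_strength(latitude_deg)

print(eye_offset(0))    # 6.0  -> full stereo at the horizon
print(eye_offset(90))   # ~0.0 -> mono straight up, flat but never double
```

Because the offset goes to zero continuously, the top of the video “combs flat” instead of hitting a discontinuity, which is exactly the kind of zero the hairy ball theorem permits.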

It’s possible that the most reasonable response to this hairy ball problem is to stop bothering with static stereo spherical video altogether. We could simply give up, in which case there’s two reasonable alternatives. Option 1: assume it won’t be all that long until average people’s computers and VR video software will have the power to create 3d point clouds and render specific views in real time, and focus on producing videos using full wide-angle camera balls, which would be compatible with these future theoretical players. Or…

5. Option 2: I never liked spheres anyway


The hairy ball problem ruins spheres, but there’s other shapes in the world. A torus, for example, is easy to comb completely smooth. And in many ways, a torus is a more natural shape for video to be on: take a normal flat rectangle, wrap the top to the bottom, and the right side to the left side! No weird stretching of pixels, no stereo problems. We can’t help but think of tons of ideas for what we’d love to do with this. We’d have to write a player that could display it, but it’s tempting enough that we might have to do it.

See, with toroidal video, the 360 of video you see as you spin horizontally is different from the 360 of video you see when you turn vertically. So many super cool things could be done with it. I want it so bad. Should probably make an entire post about this.

Or we take a panorama and apply twist in a different direction: Möbius videos! When you spin all the way around, you see the same exact footage but upside-down, with no seam, and stereo still works great. A vertical Möbius strip would be a bit trickier; we’d have to make sure one eye’s footage wraps to the other eye’s, so the stereo still aligns. For a stereo Klein bottle, you’d want it to wrap horizontally to itself with a flip, and vertically to the other eye’s video with a flip. So basically each eye has four copies of the video in a 2×2 torus that appears to be wrapped onto itself into a Klein bottle.
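One way to picture the horizontal Möbius wrap is as a texture lookup. This is just an illustrative sketch with normalized coordinates I made up (u = heading as a fraction of one turn, v = vertical position in [0, 1]); the underlying strip is two turns long, and the second turn replays the first upside-down.

```python
def mobius_sample(u, v):
    """Sample footage on a Mobius panorama.
    One full turn re-enters the same footage upside-down; two turns
    return to the original orientation, with no seam anywhere."""
    u = u % 2.0                    # the strip is two turns long
    if u >= 1.0:
        return u - 1.0, 1.0 - v    # second turn: same footage, flipped
    return u, v

print(mobius_sample(0.25, 0.1))  # (0.25, 0.1)  first turn, unflipped
print(mobius_sample(1.25, 0.1))  # (0.25, 0.9)  same heading, upside-down
```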

Or you could wrap without a twist, a 720 panorama. It seems like a 360 panorama, but when you turn once you come back to a different view, or two turns get back to the same view. In theory you could have any amount of wrapping, and some interesting storytelling opportunities, such as a story that unfolds at your own pace as you turn around to simulate moving around a space.

There’s any number of cool spaces you could theoretically view video from within, and I’d like to spend some hardcore time thinking about which ones would work. Remember that tangent about how the body stores information about its position in space? It’s one of those things that seems too obvious to even notice, but when you start thinking about how the information stored in the body’s state might interact with virtual spaces that are different from ours, you start to realize just how crazy even the simplest things are.

For now, we’re still experimenting with stereo and spheres, because the weirdness of how our brain actually perceives stereo video is hard to predict with theory. As always, you can get our latest experiments on our downloads page, and see for yourself.



posted in: Uncategorized | 0

Haven’t taken up the by now massively viral craze of 3D video selfies? Want to learn the precise choreography of sharing a pair of eyes between you? Well have we got the video for you.

After most of the office had gone home for the day, we decided that roofs really don’t get enough stereoscopic affection. (I climbed a ladder and everything.) We wanted to do a few experiments in eye width and needed some vistas to play with. Why eye width? Well, you know those tilt shift videos that use angle and a narrow depth of field to make everything look miniature? You can get a similar effect in stereoscopic video by moving the eye distance very far apart. You won’t be able to see anything nearby, but once you crop and align a bit, tiny stereo scenes will pop into focus. Create a sequence that is twice as wide, but the same height, as your sources. Align one to each edge, taking care not to mix up which footage is right and which left. Full-size the viewing window in Premiere by highlighting the window then pressing the [ ~ ] key, then view with the Oculus. This really helps get the stereo alignment just right. One thing you’ll notice is that there’s not just one right position. Sure, vertical alignment is pretty one-spot specific, but the horizontal works in a small range of positions. Our brains are awesome like that.

Oh, but we were talking about selfies. You can start with the basics of course, just standing next to one another, shoulder to shoulder, extending an arm each (camera phones, DSLRs, GoPros should all work). But once you’re bored with that, which will be almost instantly, try it with a turn. In the first position your arms will need to be crossed. Then once you’re happy with the framing, keeping the cameras level, smoothly pivot 180 degrees. No arm crossing needed on the other end. The trick is to keep the cameras as level and steady as possible, though some mismatch is easily fixed in editing. It takes a bit of image stabilization and motion keyframing in Premiere, but soon you’ll have yourself a 3D video selfie. (Again with the side-by-side then full-screen technique.)

You can’t tell when watching this all flat-like, but when you pop it in the Oculus, halfway through that 180 turn is the best part. Your eyes are looking at one another, creating a strange meld of the unlinked views. Both parties’ faces remain visible, but they mix and mirror before returning to stereo.

If you do either of these let us know! Tweet at me @emilyeifler with the hashtag #3Dselfie and we might feature your 3D selfie on the blog!


eleVRant: Panoramic Twist



Today we discuss panoramic 3d video capture and how understanding its geometry leads to some new potential focus techniques.

With ordinary 2-camera stereoscopy, like you see at a 3d movie, each camera captures its own partial panorama of video, so the two partial circles of video are part of two side-by-side panoramas, each centering on a different point (where the cameras are).

This is great if you want to stare straight ahead from a fixed position. The eyes can measure the depth of any object in the middle of this Venn diagram of overlap. I think of the line of sight as being vectors shooting out of your eyeballs, and when those vectors hit an object from different angles, you get 3d information. When something’s closer, the vectors hit at a wider angle, and when an object is really far away, the vectors approach being parallel.
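The eye-vector picture can be made quantitative. As a rough sketch (the symmetric fixation geometry and the 6cm interpupillary distance are assumptions for illustration), the angle between the two sight lines for an object straight ahead at distance d is 2·atan((ipd/2)/d): wide for near objects, shrinking toward zero as the rays approach parallel.

```python
import math

def vergence_angle_deg(distance_cm, ipd_cm=6.0):
    """Angle between the two eyes' sight lines when both fixate an
    object straight ahead at the given distance (symmetric case)."""
    return math.degrees(2.0 * math.atan((ipd_cm / 2.0) / distance_cm))

print(round(vergence_angle_deg(30), 2))   # ~11.4 deg: near object, wide angle
print(round(vergence_angle_deg(10000), 4))  # ~0.034 deg: far object, nearly parallel
```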

But even if both these cameras captured spherically, you’d have problems once you turn your head. Your ability to measure depth lessens and lessens, with generally smaller vector angles, until when you’re staring directly to the right they overlap entirely, zero angle no matter how close or far something is. And when you turn to face behind you, the panoramas are backwards, in a way that makes it impossible to focus your eyes on anything.




So a setup with two separate 360 panoramas captured an eye-width apart is no good for actual panoramic stereo. But you can stitch together a panorama using pairs of cameras an eye-width apart, where the center of the panorama is not on any one camera but at the center of a ball of cameras. What does the Venn diagram of their final panoramic video capture look like?

Each set of cameras is centered around the same point, so the circles overlap entirely. It’s the sort of thing that gave me a moment of: wait, how can this possibly work? How do two panoramas around the same point actually give a stereo effect? But the panoramas are not the same, the footage is definitely different, offset somehow, and it’s more than just turning the footage. So, what is the relationship between space and these two different circles?

Here’s where thinking of eyes as vectors really helps. When you put the vectors of camera direction on the circle for each eye, you can see they capture space with a twist, and you’ve got one circle of each chirality.




Depending on the field of view that gets captured and how it’s stitched together, a four-cameras-per-eye setup might produce something with more or less twist, and more or less twist-reduction between cameras. Ideally, you’d have a many camera setup that lets you get a fully symmetric twist around each panorama.

Or, for a circle of lots of cameras facing directly outward, you could crop the footage for each camera: stitch together the right parts of each camera’s capture for the left eye, and the left parts of each camera’s capture for the right eye.

But just how much to crop? What is the ideal twist angle?

Basically you need some ratio between the radius of the circle of cameras (which in this first case we’ll assume is the circle your head would turn in while staying still) and the distance between your eyes. For me, this might be about 8cm for the radius from the center of my head to my eyes, and 6cm for interpupillary (between-pupils) distance. So you take a circle of radius 8, and a chord of distance 6. Your two eyes shoot out a pair of parallel rays staring straight ahead, and you want to know the angle between the right eye’s ray shooting out of the circle parallel to the left ray, and the ray that would shoot straight out from the circle.

That’s all the information you need to find the angle by doing geometry to it, and then you’d figure out the field of view to pull from each camera to get the footage from that angle and stitch it together into a panorama with a twist.
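Doing that geometry with the sample numbers from above: the half-chord (3cm) subtends arcsin(3/8) at the center of the 8cm circle, and the outward radial at each eye is tilted from the shared forward direction by that same angle. A minimal sketch (assuming eyes sit exactly on the head circle and stare along parallel rays):

```python
import math

def twist_angle_deg(head_radius_cm=8.0, ipd_cm=6.0):
    """Angle between the outward radial direction at one eye and that
    eye's forward ray, for eyes sitting on a circle of the given radius
    an ipd-length chord apart (the sample numbers from the text)."""
    # half the chord subtends arcsin((ipd/2)/r) at the center, and the
    # radial at the eye is tilted by that same angle from forward
    return math.degrees(math.asin((ipd_cm / 2.0) / head_radius_cm))

print(round(twist_angle_deg(), 1))  # ~22.0 degrees of twist per eye
```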




But I’m less interested in the details of that than the much more interesting question: what happens if you make the angle different than is natural?

Imagine angling the field of view of two cameras towards each other. Now, when you look straight out ahead with parallel eye rays, the cameras make those rays meet at some finite distance. When you look closer, the cameras make it seem like you’re looking even closer. All of space moves inward.

While if you angle the cameras out, the world moves further away, until it’s all flat at infinity, and after that you won’t be able to focus on anything.

This suggests an interesting way to focus a panoramic camera closer or further. The camera stays still and the field of view stays the same, but by slowly shifting what footage in your multi-camera-panorama gets stitched into the final twisty panorama, theoretically you could “zoom in” and make everything seem closer.
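The “all of space moves inward” effect can be quantified with a small sketch (the extra-inward-angle framing and numbers are my own illustration, not a prescription): if each eye’s footage is rotated inward by an extra angle beyond the natural twist, rays that were parallel, i.e. objects at infinity, now appear to meet at the finite distance (ipd/2)/tan(angle).

```python
import math

def apparent_infinity_cm(inward_angle_deg, ipd_cm=6.0):
    """If each eye's footage is twisted inward by an extra angle,
    formerly-parallel rays (objects at infinity) now converge at this
    finite distance -- everything appears to move closer."""
    return (ipd_cm / 2.0) / math.tan(math.radians(inward_angle_deg))

print(round(apparent_infinity_cm(1.0)))  # ~172 cm: infinity pulled close
print(round(apparent_infinity_cm(0.1)))  # ~1719 cm: a subtler shift
```

Angling outward is the same formula run in reverse: a small outward angle pushes the convergence point past infinity, which is why everything flattens and then becomes impossible to fuse.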

Even more interestingly, once you have the footage, there’s more subtle twisting you can do in post-production.

I like to think of the twisty-circle as a vector field. Vector fields on the circle behave so nicely, with many possible continuous vector fields to choose from! What if the angles of footage pulled for stitching changed depending on what part of the circle you’re on? You could make certain sections of your panorama seem closer or further, and with enough cameras, even focus on individual objects by giving them the stereoscopic illusion of sticking out. All with regular video filmed with regular cameras, and exported into two single flat panoramic videos.

But what about spherical?

Spheres are weird, and vector fields on spheres are weird, and twisting 3-space to collapse it onto the sphere is weird. Maybe more about that next time, or maybe we’ll put together some examples using these different techniques. Who knows where we’ll go! Wooo, research!


eleVRant: My brain plays tricks on me


I have a background in neuroscience, and one thing that every neuroscientist knows is that your brain is playing tricks on you all of the time. There is actually an entire research community devoted to the science of illusions. There is even an annual competition for the creator of the year’s best perceptual illusion.

The thing about perceptual illusions is that even though they seem baffling, as though our brains were making crucial mistakes, they actually tell us really important things about how we perceive the world. And, generally, what they tell us is that our brain takes what actually ought to be insufficient information and processes it, based on some fairly reasonable assumptions about what our world is really like, to come up with a good understanding of what is probably actually happening. Sure, we can trick our brains by giving them something so unusual that it probably wouldn’t come up in the real world, but the trick works precisely because it’s not something that would really happen.

One of the things that we can learn from optical illusions is how much we really want to see the world in 3D. Our brains have optimized for this so drastically, that even when looking at flat images on a screen, we adjust for a world of light and depth and color.

Take this illusion from Ted Adelson at MIT. You’re probably entirely convinced that tile A and tile B are different colors. They’re not, of course, or it wouldn’t be an optical illusion. But, why?


As it turns out, our brain takes way more cues into account when it decides whether something is 3-dimensional than just that stereo cue that everyone and all the 3D movies are so keen on. We live in a world where light tends to shine from above, where shadows alter colors and shades, where things get smaller in the distance. And our brains use all of those cues to process everything we see, even if it’s on a screen.

A and B look different colors to us because we see B as being in shadow, and our brain has made an automatic adjustment for the fact that shadows make colors darker. That’s how the real 3D world works.

Optical illusions usually aren’t examples of our brains being stupid. They’re examples of our brains being clever.

Computer vision is a difficult problem because the world that we see has been cleverly parsed by our brain to make sense of shadows and light and edges. For example, we have an incredibly uncanny ability to identify someone as the same person when they have rotated a few degrees.

To do really completely authentically realistic 3D 360 virtual reality video we need an actual 3D map of the world at all times that is rendered differently depending on your angle of gaze and your movement. Otherwise, we’ll be ignoring parallax and screwing up stereo half the time and getting all kinds of visual cues wrong.

My logical, math-y side that understands how real 3D images change is completely convinced that there is no way that generating 2 static 360 panoramas (one for each eye) should work to give us a feeling of stereo 3D. Frankly, it only works because we don’t have real 360 point cameras and our stitching algorithms stitch together two slightly different worlds centered around the same point.

But, it does work. Because my brain is playing tricks on me. Or, more accurately, because my brain is so good at seeing the world as it really is in its wonderful, gorgeous, light and shadow filled 3D glory, that it will happily work around the imperfect cues that we give it to create a world that is convincingly 3-dimensional.

In the end, I want VR video to be perfect, but my brain is happy with good enough.


Just don’t tilt your head!


When you go to the 3D movies you are looking at 2 flat stereoscopic videos. The videos are projected through a vertical polarizing filter for the content meant for the left eye and a horizontal one for the content meant for the right eye. It’s pretty convincing (unless you’re me and your brain just refuses to be fooled by such primitive attempts). But now we have screens like the Oculus which can show a completely different image to each eye without the need for all that polarization. So why is it harder to make 3D for VR than it is for 3D movies? In the theatre nobody gets to turn their head and mess up the fancy alignments. You can read Vi’s recent post all about this.

Today I hacked together this camera configuration in an attempt to get just half a sphere really convincingly 3D. Each pair of forward-facing cameras will give you great stereo if you keep your head in a basically level, forward-facing arrangement, and the side cameras provide 2-dimensional peripheral vision, meaning you can turn your head and look from 3D space to 2D space in the video. Sure, you can’t turn around, and, sure, you can’t tilt your head seriously sideways, but, hey, it’s great for tape and rubber bands.




You can download the video here.

How to play


Tape and rubber bands forever!


Flat Video, Broadcast Television, Silent Film


If you need an introduction to the current state of VR video and you want it provided in an actual VR video then have I got just the thing for you. (Also congratulations on your meta tendencies. You will do well here.)

This video is something of a hybrid between the flat screen videos of old and the burgeoning field of live action virtual reality: shot in spherical 360 but still dependent on editing and effects programs meant for nothing but flat video.

Now is this type of presentational lecture the future of the VR medium? Of course not, but one, transitions don’t happen all at once, and two, the VR medium is a diverse place. There are lots of flavors of virtual reality. There’s live action, in which cameras capture information from meat space, which can be either pre-rendered like this video or live streamed. You can use scanners to make 3D models of real spaces. Then there’s things more like games that deal in totally computer rendered spaces. And all of these styles can have a whole variety of interaction options, from game-character-like control to being locked in place. Different combinations of these will completely change the feeling, style, and meaning of the work. So much awesome!


eleVRant: The problem of 3D spherical video



In full spherical virtual reality video, there’s this idea that the eye sees a sphere, and so you create an entire sphere of video that simulates all possible directions the eye can be looking in, and this is like reality.

When you naturally move your eye around the pivot point of your neck, there’s a space of possible eye locations. The eye itself sweeps in a circle as you turn your head, and you can sweep by tilting your head up, which is also an arc of a circle. This space is roughly a sphere, so that meshes with the spherical video theory, right?

Not so much!

Do this quick experiment: close or cover one eye and look at something relatively close against a farther background. Turn your head very slightly and notice the parallax effect: the close object moves compared to the background. The eye is looking at the same thing, but from different locations on its sphere of possible locations. What it sees from different locations is, obviously, different. Just as obviously, a camera at each of these locations would capture different views, even on the parts of their views that overlap.

Now try this (without straining your eye too much): move your head and body around while keeping your eye still and focused on an object. If you’re actually not moving your eye, what you’re seeing will be the same; no parallax effect.


Once you’re used to that motion, try moving your head and body around a fixed eye location while staring straight ahead, changing the direction you’re looking in to see the full sphere it is possible to see from that point.

That is what a spherical camera captures. It’s theoretically easy in a technical sense: you have a bunch of cameras, or a few wide-angle ones, and you capture incoming light from all directions. But the sphere that a theoretical perfect eye can see is different from the changing sphere of vision you’d see as you turn your head around.

The 360-degree spherical video that is captured around a point I call point 360. The 360 field of vision you see with one eye as you move your head I call natural 360. In natural 360, it’s not just the portion of the visual sphere that changes. Everything in your vision changes, in small but very important ways.

So now let’s talk about stereo vision. It’s extremely easy to capture stereo video as seen from a fixed direction: one camera for each eye, and the two cameras capture light the same way your eyes would. The slightly different views let you see depth information.

So how do you capture video that is both 360 and in stereo?


For a while I’d been thinking about it like this: point 360 is easy, and stereo is easy, but putting them together is the hard problem everyone is trying to solve. What I realized recently is that that was completely the wrong way of thinking about it. The hard problem is not how to create 360 and stereo, but how to create natural 360, the 360 you see as you turn your head, for just one single eye.

If you can do natural 360 for one eye, doing it in stereo is trivial, as both eyes have basically the same sphere of possible positions.

Try closing one eye and then the other, seeing how the parallax of a nearby object changes. If you look out of your right eye, and then turn your head so that your left eye is where your right eye was and look at the same location, your two different eyes should see the same view. 3D 360 isn’t a matter of setting up two different spheres of cameras, one for each eye; the same camera can function to show the right eye view when your head is turned relatively left, and the left eye view when the head is turned relatively right.

Someone with vision in only one eye can capture all the depth information that someone with vision in two eyes can see, it just takes a little longer. All you have to do is turn your head a bit. The experience of that information may be slightly different, but we create models of the space around us in the same way.

I think this is one of the reasons wavy handheld “documentary”-style video is popular as a way to make things seem more real. Functionally you have much more 3d information about the space when you simply move the camera a few inches, than with a non-moving 2-camera “3D” shot.

There’s no way to get that information when there’s just one fixed sphere of footage to choose from per eye, no matter how many cameras were used to create that sphere. There’s no possible camera rig, setup, or software that can output a normal static video or pair of videos that have that sort of actual stereoscopic spherical content. The amount of information you need (a section of a sphere for each possible eye position) would require storing all the video for all the cameras (theoretically this could be in a single video file, not that it’d be watchable in a standard player), and then the video playing software would track your head, see where the eye is, grab and interpolate the footage from the closest cameras, and stitch together the view that the eye sees, in real time.

This is certainly possible. You’d need a standard camera setup that the software knows how to deal with. Also the software would have to exist. And you’d need a really fast computer to be able to do this in real time fast enough to avoid simulation sickness.
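The camera-selection step of such a player might look something like this rough sketch (the evenly spaced ring layout, the names, and the linear blend are all assumptions, not a real player’s API): track the eye’s angle on its circle of possible positions, find the two rig cameras that bracket it, and interpolate between their footage.

```python
def nearest_cameras(eye_angle_deg, num_cameras):
    """Hypothetical player step: given the tracked eye's angle on its
    circle of possible positions, find the two rig cameras (evenly
    spaced around the ring) that bracket it, plus a blend weight."""
    spacing = 360.0 / num_cameras
    a = eye_angle_deg % 360.0
    lo = int(a // spacing) % num_cameras
    hi = (lo + 1) % num_cameras
    weight = (a - lo * spacing) / spacing  # 0 -> all lo, 1 -> all hi
    return lo, hi, weight

# eye halfway between camera 2 and camera 3 of a 16-camera ring:
print(nearest_cameras(56.25, 16))  # (2, 3, 0.5)
```

A real player would then have to warp and stitch the two cameras’ frames per eye per frame, which is where the “really fast computer” requirement comes from.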

There’s another possible way to get a true simulation of natural 360 stereo: if you’ve got a 3d model of your space, you can easily simulate what the eye sees from any location. Creating the 3d model ahead of time means less real-time work for your computer. That’s one reason that fancy 3d games are some of the first things popping up for virtual reality. Any game where you move around in 3d already has a way to render the view from one arbitrary point, so doing another for the other eye is trivial. A pre-recorded static video where you can’t even actually move seems like it should be simpler, but yeah, it’s not.

It’s remarkable that the brain can take flat (or flat spherical) camera footage and render it into 3d by putting multiple views together. 3d movies rely on this ability to capture 3d footage without the camera actually knowing any of the depth information. It’s like cheating. But once you can move your head, it may be easier to actually capture that depth information, build a point cloud by processing the different camera views or using a 3d scanner or Kinect or something, and create an actual virtual 3d model of the world that you can render different views of the same way a video game would, rather than film and play a video where the only 3d-rendering processing power is inside the human brain.

That’s all assuming we want true 360 3d. If all we want is to create a video experience that seems like 360 3d, the answer may be different!

We are currently experimenting with using multiple cameras in separately-stitched spheres for each eye, with some amount of stereo vision but no natural 360 for a single eye, no stereo if you tilt your head, etc. The brain creates an effective 3d model of the world even when you look out of only one eye and move your head, to the extent that many people without stereoscopic vision don’t realize they’re missing anything. In the same way, perhaps when you see a video that has some amount of stereoscopicness but lacks parallax information, you won’t even notice the lack of parallax.


I have two strong predictions:

  1. Stereoscopic vision and/or head-tracked parallax will be an integral part of VR video. Modern flat cinematography is full of cameras slowly moving along tracks or flying around to help give us a sense of 3d space through parallax, and eventually seasoned VR viewers will be able to handle these large-scale camera motions without getting too sick, but VR is also an amazing platform for small-scale intimate video, for truly feeling a sense of being yourself, still in your own body and head, elsewhere, without motion. When you don’t have camera movements to give depth information, you’ve got to get it some other way.
  2. Flat video is going to be old fashioned before long. Video, or virtual reality video as we now call it, may not be fully spherical, but it will be fully immersive. Perhaps you face in one direction the entire time and can see in your entire field of vision, with no video behind or above you. Perhaps there is only stereoscopic vision for what’s right in front of you and not in your peripheral vision, just like in real life, and viewers know they’re not supposed to turn their head much, just as no one expects part of a movie to appear behind them in a theater. Our initial results in this style are promising and we expect we’ll have a demo to show you in a week or two.


Or perhaps VR video will always feel sickeningly unrealistic until we add every single detail right down to layering on a 3d digital reconstruction of your own nose. We don’t know! Nobody knows! But we’re working on it, and we’ll be sure to continue sharing our results along the way. Hooray for the existence of research groups.

In the mean time, check out the demos we currently have available!

There’s also a sense in which none of this matters. I’ve seen movies that I can recall many visual details from clearly, but not whether it was in 3d or not. I have clear visual memories of stories that I can’t remember whether I actually saw a movie of it or whether they’re images I constructed in my head while reading a book. There’s been times when a person flapped and vibrated their wet transient bacteria-covered meat in a way that vibrated the air in a way that my brain decoded into sounds, and then words, and then meanings, that have more real visual impact than most things actually experienced through my eyes.

It’s a well-known yet bizarre psychological fact that people regularly replace real memories of actual fully-imersive fully-3d life experiences with complete fabrications, and don’t know the difference. I’m certain that in the future people will think back on an experience of a story and be unsure whether they saw it in virtual reality or on flat video. So why is it so important to try to simulate reality as accurately as possible?

In the end I think it’s not really all that important, which is why I want it to exist as soon as possible, to be easy to create and view content without all these tech problems, and then we can get to the part where we start using it as a medium. After all, what makes a great medium great is that it’s not the medium itself that’s interesting, despite how new and exciting it is right now. Someday soon, the most interesting thing about VR will be what we use it for.





Our experiments with our little alien setup seem long ago now, but the video lives on in “Hexaflexatesta”, our creepy homage to the hexaflexagon. Check it out on YouTube or download it now, then play the video on your VR headset or practice crossing your eyes to get the full stereoscopic effect.

Although very little actively happens in the video, we see an assortment of oddments lying about the desk of our protagonist (Vi Hart). And so, an "I spy" challenge.

On this cluttered table, I seem to have found something as fun as Vi Hart and Henry Segerman's hypercube of monkeys.

An African talking drum, and a developing dragon curve from Henry Segerman and Geoffrey Irving.

I spy some papers and 1, 2, 3, 4, 4.5, 5 shells. Two George Hart sculptures – a wooden triakis tetrahedron and an orange 12-part puzzle.

And last, but not least, a screwy* stellated dodecahedron by Chris Palmer.

Can you find them all?



* No really, it unscrews, it’s really neat!

The Relaxatron is now available for download


We’re pleased to make The Relaxatron video available as a torrent for you to download and watch on your personal VR headset in all of its spherical glory.

Our files are large, and we spent a while trying to decide the best way to make them available to you. We eventually decided to use BitTorrent protocol because it is one of the best ways to download and share large files over the internet.

You can find the torrent on our Downloads page.

Check out our suggestions for spherical video players here.


The Relaxatron


Oh… hello. Welcome to the Relaxatron. Our very first spherical video! We are currently working on a place for you to download it from but for now why not get prepared for the event by picking out your favorite spherical video player. Here are a few to choose from:

VR Player is an open source player, so if you are in the mood to fiddle and mess, this is the player for you. It does require a bit more effort upon installation, so read the about page carefully before you jump in. I have had some problems getting it to work on my PC.

Total Cinema 360 is my current favorite. It's simple to install and quick to run tests with, but has a finicky UI.

Kolor Eyes is the slickest and most branded of the three. It lets you change the projection quickly and even has a few flashy novelty effects.

All three have Oculus support built in but none of these players really give me the control or experience I’d like (We are working on it but that’s a whole other blog post.)
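For a sense of what these players are all doing under the hood (this is a sketch of the general idea, not any particular player's code), a spherical player maps each pixel of an equirectangular video frame to a direction on the unit sphere, then samples the frame along whatever direction your head is pointing:

```python
import math

# What a spherical video player does, in miniature: map normalized
# equirectangular frame coordinates (u, v) to a unit view direction.
# This is an illustrative sketch, not code from any of the players above.

def equirect_to_direction(u, v):
    """u, v in [0, 1]: normalized pixel coords of an equirectangular frame.
    Returns a unit (x, y, z) view direction, +z being straight ahead."""
    lon = (u - 0.5) * 2 * math.pi   # longitude: -pi .. pi
    lat = (0.5 - v) * math.pi       # latitude:  pi/2 (up) .. -pi/2 (down)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center of the frame looks straight ahead:
print(equirect_to_direction(0.5, 0.5))
```

The "change the projection" feature in a player like Kolor Eyes amounts to swapping this mapping for a different one, say a little-planet or fisheye projection, while the underlying spherical frame stays the same.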

Enjoy ‘The Relaxatron’



Sometimes w/ Teeth


Sometimes when you start with an idea the first step is to make a prototype. And sometimes that prototype has teeth. The teeth are very important to the process. The teeth remind you that those cameras that you velcroed to a piece of wood which you then double stuck to a tripod head are not just a collection of things but a unit, and not just any unit but a little alien with eyes and teeth. Your first try.

First tries are hard. They usually don't work, or maybe they work a little bit and you can just see a glimmer of the dream lurking behind those eyes. But glimmers are hard too. Glimmers show you everything you need to do, and the work and space and time between where you are now and where you want to be, what you want to be making, what you think you can, should, will be eventually making. And that space is frustrating.

We made this prototype a month ago, and looking back on it now I think: what took me so long to figure out my current setup? It's so simple! But it's only simple now because I understand how it all works. That's how research works. You do a thing to learn a thing. It's not fast, but it is fun.

The process of making spherical video is frustrating because it's new. The rigs are custom, coaxed into shape with velcro and hot glue. Shooting only works when you can convince your half a dozen finicky cameras to all record at the same time. Then there's the quicksand of getting those dozens of files into the computer and organized and lined up and stitched and exported in any reasonable amount of time.

Film was this way in the old days too. It wasn't just shoot and edit like we do now, whipping out our phones every second to generate a mountain of easy footage. Film had finicky cameras and nightmarish postproduction, and each and every shoot could be easily ruined by any number of tiny mistakes. This is where we are with shooting footage for virtual reality. It might be slow and painstaking right now, but it won't be that way for long, and we want to share with you every step of the way!




eleVR: Making a place to put on your face


Hello, World! We are eleVR [el-uh-V-R]. We, by which I mean Vi Hart, Andrea Hawksley, and Emily Eifler, are making VR (virtual reality) video and we are excited to get to sharing. Coming soon we will have spherical video to watch in your handy dandy Oculus or even on one of those old fashioned glowing rectangles if your headset is at the cleaners. Join us!