Collision in VR: 9 Techniques


Traditionally in video games, when you successfully detect that the player camera has collided with an object such as a wall, the camera stops moving and the player is under the impression that they have been physically stopped by the object. But what do you do when the camera position is set using the player’s real-life head position, and when as a VR programmer you don’t have control over whether the player’s head stops moving? (At least, assuming you don’t yet have fancy collision robots getting in the physical path of your player.)

Here are some techniques we’ve seen or experimented with, for room-scale spatially-tracked headsets like the Vive and Hololens.

1. Hard Cut

The simplest method is to ignore the problem and allow the user’s face to smash into and through all virtual objects. This can be uncomfortable for some people, and fast-moving objects that suddenly get too close to the face can set off human reflexes, which is why some VR applications set the cutoff distance a little further from the player’s face.

Screenshots from Float, with two different cutoff distances. Pesky butterflies.

It can be a little strange to see your head motion chopping off the outsides of nearby objects, leaving their textureless insides exposed, so in most of our webVR stuff (such as Float, above), we prefer to set the cutoff distance to almost zero. But in things with lots of moving parts that could potentially pop up into people’s faces, it’s a quick and dirty fix that works.
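As a rough sketch, the hard cut is just a distance test against a cutoff; in most engines this is the camera’s near clipping plane rather than hand-written code. The function and constant names here are illustrative, not any engine’s API:

```javascript
// Hard cut: geometry closer to the camera than the cutoff is simply
// not drawn. In most engines this is the near clipping plane; this is
// the underlying test as a plain function.
function isClipped(distanceToCamera, cutoffDistance) {
  return distanceToCamera < cutoffDistance;
}

// An "almost zero" cutoff, as preferred for Float above (in metres):
const NEAR_ZERO_CUTOFF = 0.01;
```

Choosing a larger cutoff trades exposed textureless insides for objects vanishing further from the face.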

2. Object-level Hard Cut

Similar to the above, but instead of chopping off parts of objects as they get too close, the entire object disappears.

Many users chatting in AltspaceVR

This works well when the relevant disappearing objects are contained and well-defined; for example, in AltspaceVR (a social VR chatroom) you can set a personal space bubble, and any user within that radius will instantly become invisible. I’ve found this to be an invaluable feature given the unpredictable motions of people moving about the space as well as the predictable actions of trolls, and it’s better to have the entire person disappear than to have the graphics cut off halfway through a person.

I could see this method working for other sorts of small moving objects as well, though it might not work so well for large objects made up of many components, such as buildings and environmental obstacles.
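A minimal sketch of a personal-space bubble, assuming a simple scene where each avatar has a position and a visible flag. These names are illustrative, not the AltspaceVR API:

```javascript
// Euclidean distance between two {x, y, z} points.
function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Object-level hard cut: hide the entire avatar when it enters the
// bubble, rather than letting the near plane slice through it.
function updateBubble(headPosition, avatars, bubbleRadius) {
  for (const avatar of avatars) {
    avatar.visible = distance(headPosition, avatar.position) >= bubbleRadius;
  }
}
```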

3. Object Fade

In the default interface of the Hololens, rather than disappearing suddenly, parts of objects fade out as you get close, avoiding the pop-in-pop-out of objects at the collision border. There is also a hard cut about a foot from the viewer.

This is a quick, easy fix for AR technology: because the background is the real world, making objects translucent is as simple as rendering them darker. This method could possibly be brought to many VR objects as well, though a little more time and attention would have to be paid to the graphics.

This gif, output by the Hololens, gives an idea of the technique. Because of how the Hololens video function composites the real and virtual, the hamster appears to fade darker than the background, so a hard line occurs; when actually using the headset, no virtual object will ever appear darker than the background.

One of the default Hololens “holograms” of a hamster

This is a good method when crashing your head into a convex object against a real-world background, but as you can see in the above gif, it doesn’t work so well when other objects or parts of objects are behind the fading triangles. I think a better solution would be whole-object fade, because in the Hololens all objects are small objects, at least in the standard interfaces and official apps. The large environmental objects of AR are your real walls with real collision, so you don’t need to worry about whether fading away an entire wall is a good idea.
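One way to sketch the fade is as a linear alpha ramp between a hard-cut distance and a fade-start distance; both distances and the function name are illustrative, not the Hololens implementation:

```javascript
// Object fade: fully opaque beyond fadeStart, fully invisible inside
// the hard cut, and a linear ramp in between. Returns an alpha in 0..1.
function fadeAlpha(distanceToViewer, hardCut, fadeStart) {
  const t = (distanceToViewer - hardCut) / (fadeStart - hardCut);
  return Math.min(1, Math.max(0, t));
}
```

For a whole-object fade, apply the one alpha to the entire object; for the per-part fade described above, evaluate it per vertex or per fragment instead.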

4. Destroy/Collect

Of course, instead of making your object disappear temporarily in a purely graphical way, you could make it disappear permanently as a gameplay effect.

A classic game mechanic is the collectible object, where colliding with something is not a problem but the goal. It disappears because you’ve collected it, conveniently removing itself from your path in the process. Likewise, destructible objects are quite popular in games, often including large items like furniture and even walls. If you want to make a thing that’s about running around destroying everything you touch, VR can work for you!

Out of 120 dodecahedra in the 120-cell, about half have been eaten.

In Hypernom, the gameplay was designed around the fact that we wanted to show 3D projections of 4D platonic solids, which means that all of 3D space would be completely packed with polyhedra. But how do you see the structure of these shapes from within a single solid cell?

We decided to make the cells destructible/collectible, so that as you move about the projected hypersphere of space, you carve out sections of it and can better see the structure. Each cell pops away with a “nom” sound effect, and if you eat the entire polytope, you win the level, like a sort of 4D Pacman.

One could see this aesthetic working well with games that fill Euclidean space with polyhedra as well, such as carving tunnels in a cube-filled world. In SculptrVR one can use a deletion tool to make cubes disappear as you collide your hand with them, but so far, if you want to avoid colliding your face through an object, it’s up to you to make sure your deletion-hand keeps up with where your body wants to be.
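The collect-on-collision mechanic can be sketched as a set of remaining cells, assuming collision detection elsewhere reports which cell the player touched; the class and method names are illustrative, and the 4D projection itself is elided:

```javascript
// Collectible cells, in the spirit of Hypernom: colliding with a cell
// removes it (with a "nom"), and eating every cell wins the level.
class Collectibles {
  constructor(cellIds) {
    this.remaining = new Set(cellIds);
  }
  // Called on head/cell collision; returns true if a cell was actually
  // eaten (false if it was already gone).
  nom(cellId) {
    return this.remaining.delete(cellId);
  }
  won() {
    return this.remaining.size === 0;
  }
}
```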

5. Collision Shapes The World

For our holiday 2015 project “Snowplace”, I wanted to explore the movement design of using a sled and the leaning of the user’s body to move about a VR landscape. But how do I make a beautiful flowing snowy landscape full of snowdrifts and sine waves, when the physical floor beneath the user is flat? If we keep the user at one height, they’ll clip right into the snowdrifts. If we set them at snow height, the up and down visual motion could make them sick, and then what happens when they get off the sled and walk around? As interesting as it might be to artificially induce the Extra Stairstep Effect as users try to walk up or down a visual hill despite a flat floor, that wasn’t what this project was about.

And so the sled became a plow that carves through the snowdrifts as you move around the space, keeping you on flat ground by making sure that wherever you go, the ground becomes flat by virtue of your being there, not through whole-deletion of ground-parts but by smoothly changing the geometry of the landscape. It works with the story and setting, helps you keep track of where you’ve gone, and creates a more interesting landscape as you sled down the canyons carved through steep snowdrifts.
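The plowing idea can be sketched on a plain heightfield, assuming the terrain is stored as a flat array of heights; the carve lowers everything within a radius of the player to floor height and never raises it back. All names and units are illustrative, and a real terrain would also smooth the crater edges:

```javascript
// Snowplow-style carving: flatten the heightfield wherever the player
// (or sled) goes, so the ground under the user is always flat.
// heights is a row-major array of width * depth height samples;
// (px, pz) is the player position in grid coordinates.
function carve(heights, width, px, pz, radius, floorHeight = 0) {
  const depth = heights.length / width;
  for (let z = 0; z < depth; z++) {
    for (let x = 0; x < width; x++) {
      const dx = x - px, dz = z - pz;
      if (dx * dx + dz * dz <= radius * radius) {
        // Only ever lower the snow: carved canyons stay carved.
        heights[z * width + x] = Math.min(heights[z * width + x], floorHeight);
      }
    }
  }
}
```

Running this every frame at the player’s position keeps a flat patch under them while leaving the rest of the snowdrifts intact.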


A similar aesthetic is available in some VR sculpting software when using a tool that carves, squishes, or pushes, though I’ve yet to see a sculpting tool where you can sculpt using your entire body rather than just your hand collision.

Obviously smoothly carving into things only works for certain sorts of things, but there’s other ways to apply the aesthetic of shaping the virtual world to fit your physical space. In Snowplace, there are two kinds of trees, which gave us the opportunity to try two kinds of collision design. The decorated trees simply get pushed out of the way as you walk or sled into them, leaving them safely beyond the bounds of your space.

In platforms where the shape of the physical space is not known to the program, I imagine virtual spaces where all sorts of objects (including walls) dynamically fit themselves to positions where you can no longer collide with them as you move about your physical space.

When the programming has access to the room model, there’s another option:

6. Space-Aware Design

Our first explorations into space-aware design were only space-aware in so far as we were creating specific things meant to be experienced in our own space, rather than things that sense and adapt to any space. We started by using the 3D model of our own office that Elijah Butterfield made, and aligning the model with our physical space inside the headset. It feels very natural to put on a headset and be right where you know you are, and I like being able to keep track of how I am oriented in the physical space. It makes it easier to move past the “where am I” part of VR, and focus on whatever it is we’re actually working on.

Our old office. Model on the left, photo on the right.

A more challenging technical problem is to make a virtual room adapt to match up with any arbitrary real room, rather than just our office that we already have a model of. The best example I’ve seen of this is in Fragments for Hololens, which has little to recommend it as far as game content goes but makes great technical use of space-aware technology. The Hololens detects where your real walls are, and so the game can place “fragments” of atmospheric setting: bits of decayed bricks on the wall, leaky pipes coming out of the ceiling and dripping onto a puddle on the floor, and even characters who will sit on anything couch-shaped in your room if you have it. Within the restrictions of the Hololens, the use of space is quite cleverly designed.

I look forward to a future of more clever ways people will design virtual spaces to adapt to physical ones, especially when the correspondence gets more complicated than a static room to a static room.

7. Temporary Context-Dependent Collision Avoidance

There’s one other aspect of collision that Fragments does beautifully by design: every object starts as a collection of exploded triangles that come together into a single object as you look at them (which helps you learn and discover where objects are supposed to be, despite the Hololens’ extremely narrow field of view), and the objects re-explode into triangle bits if you try to collide your head with them, reforming when you back away again. There’s story explanation for the graphical bits and pieces coming together, which makes it more than just a pretty graphical effect.

A mouse forms from triangles in Fragments, shown here so you never have to actually play this awful game.

Our own first explorations into the idea of a temporary story-related collision-avoidance effect, also part of the Snowplace collision experiments in 2015, involved the snow eagles (they look like snowmen but the carrots are clearly beaks). As you get within radius, the snow eagles “melt” in proportion to how close you are, until they are flat puddles on the ground when you would have otherwise been colliding with them. As you sled or walk away, they re-form back out of the ground. Not quite realistic, but still object/context-dependent, and amusing to watch.
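The melt effect can be sketched as a single scale factor driven by player distance, assuming the eagle’s mesh is squashed vertically by this amount each frame; the function name and radius are illustrative:

```javascript
// Snow-eagle melt: 1 = fully formed, 0 = flat puddle on the ground.
// The eagle melts in proportion to how close the player is, and
// re-forms automatically as the player moves away.
function meltScale(distanceToPlayer, meltRadius) {
  return Math.min(1, Math.max(0, distanceToPlayer / meltRadius));
}
```

Because the scale is a pure function of distance, no extra state is needed for the re-forming: backing away raises the value again.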


8. Anthropomorphic Avoidance

In our virtual office, we had a virtual version of a drawing that M had put on the wall of the real office. As it looked rather humanoid, I turned it into a 3D “ghost” that could run away if you got too close. There was also a virtual lego brick on the floor, and to avoid stepping on it, it too would scuttle away if you got too close. This turned into a surprisingly compelling game of “Lego Chase”, where the player could try and fail to stomp on the brick as it avoided the player within the confines of the virtual room (which matched up with the physical room).

In Snowplace, the pine trees jump out of the way of the player, in a move inspired by the Knight Bus in Harry Potter. More than just a way to avoid collision, it’s quite enjoyable to sled towards trees and watch them politely jump out of your path. In the snowy landscape of Snowplace, everywhere except the player position is a perfectly good place for a pine tree (they’re placed randomly to begin with), so not much finesse is needed. But I like to imagine a world where all objects have their own particular anthropomorphized behaviors, and are nice enough to make room when we’re trying to get by.
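A sketch of the jump-away behavior on the ground plane, assuming each tree checks its distance to the player and, when too close, picks a landing spot directly away from them; the names are illustrative, and the real trees presumably animate the jump rather than teleporting:

```javascript
// Anthropomorphic avoidance: if the player gets within avoidRadius,
// return a new position just outside the radius, directly away from
// the player. Positions are {x, z} points on the ground plane.
function avoid(objectPos, playerPos, avoidRadius) {
  const dx = objectPos.x - playerPos.x;
  const dz = objectPos.z - playerPos.z;
  const d = Math.sqrt(dx * dx + dz * dz);
  if (d >= avoidRadius) return objectPos; // far enough already
  // Degenerate case: player exactly on the object; jump along +x.
  const nx = d > 0 ? dx / d : 1;
  const nz = d > 0 ? dz / d : 0;
  return {
    x: playerPos.x + nx * avoidRadius,
    z: playerPos.z + nz * avoidRadius,
  };
}
```

For something like the Lego Chase brick, the same function works with a small radius and a quick scuttle animation toward the returned point.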

Gif from livestream of programming the jumping code.

9. It’s Not A Bug It’s A Feature

Rather than avoiding or ignoring collision, we could embrace the fact that in virtual worlds people can safely stick their heads inside of objects. Perhaps we should see the skeletons of those who collide with us in virtual chatrooms, or the inner workings of machines.

When I tried Tilt Brush for the first time, I looked at something M Eifler had drawn and saved, that looked like flat coverings of walls and rooms, with a big lump on the floor. I stuck my head through the lump to see how Tilt Brush dealt with collision (hard cut), and was delighted to see that inside the apparent lumpy surface was a human figure.

In Unscannables (working title of M’s latest piece), M also embraces the sticking of heads into virtual objects. One component, for Hololens, places a giant art object virtually in the room. Unlike most virtual objects, the inside is textured and meant to be seen. In another component, viewers are put into a virtual version of a nearby physical gallery of objects, and again, being able to go inside of sculptures is part of the design, highlighting one of the many differences between the real and the virtual.

It allows me to imagine a world in which our instinct to avoid colliding our faces into objects is considered quaint.

Intersecting M’s Unscannable sculpture in the Hololens.

BONUS TECHNIQUES / Letters to the editor:

@AustinSpafford writes: “Call of The Starseed blurs your vision as you intersect a wall, which is both a soft deterrence and helps hide clipping.”

@Alientrap writes [about Modbox]: “we’ve found the best is just to fade out the camera and move it outside the object”

I look forward to seeing more techniques used and experimented with soon!