Last week we added a new pair of headsets to our research pile: the Microsoft HoloLens!
As I got ready to go into the office and see them for the first time, I did what I do every morning that I know I’ll be working in-headset: I took my hair down from its ponytail and put it into a low French braid. So the first surprise was to see that the design of the headset has a gap that allows for hair put up in a ponytail or bun, while the bulk of the holding power comes from a band slung low across the back of the head that is easily tightened with a knob.
“You can wear your hair up” should maybe not be our first concern with what Andrea calls “the single most impressive piece of hardware I’ve ever interacted with,” but the simple truth is that before you can have your life changed by the future of immersive technology, you have to be able to get it on your head.
So, that was a nice surprise.
Getting the thing strapped on and angled correctly was pretty difficult the first time, but got easier once I got used to the fact that the field of view is simply tiny and no amount of adjusting can change that. Once you get used to the content inside the headset, the actual process of strapping it to yourself becomes fairly invisible.
My first time around the office I was delighted to see objects that had been placed around by coworkers before me. An AR campfire at the center of a circle of real sofas, an animated fish swimming inside of a real-life clear container, a YouTube screen overlaid on our real-life projection screen, and our company name in big 3D letters over the door. The placement of these objects made sense in the room, and clearly the headset was tracking and remembering our room to keep the placement persistent.
Well, at least semi-persistent. Occasionally you’d walk around and come back to find virtual things shifted a little from where you left them, or the room would need to recalibrate a bit. On the other hand, physical things shifted in relation to the virtual objects, too. I had to tell a coworker “don’t move that coffee table, there’s a bird on it!”
The mapping and tracking aren’t perfect, but they’re hugely impressive. The number of sensors and the amount of computing power built into the space of a tiny headband is incredible. It’s tracking not just the entire space, but also your hands and your voice, and then rendering objects imperfectly but with impressive persistence relative to the space. On top of sensing all this and computing it, it has to output both the visuals and the audio. All without any external tracking or computation, all untethered. Plus battery. I just have no idea how they managed to fit it all in at that size and weight.
The wonderful thing about it, in contrast with the VR devices we usually work with, is that this is definitely AR, and indeed, it wouldn’t work otherwise. I was extremely skeptical of the narrow field of view, but when the real world is part of what you’re looking at, it doesn’t feel like blinders so much as a magic region of vision where you can see an extra layer. For applications that want you to ignore the real world in favor of the virtual components, the narrow field of view becomes very annoying indeed, and it’s interesting to see how different apps go about dealing with it. We’ll probably go into more depth on this another time.
The other thing about being AR with a narrow field of view is that it makes you less likely to get sudden motion sickness from a single bit of lag. There’s not enough virtual to override the real, so even when objects don’t track perfectly or the headset loses calibration with the room, the real world has your back. The visual quality and frame rate are good enough to avoid the more common problems as well, at least in the short-term. Various effects can still build up to make you dizzy after prolonged use, so watch out for that. It sneaks up on you.
I’d like a much larger field of view, not just for AR but also so that the same device can functionally do VR too, but the tracking needs to get much better before this can be done without making the viewer instantly sick. There’s also a lot of work to be done regarding the map of the space, tracking thin objects like table legs accurately, and recognizing the difference between non-moving inanimate objects and the people around you.
The first time I used it working from home in my apartment, which has white walls rather than the bright pink of our office, I suddenly became aware of a constant pinky-purple rainbow sheen from the lenses, which took some getting used to and is still an annoyance. I highly recommend painting the walls bright pink in any space where you intend to use the HoloLens often.
Now on to the interface. I’ve always been a proponent of the idea that icons for programs should be 3D models that you place around you in the room, so you know where to find them. I like this aspect of the interface, though the minimum size of the models is too large for me to arrange them functionally on my desk; it truly demands room scale. Fun for showing off the technology, but not so useful. Many apps are simply rectangular windows, which want to be large and on walls, or hovering in the air blocking a large portion of the room, and when you’re not using them they sit there blankly, not running, just taking up space.
Perhaps it is the narrow field of view and the lack of subtle hand tracking that make arranging small icons on your desk implausible with the current headset, but with all of 3D space to work with, it would be nice to take advantage of having more things within reach!
Not that actually selecting things is without its problems. Pointing and clicking, by pointing your head to move the cursor and then doing the “air tap” gesture, is horrible. We can hope for eye tracking in a future version, and tracking of hand gestures that don’t require holding your arm out in front of you and are more versatile than a button press. And I would like to be able to lean back and set out some windows where I can see them while reclined, but there is no option to tilt the windows towards or away from the ground.
Grabbing the right object or hitting the correct button can be a challenge, and the voice commands are functional but slow and unreliable. And there are many little annoyances in the software, such as the menus on objects constantly reopening themselves, windows ceasing to function when they’re out of focus in a way that often breaks what they’re doing, and objects sometimes finding their way beyond the constantly-updating border of your room, where you can no longer retrieve them. I’ve also run across rarer bugs, such as a window mysteriously squishing itself down to a line only when I looked at it, and buttons disappearing when I looked at them.
Mostly, I am saddened that currently the only way to work with it is to make Unity apps to run on it, and it’s very strange to me that you can’t use your own “holograms”. The stock options are a fun proof of concept, but only go so far. And WebVR is a must!
Pretty much all of those problems are solvable with the current hardware, and the hardware itself is very impressive, so I’m willing to forgive them for now. Time will tell whether the platform itself becomes usable, and a practical thing to develop applications for, or whether its closed-platform nature will prevent it from reaching its potential. Either way, it’s an impressive step forward, and hints at just how good the technology could be in the near future.
I made the following video demonstrating its use: