How VR Headsets Could Reduce Lag


[Animation: frames rendered slower than the GIF moves]

This is a guest post by Andy Lutomirski, a contributor to eleVR Player.

I’ve played with, and written code for, the Oculus Rift DK1 and DK2.  Both of them have a nasty problem: when you turn your head, there’s a very noticeable lag before the view in the headset starts to pan.  I find it distracting enough that I’d rather watch something on a normal monitor.  Other people seem to find it unpleasant or even sickening.

In theory, there’s a nice way that this is supposed to work.  Every frame drawn on the Rift is rendered somewhat in advance.  When a VR program starts to render a frame, it knows when that frame will be displayed, and it asks the Rift driver to estimate the viewer’s future head position corresponding to the time at which the frame will be seen.  Then it draws the frame and sends it off.  Rinse, repeat.
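
Roughly, that loop looks like the sketch below.  The names here (Headset, predictPose, submitFrame, and so on) are hypothetical stand-ins rather than the actual Oculus SDK calls; the point is just the shape of the loop.

```ts
// Sketch of the "render ahead for a predicted pose" loop.
// Headset, predictPose, renderScene, and submitFrame are hypothetical
// stand-ins for whatever the real driver/SDK exposes.

interface Pose {
  // Head orientation as a quaternion (x, y, z, w).
  orientation: [number, number, number, number];
}

interface Headset {
  // When will the frame we are about to render actually be displayed?
  nextDisplayTime(): number;
  // Ask the tracker to extrapolate the head pose to that time.
  predictPose(displayTime: number): Pose;
  // Hand the finished frame (and the pose it was rendered for) to the driver.
  submitFrame(frame: Uint8Array, renderedPose: Pose): void;
}

function renderScene(pose: Pose): Uint8Array {
  // Placeholder: a real renderer would draw the scene from `pose` here.
  return new Uint8Array(1920 * 1080 * 4);
}

function frameLoop(hmd: Headset): void {
  const displayTime = hmd.nextDisplayTime();      // when the frame will be seen
  const predicted = hmd.predictPose(displayTime); // guess the head pose at that time
  const frame = renderScene(predicted);           // draw for the guessed pose
  hmd.submitFrame(frame, predicted);              // ship it; rinse, repeat
}
```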

This doesn’t work very well.  If I’m currently holding my head still, then unless the Rift is measuring my brain waves, there’s no possible way that it can know that I’m going to start moving my head before my head actually starts moving.  But it gets worse: computer games almost always try to render frames in advance.  In part, this is because you generally don’t know how long each frame will take to render.  If you take a bit too long to render a frame, then you are forced to keep showing the previous frame for too long, and this results in unpleasant judder *.  Actual games often render several frames ahead to keep all the pipelines full.

On the web, performance is especially unpredictable, so programs like the eleVR player really need to start rendering each frame early.

For 3D on a monitor, the only real downside is that your mouse or keyboard input can take a little bit too long to be reflected on the screen.  But, when you’re wearing a VR headset, the whole world lurches every time you move your head.

To add fuel to the fire, the Rift DK2 has an enormous amount of chromatic aberration.  To reduce the huge rainbows in your peripheral vision to merely medium-sized rainbows, the computer needs to render everything full of inverse rainbows, which makes frames take longer to draw, which adds even more lag.

I think that the real problem here is that Oculus is doing it wrong. The Rift contains a regular 2D display behind the lenses.  For some reason, Oculus expects programs to send the Rift exactly the pixels that it will display at exactly the time that it will display them. In other words, each pixel sent to the headset corresponds to a particular direction relative to the viewer’s head.

I think that this is entirely backwards.  Let programs send pixels to the headset that correspond to absolute directions.  In other words, a program should ask the Rift which way it’s pointing, render a frame in roughly that direction with a somewhat wider field of view than the Rift can display, and send that frame to the Rift.  The Rift will, in turn, correct for wherever the viewer’s head has turned in the mean time, correct for distortion and chromatic aberration, and display the result.
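
In code, the correction the headset itself would apply boils down to a per-pixel rotation: take each output pixel’s view direction in “where the head is now” space, rotate it back into “where the head was when the frame was rendered” space, and sample the wide-FOV frame in that direction.  Here is a self-contained sketch of that rotation; the quaternion helpers are written out to keep it runnable, and none of these names are a real headset API.

```ts
// The app rendered a wide-FOV frame for head orientation qRender; at display
// time the head is at qNow, so the headset rotates each view ray by
// qRender⁻¹ · qNow before sampling the rendered frame.

type Quat = { x: number; y: number; z: number; w: number };

function conjugate(q: Quat): Quat {
  return { x: -q.x, y: -q.y, z: -q.z, w: q.w };
}

function multiply(a: Quat, b: Quat): Quat {
  return {
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
  };
}

function rotateVector(q: Quat, v: [number, number, number]): [number, number, number] {
  // v' = q * (v, 0) * q⁻¹
  const p: Quat = { x: v[0], y: v[1], z: v[2], w: 0 };
  const r = multiply(multiply(q, p), conjugate(q));
  return [r.x, r.y, r.z];
}

// For one output pixel: its view direction relative to the head *now* is
// rotated back into the space the frame was rendered in; that direction is
// where the headset looks up the pixel in the wide-FOV frame.
function sourceDirection(
  qRender: Quat,                      // orientation the frame was rendered for
  qNow: Quat,                         // orientation at display time
  viewDir: [number, number, number]   // this pixel's direction relative to the head now
): [number, number, number] {
  const delta = multiply(conjugate(qRender), qNow);
  return rotateVector(delta, viewDir);
}
```

The same per-pixel lookup is where the headset would fold in the distortion and chromatic-aberration correction described above.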

[Illustration: taking a piece of a larger rendered frame]

Let’s put some numbers in.  The Rift display is approximately 2 megapixels.  A decent rotation and distortion transform will sample one filtered texel per channel per output pixel, for 6 megatexels per frame.  At 100 frames per second, that’s only 600 MTex/s and 200 MPix/s, well within the capabilities of even mid-range mobile GPUs.
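
As a quick sanity check, here is that arithmetic spelled out (these are the post’s rough estimates, not measurements):

```ts
// Back-of-the-envelope check of the figures above.
const outputPixels = 2_000_000;   // ~2 MPix on the Rift display
const texelsPerPixel = 3;         // one filtered texel per colour channel
const framesPerSecond = 100;

const texelRate = outputPixels * texelsPerPixel * framesPerSecond; // 600,000,000 Tex/s
const pixelRate = outputPixels * framesPerSecond;                  // 200,000,000 Pix/s

console.log(`${texelRate / 1e6} MTex/s, ${pixelRate / 1e6} MPix/s`); // 600 MTex/s, 200 MPix/s
```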

There’s another huge advantage of letting the headset draw the final distortion pass: the headset can draw several frames per game frame.  So, if it had a beefier GPU, it could display at 200 or 300 fps even if a game or video is only rendering at 30-60 fps.

A better solution wouldn’t use a normal GPU for this kind of post-processing rotation and distortion.  A normal GPU writes its output into memory, and then another piece of hardware reads those pixels back from memory and scans them out to the display.  That means that the GPU needs to be done writing before the frame is scanned out, and it takes some time to scan out the display, all of which is pure nausea-inducing latency.  A dedicated chip could generate rotated and distorted pixels as they are scanned out to the physical display, eliminating all of that extra latency and dramatically reducing the amount of video memory bandwidth needed.
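
As a rough illustration of the idea (pure software here, with hypothetical readOrientation/reprojectRow/emitScanline hooks standing in for the hardware), the scan-out loop would conceptually look like this:

```ts
// Sketch of "generate pixels as they are scanned out": instead of correcting
// a whole frame at once, a dedicated chip could re-sample the head tracker for
// every scanline and reproject just that row on its way to the panel.

type Quat4 = { x: number; y: number; z: number; w: number };

interface ScanoutHooks {
  readOrientation(): Quat4;                             // latest pose from the tracker
  reprojectRow(row: number, qNow: Quat4): Uint8Array;   // sample the wide-FOV frame for this row
  emitScanline(row: number, pixels: Uint8Array): void;  // push the row to the panel
}

function scanOutFrame(rows: number, hw: ScanoutHooks): void {
  for (let row = 0; row < rows; row++) {
    // Each row uses the freshest head orientation available, so the delay from
    // head motion to photons is on the order of one scanline, not one frame.
    const qNow = hw.readOrientation();
    hw.emitScanline(row, hw.reprojectRow(row, qNow));
  }
}
```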

If we had a headset like this, it would be easier to program for it, and it would work better.

* NVIDIA and AMD both have technologies that try to coordinate with the monitor to delay the refresh for a little while if the next frame isn’t ready.  This is only a partial solution.

 

-Andy