Even though the optical systems used in today's VR headsets are improving all the time, they're still only a rough approximation of the human visual system. Because of this, there's a raft of things we take for granted about our vision that have yet to be accurately replicated in VR technology. Foveated rendering is one of those things; dynamic focus is another.

Being able to adjust our focus on objects that are either near or far away is an intrinsic part of human sight - one that VR headsets are currently unable to match. That's because the lenses in a VR headset always focus at a fixed distance, even when the stereoscopic depth suggests otherwise. This leads to an issue called vergence-accommodation conflict.

The way around this problem lies in the development of varifocal displays, which can dynamically alter their focal depth. In their simplest form, these involve an optical system in which the display is physically moved backwards and forwards relative to the lens to change focal depth on the fly.

Eye-tracking is essential here because the system needs to know exactly where the user is looking in order to set the correct focus. A line is traced from each of the user's eyes into the virtual scene to the point they're looking at; where those lines meet is the correct focal plane. This information is relayed to the display, which adjusts to match the focal depth of the virtual distance from the user's eye to the object.

Done well, varifocal displays could both cancel out vergence-accommodation conflict and enable users to focus on virtual objects much closer to them than is possible in existing headsets. And even if fully functioning varifocal displays are some way off, eye-tracking could be used before then to more accurately simulate the human visual system's depth of field, better approximating the blurring of objects outside the focal plane of the user's eyes.

Foveated displays

Foveated displays offer a different solution to the same challenge that foveated rendering aims to address: replicating the way our eyes show the things we're focusing on with more clarity than things in our peripheral vision. Where foveated rendering concentrates more rendering power on the part of our vision where we can see sharply (and less on our low-detail peripheral vision), a similar effect can be achieved by upping the pixel count itself. This concept involves a smaller, more pixel-dense display which is physically moved to wherever the user is looking (via eye-tracking) to create a sharper picture in that area.

Foveated displays are already being worked on by innovative companies like Varjo. Using a small, dynamic display like this would enable much higher resolution without resorting to the clumsy method of stuffing pixels at higher densities across the whole field of view. Even better, this approach could lead to wider fields of view than could otherwise be achieved with a single flat display.
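The gaze-ray step described for varifocal displays - tracing a line from each eye and treating the point where the lines meet as the focal plane - can be sketched in a few lines of Python. This is a minimal illustration only, not any headset's actual pipeline; the function name, the metre units, and the convention of placing the eyes along the x-axis are assumptions. Since two measured gaze rays rarely intersect exactly, the sketch takes the midpoint of their closest approach as the fixation point:

```python
import math

def vergence_depth(left_eye, left_dir, right_eye, right_dir):
    """Estimate focal depth (in metres) from two gaze rays.

    A ray is traced from each eye along its gaze direction; the midpoint
    of the rays' closest approach is taken as the fixation point, and its
    distance from the midpoint between the eyes is returned.
    """
    def sub(a, b):  return tuple(x - y for x, y in zip(a, b))
    def dot(a, b):  return sum(x * y for x, y in zip(a, b))
    def norm(a):    return math.sqrt(dot(a, a))
    def unit(a):
        n = norm(a)
        return tuple(x / n for x in a)

    d1, d2 = unit(left_dir), unit(right_dir)
    w0 = sub(left_eye, right_eye)
    b = dot(d1, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = 1.0 - b * b            # valid because d1 and d2 are unit vectors
    if denom < 1e-9:               # near-parallel gaze: focused at infinity
        return math.inf
    s = (b * e - d) / denom        # distance along the left-eye ray
    t = (e - b * d) / denom        # distance along the right-eye ray
    fixation = tuple(0.5 * (le + s * u1 + re + t * u2)
                     for le, u1, re, u2 in zip(left_eye, d1, right_eye, d2))
    eye_center = tuple(0.5 * (le + re) for le, re in zip(left_eye, right_eye))
    return norm(sub(fixation, eye_center))

# Example: both eyes fixating a point 1 m straight ahead, 64 mm IPD
left, right = (-0.032, 0.0, 0.0), (0.032, 0.0, 0.0)
target = (0.0, 0.0, 1.0)
ldir = tuple(t - e for t, e in zip(target, left))
rdir = tuple(t - e for t, e in zip(target, right))
print(vergence_depth(left, ldir, right, rdir))  # ≈ 1.0
```

In a real varifocal system this depth estimate would then drive the lens-to-display distance; that mapping depends entirely on the headset's optics and is not modelled here.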