VR HMDs today use LCD or OLED panels (sample-and-hold) with fixed pixels, so persistence and motion resolution are tied to the refresh rate.
Fixed-pixel displays will always need scalers, a native-resolution image, or insane supersampling to look their best, and the only ways to improve clarity are to bump the resolution or to use microdisplays small enough that the pixel grid disappears. A smaller display also means a narrower field of view if you want to keep the cost of the optics down.
To get an image free of blur today, VR headsets use strobed backlights and motion interpolation. That works well for most people, but as we see in a lot of posts, it introduces flicker and visual artifacts whenever the frame rate isn't consistently maintained.
We see posts quite often from people who want higher refresh rates, less flicker, and GPUs that can run in tandem to give us 4K at 90 frames per second or more.
The issue is that even at 120 frames per second in stereo 3D, flicker will still be noticeable to some people, just as it is on monitors with ULMB-style strobing.
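Just to put rough numbers on the strobing trade-off, here's a back-of-envelope sketch. The 20 pixels-per-degree figure, the ~2 ms strobe pulse, and the 150°/s eye velocity are all assumptions for illustration, not specs of any particular headset:

```python
# Back-of-envelope: perceived smear while your eye tracks a moving object is
# roughly (eye angular velocity) x (time the frame stays lit on the panel).
# All numbers here are illustrative assumptions, not measurements of any headset.

EYE_VELOCITY_DEG_S = 150.0   # assumed smooth-pursuit eye velocity
PIXELS_PER_DEG = 20.0        # assumed headset angular resolution
STROBE_PULSE_S = 0.002       # assumed backlight pulse length (~2 ms)

for refresh_hz in (90, 120):
    frame_s = 1.0 / refresh_hz
    hold_smear_px = EYE_VELOCITY_DEG_S * frame_s * PIXELS_PER_DEG      # lit the whole frame
    strobe_smear_px = EYE_VELOCITY_DEG_S * STROBE_PULSE_S * PIXELS_PER_DEG
    duty_pct = 100.0 * STROBE_PULSE_S / frame_s
    print(f"{refresh_hz} Hz: sample-and-hold ~{hold_smear_px:.0f} px of smear; "
          f"strobed ~{strobe_smear_px:.0f} px, but the panel is dark "
          f"{100 - duty_pct:.0f}% of the time, which is where the flicker comes from")
```

The strobed numbers look great on paper, which is exactly why strobing exists; the catch is that a panel sitting dark roughly 80% of each frame is what some people perceive as flicker.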
As resolutions and refresh rates go higher, the headsets get harder to run.
In all honesty, the slowing of Moore's Law is going to be a big problem for VR unless companies are willing to spend far more on advanced optics, so the entire display can be used without lens distortion correction and its associated losses.
Don't get me wrong, things like timewarp, foveated rendering, AI-based interpolation, etc. will help, but even current game engines top out at roughly 400 frames per second. We also know that gamers want ever more advanced graphics, which just makes the problem harder to solve.
The VR industry right now is essentially applying Band-Aids to problems that will keep resurfacing, with ever more expensive hardware required to drive the displays.
If we stopped using sample-and-hold displays and VR moved to surface-conduction electron-emitter displays (SED) with self-emissive phosphors, the screen-door effect, persistence, foveated rendering, and optical distortion could all become solved problems.
Electron emitters illuminating phosphors wouldn't suffer from SDE the way a fixed pixel grid does: one emitter can illuminate multiple phosphor dots, so resolution could be adjusted dynamically, as in foveated rendering, without the quality loss we have on current fixed-pixel displays.
Because of the way SED works (micro-emitters and phosphors sandwiched between two pieces of glass), you could also place emitters within the optics themselves and have panels tailored to your optics, i.e. more emitters in the sweet spot of the lens and fewer along the outer edge.
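Purely as a sketch of what "more emitters in the sweet spot, fewer at the edge" might look like, here's a hypothetical density falloff (the 15° falloff constant is an invented illustrative value, not a real spec):

```python
# Hypothetical emitter-density profile for a panel tailored to its lens:
# full density at the optical sweet spot, falling off smoothly toward the edge.
# The falloff constant is made up for illustration only.

def relative_emitter_density(eccentricity_deg: float, falloff_deg: float = 15.0) -> float:
    """1.0 at the lens center, dropping toward the outer edge of the field of view."""
    return 1.0 / (1.0 + eccentricity_deg / falloff_deg)

for ecc in (0, 10, 20, 30, 40, 50):
    print(f"{ecc:>2} deg off-center -> {relative_emitter_density(ecc):.2f}x emitter density")
```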
Because phosphor decay time is about 1 millisecond, persistence would be very low, so fast head motion wouldn't cause a loss of resolution.
A refresh rate of 90 or 120 Hz on such a display would be more than enough to eliminate problems like blur, eye strain from strobing, aliasing, and so on.
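To put rough numbers on that persistence claim (the 1 ms decay is the figure above; the head-turn speeds and the 20 pixels-per-degree figure are assumptions for illustration):

```python
# Same smear estimate as earlier, applied to a ~1 ms phosphor: the image is only
# "lit" for the decay time, so blur stays small during fast head motion without
# needing a strobed backlight. Velocities and pixels-per-degree are assumed.

PIXELS_PER_DEG = 20.0        # assumed headset angular resolution
PHOSPHOR_DECAY_S = 0.001     # ~1 ms phosphor decay, per the claim above
FRAME_90HZ_S = 1.0 / 90.0    # full-persistence sample-and-hold at 90 Hz

for head_velocity_deg_s in (100, 300, 500):   # slow, moderate, fast head turns
    phosphor_px = head_velocity_deg_s * PHOSPHOR_DECAY_S * PIXELS_PER_DEG
    hold_px = head_velocity_deg_s * FRAME_90HZ_S * PIXELS_PER_DEG
    print(f"{head_velocity_deg_s} deg/s head turn: ~{phosphor_px:.0f} px smear with a 1 ms "
          f"phosphor vs ~{hold_px:.0f} px with 90 Hz sample-and-hold")
```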
Eye-tracking would also pair nicely with these displays, since you could potentially do varifocal on them.
Do you guys have any thoughts?