VR needs a new panel technology to mitigate its current tradeoffs

Presently, VR HMDs use LCD or OLED panels (sample-and-hold) with fixed pixels, where persistence and motion resolution are tied to refresh rate.

Fixed-pixel displays will always need scalers, a native-resolution image, or heavy supersampling to look their best, and the only ways to improve their clarity are to bump the resolution or to use microdisplays small enough that the pixel grid disappears. A smaller display, however, means a lower field of view if the cost of the optics is to stay low.

To get an image free of blur today, VR headsets use strobed backlights and motion interpolation. That works well for most people, but as we see in a lot of posts, it introduces flicker and visual artifacts whenever the frame rate is not consistently maintained.
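The blur tradeoff above can be put in rough numbers. A minimal sketch, with assumed (not measured) eye-tracking speed and pixel density, estimating retinal smear for a full-persistence 90 Hz panel versus a short strobe pulse:

```python
# Back-of-envelope motion-blur estimate for sample-and-hold displays.
# Perceived smear (in pixels) ~= eye tracking speed * time each frame is
# lit * pixel density. All constants below are illustrative assumptions.

def smear_px(eye_speed_deg_s, persistence_ms, pixels_per_degree):
    """Approximate retinal smear width in pixels during smooth pursuit."""
    return eye_speed_deg_s * (persistence_ms / 1000.0) * pixels_per_degree

EYE_SPEED = 120.0  # deg/s, a brisk smooth-pursuit motion (assumed)
PPD = 15.0         # pixels per degree, roughly first-gen PC VR (assumed)

full_90hz = smear_px(EYE_SPEED, 1000.0 / 90, PPD)  # frame lit for 11.1 ms
strobed   = smear_px(EYE_SPEED, 2.0, PPD)          # 2 ms strobe pulse

print(f"full persistence @ 90 Hz: {full_90hz:.1f} px of smear")
print(f"2 ms strobe:              {strobed:.1f} px of smear")
```

On these assumptions the hold-type panel smears roughly 20 pixels while a 2 ms strobe smears under 4, which is why strobing is used despite the flicker it introduces.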

We quite often see posts from people wanting higher refresh rates, less flicker, and GPUs that can run in tandem to give us 4K resolution at 90 frames per second or greater.

The issue is that even at 120 frames per second in stereo 3D, flicker will still be noticeable to some people, just as it is on monitors with ULMB features.

As resolutions and refresh rates go higher, the headsets get harder to run.

In all honesty, the slowing of Moore’s Law is going to be a big problem for VR, unless companies are willing to spend far more on advanced optics so that the entire display can be used without lens distortion correction and its associated losses.

Don’t get me wrong, things like timewarp, foveated rendering, AI-based interpolation, etc. will help, but even today’s game engines have hard limits of about 400 frames per second at their maximum. We also know that gamers want ever more advanced graphics, which only makes the problem harder to solve.

The VR industry right now is essentially applying Band-Aids to problems that will keep needing to be fixed, with ever more expensive hardware required to drive them.

If we stopped using sample-and-hold displays and VR could use surface-conduction electron-emitter displays (SED) with self-emissive phosphors, then screen-door effect, persistence, foveated rendering, and optical distortion could all be solved problems.

Electron emitters that illuminate phosphors would not suffer from SDE the way a pixel grid does. One emitter can illuminate multiple phosphor dots, so resolution could be adjusted dynamically, as in foveated rendering, without the loss in quality we have on current fixed-pixel displays.

Because of the way SED works (micro-emitters and phosphors sandwiched between two pieces of glass), you could also place emitters within the optics themselves and so have panels tailored to your optics, i.e., more emitters in the sweet spot of the lens and fewer along the outer edge.
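A purely hypothetical sketch of how that emitter tapering could be sized: the falloff curve and the `e2` constant below are my own assumptions, loosely modeled on visual-acuity falloff, not anything from SED literature.

```python
# Hypothetical emitter-density profile across a lens: full density in the
# sweet spot, tapering toward the edge. The inverse-square falloff and the
# e2 half-density constant are illustrative assumptions only.

def relative_density(eccentricity_deg, e2=2.3):
    """Relative emitter density vs angle from lens center (1.0 at center)."""
    return (e2 / (e2 + eccentricity_deg)) ** 2

for ecc in (0, 10, 20, 40):
    print(f"{ecc:>2} deg off-center: {relative_density(ecc):.3f}x center density")
```

The point of a sketch like this is that a fixed-pixel panel has to pay for edge pixels the lens blurs anyway, whereas a tailored emitter layout would not.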

Because phosphor decay time is about 1 millisecond, persistence would be very low, so even with fast head motion there would be no loss of resolution.

A refresh rate of 90 or 120 Hz on such displays would be more than sufficient to eliminate problems like blur, eye strain from strobing, aliasing, etc.

Eye tracking would also benefit from these displays, because you could potentially do varifocal rendering on them.

Do you guys have any thoughts?


SED lost to LCD/OLED at large scale for good technical reasons.

SED displays rely on nanoscale emitters, hard vacuum, high voltage, and active matrix circuitry. VR pixels are already only a few microns in diameter. I doubt SED technology can even be scaled down that far at reasonable cost, if at all.

Active matrix LCD/OLED already incorporates significant circuitry, and our VR displays in particular are small enough to be made directly from silicon chips just like processors if need be. A lot of AR headsets are using DLP chips for that reason. Most of the proposed hardware benefits SED could provide could be realized more easily by slightly more sophisticated solid-state circuits.

The software problems behind resolution and framerate have more to do with legacy codebases and workarounds than with fundamental technical limits. Some game engines can already update certain objects, such as HUD graphics, more often than the rest of the scene. Others are not even multi-threaded, or their developers have not made even the most trivial modifications to avoid problems like ‘Parallel Projections’. That is causing the most unnecessarily severe problems for high-end VR users at the moment.

After resolution, which the Pimax 8kX should solve, weight and comfort are the biggest hardware problems for most users.

Lightweight flexible OLED panels mounted on modern composite materials connected to flexible PCBs, themselves used as external cabling, have much more potential for size and weight savings than SED.


The panels already exist in prototype form. The problem is there’s no demand for them. Pimax should be trying to broker deals with other display makers like BOE and Kopin rather than sticking to their own local supply chain. Kevin’s in the perfect position for that.

@PimaxUSA have you met with any of these companies? Or is the minimum order requirement way too high for something custom like the StarVR OLEDs?


Size and weight wasn’t really the issue I was addressing, but rather the limits of OLED and LCD in terms of motion resolution, image persistence, and the impracticality of pushing higher FPS, ULMB, and resolution to address those issues.

The point I was trying to make is that LCD and OLED have inherent limitations that make them only so practical for VR.

To have full motion resolution on OLED would require 500 Hz with black frame insertion to mimic phosphor decay time.
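The 500 Hz figure checks out arithmetically if you assume a 50% black-frame duty cycle (my assumption): on a hold-type panel, per-frame lit time is the duty cycle divided by the refresh rate, so matching a ~1 ms phosphor needs 0.5 / 0.001 s = 500 Hz.

```python
# Sanity check of the "500 Hz with black frame insertion" figure.
# On a hold-type panel, per-frame lit time = lit_duty_cycle / refresh_rate,
# so refresh_rate = lit_duty_cycle / target_persistence.
# The 50% BFI duty cycle is an assumption for illustration.

def required_refresh_hz(target_persistence_ms, lit_duty_cycle):
    """Refresh rate needed so each frame is lit for target_persistence_ms."""
    return lit_duty_cycle / (target_persistence_ms / 1000.0)

print(required_refresh_hz(1.0, 0.5))  # 1 ms persistence, 50% BFI -> 500.0
print(required_refresh_hz(2.0, 0.5))  # 2 ms persistence, 50% BFI -> 250.0
```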

We already know that some people have issues with sickness even on PSVR’s 120 Hz OLEDs. That’s my point.

There have been advances in carbon nanotubes since 2008, and the colors and black levels would also be perfect.

LCD and OLED have inherent limitations that make them only so practical for VR
These technologies have not been pushed to their limits. CRT was. SED may not be able to be pushed much further.

500 Hz with black frame insertion to mimic phosphor decay
No. A little more circuitry could be added to OLED panels to keep pixels on.

some people have issues with sickness even on PSVR’s 120hz OLEDs
We are barely getting VR to the point of supporting most of the more dedicated users reasonably well. Motion sickness can be caused by issues other than framerate, such as latency, certain types of simulated motion, or just lack of experience.

advances since 2008 in carbon nanotubes
Which can probably be applied more readily to 2D chips with LEDs on them, than to 3D emitter structures.

colors and black levels would also be perfect
If SED performs anything like the CRT monitors I had, no. Also, anything with a phosphor is going to degrade.

Also, SEDs emit X-rays, just like CRTs. That generally means heavy, leaded glass to reduce radiation levels. I personally would not want even a low dose of X-rays bombarding my head for hours while I play in VR.


There are already a number of still-undisclosed display techs that will one day be presented commercially almost all at once, including a form of very high-quality, high-resolution holographic display, and they will make 80-90% of the technology we currently use and know look almost useless and archaic in comparison :slight_smile:

Until then, the best current solution would probably be a single, custom-made, ultra-wide, flexible, curved new-gen OLED panel driven by uncrippled control logic capable of very high refresh rates, coupled with lenses similar to those used by the VRgineers Xtal. Unfortunately, many of the problems with transmission bandwidth to the PC, and the limitations that come from the way we currently render and display 3D graphics, will still impose their own constraints and create a lot of problems to solve.

So the problem exists at both ends: even the way we generate 3D imagery should be changed and made more suitable for human stereo vision. Things would need to change from top to bottom, at both ends.


Direct neural connection to the brain will always beat holographic panels :wink: if you don’t mind one or two brain patch updates each month.


I have some doubts that reproducing vision directly in the brain would be a better alternative than screens; it seems like a waste of processing power. It would be useful if you’re blind or have eye damage, though, and perhaps it could eliminate eye strain too. Although by the time we can produce high-quality images in the brain, we might be able to cure conditions like that easily.

So Qualcomm are now boasting that their new chip will do 3K × 3K @ 120 Hz. With claims like that, they must have a reference design or know about some upcoming panel tech. A purpose-built 3K panel would make the “8K” look old really fast.
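That claim implies serious link bandwidth. A rough, uncompressed lower-bound estimate, assuming the 3K × 3K figure is per eye at 8-bit RGB (real links add blanking overhead and usually apply compression such as DSC):

```python
# Uncompressed video bandwidth for a claimed per-eye resolution and refresh.
# 24 bits/pixel (8-bit RGB) and 2 eyes are assumptions; blanking intervals
# and compression are ignored, so this is a lower-bound sanity check.

def raw_gbps(width, height, hz, bits_per_pixel=24, eyes=2):
    """Raw pixel data rate in Gbit/s for both eyes combined."""
    return width * height * hz * bits_per_pixel * eyes / 1e9

print(f"{raw_gbps(3000, 3000, 120):.2f} Gbit/s uncompressed")  # 51.84
```

Even DisplayPort-class links need compression at that rate, which supports the earlier point that the bottleneck is at both ends, not just the panel.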


Micro LEDs are the future


And most of all there is the price! I’m sure if everyone could shell out $10K for a headset, things would be vastly improved already. The market is still incredibly small and saturated with a number of choices. We must praise Pimax and anyone making incremental improvements at this stage. I would guess that returns on investment are small or non-existent for this industry.
Correctly stated: new panel tech is needed and will arrive, but most of us enthusiasts are overly optimistic. The industry has improved greatly in less than 5 years, but a majority of people have barely heard of it, let alone tried it, and still consider it little more than a novelty.
Price is king, and let’s hope whatever improvements come soon come cheap, so we see them in a consumer product within the next 10 years.