Apple 8K (8K per eye, small FOV) vs Pimax 12K (6K per eye, wide FOV) discussion

On one hand, 8K per eye is close to retinal resolution (a PPD of 90, and up to 150, is considered retinal; that works out to roughly 9K of vertical pixels, but you'd need around 16K horizontal pixels per eye to cover a wide 160-degree FOV).

The 12K would come in at around a third to half of retinal resolution, at 3K vertical by 6K horizontal per eye, which works out to roughly 30-40 PPD.

The Apple headset (16K total, 8K x 8K per eye) will be just shy of retinal resolution, with a PPD of around 80; retinal resolution would be about 9000x9000 per eye at a 100-degree horizontal FOV.
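For anyone who wants to sanity-check those numbers, this is the rough back-of-envelope math I'm using (a simple linear FOV x PPD estimate; real optics aren't this linear, so treat the figures as ballpark):

```python
# Rough, linear estimate: pixels needed per eye ~= FOV (degrees) * target PPD.
# Real lenses and distortion make this non-linear, so ballpark figures only.
def pixels_for(fov_deg, ppd):
    return fov_deg * ppd

print(pixels_for(100, 90))   # ~9,000 px  -> "retinal" across a ~100 degree FOV
print(pixels_for(160, 90))   # ~14,400 px -> "retinal" across a 160 degree wide FOV

# Approximate PPD of the headsets being compared (horizontal pixels / horizontal FOV):
print(8000 / 100)            # ~80 PPD for 8K per eye at ~100 degrees
print(6000 / 160)            # ~37 PPD for 6K per eye at ~160 degrees
```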

Looks like the next gen will be Apple (no SDE at all and lightweight, but only 90-120 degrees horizontal FOV) vs Pimax (little to no SDE depending on how the 12K looks, with wide FOV at twice the weight).

I wonder how Cambria/Index 2 and PSVR 2 will compete when they’re reportedly still at 2K per eye, though with HDR or OLED now.

It’s interesting to note that the 12K would only require 3-4 billion pixels per second to run at 90-120 Hz (about 2-3 times more than the 8KX, which pushes about 1.5 billion pixels per second at 90 Hz), while the Apple 8K would take about 11.5 billion pixels per second.

Full wide FOV at retinal resolution (90+ PPD) would take 25-35 billion pixels per second to run at 90-120 Hz, so roughly 10x more demanding than the Pimax 12K, for 16,000 x 9,000 per eye.
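For reference, here's the simple arithmetic behind those throughput figures (width x height x two eyes x refresh rate; no foveation, reprojection or upscaling assumed):

```python
# Raw pixel throughput: width * height * 2 eyes * refresh rate.
def gigapixels_per_sec(w, h, hz, eyes=2):
    return w * h * eyes * hz / 1e9

print(gigapixels_per_sec(3840, 2160, 90))     # ~1.5  Gpix/s -- 8KX-class at 90 Hz
print(gigapixels_per_sec(6000, 3000, 90))     # ~3.2  Gpix/s -- "12K"-class panels
print(gigapixels_per_sec(8000, 8000, 90))     # ~11.5 Gpix/s -- 8K x 8K per eye
print(gigapixels_per_sec(16000, 9000, 120))   # ~34.6 Gpix/s -- wide-FOV "retinal"
```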

One question I have for the community: seeing as Samsung recently demonstrated 10,000 PPI displays, and seeing how fast computing has advanced in the last 10 years, is playing Unreal Engine 5 games in VR at 120 Hz at 16K x 9K per eye something you think we will see in the next 10 years? Current 3090s are made on Samsung 8nm, which is roughly equivalent to TSMC 12nm. According to their roadmap, by 2030 TSMC will be making a 0.7nm node, which is almost 20x smaller than TSMC 12nm, and we need a performance jump of 10x-20x over what we have now.

We have 3D-stacked CPU cache memory coming, Infinity Fabric, GDDR8, DDR6, multi-chip cards and process scheduling improvements in the works, so I think it actually could happen by 2030.

2 Likes

IMHO 8K per eye sounds pretty, but also like a pain; I wonder if even NASA hardware could actually play games at that.

For me, next gen means delivering both resolution and FOV, and Pimax is the only one doing that well. Plus, Apple is known for its high prices, so expect double the price of the Pimax 12K, so $5000 or more. Not so great for customers, really.

I don’t think we will get enough power to play normal games at 16K within 10 years; things are different if the game is actually a small VR UE5 minigame.

The perceived difference in detail between 60 PPD and 90 PPD is in most cases less than 1%, so I imagine most will not push past 60 PPD. For VR, as a general rule, 60 is the limit for the average person to detect fine details. Also, you have to ramp up the information via refresh rate for the viewer to have a chance to actually notice such small details.

So to attempt to simulate actual reality you would need ~60 PPD at 160 Hz with an FOV of around 240H x 135V. Certainly difficult to achieve any time soon. We will keep trying to get there.
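Just to put rough numbers on that target (a simple linear FOV x PPD estimate, so ballpark only):

```python
# ~60 PPD across ~240H x 135V per eye, at 160 Hz (linear estimate, ignores optics):
w, h = 240 * 60, 135 * 60                 # ~14,400 x ~8,100 pixels per eye
print(w, h)
print(w * h * 2 * 160 / 1e9)              # ~37 billion pixels per second, both eyes
```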

6 Likes

Humans can easily perceive up to 1000 Hz and beyond. We’ve a way to go to pass the VR Turing test.

Going to have to disagree on that… Human Eye FPS: How Much Can We See and Process Visually?
In scientific testing, the highest FPS any animal could detect was 140 Hz. The differences people perceive are not from FPS; it’s the inconsistencies within the generated frames themselves, where they don’t flow naturally and your brain attempts to compensate. In effect you are grabbing frames from within a large pool of frames. The better the consistency of the flow, the more likely it will appear natural. As a result there is some benefit to frame rates above what a person can detect, i.e. rates less than 200 Hz or so.

High FPS can cover this up to some degree, but your eyes transmit data to your brain at about 60 fps.

I would add that resolution plays some role in this too, because larger pixels mean inconsistent flow is much easier to detect. High frame rates benefit low resolutions more than higher resolutions, where you have enough pixels to render a natural visual sequence with fewer anomalies.

4 Likes

“In the most extreme future case (theoretical 180+ degree retina-resolution virtual reality headsets), display refresh rates far beyond 1000 Hz may someday be required (e.g. 10,000 Hz display refresh rate, defined by the 10,000 Hz stroboscopic-artifacts detection threshold), and also explained in The Stroboscopic Effect of Finite Frame Rates. This is in order to pass a theoretical extreme-motion “Holodeck Turing Test” (becoming unable to tell apart real life from virtual reality) for the vast majority of the human population.”

You need 1000hz to get motion clarity just to be on par with a CRT monitor.

Bear in mind a VR Turing test is a very high bar, i.e. indistinguishable from reality. Stuff is still going to look amazing long before we hit that point.

" Future Blur-Free Sample and Hold Technologies

The only way to simultaneously fully fix motion blur and stroboscopic effects is an analog-motion display.

Real life has no frame rate. Frame rates and refresh rate are an artificial digital image-stepping invention of humankind (since the first zoetropes and first movie projectors) that can never perfectly match analog-motion reality.

However, ultra-high frame rates at ultra-high refresh rates (>1000fps at >1000Hz) manages to come very close. This is currently the best way to achieve blurless sample-and-hold with no flicker, no motion blur, and no stroboscopic effects .

Also, real life has no flicker, no strobing and no BFI. Today’s strobe backlight technologies (e.g. ULMB) are a good interim workaround for display motion blur. However, the ultimate displays of the distant future will fully eliminate motion blur without strobing. The only way to do that is ultra-high frame rates & refresh rates.

1000Hz

The limiting factor is human-eye tracking speed on full-FOV retina-resolution displays. As a result, with massive screen 4K TVs, 8K TVs, and virtual reality headsets, higher refresh rates are needed to compensate for degradation of motion resolution via persistence."

This is a quote from a study used as a source in the popular blog article you linked. Funnily enough, the anchor text was about humans being limited to 50-90 Hz, which is not what the study said:

“Here we show that humans perceive visual flicker artifacts at rates over 500 Hz when a display includes high frequency spatial edges. This rate is many times higher than previously reported. As a result, modern display designs which use complex spatio-temporal coding need to update much faster than conventional TVs, which traditionally presented a simple sequence of natural images.”

The actual study title was: “Humans perceive flicker artifacts at 500 Hz”

Perhaps I could have been a bit less bolshy in my initial statement. I do agree that it’s a complex issue, with lots of different cognitive processing elements feeding into your eventual subjective experience.

The true VR Turing test is going to be pretty challenging, though, using sample-and-hold displays.

Even getting anywhere close is going to be pretty amazing.

The article you reference is not the primary source for the article I linked; here is the one they reference multiple times: https://mollylab-1.mit.edu/sites/default/files/documents/FastDetect2014withFigures.pdf
But it is only one of many references included. They were evaluating a full-FOV, totally black state with a single 478-lumen white frame mixed in. For you to frame that reference as being the focus of the entire article is disingenuous at best.

Their conclusions are totally different from yours.

As for the flicker test, it’s not useful for this. Full-FOV flicker, where every frame is black and one frame is high-intensity white, is intense, and the light persists on the retina. Actual life is not staring at the blackness of space and suddenly seeing one frame of the sun mixed in.

Bottom line: as the scientists have said, you get highly diminishing returns up to 200 Hz, and beyond that essentially no gain at all.

3 Likes

Yes, this is so true, and it’s why 500 Hz monitors (now available for $1500) may look better. We don’t see the frames; rather, they look smoother only because of decreased motion artifacts. All of this seems to forget that we also need to PUSH those frames at 500 fps. Fools think they are going to get 500 fps just because the monitor can refresh that fast. No hardware can push 4K at 500 Hz. Not even with Minecraft-style games.

2 Likes

I am sorry, but I mostly disagree. Just like we all have different eyesight, we don’t all perceive Hz the same.
I am one of those who can clearly see a huge difference between 90/120/160 or 200 Hz.
Just like some people don’t see SDE on the 8KX, I can still easily see it; even if it is not bothersome, I really can’t say there is none.
And there are some people still playing with synced refresh OFF!! who don’t even see the image tearing!!
Same for the Hz perceived.
According to the Air Force, some pilots can distinguish the shape of a plane at exposures equivalent to roughly 500 Hz (I’d have to find the source to get the exact ms figure and article).
So yes, 200 Hz might be enough, but not for everyone to get a real-life feeling; of course it would still be comfortable.
I have a 200 Hz TV; even though it’s interpolated (a fake 200 Hz, just like Smart Smoothing), I can clearly see when it is on or off. I have asked my kids to set the 60/100/200 Hz mode while I wasn’t watching what was chosen; they didn’t see the difference, but I had 100% success telling which mode they had picked.
If you google what max FPS the human eye can see, you will find several “websites” claiming humans can’t see beyond 60 fps. LOL… monkeys…

5 Likes

I’m sorry, I didn’t mean to claim that that one study was the entire basis of the article. I meant to point out that the blog article wasn’t a primary source, and it was one that had been pretty lax with its claims (as is par for the course for these types of sources, which are pretty hit and miss and need to be treated carefully). Now, that doesn’t mean none of its claims are correct.

Nor, in science, do you rely on only one source. The truth is gradually built up from a plethora of primary studies that bring increasing clarity to an issue. Each study has its own flaws, etc.

Anyway, from what I can tell there are several things going on: motion detection, blur detection, detail detection, and light detection. Each one has a different perceptual threshold (minimum processing time), and they all combine with individual differences and other factors, then feed through the way your brain smushes it all together using weird shortcuts and weightings, to deliver your overall subjective experience in a headset.

Our cognitive biases are going to make us slug this out and try to prove each other totally wrong. That’s not right from either side, so I’m going to try to override the temptation to do it. From what I’ve read, I think you are probably right when it comes to perception of light.

Sorry if I came off like a bit of a tw*t.

Blur Busters know their stuff too. I wouldn’t discount what they are saying about motion blur from sample-and-hold displays.

I have a psychology degree as well. We can go to primary sources rather than have a blog writer half-read some papers and draw a load of conclusions for us.

For whatever reason, I defo perceive some pretty high Hz and big differences between them. I have ADHD and am hypersensitive, though. I do remember reading that research shows there is a big range across different humans.

I agree with you that there are diminishing returns and many will be happy with lower rates.

1 Like

I think most people would agree that if someone hits 60 PPD at 240H x 135V, ~200 Hz, with negligible distortion and good contrast, that would be at least visually near VR nirvana. We will keep pushing the hardware and see how far it can go.

9 Likes

Haha. Yeah. I’d defo agree with that. Can’t wait.

4 Likes

I think the question itself is actually missing where VR technology is going and needs to go. It’s still being thought of in terms of computer monitors: flat rectangular displays which sit on a desk or in the palm of your hand. That is, admittedly, because VR gear started out with, and still largely uses, technology that was intended for cell phone displays. But that’s changing.

As we get into displays that are designed explicitly for VR from the ground up, it changes the rules. You’re not going to have a GPU rendering 16000x9000 per eye. That’s a very brute force desktop display kind of concept. Just to begin with, VR displays will eventually not even be rectangular. Instead you’re going to have a foveated display where a lot of resolution is available at any part of it, but most of the scene is not being rendered at such high fidelity. Techniques both at the hardware and software level will be streamlined toward approaches that use far less brute force yet achieve the same fidelity that brute force would have.

With 8K-per-eye screens, the Apple headset isn’t going to be rendering scenes at 8K as it would for a rectangular computer monitor. Modern hardware just wouldn’t be able to do that. It will surely render at lower resolutions. But having the high density of physical pixels means it can eliminate SDE and perform better upscaling.

We’re seeing this kind of concept more and more in VR. The Pimax 12K isn’t going to be brute forcing 6K per eye screens as if they were computer monitors either.

So what you’re going to see is the efficiency of displaying VR improving by leaps while, at the same time, the number of pixels the hardware can render per second continues to increase.

So to answer your question, I think we will have VR headsets that produce the same level of visual fidelity you’re imagining within 10 years. I just don’t think they will achieve it in the way your question imagines.

2 Likes

Six more months before the 12K is released; is that possible? Still no public demo.

1 Like

60 PPD works out to 14,000 x 8,000 per eye at a 235H x 135V FOV.

36 billion pixels per second at 160 Hz.
45 billion at 200 Hz.
112 billion at 500 Hz.

If Pimax can use dynamic foveated rendering and upscaling technology like FSR and DLSS, and we only need 5% of the performance to drive the whole of the displays at full perceived detail, then the headset only needs enough performance to render 5% of 112 billion pixels per second to run at 500 Hz at the above resolution. Assuming the 8KX requires about 1.3 billion pixels per second, going from ~1.3 to ~5.5 billion pixels per second of rendering in the space of 10 years should be possible.
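Rough math for that (assuming the 5% figure holds, which is a big assumption, and reusing the same pixels-per-second formula as above):

```python
# Raw pixel throughput: width * height * 2 eyes * refresh rate.
def gigapixels_per_sec(w, h, hz, eyes=2):
    return w * h * eyes * hz / 1e9

full_detail = gigapixels_per_sec(14000, 8000, 500)  # ~112 Gpix/s brute force at 500 Hz
foveated    = full_detail * 0.05                    # ~5.6 Gpix/s if only ~5% is rendered
print(full_detail, foveated)
```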

The added 55 degrees of horizontal FOV (going from 180 to 235) brings the 90 PPD down to 60, but with roughly the same performance requirements as 16,000 x 9,000 per eye at 120 Hz and 180 degrees horizontal FOV (around 35 billion pixels per second).

We will definitely need foveated rendering techniques to go past the current resolutions, as Sargon suggested above, because there is currently no cable that can meet those bandwidth requirements at all, apart from a 200 Gbps fiber-optic cable which costs over $1000 just for the cable. https://www.fs.com/au/products/135687.html

(Assuming the 8KX’s 1.3 billion pixels per second roughly saturates the ~32 Gbps of DisplayPort 1.4, you would need a cable with at least 120 Gbps for 500 Hz at those resolutions, even while only sending 5% of the total bandwidth that would be needed to drive the whole display at full detail.)

Theoretically it should be possible with current hardware and a DP 1.4 cable if you were aiming for 160 Hz at 14K x 8K per eye while only rendering 5% of the total bandwidth requirement, but again, the display doesn’t exist yet.
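For the cable side, here's a very rough uncompressed estimate (my own assumptions: 24 bits per pixel, no DSC compression, no blanking overhead):

```python
# Raw, uncompressed video bandwidth: pixels/sec * bits per pixel.
def gbps_uncompressed(w, h, hz, eyes=2, bits_per_pixel=24):
    return w * h * eyes * hz * bits_per_pixel / 1e9

print(gbps_uncompressed(3840, 2160, 90))           # ~36 Gbps  -- 8KX-class signal
print(gbps_uncompressed(14000, 8000, 160) * 0.05)  # ~43 Gbps  -- 5% of 14K x 8K @ 160 Hz
print(gbps_uncompressed(14000, 8000, 500) * 0.05)  # ~134 Gbps -- 5% of 14K x 8K @ 500 Hz
```

So even the 160 Hz case would lean on compression like DSC, or a smarter foveated transport, to actually fit through DP 1.4.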

5% is probably too aggressive for foveation. But 500Hz I think is also overkill. So it maybe cancels out.

A big improvement in efficiency for VR that I’m expecting is actually in the rendering pipeline. The current pipeline is still built around computer screens: rendering is first calculated for a flat rectangular view, and then it is warped for the VR display as a post-processing step. This is wasteful in terms of both processing and quality loss. It necessitates substantial oversampling (by about 40%) in order to achieve the same quality that a pipeline able to render directly for the VR display would, and it also chews extra processing for each of those extra pixels.

1 Like

I hope that eventually we see displays tailored for motion resolution, bandwidth constraints, how our GPUs draw images, and low persistence, as opposed to the raw frames-per-second, sample-and-hold approach that the industry keeps chasing, and has been for the last 20 years.

With due respect to the earlier discussion about how many frames we can actually see.

Keep in mind that a lot of that research, at least early on, was done with displays that are basically a particle accelerator shooting electrons at phosphor on glass at a kilohertz rate.

I.e. it’s way faster at drawing an image, and it works in a way that just meshes better with the quirks of our visual system.

It’s 100% on point to say that a human being can only see about 140 frames per second, but the real question is not how many frames, but what kind of display? How does it draw an image? And importantly, how rapidly can it draw that image?

640 x 480 at 120 frames per second on a good tube with a fast phosphor decay time is in fact amazing, and most people who do not have eagle vision would be completely happy with it.

The way CRTs worked enabled them to give us a passable image at 50, 60, 70 and 80 Hz because they weren’t sample and hold. They drew fewer images, but did so insanely fast, relying on our persistence of vision to “see” a solid image.

Totally different ball of wax with any kind of sample and hold flat panel…

I see this discussion on the Blur Busters forums a lot, where the chief will talk about LCD displays eventually running in excess of 2 kHz, i.e. 2000 fps, for blur-free sample and hold. That is a pipe dream in my opinion, until I see it.

Take that new Asus TN 500 Hz monitor: hitting 500 Hz without dropping frames means you’ll have about 2 milliseconds of visibility per frame, without blur reduction or BFI. That’s a ton of frames.

Contrast that with the 60 Hz OLED in the old Gear VR Innovator Edition Note 4.

It also had 2 ms of frame visibility time, but that was running at 60 Hz with BFI on an OLED with near-instant response time.
Do you see the dilemma?
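To put numbers on that comparison (my own quick sketch; the blur-versus-persistence relation is roughly the one Blur Busters describes):

```python
# Persistence: how long each frame stays lit on the panel.
def persistence_ms(refresh_hz, strobe_ms=None):
    # Sample-and-hold: lit for the whole refresh interval.
    # Strobed / BFI: lit only for the strobe flash.
    return strobe_ms if strobe_ms is not None else 1000.0 / refresh_hz

print(persistence_ms(500))                  # 2.0 ms -- 500 Hz sample-and-hold, brute force
print(persistence_ms(60, strobe_ms=2.0))    # 2.0 ms -- 60 Hz OLED with ~2 ms BFI

# Rough perceived smear while your eye tracks motion:
# blur (pixels) ~= persistence (seconds) * tracked speed (pixels/second)
print(0.002 * 1000)                         # ~2 px of smear at 1000 px/s either way
```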

Different displays draw images in totally different ways, and have different inherent characteristics and limitations.

For a flat-panel sample-and-hold screen to have the look of, say, an OLED or a tube display, it needs more frames for brute force, very good GtG, and very good pixel response times. Put simply, flat panels are not a particle accelerator shooting electrons at glass at the speed of light and making phosphor glow.

Think of an OLED’s pixels like a building full of windows, where a very spry 20-year-old runs around very fast to open or close the windows.

An LCD is the same thing, but with a grandmother opening and closing the windows.

A CRT is not someone opening and closing windows at all, but a dude with a particle accelerator shooting electrons to make windows glow, or not glow. No matter how fast you can open or close a shutter, it’s not going to be the same as the particle accelerator.

I frequently comment to the chief how insane the idea of a blur-free sample-and-hold display is, because of all of the half-baked workarounds, interpolation technology, eye tracking, and trade-offs needed just to force it to work.

Even with machine-learning-based frame rate amplification, interpolation or AI enhancement, you can realistically only double the raw frame rate without incurring significant visual artifacts; to say nothing of the fact that our eyes just don’t like the feel of sample and hold, and bandwidth is always at a premium. The pandemic illustrated that in spades.

Display engineers understood these things in the days of cathode ray tube displays, and the Blur Busters chief is trying to get flat panels to run with the motion clarity of a tube display.

An emission-based display that drew the image line by line from top to bottom at a multi-kHz line rate was ideal for our eyes, and our entire display pipeline still works with the assumptions from raster-driven CRTs.

As Carmack once said, “you still have crufty bits inside of the OLED controllers waiting for the raster like it’s on NTSC.”

You didn’t incur blur on tubes beyond what’s introduced by phosphor decay time, motion resolution was great, and most important of all, the bandwidth was very manageable on those old displays.

It’s hilarious that we see consumers unironically asking “when am I going to get 16K at 120 frames per second, with ray tracing, and wireless?”

Not soon, if ever; at least not wirelessly.

A game like CS:GO can run at 300 fps on my 1060 3GB, which could reasonably be upped to 600 fps with FRAT (frame rate amplification).

I can run my copy of GTA V with a modified .ini file at 4K 120 on the same card. Workarounds like that are a pain, though.

The problem nobody is talking about is the bandwidth required for that brute-force blur reduction on sample and hold; and even strobing has drawbacks of its own on a sample-and-hold display, like double images and way less brightness.

The best strobing LCD that is relatively artifact-free is the display inside the Quest 2. That has a 0.3 ms strobe flash for great low persistence.

I’ve read the Blur Busters chief say that there will be a perceptual benefit to 2000 Hz displays, and that some of the LED jumbotrons actually run near that rate. What’s absurd is thinking you’re ever going to get that in the consumer space at a reasonable price.

The best OLED available is the $30,000 Sony BVM-X300, which can only manage 800 lines in fast motion with Sony’s special BFI implementation, on an OLED that barely manages 1000 nits full screen. 800 lines in motion, out of a display capable of thousands more lines of resolution, but only for still images!

That’s even counting OLED’s near-perfect GtG response time and near-instantaneous pixel switch time. No LCD comes close to OLED, but even that’s not perfect.

A 4K display that can only manage 800 lines of resolution in motion without interpolation technologies. It’s so ludicrously inefficient.

And that monitor has heat sinks and active fans, and it’s roughly the size of a small CRT; forget about fitting it in an HMD.

We need displays that work the way the old ones used to. New, of course, but with similar behavior.

I wrote a message a while back that I sent to @PimaxUSA @PimaxVR @SweViver and others, with an idea I had for a display. I’m not an engineer, but I’m amazed nobody has thought of it, because it seems like it would be possible.

A MEMS laser scanning system using tech like what’s in the Nebra AnyBeam. However, instead of being a pico projector, the scanning beam would be used to illuminate quantum dots on a glass substrate. The MEMS scanning laser would basically act the way an electron gun in a CRT does.

ETA Prime on YouTube did a review of the Nebra AnyBeam. It’s a 720p, 60 Hz pico projector.

That does not sound impressive, but when he was playing a racing game, you could actually read the road signs on the side of the track, without any blur. Even though the refresh rate and FPS were so low, it did not matter!

It’s because MEMS uses lasers, and kind of mimics old CRT displays.

Such a display could have very low persistence and accept variable resolution (resolution would be limited by the dot size of the laser, the scan rate, and the mask with quantum dots on it).

It would be plenty bright, and essentially be able to mimic some of those better aspects of earlier displays without some of the downsides.

4 Likes

Good post man, thanks.

1 Like