Road To VR Hands-on: VRHero “5K” Plus; Pimax “8K” also discussed

Truth is, it needs to be seen. Once the optics & IPD adjustment are nailed down, the clarity should be awesome (based on my experience with the V2).

I’m just really skeptical as to how awesome the visuals will be, given that it is such a wide FOV. Roughly 120 degrees wide for each eye with “awesome” clarity all the way to the edges is, I think, a tall order for an affordable and lightweight lensing system, i.e. a Fresnel type. At best I think we’ll have a sweet spot that is awesome, but who knows how wide they can make that. Even the Vive and Rift have a limited sweet spot, and they’re not covering nearly the FOV of the 8K.

I’m tempering my expectations of visual awesomeness.

I also do not expect the same image quality from side to side. However, a huge FOV will help a lot with overall immersion, even if the edges are not perfect. I’m just hoping that the sweet spot will be large enough. And this is more or less what can be done with current technology.

For a major leap we need curved screens with insane resolution (16K, maybe 20K) and perfectly working foveated rendering to keep GPU and bandwidth requirements manageable. I’d say that’s at least a couple of HMD generations from now.

Agreed. Just catching movement in the periphery is enough to make the wide FOV useful and more immersive. All I do is race sims, and wide FOV alone (sharply focused or not) will be a big advancement in spatial awareness. Hopefully I’ll be involved in fewer “oops” collisions, either as the victim or the perpetrator.

That usefulness alone is why I’m not in favor of easing up on the width of the FOV spec like some have advocated. Unless the focus is reeeally terrible.

Current headsets generally use a 16:9 or 16:10 aspect ratio to get 90 to 110 degrees of FoV.

The Pimax 5K & 8K use 32:9, i.e. double the width, to accommodate the huge FoV.
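A quick back-of-the-envelope on what that doubling buys in pixel density (a minimal sketch; the panel resolutions and per-eye FOV figures below are the commonly quoted specs, treated as rough assumptions rather than measurements):

```python
def aspect(width, height):
    return width / height

def avg_pixels_per_degree(h_pixels, h_fov_deg):
    # Crude average; real density varies a lot across the lens.
    return h_pixels / h_fov_deg

# Two 16:9 panels side by side give a 32:9 canvas:
print(aspect(2 * 3840, 2160))              # 3.56 == 32/9

# Typical current headset: ~1080 px per eye over ~100 deg per eye
print(avg_pixels_per_degree(1080, 100))    # ~11 px/deg

# Pimax "8K": 3840 px per eye over ~120 deg per eye (as discussed above)
print(avg_pixels_per_degree(3840, 120))    # ~32 px/deg
```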

I’m not worried about focus; I’m sure it will be OK. What is bothering me are reports that there is something wrong with the stereo effect. It is a key feature for VR, but with how things are developing, it’s possible that this truly critical issue will be overlooked while chasing other, much less important issues. And THAT would be a true disaster for Pimax.

Foveated rendering doesn’t help with bandwidth, unless they also come up with a display controller that can handle mixed-resolution data transmission.
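To put rough numbers on that (a minimal sketch; the resolutions, refresh rate, and the hypothetical fovea/periphery split are illustrative assumptions, not Pimax specs):

```python
# If the display controller expects a fixed full-resolution raster, every
# pixel still has to be sent each frame, however cheaply it was shaded.

BITS_PER_PIXEL = 24
REFRESH_HZ = 90

def link_gbps(width, height, hz=REFRESH_HZ, bpp=BITS_PER_PIXEL):
    """Uncompressed video-link bandwidth in Gbit/s (ignores blanking overhead)."""
    return width * height * hz * bpp / 1e9

full = 2 * link_gbps(3840, 2160)             # two 4K panels, full raster
print(round(full, 1))                        # ~35.8 Gbit/s, foveated or not

# Hypothetical mixed-resolution transport: a small full-res fovea plus a
# quarter-resolution periphery, reassembled by a smarter display controller.
fovea = 2 * link_gbps(1280, 1280)
periphery = 2 * link_gbps(3840 // 2, 2160 // 2)
print(round(fovea + periphery, 1))           # ~16 Gbit/s -> savings only here
```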

I have tried to follow all the reports for V2 through V5. From what I have seen, the “very few” remarks made regarding the stereo effect were related to IPD adjustment, or rather the lack thereof. I doubt Pimax is overlooking this. On the contrary.
I see no point in being concerned with a prototype’s evolution. The best way to avoid a “glass half empty” feeling is to not drink from the glass until the bartender finishes pouring the drink.

[quote=“geoffvader, post:47, topic:5201, full:true”]
Foveated rendering doesn’t help with bandwidth, unless they also come up with a display controller that can handle mixed-resolution data transmission.[/quote]
Of course you need a display controller to support it. But it is the only way to have “retina resolution” (or come close to it) and wide FOV with decent FPS in the foreseeable future (5–10 years or so) in VR.

[quote=“dogbite, post:48, topic:5201, full:true”]
stereo effect were related to IPD adjustment or rather lack thereof[/quote]
I hope that it is only that. I really do.

[quote=“geoffvader, post:47, topic:5201, full:true”]
Foveated rendering doesn’t help with bandwidth, unless they also come up with a display controller that can handle mixed-resolution data transmission.[/quote]
True. The foveated rendering tech demos I’ve seen don’t actually look very good. (The resolution outside the foveal area is way too coarse.)

I don’t think it’s going to be a successful technique until we all get significantly more powerful video cards. Then the whole screen could be rendered at 100% scale and the foveated area could be generated with a high level of super-sampling (to get rid of aliasing effects). Even then, that may not be sufficient, since your eye is very sensitive to movement (like pixel crawl along non-antialiased edges) in the periphery.
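Some rough shading-cost arithmetic for that “full resolution everywhere plus a supersampled fovea” approach (a minimal sketch; the panel resolution, the ~30-degree fovea, and the 4×4 supersampling factor are illustrative assumptions):

```python
def shaded_samples(width, height, ss=1):
    # ss is the supersampling factor per axis.
    return width * height * ss * ss

eye_w, eye_h = 3840, 2160                  # one "8K"-class eye
baseline = shaded_samples(eye_w, eye_h)    # everything at 100% scale

# Suppose the foveal window covers ~30 deg of a ~120 deg eye, i.e. roughly
# a quarter of the width and height, and gets 4x4 supersampling on top.
fovea = shaded_samples(eye_w // 4, eye_h // 4, ss=4)

print(fovea / baseline)   # 1.0 -> the foveal pass alone doubles the shading work
```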

tl;dr: There’s a lot of hype regarding foveated rendering, but I’m not convinced that it will be viable, at least in the next few years and maybe longer, so I’d recommend that everyone temper their expectations.

The techniques of foveated rendering work perfectly; the problem is whether they can be used with everything that exists now, which would mean modifying the existing programs and/or the game engines.

Indeed, upgrading existing programs for foveated rendering will not be as easy as building a program with built-in support.

A good example of something like this is a movie filmed in 3D vs a movie upgraded to 3D.

The native 3D movie generally has a better 3D effect than the upgraded one.

On the note of mixed rendering, though, we already have that with varying Level of Detail (LoD) settings in games. Foveated rendering changes how LoD is used overall, keying off where you’re looking as opposed to varying environment detail by distance…
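A toy comparison of the two LoD drivers (a minimal sketch; the thresholds and the function names lod_from_distance / lod_from_gaze are invented for illustration):

```python
import math

def lod_from_distance(distance_m):
    """Classic distance-based LoD: nearer objects get the more detailed mesh."""
    if distance_m < 10:
        return 0          # full detail
    elif distance_m < 50:
        return 1
    return 2              # coarsest mesh

def lod_from_gaze(object_dir, gaze_dir):
    """Gaze-based LoD: detail falls off with angle from the foveal direction."""
    dot = sum(a * b for a, b in zip(object_dir, gaze_dir))
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle_deg < 5:
        return 0          # inside the foveal region
    elif angle_deg < 20:
        return 1
    return 2              # far periphery

# An object 8 m away but ~30 deg off-gaze: full detail under the old rule,
# coarsest level under the gaze-driven one.
print(lod_from_distance(8), lod_from_gaze((0.5, 0.0, 0.866), (0.0, 0.0, 1.0)))
```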

I’m imagining one way of dealing with sensitivity to motion in the periphery, when doing foveated rendering, could be to do the rasterisation step at lens-matched, but not foveated, resolution; then, every frame, build a new mask from the resulting image, favouring edges (this should take nothing more complicated than a pair of more or less standard convolve and dithering filters); the mask then determines which pixels get shaded, and the intermediates would be interpolated. Whether there would end up being any savings, once everything is added up, I am not qualified to say, but… :7
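A minimal sketch of how I read that idea (my own interpretation, not an established technique; the function names and the choice of a Laplacian edge filter, random dithering, and nearest-neighbour fill are all assumptions):

```python
import numpy as np
from scipy.ndimage import convolve, distance_transform_edt

def build_shading_mask(prev_luma, base_rate=0.1):
    """Boolean mask: True where a pixel should be fully shaded this frame."""
    # Simple Laplacian edge detector over the previous frame's luminance.
    laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    edges = np.abs(convolve(prev_luma, laplacian, mode="nearest"))
    # Importance in [0, 1]: every pixel gets a base shading rate, edges push it up.
    importance = np.clip(base_rate + edges / (edges.max() + 1e-6), 0.0, 1.0)
    # Random-dither threshold scatters the mask instead of forming solid blobs.
    rng = np.random.default_rng(0)
    return rng.random(prev_luma.shape) < importance

def fill_unshaded(shaded, mask):
    """Fill pixels the mask skipped with their nearest shaded neighbour
    (a crude stand-in for proper interpolation)."""
    _, idx = distance_transform_edt(~mask, return_indices=True)
    return shaded[tuple(idx)]
```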

The big problem is one of standards, variability in features and capabilities, and whether technology and methods will remain relevant for any length of time…

There needs to be one interface that lets the engine talk to all eye trackers, so that developers do not have to chase after ever more new products and implement access to proprietary APIs for each and every one. Fortunately, this could be assumed to be covered by whatever comes out of the OpenXR working group.
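Purely as an illustration of that kind of abstraction layer (a hypothetical sketch; the names EyeTracker, GazeSample, and VendorXTracker are invented here and are not the OpenXR API):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp_s: float
    direction: tuple      # unit gaze vector in head space
    confidence: float     # 0..1

class EyeTracker(ABC):
    """The one interface the engine codes against."""
    @abstractmethod
    def latest_gaze(self) -> GazeSample: ...

class VendorXTracker(EyeTracker):
    """Adapter wrapping one vendor's proprietary SDK (stubbed out here)."""
    def latest_gaze(self) -> GazeSample:
        return GazeSample(0.0, (0.0, 0.0, 1.0), 1.0)   # placeholder data

def foveation_center(tracker: EyeTracker):
    # Engine-side code never sees the vendor SDK, only the abstract interface.
    return tracker.latest_gaze().direction
```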

If, say, the Unirealbite engine adds support for something, such as a new way of doing foveated rendering for a certain headset, developers who have games that use this engine will at minimum have to make a new build of their project, using that latest version of the engine. As far as I know, switching engine versions is “a bit” of a headache for developers as it is – there is always something that breaks. :7

In today’s headsets, the lens compresses and stretches parts of the image, and the image is always on a rectangular flat plane; these properties, which a lot of work-saving rendering techniques can be built around, could well become irrelevant with future curved lightfield displays - you’d at the very least not want to “hardcode” yourself to such constraints. Future rendering methodology could well drop rasterisation altogether and take the plunge into raytracing territory (…if I have my will, anyway :P) – that would invalidate many optimisation schemes, and open up for many others.

It’s hard to be forward thinking, and ever get anything done. :7
