Will Pimax 8k be prepared for foveated rendering?

That’s indeed a ‘fixed foveated’ rendering method which ALWAYS renders the midpoint of the screen at high resolution, regardless of where your eyes are looking, so no eye tracking is used there. And it isn’t a UE4 option but something the developers added themselves. The other technique they’re using is more interesting: NVIDIA’s multi-res shading. This is at the core of the Fove HMD’s foveated rendering, although they use the AMD version of multi-res rendering. With this technique, multiple resolutions of the same frame can be rendered in a single pass; I’m sure this is going to be the basis of foveated rendering for VR.
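
Roughly, the multi-res idea looks like this (a toy sketch, not NVIDIA’s actual VRWorks API; the 3x3 grid split and the scale factors are illustrative values):

```python
# Toy sketch of multi-res shading / fixed foveated rendering (not NVIDIA's
# actual VRWorks API): split the frame into a 3x3 grid of viewports and shade
# the peripheral ones at a reduced resolution scale.

def multires_viewports(frame_w, frame_h, center_frac=0.6, periph_scale=0.5):
    """Return (x, y, w, h, render_scale) for each of the nine viewports."""
    edge_w = int(frame_w * (1 - center_frac) / 2)
    edge_h = int(frame_h * (1 - center_frac) / 2)
    cols = [0, edge_w, frame_w - edge_w, frame_w]
    rows = [0, edge_h, frame_h - edge_h, frame_h]
    viewports = []
    for r in range(3):
        for c in range(3):
            w, h = cols[c + 1] - cols[c], rows[r + 1] - rows[r]
            scale = 1.0 if (r == 1 and c == 1) else periph_scale
            viewports.append((cols[c], rows[r], w, h, scale))
    return viewports

vps = multires_viewports(3840, 2160)
full = 3840 * 2160
shaded = sum(w * h * s * s for _, _, w, h, s in vps)
print(f"pixels shaded: {shaded / full:.0%} of a full-resolution frame")
```

Eye-tracked foveated rendering would essentially just move the full-resolution viewport to follow the gaze instead of pinning it to the center.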

I’m not sure what’s happening with the Fove HMD though; it has seemed quiet over there for the last few months, and I’m not sure how that system even performs.


A quick search on Google seems to indicate we are years (2-3?) away from any advancement that would enable foveated rendering to be deployed in a large-scale consumer product.


Not sure how you reached that conclusion; SMI, for example, had already announced the availability in principle of such a capability almost 2 years ago, see below.

At the time, or a bit later, they also said that it would not be very expensive if produced at scale, i.e. implemented as a standard feature of an HMD. Now, Pimax probably would not reach that kind of scale even if they included it as a standard feature, so expect some cost to be involved for anything of quality to be made available by them. And it will not come from SMI, as they were acquired by Apple in the meantime. But the tech seems to be there already.

We are supposed to get eye tracking from AdHawk, which, if it meets expectations, should be more accurate and faster, and it does not use cameras, so it is cheaper.

But having good eye tracking is one thing; being able to use foveated rendering with any game or demo is another.


Hardware without proper software :sob:
‘brainwarped’


Correct, but both the new AdHawk solution with the tiny sensor and the one SMI talked about two years ago are/were supposed to be good enough for foveated rendering. The article I linked says you would need tracking at a frequency of at least 250 Hz to achieve good enough quality.
But of course there is a difference between talking to a company and getting their best product for little money…
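
To get a feel for where a figure like 250 Hz comes from, here is a back-of-the-envelope sketch. It assumes a peak saccade velocity of roughly 500 deg/s, which is a commonly quoted ballpark figure, not a number from the article:

```python
# Back-of-the-envelope: how far the eye can move between tracker samples,
# assuming a peak saccade velocity of ~500 deg/s (rough literature ballpark).
SACCADE_DEG_PER_S = 500.0

def worst_case_gaze_error(tracker_hz, extra_latency_ms=0.0):
    """Degrees of gaze drift between the last sample and the next update."""
    interval_ms = 1000.0 / tracker_hz + extra_latency_ms
    return SACCADE_DEG_PER_S * interval_ms / 1000.0

for hz in (60, 120, 250, 500):
    print(f"{hz:>3} Hz tracker -> up to {worst_case_gaze_error(hz):.1f} deg stale")
```

At 250 Hz the high-detail region only needs a couple of degrees of safety margin around the measured gaze point; at 60 Hz it would need closer to ten.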


The problem comes from the software side; graphics engines would have to support it to take advantage of it, and they are not interested right now, because they would have to build a new and different rendering system only for virtual reality.

I’m not sure if your question has been answered fully so I will try and do my best:

The quick answer is no, and even the more detailed answer is no, given the specific wording of your question.

Let’s try it this way around:

(1) Foveated rendering means more detail where your eyes can see it and in turn less detail where they can’t. Given that current HMDs have a fixed clear area in their optical design, we already have engines that render most detail within that ‘sweetspot’ and less detail outside of it, at virtually no cost to the user’s experience but providing the ability to really crank it up ‘where it matters’. So in theory (HMD specific variables such as size of sweetspot would have to be customized on a per-HMD basis) the Pimax 8K is ready for this type of foveated rendering.

(2) Eye-tracked foveated rendering doesn’t use the size of a sweetspot to decide ‘where it matters’ as its basic metric, but instead ‘where your eyes are looking’. This allows for specific experience mechanics such as eye-tracked gaze control.

(3) It also allows for what has been mentioned in this thread, that is, an eye-tracked adjustment of the geometric distortion algorithm so that the optical sweetspot of a panel/lens configuration can be ‘moved’ in sync with your eye movements. Given that Pimax appear to struggle with even the somewhat less complex sweetspot- and IPD-variable geometric distortion algorithm, I do not believe that a player of their size is capable of the required R&D until it becomes a very probable industry standard.

What is also possible is to combine (1) and (2) [and theoretically even (3)] so that the engine renders a circular area at the focus of both eyes at the highest detail and then gradually decreases detail going from center to periphery. This is what Tobii as well as some others have shown. The problem is, there is no standard for sample rate and positional prediction yet, so engine developers are reluctant to invest considerable resources into this feature until at least a roadmap for hardware and driver development is agreed on. Yet in theory, given that Pimax decide upon an eye-tracking solution that satisfies the stated requirements, the 8K will be ready for that. All it needs is an interface for the tracking solution.
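
As a toy illustration of that combination (the 5°/15° tier boundaries and the scale factors below are made-up illustrative values, not anything Tobii or Pimax have published):

```python
import math

# Hypothetical gaze-driven tiering: pick a render-resolution scale per tile
# from its angular distance to the tracked gaze point. Thresholds are
# illustrative only.
def render_scale(tile_deg, gaze_deg):
    """tile_deg, gaze_deg: (x, y) positions in degrees across the view."""
    dist = math.hypot(tile_deg[0] - gaze_deg[0], tile_deg[1] - gaze_deg[1])
    if dist < 5.0:
        return 1.0    # foveal disc: full detail
    if dist < 15.0:
        return 0.5    # parafoveal ring: half resolution per axis
    return 0.25       # periphery: quarter resolution per axis

print(render_scale((2.0, 1.0), (0.0, 0.0)))   # 1.0  (looking right at it)
print(render_scale((30.0, 5.0), (0.0, 0.0)))  # 0.25 (far periphery)
```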

What you are asking, though, if I understand it correctly, is whether, given the current specifications of DisplayPort, HDMI, USB-C, Thunderbolt etc., it will be possible to have the 8K not only display more DETAIL within the eye-tracked foveated panel area, but in fact display it at the panel’s NATIVE 4K resolution. This is absolutely NOT possible and would have to be a feature added to an I/O specification such as e.g. DisplayPort and agreed upon by the industry.
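
For a sense of scale on the interface side (rough figures, ignoring blanking overhead, which only makes it worse):

```python
# Why native dual-4K input does not fit on today's links (rough figures).
def raw_gbit_per_s(width, height, eyes, hz, bits_per_pixel=24):
    return width * height * eyes * hz * bits_per_pixel / 1e9

print(f"dual 4K    @ 90 Hz: {raw_gbit_per_s(3840, 2160, 2, 90):.1f} Gbit/s")
print(f"dual 1440p @ 90 Hz: {raw_gbit_per_s(2560, 1440, 2, 90):.1f} Gbit/s")
print("DP 1.4 payload:     ~25.9 Gbit/s")  # HBR3 after 8b/10b encoding
```

So full native input simply does not fit, and a ‘send only the foveal region at full resolution’ scheme would need exactly the kind of new interface standard described above.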

So no, the Pimax 8K will never be able to display any area of its panels at any RESOLUTION higher than its native input resolution, which is limited to 1440p. It will in all likelihood, however, be physically able to display more DETAIL in any given panel area, based on either eye-tracked or simply fixed foveated rendering.


Thank you, this is the clearest answer yet.

But I do wonder if this is something that could be done at the PC-to-headset driver level, as some have mentioned previously. The DisplayPort protocol can in theory carry whatever video data you like as long as it stays within its bandwidth, so a solution (a workaround, perhaps?) could be data that is specially signatured from the PC to the headset for this sort of rendering.

So that leads to another question: will the Pimax 8K firmware be updatable in the same way the HTC Vive’s can be? Or is the release firmware all we’re gonna see?


Glad I could help.

And no, this cannot be done on a driver level, at least not without computational requirements on the device side.

I completely understand the ‘intuitive understanding’ of bandwidth, but I’m telling you that this kind of interface standard doesn’t exist (yet).

Re the firmware, I think it would be best to ask existing 4K and 4K BE owners if and how the firmware has been developed by Pimax after release.


What does PiMax have to offer as (or with) a promoted third-party EYE TRACKING solution?
For example:

  • third party API/SDK
  • integration into PiPlay
  • example demo app/vr room where you can test eye tracking

Any of these?


The 4K & BE firmware is updatable, like just about all hardware these days.


I think a few game developers will add it in themselves in the next year or two. If/when the Vive and Rift get eye-tracking add-ons it will pick up faster. If they have eye tracking native in their 2nd-gen headsets it will come heaps quicker, but I think it will either be an add-on for gen 2 or included in a gen 2.5 from either company (by gen 2.5 I mean something like the Vive Pro being released as a gen 1.5).

I’m really hoping the developer of WSS will add support, as they did for Leap Motion, which is a small niche (hand controls) inside an already smallish niche (VR itself), so eye tracking will be in a similar position. If they add it, that alone will make it worth it for me. Same hopes for it in the Honey Select VR port/mod.


I still don’t understand why this isn’t possible. Why wouldn’t you be able to compress or remove the data that’s not needed?

Tech announcements don’t mean anything for market adoption. The market is going for lower-price solutions. Higher OLED/LCD resolutions are not available yet. I simply bet that we’re 2 generations away from any widespread adoption of this tech: 2 × 1.5 years = 3.

When Unity and Unreal integrate eye tracking natively, with hardware support.

The Fove headset is already on the market, so it’s not about pure hardware innovation but about real usability and availability.

[quote=“IrregularProgramming, post:55, topic:5018, full:true”]
I still don’t understand why this isn’t possible, why wouldn’t you be able to compress or remove the data that’s not needed.[/quote]
I’m not quite sure what you mean by “this”, but I’ll try to answer your question.

The signal to the headset is uncompressed, since none of NVidia’s current chipsets implement the required compression format, which MUST be implemented in hardware for speed. (I’m not sure, but I don’t think AMD’s chips implement it either.) Even if it were implemented in the newest chipsets, Pimax can’t assume we all have brand-new video cards.

As for displaying the compressed data at the full (4K) LCD panel resolution, that is not possible either, because of the way the scaling hardware works. We are limited to 2560x1440 pixels max, unless Pimax implements a custom hardware scaler with the required functionality.
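
To illustrate what that scaler stage does (the real chip presumably uses a better filter than nearest-neighbour, but the limitation is the same: every panel pixel is derived from the 1440p input, so no extra detail can appear):

```python
# Nearest-neighbour upscale as a stand-in for the fixed hardware scaler.
def upscale_nearest(src, dst_w, dst_h):
    """Upscale a 2D list of pixel values to dst_w x dst_h."""
    src_h, src_w = len(src), len(src[0])
    return [
        [src[y * src_h // dst_h][x * src_w // dst_w] for x in range(dst_w)]
        for y in range(dst_h)
    ]

# A tiny 2x2 'frame' stretched to 4x4: no new information, just bigger pixels.
print(upscale_nearest([[1, 2], [3, 4]], 4, 4))
```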

In principle, the 8KX headset could do this, since it omits the hardware scaler, but you’d still need a decompression chip.

What you are requesting is similar to what a wireless link does, but those have compression and decompression chips on each end of the link. Even with that compression technique, an 8K headset will never be able to display native panel resolution (due to the scaler chip).


As far as I know it is not about compressing data at this stage, unless you are talking about streamed foveated video or wireless video senders; in that case it is used in software trials (foveated streaming). Although I imagine things like TPCast use hardware codecs too.

What it does is reduce the rendering resolution in concentric circles: native resolution at the focal point, then incrementally lower resolution (maybe in 3 steps) towards the outer area of your field of view. This is done by the 3D engine/driver, or whatever builds up the rendered frame. This is why it needs eye tracking: it is a dynamic solution that keeps your focal point at native (or supersampled) resolution. The lower-resolution areas contain less pixel data and as a result require less bandwidth than a full-frame render. The bandwidth aspect is also important for power/battery reasons, e.g. going wireless.
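
A rough way to see the savings (the radii and scale factors below are illustrative, not anything Pimax has specified):

```python
# Back-of-the-envelope savings from a 3-step concentric foveation scheme.
import math

def foveated_pixel_fraction(w, h, r_inner, r_mid, s_mid=0.5, s_outer=0.25):
    """Fraction of pixels shaded vs. a full-resolution w x h frame.

    r_inner, r_mid - radii (in pixels) of the full-res disc and half-res ring
    s_mid, s_outer - per-axis resolution scales of the middle ring / periphery
    """
    total = w * h
    inner = min(math.pi * r_inner**2, total)
    mid = min(math.pi * r_mid**2, total) - inner
    outer = total - inner - mid
    shaded = inner + mid * s_mid**2 + outer * s_outer**2
    return shaded / total

print(f"{foveated_pixel_fraction(3840, 2160, 500, 1000):.0%} of full-res pixel work")
```

With those made-up numbers you end up with roughly a fifth of the pixel work (and, in a streamed/wireless case, of the pixel data) of a full-resolution frame.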

That is my take on it.


What I don’t understand is why the scaler can’t just do nothing and let the signal pass straight through. Surely 2560x1440 pixels would be enough for foveated rendering?

I really don’t understand what you are asking tbh.

OP’s question was ‘will the headset be prepared to output at its native resolution with the help of foveated rendering?’

That has been answered with a NO, as foveated rendering only affects the render resolution in specific areas of an HMD’s panel but does NOT affect INPUT and OUTPUT resolution.

If you are asking whether a 1440p-per-eye INPUT and OUTPUT resolution HMD (read that as the Pimax 8K) can profit from foveated rendering (fixed or not) in terms of a reduction of the computational requirements on the PC side, the answer is YES, and YOU ARE IN THE WRONG THREAD.


I def agree. The last thing we want to do is rely on every last game dev to implement this. It pretty much HAS to be at the GPU level.