Well said. So we will have everything... soon. It all feels abstract by now. Frankly, I was a backer (#942) and I paid an extra 100 euros for eye tracking, yet so far I only have the headset. We have been extremely patient; Pimax could at least be generous with these backers, which would show real respect. It's March 2020 and we are still waiting, and waiting... and that's setting aside the coronavirus situation.
Unless you are an active developer, eye tracking is (AFAIK) not supported by any games, VR applications, or foveated rendering drivers. Even if it were supported for foveated rendering, that is simply not necessary, since current graphics cards are already adequate for existing applications.
NVidia and AMD are never going to implement any features for eye tracking before there is a decent base of eye trackers out there. We wait for them, they wait for us. Let’s break the status quo.
One thing I think many forget is that eye tracking has other uses besides foveated rendering, like menu selection, as demoed by Fove and Tobii.
This was already done before in menus.
On a side note, we don't know whether the OP is into development of apps or not.
I was talking to @SweViver in Gouda a couple of months ago, and we discussed that eye tracking might be able to help bring down distortions. It's very hard to have a single good profile over a wide FOV, but if you know where the user is looking, you can use the most appropriate profile from a range: different profiles for looking left, right, and straight ahead, and possibly a few in between. I'm sure it would not be easy to implement, but the approach seems promising. Just my opinion as a non-expert.
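To make the idea concrete, here is a minimal sketch of gaze-dependent profile selection. Everything here is hypothetical: the profile names, the calibration angles, and the function are illustrative, not part of any real Pimax API.

```python
# Hypothetical sketch: pick the pre-computed lens distortion profile
# calibrated nearest to the user's current horizontal gaze angle.
# Profile names and angles are illustrative assumptions.

PROFILES = {
    -60.0: "far_left",
    -30.0: "left",
      0.0: "center",
     30.0: "right",
     60.0: "far_right",
}

def select_profile(gaze_yaw_deg: float) -> str:
    """Return the profile whose calibration angle is closest to the gaze yaw."""
    nearest_angle = min(PROFILES, key=lambda angle: abs(angle - gaze_yaw_deg))
    return PROFILES[nearest_angle]

print(select_profile(-25.0))  # -> left
print(select_profile(70.0))   # -> far_right
```

A real implementation would more likely blend between neighbouring profiles rather than switch hard, to avoid a visible pop as the gaze crosses a boundary.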
Beyond the technical possibilities, though, I do agree that we need killer features to push it forward. Whack-A-Mole by itself is not going to push the envelope.
I'm sorry, but the buyers of the eye-tracking module are interested in foveated rendering primarily to increase performance as much as possible; I doubt current graphics cards will achieve the native 75 fps of the Pimax 8KX at a high level of detail. Didn't Pimax say the module achieved a minimum of 25% more performance? For Pimax's FOV, I think that is very little.
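Some rough arithmetic shows why 25% can feel like very little. The 50 fps baseline below is my own illustrative assumption; only the 75 Hz target and the 25% figure come from the discussion above.

```python
# Frame-rate arithmetic for the figures quoted above.
# base_fps is an assumed unaided frame rate, purely for illustration.

target_hz = 75      # native refresh rate of the Pimax 8KX
base_fps = 50.0     # assumed frame rate without foveated rendering
speedup = 1.25      # "a minimum of 25% more performance"

boosted_fps = base_fps * speedup
print(boosted_fps)               # 62.5 -> still short of the 75 Hz target
print(boosted_fps >= target_hz)  # False
```

In other words, a GPU that starts well below the panel's refresh rate still misses native frame rate even with the quoted gain.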
For example, Oculus is trying to increase graphics card performance by 67% using eye tracking and AI.
I am a developer and will buy the module (if it helps me) to test eye tracking for use in games, alongside the Vive Eye and the anticipated Oculus Quest 2, which could also use it.
Now that HTC has discontinued the Vive and Vive Pro and will only be making the Vive Eye, all Vives will have eye tracking, as will the Vive Cosmos and Vive Focus. Pimax has one module, and it seems to me that Oculus will start adding this feature to their models, even more so if HTC does.
By the time eye tracking foveated rendering can be implemented, even better graphics hardware will be available. The impact will ultimately be minor compared to the use of eye tracking as an input method.
Also keep in mind foveated rendering imposes some costs of its own.
Neos VR already has eye (and full-body) tracking for the Vive. I'm sure the dev would add support for the Pimax eye tracker once it comes out. In a metaverse-style app, full-body tracking, including the eyes, really adds to the immersion. The dev was recently demonstrating lower-face/mouth tracking as well.
I'm not sure that current graphics hardware has reached the point where dynamic foveated rendering would be acceptable (without ugly artifacts). It does depend on the game and your tolerance for flicker.
I used fixed foveated rendering on conservative for over a month. I never really got used to the flicker at the edges of my vision. Even when looking straight ahead (through the normally rendered area), I would often see noticeable aliasing and small details (just a few pixels wide) flicker in and out of existence around the periphery. I found it distracting and immersion breaking, even though it provided a significant framerate boost.
My personal feeling is that DFR will not reach its full potential until your GPU can fully render the whole visual area in “normal” mode. Then, with additional GPU power, you can crank the super-sampling in the foveated area, to increase the visual fidelity. nVidia has already implemented this for FFR in some games.
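The "spend the leftover budget on foveal supersampling" idea can be sketched as a simple budget calculation. All the timings and the function below are illustrative assumptions, not from any real driver.

```python
# Sketch of the idea above: once the GPU renders the full view within
# the frame budget, remaining headroom buys extra supersampling steps
# in the foveal region. All numbers are illustrative assumptions.

def foveal_supersample_steps(frame_budget_ms: float,
                             full_render_ms: float,
                             cost_per_step_ms: float) -> int:
    """How many extra foveal supersampling steps fit in the spare budget?"""
    headroom = frame_budget_ms - full_render_ms
    if headroom <= 0:
        return 0  # no spare time: render normally everywhere
    return int(headroom // cost_per_step_ms)

# 75 Hz -> ~13.3 ms budget; base render 9 ms; each foveal step 1.5 ms
print(foveal_supersample_steps(1000 / 75, 9.0, 1.5))  # -> 2
```

The point is simply that DFR flips from a performance hack into a quality feature once the base render already fits the budget.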
The cost of foveated rendering falls on the CPU single-thread side. Whereas GPUs continue to improve steadily, CPU single-thread performance has more or less plateaued for years now.
For performance, I would rather have usable SLI support than more foveated rendering hacks.
Eye tracking will prove far more novel for things like menu selection.
Then the foveated rendering software needs to be optimised for multi-core CPUs.
FoveVR already had viable foveated rendering with eye tracking, but it was too far ahead of its time and, unfortunately, too proprietary.
We see this all too often: companies having the vision for the future too soon. The Amiga comes to mind, designed as a user-friendly multimedia PC.
I've tested the eye tracking module, and it does help considerably with rendering; you get several settings, similar to FFR, to adjust the effect. Games that naturally have a low frame rate typically benefit the most.
Of course the system offers more features than just that, but those must be specifically coded for at the moment (like menu selection, etc.).
We should have more information soon regarding production to incorporate into one of the next few weekly updates.