Is there another use for it in VR?
Foveated rendering and a next-generation GPU could be the ticket to performance for the 8KX, but I haven’t heard much about any implementation of it.
I always thought a good use would be as an aiming method in a superhero game, laser vision for example. Another use is better avatar puppeteering: in a multiplayer game you could now see where the other person is looking. HUD control too; in the Serious Sam games the HUD floats and hangs in space based on where you face, and it could be locked to your gaze instead. I’m sure devs will think of something to do with it now that so many are going to have it.
I think the main use is foveated rendering. This is the reason why I am very sceptical about this stretch goal feature. I think it will end up being a gimmick.
In FPS games with targeting by looking, it is already a pain. I doubt it is any better with eye tracking. I don’t even want that. I just played Dead Effect 2 again yesterday. You do not want to fire where you are looking. Sometimes you even fire in two different directions. (Dual wielding with motion controllers is so damn nice!)
One could build a UI around eye tracking, but I doubt it is really helpful either. How do you select the button you are looking at?
It has to be a trigger button, but then why not select it with the motion controller’s laser pointer in the first place?
I can see eye-tracked UIs being helpful in simulators where you use a HOTAS (look at a button and lock it in with a button press). Other than that I’d rather use my motion controllers.
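To make the “look at it, confirm with a button” idea concrete, here’s a rough sketch. Everything in it is invented for illustration (the gaze vector, the cone angle, the trigger flag); a real headset SDK would supply the gaze ray and the controller state:

```cpp
// Minimal sketch of "look at a button, confirm with a trigger" selection.
// All values here are hypothetical stand-ins for SDK and engine input.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Angle in degrees between the gaze direction and the direction to a target.
float angleToTargetDeg(Vec3 gaze, Vec3 toTarget) {
    float dot  = gaze.x * toTarget.x + gaze.y * toTarget.y + gaze.z * toTarget.z;
    float lenA = std::sqrt(gaze.x * gaze.x + gaze.y * gaze.y + gaze.z * gaze.z);
    float lenB = std::sqrt(toTarget.x * toTarget.x + toTarget.y * toTarget.y +
                           toTarget.z * toTarget.z);
    return std::acos(dot / (lenA * lenB)) * 180.0f / 3.14159265f;
}

int main() {
    Vec3 gaze      = {0.05f, 0.0f, 1.0f};  // pretend eye-tracker output
    Vec3 buttonDir = {0.0f,  0.0f, 1.0f};  // direction from eye to a UI button
    bool triggerPressed = true;            // pretend controller input

    // Select only when the button is within a small cone around the gaze
    // AND the user confirms with the trigger.
    const float selectConeDeg = 5.0f;  // threshold is a guess
    if (triggerPressed && angleToTargetDeg(gaze, buttonDir) < selectConeDeg) {
        std::printf("button selected\n");
    }
}
```

The confirm button is the important part: it avoids the “Midas touch” problem where everything you merely glance at gets activated.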
Of course there is. Eye tracking is an enabling tech which is needed if you want to implement foveated rendering. The question is not if, but when. Maybe we will not get foveated rendering on Pimax, or the implementation will be weak, but the only way to actually start it (and improve on it) is to start with eye tracking.
Apart from that, eye tracking may serve other purposes too.
Yep. Even if we don’t get driver-side foveated rendering, it may be hacked in by some genius user down the line. Individual applications can still use the tracking data to do foveated rendering application-side on a case-by-case basis.
I haven’t seen support for eye tracking outside of Rise of the Tomb Raider and FSX, both of which use Tobii eye tracking. Seems a bit niche, but it could be interesting for flight-sim HUD interaction, although FSX just uses it to pan to where you look, which would be weird on top of head tracking!
If it’s there then I’m sure devs can do interesting things with it.
A couple of things come to mind.
No, you are correct. It can be used to drive accurate depth of field, but I think you might need to track both eyes for that; I’m not sure.
There might be games or applications that, if eye tracking is supported in a standard way, interact with you differently depending on where you look, e.g. kind of like an “on focus” event…
Like if you have a “sexy” type adventure game where your eyes wander… or a poker game where the AI interprets your body language and where you look as part of its calculations (instead of directly cheat-peeking at your cards)… or perhaps an FPS shooter where you hit a target without looking and they give you a no-look bonus… etc. We are still a long, long way from full immersion like that though.
This video shows off how avatars will look with eye tracking. It could be huge for social VR or, as others mentioned, something like poker. Imagine an RPG where the characters can actually look you in the eye and react to your emotions.
Tracking where the user looks can have other benefits outside foveated rendering.
Faster menu selection: you just look at something and press a button on your wand to click.
Object highlighting and info popups where the user is looking.
More realistic avatars: you know if they are looking at you or at something else.
Dynamic meshes could tessellate where the user is looking, or is about to look, for higher-quality visuals. Everywhere the user looks could be in Epic quality and everything outside that in low quality, just like software LODs today but applied within the same plane, e.g. AA, texture size, screen-space effects. You could get huge performance from that alone, and it would be quite simple for Unity/Unreal to add as a feature at the engine level (rough sketch after this list). This is slightly different from foveated rendering, which reduces the resolution (and thus bandwidth) outside the user’s gaze.
Effects that react to your gaze, e.g. a puppy running about trying to get your attention; no matter where you look, it runs to that spot while staring up at you.
Simulated effects for study or education, e.g. medical and optical studies that can demonstrate to students what macular degeneration actually looks like for a person. You can somewhat do this already with a headset using head movement, but to be accurate it should follow eye movement, not head movement. Games could use the same tech for effects too, e.g. “WHATEVER YOU DO, DO NOT LOOK AT THE MEDUSA!” …user turns to stone.
Partially replaces the mouse.
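Since the gaze-driven LOD point above is the most mechanical one, here is the promised rough sketch of how the selection could work. All of it is invented for illustration: the gaze vector, the object directions, and the angle thresholds would really come from the eye-tracking SDK and the engine:

```cpp
// Toy sketch of gaze-driven LOD selection: highest detail in the foveal
// cone, progressively less detail further out. All numbers are guesses.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Angle in degrees between two direction vectors.
float angleDeg(Vec3 a, Vec3 b) {
    float dot  = a.x * b.x + a.y * b.y + a.z * b.z;
    float lenA = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    float lenB = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
    return std::acos(dot / (lenA * lenB)) * 180.0f / 3.14159265f;
}

int main() {
    Vec3 gaze = {0.0f, 0.0f, 1.0f};  // pretend eye-tracker output
    // Objects described by their direction from the eye.
    Vec3 objects[] = {{0.02f, 0.0f, 1.0f}, {0.2f, 0.0f, 1.0f}, {1.0f, 0.0f, 0.2f}};

    for (Vec3 obj : objects) {
        float a = angleDeg(gaze, obj);
        // LOD 0 in the foveal cone, LOD 1 mid-periphery, LOD 2 far out.
        int lod = (a < 5.0f) ? 0 : (a < 20.0f) ? 1 : 2;  // thresholds are guesses
        std::printf("%.1f deg from gaze -> LOD %d\n", a, lod);
    }
}
```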
I don’t see any magical driver-level foveated rendering solutions out in the wild, so I don’t expect Pimax to do it either. IMHO, driver-level foveated rendering will require either game-by-game implementation or a common implementation of some standard, and there isn’t one right now, but OpenXR will probably have some type of support for it.
The best way we can probably universally use eye tracking right now is as an additional pointing device, and that is mostly all.
Fove has driver-level support. Though I don’t expect Pimax to do that, it would be nice down the line. If Unity and Unreal add foveated rendering to their main branches, then you can expect to see it in 90 percent of the games currently running in VR, which are indie games built on those engines. So if people want to get the full benefits of this tech, those are the companies to pester.
I won’t believe that Fove has a driver-level implementation unless I see one. They might have their own interface implemented, but there is no industry-wide standard; Unreal, Unity, and whoever else might implement something, but it wouldn’t work on any HMD with any tracking solution, because there is no unifying standard for any of it. Every eye tracker has its own interface and behaviour quirks.
You say “so many”, but let’s keep it in perspective: it’s only going to be 4,000 out of a million or so PCVR devices on the market. We’ll see.
Foveated rendering appears to be mostly experimental at this point, not a finished technology.
Personally, I find dynamic depth of field the most interesting application at this point. Objects at the z-depth you are looking at would be crisp, while nearer and/or farther objects would be blurred. This is similar to how your eyes normally focus, so this feature might improve your immersion in a game. Unfortunately, I think it would have to be implemented in each game or game engine.
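A small sketch of how that could work, not anyone’s actual implementation: pretend a raycast along the gaze ray returned the focal distance, then blur other depths by their distance from it in diopters (1/metres), which is roughly how optical defocus scales:

```cpp
// Toy sketch of gaze-driven depth of field. The focal depth stands in for
// the result of an engine raycast along the gaze ray.
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    const float focusDepth = 4.0f;  // pretend gaze raycast hit, in metres

    for (float depth : {1.0f, 2.0f, 4.0f, 10.0f, 50.0f}) {
        // Sharp at the focal depth, blurrier as the diopter difference grows.
        float blur = std::fabs(1.0f / depth - 1.0f / focusDepth);
        std::printf("object at %5.1f m -> blur strength %.3f\n", depth, blur);
    }
}
```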
Actually there are some nice uses, like UI components and navigation. For instance, hold a button to open a scroll wheel, and looking at an item for a second selects it, just as a quick example (see the sketch below). The only issue is that it needs to be more prevalent across the industry first, as that’s a change at the developer level.
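Here’s a quick sketch of that dwell-to-select idea. The per-frame gaze hits and the frame time are faked; a real version would get both from the engine and the eye-tracking SDK:

```cpp
// Toy sketch of dwell-based selection: looking at an item for long enough
// selects it. Frame times and hit-testing are faked for illustration.
#include <cstdio>

int main() {
    const float dwellToSelect = 1.0f;  // seconds of continuous gaze required
    const float dt = 0.1f;             // fake 100 ms frames
    float dwell = 0.0f;

    // Pretend per-frame samples: is the gaze on the item this frame?
    bool gazeOnItem[] = {true, true, true, false, true, true, true, true,
                         true, true, true, true, true, true, true};

    for (bool onItem : gazeOnItem) {
        dwell = onItem ? dwell + dt : 0.0f;  // reset if the gaze wanders off
        if (dwell >= dwellToSelect) {
            std::printf("item selected after %.1f s of dwell\n", dwell);
            break;
        }
    }
}
```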
The other thing to consider is how eye tracking could work with, or change, artificial or “free” locomotion. This would be extremely challenging though, as humans are prone to glancing around during movement. But generally we do somewhat move in conjunction with our head rotation and eye focus.
Then there are a lot of applications in terms of data analytics. Giving devs info about where your eyes typically are on the screen, and other general user interactions, can help them fix and enhance elements of their games and of VR in general.
In short, yes; look at the Tobii eye tracker for examples of uses. For VR, however, people tend to see eye tracking as part of the solution for foveated rendering (FR), and since there are no commercial solutions right now, we can only speculate how effective that use case will be. So I will share my understanding of FR and the future of HMDs.

As you may know, some games already use FR without eye tracking (Batman VR), so in principle you can implement something like lens-matched FR: around the edges of the screen the lenses already blur the content, so why render it at full res? But the savings are not significant enough to allow running a 2K or 4K per-eye HMD on current high-end GPUs. In theory, good FR with eye tracking can reduce GPU load by 2-4 times.

My assumption is that GPU manufacturers are actually not very happy about that; they don’t want VR’s hunger for GPU power to stop. Take Nvidia for example: the GTX 1080 Ti is already very close to being overkill for 4K 2D gaming at close to max settings (60 fps). But for VR, which is much more demanding even at 1K per eye, there are still games that can benefit from extra SS applied beyond the performance of a 1080 Ti.
My 2 cents: GPU manufacturers will force a different approach. They will focus on implementing eye tracking not to down-sample (reduce the res of) the edges and the out-of-focus picture, but only to increase detail and res (SS) in the part of the image in the user’s focus (gaze). That way the minimum-spec GPU for a next-gen HMD will probably be GTX 1080 level, and there will be a reason to buy next-gen cards (Volta, Navi) to allow more detail and SS in the area you are looking at.
TL;DR: GPU makers want VR to drive sales of high-end GPUs in the future. They may support FR, but to apply more detail to the image where it matters, not to reduce res and GPU load compared to current-gen HMDs.
Maybe I am wrong; AMD already showed their research on TFR. But I don’t expect that in 2nd-gen HMDs. Maybe by 3rd-gen HMDs (8K per eye, a Pimax 16K?) GPU makers will accept that they can’t make a GPU fast enough for it.
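To make the “supersample where you look” idea concrete, here is a toy sketch: split one eye’s image into tiles and pick a render-resolution scale per tile from its angular distance to the gaze point. The FOV, tile count, and scale thresholds are all invented numbers, and a real implementation would work per-viewport or per-tile on the GPU rather than in a CPU loop like this:

```cpp
// Toy sketch of gaze-centred variable render resolution across one eye.
#include <cmath>
#include <cstdio>

int main() {
    const int   tiles   = 8;        // tiles across one eye's image
    const float fovDeg  = 100.0f;   // horizontal FOV of the eye buffer
    const float gazeDeg = -10.0f;   // gaze angle relative to image centre

    for (int i = 0; i < tiles; ++i) {
        // Centre angle of this tile relative to the image centre.
        float tileDeg  = (i + 0.5f) / tiles * fovDeg - fovDeg / 2.0f;
        float fromGaze = std::fabs(tileDeg - gazeDeg);
        // SS near the gaze, native quality mid-periphery, reduced far out.
        float scale = (fromGaze < 10.0f) ? 2.0f
                    : (fromGaze < 30.0f) ? 1.0f : 0.5f;  // guesses
        std::printf("tile %d at %+6.1f deg: render scale %.1fx\n",
                    i, tileDeg, scale);
    }
}
```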
Eye mapping on the in-game avatar. In VR, as opposed to a regular FPS, being able to see where an opponent is looking could be an asset.
Of course, game servers would need to be able to filter out folks without eye-track mapping.
The dead game Son of Nor.