Eye tracking module and Foveated Rendering

Is there any good information on the eye tracking module and foveated rendering implementation?

Wouldn’t these two technologies allow native 4K in the region of the fovea with just the one DisplayPort, no scaling necessary… therefore making the 8K X unnecessary?

When asking about eye tracking, always remember “there is no such thing as a free lunch.”

Eye tracking is not yet ready for prime time. Only one company so far, FOVE, has an HMD with integrated eye tracking that is affordable for mere mortals; we are talking the $799 range. Foveated rendering will only halve VR rendering requirements (at its best and most efficient; lens-matched shading already does this).

Why won’t foveated rendering be feasible right now? An eye-tracking module needs at least a 500 Hz update rate to track vision well. Even at that high update rate, present eye trackers have much difficulty tracking your eye’s involuntary saccade movements. Even the best eye trackers (we are talking in the tens of thousands of dollars, used in military and university applications) also require predictive algorithms to make up for the hardware shortcomings.
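To put some rough numbers on why the update rate matters: saccades can peak at several hundred degrees per second, and the fovea only covers a couple of degrees of your visual field. A quick back-of-the-envelope sketch (my own illustration; the ~500 deg/s peak velocity and ~2 deg foveal width are assumed ballpark figures, not from any specific tracker spec):

```python
# How far can the eye travel between two tracker samples mid-saccade?
SACCADE_PEAK_DEG_PER_S = 500.0  # assumed peak saccade velocity
FOVEA_DEG = 2.0                 # assumed width of the foveal region

def worst_case_gaze_error_deg(update_rate_hz: float) -> float:
    """Degrees the eye can move between consecutive samples mid-saccade."""
    return SACCADE_PEAK_DEG_PER_S / update_rate_hz

for hz in (60, 120, 500):
    err = worst_case_gaze_error_deg(hz)
    print(f"{hz:>4} Hz -> up to {err:.1f} deg of stale gaze, "
          f"~{err / FOVEA_DEG:.1f}x the foveal width")
```

At 60 Hz the gaze estimate can be off by several times the width of the fovea mid-saccade, which is why trackers lean on prediction rather than raw samples.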

I hear people ask all the time, “what about foveated rendering?” The software and hardware are not yet ready. Even if they were ready, someone like myself, who has nystagmus, would not be able to benefit from eye tracking, nor would people with a whole host of other vision problems.

Now, a foveated display, on the other hand, would be a good idea. The startup Varjo is already working on such an HMD and wants to integrate eye tracking. Most eye trackers today are used for menu navigation. Foveated rendering is a very long way off.

To give you an idea of the difficulty level we are talking about: in our present HMDs we have IMUs to track head movement. Even the best IMU will suffer from drift. In eye tracking you have similar problems.

  1. Every time you blink, the tracker loses sight of your eyes
  2. Your eyes (like your head) make micro movements all the freaking time.

No sensor is perfect, and the more accurate a sensor gets, the more you have to worry about jitter. That’s why you need all the crazy predictive software algorithms. At the present time, you spend as much computationally to get eye tracking working as you would save if foveated rendering and eye tracking worked flawlessly.
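To make the jitter-versus-prediction tradeoff concrete, here is a minimal sketch of the kind of filtering raw gaze data needs (my own toy illustration, not any vendor’s actual algorithm): smooth out sensor jitter while the eye is fixating, but snap immediately when a large jump, likely a saccade, is detected. The `alpha` and threshold values are arbitrary assumptions.

```python
class GazeFilter:
    """Toy gaze filter: exponential smoothing with saccade snap-through."""

    def __init__(self, alpha=0.2, saccade_threshold_deg=1.5):
        self.alpha = alpha                      # smoothing factor for jitter
        self.threshold = saccade_threshold_deg  # jump size treated as a saccade
        self.state = None                       # last filtered (x, y) in degrees

    def update(self, x, y):
        if self.state is None:
            self.state = (x, y)
        else:
            sx, sy = self.state
            jump = ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5
            if jump > self.threshold:
                self.state = (x, y)             # saccade: follow instantly
            else:                               # fixation: average out jitter
                self.state = (sx + self.alpha * (x - sx),
                              sy + self.alpha * (y - sy))
        return self.state
```

Note the built-in tension: heavier smoothing kills jitter but adds lag, and a badly tuned threshold either smears saccades or lets jitter through, which is exactly why real trackers resort to much fancier predictive algorithms.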


Thanks for your insights, mate. I really hope you’re wrong though :slight_smile:

Most of us wouldn’t have thought 8K headsets with a 200-degree FOV would be ready for prime time either. But here we are now; some of us will receive these headsets by the end of the year.

Check out the Eye Tribe video (the company has since been acquired by Oculus) demonstrating foveated rendering: YouTube. This was published almost two years ago. Plenty of other examples are floating around the web. Abrash himself was confident that this tech would be implemented within five years. Even Xunshu said “We will have eye tracking module to enable foveated rendering.” Hence the original question: is there any more information on this?

Personally, I would be more than happy to pay for this eye tracking module with foveated rendering if it could halve the VR rendering requirements. It’s unfortunate that it may not work properly for those with medical eye problems, but for those of us that don’t fall into that category (the majority) it seems like a great OPTIONAL accessory, and one to get pretty excited about. Hell, I might even be able to play DCS @ 90 fps.

Here’s Abrash talking about eye tracking and foveated rendering a year ago: Oculus Connect 3 Opening Keynote: Michael Abrash - YouTube (starts at 15:16). He goes over some of the hurdles and says that he believes it will be solved in five years because of its importance to VR, but also says it’s the biggest risk in his predictions. As far as we know, it’s still an unsolved issue. It’s all well and good to see companies showing off in ideal conditions, but until it’s widely tested or in consumers’ hands, I’ll remain skeptical.


The Eye Tribe tracker only has a 60 Hz update rate at its maximum. When you watch the video, you can see the lag in response time. It’s a proof of concept that Oculus acquired. I wish I were wrong, but the hurdles are real and huge.

The Deepoon E3 has eye tracking too. Supposedly, because there’s not even one app to check it with.

Why is that? If you sync the eye tracking to the display, then a frequency that matches the display’s refresh rate should be good enough. Just before rendering the new frame, the eye position should be sent to the game.
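The idea above, poll the tracker as late as possible and foveate the frame around that sample, can be sketched roughly like this (a hypothetical stand-in loop; `FakeTracker` and the callback are my own placeholders, not a real headset API):

```python
import time

class FakeTracker:
    """Stand-in for an eye tracker that always returns its latest sample."""
    def latest_gaze(self):
        return (0.0, 0.0)  # (x, y) in normalized screen coordinates

def frame_loop(tracker, render_frame, refresh_hz=90, frames=3):
    """Sample gaze once per display frame, just before rendering it."""
    frame_time = 1.0 / refresh_hz
    for _ in range(frames):
        start = time.monotonic()
        gaze = tracker.latest_gaze()   # poll as late as possible
        render_frame(gaze)             # foveate this frame around `gaze`
        # sleep off whatever is left of the frame budget
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, frame_time - elapsed))
```

The catch, as the replies below point out, is that one sample per frame tells you nothing about what the eye did between frames, which is where saccades bite.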

Lol, I love his remark here: Oculus Connect 3 Opening Keynote: Michael Abrash - YouTube He didn’t consider Pimax when he said that :wink: The guy is off by 4 years :slight_smile:


Read what I wrote above. A low update rate is fine for UI navigation, and even for some minor foveated rendering demos, but it’s laggy.

Abrash explains in the video above that current eye tracking cannot focus on just the fovea, but tries to track based on the pupil. Size, shape, movement, face variance, etc. all make eye tracking impractical for prime time.

I don’t want Pimax to give us a half-baked add-on that will not work properly. AND IT WON’T. NOBODY’S DOES. YET.

What he showed in this video is EXACTLY what I see when I use the PIMAX 4K without my glasses for myopia. It’s a shame that in my case there is no reduction in the amount of resources needed. lol

Not sure what you mean. What good would eye tracking at a higher frequency than the panel’s do?

He said 4K by 4K per eye. The Pimax 8K X isn’t out yet, and will only be nearly 4K by 2K per eye. Given that his estimate was square, he wasn’t talking about the entire field of view, so a wide-field-of-view headset would be closer to 8K by 4K to match the resolution he was talking about. It’ll take a while still.


Higher-frequency eye tracking might give a better idea of the region of interest. Due to saccades, your eye isn’t typically pointing exactly where you think it is, and if your eye tracking has only one sample per frame, it might be surprisingly misleading. Perhaps close enough for foveated rendering. However, that’s academic; I’d be amazed if any current display scaler did dynamic region-of-interest scaling, let alone the one in the Pimax 8K.

That said, it’s a pretty darn cool idea. I could build it, but I don’t know who’d buy it.

True. I guess it will indeed take some time before we see that.

Maybe. Then again, maybe not. The fact of the matter is indeed that, like the guy in the video says, you need damn good eye tracking in order for it to work for everybody. I’m pretty sure we’ll need to wait a few years before we have that, so in that aspect I do agree with @VRGIMP27.

Dude, there isn’t a “maybe not” in this situation. Tracking the pupil is not tracking the region of the eye (the fovea) that you need for foveated rendering to work properly. Michael Abrash says in the video that we don’t even know how to track the specific regions of the eye yet. I.e., we can only track the macro movements of the whole eye, very inefficiently, but can’t yet really track micro movements of specific regions like the retina.

It’s not going to be a solution where you slap an add-on onto an HMD. Eye tracking today mostly exists to prepare the software development infrastructure; it exists to give developers a base to work from. It’s not actually useful yet. Eye tracking today is like VR in general was in the ’90s.

Dude, there is a ‘maybe’ because you’re just speculating on how future technology will look. I can imagine that eye tracking doesn’t even need to be 100% perfect: if you can render half of the screen in high quality and the other half in low quality, then that’s going to add up to really important speed gains. The problem is going to be detecting the iris, which varies highly among people. But I can imagine that if you find a good algo to do that, all you need is one photo to determine which part of the screen to render in high quality.
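The “render part of the screen in high quality” idea can be sketched as picking which screen tiles get full resolution around a gaze point. This is a toy illustration of the concept only (my own sketch, not how any shipping renderer or the Pimax pipeline works; the grid size and radius are arbitrary):

```python
def foveation_mask(grid_w, grid_h, gaze_tile_x, gaze_tile_y, radius=1):
    """Return a grid of True (full res) / False (reduced res) per tile."""
    mask = []
    for ty in range(grid_h):
        row = []
        for tx in range(grid_w):
            near = (abs(tx - gaze_tile_x) <= radius and
                    abs(ty - gaze_tile_y) <= radius)
            row.append(near)
        mask.append(row)
    return mask

mask = foveation_mask(8, 8, gaze_tile_x=3, gaze_tile_y=4)
full_res = sum(cell for row in mask for cell in row)
print(f"{full_res} of {8 * 8} tiles shaded at full resolution")
```

Note that an imperfect gaze estimate only shifts which tiles get the full-res treatment; whether the user notices depends on how far the estimate lags behind the real fovea, which is the whole dispute in this thread.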

Also, the better the quality of the camera/photo, the more accurate it will be. So it’s really not a given that you even need such a high frame rate at all. Much better photo quality can make up for a much lower frame rate.

That is not how eye tracking works, dude.

It’s not speculation when you have read the research papers on present eye tracking technology. SMI and Tobii both have eye trackers that cost thousands of dollars, with an update rate of 500 Hz and a camera frame rate of 250 Hz, and to get decent tracking, the participants in various studies have to be in a chin rest, and the trackers need to be precisely calibrated on a person-to-person basis. Even with these specs, gaze tracking is still not 100% nailed.

Therefore, I’m not blowing speculation out of my ass. What I’m saying is verifiable with real, present, very-high-end hardware that no startup could possibly afford.

More sense? I’m telling you something based on what Tobii and SMI presently use at the high end. THEY USE 250 Hz cameras, and the tracker updates at 500 Hz.


120 Hz, it says, which makes much more sense.