Nope, that’s the whole point: this approach reduces the number of rendered pixels in areas where the loss will be hardly visible to the eye. There are many factors to take into account (focal sweet spot, geometric distortion, chromatic aberration, glare, etc.) which make it basically impossible to discern all that detail in the peripheral areas anyway.
But most of all, it’s due to the pre-warp process, which compensates for the lens distortion. That process results in heavy oversampling at the periphery, which is totally unnecessary for the reasons stated above. Basically, all those pixels go to waste.
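To put rough numbers on that, here’s a toy sketch (the distortion polynomial and its coefficients are made up for illustration, not taken from any real lens profile) of how the pre-warp stretches sample density toward the edge:

```python
# Toy estimate of how much the pre-warp oversamples the periphery.
# The polynomial is invented for illustration; real HMDs use
# calibrated per-lens coefficients.

def prewarp_radius(r, k1=0.22, k2=0.24):
    """Map a normalized display radius (0 = center) to the radius
    sampled in the pre-warped render target."""
    return r * (1 + k1 * r**2 + k2 * r**4)

def oversampling_factor(r, dr=1e-4):
    """Approximate area oversampling at display radius r: radial
    stretch (numerical derivative) times tangential stretch."""
    radial = (prewarp_radius(r + dr) - prewarp_radius(r - dr)) / (2 * dr)
    tangential = prewarp_radius(r) / r
    return radial * tangential

for r in (0.1, 0.5, 0.9):
    print(f"display radius {r}: ~{oversampling_factor(r):.2f}x samples per pixel")
```

With these made-up coefficients the edge ends up shaded roughly 3x per display pixel, which is exactly the waste being described.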
Here are a few images which illustrate this, along with the results of Nvidia’s LMS and MRS techniques (the closest to what we’re talking about), in UE4 in this case:
Yes, this is a good idea. Mild or non-invasive fixed foveated rendering would be a big performance benefit to add on top of spacewarp/ASW, once they get that working as well.
Supersampling just tricks the game engine into thinking you have an X×Y resolution, so it renders at that size; then a post-process effect scales the result back down to match your actual resolution, and the result is better anti-aliasing. It is a simple brute-force approach. Foveated rendering is more complex, as it is not just a single resolution change (as far as the engine sees it) but multiple zones within the same frame.
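In code, the whole trick is just this (a minimal sketch, assuming a 2x factor and a simple box-filter downscale; real compositors use fancier filters):

```python
# Minimal sketch of supersampling: render big, average back down.
import numpy as np

def render_scene(width, height):
    """Stand-in for the engine's render pass (just noise here)."""
    return np.random.rand(height, width, 3)

native_w, native_h, ss = 1280, 1440, 2   # assumed per-eye res and SS factor

# The engine is told the target is ss times larger, so it renders that...
hi_res = render_scene(native_w * ss, native_h * ss)

# ...and a post-process averages each ss x ss block back down; that
# averaging is where the improved anti-aliasing comes from.
lo_res = hi_res.reshape(native_h, ss, native_w, ss, 3).mean(axis=(1, 3))
print(lo_res.shape)   # (1440, 1280, 3)
```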
Until the OS or game engines have native support for foveated rendering (usually by following an agreed standard), it will need to be linked in on a per-app basis.
It’s about speed. What you suggest here is rendering a frame twice using supersampling techniques. Even though the second pass is less than a full high-res frame, doing it in two passes is a waste. Foveated rendering can also use more than two resolutions.
Foveated rendering gets around that at the engine level (via an SDK that describes the process), so it can build up a single frame (Oculus demonstrate building it up from tiles) with the different-resolution sections rendered on the fly. This wastes no time re-rendering areas that need to be at a higher resolution.
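As a rough picture of that tiled, single-pass idea (my own sketch of the concept, not any vendor’s actual SDK code), the frame is split into tiles and each tile is shaded at a rate chosen by its distance from the lens center, all within one pass:

```python
# Sketch of single-pass tiled foveation: every tile is shaded once,
# at its own rate, so nothing gets rendered twice. Tile size, zone
# radii, and rates are all assumptions for illustration.
import math

TILE = 64              # tile size in pixels
W, H = 1280, 1440      # per-eye resolution
CX, CY = W / 2, H / 2

def shading_rate(tx, ty):
    """Per-axis fraction of full resolution for tile (tx, ty)."""
    dx = (tx * TILE + TILE / 2 - CX) / (W / 2)
    dy = (ty * TILE + TILE / 2 - CY) / (H / 2)
    r = math.hypot(dx, dy)
    if r < 0.5:
        return 1.0     # inner zone: full res
    if r < 0.8:
        return 0.5     # mid zone: half res
    return 0.25        # periphery: quarter res

# Walk the tiles once; the rate applies per axis, hence the square.
# (Partial edge tiles are ignored to keep the sketch short.)
shaded = sum(shading_rate(tx, ty) ** 2 * TILE * TILE
             for ty in range(H // TILE) for tx in range(W // TILE))
print(f"shaded ~{shaded:,.0f} of {W * H:,} pixels ({shaded / (W * H):.0%})")
```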
I am speaking in layman’s terms here; the process is more complicated at the engine level, which is why it needs an SDK at the moment. It is still an emerging technology that the likes of Nvidia and others are working hard to optimize.
As an example, let’s assume that the sweet spot is half the lens (in both X and Y). If you render the outer area at 50% resolution and the inner area at 100%, you’ve saved rendering half the total pixels, and more if you implement a central occluder. The math sounds weird, but that’s because of linear vs. area measurements.
Let’s make the math easy: assume the screen is 200x200 pixels, which is 40,000 pixels. The central area would be 100x100, which is 10,000 pixels. If you render the whole frame at 1/2 res in X and Y, that’s another 100x100 “stretched” pixels, for a total of 20,000 pixels (or only 20,000-(50x50)=17,500 with an occluder that masks out the central part of the low-res pass). If the outer area is already blurry, you probably won’t notice the lower res very much.
Some implementations draw various areas at 100%, 50%, 25%, 12.5%, and 6.25%, depending on distance from the central area.
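A quick script to sanity-check those numbers, plus a multi-ring variant (the equal-area split between zones is just an assumption to keep the example simple):

```python
# Check the 200x200 example: one full-frame half-res pass plus a
# full-res 100x100 center overlay.
low_pass = 100 * 100                 # whole frame at 1/2 res in X and Y
center = 100 * 100                   # central quarter redone at full res
print(low_pass + center)             # 20000, vs 40000 at native res
print(low_pass + center - 50 * 50)   # 17500 with an occluder masking
                                     # the center of the low-res pass

# Multi-ring variant: per-axis rates for concentric zones, assuming
# (for simplicity) each zone covers an equal share of the frame.
rates = [1.0, 0.5, 0.25, 0.125, 0.0625]
share = 1 / len(rates)
print(sum(share * r * r for r in rates))   # ~0.27 of the original cost
```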
If done well, this can double your framerate (or more) with little noticeable image degradation. It’s also simpler than implementing full foveated rendering, since no eye tracking is involved.
Firstly, I’m sure this would be a feature which could be disabled.
Let me put this in context: It would greatly improve your framerate. Given a choice, would you prefer a game to be unplayable (due to low FPS) or would you accept a small loss of visual fidelity to enable you to play that game?
And it would be a small loss. The outer areas of your view are less important, and those areas are over-sampled due to the barrel-distortion compensation anyway. All you would need to do to see that area clearly is turn your head slightly.
It probably won’t be a loss at all. I already had the feeling that the Oculus Go was covering up the weird edges of its (otherwise nice) lenses.
Pimax could kill two birds with one stone!
By introducing fixed foveated rendering, they might make the distortion less obvious (fight fire with fire) and gain performance, at least in the area of the unwanted distortion. Rendering that part at high resolution won’t make it look good anyway, so why not at least save performance there?