@bacon, @PimaxVR:
This would probably have to be done on the graphics card and in the 3D engine, but we could skip rendering the data corresponding to the blind spots, the parts of the image that the eyes do not see, and it could be a worthwhile improvement to foveated rendering. It’s a pity that the Pimax 8K does not have foveated rendering without the need for extra modules.
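To make the idea concrete, here is a rough sketch of how an engine might test whether a pixel direction falls in the blind spot. The center position, extent, and function name are my own assumptions from textbook anatomy values, not from any shipping engine or the OpenXR API:

```python
# Sketch only: the blind spot is modeled as an ellipse centered roughly
# 15 deg temporal and 1.5 deg below the gaze direction, about 5 x 7 deg
# in extent. All numbers are approximate textbook values, not spec data.
BLIND_SPOT_CENTER = (15.0, -1.5)   # (horizontal, vertical) degrees from gaze
BLIND_SPOT_RADII = (2.5, 3.5)      # ellipse semi-axes in degrees

def in_blind_spot(h_deg, v_deg, gaze=(0.0, 0.0)):
    """Return True if the view direction (degrees off the optical axis)
    lies inside the blind-spot ellipse for the current gaze direction."""
    dx = (h_deg - gaze[0] - BLIND_SPOT_CENTER[0]) / BLIND_SPOT_RADII[0]
    dy = (v_deg - gaze[1] - BLIND_SPOT_CENTER[1]) / BLIND_SPOT_RADII[1]
    return dx * dx + dy * dy <= 1.0
```

Pixels for which this returns True could in principle be skipped or rendered at minimal quality, per eye, using the eye tracker's gaze estimate.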
I think it would be interesting for the OpenXR group to study it.
Great idea, but afaik it’s not necessary, nor would it give much improvement: the blind spot lies outside the fovea area and therefore would already be rendered at lower resolution with foveated rendering.
As you can see in this graphic, the greatest need for high resolution is around +/- 10°; the blind spot sits between 15° and 20°, already outside the fovea area, which is interesting.
I could imagine that at a later point in optimizing foveated rendering this will be considered, and some performance improvement is to be expected, though it should still be a lot less than 5%, or even 1%.
I have to correct myself: what I said is right for color perception (the cones graph).
For actual sharpness perception the area is bigger (the rods graph): roughly -15° to +20°, of which the blind spot takes up about 5° (it’s an ellipse-shaped area). IMO that still makes it negligible when comparing areas, but it could give maybe up to a 5% performance improvement.
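A quick back-of-envelope check of that estimate, comparing a ~5 x 7 degree blind-spot ellipse to the -15° to +20° sharpness region mentioned above (treating the region as roughly 35° x 35°, which is my simplification):

```python
import math

# Blind-spot ellipse with semi-axes of 2.5 and 3.5 degrees (assumed values)
blind_spot_area = math.pi * 2.5 * 3.5   # ~27.5 deg^2

# High-sharpness region approximated as a 35 x 35 degree square (assumed)
sharp_area = 35 * 35                     # 1225 deg^2

fraction = blind_spot_area / sharp_area  # ~0.022, i.e. about 2%
```

So the saved area comes out around 2% of the high-sharpness region, which is consistent with "maybe up to 5%" as an upper bound.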
Guess we can be excited for when foveated rendering is actually used and optimized; we could see interesting results.
Yes, this could be the right way to go. But I actually don’t know how much of the rendering time color takes up; I suspect it’s really low. It would save bandwidth and a bit of processing power, though.
The signal is YCbCr, not RGB. It consists of:
Y: Luminance (black, white, contrast, brightness)
Cb: Blue-Yellow chrominance*
Cr: Red-Green chrominance*
*i.e. a mix between those two colors; 128 is the neutral value, so for Cb, values below 128 lean toward yellow and values above 128 lean toward blue.
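For reference, the standard full-range BT.601 conversion from 8-bit RGB to YCbCr looks like this (the function name is mine, the coefficients are the standard ones):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 conversion, 8-bit values. 128 is the neutral
    chroma value; Cb below 128 leans yellow, above 128 leans blue."""
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)
```

Note that for any gray value (r == g == b) both chroma channels come out at exactly 128, i.e. zero chroma information.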
So color takes up to 2/3 of the information in terms of bandwidth, and with foveated rendering you could just “drop it” in the periphery, which could be a lot of data. JPEG compression already drops 50% of it without a noticeable impact on how images are perceived, because our color perception is not as fine as our luminance perception.
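The 50% figure is easy to verify: with 4:2:0 chroma subsampling (as JPEG and most video codecs use), Cb and Cr are stored at quarter resolution. A quick calculation with an assumed per-eye resolution:

```python
# Bytes per frame at 8 bits per sample.
w, h = 3840, 2160                          # assumed per-eye resolution

full_444 = w * h * 3                       # Y + Cb + Cr all at full resolution
sub_420 = w * h + 2 * (w // 2) * (h // 2)  # Y full, Cb/Cr at quarter resolution

savings = 1 - sub_420 / full_444           # 0.5, i.e. half the raw bandwidth
```

Dropping chroma entirely in the periphery would push the savings toward the full 2/3 for those regions.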