No, because games don’t render to the panel. They render to a frame buffer that the compositor warps to the headset’s specs. The subpixel mapping would go in that layer, modifying which positions are read from the rendered buffer. It already does that per colour, because it compensates for the lens’s chromatic aberration.
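As a minimal sketch of that layer (the distortion polynomial, coefficients and function names here are made up purely for illustration), each colour channel is fetched from its own warped position in the rendered buffer, so folding a subpixel offset into those same lookups is essentially free:

```python
import numpy as np

def barrel(uv, k):
    """Toy radial distortion: scale centred UV coordinates by 1 + k*r^2."""
    r2 = np.sum(uv * uv, axis=-1, keepdims=True)
    return uv * (1.0 + k * r2)

def warp_to_panel(rendered, k_per_channel=(0.22, 0.20, 0.18)):
    """Hypothetical compositor pass: each output pixel samples the rendered
    buffer at a per-colour distorted position (nearest neighbour to keep the
    sketch short). A subpixel offset would be folded into these same lookups."""
    h, w, _ = rendered.shape
    ys, xs = np.mgrid[0:h, 0:w]
    uv = np.stack([(xs + 0.5) / w - 0.5, (ys + 0.5) / h - 0.5], axis=-1)
    out = np.empty_like(rendered)
    for c, k in enumerate(k_per_channel):  # a different warp per colour = CA correction
        src = barrel(uv, k)
        sx = np.clip(((src[..., 0] + 0.5) * w).astype(int), 0, w - 1)
        sy = np.clip(((src[..., 1] + 0.5) * h).astype(int), 0, h - 1)
        out[..., c] = rendered[sy, sx, c]
    return out
```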
You’re talking about modifying the positions of pixels after they were rendered. I’m talking about rendering them at the correct positions in the first place.
What you described can be accomplished by only altering the eye matrix, though. It’s not as big a win as actual lens matched rendering (it still doesn’t get rid of the lens warping stage), because GPUs are still set up for rectilinear rendering. The Nvidia 20 series oddly has at least three possible approaches for that, one of which may be possible without game changes… let’s hope one of them works for someone other than StarVR this time. … Unless I missed a level and you meant adjusting per-pixel rendering vectors continuously, in which case that approach only works with the ray tracing mode, which by Nvidia’s demos is far too slow in practice. I would not at all be surprised to learn Nvidia developed ray tracing hardware that’s incapable of handling CA.
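To make “only altering the eye matrix” concrete, here’s a minimal Python/numpy sketch (the matrix layout follows the usual OpenGL convention; the resolution and offsets are just example numbers): nudging the off-centre terms of the projection matrix displaces the whole rendered image by a constant sub-pixel amount, independent of depth, so it can be done per colour channel without touching the game’s shaders.

```python
import numpy as np

def frustum(l, r, b, t, near, far):
    """Standard OpenGL-style asymmetric projection (eye) matrix."""
    return np.array([
        [2 * near / (r - l), 0.0,                (r + l) / (r - l),            0.0],
        [0.0,                2 * near / (t - b), (t + b) / (t - b),            0.0],
        [0.0,                0.0,               -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0,                0.0,               -1.0,                          0.0],
    ])

def shift_projection(proj, dx_px, dy_px, width, height):
    """Displace the rendered image by a subpixel amount by nudging the
    off-centre terms; the shift is constant in NDC (2/width per pixel)
    and independent of depth, so no shader changes are needed."""
    shifted = proj.copy()
    shifted[0, 2] -= 2.0 * dx_px / width
    shifted[1, 2] -= 2.0 * dy_px / height
    return shifted

# e.g. render the red channel a third of a pixel to the left of green:
eye = frustum(-1.1, 0.9, -1.0, 1.0, 0.1, 1000.0)
eye_red = shift_projection(eye, -1.0 / 3.0, 0.0, 2560, 1440)
```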
You’re forgetting the game only renders so far, and then the headset driver re-renders the frame.
To be fair, that is only one approach (the one used with a compositor layer). Some VR frameworks allow or even force passing the whole lens warping task to the main program (such as OSVR before the compositor was added).
I would imagine that a lack of cooperation stems from the same kind of BS prevalent in any interdisciplinary work.
From what I read, Pentile modifies the image at the display interface using an algorithm?
Pentile, being a brand name, naturally has a bunch of different variants that do different things. The one we’re most familiar with is what’s in many phones and VR headsets: the GB/GR setup. The panels probably speak the usual RGB protocols (over e.g. MIPI DPI) and just take the average of a pair of pixels, applying it to the pixel that has that colour. Or it may blur things, as I suspect the 8K panel does; it has the same sort of distribution for red and blue. It’s also possible they have modes that don’t waste cable bandwidth like that, but typically that’s only YCbCr 4:2:2, which is supported by GPUs but rarely used outside of TVs.

Samsung happily markets Pentile as better matching our eyes, even though in VR use it’s at least 4x too low resolution to be playing those tricks (and the panel in the 8K does even worse without that excuse). One of Samsung’s 4K TVs in particular I’ve observed doing none of these smart things, just taking all the colour from one pixel and discarding it from the others in a 2x2 pixel group (and that’s the defect I returned it for).
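If it helps, here’s a toy model in Python of the pair-averaging behaviour I’m describing (Samsung’s actual algorithm isn’t public, so this only illustrates the resolution loss, not their implementation):

```python
import numpy as np

def pentile_downsample(rgb):
    """Toy model of GB/GR Pentile subsampling (not Samsung's real algorithm):
    green keeps full resolution, while red and blue are averaged over each
    horizontal pixel pair, so a pair shares one red and one blue value."""
    h, w, _ = rgb.shape
    even = w - (w % 2)
    out = rgb.astype(float).copy()
    for c in (0, 2):  # red and blue channels
        pairs = rgb[:, :even, c].reshape(h, -1, 2).astype(float)
        avg = pairs.mean(axis=2)           # one value per 2-pixel group
        out[:, 0:even:2, c] = avg          # both pixels of the pair show the shared value
        out[:, 1:even:2, c] = avg
    return out
```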
That would not work because lens distortion compensation would wreck that. It must be the last stage before sending to the headset.
Well, the game renders its scene, which is the big portion, while the driver only does a little work.
@LoneTech you are definitely the smartest person I have met anywhere ever, Pimax should definitely hire you
My thoughts exactly
(20 blah blahs)
Carmack talked about subpixel rendering in his Oculus Connect 5 (OC5) keynote.
I’m not sure I’ve found all of it. I did find a section where he speaks of CA correction on mobile, and how he ended up using an overlay manager to do the channel scaling and blending. It’s precisely the sort of thing you get when you communicate with your hardware people, because it wasn’t documented or even intended. It has limitations, such as only scaling linearly.

You can’t do the same in the headset without a fairly large buffer and delay (at least as many scanlines as the curve ever bends; I recall a headset maker who tried, but not their name), but a desktop GPU can do more with its caches and multichannel memory. For live low-latency video the streaming scaler approach becomes more interesting, more of an FPGA task, and I’ve planned bits of that but not implemented it. This overall bend is a large part of why the cost of subpixel displacement is negligible; it’s just fine tuning. It’s precisely the sort of thing you want before the cable, not after.
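For reference, the linear channel scaling he describes comes down to something like this sketch (my own reconstruction in Python, not the actual overlay-manager code; the scale factors are made-up examples): red and blue are resampled about the lens centre by constant factors relative to green, which only matches lateral CA to first order.

```python
import numpy as np

def scale_channel(img, channel, scale, centre=None):
    """Resample one colour channel, scaled linearly about the lens centre.
    Nearest-neighbour sampling keeps the sketch short; a real compositor
    would filter and use a full per-channel distortion curve, not a constant."""
    h, w, _ = img.shape
    cy, cx = centre if centre is not None else (h / 2.0, w / 2.0)
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(((ys - cy) / scale + cy).astype(int), 0, h - 1)
    sx = np.clip(((xs - cx) / scale + cx).astype(int), 0, w - 1)
    out = img.copy()
    out[..., channel] = img[sy, sx, channel]
    return out

def linear_ca_correction(img, red_scale=1.01, blue_scale=0.99):
    """First-order CA correction: scale red and blue by constant factors
    (example values only) while green stays untouched as the reference."""
    return scale_channel(scale_channel(img, 0, red_scale), 2, blue_scale)
```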
PS: I’ve found the subpixel discussion, at the end of the same keynote. It’s interesting; I need to relisten and mull it over. He mentioned things like how it wouldn’t really give a 3x resolution boost without CA compensation that accurate, and there was some more relevant talk about filtering.