@Sean.Huang I understand that it is a bit late for a change like this but would it be a feasible option?
I have already PM-ed @Sean.Huang about this.
But we don’t have DP 1.4; the ANX7538 isn’t in production until next year. We can do 7680x1080 at 80Hz though. Also, the performance change is a little more complicated; we’ll still want to render at a normal aspect ratio, just alter the remapping in the lens compensation stage so its texture samples match the subpixels.
Pimax could use a technique similar to the checkerboard rendering Sony uses on the PS4 Pro to reach 4K. Everything stays at 16:9 scale, but they hit 60 fps, and the effect is very good. Maybe we could do something like that, but going down instead: render 1080p interlaced, so we don’t need to rescale:
1 row rendered
1 row omitted
1 row rendered
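To make the idea above concrete, here is a minimal Python sketch (names hypothetical) of the interlacing pattern: each frame renders only half the rows (even rows on even frames, odd rows on odd frames) and keeps the other half from the previous frame, halving fill cost without any rescaling step.

```python
# Hypothetical sketch: alternate-field interlaced rendering.
# Rows are plain Python lists standing in for scanlines.

def update_interlaced(frame_buffer, new_rows, frame_index):
    """frame_buffer: full-height image (list of rows) kept across frames.
    new_rows: freshly rendered rows covering only this frame's field."""
    start = frame_index % 2          # 0 = even field, 1 = odd field
    for i, row in enumerate(new_rows):
        frame_buffer[start + 2 * i] = row
    return frame_buffer

buf = [[0] for _ in range(4)]            # 4-row frame buffer
update_interlaced(buf, [[1], [2]], 0)    # even field -> rows 0 and 2
update_interlaced(buf, [[3], [4]], 1)    # odd field  -> rows 1 and 3
assert buf == [[1], [3], [2], [4]]
```

Each frame only fills half the scanlines, so per-frame fragment work is roughly halved; the trade-off, as with TV interlacing, is combing artifacts on fast motion.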
Fair enough. I wondered, but couldn’t recall for certain.
That would be needed if we want to match the subpixels, but is it really necessary? Is half a pixel (one original pixel) of imprecision worth twice the performance cost?
Bravo man, well done. @anon74848233 Have engineers looked into this thread’s discussion?
We don’t need to match their placement precisely; in fact, we can’t, because the lens compensation means the relation varies continuously. As I think about it, the basic conclusion is that this panel seems like it would benefit from the 45 degree trick only if the lens weren’t in place.
So there are two steps we need to get done: Remove the inappropriate scalers, and map the subpixels in the lens warp shader. The first reduces aliasing by having the displacement be a maximum of one pixel off, instead of spreading both ways after a blurring scaling step. The latter removes even that aliasing by telling the GPU where the subpixel really ends up. This doesn’t require anything special about the render buffer.
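The second step, mapping subpixels in the lens warp shader, can be sketched roughly like this (a Python stand-in for shader code; the offsets and the `warp()` distortion function are hypothetical placeholders, not Pimax’s actual values). Instead of warping one UV per output pixel, each color channel is warped from its own subpixel position, so the texture sample lands where that subpixel actually sits on the panel.

```python
# Hypothetical per-subpixel sampling in the lens-warp pass.

SUBPIXEL_OFFSETS = {   # assumed offsets in pixel units, panel-specific
    "r": (-0.25, 0.0),
    "g": (0.0, 0.0),
    "b": (0.25, 0.0),
}

def warp(u, v):
    """Placeholder for the real radial lens-compensation mapping."""
    r2 = (u - 0.5) ** 2 + (v - 0.5) ** 2
    k = 1.0 + 0.2 * r2
    return 0.5 + (u - 0.5) * k, 0.5 + (v - 0.5) * k

def subpixel_uvs(x, y, width, height):
    """Return one warped UV per color channel for output pixel (x, y)."""
    uvs = {}
    for channel, (dx, dy) in SUBPIXEL_OFFSETS.items():
        u = (x + 0.5 + dx) / width
        v = (y + 0.5 + dy) / height
        uvs[channel] = warp(u, v)
    return uvs
```

In a real fragment shader this would mean three texture fetches per output pixel (one per channel) instead of one, which is exactly the “twice the performance cost” concern raised earlier in the thread.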
When the subpixel sampling points to the right place, the render buffer resolution should be able to go lower without observable loss of quality. That is, it would match the dot count instead of the output aspect ratio; and that puts it back near 2560x1440, removing the expectation that you need to supersample more for an equivalent picture on 8K vs 5K+ (which it still wasn’t).
I still haven’t calculated how much area is misplaced by approximating the hexagons as rectangles. My gut feeling is that it’s minor compared to missing out on half the pixels. Perhaps that is where the rotation comes back into play… sorry, I can’t always think equally well.
I agree. If we could establish direct control over the (sub)pixels in the panel (which is still the primary concern about the panel’s hardware caps) we could render at arbitrary quality (to get a leeway to adjust the performance requirements on gfx card end) and up/down-sample to the target res on the gfx card…
On the other hand, I still consider correct subpixel mapping additional complexity which might not be needed in the end, but I am not able to visualize or simulate the visual impact. So I am not going to argue further; I’ll just say that if it were me doing the design, I would simply render into the 3840x1080 display (with a 1:2 pixel aspect ratio) and see how it pans out first, then eventually try to optimize the pre-lens warp for subpixel matching to get better color distribution.
The hexagonal model would be fine if the colors were somehow equally distributed over the area, either in equilateral triangles or even hexagons (e.g. by duplicating RGB subpixels, if such a design exists); then a calculation and mapping fitted to that configuration would make sense. But the observed geometry is rectangular (I am still eager to see a more detailed photo of the panel to confirm it), where one pixel is formed (or better said, could be formed; we do not know yet) out of two components (squares), each holding either G or R/B.
If our ultimate goal is subpixel rendering (for better color distribution, or aliasing), then I guess we need to work with the rectangular topology from the start.
I agree though that the rendering itself does not need to target a 1:2 pixel ratio and can be done on square pixels at arbitrary resolution; the panel geometry only matters for the pre-lens warp transformation (compositor) and its shader. But as I wrote before, I would start with the simple approach and see how it turns out first.
The simple approach is always the best avenue, if viable.
All of this discussion about optimizing the shader for the subpixel layout makes me wish we had lenses optimized for the normal FOV, which could have less magnification and use more of the panel.
Hardware adjustment seems simpler, not to mention a PPD bump.
Many of the suggestions here would require games to be specifically optimized for the Pimax 8K, or require changes in hardware. I don’t think the hardware on the 8K is flexible enough for major adjustments.
However, I think there is one thing Pimax could do: lower the rendered resolution to 50% for every SteamVR game with a PiTool option. As far as I understand OpenVR (SteamVR), the FOV, aspect ratio, and rendered resolution can be controlled independently of each other. If that is true, it would be possible to distribute the samples in a non-uniform way, e.g. the space between vertical samples could be bigger than between horizontal samples. This would allow rendering the game at a resolution of 3840x1080 without changing the aspect ratio of the rendered scene. The Pimax driver would then receive this image and copy every line to get a 3840x2160 image. This would then be warped and downscaled (not on the 8K X) and sent to the HMD.
It doesn’t improve quality, but it does save performance, which can be used to increase quality by other means.
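A rough Python sketch of this driver-side step (the FOV numbers are made-up placeholders, not the 8K’s real values): render at 3840x1080 while keeping the full FOV, so vertical samples end up spaced further apart than horizontal ones, then copy every line back up to 3840x2160 before the lens warp.

```python
# Hypothetical sketch of the PiTool-option idea above.

def sample_spacing(fov_w, fov_h, res_w, res_h):
    """Angular spacing between adjacent samples, in degrees."""
    return fov_w / res_w, fov_h / res_h

def line_double(img):
    """Duplicate every row: 1080 rows in, 2160 rows out.
    Rows are plain lists standing in for scanlines."""
    return [row for r in img for row in (r, list(r))]

# Assumed per-eye FOV of 170 x 110 degrees, purely for illustration.
dx, dy = sample_spacing(170.0, 110.0, 3840, 1080)
assert dy > dx                 # vertical samples are spaced wider apart
assert len(line_double([[0]] * 1080)) == 2160
```

The scene’s aspect ratio never changes, only the sample density per axis; the line doubling just restores the panel’s native line count before the warp pass.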
None of these suggestions require game changes; they’re all compositor changes (and possibly altering the eye matrix, part of the tracking information). All the hardware settings are about turning off fancy functionality that’s counterproductive in this specific configuration. The trickiest part in here is adjusting the fragment shader in the lens warp pass (which is Pimax code).
Is it too late to try these suggestions on the 8K?? Can this be remedied easily in the field with a board swap down the road and some software tweaks??
I want to stick with the 8k if there is hope that this could be implemented in the future somehow…
The actual ship with the actual 8Ks on it is about to sail, or is already on its way.
Go ahead and rain on my parade… hahaha
Maybe they will fix it for the 8kx…??
Anything software-based has the potential to be implemented.
Are you a consultant? Can you dive in with code too?
Just wondering because if Pimax had a beta driver program that was open source then that would be fantastic. Not sure if third parties (Bridge chip) would allow firmware mods though.
Yes, but I’m not under any contract with Pimax (besides backing the Kickstarter), it’s doubtful they’d choose to, and I’m likely not the best person for this particular job. I’ve never understood why hardware manufacturers insist on spinning their own sub-par drivers rather than cooperating.
Well, the suggestion to render only certain colors for certain pixels definitely requires game changes.
I can relate to that with my 2013 Mac Pro and hacked drivers to get it using the latest AMD drivers in Bootcamp. Which are sooooo much better than the never never Apple drivers.
Also, you sound like you are a good person for the job, at least in an R&D capacity.