Supersampling vs AA and resolution

Been playing around with the 8K all afternoon now and have a query / suggestion.

I understand that Super Sampling is a form of Anti Aliasing in VR whereby the software renders at a higher pixel density than a given setting and then downsamples, to deliver visual niceties given a headset's per-eye display resolution limitations. It was introduced for the Vive and Rift because they had low per-eye screen resolution.
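
As I understand it, the core idea looks something like this (a minimal sketch, assuming a plain box-filter downsample; the actual compositor resampling is fancier):

```python
import numpy as np

# A minimal sketch of supersampling: render at k times the per-axis
# resolution, then average each k*k block of samples down to one
# physical pixel (a plain box filter; real compositors use fancier
# resampling, so this is just the idea).
def supersample_downscale(rendered: np.ndarray, k: int) -> np.ndarray:
    h, w = rendered.shape[0] // k, rendered.shape[1] // k
    return rendered[:h * k, :w * k].reshape(h, k, w, k).mean(axis=(1, 3))

# A hard edge rendered at 2x comes out as a soft (antialiased) edge at
# native resolution, which is the AA effect SS buys you.
hi_res = np.zeros((4, 8))
hi_res[:, 3:] = 1.0  # vertical edge that doesn't align with native pixels
print(supersample_downscale(hi_res, 2))
# [[0.  0.5 1.  1. ]
#  [0.  0.5 1.  1. ]]
```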

The query I have is: how can I set the per-eye resolution of my Pimax 8K so I get a more traditional screen resolution? For example, say I want to run 1080p per eye or 1440p per eye, utilising PiTool in combination with SteamVR?

Would it not be better to run the per-eye resolution as a factor of the native screen resolution, like we do with normal monitors, and then apply in-game AA to smooth things out?

Why use SS to upscale pixel density to 4K per eye, taxing the hell out of one's GPU hardware, only to have it pumped up the line to the headset at only 2K for the upscaler chip to then bang it back up to 4K again? Would it not be more efficient to render at the resolution the upscaler works from, given the limitations of the headset?

Why not use traditional AA instead of SS to smooth things out, like we have been doing for years with normal screens? Those routines are quite efficient nowadays.

I would just love to understand, as at the moment I am trying to get a per-eye res of 1080p happening (I am using a 1070 Max-Q GPU) and then apply faster traditional AA to keep the frames up. I wonder if this would be more efficient than SS, but I cannot seem to get SteamVR to give me the per-eye resolution I require. It is always out a little, which then brings artifacts into the image.

I wonder if Pimax could design the PiTool Render Quality settings to give us these corresponding per-eye resolutions, with the SteamVR SS setting at 100%:

1080, with a factor of 0.5
2K, with a factor of 1
4K, with a factor of 2

Keeping it simple, this gives the per-eye res as a factor of the screens' native per-eye resolution at large FOV. Based on these baseline settings, PiTool could then apply the per-eye res (with SteamVR at 100%) to correspond with the lower FOV settings available in PiTool, keeping the resolutions fed to the HMD screens a factor of the native resolution of the HMD screens and reducing artifacts as a result.
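
To illustrate, here is one possible reading of that scheme in code. This is entirely hypothetical, not actual PiTool behaviour, and it assumes the factor scales total pixel count (the way the SteamVR SS slider does) relative to the 8K's 2560x1440 per-eye input signal:

```python
import math

# Hypothetical illustration of the scheme above; this is NOT actual
# PiTool behaviour. Assumption: the factor scales total pixel count per
# eye (like the SteamVR SS slider does) relative to the 8K's 2560x1440
# per-eye input signal, so the per-axis scale is sqrt(factor).
BASELINE = (2560, 1440)

def per_eye_render_target(factor: float) -> tuple[int, int]:
    s = math.sqrt(factor)
    return round(BASELINE[0] * s), round(BASELINE[1] * s)

for f in (0.5, 1.0, 2.0):
    print(f, per_eye_render_target(f))
# 0.5 -> (1810, 1018): near 1080p but not exactly on it, which is the
#        "always out a little" artifact problem
# 1.0 -> (2560, 1440)
# 2.0 -> (3620, 2036): exact 4K (3840x2160) would need a factor of 2.25
```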

Just some thoughts on trying to make sense of the per-eye resolution settings we have for HMDs, and on getting the best image we can at set resolutions while minimising artifacts. That is something I find hard with the current PiTool / SteamVR setup.

4 Likes

BTW, I found the blurriness stated by others here is dependent on the vertical positioning of the HMD on one's face.

Also, in Large FOV the outer blurriness seems to be less if the HMD is pressed up against my face, but at the correct vertical placement.

It works very well and is a blast flying in IL-2 BoX, except when I crash (keyboard flying atm) I close my eyes so as to not be too distressed LOL.

2 Likes

VR has a very different set of issues compared to a 2D (flat) monitor, due to the lenses. A lens doesn’t just magnify; it introduces a visual artifact called “barrel distortion”. This means that the image must be “pre-distorted” to counteract the artifact, using the inverse “pincushion” distortion. The pre-distorted image looks sort of like a 4-pointed star and needs to be larger than what will be displayed (otherwise the image corners will be lost), and part of it will be discarded and not sent to the screen. (It’s complicated, see link below.)
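
To make that concrete, here is a minimal sketch of radial pre-distortion, assuming a simple polynomial lens model with made-up coefficients (real HMDs use calibrated per-lens profiles):

```python
# Made-up polynomial radial model; real HMDs use calibrated per-lens
# profiles, so the coefficients here are purely illustrative.
K1, K2 = 0.22, 0.24

def predistort(x: float, y: float) -> tuple[float, float]:
    """Push a sample outward (pincushion) to counter the lens' barrel pull."""
    r2 = x * x + y * y
    scale = 1.0 + K1 * r2 + K2 * r2 * r2
    return x * scale, y * scale

# A point near the centre barely moves, while a corner point is pushed
# out past the panel edge: hence the 4-pointed-star shape and the need
# to render a larger image than the panel shows.
for px, py in [(0.1, 0.1), (0.7, 0.7)]:
    print((px, py), "->", predistort(px, py))
```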

Also, traditional antialiasing does not work for common graphics engines which use “deferred rendering”. That technique is popular, since it allows for low-overhead lighting and shadows. For example, Elite Dangerous uses deferred rendering and super-sampling is the only way to get “decent” antialiasing (even on a flat screen). Because of this, I run ED at 7680x4320 res and the video driver shrinks it to fit my 4K (3840x2160) monitor.
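
For what it's worth, that particular combination divides evenly, which keeps the shrink clean:

```python
# 7680x4320 rendered, shrunk to a 3840x2160 monitor: the scale factor
# is an exact integer, so every display pixel gets the same number of
# rendered samples (assuming a simple box filter; the driver's actual
# downscale filter may differ).
render, native = (7680, 4320), (3840, 2160)
sx, sy = render[0] / native[0], render[1] / native[1]
print(sx, sy, "-> samples per display pixel:", sx * sy)  # 2.0 2.0 -> 4.0
```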

Distortion (optics) - Wikipedia

The only reason for the shrink and upscaling (on the 8K) is that one video cable doesn’t have the “bandwidth” to carry 2x 3840x2160 res @ 90 Hz refresh. That’s why the 8KX will need 2 video cables. At some point in the future, this will become possible, perhaps using image compression.
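
A quick back-of-the-envelope check on that (pixel data only, ignoring blanking overhead; the link rates are the usual published effective payload figures):

```python
# Rough uncompressed-bandwidth check for 2x 3840x2160 @ 90 Hz at 24
# bits per pixel (pixel data only; real links also carry blanking
# overhead, so the true requirement is a bit higher).
pixels_per_frame = 2 * 3840 * 2160
bits_per_second = pixels_per_frame * 90 * 24
print(f"required: {bits_per_second / 1e9:.1f} Gbit/s")  # ~35.8 Gbit/s

# Effective payload rates of the common links of the time:
print("DisplayPort 1.4 (HBR3):", 25.92, "Gbit/s")  # not enough
print("HDMI 2.0:", 14.4, "Gbit/s")                 # not even close
```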

Bandwidth (computing) - Wikipedia

1 Like

Yeah, it’s a combination of panel tech and cable interface tech needing updating for native 4K-per-eye HMDs.

They can do it with 2 cables, but by mid-year there should be DP and HDMI solutions for a single cable capable of the bandwidth. My only concern is getting a clean subset of the native resolution rendered and put onto the HMD panels. If you don’t do it by a factor of 2, then you introduce artifacts that make for a very noisy image. This would apply to HMDs as well.
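
Here is a toy illustration of the factor-of-2 point, using nearest-neighbour scaling as a stand-in (the 8K's scaler chip is certainly smarter than this, so take it as the idea only):

```python
import numpy as np

# With nearest-neighbour upscaling (a deliberate simplification), a
# 1.5x factor duplicates some source pixels twice and others once,
# while 2.0x treats every source pixel identically. The uneven
# duplication is one source of the "noisy image" artifacts.
def duplication_counts(src_width: int, scale: float) -> np.ndarray:
    dst = np.arange(int(src_width * scale))
    src_index = (dst / scale).astype(int)  # nearest-neighbour mapping
    return np.bincount(src_index, minlength=src_width)

print(duplication_counts(8, 2.0))  # [2 2 2 2 2 2 2 2] -> uniform
print(duplication_counts(8, 1.5))  # [2 1 2 1 2 1 2 1] -> uneven
```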

I have read up on the need to counter lens distortion; hence Steam uses a 1.4 multiplier in their render to accommodate the barrel distortion of the Vive headsets.

I wonder what the factor is for Pimax, and if there is a way the 5K and 8K can be set up to utilise render resolutions at panel-friendly values whilst allowing for lens distortion.
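
For reference, the Vive numbers work out like this (the Pimax factor isn't published as far as I know, so any number there would be a guess):

```python
# The Vive numbers: a 1.4x per-axis multiplier over the 1080x1200
# per-eye panel gives SteamVR's default render target.
panel = (1080, 1200)
print(tuple(round(d * 1.4) for d in panel))  # (1512, 1680)

# The Pimax per-axis factor could be estimated empirically by reading
# the render target size SteamVR reports at 100% SS and dividing by
# the panel (or input) resolution.
```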

Anyway, just some thoughts and been having a blast with my 8K

Testing continues. :+1:

I think that the factor is larger for Pimax, due to the larger FOV and the fact that the barrel distortion gets larger as the distance to the center of the lens increases.

“Panel friendly” res doesn’t really exist for VR, due to the non-linear pre-distortion “pincushion” transformation (to counteract the lens’ barrel distortion). For VR, you want the largest res that your system can handle, to get the highest visual fidelity. (Of course, at some point the law of diminishing returns kicks in.)

Thinking on this more, the distortion adjustment should be included in the res options given to the user. Keeping it simple at our end, the image res and distortion adjustments need to be kept within a ratio of 2 to the native panel resolutions, to minimise artifacts when viewed by the end user.

According to my (limited) understanding of VR transforms, that’s not really possible, “due to the non-linear pre-distortion “pincushion” transformation” I mentioned earlier. What that means is that some areas of the image might have a pixel coverage ratio of 2 or less, while other areas might have a ratio of 8 or higher.
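
To put numbers on that, here is the same kind of made-up polynomial model as the pre-distortion sketch earlier; the local magnification is just the derivative of the radial mapping:

```python
# Same made-up polynomial model as the pre-distortion sketch earlier.
# The local magnification at radius r is the derivative d(r')/dr of
# the mapping r' = r * (1 + K1*r^2 + K2*r^4); it grows toward the lens
# edge, so no single coverage ratio fits the whole image.
K1, K2 = 0.22, 0.24  # hypothetical coefficients, as before

def radial_magnification(r: float) -> float:
    # derivative of r + K1*r**3 + K2*r**5 with respect to r
    return 1.0 + 3.0 * K1 * r**2 + 5.0 * K2 * r**4

for r in (0.0, 0.5, 1.0):
    print(f"r={r}: local magnification {radial_magnification(r):.2f}")
# r=0.0: 1.00 (centre), r=0.5: 1.24, r=1.0: 2.86 (edge)
```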

Hmm, interesting, and quite an issue to work around. Maybe we will have to suffer in the short term until screens can be developed that are designed with the distortions built into their pixel matrices.

1 Like

Nvidia has already implemented a way of dynamically adjusting the pixel coverage ratio, to handle this non-linearity. It’s called “Variable Rate Shading”.

Variable Rate Shading: Get Smarter About Shading, Too - Nvidia’s Turing Architecture Explored: Inside the GeForce RTX 2080 | Tom's Hardware
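
Conceptually (this is not Nvidia's actual API, just a sketch of the idea with made-up tile sizes and thresholds), a shading-rate map might be built like this:

```python
import numpy as np

# A conceptual sketch (not Nvidia's API) of a variable-rate shading
# map: assign each screen tile a coarser shading rate the further it
# sits from the lens centre, where the display ends up with fewer
# effective pixels anyway. Rates mimic VRS's 1x1 / 2x2 / 4x4 steps.
def shading_rate_map(tiles_x: int, tiles_y: int) -> np.ndarray:
    ys, xs = np.mgrid[0:tiles_y, 0:tiles_x]
    cx, cy = (tiles_x - 1) / 2, (tiles_y - 1) / 2
    r = np.hypot((xs - cx) / cx, (ys - cy) / cy)  # normalized radius
    rates = np.full(r.shape, 1)  # 1x1: full rate in the centre
    rates[r > 0.6] = 2           # 2x2: quarter rate mid-periphery
    rates[r > 1.0] = 4           # 4x4: sixteenth rate in the corners
    return rates

print(shading_rate_map(12, 8))
```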

I was having a brain fart about it yesterday: the lens distortion profile could be held in firmware on the upscaler chip in the HMD. Also, with 4K and soon-to-be-released 8K screens, it wouldn’t be such an issue.

It could be tweaked via firmware updates, but it keeps things simple on the PC end.

The problem with any Nvidia solution is that it will be proprietary (customer lock-in) and expensive.

1 Like