How does supersampling work?

Is anyone here able to explain how supersampling (SS) works?
In Steam, when you increase SS, the number of pixels sent to the headset increases.
How is it possible to send more than the native resolution to the headset? How will the headset treat this?


Supersampling is used to create a better frame for lens warping. The higher resolution allows a better-quality frame to be generated, since the warping stage affects the quality of the original image.

That is how I understand it, anyway. In flat gaming it creates a better frame, reducing the need for anti-aliasing (this helps in VR as well).

Thanks for the reply, Helio. I am very interested in how the headset handles a resolution bigger than its native one.

If you try to send 4K video to a 2K monitor, it will end with a bad result.


It doesn’t. The image that comes out of the warping stage of the software on the host computer is at native resolution (…or less than native, and subject to subsequent upscaling on the HMD, in the case of the 8K).

…but all those extra rendered pixels have gone into that image. Even if the native resolution is lower than the rendered one, the resulting value of each native-res pixel contains parts of several rendered ones, producing a more accurate - less aliased - image than if you had rendered at native resolution to begin with.
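To make that averaging concrete, here is a minimal sketch in Python, for grayscale values only and using a plain box filter (real compositors often use fancier kernels; the function name is just illustrative):

```python
def downsample_box(rendered, factor):
    """Collapse each factor x factor block of rendered pixels into one
    native pixel by averaging (a simple box filter). `rendered` is a
    list of rows of grayscale values."""
    h, w = len(rendered), len(rendered[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [rendered[i + a][j + b]
                     for a in range(factor) for b in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 4x4 render with a hard vertical edge, downsampled 2x to a 2x2 "native" frame:
frame = [[0, 1, 1, 1],
         [0, 1, 1, 1],
         [0, 1, 1, 1],
         [0, 1, 1, 1]]
print(downsample_box(frame, 2))  # [[0.5, 1.0], [0.5, 1.0]]
```

The hard edge becomes an intermediate 0.5 in the native frame - exactly the smoothing you would otherwise need anti-aliasing for.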


@jojon is right: with supersampling/DSR/VSR, the rendered image is shrunk to the output resolution of the display to create a better-quality image without needing anti-aliasing.

The original Xbox One renders at 1440x900 and upscales internally to output 1920x1080. The original PS4 renders at 1600x900(?), then upscales to 1080p.

As for the output signal: yes, a 4K signal will result in no picture.

@jojon, this explanation seems logical.
Thanks for that. So the information from Steam about the resolution sent to the screen is just misleading.

If I understand this correctly, you need more resolution in the content you render than the resolution of the headset to gain anything with SS.


This is an interesting discussion. But is it not the case that you need even more resolution in the content?

Say you view a 180-by-180-degree movie on a headset with a resolution of 2880x1600 and a 90-degree FOV.

To cover the whole video you would need about four headset screens' worth of pixels.

This means that the content needs to be 5760x3200.

If this is correct, you need at least that resolution in the content to gain anything with SS.

Do you agree ?
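For what it's worth, that arithmetic can be sketched as a back-of-the-envelope calculation. This assumes uniform pixels-per-degree across the FOV, which real lenses do not give you, and the function name is just illustrative:

```python
def source_resolution_needed(panel_w, panel_h, hfov, vfov, span_h, span_v):
    """Scale the panel resolution by the ratio of the video's angular
    span to the headset's FOV. Assumes uniform pixels-per-degree,
    which real lens distortion does not give you."""
    return (round(panel_w * span_h / hfov), round(panel_h * span_v / vfov))

# 180x180-degree video, 2880x1600 panel, 90x90-degree FOV:
print(source_resolution_needed(2880, 1600, 90, 90, 180, 180))  # (5760, 3200)
```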

To correct for lens distortion, your VR headset predistorts the image, so that when you look at it through the lens, the predistortion cancels out the lens distortion and you see the correct image:

But as you can see, in the predistorted photo some areas are ‘blown up’. Now, if you only have the frame at its native size, you can only blow it up by duplicating (or interpolating) pixels. However, if you have a higher-resolution photo than the original, you don’t need to ‘invent’ data; you can just use the original data. This is what causes the higher quality.

So it’s pretty much the difference between zooming in on a low-resolution photo versus a high-resolution photo. With the low-resolution photo it will duplicate/interpolate; with the high-resolution photo it will just show original data.
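A rough sketch of that sampling step, using a made-up radial distortion model (the coefficient `k` and the nearest-neighbour lookup are purely illustrative, not any real HMD's distortion profile). The higher the resolution of `src`, the more often the distorted lookup lands on genuinely distinct data rather than a duplicated neighbour:

```python
def barrel_sample(src, out_w, out_h, k=0.2):
    """For each output pixel, apply a simple radial distortion to its
    normalized coordinate and sample the source there (nearest
    neighbour). `src` is a list of rows of grayscale values."""
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            # normalized coordinates in [-1, 1]
            nx = 2 * x / (out_w - 1) - 1
            ny = 2 * y / (out_h - 1) - 1
            r2 = nx * nx + ny * ny
            # push samples outward with radius (a toy barrel model)
            dx, dy = nx * (1 + k * r2), ny * (1 + k * r2)
            sx = min(src_w - 1, max(0, round((dx + 1) / 2 * (src_w - 1))))
            sy = min(src_h - 1, max(0, round((dy + 1) / 2 * (src_h - 1))))
            row.append(src[sy][sx])
        out.append(row)
    return out

# Sampling a supersampled 8x8 source down to a 4x4 predistorted output:
out = barrel_sample([[1] * 8 for _ in range(8)], 4, 4)
```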


I completely understand that the bulging predistortion would draw on a richer dataset with a higher-res source image. I just wonder whether it actually is being rendered that way. It would be much easier and cheaper to just bulge a flat image, duplicate a pixel here and there, and then let the lens distortion put it back where it belongs.

In theory, yes, the extra data will travel to your eyes, but at such a small level of detail, would you notice it compared to a supersampled pixel?


I do not really understand how to connect this to SS.
If you look at VR films, most of them are already warped
and are warped back by the lens.

I wouldn’t say that. SteamVR does not say “resolution sent to screen”; it says “The current setting renders each eye at nnnn x nnnn”. That is to say, the resolution at which the game renders: the “source” image from which Pimax’s distortion shader (Pimax has its own, and bypasses SteamVR’s) subsequently takes samples when rendering the native-resolution output image.

I’ll throw in two caveats, here:

  • Many games completely ignore any supersampling settings you have in SteamVR and substitute their own, without necessarily exposing them to the player, which is why so very many Unreal Engine 4 based titles look absolutely terrible until you mess around with their .ini files.

  • With Pimax, specifically, somehow, somewhere along the line, numbers and reality seem to become detached: e.g. sometimes a high SteamVR SS setting results in the expected performance hit without yielding the quality it should buy, especially in large FOV with the “parallel projection” compatibility fallback mode. No idea what happens here - only Pimax knows, and they are not telling.

As is the case for all supersampling - that is the fundamental concept, after all. :slight_smile:


…which is exactly what happens.

…which is what happens when you render without the base supersampling that matches render resolution to the minimum needed to make the most of the capabilities of the screen, lens distortion taken into account; you get fewer effective pixels-per-degree in the centre of the view through the lens than the HMD can show. To see whether you’d notice: just compare 100% SS to, let’s say, 80%. (I have no idea what the properties of the 8k/5k lenses are, although I suppose somebody more capable could figure them out from the distortion parameters sjef extracted from the binaries. :7)

The unfortunate thing with all this is that games can (currently) only easily supersample the entire screen, but the lens transformation is not linear at all, and only the middle part needs the supersampling; in the periphery you could instead go the opposite way and subsample, without any noticeable loss of quality. We’ll see what happens with this in the future; NVidia gave us lens-matched shading and other techniques for dealing with this issue already with their 10x0 series, but almost nobody ever used them – let’s see whether newer initiatives get better adoption. :7


@jojon Thanks for the lesson. I am also from Sweden and will for sure hire you if I run into problems.

I know your explanation is based on games and not on recorded material.
I think you agree that this is in some ways a different ball game.
Can you comment on my thoughts about the gain of SS with today's available content?


As long as Nvidia keeps their techniques proprietary, they are not likely to see strong adoption. Nvidia has started learning this with their recent move to support AMD's open FreeSync.

Depends on a few things…

First of all: How are we presenting the recorded material in the headset?

One could just push it directly to the HMD screens, as if it were just another monitor (or one of the non-VR movie-viewer HMDs that have been on the market all along, while we were longing for VR :7), but in VR we usually want to be able to look around, which entails mapping the video onto a plane of some description in a realtime graphics engine - which puts us right back with the game rendering… :7

If one does go the just-a-monitor route and has predistorted video, like you mentioned, the distortion applied in the editing of that video would have to be tailored for the specific HMD one aims to view it in.

As for the resolution of such video: ideally, to use the HMD to its fullest without file-size bloat, it could be matched so that its pixels-per-degree (PPD) resolution is 1:1 with the HMD where the HMD's is highest (typically right in front of you - with the 8k/5k the view axes through the lenses would actually be a bit wall-eyed, or so I reason). I don’t think anybody has measured or calculated the actual resolution in PPD of the 8k and/or 5k+ yet. Unfortunately, such a video file would still suffer a lot of bloat, since with the distortion rendered into it, it would contain those extremely stretched oversampled areas to the sides, which you would not need for, say, a cylindrical video that could be distorted on-the-fly.
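As a rough sketch of that PPD matching (average PPD only - as noted, the real distribution is non-uniform, and the panel/FOV numbers below are hypothetical, since nobody has measured the actual figures):

```python
def avg_ppd(panel_pixels, fov_deg):
    """Average pixels-per-degree across the FOV. Real lenses are
    non-uniform: centre PPD is higher than this average."""
    return panel_pixels / fov_deg

def video_width_for(target_ppd, video_span_deg):
    """Horizontal resolution an on-the-fly-distorted (e.g. cylindrical)
    video needs to match the target PPD over its angular span."""
    return round(target_ppd * video_span_deg)

# Hypothetical per-eye panel: 2560 pixels horizontally over a 170-degree FOV.
target = avg_ppd(2560, 170)           # ~15.06 PPD on average
print(video_width_for(target, 180))   # 2711
```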

For video that is not clamped to the screen (instead projected in-virtual-world, without that virtual projection screen being clamped to your physical one), and not predistorted, you could actually use video of slightly higher resolution than is native to the HMD, and actually perceive some of that higher detail, due to how your brain samples detail over time; minute head movements let you pick up more detail than if you were locked in place, like old Alex DeLarge. :stuck_out_tongue:

…or something along those lines… I'm probably mostly rambling. :slight_smile:


I don’t think I’ll be holding my breath while NVidia learns. :stuck_out_tongue:
