There are three things that come into play with regard to the final image quality:
a) optics
b) rendered resolution
c) image transformation (pre-lens warp)
ad a) As long as you can set up the headset so that you can see individual pixels (or the display pixel matrix), you are fine as far as the lenses are concerned.
ad b) The rendered resolution determines how much information can potentially end up on the display, and therefore how good the final image can be. In other words, if the app renders at resolution (X, Y), the final image can only go up to this resolution (information-wise, not pixel-wise), no matter what happens in between.
ad c) There are not many ways to transform the rendered image at resolution (X, Y) into the display image at resolution (W, Z), because there is only one transformation which is (mathematically) correct if you want the final image to look right (when observed through the lenses).
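To give an idea of what such a transformation does, here is a minimal Python sketch of a radial pre-lens warp. To be clear, this is not Pimax's actual code: the polynomial distortion model and the k1/k2 coefficients are made up for illustration, and the real compositor uses its own calibrated lens model.

```python
import numpy as np

def warp_to_display(src, disp_w, disp_h, k1=0.22, k2=0.10):
    """Resample a rendered image (H, W, C) into the pre-lens display image.
    Hypothetical radial model: r_src = r_disp * (1 + k1*r^2 + k2*r^4)."""
    src_h, src_w = src.shape[:2]
    # Normalized display coordinates in [-1, 1], centered on the lens axis.
    ys, xs = np.mgrid[0:disp_h, 0:disp_w]
    nx = (xs + 0.5) / disp_w * 2.0 - 1.0
    ny = (ys + 0.5) / disp_h * 2.0 - 1.0
    r2 = nx * nx + ny * ny
    # Barrel pre-distortion (scale >= 1) to counteract the lens pincushion.
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    sx = (nx * scale + 1.0) / 2.0 * src_w - 0.5
    sy = (ny * scale + 1.0) / 2.0 * src_h - 0.5
    # Crude nearest-neighbor sampling; a good compositor would at least
    # filter bilinearly here (this is exactly the point discussed below).
    ix = np.clip(np.rint(sx).astype(int), 0, src_w - 1)
    iy = np.clip(np.rint(sy).astype(int), 0, src_h - 1)
    return src[iy, ix]
```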
So theoretically, there should not be any difference in the perceived image quality (in the headset) for images rendered at the same resolution. However, there are recurring reports from people who claim that setting "unbalanced" PiTool Render Quality and SteamVR supersampling gives better results (in terms of image quality), even though the rendering resolution remains the same.
How is that possible?
The only explanation I found plausible is the one given by @Gared here (https://community.openmr.ai/t/best-quality-settings-for-steamvr-ive-found/22614/34).
This would mean that the Pimax warping function (which is basically a non-linear sampling function) works better on higher-res images. In other words, it probably does not use a very good sampling algorithm (so it needs a higher-res input to achieve better results).
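As a toy illustration of that point (my own sketch, nothing to do with Pimax's code), here is how much a crude nearest-neighbor sampler gains from a higher-res input, measured against an analytic "scene"; a better filter would shrink the same error without needing the extra resolution:

```python
import numpy as np

def pattern(x):
    # 1-D analytic "scene", x in [0, 1]
    return np.sin(40.0 * x)

def nearest_sample(img, x):
    # Crude sampler: snap the requested position to the nearest pixel.
    idx = np.clip(np.rint(x * (img.size - 1)).astype(int), 0, img.size - 1)
    return img[idx]

xs_out = np.linspace(0.0, 1.0, 1000)   # "display" sample positions
truth = pattern(xs_out)

for n in (480, 960, 1920):             # rendered resolutions
    rendered = pattern(np.linspace(0.0, 1.0, n))
    err = np.abs(nearest_sample(rendered, xs_out) - truth).mean()
    print(f"input res {n:4d}: mean sampling error {err:.4f}")
# The error keeps dropping with input resolution: a poor sampler
# effectively "pays" for its quality with supersampling.
```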
I guess only Pimax knows how good/bad their algorithm is, but the truth is, there seems to be enough evidence that @Gared was right, and you need to factor that into your settings.
So far I have not answered (or even touched) the original question, so here it goes. In general (not talking about VR now, but about a normal 2D image), there is a certain limit beyond which increasing supersampling does not improve the overall quality. My gut feeling is that it is about 2.0x (by dimension), or 4.0x (by pixel count, which is what SteamVR uses), while something like 2.0x (by pixel count) can already improve things. The same applies to a VR headset (assuming you are not limited by the lenses), except that this supersampling "limit" applies not to the pixel resolution, but to the PPD.
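One practical note on the two conventions: SteamVR expresses supersampling by pixel count, so converting to a per-dimension factor is just a square root:

```python
import math

def per_axis(ss_pixel_count):
    """Convert a SteamVR-style pixel-count multiplier to a per-axis one."""
    return math.sqrt(ss_pixel_count)

print(per_axis(4.0))   # 4.0x pixels = 2.0x per dimension
print(per_axis(2.0))   # 2.0x pixels ~ 1.41x per dimension
```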
You need to know the PPD (pixels per degree) density the headset can display (which is the analogy of regular monitor resolution in the 2D world) and then find the rendered image resolution which corresponds to that PPD (the rendered resolution can also be characterized in pixels per degree, in this case rendered ones). Because the PPD of the headset is not linear and the most important part is the center of the view, you want the rendered PPD in the center to match (or oversample) the headset PPD in the center.
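For the rendered image this is easy to quantify, because it is a planar (rectilinear) projection: the angular density peaks exactly in the center and follows from the resolution and the FOV. A small sketch (the numbers are illustrative, not actual Pimax values):

```python
import math

def rendered_center_ppd(width_px, hfov_deg):
    """Pixels per degree at the center of a planar projection; equals the
    focal length expressed in pixels, converted from per-radian to per-degree."""
    f_px = (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return f_px * math.pi / 180.0

# E.g. a 2560 px wide image covering a 120 deg horizontal FOV:
print(rendered_center_ppd(2560, 120.0))   # ~12.9 PPD in the center
```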
So theoretically, if you know the PPD of the headset in the center, you can calculate the rendering resolution for a particular "oversampling" factor (in the view center). The thing to keep in mind is that the "oversampling" factor I wrote about above is something completely different from SteamVR SS or PiTool RQ, because it applies to the angular density. The question is whether it is worth the effort.
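For completeness, this is what the calculation would look like, with the center-PPD formula from above inverted (again, illustrative numbers, not real headset specs):

```python
import math

def render_width_for_ppd(center_ppd, hfov_deg, oversample=1.0):
    """Render width needed so the rendered PPD at the view center matches
    (or oversamples) the given headset center PPD."""
    f_px = center_ppd * oversample * 180.0 / math.pi
    return 2.0 * f_px * math.tan(math.radians(hfov_deg) / 2.0)

# E.g. a headset with ~13 PPD in the center and a 120 deg horizontal FOV,
# targeting 1.5x angular oversampling in the center:
print(round(render_width_for_ppd(13.0, 120.0, 1.5)))   # ~3870 px wide
```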