Recommended render target resolution is just that - recommended

I think you are correct :slight_smile: I am still testing (the max cap adds confusion to the results), but it seems that your explanation is right.

I and others have asked @PimaxUSA to explain in more detail what PiTool does, but he never commented further on the subject. After his statement I ran a small test myself, with PiTool at 1.0 and at 1.5 (while keeping the render target resolution the same by (ab)using the 4096 limit imposed by OpenVR), and did not find any visible difference in the image quality of the output (https://community.openmr.ai/t/is-it-best-to-fix-pitool-at-a-certain-resolution-like-1-75-or-2-0-and-change-steamvr-ss-or-the-opposite-or-are-they-interchangeable/14897/28).
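For reference, this is roughly how I believe the two scalers combine into the final render target (a sketch of my reading only, not confirmed by Pimax; the base resolution below is hypothetical, and I am assuming PiTool quality scales each axis linearly while the SteamVR percentage scales pixel count):

```python
# Sketch: how PiTool quality and SteamVR SS presumably combine, and how the
# OpenVR 4096 px per-axis cap can make different settings collapse into the
# same final render target. The 2560x2120 base size is made up.
OPENVR_MAX_DIM = 4096

def render_target(base_w, base_h, pitool, steamvr_ss):
    """base_w/base_h: driver-recommended size at PiTool 1.0, SteamVR 100%."""
    axis_scale = pitool * steamvr_ss ** 0.5   # SVR % acts on pixel count
    w, h = base_w * axis_scale, base_h * axis_scale
    clamp = min(1.0, OPENVR_MAX_DIM / max(w, h))
    return round(w * clamp), round(h * clamp)

# With this hypothetical base, PiTool 1.0 and 1.5 both hit the cap and end
# up with the identical target - which is what the test relied on:
print(render_target(2560, 2120, 1.0, 3.0))   # (4096, 3392)
print(render_target(2560, 2120, 1.5, 3.0))   # (4096, 3392)
```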

It was a static comparison so I might have missed something, but I guess it would take more detailed information from Pimax to convince me that PiTool “does much more”, especially because I believe that PiTool (in particular, we are talking here about the Pimax custom compositor) should do as little as possible.

3 Likes

I do not want to argue with your observation (I believe it is authentic), but from the application (=Elite) point of view it works with just one input = the recommended render target resolution. It does not know how exactly OpenVR figures this out (or how it depends on PiTool), and it does not need to know. The expectation is that if you feed Elite the same resolution, it will render the same image (quality).

Now you have the image coming from the application (which is most of the time bigger than the native HMD resolution), and “the only thing” PiTool can do is downsample it while also applying the pre-lens warp. I do not expect this procedure to be done in several different ways depending on the PiTool settings, because that does not make much sense (at least to me). It is just a common-sense argument, though; I have no proof.
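To illustrate the kind of single pass I have in mind, here is a toy sketch (the radial profile and its coefficient are made up, and the real compositor of course runs per color channel on the GPU): each panel pixel is mapped through the warp back into the rendered image and sampled with bilinear filtering, so the warp and the downsampling happen in one step.

```python
import numpy as np

# Toy radial profile: where in the source image a given output (panel)
# pixel samples from; k1 is a made-up coefficient.
def warp_src_coords(u, v, k1=0.22):
    r2 = u * u + v * v
    s = 1.0 + k1 * r2          # barrel correction: sample further out with radius
    return u * s, v * s

def warp_and_downsample(src, out_h, out_w):
    """Single pass: pre-lens warp and downsampling via bilinear filtering."""
    sh, sw = src.shape
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for y in range(out_h):
        for x in range(out_w):
            u = 2.0 * x / (out_w - 1) - 1.0    # normalized [-1, 1] coords
            v = 2.0 * y / (out_h - 1) - 1.0
            su, sv = warp_src_coords(u, v)
            sx = (su + 1.0) * 0.5 * (sw - 1)   # back to source pixel coords
            sy = (sv + 1.0) * 0.5 * (sh - 1)
            if 0.0 <= sx <= sw - 2 and 0.0 <= sy <= sh - 2:
                x0, y0 = int(sx), int(sy)
                fx, fy = sx - x0, sy - y0
                out[y, x] = ((1 - fx) * (1 - fy) * src[y0, x0]
                             + fx * (1 - fy) * src[y0, x0 + 1]
                             + (1 - fx) * fy * src[y0 + 1, x0]
                             + fx * fy * src[y0 + 1, x0 + 1])
    return out
```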

This should not happen and is probably rather an indication of a bug (or bad design) in the pre-lens warp transform implemented by Pimax.

1 Like

@risa2000,

With the CV1, there was a limit past which increasing SS made no visual difference anymore. It was around 180%-200% of the native resolution.

Do you think the same math will apply to Pimax?

I tried 0.5, 1.0, 1.25, and 1.5 at around the same render target resolution; the targets at 0.5 and 1.25 happened to be equal. I couldn’t see a difference between 1.25 and 1.5, and the difference between 1.0 and 1.25 was so slight it may have been imagined.

However, the images at 0.5 and 1.25 were not the same. [quote=“risa2000, post:23, topic:16832”]
This should not happen and probably is rather an indication of a bug (or a bad design) in the pre-lens warp transform implemented by Pimax.
[/quote]

I agree that the game must render the same image at 0.5 and 1.25. The only thing that makes sense is that Pimax, for whatever reason (a bug?), uses the PiTool slider to control the resolution of the output of the pre-lens warp, and/or up/downsamples the output of SteamVR before applying the warp.

This would mean that at 0.5, PiTool would take the 3000 px output of SteamVR and turn it into 1000 px during the warp, while at 1.25 it would turn the same 3000 px into around 4000 px, before converting to the native resolution of the headset.

This would explain why it would be blurry. It could also explain why PimaxUSA would say it was better to increase PiTool to get a higher resolution, since that would preserve the rendered resolution through the warp. But this is just speculation based on what I observed with PiTool at 0.5.

1 Like

Hmmm… for a moment it looked like the outer-edge warping was less pronounced with PiTool at 2.0. This needs testing.

The same principle, yes, but for the exact numbers I have no idea. Let me elaborate on the principle. Each headset (Rift, Vive, etc.) has some “inherent” supersampling factor defined by the manufacturer. This was 1.7 for the Rift and 1.96 for the Vive. A good explanation of how it was derived for the Rift is here: https://www.reddit.com/r/oculus/comments/766a93/reality_checking_the_hype_train_on_the_pimax_8k/docy5e5/.

In the explanation you may notice that it depends on a particular quality (1:1 pixel mapping) at a particular FOV (90° for the Rift). This is something only the manufacturer decides, based on their expectations/limitations, and it defines the “out of the box” experience.

Then the users are free to improve on it. I do not know if the 180-200% numbers you refer to are meant over the real native panel resolution or on top of the built-in 1.7 factor (I do not have a Rift), but it does not matter. The important point is that with “out of the box” (=default) settings you still get “only” 1:1 pixel mapping at 90° FOV, and while this means you do not “lose” information (=pixels), it does not ensure that one rendered pixel maps precisely onto one pixel of the HMD panel (as the pre-lens warp is nonlinear and applies a different level of downsampling in different places).

So you get all the information somehow, but in an aliased form. Then you are free to apply additional supersampling, which is basically additional anti-aliasing. At a certain point, applying more supersampling produces no visible improvement in quality.

The original post on reddit is much more technical and quite informative about the implications for a larger FOV (e.g. Pimax). What it does not discuss in detail (because it is probably difficult to assess) is that at a certain FOV you basically do not need so much precision (or 1:1 pixel mapping) anymore, because the lens distortion is so big that it does not matter (here comes foveated rendering). This is defined more by the properties of the optical system than by the rendering geometry. So there are two opposing desires: render the image at enough precision that it is nicely anti-aliased out to a certain point (of FOV), but no further.

So, to sum up: while I guess it could be possible to calculate the maximal usable supersampling factor for Pimax from its optics and geometry, the exercise is probably left to the user to find the right value experimentally. There are several important differences between Pimax and the other headsets (FOV and view geometry on the rendering side, lens properties on the optics side), so I would not try to draw any conclusions for Pimax from Rift numbers.

1 Like

I was thinking along the same lines when I ran my tests in Elite. Since I was only comparing PiTool at 1.0 and 1.5, I thought that maybe PiTool upsamples the image by a factor of 1.5 and then downsamples it (even though that would not make much sense, as it would waste resources and lose precision), but then I should at least see the allocated resource in the Nvidia debugger (i.e. a texture with a resolution 1.5x, or whatever, that of the original input image rendered by Elite). I did not see anything like that.

3 Likes

Yes, that makes sense. Definitely the numbers cannot be reused… especially because I am not sure if the curve that gets to the best image is linear (like the SS slider) or quadratic, if you get what I mean.

Please share your findings if you figure such a setting out; I will do the same. I personally prefer better visuals and can accept a lower FPS, but of course there is a perfect middle ground somewhere, which I guess is subjective…

Look at fpsVR. SteamVR messes those settings up and defaults to 100%.

1 Like

This is all related to what you explained above, except that IIRC the CV1 had a recommended resolution very close to native at 100% SS. At PiTool 1.0 and 100% in SteamVR, the resolution is way above the native per-eye resolution of 2560x1440. Is that because of the angled panels, or some excess to increase the quality of the warp?

It also sounds like games that do not use the correct matrices and still use the ones for parallel panels are wasting quite a lot of resources, as the requested resolution is quite a bit bigger?

No. I’d have to go research the numbers (again), but when I was digging into SO and SO+ supersampling I learned that the CV1, Vive, and Vive-Pro all had notably higher render targets with SVR at 100%, whereas the Samsung headsets were much closer to (or right at) native resolution with SVR at 100%.

1 Like

I found the numbers I had recorded a few months back:

Odyssey+ (1440x1600 per eye)
100% = 1426x1779
140% = 1687x2105
200% = 2017x2516
250% = 2255x2813

OG Odyssey (1440x1600 per eye)
100% = 1428x1776
140% = 1690x2101
200% = 2019x2512
250% = 2258x2808

Oculus Rift (1080x1200 per eye)
100% = 1344x1600
140% = 1590x1893
200% = 1901x2263
250% = 2125x2530

Vive-Pro (1440x1600 per eye) *
100% = 2016x2240

Vive (1080x1200 per eye) *
100% = 1512x1680
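
Incidentally, the scaling in those numbers is consistent with the SVR percentage applying to pixel count (each axis scales by the square root of the slider value), and the 100% values reproduce the “inherent” factors mentioned earlier in the thread. A quick check, using only the numbers above:

```python
from math import sqrt

# Odyssey+ at 100% is 1426x1779; scale each axis by sqrt(SS%):
for ss in (1.4, 2.0, 2.5):
    print(round(1426 * sqrt(ss)), round(1779 * sqrt(ss)))
# -> 1687 2105 / 2017 2516 / 2255 2813, matching the measured values

# Pixel-count ratio of the 100% render target to the native panel:
print(1344 * 1600 / (1080 * 1200))   # Rift:     ~1.66 (the quoted ~1.7)
print(1512 * 1680 / (1080 * 1200))   # Vive:     1.96
print(2016 * 2240 / (1440 * 1600))   # Vive-Pro: 1.96
```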

6 Likes

Thank you, this is awesome information. So at 100% SS all headsets request a slightly bigger image than native resolution. That kind of explains why certain games looked nearly their best at 100% with my older headsets: they already got some supersampling.

Well, to varying degrees. At the time I recorded those numbers, my focus was on the Odysseys and how they were much closer to native at 100%, whereas the Rift, Vive, and Vive-Pro were considerably beyond native at 100%. Particularly the Vive-Pro versus the Odyssey, which both have the same native resolution but are dramatically different at 100% SVR. I don’t have my 5k+ right now (sent in for replacement) or else I’d record the same. I do believe it’s a bit over native with PP off; with PP on it’s way over native.

2 Likes

This is exactly what I referred to in my previous post as “inherent supersampling”, but I would not call it “slightly” bigger if it almost doubles the number of pixels.

The numbers for the 5k+ at normal FOV with PP on and off are in my original post. PP on seems to be 1.5x PP off (in pixel count, not dimensions).

1 Like

The reason is explained in the reddit article I linked above (which, BTW, uses the Rift as an example). In short, to preserve a defined amount of visual information through the pre-lens warp, a certain supersampling factor is needed when rendering the original image.
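As a toy version of that argument (the radial profile and coefficients below are entirely made up; the real one comes from the lens design): if a panel pixel at normalized radius r samples the rendered image at radius g(r), then sizing the render so that g'(0) = 1 gives 1:1 pixel mapping at the lens center, and everywhere else the warp only ever downsamples. The render target then has to extend out to g(1):

```python
# Toy radial warp: a panel pixel at normalized radius r samples the rendered
# image at radius g(r); k1 and k2 are made-up coefficients.
def g(r, k1=0.25, k2=0.10):
    return r * (1.0 + k1 * r**2 + k2 * r**4)

# g'(0) = 1 means 1:1 pixel mapping at the lens center; for r > 0 the warp
# minifies (downsamples), so the source must be larger than the panel:
per_axis = g(1.0)                 # 1.35 with these coefficients
print(per_axis, per_axis ** 2)    # ~1.35 per axis, ~1.82 in pixel count
                                  # (cf. the quoted 1.7 for Rift, 1.96 for Vive)
```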

3 Likes

I understood that part, maybe not 100%, but I have a good 3D imagination, so I can visually picture what is going on and why more pixels are needed to make the image look better. It makes more sense now.

1 Like

The way it would make sense, to my mind at least, would be in terms of optimisation: the whole distortion shader itself, or the parameters it works with, could be tuned to the particular ratio between the available samples in the intermediate (source) image and the resolution of the native output, making the loop of code as tight and efficient as possible for any given sample rate. Actually, I could imagine one might even choose different filtering methods altogether for different ratios, if they perform differently (in quality and speed) under different circumstances. (SteamVR, as it happens, at one point changed to a highly optimised algorithm which averaged more samples without losing speed, but on popular request Valve brought back the old one as an option, to placate those of us who preferred its slightly better sharpness, at the cost of more aliasing, to the softer output produced by the new “advanced” one.)

BUT it stands to reason that the choice of which algorithm/parameter set to use should, in that case, be based on the de facto size of the incoming intermediate images (relative to the native target output), and not on the distant PiTool quality preference, whose bitmap dimensions could be modified on numerous occasions between piserver and OpenVR initiating their conversation and the rendered intermediate images arriving back from the application (and SteamVR’s overlays) to piserver. (After that point, I suppose, in high-level code all bitmaps would likely be treated as 1.0x1.0 regardless of their varying pixel sizes or aspect ratios… :7 )

Yes, I suppose ultimately, at the bottom of the stack of working-with-what-one-has-got, the choice of base render target multiplier boils down to the magnification strength of the particular lens that is used, together with how much (FOV) of its profile is used (hence less base oversampling for the Rift than the Vive, which in turn explains why Rift users experience slightly better frame rates than Vive users for the exact same pixel-count display panels), and the amount of distortion that (with current optics) comes with that. :7


A few observations, from skimming recent activity in the thread:

I find it a little unclear when people write “CV1”. That mnemonic has become more or less synonymous with the first consumer Oculus Rift model in particular, and I find myself unsure whether some use it for the first models in the Pimax 8k/5k series as well. It would be helpful if everybody made the distinction in writing, just to preempt misunderstandings. :7

When it was found out that we could increase SteamVR’s resolution cap, and given PimaxUSA’s claim that the PiTool Quality “does more” than just increase the claimed bitmap size of the HMD, I tried a few variations in Elite Dangerous, and found (just an impression; no actual measurements made) that I seemed to be better off, both in terms of quality and speed, with PiToolQ at 1.0 and modifying things (EDIT: upwards, preferably :7) later down the line, than with the equivalent ultimate resolution starting from a high PiToolQ (EDIT2: …and then reducing down the line, if needed), which is contrary to what I see others say. I guess I’m going to have to do more experimenting.

Doing both HMDQ and game-internal SS at x2.0 in ED (without any preceding multipliers; if one has those, one would need to reduce HMDQ accordingly for the same resulting intermediate bitmap size; reduce this total render target below x1.0 and you throw away the advantage) does produce an image that is both clear and beautifully antialiased (tried in the past, on Vive and Rift), but that is 16 times the regular workload, and not even possible without everything grinding to a complete standstill with the large bitmaps of the 8k/5k, at least on my 1080Ti. :7
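(For the arithmetic behind that “16 times”: both multipliers act per axis, so the pixel workload relative to 1.0/1.0 is the square of their product:)

```python
hmd_quality, ingame_ss = 2.0, 2.0
print((hmd_quality * ingame_ss) ** 2)   # 16.0 - times the pixels of 1.0/1.0
```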

I see some claim fantastic results with a certain combination of every multiplier and internal SS down the line, supposedly hitting a sweet spot, and I will have to try that, but I remain somewhat dubious. Perception differs between people, and Elite has had plenty of players swearing by more than countering a high HMDQ with an SS below 1.0, which to my eyes only results in blur and inter-pixel irregularities (…or “poop on a rope”, as somebody coined it), so I usually prefer as little “magic” messing with things up- and downwards as possible. But we’ll see. :7

One thing I have noticed, but not investigated further, is that many UE4 titles bog down quite a lot on the 8k/5k by default, and show huge desktop mirrors, before one adjusts their settings.

These have “traditionally” “ignored” any SteamVR RenderTargetMultiplier (supersampling, at the end of the day) – and seemingly also the “baked-in” base multiplier one uses in SteamVR – so the question becomes what resolution these games (…and any others too, I’d presume) take as “device native” when using a P8k/5k. It would seem to be what piserver suggests, with PiToolQ factored in, because I could get Pool Nation running by reducing PiToolQ to 0.5 (after which I could reduce my existing high in-game screen percentage setting, and then restore PiToolQ to a higher value), but as I said: I have not investigated anything. :7

2 Likes

Right, I was talking about the situation where the “incoming” rendered image has the same resolution (as in my post about the PiTool 1.0 vs 1.5 comparison). Doing something different based on the incoming resolution makes sense, but that was not the point.

The point which is not completely clear (theoretically it is, but practically it is not) is whether it makes sense to combine in-game SS with HMD quality SS in Elite. Theoretically it makes sense to apply only HMD quality, but if the Pimax downsampling algorithm sucks (relatively speaking), then distributing the overall SS factor over both might be desirable.

On a side note, you are not the only one who runs PiTool at 1.0.

1 Like