Consolidated info on all different types of resolutions for 8K and 5K?

I am new to the party and have not backed the Kickstarter yet, as there is still some important information I am missing. Also, the Kickstarter FAQ seems a bit vague about everything (res, panel type, matrix type, IPD range, etc.). Below I have tried to consolidate the info I had about the panels and resolutions, and put question marks where due.

  • 8K and 8K(X) panels
    • HW resolution 3,840 x 2,160 (2x), LCD RGB matrix
  • 5K panel
    • HW resolution 2,560 x 1,440 (2x), LCD, OLED(?), matrix type?

The Engadget article says that the output resolution was only 2,560 x 1,440.

  1. The Kickstarter FAQ says the input for the Pimax 8K and 5K will be 4K. Is 2,560 x 1,440 supposed to be this 4K?

  2. If it is only 2,560 x 1,440, then that makes 1,280 x 1,440 per panel/eye, which means a different aspect ratio and 2x more horizontal upsampling than vertical. Is my understanding correct? (See the quick check after this list.)

  3. Even though the HW resolutions of the OR and Vive are lower (and both accept native-resolution input, i.e. no upscaling), there is significant supersampling set in software to compensate for lens warping. From what I read, in games such as ED people use 1.5x or even up to 2.0x supersampling, so I assume even at 2.0x there is a perceived image-quality difference.
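For question 2, here is a quick check of the arithmetic, assuming the total input really is 2,560 x 1,440 shared by both eyes and the panel is the 8K's 3,840 x 2,160 per eye (a sketch of my reading, not confirmed Pimax behaviour):

```python
# Question 2 sanity check: if the total input were 2560x1440 shared by
# both eyes, how much would each 3840x2160 panel have to upscale?
input_w, input_h = 2560, 1440          # hypothetical total input
panel_w, panel_h = 3840, 2160          # 8K panel, per eye

per_eye_w, per_eye_h = input_w // 2, input_h   # 1280 x 1440 per eye

print(f"per-eye input: {per_eye_w} x {per_eye_h}")
print(f"horizontal upscale: {panel_w / per_eye_w:.1f}x")  # 3.0x
print(f"vertical upscale:   {panel_h / per_eye_h:.1f}x")  # 1.5x
# Horizontal upscaling would indeed be 2x the vertical one.
```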

Now, does Pimax also use supersampling before warping, and if yes, what are the typical values?
This coefficient directly influences the rendering resolution, and therefore the video card requirements, but I cannot find any info about it in any hands-on/review article so far.

The 8K & 5K accept an input of 2560 x 1440 per eye, or 5120 x 1440 overall. The 8K upscales this to 3840 x 2160 per eye, or 7680 x 2160 overall.

5120 + 1440 = 6560
3840 + 2160 = 6000

These two values are close, so roughly the input resolution is 4K-ish.
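For what it is worth, comparing total pixel counts (rather than adding width and height) leads to the same "roughly 4K" conclusion; a quick check in Python:

```python
# Compare the total input pixel count of the 8K/5K to real 4K UHD.
eye_in_w, eye_in_h = 2560, 1440        # per-eye input

total_in = 2 * eye_in_w * eye_in_h     # 7,372,800 px over both eyes
uhd_4k = 3840 * 2160                   # 8,294,400 px

print(f"total input pixels: {total_in:,}")
print(f"4K UHD pixels:      {uhd_4k:,}")
print(f"ratio: {total_in / uhd_4k:.2f}")   # ~0.89, i.e. '4K-ish'
```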

The article says the 5K is also using a CLPL LCD and, presumably like the 8K, an RGB panel.


Having the input resolution 2x 2,560 x 1,440 would make perfect sense. But then the raw throughput difference to the Oculus Rift would be:

5120 x 1440 = 7,372,800 (Pimax pixel count)
2160 x 1200 = 2,592,000 (Oculus pixel count)

This gives:

7,372,800 / 2,592,000 ≈ 2.84x more pixels for Pimax.

Now the important question remains: which supersampling factor Pimax requires to compensate for warping. If it remains similar to Oculus (i.e. 1.5x per axis), then we are looking at a rendered pixel count of ~16.6M pixels, which is twice as much as real 4K.

I do not believe there is a graphics card today which can render 2x 4K at 90 FPS (or anywhere close) in any “real” game (e.g. ED or PC2).
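To make that arithmetic explicit, here is the rendering load under my assumption (unconfirmed for Pimax) that the supersampling factor is applied per axis, the way the Oculus and SteamVR settings work:

```python
# Raw throughput ratio and rendered pixel count at 1.5x per-axis SS.
pimax_in = 5120 * 1440     # 7,372,800 px (both eyes)
rift_in = 2160 * 1200      # 2,592,000 px (both eyes)

print(f"raw throughput ratio: {pimax_in / rift_in:.2f}")   # ~2.84x

ss = 1.5                                # per-axis supersampling factor
rendered = pimax_in * ss ** 2           # ~16.6M px
print(f"rendered at {ss}x SS: {rendered / 1e6:.1f}M px")
print(f"vs real 4K:           {rendered / (3840 * 2160):.1f}x")  # ~2.0x
```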


This looks like a good question for @Enopho, as a user who has two 1080 Tis (though he chose to use them in two computers).

The high supersampling factor is specifically there to compensate for aliasing caused by the low resolution. You’ll get a visual upgrade while keeping the same render-buffer size, which means you can lower the supersampling by the same factor the resolution rises. On top of that, the Rift and Vive both use PenTile arrangements with 2 subpixels per pixel instead of 3, which drops the effective resolution without reducing rendering or cable demands. In short, sticking to a GPU of the same performance will give a quite obvious quality improvement.

So for reference, let’s assume we have a 1.5x supersampling factor on the Rift. For simplicity, we’ll ignore the details of lens warping and unused panel space. We are then rendering 1.5 x 1080 x 1200 pixels, each of which has RGB, so 5.832 million rays. This is then displayed by 2 x 1080 x 1200 subpixels, 2.592 million rays. That’s an effective oversampling of 2.25.

We compare this to an 8K where we choose to aim at the same pixel count. We’re starting fairly weak at a supersampling factor of about 0.53, just because the field of view has become so much wider. That gets us an effective vertical rendering resolution of 1046 pixels; so yeah, the 8K will want higher GPU performance for the same effective resolution, to compensate for the field of view. That then becomes upscaled during the lens compensation pass to 2560x1440, and transferred as 11 million rays. The headset then further scales that up to 24.88 million subpixels.
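The per-eye numbers above can be reproduced like this (note that in this model the supersampling factor multiplies the pixel count, not each axis):

```python
from math import sqrt

def rendered_rays(w, h, ss):
    """Rendered pixels (ss = pixel-count factor) times RGB channels."""
    return ss * w * h * 3

# Rift: 1080x1200 per eye, PenTile panel with 2 subpixels per pixel
rift_render = rendered_rays(1080, 1200, 1.5)    # 5,832,000 rays
rift_display = 2 * 1080 * 1200                  # 2,592,000 subpixels
print(f"Rift oversampling: {rift_render / rift_display:.2f}")   # 2.25

# 8K: 2560x1440 per-eye input, RGB subpixels, same rendering budget
ss_8k = rift_render / (2560 * 1440 * 3)
print(f"8K SS at same budget:   {ss_8k:.2f}")               # ~0.53
print(f"effective vertical res: {1440 * sqrt(ss_8k):.0f}")  # ~1046
```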

In this comparison, we’ve lost some resolution by going from about 12 pixels per degree displayed and 14.7 rendered, to 12 pixels per degree displayed and 8.7 rendered. But that display resolution only applies to green! For the other colour channels, we also have a doubled subpixel count, raising the perceived resolution by about 22%, and that’s before we’ve considered the panel upscaling that fudges another 1.5x. Combine these factors, and we’re looking at an image with effective 10.67ppd that may be perceived as resembling 12ppd, the actual resolution of the 8K X.

So what’s the performance factor required to maintain effective resolution? Roughly 1.9x, which means dropping the supersampling factor from 1.5x to 1x. This works out because the original factor happened to match the subpixel count factor, and Pimax expanded the vertical field of view and resolution by the same amount while doubling the horizontal. Supersampling more will of course help, and on an 8K-X, this level is about a 0.66x factor.
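And the 1.9x figure, worked out the same way. The vertical FOV values here (roughly 100° for the Rift and 120° for the 8K) are my assumptions, chosen to match the 12ppd figures above:

```python
from math import sqrt

rift_px = 2160 * 1200                  # both eyes
pimax_px = 5120 * 1440                 # both eyes

# Same effective resolution: SS drops from 1.5x to 1.0x (pixel-count)
factor = (pimax_px * 1.0) / (rift_px * 1.5)
print(f"performance factor: {factor:.1f}x")          # ~1.9x

# Rendered pixels per degree (vertical); per-axis factor is sqrt(SS)
print(f"Rift rendered ppd: {1200 * sqrt(1.5) / 100:.1f}")    # ~14.7
print(f"8K rendered ppd:   {1440 * sqrt(0.53) / 120:.1f}")   # ~8.7
```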

As for “real” games, including E:D, people have differing priorities. I for one don’t care much for blur effects, but do care for reasonably decent shadows, for instance. The former scales its requirements with rendering resolution but the latter doesn’t (as shadows are typically rendered in their own buffers). There will be plenty of tuning to do.


@LoneTech
You say that the major reason for supersampling is antialiasing for the low res.

My understanding is that there are two types of aliasing involved. The first is the one you describe, i.e. something that already exists on a standard flat-panel desktop monitor, where the supersampling helps to mitigate the “discrete nature” of the gfx card rasterizer. In other words, while the card can draw lines and triangles, it can only draw them by pixels, so a line on a low-res display gets natural jaggies from the pixel geometry.

In this case you are right that the increased resolution does not need more supersampling. In fact, if (still in the flat-panel situation) one increases the output resolution but keeps the physical dimensions the same, effectively making the panel “more dense”, and the image is rendered with the corresponding supersampling factor, the perceived quality will remain the same. For example, a 1000x1000 display rendered with 4x SS (i.e. at 2000x2000) would give a similar picture to a 2000x2000 display rendered with 1x SS (i.e. changing the SS multiplier by a factor of 1/4). This is the reason why people with high-res panels (4K, 5K) usually do not need antialiasing and can save GPU processing power there.
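In code form, the equivalence in that example is just conservation of sample count (a toy model that ignores reconstruction filters):

```python
def samples(w, h, ss):
    """Total rendered samples, with ss as a pixel-count multiplier."""
    return int(w * h * ss)

low_res_panel = samples(1000, 1000, 4.0)    # 4,000,000 samples
high_res_panel = samples(2000, 2000, 1.0)   # 4,000,000 samples
assert low_res_panel == high_res_panel      # same density, similar picture
```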

There is, however, a second type of aliasing (if we may call it so) in HMDs, which comes from lens warping (or, to be more precise, from the final warp applied by the compositor to correct for lens distortion). In this warp, not all pixels of the rendered view are mapped 1:1. Some (possibly in the center) are close to 1:1; others, off the center, may be 1:2 or even up to 1:3 (see this explanation).

So in order to benefit from the panel resolution, and to avoid generating pixels 2 or 3 times the native pixel size in the non-center zone, the HMD compositor uses supersampling before warping, to give the warp enough pixels to match the panel resolution even in off-center areas.
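A crude sketch of that sizing logic, with made-up numbers (real compositors use per-region distortion meshes rather than a single worst-case factor):

```python
# Hypothetical pre-warp buffer sizing: scale the render target by the
# worst-case pixel mapping ratio of the lens distortion, so that even
# off-center panel pixels receive at least one rendered pixel.
def prewarp_buffer(panel_w, panel_h, worst_mapping_ratio):
    return (round(panel_w * worst_mapping_ratio),
            round(panel_h * worst_mapping_ratio))

# Example: per-eye 2560x1440 target with a worst-case 1:1.4 mapping
print(prewarp_buffer(2560, 1440, 1.4))      # (3584, 2016)
```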

You have completely dismissed the second case, so there is not much I can follow up on, and I agree that for the first case you are right. My point is that I am not sure we can really dismiss the second case. I would love to run some tests to prove (or disprove) my theory, but unfortunately I do not have any HMD at the moment, so take my claim that the second point is also important with a grain of salt.

The warping aliasing is clearly defined by the lens properties (as per the link I mentioned above), so it is possible that the requirements for Pimax will differ from those of Oculus, and it also seems to me that the pixel arrangement (i.e. PenTile vs RGB) does not really matter much in this case.


There is certainly aliasing in the precision with which we can correct for lens distortion! I’ve mentioned this elsewhere, as a particular reason that raising the GPU output resolution to the panel’s native resolution will help even with the same size render buffer. This is in favour of higher-resolution HMDs. Ignoring the lens effects was for simplicity of the mathematical model used to compare, and I cannot easily quantify this aliasing.

It relates to another source of aliasing that is rarely mentioned. The upscaler in Pimax 4K and 8K makes the assumption that the display panel is seen as a regular grid, with equal density between all pixels. This is not the case because of the lenses; therefore the interpolation math is slightly off, similar to image processing without gamma correction. The 5K and 8K-X models won’t have that particular artifact.
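A toy illustration of that interpolation error (the 0.58 sample position is invented; the real offset depends on the lens profile):

```python
def lerp(a, b, t):
    """Linear interpolation between two neighbouring pixel values."""
    return a + (b - a) * t

# The upscaler assumes the new pixel sits at the uniform midpoint...
assumed = lerp(0.0, 1.0, 0.50)
# ...but through the lens its true position is slightly elsewhere.
true_pos = lerp(0.0, 1.0, 0.58)     # hypothetical distorted position
print(f"systematic error: {abs(true_pos - assumed):.2f}")
```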

I also think that it is not easy to speculate about the warp aliasing, especially since we do not know the optical properties of the lenses. This was the reason why I asked here, hoping someone from Pimax may chime in :).

Considering the explanation from Reddit I linked above, it seems that this “correction” might not be negligible (if Oculus had to use 1.7x SS to ensure a 1:1 mapping at 90° FOV). On top of that, as the FOV increases the aliasing gets worse, so technically the supersampling factor would also need to increase. I guess Pimax will have to strike some balance between an acceptable SS factor and not losing too much clarity at high FOV, but this will definitely impact the image quality and the graphics card requirements, and that is my concern so far.