Tried to get an impression of what the difference between 8k and 8k X might look like.
For that I downloaded a 4k wallpaper, downscaled it to 1440p (bilinear interpolation), upscaled it again (bilinear interpolation again), and put the original and rescaled images side by side at 400% zoom (to simulate the ppd to expect from the 8k(X) panels).
Things to note:
The original image was already quite compressed, so you see all kinds of compression artifacts in the 4k image on the left.
As the right (scaled) image was obtained by scaling an original 4k image down (and back up), this is essentially 4k-to-1440p supersampling (and then upscaling again). So it would be the optimal situation for the 8k, where you have a GPU that is able to render one eye in full 4k before downscaling to 1440p (this GPU would then also have been able to power the 8k X).
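For anyone who wants to play with this without an image editor, here is a tiny pure-Python sketch of the round trip described above, run on a single grayscale row instead of a full wallpaper. One simplification to be aware of: it uses a factor-2 box-filter downscale (pairwise averaging) instead of the 1.5x bilinear resample Affinity Photo would do, so the numbers are only illustrative - but the qualitative point (pixel-level detail is averaged away and cannot be recovered by upscaling) is the same.

```python
# Toy sketch of the downscale/upscale round trip, on one grayscale row.
# Assumption: factor-2 box downscale instead of the 1.5x bilinear resample
# from the post - the loss of fine detail it demonstrates is the same idea.

def downscale_2x(row):
    """Average adjacent pixel pairs (box filter, factor 2)."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]

def upscale_linear(row, new_len):
    """Linearly interpolate a 1-D row back up to new_len samples."""
    old_len = len(row)
    out = []
    for i in range(new_len):
        pos = i * (old_len - 1) / (new_len - 1)   # fractional source index
        lo, frac = int(pos), pos - int(pos)
        hi = min(lo + 1, old_len - 1)
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

def detail(row):
    """Mean absolute difference between neighbouring pixels."""
    return sum(abs(a - b) for a, b in zip(row, row[1:])) / (len(row) - 1)

# a "4K-wide" row with the finest possible detail: alternating black/white
original = [255.0 if i % 2 == 0 else 0.0 for i in range(3840)]
roundtrip = upscale_linear(downscale_2x(original), 3840)

print(detail(original))   # 255.0
print(detail(roundtrip))  # 0.0 - the stripes are gone after the round trip
```

The alternating stripes are the worst case, of course; real wallpapers lose less, which is why the side-by-side images still look fairly close at first glance.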
What is the point of using an already corrupted 4K image as the origin? I would assume that the 8K X will render in 4K directly, so there will be no artifacts. So if you want to make the comparison with the 8K X, you should take a clean 4K image.
On the other hand, the 8K setup will never render 4K output, because it will not be necessary. The 8K version accepts 2560*1440, so the card will render at this resolution with some supersampling (we do not yet know how much SS there will be).
So for the comparison, technically, you should render the same scene in 4K, then render the same scene at 2560*1440 (with SS) and upsample it to 4K.
Sorry, I didn’t have a better 4k image at hand. If somebody has a better image and can reproduce this with it - feel welcome.
The right side is essentially “rendering a 2560x1440 image with 4k supersampling” (by downscaling a 4k image to 1440p) - and then upscaling to 4k again.
Sure, I only applied bilinear interpolation without extra sharpening - the 8k scaler might do something else (though as it has to do that for 2x90 Hz in realtime, I’m not sure whether its scaling algorithm can be significantly more expensive than what Affinity Photo did here).
Here is another image with a better-quality source image.
In the background you see the unzoomed 4k image (to better evaluate the quality). (edit: unfortunately Google scaled it down during upload - use the alternative link to download the full 4k image)
The leftmost zoom is a 400% zoom of the 4k image (to simulate the ppd of the 8k X).
The middle one was reduced to 1440p (no interpolation, hard clipping) and upscaled again.
And the right one was scaled down from 4k to 1440p (thus 1.5x supersampling) and upscaled again.
edit: alternative link that allows downloading the 4k image: Online-Speicher (first click on the image and then on “Herunterladen” (download) to get the 4k image)
Great little comparison of the clarity 4k provides over 1440p upscaling.
While 1440p upscaling is still great, I would be curious how that image would look without the upscaling, say in the Pimax 5k’s case - to see how much the upscaling does to smooth out the transitions, blurring/blending them together.
Supersampling only really works for the HMD, since it allows a more accurate result when applying the distortion to the image which is then reversed by the lens. At least that’s how I understand things. On a flat image, once you downscale you’ve lost information, so there isn’t much benefit.
I’ve upscaled images from 720p to 1080p to mimic what you’d get upscaling from 1440p to 4K. In both cases you get 2.25 times more pixels, and it’s easier to see the inaccuracy in upscaling since most people have 1080p monitors. The images were uploaded to imgbb.com, which keeps the originals and doesn’t resize or re-encode them.
I did not notice any real difference between linear, cubic, and sinc interpolation, so I just used linear, as it’s probably the fastest upscaling method, meaning a smaller hit on latency.
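The “2.25 times more pixels” claim is easy to verify: both steps scale width and height by the same 1.5x factor, so the pixel count grows by 1.5² = 2.25 in each case. A quick check:

```python
# Both resolution steps scale width and height by 1.5, so the pixel
# count grows by 1.5^2 = 2.25 in both cases.
steps = {
    "720p -> 1080p": ((1280, 720), (1920, 1080)),
    "1440p -> 4K":   ((2560, 1440), (3840, 2160)),
}
for name, ((w0, h0), (w1, h1)) in steps.items():
    ratio = (w1 * h1) / (w0 * h0)
    print(f"{name}: {ratio}x pixels")  # 2.25x pixels in both cases
```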
The better question is whether we can use something like the checkerboard upscaling used on current-gen consoles to get 4K resolution from a lower one. They certainly don’t render in native 4K. Maybe it’s possible to use this method to upscale to 2x4K and send that to the Pimax 8K X to get a relatively good result while we wait for better and more affordable GPUs. It would essentially future-proof your HMD while letting you use lower-end hardware in the meantime.
Note: I had to put the links as text because of the new-user limitation of 2 links or 1 image per post.
Good example! Just remember to zoom into the images to get an impression of how the differences look at a lower ppd than your monitor has (the 8k X will look approximately like VGA on a 24" screen, at 20-25 ppd).
Also hoping that we can use GPU-side scaling to use the 8k X in the first year(s).
P.S.: Not sure how the “originals” were created. Presumably by downscaling e.g. an original 20 MP digital photo? Then they would all already be in the “supersampled” category.
I did some similar experiments, but with DCS graphics. Even though the differences don’t look significant, when you focus on details that are important in game, it is huge. For example, you are able to spot targets on the ground much more clearly than at lower ppd, and also read instruments.
At lower ppd the picture is more melted together, and the higher you go, the sharper and more clearly visible the edges of objects become.
But like I say, when you are not focused on such details, at first look you are disappointed by the difference - but in practice, in game, this is a matter of life and death, or of whether you want to be competitive with monitor users in MP gaming.
I was doing that even for CV1 resolutions, because someone on the DCS forum said the difference was too small. And really, at first look there is almost no difference between the CV1 and Pimax 4K ppd, but when you pay attention to details, the higher ppd gives a significant advantage.
Just compare 720p video to 480p, or 1080p to 720p. It’s the same relative step in resolution. Pick something slow-moving, or compression artifacts will screw up the comparison, because hosting sites tend to spend proportionally even less bitrate on the lower-resolution versions.
Here you go. I added an example where I just reduced the 4k image to 1440p (no interpolation, just leaving out pixels) and then zoomed in to 3840/2560 * 4 = 600%. This would correspond to a 5k without any supersampling, just rendering in 1440p.
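For reference, the 600% figure simply combines the 400% zoom used for the 4k reference with the 1.5x larger pixels of a 1440p render:

```python
# The 4K reference is shown at 400% zoom; a 1440p render has pixels
# 3840/2560 = 1.5x larger, so it needs 1.5 * 400% = 600% zoom to appear
# at the same simulated ppd.
base_zoom = 4.0              # 400% zoom used for the 4K image
pixel_ratio = 3840 / 2560    # 1.5: how much coarser 1440p pixels are
print(f"{base_zoom * pixel_ratio * 100:.0f}%")  # 600%
```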
I think this image is not representative of the 5k though, as most VR experiences are optimized for HMDs with low resolution. This image, in contrast, is full of nitty-gritty details. And I would assume people would at least use some supersampling on the 5k.
Edit: To see how good you could get on a 5k, I added another example with bicubic interpolation from 4k to 1440p (so quite expensive 4k->1440p supersampling), then zoomed to 600%. This shows what could be done with the 5k and a very capable GPU. So upgrading the GPU could still get a lot more out of the same HMD.
Here’s also something that might help, a test I did with my Pimax 4k: I ran VRMark, set it to 4k rendering, and then dumped the HMD image and the originally rendered game pics. The HMD image is of course 2560x1440 and the left eye (and right eye too, of course) is 1920x2160 (half of 4k).
(You can download the originals.) Of course the HMD image is rendered with distortion, but apart from that I think it gives a nice idea of the resolution difference too (especially watch the poster at the top of the pic).
If you want to know how a desktop image would look in the 5k, you can easily test this yourself.
First determine how big your monitor appears from the distance you want to see it at in VR later (e.g. a 24" display from 1 m away covers about 30°). Then you have to calculate which resolution to set your monitor to in order to “simulate” the resolution to expect on the HMD. The Pimax 5k has 2560 pixels of width per eye and about 150° FoV per eye (with 100° overlap), so you get a pixel-per-degree value of about 17 ppd. (This assumes they managed to use 100% of the screen with the new prototypes; with the old one, where only 80% of the screen was used, the 5k would only reach 13.6 ppd.)
So if the viewing angle towards your monitor in your sitting position is e.g. 30°, you would have to set your monitor to 17 ppd * 30° = 512 pixels (*288 on a 16:9 monitor). If you move closer so that you see 60°, it would be twice that, etc.
This is of course hypothetical. In order to do this you would need a GPU that can render each eye in 4k (as for the 8k X) and then additionally apply an expensive interpolation down to 1440p (which takes more than a second per image in Affinity Photo). So such a GPU is likely several generations away… But it illustrates how good (close to the 8k X) you could get without swapping HMDs. (If you had the choice, you would of course use the 8k X with that GPU, as you’d get even better image quality for less computation overhead.)
Edit: I uploaded one (final) alternative image where the 5k examples are replaced by an 8k X supersampled variant. (The image with the 5k is still there.)
This would require a GPU that can render 8k per eye (ugh…).
Finally, there is an image (second from left) that shows how this 8k image would look without resampling on a hypothetical Pimax 8k2 X (with two 8k displays) - or on a “normal” Pimax 8k X with 100° FoV lenses that sacrifices field of view (on both axes) for sharpness.
So in contrast to the unrealistic 8k2 X, or the 8k->4k supersampled image on the left, which would require a GPU four times more powerful than the normal 8k X needs, the “100° FoV 8k X” would work with the same GPU as the 200° 8k X (just with half the FoV and twice the sharpness).
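The “four times more powerful” estimate follows directly from the per-eye pixel counts:

```python
# Rendering 8K per eye vs 4K per eye is a 4x difference in pixel count,
# which is where the "four times more powerful GPU" estimate comes from.
per_eye_4k = 3840 * 2160   # what the normal 8k X needs per eye
per_eye_8k = 7680 * 4320   # what the supersampled variant would need
print(per_eye_8k // per_eye_4k)  # 4
```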
This imho underlines how nice it would be to have swappable lenses. Quite some sharpness could be gained for applications where a big FoV isn’t mandatory.