Fans of 8K how do you explain this?

Just counting the pixel lines, the 5K+ has higher pixel density. The black lines between the pixel rows are wider on the 8K as well. All I did was count the pixels and rotate the images to match the alignment. Can anyone explain this?

5 Likes

Panel utilization I assume.

You counted visible lines, right? The line pattern on the 8K appears not to be aligned horizontally. So if, for argument’s sake, we assume both headsets were horizontally oriented and the line pattern is at 45 degrees, you’d expect the line distance you see on the 8K to match the diagonal distance from pixel to pixel. That would make it too large by a factor of square root of 2, or about 1.4. So assuming you counted about 45 lines on the 8K over the same distance where you counted 50 lines on the 5K+, the 8K panel has nearly 64 pixels over that distance. That would mean about a factor 1.3 higher panel resolution on the 8K. This is a rough estimate, but not contradicted by the diagonal lines.

Note that this explanation essentially requires that the pixels are displaced to line up with the diagonals, such that the diagonal line passes through two pixels for each image line. I am not certain that is the case; there’s lots I don’t know about the 8K panel.
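Just to make the arithmetic explicit, here is the calculation above as a tiny Python snippet. The 45/50 counts are only the example figures from this post, not measurements:

```python
import math

# Quick sanity check of the sqrt(2) argument above, using the example counts
# from the post (45 visible lines on the 8K vs 50 on the 5K+ over the same distance).
lines_8k = 45   # visible diagonal lines counted on the 8K
lines_5k = 50   # visible horizontal lines counted on the 5K+

# If each visible 8K line corresponds to the pixel-to-pixel diagonal distance,
# the underlying pixel pitch is smaller by a factor of sqrt(2).
equivalent_8k_pixels = lines_8k * math.sqrt(2)   # ~63.6, i.e. "nearly 64"

ratio = equivalent_8k_pixels / lines_5k          # ~1.27, i.e. "about a factor 1.3"
print(f"equivalent 8K pixel count: {equivalent_8k_pixels:.1f}")
print(f"8K / 5K+ density ratio:    {ratio:.2f}")
```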

5 Likes

What I can tell you is:
_I had a 4K, so I know what SDE is (until you wear an HMD on your head you don’t really know what it is, how it “feels”). And for decent VR, the SDE on the 4K was decent.
_According to the numbers (and this was a surprise, since I didn’t think about it until yesterday… which is stupid), the 8K and the 4K should have similar SDE and the 5K should be the worst of the three; see the rough figures sketched just below.
For comparison, I found this: http://community.openmr.ai/t/for-reference-pimax-4k-vs-vive-pro/8348/79
_About the pixel arrangement, I’ve read that since the 5K has a horizontal one like most camera sensors, no artefacts are created by misalignment when it is captured (artefacts our eyes wouldn’t see anyway, just like when someone wears a striped shirt on TV). This could make sense.
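A very rough sketch of the angular-density side of this, using the commonly quoted panel layouts (one 3840x2160 panel shared by both eyes on the 4K, two 2560x1440 panels on the 5K+, two 3840x2160 panels on the 8K). The FOV figures below are my own guesses, and SDE also depends on fill factor and subpixel layout, so treat this as illustration only:

```python
# Rough pixels-per-degree sketch (ballpark numbers, not official specs).
# SDE roughly tracks angular pixel density, so horizontal pixels per eye
# divided by per-eye horizontal FOV gives a first-order feel for it.

def ppd(horizontal_pixels_per_eye, horizontal_fov_deg_per_eye):
    """Horizontal pixels per degree for one eye."""
    return horizontal_pixels_per_eye / horizontal_fov_deg_per_eye

# Assumed figures: 4K = one 3840x2160 panel shared by both eyes,
# 5K+ = two 2560x1440 panels, 8K = two 3840x2160 panels.
# FOV values are guesses for the sake of the comparison.
headsets = {
    "Pimax 4K":  ppd(3840 / 2, 100),
    "Pimax 5K+": ppd(2560, 150),
    "Pimax 8K":  ppd(3840, 150),
}

for name, value in headsets.items():
    print(f"{name}: ~{value:.0f} pixels per degree")
```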

So, I am glad the 8K is the one I am going for.

4 Likes

He has rotated them.

1 Like

Extremely clever and worthwhile thing to do, thank you.

However, I’m a little confused. With possibly higher panel utilisation, the 5K+ may seem more compact and therefore have more “information per inch”, but not more pixels per inch. So it doesn’t make sense to me that you’re counting more actual pixels in the same area.

Huh, so I wasn’t imagining things. That would mean about 20% of the 8K screen is not being used… or these are not 4K panels…

edit: wait a minute, if my guesstimating math is right, if you count pixel rows along a diagonal, you end up with a higher number without any more pixels on the panel… is that how they got to 4K? By using a 1440p panel with a diagonal pixel arrangement?

edit2: with a little more guesstimation I think there is no way that a panel with 8.3 million pixels of that size even fits into the housing… (vertical res: 1440p to 2160p is a 50% difference, and according to this picture the pixels are about 10% bigger; if you have 10% bigger pixels and 50% more of them, your panel is about 65% taller, which probably wouldn’t fit into the HMD…)
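A quick check of that guesstimate in code. Both factors are the poster’s rough readings from the photo, not measurements; the point is only that the two factors multiply rather than add:

```python
# Rough check of the "does a 4K panel even fit" guesstimate above.
pixel_count_factor = 2160 / 1440   # 50% more pixel rows vertically
pixel_size_factor  = 1.10          # pixels look ~10% bigger in the photo (guess)

panel_height_factor = pixel_count_factor * pixel_size_factor
print(f"panel would need to be ~{(panel_height_factor - 1) * 100:.0f}% taller")
# ~65% taller: the factors multiply, so slightly more than the 60% you get by adding them
```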

1 Like

Unless the camera was at exactly the same distance from the lens in both photos, the pixel count comparison will not be accurate. So you would need to count the used pixels from one edge to the other.
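One way to read the edge-to-edge suggestion, as a sketch: instead of comparing raw line counts in a small patch, normalize each count by the full visible span of the panel in the same photo, so camera distance cancels out. All numbers below are made up for illustration:

```python
def lines_per_full_width(lines_in_patch, patch_width_px, full_width_px):
    """Extrapolate a patch count to the whole visible panel width (same photo)."""
    return lines_in_patch * (full_width_px / patch_width_px)

# hypothetical measurements from two photos taken at different distances
photo_a = lines_per_full_width(lines_in_patch=50, patch_width_px=400, full_width_px=3000)
photo_b = lines_per_full_width(lines_in_patch=45, patch_width_px=300, full_width_px=2600)

print(f"photo A: ~{photo_a:.0f} lines edge to edge")
print(f"photo B: ~{photo_b:.0f} lines edge to edge")
```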

4 Likes

I am not sure, but if you draw a triangle:

If you draw diagonal lines parallel to the hypotenuse “h”, you can fit more of them than vertical lines along c1 or horizontal lines along c2.

Then if you count the number of diagonal lines crossing the c2 side, you get more lines (or pixels) than if you count the horizontal lines along that same c2 side. Does that mean we get more apparent pixel density in the vertical?
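A tiny grid example of that effect, just to make it concrete (the block size n is arbitrary): counting diagonal lines gives a bigger number than counting rows, without any extra pixels on the panel.

```python
n = 10  # a hypothetical n x n block of pixels on a square grid

horizontal_rows = n              # lines y = k through pixel centres
diagonal_lines  = 2 * n - 1      # lines x + y = k through pixel centres
total_pixels    = n * n          # unchanged either way

print(f"{horizontal_rows} horizontal rows, {diagonal_lines} diagonal lines, "
      f"{total_pixels} pixels in both cases")
```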

I agree.

IMHO, people arguing with through-the-lens photos all the time here are missing the point completely. Until you know the exact conditions under which these photos were taken, this is pointless.

Both pictures must be taken:

  • on the same position and rotation in the game world
  • same distance from the lenses
  • same spot on the lens
  • same position and rotation of the camera
  • in RAW format without automatic correction inside the camera/smartphone
  • then there can still be errors because of the internal chip of a digital camera; best to use an analogue camera

And I am pretty sure I missed 100 other sources of error. That’s why these pictures are only for getting a general impression. The perceived experience of the reviewer in words is far more valuable than these pictures. I think counting pixels is just ridiculous…

8 Likes

I don’t agree. Here both photos are based on the same mechanical layout of the headsets, and even the lenses are the same, so the image you take a photo of has the same physical dimensions. As long as the camera captures enough detail to make out the pixels, and the image size (say the nose, if you hold the pictures side by side) is the same in both photos, you can compare and count. What you do have to take into account is the error of your measurement; that kind of thing can add up heavily. In physics/chemistry measurements you sum up the factors and give a percentage error for your measurement (next time you measure a weight, have a look at the device to see what the +/- error can be, like 2000 g +/- 1 g, and the reading can differ within that range too).
So if you measure the space between pixels and come up with, say, a 50% error and you compare 1440p -> 2160p, the whole thing will be pointless. Just measuring is not enough; you need to take more into account (including whether the method itself is valid, which is what I tried to cover above for this case - if I overlooked something important please point it out, that’s the way of science - and also valid documentation so it is reproducible by peers!).
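A rough sketch of how counting uncertainty propagates into the resolution ratio you infer. The counts and errors below are made up for illustration:

```python
count_5k, err_5k = 50, 2    # lines counted on the 5K+ photo, +/- counting error
count_8k, err_8k = 45, 2    # lines counted on the 8K photo, +/- counting error

ratio = count_8k / count_5k
# for a ratio, relative errors add (first-order approximation)
relative_error = err_5k / count_5k + err_8k / count_8k
print(f"ratio = {ratio:.2f} +/- {ratio * relative_error:.2f}")
# roughly 0.90 +/- 0.08 here: even a modest counting error makes the inferred ratio fuzzy
```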

2 Likes

I’ve taken a few moments to look closer at some of SweViver’s through-the-lens shots, and they’re JPEGs, presumably at the sensor resolution of his phone. At the resolution the 8K shots show, the detail is absolutely dominated by the interaction of the camera’s Bayer filter with the panel’s subpixel arrangement, with some uncertainty from the Pimax lens thrown in, and repeated conversion back and forth between chroma subsamplings. The diagonal pattern observed may be entirely Moiré. We need significantly more magnification to get representative data about the panel. On the plus side, that means it may be irrelevant to the eye for many. There are other artifacts we can observe, like the colour shifts MRTV showed in the text chart for his maximum supersampling comparison. MRTV did provide pictures, but they were the frames used for the video, not easily relatable to factors like sensor resolution or angular size.

The sort of shot needed to tell these details may be completely different from the overview shots of how things might look inside the headset. For example, here’s a Zenfone AR in a Daydream View, photographed from 2m away with a hama 80-250mm lens on a Nikon D7000, cropped for the forum’s size requirement.


No one would mistake that picture as representative of what you see in the headset, would they? But it does show a few central pixels with sufficient resolution to reveal the subpixel pattern of the OLED panel: yet another PenTile. Just a little bit out of focus, and you’d be completely distracted by lining up red subpixels, which are larger than the green and brighter than the blue (I did have a blue reduction on).

And here’s an example, with the same headset, of the kind of shot you might use in attempts to estimate field of view. It is an equirectangular projection shot with a Ricoh Theta V:


In hindsight, I should have added a ruler straight out from the headset to give an idea of how far the camera is from the lens.

Neither kind of shot gives a complete impression of what you’d see using the headset. This headset has severe enough SDE that the subpixel arrangement is plainly visible, and it does have a narrow field of view, but it might be wider in actual use than this shot suggests (which covers 360 degrees horizontally, not remotely human). I did not deform the soft edges of the set, and it might not have been vertically aligned (the camera was, on a tripod with a 2D macro slider).

7 Likes