I was wondering how the Pimax internal projection works and decided to run the following experiment. I used:
- PiTool 1.0.1.91 (rendering quality 1.0)
- Normal FOV
- Parallel projection off
- IPD dialed in at 70 mm
I changed the Pimax static image (%ProgramFiles%\Pimax\Runtime\resource\pimax_default.jpg) to this particular picture:
PiTool does not have to be running; the static picture is displayed by pi_server.exe as soon as the headset is detected and initialized. Only the PiServiceLauncher.exe service needs to be restarted after changing the file. I kept the original resolution of 2560x1440, since that was the resolution of the original pimax_default.jpg, but as it turned out later, there is no correlation between the panel resolution and the static picture resolution.
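To make the swap repeatable, here is a minimal Python sketch of the replace-and-restart step. It assumes the service is registered under the name PiServiceLauncher and that the script runs from an elevated prompt; the test image path is hypothetical:

```python
import os
import shutil
import subprocess

# Static image displayed by pi_server.exe (path from the description above);
# %ProgramFiles% is expanded at run time.
DEFAULT_IMAGE = os.path.expandvars(
    r"%ProgramFiles%\Pimax\Runtime\resource\pimax_default.jpg")

def swap_static_image(new_image: str) -> None:
    """Back up the stock picture, replace it, and restart the service."""
    backup = DEFAULT_IMAGE + ".bak"
    if not os.path.exists(backup):
        shutil.copy2(DEFAULT_IMAGE, backup)  # keep the original once
    shutil.copy2(new_image, DEFAULT_IMAGE)

    # Assumed service name "PiServiceLauncher"; requires admin rights.
    subprocess.run(["net", "stop", "PiServiceLauncher"], check=True)
    subprocess.run(["net", "start", "PiServiceLauncher"], check=True)

swap_static_image(r"C:\temp\test_grid.jpg")  # hypothetical test picture
```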
The picture first has to be "rendered" into 3D space by being transformed into left and right "skewed" versions. It is interesting that a significant part of the image on the inner side is cut off (because of the stereo overlap), while on the outer side quite a large area is wasted. The resolution of these images, 3202x2633, corresponds to the size the headset advertises to OpenVR when queried for the recommended render target resolution (parallel projection off, PiTool at 1.0).
Left eye skewed and culled version:
Right eye skewed and culled version:
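The advertised size is easy to verify against the OpenVR API. Below is a minimal sketch using the pyopenvr bindings (exact wrapper return conventions may differ between versions); getProjectionRaw additionally shows how asymmetric each eye's frustum is, which is where the "skew" comes from:

```python
import openvr

# Initialize OpenVR only for querying; no rendering is involved.
openvr.init(openvr.VRApplication_Utility)
system = openvr.VRSystem()

# Per-eye size the runtime recommends for the render target; with
# parallel projection off and PiTool at 1.0 this is where 3202x2633 comes from.
width, height = system.getRecommendedRenderTargetSize()
print(f"recommended render target: {width}x{height}")

# Raw projection half-angle tangents (left, right, top, bottom); the
# left/right values being far from symmetric reflects the skewed frusta.
for eye, name in ((openvr.Eye_Left, "left"), (openvr.Eye_Right, "right")):
    l, r, t, b = system.getProjectionRaw(eye)
    print(f"{name} eye tangents: l={l:.3f} r={r:.3f} t={t:.3f} b={b:.3f}")

openvr.shutdown()
```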
Then the images are transformed into their pre-lens-warp versions:
Left eye warped:
Right eye warped:
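The warp compensates for the pincushion distortion of the lenses. Pimax's actual warp function is not public, so purely as an illustration, a generic radial barrel pre-distortion (with made-up coefficients k1, k2) would look like this:

```python
def barrel_predistort(u, v, k1=0.22, k2=0.05):
    """For a pixel at normalized, lens-centered coordinates (u, v) in the
    warped output image, return where to sample the undistorted source.
    Sampling farther from the center than the output pixel compresses the
    picture toward the center (barrel), which the lens pincushion undoes.
    k1 and k2 are illustrative coefficients, not Pimax's actual values."""
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * scale, v * scale

# A point halfway to the edge samples the source noticeably farther out.
print(barrel_predistort(0.5, 0.0))  # -> (~0.529, 0.0)
```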
Originally I thought that the IPD setting did not apply to the image, but I was wrong. It does apply, as the image goes through the complete "composition" in the Pimax driver.
Now, what I found interesting (and partially disturbing) is that if everything were done right, I should be able to look at the picture and easily bring one particular square into focus. What I observed instead was that when I put the headset on with this image, my eyes would initially focus on different squares. This can be easily observed by the "fuzzy" numbers, or simply by closing one eye and then the other and reading the numbers in the particular square.
In my case, my natural focus points are at two adjacent squares. I cannot say exactly which, because the brain automatically tries to align the views to the grid, but even when aligned to the grid, the view is still aligned on the wrong square. If the same "misalignment" happens with normal content (e.g. games), it could be the cause of the eye strain many people observe.
EDIT: The following text was added on 27.2.2019 to consolidate the additional important observations I made later, which I originally posted in this thread.
Observation 1
I then tried a different test. Using the image above, I moved the headset further from my face for as long as I could still read the numbers through the lenses, because at some point they become completely distorted.
At the last moment I was able to read them, both eyes were looking at the same square (judging by the numbers). It is important to use the corresponding eye for the corresponding lens, i.e. to look with the left eye into the left lens and vice versa; otherwise one can read completely different squares.
This might seem fine, except that my eyes were not converging on the same spot; they were basically looking in parallel, each into its corresponding lens, 70+ mm apart.
Observation 2
I rechecked the natural convergence with the headset on. When I wrote above that my eyes focused (when left to focus naturally) on adjacent squares, I did not realize that they were not just adjacent, but crossed. For example, my right eye focused on square 1240 and my left eye on square 1280, i.e. the lines of sight kind of crossed each other (not literally).
This basically confirms what Observation 1 showed: the same spot in the image is rendered for each eye at a position corresponding to a parallel projection from the eyes (i.e. as if the picture were at an infinite distance), but the headset design makes the eyes focus at a finite distance, which brings the discomfort.
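To put a number on this: the vergence angle for a point straight ahead at distance d is 2·atan(IPD / 2d), and it reaches zero (parallel gaze) only as d goes to infinity. A quick check with the 70 mm IPD used here:

```python
import math

def vergence_deg(ipd_m, distance_m):
    """Angle between the two eyes' gaze directions when converging
    on a point straight ahead at the given distance."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

ipd = 0.070  # 70 mm, as dialed in
for d in (0.5, 1.0, 2.0, 10.0, 1e6):  # 1e6 m stands in for "infinity"
    print(f"d = {d:>9.1f} m -> vergence = {vergence_deg(ipd, d):.3f} deg")
```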
I do not know whether this may also apply to apps. It could, if it comes from the headset design; it may not, if it is just a poorly implemented rendering of the static image.