Pimax native projection and pre-lens warp transformation

I was wondering how Pimax internal projection works and decided to run the following experiment. I used:

  • PiTool 1.0.1.91 (rendering quality 1.0)
  • Normal FOV
  • parallel projection off
  • IPD dialed in at 70 mm

I replaced the Pimax static image (%ProgramFiles%\Pimax\Runtime\resource\pimax_default.jpg) with this particular picture:

PiTool does not have to be running; the static picture is displayed by pi_server.exe as soon as the headset is detected and initialized. Only the PiServiceLauncher.exe service needs to be restarted after changing the file. I chose 2560x1440 because it was the resolution of the original pimax_default.jpg, but as it turned out later, there is no correlation between the panel resolution and the static picture resolution.

The picture first has to be “rendered” into 3D space by being transformed into left and right “skewed” versions. It is interesting that a significant part of the image on the inner side is cut off (because of the stereo overlap), while on the outer side quite a large area is wasted. The resolution of this image, 3202x2633, corresponds to the size the headset advertises to OpenVR when queried for the recommended render target resolution (parallel projection off, PiTool at 1.0).
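As an illustration of why part of each view is cut and part wasted: the displays are canted outward by 10° (the rotation visible in the eye-to-head matrices quoted later in this thread). Here is a toy model in plain Python; the 45° frustum half-angle is an arbitrary illustrative value, not the actual Pimax FOV:

```python
CANT_DEG = 10.0  # outward cant per eye, matching the 10-degree rotation in the
                 # eye-to-head matrices reported by OpenVR later in this thread

def cropped_and_wasted(frustum_half_deg, cant_deg=CANT_DEG):
    """Toy model: an image exactly filling a forward-facing frustum [-h, +h]
    appears at [-h - cant, +h - cant] in a camera frame canted by cant_deg.
    Returns (degrees of image cut off, degrees of frustum left empty)."""
    img_lo, img_hi = -frustum_half_deg - cant_deg, frustum_half_deg - cant_deg
    cut = max(0.0, -frustum_half_deg - img_lo)    # image pushed outside the frustum
    wasted = max(0.0, frustum_half_deg - img_hi)  # frustum area showing nothing
    return cut, wasted

# With the illustrative 45-degree half-angle: 10 degrees of image cut on one
# edge, 10 degrees of frustum wasted on the other.
print(cropped_and_wasted(45.0))  # (10.0, 10.0)
```

Whatever the real FOV is, the amounts cut and wasted in this model depend only on the cant angle, which matches the qualitative observation above.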

Left eye skewed and culled version:

Right eye skewed and culled version:

Then the images are transformed to the pre-lens warp versions:

Left eye warped:

Right eye warped:

Originally I thought that the IPD does not apply to the image, but I was wrong. It applies as the image goes through the complete “composition” in the Pimax driver.

Now what I found interesting (and partially disturbing) is that, if done right, I should be able to look at the picture and easily get one particular square into focus. What I observed instead was that when I put the headset on with this image, my eyes would initially focus on different squares. This can be easily observed by the “fuzzy” numbers, or simply by closing one eye and then the other and reading the numbers in the particular square.

In my case my natural focus points are two adjacent squares. I cannot say which exactly, because the brain automatically tries to align the views to the grid, but even when aligned to the grid, the view is still aligned on the wrong square. If the same “misalignment” happens with normal content (e.g. games), it could be the cause of the eye strain many people observe.

EDIT: The following text was added on 27 Feb 2019 to consolidate the additional important observations I made later, which I originally posted further down in this thread.

Observation 1

I tried a different test. Using the image above, I kept moving the headset further from my face for as long as I was able to read the numbers through the lenses, because at some point they become completely distorted.

At the last moment I was able to read them, both eyes were looking at the same square (judging by the numbers). It is important to use the corresponding eye for the corresponding lens, i.e. look with the left eye into the left lens and vice versa; otherwise one can read completely different squares.

Which might seem OK, except that my eyes were not converging on the same spot; they were basically looking in parallel, each into its corresponding lens, 70+ mm apart.

Observation 2

I rechecked the natural convergence with the headset on. When I wrote above that I was focusing (when letting the eyes focus naturally) on adjacent squares, I did not realize that they were not just adjacent, but crossed. So for example, my right eye focused on 1240 and my left eye on 1280, i.e. the lines of sight kind of crossed each other (though not literally).

This confirms what Observation 1 showed. The same spot in the image is rendered, for each view, at a position corresponding to a parallel projection of the eye axes (i.e. as if the picture were at an infinite distance), but the headset design makes the eyes focus at a finite distance. That mismatch brings the discomfort.
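The mismatch can be put into numbers with a small sketch (plain Python; the 2 m virtual image distance is an assumed illustrative value, since the actual focal distance of the Pimax lenses is not given here):

```python
import math

IPD_M = 0.070  # the IPD dialed in on the headset in this experiment

def vergence_deg(image_distance_m, ipd_m=IPD_M):
    """Angle each eye rotates inward to fixate a point straight ahead at the
    given distance. An image at infinity means 0 degrees (parallel eyes)."""
    if math.isinf(image_distance_m):
        return 0.0
    return math.degrees(math.atan((ipd_m / 2) / image_distance_m))

# Rendered as a parallel projection (image at infinity): eyes stay parallel.
print(vergence_deg(math.inf))        # 0.0
# If the lenses place the virtual image at ~2 m (assumed value), the eyes
# would naturally converge about 1 degree each -- hence the mismatch.
print(round(vergence_deg(2.0), 2))   # 1.0
```

Even a ~1° per-eye vergence error is well above what the eyes tolerate comfortably over a long session, which would be consistent with the strain reports.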

I do not know whether this also applies to the apps. It could, if it comes from the design; it may not, if it is just a poorly implemented rendering of the static image.


I cannot keep the pimax_default.jpg image up for more than a few seconds. It immediately switches to the planet scene (Pimax VR Home).

How can I leave the static image?

Just disable “Start Pimax VR Home” in PiTool->Settings->General.


Thx!!

What you are saying is that each of your eyes focuses on a different square, but if everything were “right” they should be focused on the same square?

Is it possible that your natural IPD is not symmetrical? I.e. one of your eyes is a little further from the center of your face than the other?

I wonder if this could be the result of a manufacturing issue. The lens alignment wouldn’t need to be off by much to completely ruin the experience.

Yes, I am saying that if the picture were rendered with the correct parameters, i.e. at the distance my eyes perceive it to be, then when I put the headset on, my eyes should naturally pick the same square to focus on, regardless of whether it is directly in the middle or slightly off (if it were largely off center it probably would not work, but that is not the point of this exercise).

This is an interesting point you make, since as a matter of fact I have an asymmetrical IPD. (My eyes are 35.5 and 36.5 mm from the nose, or you may say my nose is 0.5 mm off center :slight_smile:). This, however, should not impact the results.

Imagine you stand in front of a picture on the wall and look at it. Then you move slightly to the side. You may still focus on the same spot, or on a spot slightly next to it (if you manage to keep your gaze fixed in the same direction), but you would still “naturally” focus on the picture.

It is the same with the headset. Even with my IPD being asymmetrical (or my nose off center), in the worst case, if I keep looking straight ahead, maybe I would not focus on the dead center of the picture (or on the same square as you do), but I should still focus on the same square with both eyes.


Very interesting findings. I know it’s extremely hard, as our brains are so adept at interpreting what is “meant to be seen”, but are you able to discern just how far apart the relative focal points per eye are, as in, how many squares apart?

Also, what facial interface foam are you using? I am now on a 16mm foam and find it a considerably more comfortable viewing experience than with the stock interface.

Using 6 mm foam I need a completely different pre-lens distortion than at 16 mm.

This is an interesting and important point I forgot to consider. I am actually using a buffed-up cushion, with the top part ~20 mm thick. I am also using an IPD setting of 70 mm, while my real IPD is 72 mm. Both mods put my eyes further from the designed spot, as both make the lenses’ optical axes converge in front of my eyes (assuming my eyes sit at the depth for which the Pimax was designed).




That’s really interesting. What kind of conclusions have you come to so far? Do you feel that the software IPD is a little off?

Someone mentioned this link to manually adjust the software IPD in SteamVR, so maybe there’s a way to run the same image through SteamVR and see if you can improve things.


So far I cannot draw a definitive conclusion. As I already wrote, I know that the static image rendering is wrong, because the same spot (square) in the image is rendered at the parallel projection of the eye axes. This is wrong because it gives the brain the impression that it is looking at infinity, while the focal distance of the lenses is relatively close. The natural solution would be to render the static image at the natural focal distance of the lenses.

But I cannot confirm that the same happens with games; for that I would need to run a test app with a similar image, but rendered through OpenVR. Chances are, though, that the whole setup is off at a fundamental level.


Someone wrote a test app for finding the IPD on the HTC Vive that appears to be what you’re looking for. It’s just an image that is rendered, much like the Pimax static image, before loading SteamVR.

I tried to look through the files, but it doesn’t have a single image you can edit, so it might be a complex scene created in Unity that has the target and text as an image. However, if you can figure out how it’s made, you can build your own version to manipulate as you wish for testing the SteamVR/OpenVR rendering pipeline in conjunction with PiTool.


Putting together a static scene in Unity should be fairly easy. I would also be interested to know what difference parallel projection (PP) makes.

Thanks for finding this info.

My Vive Pro (GearVR lens mod) eye strain is now sorted by adding 0.0025 to the SteamVR IPD offset. Now to find a solution for the Pimax; I will try subtracting 0.0025.

After running SteamVR, go to the following address in your internet browser:

http://localhost:8998/console/index.html

then input one of the following examples:

settings steamvr.ipdOffset 0 (resets to the default of 0)

settings steamvr.ipdOffset -.0025 (reduces the SteamVR IPD separation by 2.5 mm)

settings steamvr.ipdOffset +.0025 (increases the SteamVR IPD separation by 2.5 mm)

settings steamvr.ipdOffset <offsetInMeters>

Reboot headset to apply

Try it at your own risk
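For reference, the console saves this value into the steamvr.vrsettings file (as noted further down in this thread), so it can also be set by editing that file directly. A minimal sketch, assuming the usual Steam install path and that the key lives in the "steamvr" section:

```python
import json
from pathlib import Path

# Typical location on Windows -- an assumption; adjust for your Steam install.
SETTINGS = Path(r"C:\Program Files (x86)\Steam\config\steamvr.vrsettings")

def set_ipd_offset(path: Path, offset_m: float) -> None:
    """Write steamvr.ipdOffset (in meters) into a steamvr.vrsettings file."""
    cfg = json.loads(path.read_text()) if path.exists() else {}
    cfg.setdefault("steamvr", {})["ipdOffset"] = offset_m
    path.write_text(json.dumps(cfg, indent=3))

# Example: reduce the separation by 2.5 mm (run with SteamVR closed):
# set_ipd_offset(SETTINGS, -0.0025)
```

Editing the file with SteamVR closed avoids the runtime overwriting your change on exit.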


I have reduced the SteamVR IPD separation by 0.00175 (i.e. 1.75 mm) in the SteamVR software, and the Pimax image does feel more comfortable for my eyes:

settings steamvr.ipdOffset -.00175

Do you have any means to verify that this change is actually applied to the rendering?

I have been playing with this setting and it does not seem to do anything. When I check my settings in the web console, I see these two values:

Tue Feb 26 2019 15:28:40.289 - [Console] steamvr.ipd: 0.063000001013278961
Tue Feb 26 2019 15:28:40.289 - [Console] steamvr.ipdOffset: 0

The steamvr.ipd value is not correct (I have the Pimax set to ~70 mm), but it has been like this since the beginning and I think it does not actually make a difference.

When I check IVRSystem::GetEyeToHeadTransform, which uses the real IPD reported by the headset, it reads as follows.

Left eye to head:
[[ 0.9848078   0.          0.17364816 -0.03515173]
 [ 0.          1.          0.          0.        ]
 [-0.17364816  0.          0.9848078   0.        ]]

Right eye to head:
[[ 0.9848078  -0.         -0.17364816  0.03515173]
 [ 0.          1.         -0.          0.        ]
 [ 0.17364816  0.          0.9848078   0.        ]]

Note the values at [0,3], which define the X translation and are basically the offsets of the eyes on the X axis. Together they sum up to the IPD, ~70 mm.
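The quoted matrices can be checked numerically in plain Python: the difference of the [0][3] entries gives the IPD, and the rotation part gives the outward cant of the displays:

```python
import math

# 3x4 eye-to-head matrices as reported by IVRSystem::GetEyeToHeadTransform
left = [
    [ 0.9848078,  0.0,  0.17364816, -0.03515173],
    [ 0.0,        1.0,  0.0,         0.0],
    [-0.17364816, 0.0,  0.9848078,   0.0],
]
right = [
    [ 0.9848078,  0.0, -0.17364816,  0.03515173],
    [ 0.0,        1.0,  0.0,         0.0],
    [ 0.17364816, 0.0,  0.9848078,   0.0],
]

# The [0][3] entries are the per-eye X offsets; their difference is the IPD.
ipd_m = right[0][3] - left[0][3]
print(f"IPD: {ipd_m * 1000:.1f} mm")  # IPD: 70.3 mm

# The rotation part encodes the outward cant of each display/eye axis.
cant_deg = math.degrees(math.asin(left[0][2]))
print(f"cant per eye: {cant_deg:.1f} deg")  # cant per eye: 10.0 deg
```

The 10° rotation is simply the canted-display geometry of the headset, which is also why parallel projection exists as a compatibility option.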

Once I change the value in the console, e.g. by 10 mm, with

settings steamvr.ipdOffset -0.010

it is correctly acknowledged and saved into the steamvr.vrsettings file. But when I then reboot the headset, restart SteamVR, and check the eye-to-head transform again, I see exactly the same matrices as above.

It means that for the application there is no change, and it will still render the images for an IPD of ~70 mm. So I wonder what exactly this setting changes?

I first tried it with a large number like -.03 on my old Vive, and the images were completely separated, sort of a goldfish view, so the console definitely makes a difference.
I have not tried any large changes on the Pimax; maybe it’s a placebo effect there. I will try again later on.
It definitely works with the Vive and Vive Pro though.