Optical Calibration: In-depth Info on the Hardware and Software IPD Functions in the Newest PiTool Version

Yes. What is more unfortunate is that if someone tries to apply the old “best practice”, they will most likely make things worse.

The major consequence is that the user can no longer compensate for the “wrong” lens position (in cases where the wrong lens position feels more comfortable) with the software IPD offset. Now (as I wrote above), the IPD for the virtual cameras is determined by (and only by) the hardware setting. So if you do not set your real IPD, the scene will be rendered with the wrong virtual camera positions.
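A minimal sketch (with illustrative names and units; this is not Pimax’s actual code) of what it means for the virtual cameras to follow the hardware setting alone:

```python
# Illustrative sketch: how a VR runtime could derive virtual camera (eye)
# positions from the hardware IPD reading. Names and values are made up.

def eye_positions(head_center_x, ipd_m):
    """Return the x-coordinates of the left/right virtual cameras (meters)."""
    half = ipd_m / 2.0
    return head_center_x - half, head_center_x + half

# With the new behavior, only the hardware dial feeds this function:
left, right = eye_positions(0.0, 0.064)  # dial at 64 mm
# If your real IPD is 68 mm but the dial sits at 64 mm, each camera is
# 2 mm too close to the center and the stereo geometry is wrong.
```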

Good question. I have no idea. As it shifts the image across the display, it could be anything Pimax devised as convenient. The way to figure it out would be to try to capture the final (post-warp) image from the Pimax pipeline and check the image shift. But this will only work if the shift is done in the compositor and not on the headset itself. I am not going to spend my time on that, for obvious reasons.
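For what it is worth, if someone did capture the final frames before and after changing an offset, measuring the shift would be simple. A pure-Python sketch on synthetic scanline data (the function and the data are made up for illustration; real captures would be 2-D images):

```python
def best_shift(ref, moved, max_shift=4):
    """Find the integer shift s that best aligns `moved` to `ref`,
    by maximizing the overlap correlation (a crude 1-D cross-correlation)."""
    best_s, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(ref[i] * moved[i + s]
                    for i in range(len(ref))
                    if 0 <= i + s < len(moved))
        if score > best_score:
            best_s, best_score = s, score
    return best_s

# A synthetic scanline and the same line shifted right by 2 pixels:
ref   = [0, 0, 1, 3, 1, 0, 0, 0]
moved = [0, 0, 0, 0, 1, 3, 1, 0]
print(best_shift(ref, moved))  # 2
```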

In general there are two fundamental problems in the new implementation:

  1. It removes the possibility for the user to set up the geometry right (in cases where the “correct” IPD setting is not comfortable and they need to use a different lens position). In other words, whenever you set the hardware IPD to anything other than your real IPD, you are screwing up the rendering geometry for yourself.

  2. It shifts the image across the screen but does not change the warping, which means that along with a visible shift it will also introduce visible distortion, because the (pre-lens) warping will no longer match the image position on the screen.
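Point 2 can be illustrated with a toy radial distortion model (the coefficient, the radii, and the single-term model below are all made-up simplifications of a real lens profile):

```python
# Toy barrel-distortion model: the pre-lens warp is computed for one image
# position; shifting the image afterwards makes the lens act at the wrong radius.

def barrel_warp(r):
    """Toy radial warp: r is the distance from the lens axis (normalized)."""
    k1 = 0.25  # made-up distortion coefficient
    return r * (1 + k1 * r * r)

r_intended = 0.5  # radius the warp was computed for
r_actual = 0.6    # radius after the whole image was shifted sideways

# The lens now "undoes" a distortion that was computed for a different radius:
error = barrel_warp(r_actual) - barrel_warp(r_intended)
# error is nonzero -> the pre-lens warp no longer matches the image position,
# i.e. visible residual distortion, exactly as described above.
```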


I appreciate the information very much! Thank you for spending your time in the forum despite being a happy Vive Pro user by now!

I still have a few questions left.
The lens distortion or warping issue might not be the biggest problem: the default warping is mostly not perfect anyway, and affecting it with the offset might make it worse for some but better for others. My finding is that the warping is not heavily affected as long as you use the H-offset in small amounts (values of 0.1 to 1.0). The IPD dial has more effect on the warping, maybe because Pimax actively changes the warping profile according to the IPD position. The IPD wheel, on the other hand, does not shift the images so significantly sideways. In the end it is unclear to me what it actually does. It has a subtle effect, but when looking through one eye only, it seems like almost nothing happens. The H-offset, in contrast, jumps significantly sideways even when changed by only 0.1.

Regarding your point that Pimax is doing some processing “in the headset”: I guess you mean that Pimax/PiTool is “post-processing” the image after it was produced by the renderer. Could it not be that, on top of the image shift, they do additional post-processing like warping?
Let’s use the V-offset as an example. I found that depending on the IPD or H-offset setting, you need to apply a different V-offset to reduce barrel distortion and get a straight picture when turning the head from left to right. Any idea why that is? Is it an indication that it affects warping or post-processing? With my 8KX I need to set a 2.5 offset between the left and right image, as there is a hardware misalignment between my two panels (otherwise I see double vision). Does it mean I will never have identical warping profiles for both eyes?

Can you give your opinion on these points? I feel that we are slowly coming to the end of the discussion. So don’t worry, I will not bother you much more :wink:

The error is always progressive, so the less you deviate from the norm, the less error you get. But that does not change the fact that the behavior is not correct.
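As a trivial numeric illustration (assuming, for simplicity, that the error scales linearly with the deviation):

```python
def separation_error(real_ipd_mm, dial_ipd_mm):
    """Relative stereo-separation error when the virtual cameras follow
    the dial instead of the real IPD (simplified linear model)."""
    return abs(dial_ipd_mm - real_ipd_mm) / real_ipd_mm

print(round(separation_error(68.0, 67.0), 3))  # 0.015 -> ~1.5% off
print(round(separation_error(68.0, 62.0), 3))  # 0.088 -> ~8.8% off
```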

The IPD dial changes the position of the lenses and the position of the image on the display. But I do not believe that Pimax changes the warping profile based on the IPD (even though it should!). It is hard for a human to judge, though, because we cannot move our eyes the same way we move the lenses. So, apart from at the correct position, there will always be a “residual” distortion, coming from the fact that the eyes do not look through the lenses the way they should, i.e. the way for which the warping (the pre-lens warp distortion) has been calculated.

This is because the lens shift masks the image shift, in a way. This is an optical behavior of the lenses and may seem counterintuitive. But the image does shift, as you would see on the final images which are sent to the headset, if you could access them.

This is easier to spot, because the lenses do not move. It also depends on the distance the image moves.

The scene is rendered by the application and then passed to the “compositor” (at least this is what some pipelines call it), where the image is “warped” to compensate for the lens distortion. This all happens on the PC (for regular headsets). What I meant was that the image shift on the display is a relatively simple task, which could also be implemented directly on the headset, but could just as well (and relatively easily) be implemented on the PC as part of the image “composition”. For the user it does not matter.
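The shift itself really is a simple operation. A sketch of such a post-warp translation on a single scanline (pure illustration, not the actual compositor code):

```python
def shift_row(row, shift_px, fill=0):
    """Shift one image row right (positive) or left (negative), filling
    the vacated pixels with a constant color."""
    if shift_px >= 0:
        return [fill] * shift_px + row[:len(row) - shift_px]
    return row[-shift_px:] + [fill] * (-shift_px)

row = [1, 2, 3, 4, 5]
print(shift_row(row, 2))   # [0, 0, 1, 2, 3]
print(shift_row(row, -1))  # [2, 3, 4, 5, 0]
```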

This would unnecessarily tax the pipeline. The correct way would be to do only one warping of the rendered image and do it right the first time. But, as I wrote, I do not believe that the warping Pimax does reflects the IPD setting or the image shift setting.

The optics/math for a high-FOV headset is quite complex and I believe Pimax simply does not know how to fix it. The offsets as currently implemented can compensate for only one thing: a display that is physically misplaced in the headset. In other words, if the display is in the wrong place because of large manufacturing tolerances or an error, then, by playing with the offsets (if you do not mind potential rotation as well), you can position the image so that it is in the right place (relative to the lens). But this is something which should be done at the manufacturing phase, not by the end user.

On the other hand, if the optics are wrong, or the math is wrong, or simply the design is wrong, no amount of image shifting will fix it. The discrepancy may not bother you, but that still will not make it correct.


I think I found a use case for the new implementation of the H-offset. If you use small values with the same magnitude but opposite sign, you can shift the complete VR image further left or right on the headset without changing stereo separation or scale. If you give L -0.2 and R +0.2, almost the same 3D image is produced, but you look through the lenses at a slightly different angle. It could help to get both eyes sharp, which is especially problematic with the 8KX!
It would also help if the screens are misaligned at the hardware level. My 8KX is basically not usable without changing the V-offset for one screen. Similar alignment issues could also exist on the horizontal axis, although they would be much harder to detect and to counter correctly.
However, it would be nice if we could also get back the Soft IPD function as it was previously implemented, on top of this (with a different name and a different slider).
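The opposite-sign trick described above can be sketched numerically. The assumption here (hypothetical, inferred only from the observed behavior) is that the H-offset sign convention is mirrored per eye, so L negative / R positive moves both panel images in the same physical direction; 0.25 is used below only so the floating-point arithmetic is exact:

```python
def panel_shift(eye, h_offset):
    """Convert a per-eye H-offset into a shift in common panel coordinates,
    assuming (hypothetically) that the sign convention is mirrored per eye."""
    return -h_offset if eye == "left" else h_offset

l0, r0 = -32.0, 32.0  # nominal per-eye image centers (illustrative units)
l1 = l0 + panel_shift("left", -0.25)   # L with a negative offset
r1 = r0 + panel_shift("right", +0.25)  # R with the opposite offset

print(r1 - l1 == r0 - l0)  # True: stereo separation is unchanged
print(l1 - l0, r1 - r0)    # 0.25 0.25 -> both images moved the same way
```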

Is there any way or tool to affect PiTool settings without using the PiTool UI? Are there entries in the Windows Registry, maybe? A command-line tool? The functionality is probably still there, just not available in the UI.

AFAIK, the only way to control the headset is through the C API. There is a project on GitHub (GitHub - OpenMAR/PiTool: Provide interactive interfaces for Pimax products (such as HMD, controllers, base stations, etc.) to users), which was supposed to be the start of a Pimax OSS effort, but it never got updated and the other parts were never released. There is a DLL (in binary form), which you could reverse engineer to figure out how to interface with the headset. But this DLL is so outdated that I would not bother.


Thanks for sharing this @risa2000!

I have a physical IPD of 67.9 and had resorted to running the 8KX with a hardware IPD of around 65 to use more of what appeared to be the clearest part of the lens when focusing straight ahead at infinity in VR.

Using your pattern with said low hardware IPD, I immediately perceived ‘merged’ numbers; I then increased the IPD to my physical IPD of 67.9 and, lo and behold, I perceived the numbers correctly.

Alas, as soon as I close and open my eyes within the HMD, the numbers appear ‘merged’ again. I tried this over a range of hardware IPD settings and the result is that, as long as I keep my eyes open whilst adjusting the hardware IPD, I will suddenly perceive a clear image, but only until I either close my eyes or take off the HMD and put it back on.

It appears as though my eyes are physically dragged along whilst adjusting the IPD, but at no point do they see a clearly uniform image, perceived by both eyes at the same time, in a natural resting position.

Is this to be expected?

I have yet to test how the H-offset steps improve the overall experience.

I cannot vouch for it, as I did the tests related to “matching the numbers” a long time ago (https://community.openmr.ai/t/pimax-native-projection-and-pre-lens-warp-transformation/15775), but it seemed (at that time) that the “default” image was not rendered correctly, or, to be more specific, it was rendered as if it were at an infinite distance and you had your eyes looking totally parallel. This is not the default eye behavior, because most of the time we are focusing on something closer than infinity and therefore have the eyes converge a bit. Plus it seems that the virtual focal distance is somewhere around 0.5-1 m, so having the eyes looking parallel while at the same time focusing at 0.5 m could produce some sensory discrepancy.
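The convergence angles involved are easy to put numbers on (a quick check; the 64 mm IPD used here is just an example value):

```python
import math

def convergence_deg(ipd_m, distance_m):
    """Total convergence angle of the two eyes when fixating a point
    straight ahead at the given distance, in degrees."""
    return 2 * math.degrees(math.atan((ipd_m / 2) / distance_m))

print(f"{convergence_deg(0.064, 0.75):.1f} deg")  # ~4.9 deg at 0.75 m
print(f"{convergence_deg(0.064, 1e9):.6f} deg")   # ~0 deg: parallel at infinity
```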

So I cannot really recommend using the default image for setting the right IPD. I will stress this in the original post as well; it was meant more as a tool for testing how these offsets behave, as you can easily spot the relative image shift.

On the other hand, you can use it at least to align the vertical offsets (if there is a need). Concerning the “merging numbers”: the idea is that if you put on the headset and look into it, the eyes should naturally focus on the same numbers (squares). The pattern will, however, force the eyes to align on the squares, so the “focus must come naturally” part is important, because with some nudging and convincing the eyes may actually focus on the same numbers, just not completely naturally.

When I tested it, though, it seemed that it was not difficult to determine whether it was natural or slightly “guided”.


That absolutely makes sense and would explain why at a given focus (read: eyes perfectly parallel) I get a clear and correct image, yet after focusing at the virtual focal distance there is a mismatch, regardless of the IPD values.

Thanks for your feedback!


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.