How to enable native Pimax/wide FOV support in games?

Please provide me with something I can send to a dev that will let them stop requiring parallel projections in their game. I have a dev on board, but I cannot explain to him how to drop the requirement.

The dev said ‘I am indeed using GetEyeToHeadTransform. What should I change it to?’

My understanding is that it isn’t that hard to do; the dev just has to know what to change, and I don’t know what to tell him. If we Pimax users want better support for our headsets, we need to be able to tell devs exactly what to do.

9 Likes

I’ve never used GetEyeToHeadTransform(), but here’s some info I found:

[The software] makes an incorrect assumption that both displays are pointed straight forward. That’s not the case for Pimax, StarVR, […] Point is, GetProjectionMatrix and GetEyeToHeadTransform may contain rotation as well as translation. Supporting it is generally easier than not; it fails because you’re specifically rebuilding the same transformation from incomplete data.

Source: Please support canted displays, used in wide angle VR headsets like Pimax | Frontier Forums

IVRSystem’s GetEyeToHeadTransform gets you matrices that relate the eyes to the head. You get the head matrix by querying the pose of the HMD. GetProjectionMatrix similarly gets you a per-eye projection matrix.

I recommend you read the comments in the header; they’re very descriptive and outline how you’d typically use the head/eye matrices in a pipeline: openvr/headers/openvr.h at master · ValveSoftware/openvr · GitHub

You’re going to need to render your scene twice, each with a camera derived from the head & eye matrices and the corresponding projection matrix.

Source: How to get the pose and the orientation of the eyes of the HMD ? · Issue #128 · ValveSoftware/openvr · GitHub
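
To make that concrete, here’s a rough sketch of the typical loop (untested; vrSystem, nearClip, and farClip are placeholders for whatever the engine already has):

vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

// Head pose: the device-to-world transform of the HMD.
const vr::HmdMatrix34_t headPose =
    poses[vr::k_unTrackedDeviceIndex_Hmd].mDeviceToAbsoluteTracking;

for (vr::EVREye eye : {vr::Eye_Left, vr::Eye_Right})
{
    // May contain rotation (canted displays) as well as the IPD offset.
    const vr::HmdMatrix34_t eyeToHead = vrSystem->GetEyeToHeadTransform(eye);
    const vr::HmdMatrix44_t proj =
        vrSystem->GetProjectionMatrix(eye, nearClip, farClip);

    // The camera's world pose is headPose * eyeToHead; invert that to get
    // the view matrix, then render the scene with this view and proj.
}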

It sounds like the dev just needs to use the angle associated with each eye when setting up each viewport, instead of assuming both viewports face straight ahead: the view direction needs to include the eye angle on top of the head angle.

GetEyeToHeadTransform
Returns the transform from eye space to the head space. Eye space is the per-eye flavor of head space that provides stereo disparity. Instead of Model * View * Projection, the sequence is Model * View * Eye^-1 * Projection.

Normally View and Eye^-1 will be multiplied together and treated as View in your application.

Source: openvr/headers/openvr.h at master · ValveSoftware/openvr · GitHub
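
A sketch of that chain, using the header’s row-vector convention (ToMat4, Invert, and Multiply stand in for whatever matrix helpers the engine has):

// Fold the inverted eye-to-head transform into the view matrix, as the
// header comment suggests, and keep the rest of the pipeline unchanged.
Mat4 eyeToHead = ToMat4(vrSystem->GetEyeToHeadTransform(eye)); // 3x4 -> 4x4
Mat4 viewEye = Multiply(view, Invert(eyeToHead));              // View * Eye^-1
// ...then render with Model * viewEye * Projection.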

6 Likes

Apparently, there is still some info missing. What exactly does not work (when parallel projection is not used)? Does the developer use an off-the-shelf engine or his own?

As you already guessed, the game (engine) should handle non-parallel cameras (defined by IVRSystem::GetEyeToHeadTransform) and render the stereo views accordingly.

1 Like

Hello, I’m the dev @crispybuttphd was talking about. My project is Quake VR, a mod for Quake 1 (1997) which aims to turn the classic game into a first-class VR experience. My project is free and open-source - the source code is available on GitHub.

I only own a Valve Index, so I cannot really test the game on any other headset. I contacted Pimax at globalbusiness@pimax.com about providing a developer kit, but have not received a reply yet.

Regardless, the engine I am using is a significantly modded version of QuakeSpasm, which is built on top of Quake 1’s original source code. The VR rendering setup code can be found in the VR_UpdateDevicesOrientationPosition function here. Here’s the part where I use GetEyeToHeadTransform:

// Position of HMD
vr::HmdVector3_t headPos = Matrix34ToVector(
    ovr_DevicePose[iDevice].mDeviceToAbsoluteTracking);

// Quaternion for HMD orientation
vr::HmdQuaternion_t headQuat = Matrix34ToQuaternion(
    ovr_DevicePose[iDevice].mDeviceToAbsoluteTracking);

// Left eye position
vr::HmdVector3_t leyePos =
    Matrix34ToVector(ovrHMD->GetEyeToHeadTransform(eyes[0].eye));

// Right eye position
vr::HmdVector3_t reyePos =
    Matrix34ToVector(ovrHMD->GetEyeToHeadTransform(eyes[1].eye));

// Rotate eye positions by HMD quaternion
leyePos = RotateVectorByQuaternion(leyePos, headQuat);
reyePos = RotateVectorByQuaternion(reyePos, headQuat);

// Adjust positions by in-game rotation (thumbstick rotation)
HmdVec3RotateY(headPos, -turnYaw * M_PI_DIV_180);
HmdVec3RotateY(leyePos, -turnYaw * M_PI_DIV_180);
HmdVec3RotateY(reyePos, -turnYaw * M_PI_DIV_180);

// Attach left eye to head
eyes[0].position = AddVectors(headPos, leyePos);
eyes[0].orientation = headQuat;

// Attach right eye to head
eyes[1].position = AddVectors(headPos, reyePos);
eyes[1].orientation = headQuat;

I am not sure how to add Pimax support and I have no way of testing it. I’ve looked at a few online resources but - to be honest - nothing was helpful. I would really appreciate a hand in adding support for Pimax HMDs here :slight_smile:

9 Likes

Thank you for your efforts. I really enjoyed Quake 1 when it was first released and have replayed it using QuakeSpasm. I’m a software engineer who decades ago actually wrote some games and worked on a Quake mod.

My free time is currently very limited, so I cannot make any promises at this time. I’ll try to do a test of your code (probably sometime this weekend). I’ve programmed in OpenGL, but have no experience with quaternions. Assuming you’re using MsDev, I might even be able to tweak your eye view code.

6 Likes

Awesome to hear that! I’m happy to help you get the project compiling and to guide you through the relevant places in the code, to minimize the amount of time you have to spend on this. Either send me an email or join our Discord Quake VR | Discord Me and I’ll be happy to help out.

2 Likes

I am not sure what Matrix34ToVector is supposed to extract. Is it the translation part? (The Matrix34 is an affine transformation consisting of both a translation and a rotation.)

Besides that, it seems you are missing the rotation part in your code.
Have a look here: https://risa2000.github.io/hmdgdb/hmd_cfgs/Pimax5KPlus_Normal_Native_90Hz.html. The geometry section also lists both EyeToHead transformation matrices (you can compare them with the same matrices reported for your Index).

Plus, you can test your code on the Index if you put it into “raw camera” mode (which is disabled by default). This enables the native geometry, with camera views canted by 5° (https://risa2000.github.io/hmdgdb/hmd_cfgs/Index_Native_144Hz.html).

4 Likes

Just curious: does “raw camera” mode require fewer pixels to be drawn on the Index, just like Pimax’s no-PP mode?

1 Like

From the data I got from @jojon, it does not seem so.

1 Like

Thanks :slight_smile: But the default mode has no rotation on the eye orientation, right? (Parallel views: an identity rotation with the IPD offset only?)

1 Like

Awesome thread! It would be great if we could easily explain to developers which changes they’d need to make. Wouldn’t it be possible to get a Pimax developer to reply here, @Konger @PimaxQuorra? It would be great if we could point any developer to this thread or to some example code showing how to make their project compatible with native Pimax wide FoV.

3 Likes

I’d second that.
I’ve just pointed a dev team in the right direction, but the information seems too unstructured and spread out. That’s horrible for non-native English speakers.

They weren’t getting the viewports aligned correctly from the information the API returns. It seems it was mostly about the eye rotation as well; the view went fish-eyed without PP.

If there’s no guide on implementing this properly via the OpenVR API, something to walk through from a to z once, then it’s also not something you’d consider the most cost-effective use of your resources, given how few Pimax headsets there are. Polls in that simulation have indicated 10% use Pimax, and still. There just has to be a comprehensive a-to-z guide somewhere. Doesn’t openvr/openvr.h at master · ValveSoftware/openvr · GitHub properly return the eye rotation of 10° / -10°?

2 Likes

Of course it does:

Left eye to head transformation matrix:
    [[ 0.984808,  0.      ,  0.173648, -0.03502 ],
     [ 0.      ,  1.      ,  0.      ,  0.      ],
     [-0.173648,  0.      ,  0.984808,  0.      ]]

Right eye to head transformation matrix:
    [[ 0.984808, -0.      , -0.173648,  0.03502 ],
     [ 0.      ,  1.      , -0.      ,  0.      ],
     [ 0.173648,  0.      ,  0.984808,  0.      ]]
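
For reference, those numbers decode directly: 0.984808 = cos 10° and 0.173648 = sin 10°, so each eye is rotated 10° outward about the vertical axis (20° between the views), and the ±0.03502 m translations are the half-IPD offsets (70.04 mm total).
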
2 Likes

The API returns the correct matrix (if the user has PP off), but the game math has to be adjusted. I guess what happened is that without canting the matrix only contained the IPD offset (the rest being identity), so developers excluded it from the camera orientation math; maybe an example of premature optimization.

The biggest issue is that just fixing the camera orientation is not always enough. Sometimes reflections and shadows need to be adjusted as well.

3 Likes

BTW, this sample shows how to use GetEyeToHeadTransform (note the inversion): GitHub - JamesBear/directx11_hellovr: A helloworld OpenVR program written with DirectX11. Of course, this is a minimal, simplified version, but it gives the idea.

Another important piece (not huge for Pimax’s Large FOV, but every bit helps) is implementing the hidden area mask; I extended that sample to do that, if anyone needs it.
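
For anyone curious, the mesh query itself is a one-liner; the work is drawing the returned triangles into the depth or stencil buffer before the scene pass for each eye, so the hidden pixels are rejected early. A minimal sketch, assuming a vrSystem pointer:

// The hidden area mesh is a list of 2D triangles (unTriangleCount * 3
// vertices) in normalized 0..1 viewport coordinates for the given eye.
const vr::HiddenAreaMesh_t mesh = vrSystem->GetHiddenAreaMesh(vr::Eye_Left);
if (mesh.unTriangleCount > 0)
{
    // Upload mesh.pVertexData (vr::HmdVector2_t) and draw it into the
    // depth/stencil buffer before rendering the scene for this eye.
}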

3 Likes

Thanks! It looks like the pertinent code is in CameraClass::Render(), in dxHelloworld1/cameraclass.cpp

I’d say the meat is in the GetCurrentViewProjectionMatrix function (at least in my local copy :D)

2 Likes

I believe so:

[[nodiscard]] vr::HmdVector3_t Matrix34ToVector(
    const vr::HmdMatrix34_t& in) noexcept
{
    vr::HmdVector3_t vector;

    // The translation lives in the 4th column of the 3x4 affine transform.
    vector.v[0] = in.m[0][3];
    vector.v[1] = in.m[1][3];
    vector.v[2] = in.m[2][3];

    return vector;
}

I think I understand the problem. I am just getting the position of the eyes relative to the HMD, and then setting the rotation of each eye to the same rotation as the HMD:

leyePos = RotateVectorByQuaternion(leyePos, headQuat);
reyePos = RotateVectorByQuaternion(reyePos, headQuat);

HmdVec3RotateY(headPos, -turnYaw * M_PI_DIV_180);

HmdVec3RotateY(leyePos, -turnYaw * M_PI_DIV_180);
HmdVec3RotateY(reyePos, -turnYaw * M_PI_DIV_180);

eyes[0].position = AddVectors(headPos, leyePos);
eyes[1].position = AddVectors(headPos, reyePos);
eyes[0].orientation = headQuat;
eyes[1].orientation = headQuat;

I think this means that I am throwing away all the information regarding individual eye rotation. I guess that eyes[n].orientation should not simply be headQuat; somehow I need to use the rotation part of the GetEyeToHeadTransform result. Maybe convert it to a quaternion and then multiply it with headQuat?
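
Something like this, perhaps? An untested sketch, reusing my existing Matrix34ToQuaternion helper (QuatMultiply stands in for whatever quaternion product ends up in the code):

// Keep the rotational part of the eye-to-head transform by converting it
// to a quaternion and composing it with the head orientation.
const vr::HmdMatrix34_t eyeToHead = ovrHMD->GetEyeToHeadTransform(eyes[0].eye);
const vr::HmdQuaternion_t eyeQuat = Matrix34ToQuaternion(eyeToHead);

// World orientation of the eye = head orientation, then the eye cant.
eyes[0].orientation = QuatMultiply(headQuat, eyeQuat);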

I googled a bit, but couldn’t find a way to enable that (“raw camera” mode on the Index). Any info?

1 Like

I do not know how to force the Index to request non-parallel views, and I can’t get the math right for you because that’s not something I do daily, but I’ll share one mathematical trick that helped me validate camera math for canted displays in the past.

After you have the camera orientations right (by respecting rotation and not just translation), calculate the angle between the view vectors of the two cameras. If it matches twice the cant angle (20° for Pimax), the orientation is right. Also pay attention to the sign of the eye translation; it’s easy to get the eyes flipped :slight_smile:
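
If it helps, a tiny self-contained version of that check (plain float arrays; any vector type works):

#include <cmath>

// Angle in degrees between the two cameras' forward (view) vectors.
// For correctly canted cameras this should be ~2x the per-eye cant:
// 20 degrees on a Pimax, 10 degrees on an Index in native (non-parallel) mode.
float AngleBetweenDeg(const float a[3], const float b[3])
{
    const float dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    const float la = std::sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
    const float lb = std::sqrt(b[0] * b[0] + b[1] * b[1] + b[2] * b[2]);
    return std::acos(dot / (la * lb)) * 57.29578f; // radians -> degrees
}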

1 Like

Yes, you understand it correctly. If you prefer to do your math in quaternions, you need to convert the rotational part of the EyeToHead transformation to a quaternion and apply it to the head pose. Just keep in mind that EyeToHead is a transformation from eye space to head space, while you may need the inverse for your calculation.

https://community.openmr.ai/t/closer-look-at-pimax-parallel-projection/20510/39
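
One more note on that inversion: if the quaternion is a pure rotation (unit length), its inverse is just the conjugate, e.g.:

// Inverse of a unit quaternion: negate the vector part.
vr::HmdQuaternion_t QuatConjugate(const vr::HmdQuaternion_t& q)
{
    return {q.w, -q.x, -q.y, -q.z};
}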

2 Likes