Please provide me with something I can send to a dev that will enable them to stop requiring parallel projections in their game. I have a dev willing to do it, but I cannot explain to him how to remove the requirement.
The dev said ‘I am indeed using GetEyeToHeadTransform. What should I change it to?’
My understanding is that it isn’t that hard to do, the dev just has to know what to change. And I don’t know what to tell him. If we Pimax users want better support for our headsets, we need to be able to tell devs exactly what to do to better support our headset.
I’ve never used GetEyeToHeadTransform(), but here’s some info I found:
[The software] makes an incorrect assumption that both displays are pointed straight forward. That’s not the case for Pimax, StarVR, […] Point is, GetProjectionMatrix and GetEyeToHeadTransform may contain rotation as well as translation. Supporting it is generally easier than not; it fails because you’re specifically rebuilding the same transformation from incomplete data.
IVRSystem's GetEyeToHeadTransform gets you matrices that relate the eyes to the head. You get the head matrix by querying the pose of the HMD. GetProjectionMatrix similarly gets you a per-eye projection matrix.
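For illustration, a minimal sketch of querying both per-eye matrices (assuming a recent openvr.h; older headers take an extra graphics-API argument for GetProjectionMatrix):

#include <openvr.h>

// Minimal sketch: query both per-eye matrices. On a canted HMD (Pimax,
// StarVR) the 3x3 part of eyeToHead is a real rotation, not identity,
// so it must not be discarded.
void QueryEyeMatrices(vr::IVRSystem* hmd)
{
    const vr::EVREye both[] = { vr::Eye_Left, vr::Eye_Right };
    for (vr::EVREye eye : both)
    {
        // Row-major 3x4: rotation in m[i][0..2], translation in m[i][3].
        vr::HmdMatrix34_t eyeToHead = hmd->GetEyeToHeadTransform(eye);

        // Per-eye projection; may be asymmetric on wide-FoV headsets.
        vr::HmdMatrix44_t proj = hmd->GetProjectionMatrix(eye, 0.1f, 1000.f);

        (void)eyeToHead; (void)proj; // feed these into the camera setup
    }
}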
It sounds like the dev just needs to use the angle associated with each eye when setting up each eye’s view, instead of assuming that the viewport faces straight ahead. Instead of using only the head angle for the viewport, it needs to include the per-eye angle as well.
GetEyeToHeadTransform
Returns the transform from eye space to the head space. Eye space is the per-eye flavor of head space that provides stereo disparity. Instead of Model * View * Projection, the sequence is Model * View * Eye^-1 * Projection.
Normally View and Eye^-1 will be multiplied together and treated as View in your application.
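To make that concrete, here is a hedged sketch of building the Eye^-1 (head-to-eye) matrix from OpenVR’s 3x4 transform; since eye-to-head is a rigid transform, its inverse is just the transposed rotation with a negated, rotated translation:

#include <openvr.h>

// Invert a rigid eye-to-head transform (rotation + translation only):
// the inverse rotation is the transpose, and the inverse translation is
// -R^T * t. The result is the "Eye^-1" (head-to-eye) matrix that gets
// folded into the View matrix.
vr::HmdMatrix34_t InvertRigid(const vr::HmdMatrix34_t& m)
{
    vr::HmdMatrix34_t r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][j] = m.m[j][i]; // R^T
    for (int i = 0; i < 3; ++i)
        r.m[i][3] = -(r.m[i][0] * m.m[0][3] +
                      r.m[i][1] * m.m[1][3] +
                      r.m[i][2] * m.m[2][3]); // -R^T * t
    return r;
}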
Apparently, there is still some info missing. What exactly does not work (when parallel projection is not used)? Does the developer use some off-the-shelf engine or his own?
As you already guessed, the game (engine) should handle non-parallel cameras (defined by IVRSystem::GetEyeToHeadTransform) and render the stereo views accordingly.
Hello, I’m the dev @crispybuttphd was talking about. My project is Quake VR, a mod for Quake 1 (1997) which aims to turn the classic game into a first-class VR experience. My project is free and open-source - the source code is available on GitHub.
I only own a Valve Index, so I cannot really test the game on any other headset. I contacted Pimax about providing a developer kit at globalbusiness@pimax.com, but have not received any reply yet.
Regardless, the engine I am using is a significantly modded version of QuakeSpasm, which is built on top of Quake 1’s original source code. The VR rendering setup code can be found in the VR_UpdateDevicesOrientationPosition function here. Here’s the part where I use GetEyeToHeadTransform:
// Position of HMD
vr::HmdVector3_t headPos = Matrix34ToVector(
    ovr_DevicePose[iDevice].mDeviceToAbsoluteTracking);

// Quaternion for HMD orientation
vr::HmdQuaternion_t headQuat = Matrix34ToQuaternion(
    ovr_DevicePose[iDevice].mDeviceToAbsoluteTracking);

// Left eye position
vr::HmdVector3_t leyePos =
    Matrix34ToVector(ovrHMD->GetEyeToHeadTransform(eyes[0].eye));

// Right eye position
vr::HmdVector3_t reyePos =
    Matrix34ToVector(ovrHMD->GetEyeToHeadTransform(eyes[1].eye));

// Rotate eye positions by HMD quaternion
leyePos = RotateVectorByQuaternion(leyePos, headQuat);
reyePos = RotateVectorByQuaternion(reyePos, headQuat);

// Adjust positions by in-game rotation (thumbstick rotation)
HmdVec3RotateY(headPos, -turnYaw * M_PI_DIV_180);
HmdVec3RotateY(leyePos, -turnYaw * M_PI_DIV_180);
HmdVec3RotateY(reyePos, -turnYaw * M_PI_DIV_180);

// Attach left eye to head
eyes[0].position = AddVectors(headPos, leyePos);
eyes[0].orientation = headQuat;

// Attach right eye to head
eyes[1].position = AddVectors(headPos, reyePos);
eyes[1].orientation = headQuat;
I am not sure how to add Pimax support, and I have no way of testing it. I’ve looked at a few online resources, but, to be honest, nothing was helpful. I would really appreciate a hand in adding support for Pimax HMDs here.
Thank you for your efforts. I really enjoyed Quake 1 when it was first released and have replayed it using QuakeSpasm. I’m a software engineer who decades ago actually wrote some games and worked on a Quake mod.
My free time is currently very limited, so I cannot make any promises at this time. I’ll try to test your code (probably sometime this weekend). I’ve programmed in OpenGL, but have no experience with quaternions. Assuming you’re using MsDev, I might even be able to tweak your eye view code.
Awesome to hear that! I’m happy to help you with compiling and to guide you through the relevant places in the code, to minimize the amount of time you have to spend on this. Either send me an email or join our Discord (Quake VR | Discord Me) and I’ll be happy to help out.
I am not sure what Matrix34ToVector is supposed to return. Is it the translation part? (The Matrix34 is an affine transformation consisting of both rotation and translation.)
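For reference, if Matrix34ToVector just extracts the translation column, it would look something like this (a guess at the helper’s intent; the actual Quake VR source may differ):

#include <openvr.h>

// Presumed behavior of Matrix34ToVector: take the translation column of
// the affine 3x4 transform and discard the rotation entirely. If this is
// all the code reads, the cant of a Pimax is silently dropped.
vr::HmdVector3_t Matrix34ToVector(const vr::HmdMatrix34_t& m)
{
    return {{ m.m[0][3], m.m[1][3], m.m[2][3] }};
}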
Awesome thread! It would be great if we could easily explain to developers which changes they’d need to make. Wouldn’t it be possible to get a Pimax developer to reply here, @Konger @PimaxQuorra? It would be great if we could point any developer to this thread or to some example code on how to make their project compatible with native Pimax wide FoV.
I’d second that.
I’ve just pointed a dev team in the right direction, but the information seems too unstructured and spread out. That’s horrible for non-native English speakers.
They’re not getting the right information out of the API to get the viewports aligned correctly. It seems it was mostly about eye rotation as well; the view thus went fish-eyed without PP.
If there’s no guide on how to implement this properly via the OpenVR API, one you can walk through from A to Z, it’s also not something you’d consider the most cost-effective place to put your resources, given how few Pimax headsets there are. (Polls in that simulator have indicated 10% use Pimax, but still.) There just has to be a comprehensive A-to-Z guide somewhere. Does openvr/openvr.h at master · ValveSoftware/openvr · GitHub not properly return the eye rotation of 10° / -10°?
The API returns the correct matrix (if the user has PP off), but the game math has to be adjusted. I guess what happened is that without canting the matrix only includes the IPD offset (the rest is identity), and developers excluded it from the camera orientation math; maybe an example of premature optimization.
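A quick way to see this in practice (a diagnostic sketch, not from any game’s code): check whether the 3x3 block of the eye-to-head matrix deviates from identity.

#include <openvr.h>
#include <cmath>

// Does the eye-to-head transform carry any rotation? With parallel
// projection (or a non-canted HMD) the 3x3 block is identity and only
// the IPD translation remains; with Pimax native FoV the off-diagonal
// terms are non-zero.
bool EyeTransformHasRotation(vr::IVRSystem* hmd, vr::EVREye eye)
{
    const vr::HmdMatrix34_t m = hmd->GetEyeToHeadTransform(eye);
    const float eps = 1e-4f;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (std::fabs(m.m[i][j] - (i == j ? 1.f : 0.f)) > eps)
                return true;
    return false;
}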
The biggest issue is that just fixing the camera orientation is not enough. Sometimes reflections and shadows need to be adjusted as well.
Another important piece (not hugely so for the large FoV of Pimax, but every bit helps) is implementing the hidden area mask. I extended that sample to do that, if anyone needs it.
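For anyone looking for a starting point: fetching the mask itself is a single OpenVR call; rasterizing it into the depth or stencil buffer before the scene pass is engine-specific. A minimal sketch:

#include <openvr.h>

// Fetch the hidden area mesh for one eye. pVertexData holds
// unTriangleCount * 3 vertices in normalized (0..1) viewport
// coordinates; a common use is rasterizing them into the stencil
// buffer so the GPU skips pixels the lenses can never show.
void FetchHiddenAreaMesh(vr::IVRSystem* hmd, vr::EVREye eye)
{
    vr::HiddenAreaMesh_t mesh =
        hmd->GetHiddenAreaMesh(eye, vr::k_eHiddenAreaMesh_Standard);
    const vr::HmdVector2_t* verts = mesh.pVertexData;
    const uint32_t vertCount = mesh.unTriangleCount * 3;
    (void)verts; (void)vertCount; // upload to a vertex buffer, draw pre-pass
}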
I think I understand the problem. I am just getting the position of the eyes relative to the HMD, and then setting the rotation of each eye to the same rotation as the HMD.
I think this means that I am throwing away all the information regarding individual eye rotation. I guess that eyes[n].orientation should not simply be headQuat; somehow I need to use the rotation part of GetEyeToHeadTransform. Maybe convert the result of GetEyeToHeadTransform to a quaternion and then multiply it with headQuat?
I googled a bit, but couldn’t find a way to enable that. Any info?
I do not know how to force the Index to request non-parallel views, and I can’t get the math right for you because that’s not something I do daily, but I’ll share one mathematical trick that helped me validate camera math for canted displays in the past.
After you have the camera orientations right (by respecting rotation and not just translation), calculate the angle between the view vectors of the two cameras. If it matches 2× the cant angle (20° for Pimax), the orientation is right. Also pay attention to the sign of the eye translations; it’s easy to get them flipped.
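As a sketch of that check (function name is hypothetical; the forward vector of each eye is the rotated -Z axis, i.e. minus the third column of the eye-to-head rotation):

#include <openvr.h>
#include <cmath>

// Angle between the two eyes' forward vectors, in degrees. Should be
// roughly 2x the cant angle (~20 on a Pimax in native FoV) when the
// orientations are right, and ~0 with parallel projection on.
double AngleBetweenEyeForwardsDeg(vr::IVRSystem* hmd)
{
    const vr::HmdMatrix34_t l = hmd->GetEyeToHeadTransform(vr::Eye_Left);
    const vr::HmdMatrix34_t r = hmd->GetEyeToHeadTransform(vr::Eye_Right);
    // Forward = R * (0,0,-1), i.e. minus the third column of each rotation.
    double dot = 0.0;
    for (int i = 0; i < 3; ++i)
        dot += double(-l.m[i][2]) * double(-r.m[i][2]);
    dot = std::fmin(1.0, std::fmax(-1.0, dot)); // clamp for acos
    return std::acos(dot) * 180.0 / 3.141592653589793;
}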
Yes, you understand it correctly. If you prefer to do your math in quaternions, you need to convert the rotational part of the EyeToHead transform to a quaternion and apply it to the head pose. Just keep in mind that EyeToHead is a transformation from eye space to head space, while you may need the inverse for your calculation.
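To illustrate (a sketch only: QuatMultiply is a hypothetical Hamilton-product helper, and Matrix34ToQuaternion from the snippet above is assumed to extract the rotation part):

// Hypothetical helper: Hamilton product, composing two rotations.
// The result applies 'b' first, then 'a'.
vr::HmdQuaternion_t QuatMultiply(const vr::HmdQuaternion_t& a,
                                 const vr::HmdQuaternion_t& b)
{
    return { a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
             a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
             a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
             a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w };
}

// In VR_UpdateDevicesOrientationPosition, instead of copying headQuat:
// apply the per-eye cant first (in head space), then the head pose.
// No inverse is needed here, because this orients the eye camera in
// world space rather than building a view (head-to-eye) matrix.
vr::HmdQuaternion_t leyeQuat =
    Matrix34ToQuaternion(ovrHMD->GetEyeToHeadTransform(eyes[0].eye));
vr::HmdQuaternion_t reyeQuat =
    Matrix34ToQuaternion(ovrHMD->GetEyeToHeadTransform(eyes[1].eye));
eyes[0].orientation = QuatMultiply(headQuat, leyeQuat);
eyes[1].orientation = QuatMultiply(headQuat, reyeQuat);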