How to enable native Pimax/wide FOV support in games?

If I understood the comments in openvr.h correctly, you just need to multiply the eye matrix with the head matrix, then convert the result to quaternions if necessary. Here’s the exact wording of the comment…

GetEyeToHeadTransform
Returns the transform from eye space to the head space. Eye space is the per-eye flavor of head space that provides stereo disparity. Instead of Model * View * Projection, the sequence is Model * View * Eye^-1 * Projection.

I had a chance to play Quake VR tonight. I was very impressed, although I had a few problems.

  1. I couldn’t find world scale and player height settings that seemed correct. Some things seemed too big and others seemed too small.
  2. I don’t have any VR controllers. Is there any way to use the mouse? WASD worked fine, but I couldn’t turn or attack.
  3. Is there a recenter command? I play seated and I wasn’t able to position myself looking forward with my keyboard directly in front of me.

And as you know, I had to enable Parallel Projection. Even so, my framerate was good.

I’m hoping Vittorio can get native wide-FOV headset support implemented (it should work for all wide-FOV headsets, not just Pimax), and he seems to be on track very nicely. Then it could be played on the StarVR One headset at 210 degrees! (The idea is to avoid having to use parallel projection on Pimax and to be able to use it with other wide-FOV headsets like the StarVR One.) It’s an extremely impressive port already.

I know. I’m hoping to help him with that. I’m a software engineer.

I agree. Until the original Half-Life was released Quake was my favorite game. It’s very nostalgic and cool to be able to play it in VR.

Well it depends what you are trying to achieve :).

Here is what I wrote to @TheIronWolf over a year ago when he asked a similar thing:

IVRSystem::GetEyeToHeadTransform returns the transformation matrix from eye space to head (camera) space, while for rendering you need the inverse transformation. So if we assume that for mono rendering the point transform can be written as:

p = P * C * M * v

v - vertex in model coordinates
M - model transformation matrix
C - camera (view) transformation matrix
P - projection matrix
and assume that E is the eye-to-head transform, then the formulas for stereo rendering can be written like this:

p_left = P_left * inv(E_left) * C * M * v
p_right = P_right * inv(E_right) * C * M * v

Just to add, for the sake of clarity: C (the camera view) can also be referred to as the “head” view.


Just to be clear, the Quake VR mod from Vittorio already supports wide FOV; it just does not support canted displays / parallel projection off.



Thank you for your support, and we are also looking for developers to cooperate with.
