If I understood the comments in openvr.h correctly, you just need to multiply the eye matrix by the head matrix, then convert the result to quaternions if necessary. Here’s the exact wording of the comment:
GetEyeToHeadTransform
Returns the transform from eye space to the head space. Eye space is the per-eye flavor of head space that provides stereo disparity. Instead of Model * View * Projection, the sequence is Model * View * Eye^-1 * Projection.
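For reference, here is a minimal C++ sketch of that head * eye composition against openvr.h, including a quaternion conversion (Multiply34, ToQuaternion, and LeftEyePoseExample are illustrative helper names of mine, not part of the API; it assumes the runtime and compositor are already initialized):

```cpp
#include <openvr.h>
#include <cmath>

// 3x4 * 3x4 rigid-transform product, treating the implicit 4th row as [0 0 0 1].
static vr::HmdMatrix34_t Multiply34(const vr::HmdMatrix34_t& a,
                                    const vr::HmdMatrix34_t& b)
{
    vr::HmdMatrix34_t r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 4; ++j)
            r.m[i][j] = a.m[i][0] * b.m[0][j]
                      + a.m[i][1] * b.m[1][j]
                      + a.m[i][2] * b.m[2][j]
                      + (j == 3 ? a.m[i][3] : 0.0f);
    return r;
}

// Quaternion (w, x, y, z) from the rotation part of a 3x4 pose matrix
// (standard trace-based extraction).
static void ToQuaternion(const vr::HmdMatrix34_t& m, float q[4])
{
    q[0] = std::sqrt(std::fmax(0.0f, 1.0f + m.m[0][0] + m.m[1][1] + m.m[2][2])) / 2.0f;
    q[1] = std::copysign(std::sqrt(std::fmax(0.0f, 1.0f + m.m[0][0] - m.m[1][1] - m.m[2][2])) / 2.0f,
                         m.m[2][1] - m.m[1][2]);
    q[2] = std::copysign(std::sqrt(std::fmax(0.0f, 1.0f - m.m[0][0] + m.m[1][1] - m.m[2][2])) / 2.0f,
                         m.m[0][2] - m.m[2][0]);
    q[3] = std::copysign(std::sqrt(std::fmax(0.0f, 1.0f - m.m[0][0] - m.m[1][1] + m.m[2][2])) / 2.0f,
                         m.m[1][0] - m.m[0][1]);
}

void LeftEyePoseExample()
{
    // Head pose in tracking (world) space, as delivered each frame.
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);
    const vr::HmdMatrix34_t head =
        poses[vr::k_unTrackedDeviceIndex_Hmd].mDeviceToAbsoluteTracking;

    // head * eyeToHead gives the left eye's pose in tracking space.
    const vr::HmdMatrix34_t eyeToHead =
        vr::VRSystem()->GetEyeToHeadTransform(vr::Eye_Left);
    const vr::HmdMatrix34_t eyeToWorld = Multiply34(head, eyeToHead);

    float q[4];
    ToQuaternion(eyeToWorld, q); // orientation as a quaternion, if you need one
}
```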
I’m hoping Vittorio can get native wide-FOV headset support implemented (it should work for all wide-FOV headsets, not just Pimax); he seems very nicely on track. Then it can be played on the StarVR One headset at 210 degrees! (The idea is to avoid having to use parallel projections on Pimax, and to be able to use it with other wide-FOV headsets like the StarVR One.) It’s an extremely impressive port already.
Well, it depends on what you are trying to achieve :).
Here is what I wrote to @TheIronWolf over a year ago when he asked a similar question:
IVRSystem::GetEyeToHeadTransform returns the transformation matrix from eye space to head (camera) space, while for rendering you need the inverse transformation. So if we assume that for mono rendering the point transform can be written as:
p = P * C * M * v
v - vertex in model coordinates
M - model transformation matrix
C - camera (view) transformation matrix
P - projection matrix
and assume that E is the eye-to-head transform, then the formulas for stereo rendering can be written like this:
p_left = P_left * inv(E_left) * C * M * v
p_right = P_right * inv(E_right) * C * M * v
Just to add, for the sake of clarity: C (the camera view matrix) can also be referred to as the “head” view.
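To make inv(E) concrete: the eye-to-head transform is rigid (rotation R plus translation t), so its inverse is just the transposed rotation and a back-rotated, negated translation, inv([R|t]) = [R^T | -R^T t]; no general matrix inversion is needed. Here is a minimal C++ sketch against openvr.h (EyeViewMatrix, InvertRigid, and Multiply34 are illustrative names of mine, not part of the API; it assumes an initialized runtime):

```cpp
#include <openvr.h>

// 3x4 * 3x4 rigid-transform product, treating the implicit 4th row as [0 0 0 1].
static vr::HmdMatrix34_t Multiply34(const vr::HmdMatrix34_t& a,
                                    const vr::HmdMatrix34_t& b)
{
    vr::HmdMatrix34_t r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 4; ++j)
            r.m[i][j] = a.m[i][0] * b.m[0][j]
                      + a.m[i][1] * b.m[1][j]
                      + a.m[i][2] * b.m[2][j]
                      + (j == 3 ? a.m[i][3] : 0.0f);
    return r;
}

// Invert a rigid 3x4 transform: inv([R|t]) = [R^T | -R^T t].
static vr::HmdMatrix34_t InvertRigid(const vr::HmdMatrix34_t& m)
{
    vr::HmdMatrix34_t r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][j] = m.m[j][i];                      // R^T
    for (int i = 0; i < 3; ++i)
        r.m[i][3] = -(r.m[i][0] * m.m[0][3]
                    + r.m[i][1] * m.m[1][3]
                    + r.m[i][2] * m.m[2][3]);           // -R^T t
    return r;
}

// Per-eye view matrix for the stereo formulas above: V_eye = inv(E_eye) * C,
// where C (headView) is the mono head/camera view matrix.
vr::HmdMatrix34_t EyeViewMatrix(vr::EVREye eye, const vr::HmdMatrix34_t& headView)
{
    const vr::HmdMatrix34_t e = vr::VRSystem()->GetEyeToHeadTransform(eye);
    return Multiply34(InvertRigid(e), headView);
}
```

The matching P_left and P_right come straight from IVRSystem::GetProjectionMatrix(eye, nearZ, farZ), so the full per-eye transform is P_eye * EyeViewMatrix(eye, C) * M.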