Question to developers: some games that need PP don't need it anymore when using the Oculus library. Why does it even work?

This question is addressed to @risa2000 or anyone with the skill to elaborate an explanation, or even the beginning of an explanation. It's purely out of curiosity.
Of course, any clue from non-expert members is also welcome.

So as I understand it, because Pimax headsets have canted panels, if a game is hard-coded to have its virtual cameras parallel to each other (to match most previous headsets, which have parallel panels), we have to transform (let's call that counter-canting) the rendered images onto larger virtual planes to take into account the geometry of the canted panels.
This needs more virtual pixels to work, which carries an additional ~30% performance cost to achieve the same apparent resolution.

This transformation is done by PiTool (or a service associated with PiTool) when we enable Parallel Projection in the menu; otherwise the images are crossed and the game is unplayable.
Correct me if I’m wrong so far.
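For what it's worth, the extra pixel cost can be sketched with basic trigonometry. The numbers below are purely illustrative (not Pimax's real frustum, which is wider and asymmetric, hence the larger ~30% figure quoted above); the point is only that forcing the cameras parallel stretches the render target toward the outer edge, where the tangent grows quickly.

```python
import math

# Illustrative geometry only: real headset frustums are asymmetric
# and the real overhead depends on both axes and the full FOV.

def canted_width(inner_deg, outer_deg):
    """Horizontal extent (in tangent units) of the native canted view."""
    return (math.tan(math.radians(inner_deg))
            + math.tan(math.radians(outer_deg)))

def parallel_width(inner_deg, outer_deg, cant_deg):
    """Extent a camera forced *parallel* must cover for the same view:
    the outer edge moves out to (outer + cant) degrees from straight
    ahead, while the inner edge moves in to (inner - cant)."""
    return (math.tan(math.radians(inner_deg - cant_deg))
            + math.tan(math.radians(outer_deg + cant_deg)))

# Made-up numbers: 45 deg each side per eye, panels canted 10 deg.
overhead = parallel_width(45, 45, 10) / canted_width(45, 45) - 1
print(f"extra horizontal pixels: {overhead:.0%}")
```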

For some games that need PP, like Project CARS 2, there are tips, including using an Oculus library, that make them work without Parallel Projection. But how is that even possible in the first place?
Since the Oculus library is, I guess, made for Oculus headsets, which have parallel panels, how can a pair of images that is supposed to be rendered for two parallel planes match canted panels without Parallel Projection?
That’s where I’m lost.

My first guess is that there is an "invert canting" procedure somewhere, but then how could this supposed non-Pimax invert-canting process save any performance?
My second guess is that there is something in the Oculus libraries that somehow allows the virtual cameras to be canted in the game, which is the only obvious way to save performance; but then that would have to be available from the game in the first place. Doesn't sound plausible.

So, what are your thoughts guys?

Edit: re-reading my post, maybe I'm confusing things with the virtual camera angle; maybe it's not the virtual camera angle that defines whether a game is parallel-projected or not. But you get the idea of my question.


I am the furthest from an expert on this, but my layman's understanding is that it depends on the respective SDKs (Oculus, OpenVR / SteamVR) and the projection matrix that is called. Many games on Steam do not properly call the correct available function, but presumably do via Oculus.

The devs would need to change the code, but many will not do so because of lack of resources / demand vs. risk, etc.

An example is ED:

Here is the function: IVRSystem::GetEyeToHeadTransform · ValveSoftware/openvr Wiki · GitHub
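To illustrate what that function carries, here is a rough Python sketch (not real OpenVR output; the layout mimics the 3x4 row-major `HmdMatrix34_t`, and the IPD/cant numbers are made up) of an eye-to-head transform on a canted headset, and of what a game loses if it reads only the translation:

```python
import math

def canted_eye_to_head(ipd_m, cant_deg, right_eye=True):
    """Rough sketch of what IVRSystem::GetEyeToHeadTransform could
    return on a canted headset: a 3x4 row-major affine (yaw about
    the vertical axis + half-IPD shift), shaped like OpenVR's
    HmdMatrix34_t. Illustrative values, not real runtime output."""
    s = 1.0 if right_eye else -1.0
    a = math.radians(s * cant_deg)   # opposite yaw sign per eye
    c, si = math.cos(a), math.sin(a)
    return [
        [c,   0.0, si,  s * ipd_m / 2.0],
        [0.0, 1.0, 0.0, 0.0],
        [-si, 0.0, c,   0.0],
    ]

m = canted_eye_to_head(0.064, 10.0)
# A game that keeps only the translation column (a common shortcut
# on parallel-panel headsets) discards the rotation part -- exactly
# the case that then needs Parallel Projection:
translation_only = [row[3] for row in m]
```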

But as Risa points out, it is unfortunately probably complex to implement and makes it not worth their while for a relatively niche product with canted displays:


While I cannot answer the question, I would like to clarify a few things.

What exactly do you mean by Oculus library? Typically (SteamVR, Oculus) the application initializes the corresponding runtime by calling the appropriate function. SteamVR has a concept of layers, where a 3rd party can add a new headset by writing a "driver" for it (which is basically a DLL with a defined interface) and registering it with the runtime.

I am not aware of such functionality in the Oculus runtime, at least not a public one, so what I would expect (and speculate about) is that Pimax is using a technique similar to Revive, which hooks into the Oculus runtime and redirects the app calls to their own implementation. This would also mean that it is undocumented and unsupported by Oculus.

Now, the "original" Oculus runtime is aware of the possibility of canted geometry (it returns the eye pose to the app, which defines both the position and the orientation). It has just, so far, always returned the same orientation: looking straight ahead. Since historically all Oculus headsets have only used parallel views, I would not dare to guess what would happen if the API suddenly started returning canted views; more precisely, what the apps would do.
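To illustrate that point (with made-up names, not the actual ovr_ types): the pose the runtime hands over has room for an orientation, it has just always been the identity so far.

```python
import math

# Made-up field names, not the actual Oculus SDK structs: the eye
# pose is a position plus an orientation quaternion, and historically
# the orientation has always been the identity (straight ahead).

def yaw_quaternion(deg):
    """Unit quaternion (w, x, y, z) for a rotation about +Y."""
    h = math.radians(deg) / 2.0
    return (math.cos(h), 0.0, math.sin(h), 0.0)

parallel_eye = {"position": (0.032, 0.0, 0.0),
                "orientation": (1.0, 0.0, 0.0, 0.0)}  # identity
canted_eye = {"position": (0.032, 0.0, 0.0),
              "orientation": yaw_quaternion(10.0)}    # 10 deg yaw

# An app that folds the full pose into its view matrix handles both;
# an app that ignores the orientation only works for the first one.
```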

Since Oculus is a closed ecosystem, it is however possible (I have no idea) that Oculus required applications to support canted views from the beginning in order to get into the store (it would be a sensible idea, since the API supports it, and considering the mess canted views brought to the SteamVR camp).

The other possibility is that Pimax, in their implementation of the Oculus runtime, simply does the parallel projection transformation by default (assuming that the only way to reliably fake an Oculus headset is to advertise it with parallel views). But this is again just speculation.

I guess only someone with Oculus games (played on a Pimax headset) can give a definitive answer, but that is not me, as I only have SteamVR games.


@Octofox and @risa2000 thanks for your answer.
My coding days are way too long ago for me to get the full essence of your answers, but I think I get the overall meaning of all this.


I'm a programmer, but I have no expertise in this particular area. My best guess is that the APIs (library functions) for Oculus and SteamVR differ in how they initialize the game, that SteamVR requires a bit more of the work to be done by the game programmers (during setup) to handle canted displays, and that on Oculus that work is "free". That is, the Oculus API handles it automatically.


There is not much of a difference in how the runtimes advertise the eye/camera poses to the application, except that the Oculus API uses a vector and a quaternion, while SteamVR uses an "eye to head" matrix (i.e. the pose represented by an affine transformation: translation and rotation). The two representations are equivalent and can easily be converted into one another.
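A minimal sketch of that equivalence, assuming a (w, x, y, z) unit quaternion as on the Oculus side and a 3x4 row-major affine like SteamVR's `HmdMatrix34_t` (illustrative IPD/cant values):

```python
import math

def quat_to_matrix(q):
    """3x3 rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def pose_to_eye_to_head(position, orientation):
    """Oculus-style pose (vector + quaternion) -> SteamVR-style
    3x4 "eye to head" affine matrix (rotation | translation)."""
    r = quat_to_matrix(orientation)
    return [r[i] + [position[i]] for i in range(3)]

# Example: a 10 deg yaw of the right eye, 64 mm IPD (made-up values).
half = math.radians(10.0) / 2.0
q = (math.cos(half), 0.0, math.sin(half), 0.0)   # yaw about +Y
m = pose_to_eye_to_head((0.032, 0.0, 0.0), q)
```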

The hard work (for the app) remains the same in either runtime: it either knows how to render the scene with canted views or it does not. If it does not, then it does not in Oculus and SteamVR equally, because the rendering geometry is the same in both.

Now, what you are saying:

It could be true, but the Oculus API would never advertise canted views to the app, because currently there is no Oculus headset using them. The only API that can advertise canted views to an Oculus app is a "Pimax API pretending to be the Oculus runtime", and there the thought that "the Oculus API (i.e. runtime) handles it automatically" is no longer relevant.


It is possible, I guess, that Oculus had plans for, or were preparing for, a future with canted displays and simply made the support mandatory. I don't know if their terms are available anywhere publicly.


It’s because the Oculus SDK handles camera rendering by itself. By rendering per-eye cameras, all issues with PP and culling are solved automatically. The problems you see are typically because of game engines trying to be smart and trying to take shortcuts for optimisation.

OVRCameraRig.cs is a component that controls stereo rendering and head tracking. It maintains three child anchor transforms at the poses of the left and right eyes, as well as a virtual center eye that is halfway between them. It is the main interface between Unity and the cameras, and is attached to a prefab that makes it easy to add comfortable VR support to a scene.

Note: All camera control should be done through this component.

  • Use Per Eye Camera: Select this option to use separate cameras for the left and right eyes.

I guess by “Oculus SDK” you mean “Oculus SDK for Unity”? I am not aware of any rendering API in Oculus SDK (for PC).

Here you go, it’s quite low level and a long read:

But from the example code there, it looks like it’s rendering per-eye.

I am not sure what I am supposed to find at this link. If you read the section "Frame Rendering", you will see sample code for how the application should handle the eye poses, which is fine, but this is code the application needs to implement, not something in the Oculus SDK.

I agree that if the application implements it this way, it will render the scene correctly even for canted views, but this is basically equally true for SteamVR: in both cases the application must correctly set up the rendering for the (canted) eye poses. The only difference is that the Oculus documentation shows the example explicitly, while the SteamVR documentation does not say much about how to handle the "eye to head" matrices.
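What that per-eye setup boils down to can be sketched with minimal stand-in math (not the actual SDK structs): the app composes the head pose with each eye pose, so canted eyes come out correctly with no extra work.

```python
import math

def rot_y(deg):
    """3x3 rotation about the vertical (+Y) axis, right-handed."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

head = rot_y(0.0)  # tracking pose: head looking straight ahead
forwards = {}
# Angles chosen so each eye yaws 10 deg outward (the right eye turns
# toward +X, which is a negative angle in this convention).
for name, cant in (("left", 10.0), ("right", -10.0)):
    eye = rot_y(cant)                     # per-eye pose from the runtime
    # per-eye camera forward = head rotation * eye rotation * (-Z)
    forwards[name] = matvec(head, matvec(eye, [0.0, 0.0, -1.0]))
```

With identity eye orientations both forwards coincide (the parallel case); with canted poses they diverge outward, and the app's per-eye cameras simply follow.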

Yes, you are right, this is something typically implemented at the game engine or plugin level. I would guess that most would follow Oculus's reference implementation. A game which implements both SteamVR and Oculus will typically have two code paths for the camera rendering; they most likely will not share the same code. So SteamVR might be broken while using -vrmode oculus works, but again it won't be 100% guaranteed, especially if a game does some fancy rendering tricks.


Aah, so never underestimate the importance of providing examples that developers can just copy and paste wholesale. :wink:

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.