Closer look at Pimax parallel projection

When looking at the view geometry numbers reported by my Pimax 5k+ I noticed something strange, so I decided to visualize the native and the parallel projections. Since the pictures turned out to be better than words, I decided to share them here.

But first I need to explain what is actually on them, so people do not get confused or misread them:

The geometry

is taken from the following values reported by the headset to OpenVR:

  • Eye to head transform matrix for each eye, which gives the orientation and the location of the “eyes” in the “head” space.
  • The projection “raw” values, which define the viewing frustum for each eye.
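As an illustration, the "raw" projection values are tangents of the half-angles of the frustum, so the FOV can be recovered from them directly. A minimal sketch (the numbers below are made-up symmetric values, not the actual Pimax ones, and the sign convention is ignored by taking absolute values):

```python
import math

def fov_from_raw(left, right, top, bottom):
    """Convert raw projection values (tangents of the frustum
    half-angles) into horizontal and vertical FOV in degrees.
    Signs depend on the runtime's convention, so absolute values
    are used here."""
    hfov = math.degrees(math.atan(abs(left)) + math.atan(abs(right)))
    vfov = math.degrees(math.atan(abs(top)) + math.atan(abs(bottom)))
    return hfov, vfov

# A hypothetical symmetric frustum with 45° half-angles:
print(fov_from_raw(-1.0, 1.0, -1.0, 1.0))  # -> (90.0, 90.0)
```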

The shapes

are the visualization of the viewing frustums which the application gets from OpenVR. The wireframe just helps to imagine the shape of the frustum; its center is the eye point. The rectangles represent the area (in arbitrary units) into which the application renders the image (i.e. a canvas). They respect the aspect ratio of the recommended render target resolution, but they do not represent the panels, as the panels and the recommended render target resolution are two completely different things.

These “projection planes” also seem awfully close to the eyes, which would not be possible if they were panels (the drawing is to scale in the sense that the distance between the eyes corresponds to my IPD of ~70 mm).

The reason why the distances from the eyes to the projection planes do not correspond to the distances from the eyes to the panels on the real headset is the lenses, which magnify the panels (i.e. bring them virtually closer).

On the other hand, even though the visualizations do not represent the actual hardware design, they are perfectly to scale with the geometry the application uses when it renders the scene, which means all the angles and orientations are accurate.

The projection planes and their colors

There are three different viewing frustums merged into one image:

Native projection planes, represented by the whitish rectangles (the smallest ones), correspond to the “native” mode of Pimax. The planes are canted outwards by 10°.

Native planes projected onto the parallel plane are green. These represent the shapes which the headset should render if it wanted to cover the original native view by using the parallel projection, i.e. a projection onto the plane parallel to the face.

Pimax parallel projection views are red.

Since the colors mix, where green and red overlap you see yellow. The whitish plane is originally blue (as visible in some “top” views), but gets its final color from the combination with red and green. The different coloring also offers an insight into how Pimax handles the parallel projection.

If it were possible, the headset would need to render only into the green area (when running in parallel projection mode), but since the normal rendering pipeline does not usually support rendering into a non-rectangular frustum, Pimax had to choose a trade-off.
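A small geometric sketch of why the covering shape is non-rectangular: re-expressing a view direction from the canted eye frame on the parallel plane divides its image-plane coordinates by a term that shrinks toward the outward edge, so the projected native view becomes a trapezoid, taller on the outside. (The 45° half-angle below is a made-up example, not the actual Pimax value; only the 10° cant comes from the post.)

```python
import math

def project_to_parallel(x, y, cant_deg):
    """Tangents (x, y) of a view direction on the canted eye's image
    plane, re-expressed on a plane parallel to the face. The eye frame
    is canted outward by cant_deg about the vertical axis; positive x
    points outward."""
    c = math.cos(math.radians(cant_deg))
    s = math.sin(math.radians(cant_deg))
    denom = c - x * s            # forward component after un-canting
    return (x * c + s) / denom, y / denom

# Outward top corner of a hypothetical native frustum with 45° half-angles:
x, y = project_to_parallel(1.0, 1.0, 10.0)
# x -> tan(55°) ≈ 1.43, y ≈ 1.23: at its outer edge, the covering
# (green) shape is ~23% taller than the native plane.
```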

Where you see the red triangles, these are the areas which are rendered but never used, as they are outside the view the canted panels can display. On the other hand, the green triangles are the areas which ideally should be rendered, but are probably safe to discard, because the corresponding area on the panel is not visible anyway (for other reasons, such as lens distortion or panel dimensions).

Pimax decided to use the same “pixel density” in all modes, so the surface areas (shown in the different colors) are directly related to the number of pixels in those areas. In other words, by comparing the surface areas, we can estimate the difference in performance requirements for the particular modes.

Because the “front view” does not really convey the 3D nature of the visualization, I decided to also add the “top view”, which should give some additional clues about what is actually in the picture.

There is one thing Pimax could improve for parallel projection though. The red areas can be masked by the hidden area mask so the application will not render there (though the geometry would still need to be processed). I wonder why they have not done it, since the driver already supplies the mask, which seems to be just a static mesh, hardcoded for the particular mode.
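For a rough idea of what such a mask saves, the covered fraction of the render target can be summed from the mask triangles. A sketch (the triangles below are made up purely for illustration; the real mask is a static mesh supplied by the driver):

```python
def masked_fraction(triangles):
    """Fraction of the [0,1]x[0,1] render target covered by a list of
    non-overlapping triangles ((x0,y0),(x1,y1),(x2,y2)) in UV space,
    via the cross-product area formula."""
    area = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in triangles:
        area += abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
    return area

# Hypothetical mask: two corner triangles, each covering 1/8 of the target.
tris = [((0, 0), (0.5, 0), (0, 0.5)),
        ((1, 1), (0.5, 1), (1, 0.5))]
print(masked_fraction(tris))  # -> 0.25
```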

Small FOV

Recommended render target size (SteamVR SS=100%, PiTool RQ=1.0):
    PP off: [2636, 2632]
    PP on:  [2784, 3288]

Normal FOV

Recommended render target size (SteamVR SS=100%, PiTool RQ=1.0):
    PP off: [3200, 2632]
    PP on:  [3852, 3288]

Large FOV

Recommended render target size (SteamVR SS=100%, PiTool RQ=1.0):
    PP off: [4268, 2632]
    PP on:  [6948, 3288]
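Since the pixel density is kept the same in all modes, the PP overhead can be read directly from these numbers. A quick sketch comparing total pixels per eye:

```python
# PP overhead per mode, computed from the recommended render target
# sizes listed above (width x height per eye, SS=100%, RQ=1.0).
modes = {
    "Small":  ((2636, 2632), (2784, 3288)),
    "Normal": ((3200, 2632), (3852, 3288)),
    "Large":  ((4268, 2632), (6948, 3288)),
}
for name, ((w0, h0), (w1, h1)) in modes.items():
    ratio = (w1 * h1) / (w0 * h0)
    print(f"{name}: PP renders {ratio:.2f}x the pixels")
# -> Small 1.32x, Normal 1.50x, Large 2.03x
```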



Thanks for the info! That really helps visualize what’s going on.

It’s now clear why Large FoV is so expensive when Parallel Projection is turned on.


God, all this wasted GPU resource :scream_cat:

I feel lucky my most played game doesn’t require PP.


Unfortunately, mine (Elite D) does require PP. Since the game is in ongoing development, I hope Frontier will address this issue sooner rather than later.


I filed a ticket on Elite a while ago when several people were raising it as an issue, but I haven’t followed up any further. Is there any word out of Frontier that they are going to do anything about it?


Not that I’ve heard. You might want to do a follow-up. I’ve posted on the Frontier forum about it, but I just chimed in on an existing report.


Thanks @risa2000 for the excellent visuals! This makes it very clear how the geometry of parallel projections works. If I’m understanding correctly, this also doesn’t account for the supersampling required for lens distortion, correct? So that would also add to GPU overhead?


Yes, a hidden area mesh adjusted for PP mode might save some GPU power in pixel-bound cases. However, the game still has to utilize that mesh, and possibly even adjust its pixel shaders to further optimize things. Unfortunately, some games don’t even bother to render the mesh.

Edit: to test whether an OpenVR game utilizes the mesh, just enable the mirror view in SteamVR. You should see the corners of the image masked out.


Nice to see it rendered by the actual numbers - thanks! :slight_smile:

Looks like the height of the PP plane is quite possibly regulated by visible area after all (with the unrendered green triangles out in the occluded corners). :7


The aspect ratios of the “projection planes” correspond to the aspect ratios of the recommended render target resolutions, which are reported by SteamVR to the application for the particular modes (small, normal, large, PP on, PP off). So in a way the shape (dimensions) of the projection planes already includes the supersampling required for the pre-lens warp.

The actual resolution, however, depends on the supersampling factor (which the user can adjust), but the factor does not change the shape of the viewing frustum (the geometry). You can just imagine having more or fewer pixels there (which will affect the visual quality), but it does not change the way you look at the scene.
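To illustrate this, assuming (as in recent SteamVR versions) that the SS percentage is specified in total pixel count, each linear dimension scales with the square root of the factor while the frustum stays the same:

```python
import math

def scaled_target(width, height, ss_percent):
    """Recommended render target scaled by the SteamVR SS slider,
    assuming the percentage applies to total pixel count, so each
    linear dimension scales by sqrt(SS/100). The frustum (geometry)
    is unaffected by this scaling."""
    k = math.sqrt(ss_percent / 100.0)
    return round(width * k), round(height * k)

# Normal FOV, PP off (the values above are for SS=100%), at SS=150%:
print(scaled_target(3200, 2632, 150))  # -> (3919, 3224)
```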


Definitely. One can mentally project the green “cutout” onto the native plane (I did not find an easy way to add it to the render :wink:) and see that it actually does not hurt much, if at all, even in the Large FOV. I even wonder if Pimax was not too conservative and could cut more.


@Sean.Huang, this illustrates a possible speed-up: in addition to masking, if PP is enabled, FFR could specify rendering the black and red areas using the coarsest possible setting.

Please consider investigating this enhancement: That area isn’t even visible, so this could be a new “Imperceptible” level of FFR, which is only available when PP is enabled. I’ve outlined the coarsest res areas with blue boxes. The other areas would be drawn in highest res.

According to my rough calculations, up to ~25% of the image could be drawn at the coarsest res, which should be a noticeable speed improvement. If a higher-level FFR were selected by the user and PP were enabled, this coarse area should be combined with the existing FFR matrix.

From an ease-of-use perspective, “Imperceptible” FFR should actually be allowed, even if PP was off. In that situation, FFR would do nothing.


When PP is enabled, the headset advertises the red-yellow rectangle for rendering. What you marked on the background with the grid is never rendered into, nor advertised for rendering. The only parts which could (should) be masked are the red triangles.


So the green and black areas are never drawn? It seems wrong to cut off the green area, which looks like it might be visible, based on the projection of the “white” area.


You are right that leaving out the green parts leaves out a part of the view (as already spotted by @jojon above). But this cut seems either insignificant or even irrelevant, as it affects only the area in the corners, which is most likely masked or not visible anyway.

Here is an example of what is lost (and how to figure it out) for Normal FOV:


Thanks! That last illustration really helped me visualize what’s going on.


Nice illustration. Could you add a wireframe of the hidden area mesh as well, so we get a clue about what region is visible in the lenses? It should ideally outline only the overlapping regions when perspective-projected from the viewpoint.


Did you check the PP output resolution vs the non-PP one in SteamVR?
From what I remember, the vertical resolution increase ratio is greater than the horizontal one. Maybe it’s normal, but shouldn’t the increase ratio be the same in both directions?

I have added the corresponding resolutions (the recommended target resolution) for each mode to the original post. These resolutions were recorded with SteamVR SS factor at 100% and PiTool render quality at 1.0.


I thought about it (and still am), but getting the meshes from OpenVR and putting them into the model in some automated way is beyond my rendering skills at the moment. I guess I will eventually look into it, but right now I cannot really promise anything.