Should Pimax implement Quadport/multiview rendering like XTAL and StarVR to make large FOV usable and to eliminate distortion entirely? (poll)

“Multi-View Rendering enables next-gen headsets that offer ultra-wide fields of view and canted displays… Multi-View Rendering doubles to four the number of projection views for a single rendering pass. And all four are now position-independent and able to shift along any axis. By rendering four projection views, it can accelerate canted (non-coplanar) head-mounted displays with extremely wide fields of view.”

We can see the Left and Right views being further split into Left-side and Right-side views to cover the outer periphery, rendered with cameras rotated farther towards the outer edges of the field of view.
Some geometry overlap between the split views (Left and Left-side, for example) is needed for seamlessly warping and stitching the two split views into a single ultra-wide view. [The overlap eliminates the appearance of tearing between the two render areas.]

“Clearly, two views do not suffice for VR headsets with ultra-wide FOV.”
(https://devblogs.nvidia.com/turing-multi-view-rendering-vrworks/)
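
To help picture the geometry, here’s a rough sketch of how four position-independent views per frame could be laid out: one main and one “side” view per eye, each with its own outward yaw. The cant angles and IPD below are made-up numbers for illustration, not anything taken from the Nvidia article, Pimax or StarVR:

```python
import math

def yaw_matrix(deg):
    """3x3 rotation about the vertical (Y) axis by `deg` degrees."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

# Hypothetical layout: per-eye positions (half the IPD) and yaw angles.
# Negative yaw turns the view left, positive turns it right.
IPD = 0.064  # metres, purely an illustrative value
views = {
    "left_main":  {"eye_offset": (-IPD / 2, 0, 0), "yaw_deg": -10},
    "left_side":  {"eye_offset": (-IPD / 2, 0, 0), "yaw_deg": -55},
    "right_main": {"eye_offset": (+IPD / 2, 0, 0), "yaw_deg": +10},
    "right_side": {"eye_offset": (+IPD / 2, 0, 0), "yaw_deg": +55},
}

for name, v in views.items():
    rot = yaw_matrix(v["yaw_deg"])
    # Forward axis of this view in head space (+Z rotated by the yaw).
    forward = (rot[0][2], rot[1][2], rot[2][2])
    print(f"{name:10s} offset={v['eye_offset']}  forward={tuple(round(f, 3) for f in forward)}")
```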

The difference from StarVR One is that nVidia draws the 4 projection views in a single rendering pass, whereas StarVR One uses 2-pass rendering (at the driver or software level, which doesn’t require per-game implementation). So it seems Multi-View Rendering is more efficient and would make rendering easier than the current distortion profile approach.

Wide FOV for VR without distortion is in fact possible, according to Sweviver:

https://community.openmr.ai/t/starvr-one-is-available-now/26922/64

However, the majority of people have said they prefer normal FOV to large FOV in this poll. (Please add your vote if you haven’t yet.)

Changing the distortion profile is always going to leave some distortion, and it hasn’t solved the root cause of the distortion to this day, which is caused by stretching a 1:1 rendered image into a 16:9 image.

So, do you think Pimax should try quadport/multiview rendering (stitching two 1:1 rendered areas together on each display for no distortion, like StarVR and XTAL use) so that large mode will be more usable, and possibly eliminate distortion entirely?

3 Likes

Pimax isn’t stretching the image in that regard. The distortion has more to do with lens design and the eye looking through the lens at different angles. Dynamic distortion correction may resolve a lot of the distortions by adjusting the profile based on knowing at which angle the eye is looking through the lens.

That being said, dividing the viewport into 2 views per eye would allow for some benefits.

If we assume the StarVR specs are true at 210 degrees wide, it has 50 extra degrees of width vs Pimax, which is 160 degrees wide on Large FoV.

For StarVR, 2 camera views per eye are needed due to it being greater than 180 degrees wide.

8 Likes

It is not for Pimax, or XTal, or StarVR to implement - it is for the developers of each application.

If there exist general drivers or injectors that intercept the calls of any game, and force the matter, without unforeseen consequences, I’d be delighted to know, but I have not seen any.

The purpose of dividing the field of view into smaller segments is not to eliminate distortions in the ultimately viewed image (save possible floating point precision issues); it is to reduce the rendering load, since things get more and more stretched out the farther out you get from 0°, so more and more pixels are rendered per degree of FOV, in the areas where they are the least needed.

After the four views have been rendered, they are reprojected together, for the display screens, into the same two views one would have had if one had only rendered two to begin with, stretched to the same correct distortions one would have had - albeit with the corrective predistortion that compensates for the distortion of the lenses applied on top.
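
To make that reprojection step a bit more concrete, here’s a minimal sketch (with a made-up 45° cant, and leaving out the lens predistortion mentioned above) of where a sample from a canted side view lands on the single main viewplane:

```python
import math

def reproject_to_main_plane(u_side, v_side, cant_deg):
    """Take a point (u, v) on a side viewplane sitting at unit distance along
    the side view's axis, and find where the same ray lands on the main
    (un-canted) viewplane, also at unit distance.  Purely geometric sketch."""
    r = math.radians(cant_deg)
    # Direction of the ray in the side view's frame (its plane sits at z = 1).
    x, y, z = u_side, v_side, 1.0
    # Rotate that direction by the cant angle about the vertical axis,
    # expressing it in the main view's frame.
    xm = x * math.cos(r) + z * math.sin(r)
    zm = -x * math.sin(r) + z * math.cos(r)
    ym = y
    if zm <= 0:
        return None  # ray points away from (or parallel to) the main plane
    return xm / zm, ym / zm  # intersection with the main plane at z = 1

# Example: the centre of a side view canted 45 degrees outward lands at
# tan(45 deg) = 1.0 on the main plane, far from that plane's own centre.
print(reproject_to_main_plane(0.0, 0.0, 45.0))   # -> (about 1.0, 0.0)
print(reproject_to_main_plane(0.3, 0.1, 45.0))   # some off-centre sample
```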

I kind of regret that I brought up the whole matter in the other thread, because I was apparently so bad at describing it that everybody seems to have walked away with a backwards understanding.

8 Likes

Maybe the reason the 1080ti is the minimum spec is because it can only use the 110-degree field of view mode in Compass in SteamVR games which don’t use the StarVR SDK, or 210 degrees using single-pass stereo with a 1:1 aspect ratio reference render stretched out to 16:9 widescreen (which creates the distortion effect), because Pascal cannot use SMP.

The fact that Compass plugs straight into SteamVR leads me to believe that StarVR is in fact compatible with all SteamVR games (but maybe only in 110-degree mode, or in 210-degree mode with 2 viewports).
Although Heliosurge seems to think Pascal can use SMP, in which case that would explain why the 1080ti is the minimum StarVR spec, and it would mean the 1080ti can render 4 viewpoints from different angles, would it not?

What remains to be seen, and what we should ask their developers, is whether the headset works in 210-degree FOV mode with 4 viewports in all SteamVR games using simultaneous multi-projection/multiview rendering if you have a 2080ti, or if 210-degree FOV mode only works with 2 viewports using a 1080ti.

Does it? Isn’t it just a settings/updates UI panel application for the parameters their runtime uses, like PiTool is to piserver?

Aspect ratio? Stretch to 16:9? This sounds eerily reminiscent of a discussion I had with another user, a while back, who mistakenly conflated unrelated things.

The view that is rendered for each eye is narrower than 210°, and less than 180° too (the left eye can not see as far to the right as the right eye, and vice versa, and the canting lets you reach farther “back”), so one viewplane per eye is perfectly sufficient for rendering to the full FOV of the StarVR, as long as one does not use any “parallel projections” mode – it is just not nearly as efficient as it could be.
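
A quick back-of-the-envelope with invented numbers (the 105°/55° per-eye split and the cant angles are purely illustrative, not StarVR’s real figures) of why one flat viewplane per eye only works out if the plane is canted along with the display:

```python
import math

def plane_halfwidth(max_angle_deg):
    """Half-width (at unit distance) of a flat viewplane that has to cover
    rays up to `max_angle_deg` away from the plane's normal."""
    return math.tan(math.radians(max_angle_deg))

# Invented per-eye numbers, just to show the trend: suppose the left eye can
# see 105 degrees outward (to its left) and 55 degrees inward (to its right).
outward, inward = 105.0, 55.0

for cant in (0, 20, 25, 30):  # plane normal rotated outward by `cant` degrees
    worst = max(outward - cant, inward + cant)  # largest angle off the normal
    if worst >= 90:
        # A non-canted ("parallel projections") plane would need to cover
        # 105 degrees to one side; tan() blows up at 90, so it simply cannot.
        print(f"cant {cant:2d} deg: cannot cover {worst:.0f} deg with one flat plane")
    else:
        print(f"cant {cant:2d} deg: half-width needed = {plane_halfwidth(worst):.2f}")
```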

NVidia’s cloak-and-daggery just makes me tired… :stuck_out_tongue:

4 Likes

Yea, placement seems very important judging from user experiences. The comfort kit is a good example of just how important placement of your face is. If Pimax can get a good idea of where the eyes are relative to the headset using 7Invensun’s eyetracking then they should be able to greatly reduce distortion or maybe even eliminate it. Besides that, hopefully they can make the IPD go down a bit lower. That’d be nice.

1 Like

I think the virtual cameras in the game itself should render out a fish-eye effect, and the screens themselves should be curved, not flat and canted. OLED makes it possible to create curved screens, and the output from the game should match this curve rather than several flat views. It would probably cut out distortion from the lenses this way too, because they would have less to ‘correct’.

From what I understand of the NVIDIA paper, it’s actually more about getting more detail into the periphery. By turning the cams outwards, the angle for the periphery becomes bigger, thus capturing more detail in the pre-distorted game render. So the final result, through the lens, has more detail in the periphery.

1 Like

@jojon is right, this is about reducing the workload. The way the perspective projection works ensures that the rendered image is observed exactly the same whether it is rendered onto one plane or onto several. It is just “placed differently” into the FOV of the observer.

Actually these two approaches are interchangeable. One can imagine an app which renders into multiple views and then displays them on one big display (the renders will be “reprojected” to the plane of the display), or an app which renders a single view but displays it on multiple displays (after the proper transformation).

From the geometry however, the perspective projection has “optimal performance” exactly in the center of the view (when looking along the view axis). The further from the center you look, the bigger the plane needed to accommodate the FOV increase, which is determined by the tangent function.

So the best would be rendering on a sphere, but unfortunately, this is not how current hardware (and software) works.
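
A tiny illustration of that tangent growth, compared against an ideal spherical (angle-linear) surface; the FOV values are arbitrary and not tied to any particular headset:

```python
import math

# How much "surface" a single flat viewplane needs, versus an ideal spherical
# (angle-linear) surface, both at unit distance from the eye, as the
# horizontal FOV grows.  Pure geometry; no particular headset implied.
for fov_deg in (90, 120, 140, 160, 170, 179):
    half = math.radians(fov_deg / 2)
    plane_width = 2 * math.tan(half)    # grows with the tangent, explodes near 180
    sphere_arc = math.radians(fov_deg)  # grows linearly with the angle
    print(f"FOV {fov_deg:3d} deg:  flat plane width {plane_width:7.2f}   sphere arc {sphere_arc:.2f}")
```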

3 Likes

I don’t like polls that ask you “Do you want this better thing for your Pimax headset?”. Of course everyone always wants an improved headset. I hesitate to vote because it’s like, if Pimax implements this, is it going to take longer for us to receive our 8K X? Will this method reduce the performance of the headset? I haven’t tried the headset yet, will any distortions even bother me? There’s a lot more nuance than a simple yes or no. That’s why your poll only has 3 votes.

2 Likes

I did notice that quote, and you wouldn’t believe how ridiculous I, as an utter illiterate, feel about contradicting an NVidia paper.

I suppose one can look at the matter from a multitude of angles (pun a happy accident), and there are several “stages” to the whole thing… I am sure there must be some way of interpretation, that makes both statements true… :stuck_out_tongue:

As you rightly point out: the “pillow” shape distortion made by the lens stretches things out toward its edges and compresses them in the centre, reducing spatial detail density in the periphery and increasing it in the middle; we counter this with a “barrel” shape distortion in software, that does the exact opposite.
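
Just to make that concrete, a generic radial counter-distortion of the textbook polynomial kind looks something like the sketch below; the coefficients are placeholders, and a real headset profile is rather more involved than this:

```python
# A generic radial "barrel" counter-distortion of the polynomial kind commonly
# described for VR compositors.  The coefficients are arbitrary placeholders,
# not Pimax's (or anyone's) real lens profile.
K1, K2 = 0.22, 0.05

def distortion_sample_coord(u, v):
    """For a screen pixel at normalised (u, v), measured from the lens centre,
    return where to sample the undistorted rendered image.  Edge pixels sample
    farther out, so the rendered periphery gets squeezed into the screen's
    periphery, and the lens's pincushion stretch then undoes the squeeze."""
    r2 = u * u + v * v
    scale = 1.0 + K1 * r2 + K2 * r2 * r2
    return u * scale, v * scale

# Near the centre the mapping is close to 1:1; toward the edge it reaches
# noticeably farther out into the (oversized) rendered image.
for r in (0.1, 0.5, 0.9):
    print(r, distortion_sample_coord(r, 0.0))
```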

This is one matter, which is separate from the two projection ones, although they do all inform one another when designing the HMD and software, and “feed” one another detail.

(Let’s also be clear that the matter of distortions from “pupil swim” is inherent, to various degrees, to all current consumer grade HMDs, and is not going to go away entirely without more complex optics, and/or dynamic distortion correction informed by eyetracking (thinner (preferably zero thickness) lenses are one way to reduce the problem, which is the reason we have fresnels); There is no one perfect distortion profile that produces a correct result no matter which way we are looking through the lens - one has to make one tradeoff or another.)

…so we have two projections: One from the game world, toward the game camera viewpoint, intersecting the camera viewplane along the way.

…and one from the screen, through the lens, into the viewer’s eye.

The two need to line up perfectly, so that any ray of light that enters the eye comes in from the direction it would, if that eye were in the game world, at the game world location of the game camera viewpoint.

The effects of the lens complicate things, but momentarily overlooking that important matter, for simplicity’s sake: The viewplane rendered should be a perfect match with the real world screen, in its spatial relationship with the eye/viewpoint; it is our “window” into the game world, or “2D cutsection” if you like - our lightfield rectangle sliced out of the air in the gameworld, albeit crippled by being only two dimensional, with a single given projection.

…so if I see a tree far out in my periphery, the pixels that depict it need to be in that direction; they will be on the screen, askance from my eye, where the rays of light intersect the flat viewplane, after bouncing off the tree and racing one another toward my eye.

This will make the view look correct, as long as my eye is in front of the screen, in the same location (and rotational relationship) as the camera viewpoint is behind the viewplane – it is only when I move my eye, and look at the picture from another angle, that the stretching toward the edges becomes apparent - I have broken the conditions of the projection, and the screen is now just another photo in a picture frame, in my real world space, and I can see the effect that projecting onto the viewplane at a very oblique angle has.

Like with so many things VR, one has to think in angles, rather than two-dimensional rectangles: Pixels per Degree, rather than Pixels per Inch. If I send a vector out from my eye, through the centre of a pixel on a screen, onto the surface of an object, there will be more such rays for each degree of field of view the farther they go from perpendicular to the screen, garnering more detail: more pixels sampling, and representing, more points on the object surface. I have not yet managed to figure out precisely how the writers of the paper are reasoning when they write the opposite, but maybe they are relating it to one of the other things I have mentioned, the results of a combination thereof, or some other technical limitation, perhaps one pertaining to small fractions, or something…
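
Putting some made-up numbers on that (an arbitrary 140° plane and 2000 pixels across, not any actual headset’s figures), here’s roughly how many uniformly spaced plane pixels fall into each degree of view at increasing angles off the axis:

```python
import math

# How many pixels of a flat, uniformly sampled viewplane land in each degree
# of view, at different angles off the lens/camera axis.  The FOV and
# resolution below are arbitrary, just to show the trend.
fov_deg = 140.0
pixels_across = 2000
half_width = math.tan(math.radians(fov_deg / 2))    # plane at unit distance
pixels_per_unit = pixels_across / (2 * half_width)  # uniform on the plane

for angle_deg in (0, 20, 40, 60, 69):
    # Plane distance covered by one degree of view at this angle:
    a0, a1 = math.radians(angle_deg), math.radians(angle_deg + 1)
    strip = math.tan(a1) - math.tan(a0)
    print(f"{angle_deg:2d}-{angle_deg+1:2d} deg off-axis: "
          f"{strip * pixels_per_unit:6.1f} pixels rendered for that degree")
```

Same plane, same pixels, yet the outermost degree gets several times more of them than the one straight ahead.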

…anyway: The takeaway is that for multiple reasons (lens distortion, flat plane projection), detail density is not uniform across the view. The pillow distortion of the lens actually mitigates the projection stretching somewhat, in some aspects, but you still have many more pixels for each degree of field of view in the periphery, than down the axis of the lens, where they benefit you the most, and you render many more there.

The problem here becomes that (…barring the various VRWorks features, and equivalent), the way we generally do rasterization, we can not decrease the bitmap render resolution only in the periphery of the view - all screen pixels, and all bitmap pixels, are of a uniform size, and equidistant in their matrix layout, and to render less in the periphery, we inherently render less in the centre too, along with it, wasting the fortunate upside to the pillow distortion, and blurring the centre view.

We also get to the point where all that surplus detail we have rendered in the periphery gets “squished” by the barrel distortion. Here we run into something of a sampling rate conundrum, where, for performance reasons, the algorithm will only take a small number of texel samples from the source texture for each output pixel on the bitmap that goes to the screen, but is absolutely inundated with rendered detail to pick from, which will for the most part simply be left untouched. This leads to an additional degree of aliasing, from swathes of the source being skipped, right in the periphery, where we are most sensitive to flickering. :7
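
A rough sketch of that conundrum, reusing the same placeholder barrel polynomial and assuming a purely hypothetical 1.5x supersampled render target; near the lens edge, neighbouring screen pixels end up sampling spots in the source that are more than a texel apart, so anything between the taps is skipped:

```python
# Rough sketch of the "sampling conundrum": with a made-up barrel polynomial
# (same arbitrary K1, K2 as above) and a made-up 1.5x supersampled render,
# how many source texels lie between two neighbouring screen pixels?
K1, K2 = 0.22, 0.05
supersampling = 1.5   # hypothetical render-target scale vs. the screen

def sample_radius(r):
    r2 = r * r
    return r * (1.0 + K1 * r2 + K2 * r2 * r2)

screen_pixels = 2000          # across the lens, arbitrary
step = 1.0 / screen_pixels    # one screen pixel, in normalised radius

for r in (0.1, 0.5, 0.9):
    texel_step = (sample_radius(r + step) - sample_radius(r)) / step * supersampling
    print(f"r = {r}: adjacent screen pixels are {texel_step:.2f} source texels apart")
```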

By splitting the view in two, and turning the lateral camera-half outward, so that the now two viewplanes become like the old single one folded down the middle, we reduce the angle, for two narrower camera frustums; to specify: these are the angles from the camera viewpoint to the edges of the viewplane, and nothing else.
This way, we render less detail per degree of field of view out in our periphery, because since the half-viewplane faces the viewpoint, instead of stretching farther and farther away, the periphery is less distant within its half-frustum than it would have been, reducing the disparity of (game world…) sample rate across both the bitmap and the total per-eye FOV, and saving a lot of unnecessary work.
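
Back-of-the-envelope, with an arbitrary 140° horizontal FOV and an arbitrary on-plane pixel density, counting only the horizontal direction: folding the view into two canted half-planes covers the same field of view with roughly half the pixels, while keeping the same worst-case detail straight down each plane’s axis:

```python
import math

# Pixel budget, per eye, for covering the same horizontal FOV with one flat
# viewplane versus two canted half-planes, while keeping the same (lowest)
# pixel density straight down each plane's axis.  Numbers are arbitrary.
fov_deg = 140.0
density = 364.0   # pixels per unit of plane length at unit distance (arbitrary)

# Single plane: covers +/- 70 degrees around the view axis.
single = 2 * math.tan(math.radians(fov_deg / 2)) * density

# Split: two planes of 70 degrees each, canted +/- 35 degrees, like the old
# plane folded down the middle; each only spans +/- 35 degrees of its own axis.
split = 2 * (2 * math.tan(math.radians(fov_deg / 4)) * density)

print(f"single plane : {single:7.0f} pixels across")
print(f"two canted   : {split:7.0f} pixels across  "
      f"({100 * (1 - split / single):.0f}% fewer for the same worst-case detail)")
```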

Hmm, it strikes me, right now, that the matter that leaves us needing to render larger vertical views, and not just horizontal, is probably going to rear its ugly head with Multiview rendering as well, one way or another…

I am not a fan of what I see on the frame shot from the quoted article… There should be no need, nor desire, for overlap between the views. That, together with the words, appears indicative of them applying a panoramic photo-stitch algorithm, which is approximative, and wastes resources on rendering the overlapping parts, as opposed to reprojecting viewplanes that line up, which should yield more accurate results…

…ooo-kay… I believe I lost focus in a hideously tragic way, there… Sorry about all the blathering, and for going way below the paygrade of the listener (if any remains :P).

3 Likes

That is a bold statement :wink:.

This is true not because (or not mostly because) you moved your eye, but because while your eye moved, it moved significantly relative to the “photo”, because the “photo” is very close to the pupil, so it suffices just to turn the eye and it suddenly becomes offset.

If you however have a projection plane that is large and far from the eye, so your movement will be less important, the photo will hold the original 3D perception quite well. What I had in mind is what the producers of The Mandalorian used for the shooting (this was really a brilliant idea and probably something we may see eventually enter other areas of VR).

What I am trying to say is that the currently available headsets suffer from the fact that the eye is not a perfect “pin hole” camera and is not fixed at one point in space, but moves around quite a lot and changes its aperture as well. But it is also equally important that those eye changes are relatively significant compared to the positions (or characteristics) of the other parts (of the equipment) of the visualization pipeline.

It is true that towards the periphery (of the viewing frustum) the area on the projection plane representing one angular unit grows with the tangent. So technically, assuming the rendering in the projection plane is homogeneous, i.e. with the same planar density, the angular density must be increasing. My (admittedly) personal feeling is that when looking at things at those relatively small angles, they do not actually look right, not because there is a problem with the optical model, but because the rendering simply does not work well.

You might want to look at it the other way: the objects in the periphery which look “normal” in the lateral camera view get unnaturally stretched over the single “frontal” plane in classical single-view rendering. Then you have to look at this unnaturally stretched image at some precisely given angle to “unstretch” them back into their normal shape. Whether it brings more (or less) detail into the equation is up to each one’s point of view :slight_smile:.

2 Likes

Hmm, I’d better lean into that…

Yes - I was basically referring to when we see a frame from one eye’s screen image captured, and shown on a monitor. :7

Umm, Mandalorian? Sorry, I’m not quite “in” on what we are talking about… sounds kind of like you are speaking of matte paintings used when shooting the show?

…but then: “VR”… Is there a Mandalorian VR experience? Remembering something from talk around previous Star Wars experiences; Are you referring to the use of baking down of (EDIT: …not all that…) distant geometry and shading to something that is simple, and cheap to render, as long as one does not stray far enough to peek behind the cardboard walls… Google-something-named-after-one-artist-or-other-or-something…? Matisse…? Chagall…? :stuck_out_tongue:

EDIT2: Seurat! That was it!

2 Likes

There was a TV show last year called Mandalorian. They shot it in a very special way, using a stage with the background projected in real time, rendered in Unreal Engine :slight_smile: https://ascmag.com/articles/the-mandalorian

3 Likes

Aha! That’s neat - much like the virtual sets used on TV news. :7

EDIT: Began to read the article …or more like a spatially tracked, and real-time rendered version of the old: “Look, we’re driving in a car, totally not sitting still in a studio - do you not see the world zip past through the side window?” trick… :slight_smile:

2 Likes

As others said, the biggest problem is game devs dragging their feet. We don’t even have SPS/MVR/SLI/LMS in many games. However frustrating it is, I cannot see devs implementing multiview, as often not even the relatively simple things listed above are implemented.

3 Likes

I like all these cool techs and wish they were in every game - I’d say it’s NVIDIA / Epic Games / Unity not having a clear path to implementation of those features. There are often fancy builds with extra fun stuff, but there’s a lot of fiddling around with building Unreal from source, outdated versions, and a lack of documentation. Maybe there are more details than the last time I looked. Could you point me to a handy guide on implementing these things for Unreal Engine?
Ray tracing (RTX), on the other hand, is a checkbox ’cuz it’s a sales point.

1 Like

Not at all. It opens the subject which will allow greater understanding for others.

2 Likes

Not familiar with the Unreal/Unity side of things, but you can download VRWorks from Nvidia if you’re curious. I think the same principles apply anywhere.

Thing is, “there’s no clear path”, as you say, because VR really needs different approaches, not bolt-on changes. I am not a game dev, but graphics is my hobby. One example is SPS, or even the non-parallel cameras required by Pimax. If rendering was implemented with those in mind, those things are easy. But adding them later to something that was not done for VR is not always easy.

Really like this presentation: GDC Vault - Advanced VR Rendering from 2014. Still sad that in 2020, certain racing games have none of those VR-specific approaches, just brute-force rendering twice…

1 Like

The Normal vs. Wide FOV poll isn’t very useful without the reasons for the preference. Personally I prefer Wide FOV whenever possible, but when I can’t get the performance I prefer Normal. Some people may wish to be able to use Wide but not be able to due to nausea. And so on.

4 Likes