Is On-HMD/Internal Reprojection possible?

Why the heck do we do so much work to double frame rates with “motion smoothing”/ASW/reprojection BEFORE we send the frames over bandwidth-limited DP or, more importantly, wireless?
I wonder, could the XR2 be used to internally double refresh rate with reprojection?
I realize depth/motion is not sent via the cable/wireless, and that motion vectors are best handled by tensor cores, so I’m not talking about ASW or anything.
I’m talking about a frame-rate doubler (which I’m sure it already would do if receiving a 60 Hz signal on a 120 Hz display and so on…) but with inter-frame rotation - call it on-board or internal reprojection.
In that case, all the XR2 would have to do is add rotation based on HMD motion, like old-school reprojection?

Probably with some added latency I suppose, which is a hurdle for sure, but… 150-180 Hz would be a cakewalk given a 75 Hz or 90 Hz signal.
As an added bonus, it would effectively be independent of the game software and your PC hardware, which usually takes an overhead hit when using motion smoothing.
It just seems like they’re going about it in an inefficient manner - especially given that they’re butting up against the limitations of DP and wireless while there’s a relatively idle XR2 sitting right there?
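
(For the curious, the back-of-the-envelope numbers I’m picturing, just arithmetic and nothing headset-specific:)

```python
# Back-of-the-envelope numbers for on-HMD frame doubling.
# Pure arithmetic; says nothing about how a real compositor schedules frames.
def doubling_numbers(source_hz: float):
    frame_ms = 1000.0 / source_hz   # interval between real frames
    output_hz = source_hz * 2       # one reprojected frame inserted in between
    synth_offset_ms = frame_ms / 2  # the synthetic frame sits roughly halfway
    return output_hz, frame_ms, synth_offset_ms

for hz in (75, 90):
    out_hz, frame_ms, offset = doubling_numbers(hz)
    print(f"{hz} Hz in -> {out_hz:.0f} Hz out; real frames every {frame_ms:.1f} ms, "
          f"synthetic frame ~{offset:.1f} ms after each real one")
```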

Is this what they meant by “Hybrid rendering”?
Does Tobii have a hand in this arena?

Thanks for any clarifications/educations as I admittedly don’t have the best grasp of these concepts beyond that of an over-enthusiastic end-user.


Headsets already do “old school reprojection” that way: spatial reprojection (rotational reprojection if depth isn’t available) is applied on the headset at the last moment (sometimes called “late reprojection”). But that won’t give you a smooth experience, because anything that moves in the scene (animated elements) will look choppy due to motion discontinuity.
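
For illustration only, here is roughly what rotation-only reprojection boils down to, as a CPU-side sketch (NumPy/OpenCV standing in for the compositor shader; the pinhole intrinsics `K` and the pose matrices are assumptions, and a real compositor also folds in lens distortion):

```python
import numpy as np
import cv2  # only used here for the image warp

def rotational_reproject(frame, K, R_render, R_display):
    """Warp `frame`, rendered at head rotation R_render (camera-to-world),
    so it matches the newer head rotation R_display.
    Translation and scene animation are ignored, which is exactly why
    rotation-only reprojection can't hide motion inside the scene."""
    # Homography taking a display pixel to the render pixel that saw the same ray.
    H = K @ R_render.T @ R_display @ np.linalg.inv(K)
    # WARP_INVERSE_MAP: H maps destination (display) pixels back to source pixels.
    return cv2.warpPerspective(
        frame, H, (frame.shape[1], frame.shape[0]),
        flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```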

What is more interesting is doing temporal reprojection (motion reprojection) on the device itself. I’m pretty sure this is what SSW (Synchronous SpaceWarp) is with Virtual Desktop. It seems to work pretty well (haven’t tried it, but Googling gives some praise).

This shouldn’t be too much of an issue: depth and motion vectors can be downscaled while remaining quite usable for either depth or motion reprojection. The key issue, and the reason it’s not worth doing, is that almost no games submit them anyway. MSFS is the exception, but it’s almost literally the only game that even submits depth nowadays.


Also, idk what you meant here. Using Motion Vectors for Motion Reprojection is actually quite cheap: it’s a fairly simple shader that would easily run even on an embedded GPU. You basically use a grid mesh to cover your source image and transform each vertex based on the corresponding motion vector.
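
Roughly what that grid warp does, sketched on the CPU (the coarse motion-vector grid gets interpolated up to full resolution, which is what rasterizing the grid mesh gives you for free on a GPU; function and parameter names here are made up for illustration, not from any real runtime):

```python
import numpy as np
import cv2

def motion_reproject(frame, mv_grid, alpha=0.5):
    """Extrapolate `frame` forward by `alpha` of a frame interval.
    mv_grid: (grid_h, grid_w, 2) float32, per-cell motion in pixels since the
    previous frame (a coarse grid, e.g. one vector per 16x16 pixel block)."""
    h, w = frame.shape[:2]
    # Upsample the coarse grid to one vector per pixel; bilinear interpolation
    # here plays the role of the rasterizer interpolating across the grid mesh.
    flow = cv2.resize(mv_grid, (w, h), interpolation=cv2.INTER_LINEAR)
    # Each output pixel looks *back* along its (scaled) motion vector to find
    # where that content was in the source frame.
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = xs - alpha * flow[..., 0]
    map_y = ys - alpha * flow[..., 1]
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```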

Generating Motion Vectors post-rendering is very expensive. But it can be done on the PC, and the resulting texture is relatively small (like under 512x512).

Ideally, applications would write motion vectors during rendering (so even the expensive Motion Estimation phase isn’t needed) as explained here (see Motion Vector Generation). This is almost free, and most engines already do it for TAA/DLSS/Motion Blur or a variety of effects. Only MSFS submits them today, and no platform, not even Oculus, can actually use them (platform support for it isn’t present on PC, only on mobile).
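
To make “writing motion vectors during rendering” concrete: the vertex shader projects each vertex with both this frame’s and last frame’s matrices and outputs the screen-space delta. Here is the same math in NumPy, purely as an illustration (not any engine’s actual code):

```python
import numpy as np

def per_vertex_motion_vector(pos_object, mvp_curr, mvp_prev, screen_size):
    """pos_object: (4,) homogeneous object-space position.
    mvp_curr / mvp_prev: 4x4 model-view-projection matrices for the current
    and previous frame. Returns the screen-space motion vector in pixels
    (ignoring the Y flip between NDC and pixel coordinates)."""
    def to_screen(mvp):
        clip = mvp @ pos_object
        ndc = clip[:2] / clip[3]                           # perspective divide
        return (ndc * 0.5 + 0.5) * np.asarray(screen_size, dtype=np.float64)
    # The per-pixel motion vector buffer is this delta interpolated by the
    # rasterizer; it costs roughly one extra matrix multiply per vertex.
    return to_screen(mvp_curr) - to_screen(mvp_prev)
```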


That’s interesting, so how would those motion vectors (if done in MSFS for example) be passed to the HMD though? And for games that don’t give the motion vectors, what about the XR2 chip - can it do the motion estimation phase on the HMD, or is it too “late” at that point?

I just can’t get over the idea that engineers are struggling with wireless and bandwidth, and then Nvidia develops DLSS 3.0, whifff!


Maybe DLSS 4.0 will address VR (in my dream Nvidia makes an HMD with G-Sync and somehow it’s both cheap and license-free hahah)


You’d need to compress them (and likely downscale them) first and send them over wireless just like the color buffer (fancy name for the stereo view).
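
Downscaling that motion-vector texture is cheap, e.g. a simple block average before encoding (just a sketch; the real streamer would pick its own resolution and format):

```python
import numpy as np

def downscale_motion_vectors(mv, block=8):
    """mv: (H, W, 2) float array of per-pixel motion vectors, with H and W
    divisible by `block`. Returns one averaged vector per block, so a
    4096x4096 buffer becomes 512x512 at block=8."""
    h, w, _ = mv.shape
    return mv.reshape(h // block, block, w // block, block, 2).mean(axis=(1, 3))
```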

Apparently SSW uses motion vectors estimated on the XR2. I can’t imagine this will get you anywhere near the quality of something like Nvidia Optical Flow done on an RTX card though…
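
Just to illustrate what “estimating motion vectors from the color frames” means, here is a classical dense optical-flow call in OpenCV (a stand-in for illustration only, not what SSW or Nvidia Optical Flow actually run, and the quality/performance trade-offs differ a lot):

```python
import cv2

def estimate_motion(prev_bgr, curr_bgr):
    """Dense optical flow between two frames, returned as an (H, W, 2) array
    of per-pixel motion in pixels. Farneback flow stands in for hardware
    motion estimation."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```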


How cool would it be for AMD to beat them to the punch in VR?


Not gonna happen anytime soon IMO.


Looks like it’s back to the gym (so I can live long enough to see it). PSVR2 could have done it: lost the cable and not taxed the PS5 for the smoothing… PSVR3 on PS6, I suppose.


It’s only likely a possibility if AMD leveraged Sony to share some of the VR advancements they use on PlayStation VR, i.e. AMD could have had their own reprojection of sorts that Sony was using.

Unfortunately, @mbucchia is right: even if AMD did do something open like FSR, the problem has often been game devs not adopting much unless it is easy to implement. Plus, Nvidia likes to give incentives that lead to lopsided development at times.


I remember around 2016 somebody was doing some work with light fields stored in textures, and I keep thinking that would take up a ton of hard drive space, but it would make reprojection and motion-vector processing so much more accurate at lower computational cost. If you had a texture holding a light field of the game geometry, occlusion artifacts could potentially be nonexistent.
