11-series Nvidia cards coming around July. Prepare your wallets

So, the game engine has to prepare a frame before sending it to the GPU. OK, yes, I get that, with framebuffers etc.

However, take VRWorks for instance: they have done all the hard work. If you read this bit:

"With this feature, if an engine already supports sequential stereo rendering, it’s very easy to enable dual-GPU support. All you have to do is add a few lines of code to set the mask to the first GPU before rendering the left eye, then set the mask to the second GPU before rendering the right eye. For things like shadow maps, or GPU physics simulations where the data will be used by both GPUs, you can set the mask to include both GPUs, and the draw calls will be broadcast to them. It really is that simple, and incredibly easy to integrate in an engine. "

and the important bit here is “A few lines of code…”. So yes, we really are second-class citizens. The VR industry should be shouting for this. The 8K-X NEEDS this.
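For illustration, here is a minimal sketch of what that per-eye masking could look like in engine code. This is not the actual VRWorks API: `SetGpuMask`, `RenderShadowMaps` and `RenderEye` are hypothetical stand-ins for whatever calls the engine and NVIDIA actually expose.

```cpp
#include <cstdio>

// Hypothetical stand-ins for the VRWorks GPU-mask API and an engine's
// render entry points; the real VRWorks calls differ in name and detail.
constexpr unsigned GPU_0   = 0x1;            // bit 0: first GPU
constexpr unsigned GPU_1   = 0x2;            // bit 1: second GPU
constexpr unsigned GPU_ALL = GPU_0 | GPU_1;

void SetGpuMask(unsigned mask) { std::printf("mask=0x%x\n", mask); }
void RenderShadowMaps()        { std::puts("shadow maps"); }
void RenderEye(int eye)        { std::printf("render eye %d\n", eye); }

void RenderStereoFrame() {
    // Shared work (shadow maps, GPU physics): broadcast to both GPUs.
    SetGpuMask(GPU_ALL);
    RenderShadowMaps();

    // The existing sequential per-eye loop, plus one mask call per eye:
    SetGpuMask(GPU_0);
    RenderEye(0);                            // left eye -> first GPU
    SetGpuMask(GPU_1);
    RenderEye(1);                            // right eye -> second GPU
}

int main() { RenderStereoFrame(); }
```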

So why do the game engines not have a checkbox to enable this? There must be other complications, maybe VRWorks licensing or something.

This is the most straightforward use, which, I guess, would justify SLI for more people in VR, if it were actually used by the devs. Does SteamVR support this?

The problems however are:

  • SLI seems to be getting deprecated, with less and less support in general (from both Nvidia and the devs).
  • Not many people run SLI and VR together.

[quote=“D3Pixel, post:21, topic:5591, full:true”]
…they have done all the hard work. If you read this bit:

"With this feature, if an engine already supports sequential stereo rendering, it’s very easy …[/quote]

Actually, this is where it goes wrong. For a single-GPU solution you will want to avoid sequential rendering, because parallel rendering to both targets saves a lot of workload.
Sequential rendering is therefore rather unlikely, so the nVidia hack as quoted would only offer addressing of the different GPUs, and all the work resides with the engine devs, who would have to maintain a parallel path for single GPU and add a separate, i.e. sequential, one for dual GPUs.
Actually, a Croteam developer described how they did single-pass stereo rendering for AMD, and there you can also find how they later optimized for CPUs… I’d assume something similar should be possible for nVidia as well, but it would definitely take more than a few lines of code…
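To make the distinction concrete, here is a rough sketch of the two approaches. The function names are invented for illustration; the point is only that single-pass stereo leaves no per-eye draw call behind to which a GPU mask could be attached.

```cpp
#include <cstdio>

// Hypothetical engine hooks, for illustration only.
void DrawScene(int viewCount)     { std::printf("draw, %d view(s)\n", viewCount); }
void SetViewMatrixForEye(int eye) { std::printf("view for eye %d\n", eye); }
void SetViewMatricesForBothEyes() { std::puts("both view matrices bound"); }

// Sequential stereo: the whole scene is traversed twice, once per eye.
// This is the path the VRWorks mask trick piggybacks on.
void RenderSequential() {
    for (int eye = 0; eye < 2; ++eye) {
        SetViewMatrixForEye(eye);
        DrawScene(1);
    }
}

// Single-pass stereo: one traversal emits both views at once (e.g. via
// instancing or multi-view), halving CPU-side draw-call work -- but now
// there is no per-eye draw call left to route to a per-eye GPU.
void RenderSinglePass() {
    SetViewMatricesForBothEyes();
    DrawScene(2);
}

int main() {
    RenderSequential();
    RenderSinglePass();
}
```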


What that quote was saying, in my opinion, is that if your software already supports sequential rendering (because that’s all you can do with one GPU: it renders the left eye, then the right eye), then the VRWorks solution is practically a drop-in way to take it to a parallel solution using two GPUs.


I think you may have been talking about this:

You can see that rendering both eyes in one pass saves CPU load, but it is still calculating one thing at a time rather than two things at a time, no matter whether that is each eye separately or both eyes together.

Exactly - as you can see, the “normal rendering” is what nVidia’s hack is talking about as the sequential one. What is used instead is shown in the second animation: parallel dispatching as a single pass, where both views get generated at the same time.

So in the case of your Unity example: either somebody is toying around on an indie project, or one is seriously trying to optimize using single-pass rendering. But then all the optimization in the VR code path for shaders and post-processing effects (just as this page describes) has to be reviewed and potentially reverted to make the easy nVidia hack possible, as the sketch below shows.
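Roughly, and again with invented names purely for illustration, the engine dev ends up having to keep both of these branches alive and correct for every renderer feature:

```cpp
#include <cstdio>

// Hypothetical engine plumbing, for illustration only.
enum class StereoPath { SinglePass, SequentialPerGpu };

void SetGpuMask(unsigned mask) { std::printf("gpu mask 0x%x\n", mask); }
void DrawBothViewsInOnePass()  { std::puts("single-pass draw"); }
void DrawOneView(int eye)      { std::printf("draw eye %d\n", eye); }

// The maintenance burden in one function: every feature (shaders, post
// effects, culling) must stay correct on BOTH branches forever.
void RenderFrame(StereoPath path) {
    if (path == StereoPath::SinglePass) {
        DrawBothViewsInOnePass();          // optimized single-GPU path
    } else {
        for (int eye = 0; eye < 2; ++eye) {
            SetGpuMask(1u << eye);         // route each eye to its own GPU
            DrawOneView(eye);              // old sequential path, kept alive
        }
    }
}

int main() {
    RenderFrame(StereoPath::SinglePass);
    RenderFrame(StereoPath::SequentialPerGpu);
}
```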

It’s not on the driver; it’s the engine developers who have to put days and weeks of work into it. And their decision usually goes against the additional effort in every relevant title besides Serious Sam.

That is where it goes wrong, and why most probably there is no support today for that niche of dual GPU within that niche of VR.

Yeah. I guess demand has to hit a certain level before advancements are made to make the devs’ lives easier. At the moment it all still seems like a prototype.

One of the Devs for Raw Data said back in 2016:

While I think its great that AMD and nVidia are pushing for these GPU VR innovations, these are all somewhat stopgap solutions until VR matures and standards are established to cover these innovations in a vendor agnostic way. At that point, we will see all games adopt these improvements quickly and more effectively, especially considering that is when major game engines like Unreal Engine and Unity will integrate and maintain the implementations themselves and can make sure that each new feature in each new engine upgrade plays nice with the VR extensions.

Source: https://forums.geforce.com/default/topic/981472/vr-general-discussion/-vr-sli-vr-sli-supported-games-list/

Which is what you are saying: that it is not straightforward. But that was way back in 2016.

VR demand has moved on massively since then, granted it will still be in its infancy for probably the next 5 years.

You would think that other advances like foveated rendering are a threat to Nvidia’s sales margins, so VR SLI would be high on their list, as it sells additional GPUs, whereas foveated rendering sells single GPUs, and lower-powered ones at that (AMD rejoice).

As long as we consumers get faster, cheaper and more powerful VR, I don’t care which technology matures first :slight_smile:

[quote=“D3Pixel, post:27, topic:5591, full:true”] …
One of the Devs for Raw Data said back in 2016:

While I think its great that AMD and nVidia are pushing for these GPU VR innovations, these are all somewhat stopgap solutions until VR matures and standards are established to cover these innovations in a vendor agnostic way. At that point, we will see all games adopt these improvements quickly and more effectively, especially considering that is when major game engines like Unreal Engine and Unity will integrate and maintain the implementations themselves and can make sure that each new feature in each new engine upgrade plays nice with the VR extensions.
[/quote]
Yep - the devs play the chicken-and-egg game as an excuse. Actually, it was originally stated in a Steam forum.

But IMHO the solution would be totally different: since every VR game is fundamentally at least 2x the GPU load to reach the same picture quality as its 2D monitor equivalent - especially since the FOV will always be far bigger - every VR renderer should be coded solely with a dual-GPU setup in mind. On the other end, the GPU drivers and HMD drivers could intercept these dual-GPU draw calls and convert and optimize them for the single-GPU use case. The loss would be minimal to none, and we wouldn’t necessarily have to wait two GPU generations for the same detail levels.
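As a thought experiment, here is a minimal sketch of that inversion, with an invented `DrawCall` stream and `Submit` layer standing in for a real command stream and driver: the game always records per-eye work, and the layer underneath either fans it out to two GPUs or serializes it onto one.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical sketch of the proposal: the game records per-eye work as
// if two GPUs always existed; a driver/HMD-runtime layer decides how to
// execute that stream on the hardware actually present.
struct DrawCall { int eye; const char* name; };

void ExecuteOnGpu(int gpu, const DrawCall& dc) {
    std::printf("GPU %d executes '%s' (eye %d)\n", gpu, dc.name, dc.eye);
}

void Submit(const std::vector<DrawCall>& frame, int gpuCount) {
    for (const DrawCall& dc : frame) {
        // Dual GPU: the eye index picks the device. Single GPU: everything
        // is serialized onto device 0, losing nothing but the parallelism.
        int gpu = (gpuCount == 2) ? dc.eye : 0;
        ExecuteOnGpu(gpu, dc);
    }
}

int main() {
    std::vector<DrawCall> frame = {
        {0, "opaque pass"},  {1, "opaque pass"},
        {0, "post effects"}, {1, "post effects"},
    };
    Submit(frame, 2);   // dual-GPU setup: eyes run on separate devices
    Submit(frame, 1);   // driver folds the same stream onto one GPU
}
```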

But the hardware guys have already produced a chicken; it is now up to the devs to lay the eggs.

This is pretty much the same annual schedule they have had for years. Any perceived delay is due to memory shortages; one of the 3 major chip plants burned down 2 years ago.

I agree with all that has been said here, and Nvidia could have been supporting new forms of graphics technology too, like this: http://community.openmr.ai/t/time-to-rethink-current-computer-graphic-technology/5872

But they seem not to give a shit and insist on brute-force power (more and more money for us to pay for their useless cards…) and chip optimizations that in the end give us only 20-25% more real-world performance even with a doubling of compute units, because these GPUs are fast approaching the physical limits of current technology… still, they insist on it and show us ray-tracing capability (well, almost…) xD

They are becoming the Apple of the graphics chip industry :slight_smile:
