Let's wait for official benchmarks
R.I.P. 3090 …
3080 Ti at $999 in 3… 2… 1…
Omg AMD nailed it! 6900XT price is just what I hoped for.
Also the promised performance gains using 5000 series CPU and AM4 have convinced me to build an entire new AMD build for the 6900XT.
So if everything goes as planned, I will keep the 9900K rig for 3090 and go 5900X/5950X with 6900XT.
And then I will eat noodles for the next 12-16 months.
Lots of room for VR benchmarks at least
Not having done much other research on that card yet…
Let’s look at some hard numbers.
GeForce RTX 3090
28.3B Transistors
35.68 TFLOPS / 2 = 17.84 ‘real’ TFLOPS, same 35.68 TFLOPS at half-precision
10496 Cores / 2 = 5248 ‘real’ Cores
162 GP/s Fillrate
555 GT/s Fillrate
Radeon RX 6900 XT
26.8B Transistors
23 TFLOPS, 46 TFLOPS at half-precision
5120 Cores
288 GP/s Fillrate
720 GT/s Fillrate
Both cards might OC to somewhere around ~2200 MHz with ‘conventional’ water cooling chilled to −20°C by a slightly modified window HVAC unit using R410a.
An obvious conclusion is that NVIDIA ‘wasted’ a good bit of silicon with dubious performance ‘hacks’, such as doubling the single-precision TFLOPS, while AMD only pushed up raw number crunching.
Depending on which of these tradeoffs are important, multiple scenarios are plausible.
- Transistor count: AMD at 95% of NVIDIA.
- Half-precision TFLOPS: AMD at 130% of NVIDIA.
- Cores: AMD at 98% of NVIDIA.
None of these give AMD the >>2x total improvement over the RTX 2080 Ti needed to confidently disable Smart Smoothing (1.5x × 1.3x = 1.95x). The fillrate gives a purely theoretical difference of 130% or 175%, but benchmarks of the RTX 3090 vs the RTX 2080 Ti have already shown this is not relevant to plausible VR use cases.
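For anyone who wants to play with these numbers, here is a minimal Python sketch that recomputes the ratios above from the spec figures quoted earlier (variable names and rounding are mine; these are marketing specs, not measured performance):

```python
# Rough sanity check of the ratios above, using the spec figures quoted
# earlier in this post. Marketing numbers, not benchmarks.
nvidia = {"transistors_b": 28.3, "half_tflops": 35.68, "real_cores": 5248,
          "pixel_gp_s": 162, "texel_gt_s": 555}
amd = {"transistors_b": 26.8, "half_tflops": 46.0, "cores": 5120,
       "pixel_gp_s": 288, "texel_gt_s": 720}

print(f"Transistors:       AMD at {amd['transistors_b'] / nvidia['transistors_b']:.0%} of NVIDIA")
print(f"Half-prec TFLOPS:  AMD at {amd['half_tflops'] / nvidia['half_tflops']:.0%} of NVIDIA")
print(f"Cores:             AMD at {amd['cores'] / nvidia['real_cores']:.0%} of NVIDIA")
print(f"Pixel fillrate:    AMD at {amd['pixel_gp_s'] / nvidia['pixel_gp_s']:.0%} of NVIDIA")
print(f"Texture fillrate:  AMD at {amd['texel_gt_s'] / nvidia['texel_gt_s']:.0%} of NVIDIA")

# Headroom needed over a 2080 Ti to confidently drop Smart Smoothing:
# ~1.5x generational uplift times ~1.3x safety margin.
print(f"Needed headroom over 2080 Ti: {1.5 * 1.3:.2f}x")
```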
Bottom line: AMD does not have Smart Smoothing, does not have Foveated Rendering, and does not have the performance to do without these features.
Then there is the possibility that such an edge in number crunching may be followed up with something better from NVIDIA.
@SweViver
I recommend you save your money and avoid AMD for now based on what seems to be known so far.
Some leaks say they have pushed clocks above 2500 MHz. Who knows, time will tell. The claimed 2250 MHz is just the base before any OC. Either way, it's really nice to see AMD pushing the limits with a $500 cheaper card (in Sweden probably $700 cheaper on average). I'm tempted to see what AMD can do for VR, and that's why I'm building a second rig. Especially paired with a Ryzen 9, there might be some additional power to be squeezed out.
CUDA cores cannot be directly compared to Stream Processors. It's hard to guess the real difference at this point.
Nobody was really expecting a 2x performance gain with any of these cards. If it's +60% or even more, then it's a huge win. It's like jumping 2-3 generations, considering the previous 9xx, 1xxx and 2xxx series. Whatever performance gain we can get in VR is a big step for 4K headsets such as the Pimax 8KX.
Honestly, there's not a single VR game out there that really needs 2x performance over the 2080 Ti to run smoothly. At least not for the 8KX running at 75Hz (and hopefully 90Hz one day).
The only titles that would need 2x are VR simulators like DCS, FS2020 or XP11, which are bottlenecked by single-core CPU utilisation and poor optimization. Here's where proper Vulkan or DX12 implementation will be our only hope. It's not Nvidia's nor AMD's fault that developers even today (including MS) are releasing single-core-dependent simulators. And in the end, I'm not building a new PC solely for DCS.
Yes, that's true. Although I wouldn't be surprised if we see alternatives, especially since AMD rules the console market now and new technology and improvements can be more easily translated to the PC scene.
Sure. That's how it is these days. You buy something today and it's old tomorrow. But I hate playing the waiting game. Life's too short. There will always be something better. And when it's needed, I'll upgrade again, I guess.
I sold a whole bunch of DSLR cameras/lenses and other stuff lately just to afford this. I think the 5900X/5950X CPU upgrade seems much more promising than going from the 9900K to a 10900K, and I guess it was time to upgrade anyway. Either way, I will keep my Intel rig as well. Just in case…
Btw first part just arrived!
Maybe. But GPUs are basically one-number-per-cycle-per-ALU arithmetic machines. They aren’t CISC like arguably bad x86/x64, where IPC is likely to make a real difference.
Roughly, FLOPS ≈ 2 × Cores × Clock (the factor of 2 because a fused multiply-add counts as two ops per cycle). And pretty much the same for half-precision, integer, etc., unless one of these is badly neglected (like NVIDIA neglected integer math). For both the NVIDIA RTX 3090 and the AMD RX 6900 XT, that holds approximately, so there really doesn't seem to be a lot of fancy stuff happening there.
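As a minimal sketch of that rule of thumb (assuming the commonly quoted boost clocks of roughly 1.70 GHz for the 3090 and 2.25 GHz for the 6900 XT; treat both as approximations):

```python
# Back-of-the-envelope check of FLOPS ≈ 2 × Cores × Clock.
# Clocks are the commonly quoted boost figures, not guaranteed values.
def peak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Theoretical single-precision peak in TFLOPS (FMA = 2 ops/cycle)."""
    return cores * clock_ghz * flops_per_cycle / 1000.0

print(f"RTX 3090:   {peak_tflops(10496, 1.70):.1f} TFLOPS (marketing: 35.68)")
print(f"RX 6900 XT: {peak_tflops(5120, 2.25):.1f} TFLOPS (marketing: 23)")
```

Both headline numbers fall straight out of the formula, which is the point: there is nothing fancy hiding in there.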
For this generation, yes. However, with the delays in getting through the last few generations, and the loss of multi-GPU, for VR simulation users like us it is a massive loss overall.
I am also not fond of NVIDIA proprietary drivers. Those cause me unending pain with Linux.
There are. Not many. Some of the community maps made for PavlovVR are near the limits.
It's rare, but it's like a hazard. Ideally, you want somewhat better margins, or features like FFR/DFR, Smart Smoothing, etc., to be sure irritating frame dropping won't hit you.
The alternatives would have to be on the CPU side, if that is even possible, and probably with severe deficiencies even if the CPU can do it. Smart Smoothing, AFAIK, is based on the GPU's real-time H.264 hardware-encoder motion-compensation silicon. Without enough of those special cores (which AMD seems not to have), we are basically out of luck.
Console games will just be optimized not to need DFR/FFR, Smart Smoothing, etc.
Couldn’t agree more.
Might want to save a bit of cash though. I am very serious about building that thin helium-filled externally chilled PC. With a cryocooler, it will be really fast.
By the way, I hope you are going with custom loop water cooling. You will be missing a crucial amount of performance without it.
Don't forget to factor in that Nvidia does have waste, as RT cores are dormant if not doing ray tracing, whereas AMD cores can do other things and are not tied to a single-use scenario. Unless this has changed in the 3000 series.
However, as many have already stated, it's best to wait for real-world benchmarks, especially with architecture changes that might not be used by older game engines and such.
VRS is said to be on both AMD's roadmap and DX's, if not already complete, so… FFR is possible. And there are better things, like the Oculus SDK having several agnostic FFR methods. In-game support trumps upconverting.
Valve added motion smoothing to AMD cards. I am sure Pimax will do the same.
Not all AMD cards. IIRC, it still only works reasonably with Vega cards, and still tends to only sometimes work. Seems like the motion compensation feature is a ‘professional’/‘workstation’/renderFarm/CAD thing for AMD.
Indeed.
> An obvious conclusion is that NVIDIA ‘wasted’ a good bit of silicon with dubious performance ‘hacks’, such as doubling the single-precision TFLOPS, while AMD only pushed up raw number crunching.
However, by other metrics, probably not so much that AMD can make up for the probable lack of Smart Smoothing, FFR, etc. And then there is the fact that NVIDIA had this stuff first, so it may be a while before things are good and stable with AMD.
Of course; why would they add support to old cards that are EOL?
I disagree, especially with mentioning things that old cards lack. Motion smoothing, as we already discussed, has been added to Vega and newer. VRS has also either been added or is coming to both DX and AMD. So FFR upconverting is more likely than not to happen.
Oculus already has baked-in support for native application FFR, if a dev uses it, that is agnostic to GPUs. Several flavors, in fact, for different types of applications… i.e. CPU-bound.
Radeon VII was ‘Vega and newer’, hardly EOL, and I was burned by that card.
Well, we know Pimax has been painfully slow getting things working at times with new AMD cards. AMD did have their share of issues getting Vega working with Pimax; however, it didn't help that AMD was having trouble getting a Pimax headset to test with until around December, when I made a deal with one of their advisors to trade my first 5K+ for my Ryzen CPU/mobo/RAM upgrade. The advisor was surprised that my R9 390 had no issues running the Pimax 8K and 5K+, but did cite that it was an underutilized card.
The Radeon VII was not, IMHO, a good card to invest in, being another set of architecture changes. Much like the 2000-series Nvidia cards might not have been a good investment, especially at that price point, which just taught Nvidia that customers are willing to be gouged.
What is a shame, though, is that Intel blocked Nvidia from developing their own PC CPU.
AMD also seems better with their Linux support, as proprietary bits can work with open source via the kernel driver. Giving a nod to Linus' Nvidia salute.
I bought a 5k+ later from a fellow backer.
But in reality that just means only a handful of games will support it… Because the developers won’t care…
Seems like we’ve been here already (PP), right?
Yes but only one pack of noodles a week😀
I hope you need to lose weight
Indeed, until it gets traction. Which is more likely than things like VRWorks and other GPU-specific proprietary SDKs.
The main issue at present is that we need more agnostic support in other platforms, i.e. OpenVR, DX, & Vulkan, vs just having Oculus.
Got my Asus ROG Strix 3090 OC today.
Coming from the Asus ROG Strix 2080 Ti OC, I got a 58% increase in average fps in DCS/VR with the current settings I used with the 2080 Ti.
Lowest frame rate increased 76.5%!
This was during the DCS VR benchmark posted on digitalcombatsimulator's site, using my Reverb Pro. Haven't configured the newly arrived 8KX yet, but things look good for it.
This is a higher increase than I expected!
If this is correct (spec-wise) and the 3080 Ti is priced at $999 with performance between the 3080 and 3090 (there's not a big gap left between them) and 12GB of VRAM, it doesn't make sense for me to go for the 3090.
I don't mind paying a premium, but not 50% more for what? A 5% performance difference?
The 3080 is crippled by its 10GB VRAM, so that's a no-go for me, but 24GB seems to be a waste.
I don’t believe I will see it utilized before buying a 4000 series card anyway…
And I just can’t go from a 2080Ti with 11GB to a 3080 with 10… That’s a downgrade…
EDIT: It still seems to be 8nm Samsung though. Guess we’ll have to wait for the 4000 series (or Super cards?) for 7nm?
new (internal) benchmarks