And it’ll run at 24Hz…
lol… but let’s just talk about those production values. Damn Jensen knows how to put on a show.
Did you hear that… 2040 confirmed. Nvidia Holodecks
Imagine Pimax Day had the same production value?
@PimaxUSA next Pimax day pull a 16k out of your oven.
Ohhh you know you will
The true question is, will it be an Nvidia FE or an Asus version?
AMD just “sharted” themselves
Is the bold text not cynical enough?
The entire point of this thread was that we need some way to get more than double the performance of a fully overclocked RTX 2080 Ti. VR Multi-GPU, or a card with glued GPUs on it that supports VR.
Looks like there is some possibility we might actually see the latter.
But with all the hype about raytracing - which is NOT urgently needed for VR - I have to suspect NVIDIA is just playing tricks with the numbers to try and get some kind of monopoly with new features instead of real performance.
Oh by the way, we still have to wait for vendors to pre-bin and overclock the 3090. I don’t know how long that takes. So, release October, maybe more like November for us?
I’m fawning.
Like…I’m fawning A LOT!
Not to mention, did you catch it?
Jensen said the “first” 3 members of the ampere family.
3090 Ti incoming, mark my words
P.S. I love that this thread was resurrected from a 2-month death, and now it’s the hottest topic
I want a 3090 Kingpin Edition
Man, I waited 6 years and have managed not to have to upgrade my PC.
I’m still rocking a 4790 and an MSI Z97 mobo.
Now this GPU requires new everything.
My next PC is gonna cost $5K.
And I guess Intel decided for me. Ryzen it is.
But you know Intel is scrambling right now to eat their words and put something out with PCIe 4.
Good catch. You may be right.
PCIe 4 will not be as important as CPU single-core performance for a while yet. I hate to say it, but go Intel for now, and get it from Silicon Lottery…
From AnandTech:
‘We know that the new RTX 30 series cards pack an incredible number of FP32 CUDA cores, and that it comes thanks to what NVIDIA is labeling as “2x FP32” in their SM configuration. As a result, even the second-tier RTX 3080 offers 29.8 TFLOPs of FP32 shader performance, more than double the last-gen RTX 2080 Ti. To put it succinctly, there is an incredible number of ALUs within these GPUs, and frankly a lot more than I would have expected given the transistor count.’
If you see some vendors reporting half the CUDA core count (ASUS has released card specs with half the cores), that’s why.
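Here’s a quick back-of-the-envelope in Python showing where that 29.8 TFLOP figure comes from; the 8704-core / 1.71 GHz numbers are just the published RTX 3080 specs, and the “half count” line is the same hardware with each SM’s doubled FP32 path counted once instead of twice:

```python
# Back-of-the-envelope for Ampere's "2x FP32" marketing, using the
# published RTX 3080 figures (8704 FP32 "CUDA cores", ~1.71 GHz boost).
# Each FP32 lane can retire one FMA per clock, which counts as 2 FLOPs.

def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPs: cores * 2 ops (FMA) * clock."""
    return cuda_cores * 2 * boost_ghz / 1000.0

# NVIDIA's headline count (both FP32 datapaths per SM counted):
print(fp32_tflops(8704, 1.71))      # ~29.8 TFLOPs

# The same chip described with half the cores, as some vendor spec
# sheets did early on -- identical silicon, different accounting:
print(fp32_tflops(4352, 1.71) * 2)  # ~29.8 TFLOPs again
```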
The 3090 is scheduled to be available from Nvidia and partners on September 24th, the 3080 on the 17th I think, and the ’70 sometime next month.
Sounds like marketing stuff indeed. We may have to wait outright for DCS World VR benchmarks.
The next generation of Intel should be PCIe 4.0; it’s a shame the current gen isn’t, but it might be worth just chucking the GPU in your current PC and seeing what Intel comes out with next.
yeah, it’ll be interesting to see actual reviews and benchmarks.
I’d offered a friend my 1080Ti when I upgraded to Ampere (@ mates rates ofc). He doesn’t want it anymore and is after the 3080, pending benchmarks.
His system recently died, and he didn’t want to spend loads on a GPU so close to release, so I told him to just buy an acceptably cheap 1080p card and flog it afterward to reduce his net cost.
flops are flops to some extent, but yeah it’s pure speculation till we get some benches
But are our most troublesome VR apps actually bottlenecked specifically by FP32 instructions, or some other instructions? And even if the developer resources were available to optimize for this, would the FP32 performance even be relevant to the needs of something like flight sim, with all the high-res geometry and textures?
NVIDIA has historically been notorious for boosting one headline metric while letting everything else lag. It would not at all be the first time they inflated a specific number that only mattered to whatever feature agenda they were trying to push.
Counting one core as two seems really suspicious to me. Even worse than Hyper-Threading, which at least helped reduce workstation multitasking latency when it showed up.
I’m not sure I can trust NVIDIA product specifications at all from now on. They really want us to buy into CUDA/RTX so we are stuck with them.
I think today’s announcement confirms what we were already expecting: RDNA2 is going to be extremely competitive, especially with the RTX 3080 (which is the card Nvidia would usually have marketed as the RTX 3080 Ti and sold for $1500). It’s really thanks to AMD that we’re getting this much performance at these prices. Nvidia is trying to avoid the marketing disaster of releasing their RTX 3080 (not the successor to the 2080, but to the 2080 Ti) at $1500, only to have it outperformed by an AMD card a month later at half the price.
I really have to hold myself back from buying the 3080 right away and wait another 1-2 months for RDNA2. For me, the main reason not to get the 3080 is that it lacks DP 2.0. That means it’s not future-proof for VR and would need to be replaced in 3-4 years, which is a much shorter lifetime than I would want for a high-end GPU. HDMI 2.1 has much lower bandwidth.
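To put some rough numbers on the bandwidth point: the headset figures below are hypothetical examples, not any announced product, and the link rates are the commonly quoted effective payload figures for DP 2.0 UHBR20 and HDMI 2.1 FRL.

```python
# Rough sketch: uncompressed video bandwidth for a hypothetical future
# VR headset vs. what the two link standards can actually carry.

DP20_EFFECTIVE_GBPS   = 77.4  # DP 2.0 UHBR20: 80 Gbps raw, 128b/132b encoding
HDMI21_EFFECTIVE_GBPS = 42.6  # HDMI 2.1 FRL: 48 Gbps raw, 16b/18b encoding

def uncompressed_gbps(width, height, eyes, refresh_hz, bits_per_pixel):
    """Raw pixel bandwidth, ignoring blanking intervals and DSC compression."""
    return width * height * eyes * refresh_hz * bits_per_pixel / 1e9

# Hypothetical example: 3840x2160 per eye, 120 Hz, 10-bit RGB (30 bpp)
need = uncompressed_gbps(3840, 2160, 2, 120, 30)
print(f"needed: ~{need:.1f} Gbps")                      # ~59.7 Gbps
print("fits HDMI 2.1:", need <= HDMI21_EFFECTIVE_GBPS)  # False -> needs DSC
print("fits DP 2.0:  ", need <= DP20_EFFECTIVE_GBPS)    # True
```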
Oh, now here are some numbers that are a little harder to fake!
RTX 3090
28 billion transistors
8 nm
1700 MHz
RTX 2080 Ti
18.6 billion transistors
12 nm (being generous)
1545 MHz
So, maybe 50% more transistors, capable of slightly faster switching speed, plus some of the ‘Titan RTX’ transistors may have been disabled on the RTX 2080 Ti.
(28 / 18.6) × (1700 / 1545) ≈ 1.66
So maybe about 65-70% faster. Maybe another 10-20% if the other NVIDIA claims are not 100% hype. That still falls short of doubling performance, and is certainly not far enough over the mark to be comfortable.
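Spelled out as a quick script, using only the public transistor counts and boost clocks quoted above:

```python
# Naive scaling estimate: transistor ratio * clock ratio.
# Ignores architecture and memory changes, so treat it as a rough ballpark.

ga102 = {"transistors_b": 28.0, "boost_mhz": 1700}  # RTX 3090
tu102 = {"transistors_b": 18.6, "boost_mhz": 1545}  # RTX 2080 Ti

scale = (ga102["transistors_b"] / tu102["transistors_b"]) \
      * (ga102["boost_mhz"] / tu102["boost_mhz"])
print(f"naive scaling estimate: {scale:.2f}x")  # ~1.66x, i.e. roughly 65% faster
```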
So, we are probably stuck with Smart Smoothing for at least another generation or two unless multi-GPU becomes available for VR.