Stop fawning over the RTX 3080 cards and future CPUs. Seriously

Let’s just tally a few, well documented, data points.

You will not see drastic improvements. Only a very few flight-sim users will need to upgrade by this time next year, and then only because their software is poorly optimized and run to its absolute limits.

We might realize much larger gains from software developers supporting DFR properly (~50% gain), removing the need for Parallel Projections (~50% gain), and multithreading their applications to the extent easily feasible (~50% higher framerate cap).
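As a rough sketch, here is how those hypothetical software-side gains would stack if they all materialized. The ~50% figures are the estimates above, not measured benchmarks:

```python
# Hypothetical compounding of the software-side gains estimated above.
# These are the thread's rough estimates, not measured values.
dfr_gain = 1.50      # ~50% from proper dynamic foveated rendering (DFR)
no_pp_gain = 1.50    # ~50% from removing the Parallel Projections requirement

combined = dfr_gain * no_pp_gain
print(f"Combined GPU-side speedup: {combined:.2f}x")  # 2.25x
```

Multiplicative stacking is optimistic; in practice the two optimizations overlap somewhat, so the real combined gain would likely be lower.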


  • The current RTX 2080 Ti and i9-9900K are adequate to reach the point of diminishing returns with Pimax headsets, including the 8KX.
  • Higher resolution headsets take less GPU power to run, due to reduced supersampling.

So, you do not need a next gen GPU/CPU.


  • Core clock and number of shader/texture/render units has the largest impact on VR framerates, with memory clock being near-negligible.
  • Pre-binned RTX 2080 Ti cards have very high maximum clocks, approaching a sustained 2100MHz, compared with a factory-specified boost of at most 1545MHz.
  • The RTX 3080, releasing near the end of this year, is projected to perform slightly worse than a current RTX 2080 Ti.

So, next gen GPU will not drastically improve framerates.


  • Single-thread CPU performance is the only metric under pressure, and only under highly specific conditions (typically unusually complex flight-sim scenarios).
  • The typical Intel CPU overclock has been around 6% lately (5.3GHz all-core vs 5.0GHz boost), with negligible risk of hardware damage due to the wide VCore specification.
  • The typical Intel generational improvement has likewise been around 6% (5.3GHz i9-10900K vs 5.0GHz i9-9900K).

So, a next gen CPU will not drastically improve framerates. Worse, CPU performance will improve by about 6% per 1.5 years, rather than the ~50% per generation typical for GPU-constrained apps.
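To put that trajectory argument in numbers, a small sketch using the rates claimed above (~6% per 1.5-year CPU generation vs. ~50% per GPU generation, assuming a similar release cadence purely for illustration):

```python
# Projected speedups over three years at the growth rates claimed above.
years = 3.0
cpu_gain_per_gen, gen_length = 1.06, 1.5   # ~6% per 1.5-year CPU generation
gpu_gain_per_gen = 1.50                    # ~50% per GPU generation (assumed same cadence)

generations = years / gen_length
cpu_speedup = cpu_gain_per_gen ** generations
gpu_speedup = gpu_gain_per_gen ** generations
print(f"After {years:g} years: CPU ~{cpu_speedup:.2f}x, GPU ~{gpu_speedup:.2f}x")
```

At those rates, three years buys roughly 12% more single-thread CPU performance against roughly 125% more GPU performance.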


Guess we will have to hope some kind of dynamic foveated rendering becomes a thing.


Naah. I think we need to hope game developers go back and do the little things, like eliminating Parallel Projections.


Everything depends on how CPU/GPU-bound we are, and on whether Intel can really make a leap by finally moving off their 14nm process. FFS, I’m using a 6700K built on the same process as current-gen chips.

Doesn’t matter. Single-thread performance will not improve enough. Physically, the feature size no longer matches the ‘process node’ name, and it is not getting much smaller any time soon.


There’s no point ‘worrying’ about this stuff. People are just looking for the magic bullet because VR performance is frankly kinda trash for terrible looking games.

Likely the solution will be some standard with lots of dev backing and popularity.


Well said. People expecting the 3080 Ti to double performance had better be ready for disappointment.

I still remember when sweviver, who had got a 1080 Ti maybe a year prior, was grinning like a schoolboy when he received his 2080 Ti, and was so disappointed that his framerates didn’t double, even though all the pancake benchmarks showed the card was only about 25% faster.

Doing the math, a 25% increase isn’t even enough to get you from 60 fps to 90.
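The arithmetic behind that claim:

```python
# A 25% uplift applied to 60 fps falls short of 90 fps.
base_fps = 60
print(base_fps * 1.25)   # 75.0 -- not 90
print(90 / base_fps)     # 1.5  -- 60 -> 90 actually requires a 50% uplift
```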

I have a 2080 Ti, but I upgraded from a 1080, which to me was a 50% improvement.

On the topic of CPUs, I’m still rocking my i7-4790. Every year for the last six years I look at the game benchmarks to see if it’s worth the hassle of upgrading, and every time it seems the CPU yields very little compared to getting a new video card.

Anyway, I’m not holding my breath for the 3080 series; Nvidia already ripped all of us off thoroughly last time.

I’ll wait for the 4080 Ti in 2021. Maybe by then I’ll have those damned Pimax controllers.


Thanks. Exactly my thoughts. Even 25% compared to what we can buy now might be too much to hope for, and I certainly wouldn’t count on 50%, much less 100% (i.e. double).

Actually, I think the 1080 Ti to 2080 Ti was a bit more of a boost than that, when the pre-binned overclocking is accounted for. Because of that, I wouldn’t be surprised if the 2080 Ti that people like me are actually buying turns out to be on par with the 3080 Ti.

To be fair, myself, I might buy the 3080 Ti. But only if extra bloat in DCS World necessitates it by that time, or for the extra VRAM (I do experience crashes due to VRAM exhaustion).

Ultimately, either way, I am expecting the RTX 3080 Ti to give me precisely zero improvement in visual quality or framerate.


…using a 1060. :laughing:

I’ll probably get a 3080Ti or 3090 to replace my 2080, but I’m expecting a 50% improvement at best.


RTX 3090 Ti double chip card? If that gets announced, I might be a little more hopeful. But it hasn’t been, and yeah, 50% at best is a reasonable expectation.

Let’s not forget though, what we have today is usable as long as developers don’t bloat their code.


I’ve only heard rumors of a 3080 Ti and a 3090.

I’m not sure if there will even be a 3090, much less a 3090 Ti. The rumored specs I’ve seen for a 3090 sound like a Titan-type card, which is what the 2080 Ti really was.

The biggest boost is apparently for ray-tracing, which isn’t all that important to me.

Yes, indeed, I know. Just saying, if NVIDIA actually did announce something like a double GPU card, that would be great.


But I want to fawn over Ampere/RDNA 2… :slight_smile:

On a serious note, I do want to replace the old 1080 Ti; Turing’s price/performance didn’t justify a purchase for me, but all signs point to an upgrade in the near future.


The 2080 Ti is more likely to be a substantial upgrade. You will be waiting at least a year for anything better, if scarcity doesn’t start driving the price up, and if the next generation is worth upgrading to (the point of this thread).

Big Navi and GA102 are realistically expected to offer around 50% higher performance than a 2080 Ti.
I guess that’s not huge for 2080 Ti owners, but the even bigger thing here is that we’ll be getting 2080 Ti performance for $500 instead of $1500 (prices where I live).
GPU performance is absolutely the most important thing for high-resolution gaming, not CPU performance. The CPU doesn’t care if you’re rendering at 720p or 8K. Increased single-thread CPU performance is important for high-FPS gaming, though.


Source? Also, any clue whether these AMD GPUs may support smart smoothing and FFR/DFR?

All the speculation about Nvidia is in the hands of AMD fanboys at WCCFTECH.

28 nm process
  • 980: 2048 shaders
  • 980 Ti: 2816 shaders

16 nm process
  • 1080: 2560 shaders
  • 1080 Ti: 3584 shaders

12 nm process
  • 2080: 2944 shaders
  • 2080 Ti: 4352 shaders
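For what it’s worth, the generational growth implied by those Ti shader counts works out to about +27% and then +21%:

```python
# Percent growth in shader count between successive Ti cards listed above.
ti_shaders = [("980 Ti", 2816), ("1080 Ti", 3584), ("2080 Ti", 4352)]
for (prev_name, prev), (name, cur) in zip(ti_shaders, ti_shaders[1:]):
    print(f"{prev_name} -> {name}: +{(cur / prev - 1) * 100:.0f}%")
```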

Latest rumours: three GPUs on GA102, 7 nm process
  • 3080: 4352 shaders?
  • 3090 Ti: 5248 shaders?
  • Titan: 5376 shaders?

These rumours are fake.

Nvidia will calibrate the 3080 to be as fast as a 2080 Ti, or a little faster this time, thanks to the 7 nm process. However, the 3080 will have fewer shaders than the 2080 Ti: probably fewer than 4000, but more than a 1080 Ti.

I do not think Nvidia will hand the throne of fastest GPU to AMD’s Big Navi.

That is the first reason to launch the 3080 Ti in Q4 alongside the 3080. The story is that the 3080 Ti will have about 40% more shaders than the 3080, making it at least as fast as a 2080 Ti.

@mirage335, is a 3080 Ti fast enough for you?

I would point out that the 3080 Ti would then have more shaders than the Titan in the latest rumour.

The second reason is that Nvidia is also competing with AMD on raytracing performance. To win, Nvidia is improving raytracing across the whole RTX 3000 series. Again, this is why we get both the 3080 and 3080 Ti at launch: to win on raytracing with the 3080 in the same price range as Big Navi.

But there are rumours of another GPU. At first it was a Titan; now we hear of a 3090 with 24 GB of memory, and nobody knows why Nvidia would call it a 3090 rather than a Titan or a 3080 Ti.

Three clues suggest this might be an SLI board: the name, the same GA102 chip, and 24 GB of memory (two times 12).

Pimax users should be ready for a 3090 breaking the 10,000 CUDA core mark on one board, a few months after the 3080/3080 Ti.

Probably this card is meant to hype the market, because INTEL IS COMING, and the latest secret reports are a big threat to Nvidia’s business.


The source to the performance improvements? There isn’t a single source that we can rely on, but rather a whole series of indications. There have been some “leaks” of YouTubers etc. claiming they have sources, but that’s not the real indication.
I’ll mention a small subset of real indications here:

  • Performance of the new consoles at much lower wattages than current high-end GPUs.
  • The common scaling on full node die shrinks (like this one) has historically been at least +50% performance.
  • Nvidia is typically more tight-lipped about future GPUs, but AMD has released a bunch of official information that shows pretty clearly what to expect. The full Navi 2x die is going to be 80 CUs, so twice the shaders of the 5700 XT. AMD also claims a 50% performance-per-watt improvement (on the same node), some IPC improvement, and some clock improvements. Realistically that will give us around 2x 5700 XT performance (hence the name Navi 2x), if we account for worse-than-linear horizontal scaling.
  • Nvidia will not want to give away the performance crown for marketing reasons, so we can expect them to tweak their highest end GPU accordingly to meet that goal. They have various options here, like increasing voltage for higher clocks, releasing the full die in non-Titan variants, or absolutely worst case even releasing their REAL big die (GA100) as a gaming card (unlikely they will do that).

To answer your question about FFR/DFR and smart smoothing:
Yes, AMD has confirmed that RDNA2 will have full support for FFR and DFR (the technical term is variable rate shading). Smart smoothing has always been fully supported on AMD cards, because this is done on the software side from developers of the VR middleware.


I beg to differ RE graphics card performance. You are likely correct for flight sims and CPU limitations.

But in a heavily modified Skyrim, I struggle to hit a 60fps average outside. I would very much like to hit 120, which is a much better experience. I would also like to crank up the resolution or supersampling, extend the draw distance, or turn up any number of other things.

Coming from a 1080 Ti to a 3080 Ti or 3090, why would I not want that extra performance, which I can choose to spend on refresh rate, supersampling, or other details?

The 2080 Ti generation was a small jump because it was a tick rather than a tock: the same node, with no competition from AMD. This time there is competition, and it is a full node shrink.

There is a ton of chatter that this next jump will be as big as Maxwell’s was. But either way, let’s stay conservative: even a 40% jump, on top of the ~30% from the 1080 Ti to the 2080 Ti, would cumulatively be a very nice upgrade from the 1080 Ti.
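Compounding those two figures (both rough estimates from this thread, not benchmarks):

```python
# Cumulative upgrade from a 1080 Ti, using the thread's rough estimates.
pascal_to_turing = 1.30   # ~30%: 1080 Ti -> 2080 Ti
turing_to_next = 1.40     # conservative ~40% guess for the upcoming generation

total = pascal_to_turing * turing_to_next
print(f"1080 Ti -> next gen: ~{total:.2f}x")  # ~1.82x
```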

DLSS 3.0 also has great potential.

I largely agree with you RE CPU performance. Transistors are so small now that it is almost impossible to squeeze out more density or single-thread performance. Although AMD is rumoured to finally take the single-core performance title with their next gen.

As for graphics chiplet or multi GPUs, I think that is coming the gen after.


Only in very special cases are developers going to go back and upgrade older games with all those fancy new technologies, not when they can be making new games to sell instead. Sometimes you just need raw oomph, and sometimes even a 20% improvement can mean the difference between frame-drop hell and a visually pleasing experience.