I agree that there’s a lot of space for software improvements. My biggest hope, as a flight sim fan, is that another mainstream hardware manufacturer produces a canted-display headset. That would put more pressure on the flight sim engines to properly support canted displays without the performance-robbing parallel projection (PP) mode.
On my Artisan, PP requires about 40% more pixels, and it smushes them together, making the image worse than native mode. I don’t know whether 40% more pixels means 40% less FPS, but it’s probably close.
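For what it’s worth, a quick back-of-envelope check suggests the hit is closer to 30% than 40%. This is a minimal sketch, assuming frame time scales linearly with pixel count (i.e. fully GPU fill-rate bound) and a hypothetical 90 FPS native baseline:

```python
# Back-of-envelope FPS cost of PP's extra pixels. Assumes frame time scales
# linearly with pixel count, i.e. the GPU is fully fill-rate/shading bound
# (an assumption; CPU-bound scenes lose less).
pp_pixel_overhead = 0.40      # ~40% more pixels in PP mode (figure from the post)
native_fps = 90.0             # hypothetical native frame rate

pp_fps = native_fps / (1.0 + pp_pixel_overhead)
print(f"PP FPS: {pp_fps:.1f}")                       # ~64.3
print(f"FPS loss: {1 - pp_fps / native_fps:.0%}")    # ~29%, not 40%
```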
With the official release info just two days away, I think it’s okay to revive this topic.
Here are 5 reasons to hold off on the 3000 series.
He missed one that I think nobody has thought of.
I’ve heard that the 3090 won’t even have DisplayPort 2.0, which means any chance of getting 90 Hz native on the 8KX is slim (rough bandwidth math below). What I also noticed is that, if the leaks are to be believed, there is no Ti version in the lineup.
Which more than likely means that six months down the line there will be a 3080 Ti, and who knows, it could have DP 2.0.
I would feel like quite a sucker if I spent $2,000 on a 3090, only to have a 3090 Ti come out with DP 2.0 six months later.
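For context, here’s why native 90 Hz is tight without DP 2.0. This is a minimal sketch, assuming the 8KX drives dual 3840x2160 panels at 8-bit RGB; it ignores blanking intervals and DSC, so the real requirement is somewhat higher:

```python
# Back-of-envelope link-bandwidth check. Ignores blanking intervals and DSC,
# so the real requirement is somewhat higher; treat this as a lower bound.
width, height = 7680, 2160    # 8KX native: dual 3840x2160 panels side by side
refresh_hz = 90
bits_per_pixel = 24           # 8-bit RGB

required_gbps = width * height * refresh_hz * bits_per_pixel / 1e9

dp14_gbps = 25.92    # DP 1.4 HBR3 effective rate (after 8b/10b encoding)
dp20_gbps = 77.37    # DP 2.0 UHBR20 effective rate (after 128b/132b encoding)

print(f"Required: {required_gbps:.1f} Gbit/s")            # ~35.8
print(f"Fits in DP 1.4? {required_gbps <= dp14_gbps}")    # False
print(f"Fits in DP 2.0? {required_gbps <= dp20_gbps}")    # True
```

So uncompressed 90 Hz native simply doesn’t fit in a DP 1.4 link; it needs either compression or DP 2.0.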
I noticed that too. But the 3090 could be the equivalent of the ‘Ti’ version this time around; who knows with NVIDIA’s branding.
I really wish they would just make driver-transparent multi-GPU for VR users over NVLink. NVIDIA is welcome to take my money.
Or even just release a 1.5 kW GPU.
I don’t like being stuck waiting for NVIDIA to lift such a hard single-GPU performance cap.
It does. Its biggest speed gain, with that huge, fast memory, will probably show up pushing dual 4K toward 90 Hz.
Its peak performance improvements are clearly aimed at VR users. What monitor user needs that fill rate? How many 8K 60 Hz or 4K 144 Hz gaming monitors are out there?
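To put rough numbers on that fill-rate point, here’s a quick sketch comparing raw pixel throughput. The headset sits in the same class as the rarest monitor configs, and since VR runtimes typically render above panel resolution (supersampling), the headset row understates the real GPU load:

```python
# Raw pixel throughput for a few display targets. VR runtimes typically
# render ABOVE panel resolution (supersampling), so the headset row
# understates the actual GPU load.
targets = {
    "dual 4K VR @ 90 Hz": (7680, 2160, 90),
    "4K monitor @ 144 Hz": (3840, 2160, 144),
    "8K monitor @ 60 Hz": (7680, 4320, 60),
}

for name, (w, h, hz) in targets.items():
    print(f"{name}: {w * h * hz / 1e9:.2f} Gpx/s")
# dual 4K VR @ 90 Hz:  1.49 Gpx/s
# 4K monitor @ 144 Hz: 1.19 Gpx/s
# 8K monitor @ 60 Hz:  1.99 Gpx/s
```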
Yeah, the rational side of me generally does this. Here’s my history (I won’t go back all the way):
8800GT
GTX280
GTX480
GTX680
GTX780Ti
GTX1080
RTX2080Ti
RTX4080???
What speed gain over an RTX 2080 Ti? All I really care about is the core. Memory clock won’t make a difference, and extra VRAM might help me run more overlays, but what I really need is more GPU core performance.
Depends on the game, I’d say. For IL-2 we’ve seen well-pronounced differences; the community has gathered 30-40 performance samples from different systems in an effort to get the game tweaked.
From my understanding, high supersampling rates, and thus high render resolutions, benefit the most from larger memory, and they hit the point where insufficient memory throttles them much later.
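As a rough illustration of why, here’s a sketch of how fast the eye buffers alone grow with supersampling. The per-eye base resolution and the per-axis SS scaling are assumptions (runtimes differ on whether SS scales per axis or by total pixel count), and real VRAM use adds textures and intermediate targets on top:

```python
# Rough VRAM floor for stereo eye buffers at different supersampling (SS)
# rates. Counts only color + depth; textures and intermediate render
# targets come on top, so real usage is considerably higher.
base_w, base_h = 3840, 2160   # assumed per-eye panel resolution (8KX-class)
bytes_per_px = 4 + 4          # RGBA8 color + 32-bit depth

for ss in (1.0, 1.5, 2.0):
    # assumes SS scales the render target per axis; some runtimes scale
    # total pixel count instead
    w, h = int(base_w * ss), int(base_h * ss)
    mb = 2 * w * h * bytes_per_px / 1e6   # x2 for both eyes
    print(f"SS {ss}: {w}x{h} per eye, ~{mb:.0f} MB for eye buffers alone")
```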