Which features are (not) supported on AMD cards?

Yeah, I was thinking about upgrading to Ryzen last year from my i7-4790,

but looking at benchmarks from SweViver, who goes insane overclocking 9900Ks to 5 GHz and only gets 10 more frames than I do,

I held off.

I think I’ll wait for the Ryzen 4000 series; maybe by then they can overtake Intel in single-threaded performance.

Or Intel is scared right now, is actually forced to innovate, and beats Ryzen, in which case I would feel more comfortable buying Intel.

Either way, competition is good, because I win in the end.


AMD missed the start of VR and hasn’t shown any indication of strengthening that department so far.

Even on the non-VR side, they may not have totally missed the “RT” war, since I would rather say that Nvidia jumped the gun (still too much of a performance hit, and games not ready yet), but they clearly missed the mark on AI-assisted image reconstruction. Nvidia is already on the 2nd iteration of its DLSS tech, while we still haven’t heard any word about a possible implementation of a similar feature in upcoming AMD cards (DLSS isn’t an open standard, unlike VRS, which became one once it was endorsed by Microsoft in DX12).

So RDNA2 will have to prove its worth to me before I ever consider returning to the red side.
They did it on the CPU market, but they still have a long way to go on the GPU one.

Where did you read that?

Based on the description of what AMD called VRS back then, it looks closer to Nvidia’s useless 2016 Multi-Res Shading (compatible with Maxwell and Pascal GPUs), the ancestor of Nvidia’s useful 2018 VRS / DX12’s 2019 VRS.

Even in its best example, Death Stranding, DLSS still generates artifacts. So as long as it’s just one alternative piece of garbage on top of other garbage like TAA, I’m not convinced…
It’s not that AMD hardware wouldn’t be capable at all; the inferencing performance isn’t worse on Vega with FP16 and INT8/INT4. The main issue with AMD is that they don’t provide the software the way Nvidia does.

Wow, using Death Stranding as an example, one of the rare games where DLSS 2.0 delivers better image quality than the native 4K image … bold.
I agree that most games have artifacts (like all other reconstruction techniques), but frankly, those are usually close to negligible (unless you freeze and heavily magnify the image) compared to the performance gain it brings.
And the tech has been around for such a short time, too.

Every feature can be handled on the software side; the problem is, if you don’t have hardware (RT cores, Tensor cores, etc.) dedicated to these features, it will drag performance down. When the goal is to enhance quality, that might be a trade-off to consider. But when the goal is to enhance performance, it defeats the purpose to use features your hardware isn’t suited for.
AMD hasn’t implemented any hardware support for VR features so far. They could implement them in their driver as software features … but as I just said, it wouldn’t do them any good.

That said, it doesn’t mean things won’t change, be it in their next GPU or the ones that follow. AMD generally seems more interested in the mainstream market (which makes sense economically), which means they don’t target niches like ultra-high-end gaming GPUs (they never try to compete with the xx80 Ti or the Titan), and they don’t “waste” time on VR-specific features. Thankfully, AI reconstruction techniques and VRS are not VR-specific features, so we can expect some improvements on that side at some point.

Tensor cores don’t help with inferencing. Those are for training. There is no training when DLSS is used, that’s inferencing. Don’t get BS’d by Nvidia marketing.
The RT performance is a gimmick on Turing compared to what both companies will bring with the next GPU gen.

VRS had been in AMD’s plans for a long time, yet they didn’t get it done and instead announced it for the next gen. All new consoles will have VRS too. Then we will have a common open standard to implement it against.

It’s really just that Nvidia provides proprietary software where AMD doesn’t do anything, instead donating Mantle for Vulkan. So people have a reason to buy Nvidia cards and to see some games justifying the purchase before the market as a whole catches up. AMD tried that with tessellation back then, too. It backfired on them. That can happen to Nvidia-driven features as well. The future will tell. Pimax taught me to wait for years, so it shouldn’t bother me to wait another gen for VRS and RT either. :wink:


DLSS 2.0 is decent, but DLSS 1.0 was pretty bad and not really useful.

An older review to show this is here:

But even with DLSS 2.0, when compared with decent scaling algorithms, it usually doesn’t look much better yet comes at a higher performance cost. It’s difficult to compare, because most games either lack resolution scaling, or lack DLSS, or lack both.

The only recent example I know of that features both DLSS 2.0 and an alternative scaling algorithm is Death Stranding, which is actually a good DLSS 2.0 implementation. But even there, regular algorithms that help with upscaling and come at near-zero performance cost, in this case AMD’s open-source FidelityFX CAS, offer almost as good results.

This is not to say that DLSS and similar technologies won’t be important in the future. I think developing in that direction is a good way to gain some performance, and I’m sure AMD will continue their development in that field too.
But at the moment, DLSS is mostly an Nvidia marketing gimmick rather than an important feature to look out for when buying a GPU.


Just to be clear, VRS was standardized in DX12 in early 2019.

I agree. Even though CAS gives sub-native 4K image quality, unlike DLSS (and worse performance than DLSS too), I find it to be quite a good trade-off for those who don’t have access to DLSS-like features.
It’s not image reconstruction, just adaptive sharpening to preserve some of the details that would otherwise be lost with standard upscaling, but it’s quite a good first step.
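To give an idea of what that kind of post-upscale pass roughly looks like, here is a toy sketch in C++ (not the actual FidelityFX CAS shader; the Image struct and the upscale/sharpen functions are invented for the example): upscale cheaply, then push each pixel away from its neighbourhood average, backing off wherever local contrast is already high.

```cpp
// Toy sketch only, not the FidelityFX CAS shader: bilinear upscale of a
// grayscale float image, followed by a simple contrast-adaptive sharpen pass.
// The Image struct and both functions are invented for this example.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Image {
    int w = 0, h = 0;
    std::vector<float> px;                    // row-major luminance, 0..1
    float at(int x, int y) const {            // clamp-to-edge sampling
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return px[y * w + x];
    }
};

// Plain bilinear upscale: cheap, but it smears fine detail.
Image upscale(const Image& src, int dstW, int dstH) {
    Image dst{dstW, dstH, std::vector<float>(dstW * dstH)};
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            float sx = (x + 0.5f) * src.w / dstW - 0.5f;
            float sy = (y + 0.5f) * src.h / dstH - 0.5f;
            int x0 = (int)std::floor(sx), y0 = (int)std::floor(sy);
            float fx = sx - x0, fy = sy - y0;
            float top = src.at(x0, y0) * (1 - fx) + src.at(x0 + 1, y0) * fx;
            float bot = src.at(x0, y0 + 1) * (1 - fx) + src.at(x0 + 1, y0 + 1) * fx;
            dst.px[y * dstW + x] = top * (1 - fy) + bot * fy;
        }
    return dst;
}

// Adaptive sharpen: push each pixel away from its neighbourhood average,
// but scale the push down where local contrast is already high (fewer halos).
Image sharpen(const Image& src, float strength) {
    Image dst = src;
    for (int y = 0; y < src.h; ++y)
        for (int x = 0; x < src.w; ++x) {
            float c = src.at(x, y);
            float l = src.at(x - 1, y), r = src.at(x + 1, y);
            float u = src.at(x, y - 1), d = src.at(x, y + 1);
            float avg = (l + r + u + d) / 4.0f;
            float contrast = std::max({c, l, r, u, d}) - std::min({c, l, r, u, d});
            float out = c + strength * (1.0f - contrast) * (c - avg);
            dst.px[y * src.w + x] = std::clamp(out, 0.0f, 1.0f);
        }
    return dst;
}

int main() {
    Image low{4, 4, std::vector<float>(16, 0.5f)};
    low.px[1 * 4 + 1] = 1.0f;                 // one bright "detail" pixel
    Image out = sharpen(upscale(low, 8, 8), 0.6f);
    std::printf("pixel near the detail after upscale+sharpen: %.3f\n", out.at(3, 3));
}
```

The real CAS runs as a per-pixel GPU shader and is tuned very differently, but the “upscale, then adaptively sharpen” structure is the same idea.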

Let’s agree to disagree.
What I don’t really like about most comparisons I read is that they’re usually based on images frozen and magnified beyond reason. That’s why I really like your link to the Death Stranding review, since you can barely see any difference while moving the slider, be it for DLSS or CAS. Still, these are (obviously) frozen images, which usually handicap reconstruction techniques.
When DLSS 1 was implemented in Monster Hunter World, I immediately switched to it, which let me set the output resolution to 4K. A few weeks later, I read comments saying how bad it was and how it was better to switch even to native 1080p. Based on the screenshot comparisons provided, I agreed and switched back to native 1440p … oh boy, was I disappointed. I switched back to 4K DLSS after one quest.

So yeah, DLSS 1 is far from perfect, and it’s worse in some games and better in others, but let’s be honest: for most people and in real gaming conditions (meaning no still images magnified several times, and no focusing on details you’re not supposed to focus on during actual gameplay), DLSS 1 is still a pretty good trade-off between quality and performance.

Rumors are that the first Ampere GPUs will be built on an 8nm node (not 7nm), so they will be power-hungry. I’ve heard 350W and 400W for overclocked GPUs. In the first half of next year, the “Super” versions will likely ship on the 7nm node, with noticeably better power usage. If that’s true, I’ll probably wait until next year to upgrade.


I think it only got put into the standard around the time Microsoft also finalized the specs and features for the upcoming Xbox.
For VR it is eye-tracking-aware VRS, which is possible at the GPU driver level. For all other use cases it is content-aware VRS. It can be used for very fast-moving objects or scenery as well, and to separate objects of interest, like text, from less important scenery objects. That’s when you need it exposed to developers in DX12.
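For reference, the developer-facing side of that in DX12 looks roughly like the sketch below. It assumes a working D3D12 device and command list already exist, trims error handling, and the SetCoarseShadingRate function name is made up: Tier 1 hardware takes a single per-draw shading rate, while Tier 2 additionally accepts a screen-space shading-rate image, which is the hook a content-aware or eye-tracked (foveated) scheme would feed.

```cpp
// Minimal sketch, not production code: query VRS support and set a coarse
// per-draw shading rate. Assumes a valid ID3D12Device and an
// ID3D12GraphicsCommandList5 created elsewhere; the function name is illustrative.
#include <windows.h>
#include <d3d12.h>

bool SetCoarseShadingRate(ID3D12Device* device,
                          ID3D12GraphicsCommandList5* cmdList) {
    // Ask the driver which VRS tier (if any) the GPU exposes.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options6, sizeof(options6))) ||
        options6.VariableShadingRateTier ==
            D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED) {
        return false;  // no VRS support: everything shades at full rate
    }

    // Tier 1: one rate for the whole draw, here shading once per 2x2 pixel block.
    // Passing null combiners keeps the default (passthrough) combiner behaviour.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);

    // Tier 2 adds a screen-space shading-rate image: a small texture with one
    // rate per tile (tile size reported in options6.ShadingRateImageTileSize),
    // bound via cmdList->RSSetShadingRateImage(...). That per-tile texture is
    // what an eye-tracked (foveated) or content-aware scheme would rebuild each frame.
    return options6.VariableShadingRateTier == D3D12_VARIABLE_SHADING_RATE_TIER_2;
}
```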


Sure.
Game-agnostic VRS (as used in FFR/DFR) has to live at the driver level anyway.
And having it standardized in DX12 since May 2019 (its date of inclusion in the API) isn’t that helpful for the mass of DX11 titles anyway.
I was just saying that the standard has been there for more than a year now (and Nvidia’s take on it even longer), implying that even if the latest AMD GPUs didn’t support it hardware-wise, probably because there wasn’t time to alter a design that had already been in the works for quite some time at that point, we should (must?) expect it to be integrated in the next one (especially since it’s been confirmed as part of the Series X GPU feature set).
