Future GPU Architecture Talk - MCM, PCIe 6.0, ASICs, Infinity Fabric

The next step toward even faster frame rates is bypassing the PCIe bandwidth bottleneck for GPUs. For that reason, even though PCIe 6.0 has roughly eight times the bandwidth of PCIe 3.0, it may not translate into faster performance unless MCM GPUs gain proper compatibility with games and manufacturers give us more than 2 tiles in the future.
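
As a quick sanity check on that ratio: the raw PCIe transfer rate simply doubles every generation. A minimal back-of-envelope in Python (the encoding efficiencies are approximations, not spec-exact figures):

```python
# Rough x16 link bandwidth by PCIe generation, one direction.
# Per-lane rates double each generation; encoding overhead is
# approximated (128b/130b for gen 3-5, PAM4 + FLIT for gen 6).
GENS = {
    "PCIe 3.0": (8.0, 128 / 130),   # (GT/s per lane, encoding efficiency)
    "PCIe 4.0": (16.0, 128 / 130),
    "PCIe 5.0": (32.0, 128 / 130),
    "PCIe 6.0": (64.0, 0.98),       # PAM4 + FLIT, efficiency approximated
}

LANES = 16
for name, (gt_s, eff) in GENS.items():
    gb_s = gt_s * eff * LANES / 8   # GT/s -> GB/s across the x16 link
    print(f"{name}: ~{gb_s:.0f} GB/s per direction")
# PCIe 6.0 lands at roughly 8x PCIe 3.0, not 10x.
```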

I’m not talking about the PCIe 3.0 bandwidth bottleneck, which will likely become apparent with the RTX 4000-series cards (you may well need a PCIe 4.0 motherboard to avoid saturating the slot with an RTX 4000),

but rather the latency cost of processing data processor > motherboard > GPU instead of directly processor > GPU.
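
To make the latency point concrete: for small transfers, the fixed round-trip cost dominates no matter how fat the pipe gets. A toy model (the latency and bandwidth figures below are illustrative assumptions, not measurements):

```python
def transfer_time_us(payload_bytes, latency_us, bandwidth_gb_s):
    """Naive model: fixed link latency plus payload / bandwidth."""
    return latency_us + payload_bytes / (bandwidth_gb_s * 1e3)  # GB/s -> B/us

# Assumed numbers: ~1 us through the PCIe hierarchy vs ~0.1 us for a
# hypothetical on-package CPU->GPU link; x16 bandwidths for gen 3 vs gen 6.
for size in (4 * 1024, 1024 * 1024):                # 4 KiB and 1 MiB payloads
    via_board  = transfer_time_us(size, 1.0, 16)    # PCIe 3.0-ish path
    on_package = transfer_time_us(size, 0.1, 125)   # gen-6-ish, direct link
    print(f"{size:>8} B: {via_board:6.2f} us via board, "
          f"{on_package:5.2f} us direct")
# For a 4 KiB command buffer the fixed latency dominates, so extra
# bandwidth alone barely helps -- hence wanting processor > GPU directly.
```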

I’m not convinced that SLI is the answer anymore, unless performance scales properly and individual work can be assigned to any GPU chip. Even then, latency issues remain even if saturation is solved by PCIe gen 5 and 6. I think SoCs with some kind of Infinity Fabric link from the SoC to the GPU hardware would be the answer, but that would mean putting the GPU right on top of the CPU in production. It looks like Intel and AMD may be the only ones who could do that, since they both design their own CPUs and GPUs (and TSMC only fabricates the chips for AMD, not the whole PCBs).

How did PCIe bandwidth end up in this thread? /offtopic

NVIDIA already has capabilities far beyond PCIe with its proprietary NVLink, and the CPU/RAM side is inevitably becoming less and less relevant to ‘feeding the GPU’. NVIDIA just elected not to give us ‘gamers’ any decent NVLink capability, presumably so they can move some product on us at some point or another.

The 3090 has NVLink, doesn’t it? Have you heard much about the scaling performance of multiple 3090s over NVLink?

I haven’t read into it much, but I recently read that AMD was working on something that allows any GPU to be assigned individual work, which may have some interesting use cases in the future.

Theoretically you could have a high-end GPU and 3-4 ASICs on a PCIe 6.0 board, each dedicated to shaders, lighting, physics, etc., and achieve 10x the performance anyone could get with standard video cards.

That is a very ill-informed statement. For better or worse, it is not without good reason that NVIDIA and AMD GPUs are not beaten in any relevant performance category by ASICs of any kind. Modern GPUs are very well designed: these companies have had thousands of people doing little else but optimizing math and graphics performance, both per watt and per dollar of capital cost.
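
One more way to see why a flat ‘10x’ is optimistic, before perf/W even enters the picture: any split of a frame across separate chips pays a serial and communication tax, and Amdahl’s law caps the speedup. A toy calculation (the fractions are made up for illustration):

```python
def amdahl_speedup(parallel_fraction, n_chips):
    """Amdahl's law: best-case speedup when only part of the work splits."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_chips)

# Made-up fractions: even if 90-99% of a frame splits cleanly across
# 5 chips (one GPU plus four ASICs), the serial remainder and
# inter-chip traffic cap the result well below 5x, let alone 10x.
for p in (0.90, 0.95, 0.99):
    print(f"parallel = {p:.0%}: max speedup on 5 chips = "
          f"{amdahl_speedup(p, 5):.2f}x")
```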

Over the years, I have repeatedly debunked the Parallella, the Google TPU, and other ‘alternative’ hardware by doing basic GFLOPS/$ and GFLOPS/W estimation. At best, the alternatives to modern GPUs (or even Intel CPUs) come within some reasonable percentage of the ‘real thing’. Those alternative chips exist not so much for performance as for not being dependent on NVIDIA/AMD.
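
The estimation itself is nothing fancy. A sketch of the kind of comparison I mean (every figure below is a placeholder to be filled in from spec sheets, not real data):

```python
# Placeholder entries: (peak GFLOPS, price in $, board power in W).
# None of these are real figures; pull actual ones from vendor spec sheets.
chips = {
    "big-vendor GPU":  (30_000, 1500, 350),
    "alt accelerator": (20_000, 2500, 300),
}

for name, (gflops, dollars, watts) in chips.items():
    print(f"{name}: {gflops / dollars:6.1f} GFLOPS/$, "
          f"{gflops / watts:6.1f} GFLOPS/W")
# Run with real numbers, the 'alternative' silicon rarely wins either
# ratio by a meaningful margin -- which is the point above.
```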

Is that the case even if you had a GPU plus an ASIC with, say, 5x the shader cores of an NVIDIA or AMD GPU, or of whichever component it offloads?

Of course, you’d need separate ASICs for Unreal Engine games, Unity games, Havok-engine games, and so on.

How much money, and how many watts, can you afford to spend?

I don’t have a budget as such, and power is very cheap here. If it can be done, I’d strongly consider it over waiting 6-10 years for the same performance in consumer-level GPUs.

It’s more a question of whether NVLink, or a similar AMD equivalent, soon arrives to solve the age-old latency problem, and whether PCIe gen 5 and 6 solve the saturation issues, before such an approach becomes feasible.

Ok, how does a $100M run of a few dozen custom ASICs just for you sound? And next year NVIDIA releases something better (it might take you that long anyway). While you are at it, start by funding the open-source GPU (these folks already have a CPU design, which is a good starting point, since GPUs are largely built from massive numbers of simpler CPU cores).

https://www.eetimes.com/rv64x-a-free-open-source-gpu-for-risc-v/

Have fun!

On a serious note, I would love for such things as open-source GPUs to happen, but I am not holding my breath. sigh

I lost my hard drive with 10,000 bitcoin, unfortunately, so I won’t be able to afford that at the moment. Shame; I could have afforded 5 or 6 of those ASICs.

Ouch! That really sucks.
