AMD Will Need Another Decade To Try To Pass Nvidia - note the gaming revenue trends

https://www.nextplatform.com/2024/10/30/amd-will-need-another-decade-to-try-to-pass-nvidia/ An article which shows that gaming revenues for AMD are down, but datacenter revenues are way up.
As an AMD fan, I hope Lisa Su's health holds up for a nice long time.
However well AMD does, as long as Jensen Huang is fit and in charge, I don't see a way in for AMD.

Nvidia released CUDA and its first HPC cards in 2006, the year AMD was acquiring ATI.

AMD is basically two decades behind Nvidia in some respects, but maybe with proper focus they can pull a Zen on data centre GPUs.
 
Alternatively, can Nvidia's Arm offerings give them an even bigger lead in the server and PC markets?
 
It is a fight for survival for AMD.

They have acquired rack companies, networking companies, Xilinx, etc. (There is also now a tie-up with Intel.)
 
If Intel can actually deliver 18A, I see someone big jumping over to use their fabs, if not two or more. Anyway, would AMD really be a competitor to Intel if they used Intel's fabs, or more like a partner, keeping x86 alive for another decade or two? As for being a decade behind: ahem, no.
 
Intel's biggest issue is their E-cores.
E-cores were developed to solve two problems: their shit fab process (now almost solved) and the need to get that crap process to a power point that could meet certain EU and California power requirements for desktop equipment.
The transition to many desktops using mobile silicon, along with improvements in the fab process, also solved that.

Intel needs to ditch their mixed-core approach. It was a solution for a time that has passed.
 
https://www.nextplatform.com/2024/10/30/amd-will-need-another-decade-to-try-to-pass-nvidia/ An article which shows that gaming revenues for AMD are down, but datacenter revenues are way up.
As an AMD fan, I hope Lisa Su's health holds up for a nice long time.
Here's the thing: not all HPC is equal. For a LOT of math, we need double precision or the errors grow too fast and the computation is worthless. I don't get the feeling that Nvidia prioritizes FP64, given Blackwell's specs.

https://wccftech.com/nvidia-blackwe...0x-faster-simulation-science-18x-faster-cpus/
H100: 34 TFLOPS FP64
GB200 Superchip (includes the Grace Arm CPU): about 90 TFLOPS FP64
AMD MI300X: 81.7 TFLOPS of FP64 on a single chip
(citation above)


So, if you want to do any HPC that needs double precision (doubles), AMD seems like a winner. If AMD were to focus on making a competitor to CUDA (i.e. making their GPUs easy for devs to use), they could be competitive for a lot of non-AI HPC applications.
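
To illustrate the "errors grow too fast" point, here is a minimal sketch in Python/NumPy (CPU only, nothing GPU-specific): solving a small but ill-conditioned system in FP32 vs FP64. The Hilbert matrix and size are just an illustrative choice on my part, not anything from the article.

Code:
# Minimal sketch: why double precision matters for ill-conditioned problems.
# Plain NumPy on the CPU, purely to show the precision effect, not GPU speed.
import numpy as np

n = 8
# Hilbert matrix: a classic, badly conditioned test case
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

for dtype in (np.float32, np.float64):
    x = np.linalg.solve(A.astype(dtype), b.astype(dtype))
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"{np.dtype(dtype).name}: relative error {err:.2e}")

On a matrix like this the FP32 solve is off by orders of magnitude more than the FP64 one, which is exactly the kind of behaviour that makes hardware FP64 throughput matter.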
 
Here's the thing....Not all HPC is equal. For a LOT of math, we need double precision or the errors grow too fast and the computation is worthless. I don't get the feel that Nvidia prioritizes FP64 given Blackwells specs..
From my limited understanding you are right. As Jensen said, a CPU is about throwing a limited number of very high-quality cores at a precise problem, while a GPU is about a massive number of lower-quality, less precise cores for things that often do not have an actual exact answer to be found.
AMD MI300X 81.7 TFLOPs of FP64 capabilities on a single chip.
Not sure about "single", depending what we mean; that is over 1,000 mm² of silicon sold for something like $10-20k, and it looks like this, so I am not sure it counts as a single monolithic chip:

[Image: AMD Instinct MI325X package - https://www.amd.com/content/dam/amd/en/images/products/data-centers/2882196-amd-instinct-mi325x.png]

This seems like 4 different tiles, with 8 GPU tiles/HBM stacks or something like that, "fused" in a way that looks similar to what Nvidia does with the GB200 (for the dual-GPU part). You must get better bandwidth and lower latency between chips that share an interposer, and for some specific workloads that fit in their HBM it can be impressive, but for larger work that uses 80, 800, or 8,000 of them, the best networking could win.
 
Strict FP64 is one of the few remaining areas where AMD shines.
Newer CUDA hardware can get very close in performance at the same levels of accuracy, but that hardware costs more.
So if FP64 is all you want, then AMD all the way.
 
CUDA hitting FP64 performance? There's NO chance double-pumping FP32 is going to match full 64-bit register math, and NO chance a poorly conditioned matrix works in FP32. MATLAB is easily the most commonly used application in this space, and I don't see this at all on my home rig. I don't use GPUs for MATLAB on my work PC, but those are all Quadro GPUs.

I am not an expert in CUDA, but I do enough that I am able to test this on my 3090s. I have a few H100s I can get some time on at work if you want me to test something on those. If you have a reference for how to test this, I'd be happy to give it a spin (might take me a bit to follow up).
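
For what it's worth, the quick-and-dirty test I would start with is just timing a big FP64 matmul. A minimal sketch, assuming CuPy is installed to match your CUDA toolkit; the matrix size and iteration count are placeholders, not a rigorous benchmark:

Code:
# Rough FP64 matmul throughput check (a sketch, not a proper benchmark).
# Assumes CuPy is installed for the local CUDA version.
import time
import cupy as cp

n = 8192                      # matrix size; shrink if you run out of VRAM
iters = 10

a = cp.random.random((n, n))  # CuPy defaults to float64
b = cp.random.random((n, n))

c = a @ b                     # warm-up so startup overhead isn't timed
cp.cuda.Device().synchronize()

t0 = time.perf_counter()
for _ in range(iters):
    c = a @ b
cp.cuda.Device().synchronize()
t1 = time.perf_counter()

flops = 2 * n**3 * iters      # ~2*n^3 floating-point ops per matmul
print(f"FP64 matmul: {flops / (t1 - t0) / 1e12:.2f} TFLOPS")

A GeForce card should land well below its FP32 number here (FP64 runs at roughly 1:64 on recent consumer parts), while an H100 or MI300X should get much closer to its quoted FP64 figure.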
 
Dollar for dollar, Nvidia doesn't come close. The MI300X doesn't do FP64 as fast as the new Blackwell parts, but the Blackwell parts cost more than 2x the MI300X.
So you can easily get double Nvidia's performance for the same money spent if FP64 is your only concern.
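
Napkin-math version of that argument. The TFLOPS figures are the ones quoted upthread; the prices are purely assumed placeholders for illustration, not actual quotes:

Code:
# Back-of-the-envelope FP64-per-dollar comparison.
# TFLOPS figures are from earlier in the thread; prices are assumptions.
parts = {
    "AMD MI300X":      {"fp64_tflops": 81.7, "price_usd": 15_000},  # assumed price
    "GB200 Superchip": {"fp64_tflops": 90.0, "price_usd": 60_000},  # assumed price
}

for name, spec in parts.items():
    gflops_per_dollar = spec["fp64_tflops"] * 1000 / spec["price_usd"]
    print(f"{name}: {gflops_per_dollar:.2f} FP64 GFLOPS per dollar")

With those assumed prices the MI300X comes out well ahead per dollar; swap in whatever prices you actually get quoted.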
 
I think it hands down crushes Blackwell, in the sense that only the Superchip (GPU plus Grace CPU) hits 90 TFLOPS in total. The AMD MI300X is a single chip at nearly 82 TFLOPS.

My question is: why doesn't MATLAB support AMD GPUs? AMD needs to up their investment in supporting devs.
 
Yeah, MATLAB uses the official AMD drivers, as those are the only ones certified; AMD hasn't done a good job of updating them, and it's painful.
 
From my limited understanding you are right. As Jensen said, a CPU is about throwing a limited number of very high-quality cores at a precise problem, while a GPU is about a massive number of lower-quality, less precise cores for things that often do not have an actual exact answer to be found.

Not sure about "single", depending what we mean; that is over 1,000 mm² of silicon sold for something like $10-20k, and it looks like this, so I am not sure it counts as a single monolithic chip:


This seems like 4 different tiles, with 8 GPU tiles/HBM stacks or something like that, "fused" in a way that looks similar to what Nvidia does with the GB200.
Fairly sure the AMD part is not heterogeneous compute (it feels weird to say homogeneous). Writing efficient CPU code and writing efficient GPU code are very different.

So what?

I don't think it would be easy or very useful to write CPU + GPU code to try to hit 90 TFLOPS. I would prefer to just do all CUDA, or all CPU. I also have no idea of the FP64 breakdown on the Superchip, and Nvidia doesn't seem interested in being forthcoming.
 
AMD won't catch up until Nvidia reaches 1 nm... then there is pretty much nowhere to go.
Unless Nvidia gets a little cocky, starts over-promising and under-delivering, and leaves room for others to step in and take their place.

The node isn't everything; it's the Nvidia ecosystem and software stack that are keeping them well out in front.
 
https://www.nextplatform.com/2024/10/30/amd-will-need-another-decade-to-try-to-pass-nvidia/ An article which shows that gaming revenues for AMD are down, but datacenter revenues are way up.
As an AMD fan, I hope Lisa Su's health holds up for a nice long time.
AMD's gaming GPU sales suck because AMD doesn't want to lower prices, and their ray-tracing performance is poor. As much as people hate the RTX 4060, it is a better GPU for the price. As for server sales, we know it's all for AI. The question is how long the AI bubble has before it pops; I really doubt it'll last beyond 10 years. The AI bubble would be lucky to last past 2026, at which point AMD doesn't need to catch up anymore because demand for AI hardware will have declined. Also, everyone and their grandma is making AI hardware, so either way the market for AMD is going to be smaller, but this also applies to Nvidia.
 
Looking at this list doesn't give much hope.

(Maybe FSR 4 & UDNA can spark a turnaround 🤔)

A region-wise split would be more informative.

Looking at the AMD cards, there is only one RDNA 3 card in this list:

6600
580
6700xt
5700xt
580 2048sp
570

7900xtx

6600xt
6650xt
550
6800xt
6750xt
6900xt
5600xt
6750 gre 12gb
6800
6500xt
5500xt
 
The node isn’t everything
Is it anything when talking Nvidia vs AMD? It is not like AMD has not been using TSMC and very similar nodes, with some of the best collaboration in that regard (3D V-Cache, MI300X packaging). Does Nvidia have any advantage on that front?
 
The question is, how long does the AI bubble have before it pops? I really doubt it'll last beyond 10 years. The AI bubble would be lucky to last past 2026, which at that point AMD doesn't need to catch up anymore because the demand for AI hardware would have declined. Also everyone and their grandma is making AI hardware, so either way the market for AMD is going to be lesser but this also applies to Nvidia.
The AI bubble won't pop; that's Luddite thinking. Here is just one reason why: ever hear of Folding@Home? Twenty-four years of intense computer usage replaced by AI.

Nvidia will continue to eclipse AMD unless AMD can release a better-performing AI chip. Nvidia will have so much excess capital from AI and gaming that it will be difficult for AMD to compete in high-end gaming.
 
AI bubble won't pop. Luddite thinking. Here is just one reason why. Ever hear of Folding@Home? 24 years of intense computer usage replaced by AI.

Nvidia will continue to eclipse AMD, unless they can release a better performing AI chip. NVIDIA will have excess capital from AI and gaming that
Don't be surprised if Nvidia goes on an acquisition spree. And Trump, unlike Biden, is much less likely to challenge an acquisition for being anti-competitive. So Nvidia can overpay just to deny all competitors a chance to acquire a given company.

it will make it difficult for AMD to compete in high end gaming.
and more ... Sadly.
 
Don't be surprised if NVidia goes on an acquisition spree. And Trump, unlike Biden, is much less likely to challenge an acquisition for being anti-competitive. So NVidia can overpay just to deny all competitors a chance to acquire a given company.
Well, he had the good sense to stop Broadcom from buying Qualcomm previously, so there's that at least. His reason was a little strange, but he still did it.
 
AI bubble won't pop. Luddite thinking. Here is just one reason why. Ever hear of Folding@Home? 24 years of intense computer usage replaced by AI.
You don't think it'll happen because of a niche case? I'm not saying AI doesn't have a future, I'm just saying it doesn't have one that can sustain Nvidia and everyone else thinking of jumping on the AI train. Look at Microsoft and how they're trying to shoehorn AI into Windows with Copilot and Recall. They need a way to make AI work for everyone, and that's not going to happen this way.
Nvidia will continue to eclipse AMD, unless they can release a better performing AI chip. NVIDIA will have excess capital from AI and gaming that it will make it difficult for AMD to compete in high end gaming.
Gaming is a different story, but AMD could do it if they made some simple changes. One is that they need better ray-tracing performance. If I were trying to compete with Nvidia, I would try to find a way to do ray tracing without any performance loss, because Nvidia hasn't cracked that code. The second thing AMD needs is to lower prices. That's something investors don't want to hear, but they need market share and this is how you get it. The third thing AMD should do is leverage their graphics with their CPUs. Ryzen is popular but Radeon graphics is not, so why not pair good graphics with their CPUs? The 9800X3D has built-in graphics, but it's terrible; it's not even as good as the laptop graphics found in AMD's Ryzen AI 300 series. An easy way to expand Radeon graphics is to just include their latest graphics technology in their CPUs. This seems like the direction Intel is going, and it's not stupid.
 
AMD's gaming GPU sales sucks because AMD doesn't want to lower prices,
AMD's gaming segment had a 2% margin last quarter; I am not sure how much room they have in that regard.

AMD could do it if they made some simple changes. One is that they need to get better Ray-Tracing performance. If I were trying to compete with Nvidia i would try to find a way to get Ray-Tracing without any performance loss,
A simple change... like doing real-time ray tracing faster than raster? (What would that even mean? How many rays, how many bounces, etc.; there is no limit on how fast or slow ray tracing gets, it is partly a choice of quality as well.) And also just make a better GPU (i.e. more performance per dollar it costs them to make)?

If only someone at AMD had thought of those 2 simple things (or no villainous stockholder had stopped them from doing so....)

An easy way to expand Radeon graphics is to just include their latest graphics technology in their CPU's. ... It's not even as good as laptop graphics found in AMD's Ryzen AI 300 series.
Are you suggesting they put a 16 CU or bigger GPU in all their CPUs and not just in their G line (8700G type of offering)? Or do you mean still using only 2-4 CUs, but of the latest generation? Zen 5 comes with "RDNA 3+" or RDNA 2, depending on who you read...

iGPU is identical to the one on Ryzen 7000—it is based on the RDNA 2 architecture, https://www.techpowerup.com/review/amd-ryzen-7-9700x/22.html
In terms of graphics, the Ryzen 7 9700X includes an integrated GPU (iGPU) with 2 compute units (CUs) based on the RDNA 3+ architecture. https://www.guru3d.com/review/review-ryzen-7-9700x-processor/#:~:text=In terms of graphics, the,operate up to 2,200 MHz.


I think a much better iGPU on all their CPUs (say, with an F equivalent as a no-iGPU option) would be a good thing for the business desktop segment. I would not go to Ryzen AI 300 level, but maybe 4 CUs with great media encode/decode and drivers. The OpenGL application I work on seems to have issues with their desktop iGPU once VRAM usage ramps up, but it is something that ran relatively well on the old Sandy Bridge iGPU.
 
This is certainly not happening within a decade, unless Jensen stops wearing leather jackets and Nvidia starts doing an Intel.

There is nothing AMD can do about it; Jensen drives the company too well to allow it, even after an eventual AI crash. Perhaps if they get too huge and the US government intervenes, or something similar not related to business operations and innovation.
 
Also, how old is Jensen? What about his health? At some point he will be replaced. Is there a succession plan in place?
 
Both look to have the health and the drive to work for a long time.

Some brain drain could happen, with Nvidia veterans in all positions being extremely rich and retiring or semi-retiring.

NVIDIA will have excess capital from AI and gaming that it will make it difficult for AMD to compete in high end gaming.

One thing that could make it hard for AMD to compete with Nvidia's highest-end gaming GPUs is if Nvidia can make and sell, at very high prices, x40/RTX 6000 enterprise and datacenter cards whose worse bins end up as the xx90 gaming card, while AMD has little other market for its top-of-the-line GPU. Navi 21 had some Radeon Pro products, but I am not sure how many they sold; Navi 31 had the Radeon Pro W7900... In both cases they were not really binned-down versions: the 6900 XT and the 7900 XTX had the same number of cores enabled as them. If Nvidia really goes with its plan of a yearly release a la Apple (or the early days of GPUs), because it needs a new x40/RTX 6000 GPU every year and the gaming card is a by-product of that, AMD could have a hard time spending that much on R&D to keep up at the very top of the line.
 
AMD gaming segment had 2% margin last quarter, I am not sure how much room they have in that regard.
Just like automakers in the US and EU have to figure out how to compete against China, so too will AMD have to come up with new and creative ways to make cheaper and better hardware. Does nobody wonder why modern graphics cards are so big and heavy? Whatever happened to using HBM memory like the Vega and R9 Fury cards? It's gotten to the point where you can crack the PCB due to the weight.
A simple change... like doing real-time ray tracing faster than raster? (What would that even mean? How many rays, how many bounces, etc.; there is no limit on how fast or slow ray tracing gets, it is partly a choice of quality as well.) And also just make a better GPU (i.e. more performance per dollar it costs them to make)?

If only someone at AMD had thought of those 2 simple things (or no villainous stockholder had stopped them from doing so....)
Simple as in: invest in it before Nvidia does. Right now Nvidia doesn't care about gaming, because they're now an AI company. Nvidia's solution to poor RT performance was DLSS, and that is how Nvidia plans to keep solving this problem. At some point someone is going to get ray tracing to perform as well on as off; it might as well be AMD.
Are you suggesting they put a 16 CU or bigger GPU in all their CPUs and not just in their G line (8700G type of offering)? Or do you mean still using only 2-4 CUs, but of the latest generation? Zen 5 comes with "RDNA 3+" or RDNA 2, depending on who you read...
Whatever they put in their Ryzen AI laptops they should also put in their desktop chips. This game with their G products needs to end. Intel's Lunar Lake is already better in GPU performance, and it's just a matter of time before Intel starts putting good graphics in all their CPUs.
I think a much better iGPU on all their CPUs (say, with an F equivalent as a no-iGPU option) would be a good thing for the business desktop segment. I would not go to Ryzen AI 300 level, but maybe 4 CUs with great media encode/decode and drivers. The OpenGL application I work on seems to have issues with their desktop iGPU once VRAM usage ramps up, but it is something that ran relatively well on the old Sandy Bridge iGPU.
Market share is market share. You know what's worse than a GPU nobody buys? A GPU that nobody supports. AMD has more advantages than Nvidia right now, that is, until Nvidia figures out they're not just an AI company. Right now AMD is its own worst enemy when it comes to how it uses its GPU technology, and it needs to use its latest tech in everything.
 
I don't think it will take AMD a decade to become competitive again.

In 2026 we will probably have competition in higher-tier cards, and I guess they will have a top halo card this time, of course only if they find a way to control the heat from the large number of chiplets... So for next year we can only hope for a price war with the 8000 series.
 
of course if they find a way to control the heat from the large number of chiplets...
I would have thought that, outside of cutting costs (R&D, once it works across the SKU line, and better yields on the manufacturing side), the only advantage* of chiplets is easier heat management, as you have a series of distinct hot spots that are easier to manage instead of a single one.

*(when you are still under ~800 mm²; for giant products there is also the ability to get bigger than the maximum die size constraint)
 