New AMD Radeon Instinct MI100 leaked - Arcturus coming.

Jandor · Gawd · Joined: Dec 30, 2018 · Messages: 665
This seems to be an early leak, but it's already a complete GPU built purely for compute. It has a 200W TDP for now, and it looks like it will do around 12 TFLOPS FP64, against the roughly 18 TFLOPS leaked for the future Tesla Ampere.
That much FP64 on one GPU is around double (triple for Nvidia) the performance of the previous line of cards from AMD (Vega 20) and Nvidia (Volta).
https://wccftech.com/amd-radeon-instinct-mi100-arcturus-gpu-32-gb-hbm2-200w-tdp-rumor/
32GB HBM2 is nothing new though (equal to Vega 20).
 
What numbers are you using for your double and triple calculation?

Both Vega 20 and GV100 are 7+ Tflops FP64 depending on the implementation (SKU).
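A quick back-of-the-envelope check, using peak FP64 figures from the public spec sheets (the MI60 and V100 numbers below are my assumptions, not part of the leak):

```python
# Sanity check on the "double/triple" claim. Peak FP64 values here are
# assumptions taken from public spec sheets, not from the leak itself.
peak_fp64 = {
    "Radeon Instinct MI60 (Vega 20)": 7.4,   # TFLOPS
    "Tesla V100 (Volta)": 7.8,               # TFLOPS
}
rumored = {
    "MI100 (Arcturus, rumor)": 12.0,         # TFLOPS
    "Tesla Ampere (rumor)": 18.0,            # TFLOPS
}
pairs = [
    ("MI100 (Arcturus, rumor)", "Radeon Instinct MI60 (Vega 20)"),
    ("Tesla Ampere (rumor)", "Tesla V100 (Volta)"),
]
for new, old in pairs:
    ratio = rumored[new] / peak_fp64[old]
    print(f"{new} vs {old}: {ratio:.2f}x")
# Prints roughly 1.62x and 2.31x -- closer to 1.6x / 2.3x than to
# "double / triple", which is probably what the question is getting at.
```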
 
I'm curious as to how they improved power efficiency so much that they were able to get drastically more cores, drastically more performance, and drastically lower power... Is this still Vega???

Must be Vega (next version)

The clocks are pretty low, so the power should be manageable as long as the clocks are not cranked high (like in the consumer gaming cards that compete with Nvidia).
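That's roughly how the physics works out. Dynamic power goes as P ~ C·V²·f, and near the top of the frequency range voltage has to climb roughly with clock, so power grows close to cubically with frequency. A rough sketch (all numbers illustrative, not leaked figures):

```python
# Back-of-the-envelope dynamic-power model: P ~ C * V^2 * f.
# When voltage has to scale with clock (typical near the top of the
# V/f curve), power grows roughly with the cube of frequency.
def relative_power(f_ratio: float, voltage_tracks_clock: bool = True) -> float:
    """Dynamic power relative to baseline when clock changes by f_ratio."""
    v_ratio = f_ratio if voltage_tracks_clock else 1.0
    return v_ratio ** 2 * f_ratio

# Running ~25% slower at proportionally lower voltage cuts dynamic
# power to less than half of baseline:
print(relative_power(0.75))  # 0.75^3 ≈ 0.42
```

Which is why a compute card clocked conservatively can pack far more cores into the same TDP than a gaming card chasing peak clocks.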
 
You should link Videocardz, a much higher-quality website in general...
I'm curious as to how they improved power efficiency so much that they were able to get drastically more cores, drastically more performance, and drastically lower power... Is this still Vega??? Or are we looking at something Navi-related, or something entirely new???

https://videocardz.com/newz/amd-radeon-instinct-mi100-with-32gb-hbm2-and-arcturus-gpu-spotted

ExtremeTech thinks it is possible that this is Vega (and not Navi):

https://www.extremetech.com/computi...nct-mi100-leaks-hint-at-massive-8192-core-gpu
The leak indicates a 32GB HBM2 card (the same maximum RAM loadout as the current Instinct family), with up to 8192 GPU cores, a boost clock of 1.33GHz, and a base clock of 1GHz.

All of this, supposedly, in just a 200W TDP.

Is it possible to cram this kind of performance into a data center GPU? The answer might be yes.
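The leaked figures line up with the rumored FP64 number, if you assume the usual 2 FLOPs per core per clock (one FMA) and a 1:2 FP64 rate like Vega 20's. Both are assumptions on my part; the leak itself gives no FLOPS figures:

```python
# Theoretical peak from the leaked specs, assuming 2 FLOPs per core
# per clock (one FMA) and a 1:2 FP64:FP32 rate -- both assumptions,
# since the leak only gives core count and clocks.
cores = 8192
boost_ghz = 1.33
fp32_tflops = cores * 2 * boost_ghz / 1000
fp64_tflops = fp32_tflops / 2
print(f"FP32: {fp32_tflops:.1f} TFLOPS, FP64: {fp64_tflops:.1f} TFLOPS")
# ~21.8 TFLOPS FP32 and ~10.9 TFLOPS FP64 at boost -- in the same
# ballpark as the rumored ~12 TFLOPS FP64.
```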
 
What is interesting is that Indiana State University, for their AI computer, is going with Epyc (working with AMD) but with Nvidia's next-gen HPC GPUs. To me that means Nvidia surpassed what AMD is offering here, unless AMD did not have this ready or was not consulted by the university, which I doubt. The tensor cores in Nvidia's HPC GPUs are just way more potent for AI in the long run, is what I am thinking. For scientific FP64 work, calculating the universe's dark-matter distribution for example, these might work great.
 
It could also be software support for what they are running, or pre-existing Nvidia infrastructure.
 

Exactly. Power doesn't matter if you can't use it.

Arcturus could be faster than Ampere but it will still be up against CUDA.
 

Nvidia has been an order of magnitude ahead of AMD in architectural efficiency. It's going to be hard to use AMD for datacenter-type setups when the power-envelope difference is so vast. Also, AI is becoming close to a necessity for everything moving ahead, and as far as I know there are only about four companies doing (their own) legit tensor cores at scale right now: Nvidia, Google, Tesla, and Marvell. Marvell is looking to consolidate the market for custom tensor-silicon designs (which they then bang out on GloFo). They have been buying up all the competing engineers and slowly axing the competition, so at some point you're going to start hearing their name a lot more, kind of like when Google arrived.
 
We do not know yet how efficient AMD's Arcturus architecture will be, or Nvidia's next-gen Tesla GPUs. We do know that Nvidia has a sound, widely used software stack supporting its hardware, which AMD does not seem to have. The only way I see AMD competing with Nvidia is to sell for much less, giving college students, engineers, scientists, etc. a very affordable yet potent card, to get their developer and user numbers up. This would not happen overnight. Nvidia also has AI servers one can rent for deep learning, so one does not even need to invest heavily in a rig.

In my opinion, AMD maybe needs to think very big/wide/long-term here for data centers: hire professional teachers/programmers/video producers and give out a lot of training to ease those professionals into the AMD ecosystem. It's like Apple virtually owning desktop publishing and video for years, even when its hardware was second-rate compared to the PC side. Once folks get used to an ecosystem and can perform their tasks and get the job done, it becomes really hard to retrain them, convince them, and buy new software to move to something else. AMD getting into separate niche markets like video editing and other fields requiring very high FP64 would be prudent, while also building up in markets already established.
 


I'd like to see AMD put together some sort of TPU core for AI research. From my comfy seat behind my keyboard, it just seems that AI processors are going to be the new CPU moving ahead. GPUs were adequate for many needs until recently, when TPUs essentially improved upon them exponentially from an AI deep-learning standpoint. This golden age of CPUs is great for AMD and all, but missing out on the next big silicon product of the future would be a big mistake.
 