AMD's 7nm Products Provoke Interesting Industry Responses

AlphaAtlas

In their coverage of AMD's 7nm EPYC and Vega launches, EE Times dug up some interesting responses from the scientific community and other corners of the computing industry. Beyond the usual anticipation of performance gains that comes with any launch, some scientists replying to the story on Twitter went further. One English researcher lamented the $15,000 cost of Nvidia Tesla V100s, calling the pricing "not sustainable" while pointing to "Intel and Nvidia gross margins in excess of 63%." Another user pointed to the ballooning die sizes, and costs, of GPUs over time, as you can see in the image in the article. Other programmers criticized the poor quality of AMD's GPU compute software stack while lamenting the proprietary nature of most of Nvidia's driver code.

On the hardware side, AMD's Epyc and Vega are among the first reality checks on the 7-nm node. TSMC said in March 2017 that its process would offer up to 35% speed gains or 60% lower power compared to its 16FF+ node. However, AMD is only claiming that its chips will sport 25% speed gains or 50% less power compared to its 14-nm products. "TSMC may have been measuring a basic device like a ring oscillator - our claims are for a real product," said AMD CTO Mark Papermaster in an interview the day that the 7-nm chips were revealed. "Moore's Law is slowing down, semiconductor nodes are more expensive, and we're not getting the frequency lift we used to get," he said in a talk during the launch, calling the 7-nm migration "a rough lift that added masks, more resistance, and parasitics."
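Putting those claims side by side, here is the arithmetic spelled out (these are just the marketing numbers quoted above, not measured silicon):

# Foundry marketing claims vs. AMD's product-level claims for 7nm.
# Pure arithmetic on the figures in the article; real gains depend on
# the design, not just the process.
tsmc_speed_gain = 0.35   # "up to 35% speed gains" vs. 16FF+
tsmc_power_cut  = 0.60   # "or 60% lower power"
amd_speed_gain  = 0.25   # AMD: 25% speed gain vs. its 14nm products
amd_power_cut   = 0.50   # or 50% less power

print(f"speed claim gap: {(tsmc_speed_gain - amd_speed_gain) * 100:.0f} points")
print(f"power claim gap: {(tsmc_power_cut - amd_power_cut) * 100:.0f} points")
# AMD's "real product" numbers land ~10 points below the foundry's
# ring-oscillator-style best case in both directions.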
 
Thanks OP. Look at what nVidia is dealing with on the 12nm node. We are approaching the limits of physics. [H]ard things are [h]ard. And advancing microprocessor manufacturing is damn [h]ard.
 
Key words: "claim" and "theory" vs. real-life application... :D
Yeah, in real-life application one is almost impossible to work with profitably going forward, and the other is wildly more profitable. Hmmmmm~ ;)
 
I remember when GaAs was supposed to be the next big thing to replace silicon. I know it's used in some things, but I guess they never figured out a way to make it cost-effective for most things. Too bad; I'd really love to see a CPU/GPU way above 10GHz, potentially well above 100GHz.
 
I wish there was a version that hadn't been mangled by JPEG compression.
https://i.redd.it/1nlky813msl11.png (I plugged hardocp's image into tineye.com).

Source: https://tineye.com/search/7ccb543bf17a79c254d65f9efae93610fab86e9e/

Enjoy it in its 11,050 × 3,387 pixel / 5,887,127 byte glory.

Edit: The above image is an earlier draft, missing TU104 and TU106 on the far right; sorry about that. I plugged it into Google's reverse image search and found the current version, though at a lower resolution.

https://ru.gecid.com/data/news/201809180853-53566/img/01_nvidia_turing.png

4,000 × 1,226 pixels / 529,963 bytes

Edit 2: Real Source (I'm done lol)

https://pc.watch.impress.co.jp/video/pcw/docs/1139/009/p30.pdf

https://pc.watch.impress.co.jp/docs/column/kaigai/1139009.html
 
How much do they think this kind of computing power would have cost 5 years ago? A decade?
 
63% gross profit is unreal. That's higher than most retail stores.

I mean, the 1080 Ti was $750 new at launch and the 2080 Ti is $1,200... for what, 30% more performance? Ripping ya off hard...

The icing on the cake here is that the cards are failing...
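A quick back-of-the-envelope on those numbers (taking the ~30% performance figure above as given, not as a benchmark):

# Perf-per-dollar using the launch prices quoted above.
p1080ti = {"price": 750, "perf": 1.00}    # normalized baseline
p2080ti = {"price": 1200, "perf": 1.30}   # the poster's ~30% estimate

ppd_old = p1080ti["perf"] / p1080ti["price"]
ppd_new = p2080ti["perf"] / p2080ti["price"]
print(f"2080 Ti perf/$ vs. 1080 Ti: {ppd_new / ppd_old:.2f}x")
# ~0.81x, i.e. roughly 19% worse value per dollar at launch prices.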
 
AMD's new EPYC design is brilliant packaging. Using 7nm just for the CPU cores and a cheaper 14nm I/O die was smart.

Latency will increase, but hopefully it's marginal. I'm dying to see Zen 2. They're being cheeky about it and I don't know why.
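For intuition on why the 7nm-cores-plus-14nm-I/O split is smart, here is a minimal die-yield sketch using a simple Poisson defect model. All defect densities and areas are made-up round numbers for illustration, not AMD or TSMC figures:

import math

def yield_poisson(area_mm2, d0):
    # Probability a die of the given area has zero killer defects
    # (simple Poisson model; real fabs use fancier ones).
    return math.exp(-area_mm2 * d0)

# Hypothetical defect densities: a young 7nm process vs. mature 14nm.
d0_7nm, d0_14nm = 0.002, 0.0005   # defects per mm^2 (assumed)

# One big monolithic 7nm die vs. small 7nm chiplets plus a 14nm I/O die.
big_die = yield_poisson(600, d0_7nm)
chiplet = yield_poisson(75, d0_7nm)    # chiplets are tested and binned individually
io_die  = yield_poisson(400, d0_14nm)

print(f"monolithic 7nm yield: {big_die:.0%}")  # ~30%
print(f"per-chiplet yield:    {chiplet:.0%}")  # ~86%; bad ones are simply discarded
print(f"14nm I/O die yield:   {io_die:.0%}")   # ~82%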
 
I remember when GaAs was supposed to be the next big thing to replace silicon. I know it's used in some things, but I guess they never figured out a way to make it cost-effective for most things. Too bad; I'd really love to see a CPU/GPU way above 10GHz, potentially well above 100GHz.

Yeah, they've never been able to grow wafer sizes above 4" profitably. Silicon does 12" profitably. That's nine times the area available per wafer.

Also, there's no workable p-type device and no inexpensive gate insulator like silicon's native oxide, so building CMOS circuits is impossible. Everything is fast, but it leaks power like a sieve.
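The wafer math, for the curious (4" vs. 12" diameters; everything else cancels):

import math

INCH_MM = 25.4
gaas_area = math.pi * (4 * INCH_MM / 2) ** 2    # 4" GaAs wafer, mm^2
si_area   = math.pi * (12 * INCH_MM / 2) ** 2   # 12" silicon wafer, mm^2
print(f"area ratio: {si_area / gaas_area:.0f}x")  # (12/4)^2 = 9x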
 
Intel is outfitting a 7-nm fab with EUV in Chandler, Arizona, now, but no production is expected until at least late next year. Even Intel's current 14-nm yields are "above threshold but vary widely by product," said Jim McGregor, principal of Tirias Research.

Wow, this is at the end of the article. What is this exactly?
Confirmation that 10nm is dead at Intel?
Odd piece, to say the least...
 
Another user pointed to the ballooning die sizes, and costs, of GPUs over time, as you can see in the image in the article.

hmm... the GPUs I've bought over the years have all been about the same size, according to that chart, until we go REALLY far back... like the GeForce 256 / GeForce 2 / 3 timeframe...

If you're buying the larger GPUs, then you are presumably finding value in what that extra cost provides... and the manufacturers are marking those up more as a result.

According to the chart, I've been pretty happy with my GPUs in the ~300mm² range throughout the years, and my assumption is that the profit margin on an RX 580 or 1060 type of product is probably not 63%. You want a luxury GPU, you pay a luxury price... it's really kinda simple. Games can be configured to look and play great on the $200-300 cards. Sure, you don't get EVERYTHING, but you get all the bang-for-the-buck video options.
 
hmm... the GPUs I've bought over the years have all been about the same size, according to that chart, until we go REALLY far back... like the GeForce 256 / GeForce 2 / 3 timeframe...

If you're buying the larger GPUs, then you are presumably finding value in what that extra cost provides... and the manufacturers are marking those up more as a result.

According to the chart, I've been pretty happy with my GPUs in the ~300mm² range throughout the years, and my assumption is that the profit margin on an RX 580 or 1060 type of product is probably not 63%. You want a luxury GPU, you pay a luxury price... it's really kinda simple. Games can be configured to look and play great on the $200-300 cards. Sure, you don't get EVERYTHING, but you get all the bang-for-the-buck video options.

Soon we'll be getting chiplets due to die-size problems ;)
 
if that can lead to the new "baseline" CPU speed being 5+GHz, then I'm OK with that.

Take a look at Zen vs. Zen+: clocks did not move that much. The peaks were higher, but again, not by much. If AMD anticipates several years on 7nm, clocks might go up gradually as well, but a 5.0GHz base clock would take all the thunder out of Zen 3, and there is no need for AMD to push the desktop market that hard in such a short time (given the stranglehold Intel has on it). Remember that AMD is here to get the server market back on track (some of those dynamics are not really advantageous for desktop), so big gigahertz numbers are not really a priority.
 
AMD's new EPYC design is brilliant packaging. Using 7nm just for the CPU cores and a cheaper 14nm I/O die was smart.

Latency will increase, but hopefully it's marginal. I'm dying to see Zen 2. They're being cheeky about it and I don't know why.
I'm going to assume it's because when AMD is hyping a product, you know it'll be shit. When they are quiet, it's because they are all hiding in the back with evil grins saying "this time it doesn't suck".
 
I'm going to assume it's because when AMD is hyping a product, you know it'll be shit. When they are quiet, it's because they are all hiding in the back with evil grins saying "this time it doesn't suck".
They are not really quiet; AMD released a benchmark catering to financial workloads that suggests a 29% IPC increase.
 
I'm going to assume it's because when AMD is hyping a product, you know it'll be shit. When they are quiet, it's because they are all hiding in the back with evil grins saying "this time it doesn't suck".

That's not universally true. They publicized Ryzen and Epyc and they are both decent products.
 
That's not universally true. They publicized Ryzen and Epyc and they are both decent products.

Yes they did, but they did it in small, controlled, non-"hyped", bite-sized pieces. Take the Blender demo vs. the then-current X99 CPU: they showed it in multithreaded mode, and everyone just assumed that was the only way they could make the CPU look "good". Then it was released, and we found out it was very efficient, runs cool, and could offer competitive performance in every market.

Overnight, they killed the "quad core is best" mantra that Intel had been chanting for nearly a decade, and forced Intel to rapidly design the hot mess that was the X299 platform so they could "me too" affordable (but way more expensive than AMD's offerings, of course) high-core-count SKUs...


As long as Zen 2 can do 4.5~4.7GHz with good cooling (assuming a 5~7% IPC increase), it will be a success. If AMD comes in with a 10~15% IPC increase but can only manage 4.3~4.4GHz clocks, it is still a success. I am hoping for both (~10%+ IPC and 4.5GHz+ on water) on the 8c/16t part, and possibly a 12c/24t part (although I doubt we will see more than 8c on AM4).
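A crude IPC × clock comparison of those scenarios (performance modeled as IPC times clock only, with an assumed Zen+ baseline clock; real results depend on much more):

# First-order model: performance ~ IPC x clock, ignoring memory, caches, etc.
baseline_ipc, baseline_clock = 1.00, 4.35   # normalized Zen+, GHz (assumed)

scenarios = {
    "5-7% IPC at 4.5-4.7GHz":   (1.06, 4.6),
    "10-15% IPC at 4.3-4.4GHz": (1.125, 4.35),
    "hoped-for ~10% at 4.5+":   (1.10, 4.5),
}
for name, (ipc, clock) in scenarios.items():
    gain = (ipc * clock) / (baseline_ipc * baseline_clock) - 1
    print(f"{name}: ~{gain:+.0%} vs. Zen+")
# Both "success" scenarios land within a couple of points of each other.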
 
As long as Zen 2 can do 4.5~4.7GHz with good cooling (assuming a 5~7% IPC increase), it will be a success. If AMD comes in with a 10~15% IPC increase but can only manage 4.3~4.4GHz clocks, it is still a success. I am hoping for both (~10%+ IPC and 4.5GHz+ on water) on the 8c/16t part, and possibly a 12c/24t part (although I doubt we will see more than 8c on AM4).

Fair enough.

I agree on the 8-core part; I doubt AM4 will exceed that.
 
So a smaller chip means lower TDP, power draw, and cooling needs... so would we need less/smaller cooling? Or will thermals/cooling stay the same because they will have higher clocks? I'm talking gaming chips here.
 
So a smaller chip means lower TDP, power draw, and cooling needs... so would we need less/smaller cooling? Or will thermals/cooling stay the same because they will have higher clocks? I'm talking gaming chips here.

If AMD reused the Zen+ design with zero changes and kept the clock speeds the same, you would save ~30% on power use (IIRC, 7nm is roughly that), which would result in a chip that runs at a lower operating temperature.

Since AMD is using the node shrink to gain performance, expect the Zen 2 design to use roughly the same power with much improved performance.

The third potential option is a mix of performance and power savings, which is not the direction they are going to go, IMO. Closing (and then exceeding) the IPC/performance gap with Intel at a reasonable power draw is AMD's goal, so they are going to wring every bit of performance they can out of the new design.
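The trade-off sketched above follows from the first-order dynamic power relation P ~ C·V²·f; here it is as a tiny sketch, with the ~30% figure taken from this thread rather than measured:

# A node shrink can be spent on lower power at the same clocks, or higher
# clocks at the same power. The 30% savings figure is the rough claim in
# this thread, not a measurement.
power_savings = 0.30

print(f"iso-performance: {1 - power_savings:.0%} of the old power budget")
# Naive iso-power headroom, assuming voltage could stay fixed (it can't;
# higher clocks need higher voltage, which eats much of this):
print(f"iso-power clock headroom: ~{1 / (1 - power_savings):.2f}x")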
 
If AMD reused the Zen+ design with zero changes and kept the clock speeds the same, you would save ~30% on power use (IIRC, 7nm is roughly that), which would result in a chip that runs at a lower operating temperature.

Since AMD is using the node shrink to gain performance, expect the Zen 2 design to use roughly the same power with much improved performance.

The third potential option is a mix of performance and power savings, which is not the direction they are going to go, IMO. Closing (and then exceeding) the IPC/performance gap with Intel at a reasonable power draw is AMD's goal, so they are going to wring every bit of performance they can out of the new design.


One thing I forgot to mention is that it is much harder to pull a given amount of heat from a smaller die than from a larger one: the smaller the die, the hotter the "hotspots". I would expect the same cooling solution you use on a current Zen+ design to be fine, unless AMD goes "balls to the wall" and releases a 125-150W TDP SKU, which I do not foresee happening.
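To put numbers on the hotspot point, a quick power-density comparison with round, illustrative die areas and TDPs (not actual Zen/Zen 2 specs):

# Same watts through less area = higher power density and hotter hotspots.
designs = {
    "Zen+ monolithic (~213 mm^2)":  (105, 213),
    "two 7nm chiplets (~150 mm^2)": (105, 150),
}
for name, (tdp_w, area_mm2) in designs.items():
    print(f"{name}: {tdp_w / area_mm2:.2f} W/mm^2")
# ~0.49 vs. ~0.70 W/mm^2 at the same 105W TDP: the shrink concentrates heat.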
 