From ATI to AMD back to ATI? A Journey in Futility @ [H]

For Vega they needed HBM; otherwise that card would have used over 500 W with GDDR5 and still not have been able to run high clocks.
Check out Buildzoid: he tested Vega and explained why it would not have worked with GDDR5.

Since when do you get large discounts when there is scarcity?

Do you understand how signed contracts work? You pay a set price no matter what, and sometimes there are even payment penalties if the manufacturer can't deliver enough parts. AMD may have paid more until Hynix could deliver the quantity needed, but many contracts have clauses on non-delivery by a certain date, which would mean Hynix had to pay the difference. We will never know, because we will never see the signed contract; AMD also helped develop HBM, which likely set prices far different from what Hynix would charge somebody else. The biggest problem with HBM is the complexity it added to the package, which even AMD struggled with at first. It's more expensive than GDDR, but not by as much as most think, and they kept GDDR on the lower-end stuff that sells in large quantities, like they should.
 
Those contracts are generally written up way before scarcity is known and the sales/procurement teams are optimistic.
There’s only a ~25 W difference between GDDR and HBM.
Well, don't believe me then; there are plenty of articles describing the problems with HBM2 availability at the time of Vega.
At higher memory clocks you burn through a lot more than 25 W, because the memory needs more voltage; even that is well documented.
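To make the scaling concrete, here is a rough back-of-the-envelope sketch. Dynamic CMOS power scales roughly with frequency times voltage squared (P ∝ f·V²), so the extra voltage needed for higher memory clocks compounds the cost. The baseline wattage, clocks, and voltages below are illustrative assumptions, not measured GDDR5 figures:

```python
# Rough illustration of why the GDDR-vs-HBM power gap grows at high clocks.
# Dynamic CMOS power scales roughly as P ~ f * V^2.
# All baseline numbers below are illustrative assumptions, not measurements.

def scaled_power(base_power_w, base_clock_mhz, base_v, clock_mhz, v):
    """Scale dynamic power with frequency and voltage squared."""
    return base_power_w * (clock_mhz / base_clock_mhz) * (v / base_v) ** 2

# Hypothetical GDDR5 subsystem: 30 W at 1750 MHz / 1.35 V (assumed baseline).
stock = scaled_power(30, 1750, 1.35, 1750, 1.35)   # 30.0 W
pushed = scaled_power(30, 1750, 1.35, 2000, 1.55)  # ~45.2 W

print(f"stock:  {stock:.1f} W")
print(f"pushed: {pushed:.1f} W (+{pushed - stock:.1f} W from clock and voltage)")
```

The exact numbers don't matter; the point is that a flat "~25 W difference" only holds at one operating point, and the gap widens quickly once clocks and voltage go up.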

Show me the contract, then; you were so giddy to suggest that I should show you that last time.
https://www.hardocp.com/news/2017/07/31/exclusive_rx_vega_interview_chris_hook
https://www.gamersnexus.net/guides/3032-vega-56-cost-of-hbm2-and-necessity-to-use-it

“The HBM2 memory is probably around $150, and the interposer and packaging should be $25.” We later compared this estimate with early rumors of HBM2 pricing and word from four vendors who spoke with GamersNexus independently, all of which were within $5-$10 of each other and Kanter’s estimate.
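For what it's worth, the arithmetic behind that estimate is easy to sanity-check. A minimal sketch, where the HBM2 and packaging figures come straight from the quote and the GDDR5 comparison figure is a purely hypothetical placeholder:

```python
# Sanity-checking the memory BOM figures quoted above (Kanter / GamersNexus).
hbm2_stacks = 150          # USD, quoted estimate for the HBM2 itself
interposer_packaging = 25  # USD, quoted estimate
hbm_total = hbm2_stacks + interposer_packaging  # 175 USD

# Hypothetical GDDR5 memory BOM for a comparable card; an illustrative
# placeholder for comparison only, NOT a figure from the article.
gddr5_assumed = 75

print(f"HBM2 + interposer/packaging: ${hbm_total}")
print(f"Premium over assumed GDDR5:  ${hbm_total - gddr5_assumed}")
```

Even against a generous GDDR5 assumption, the premium is real but nowhere near the whole board cost, which fits the claim above that HBM is more expensive than GDDR, just not by as much as most think.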
 



HBM is very low on power compared to GDDR5. It had to be, because the VEGA chip itself ate so much.
 

One of the things that many (especially the "VEGA" experts who do not own one and haven't spent a year learning/tweaking/pushing the arch to its limits) forget is that VEGA was supposed to launch with MUCH faster HBM2. SK Hynix's delays in producing what it promised, first speed-wise and then volume-wise, forced AMD's hand to go to Samsung. Why do you think EVERY single launch card had Samsung memory?

You can undervolt just the "memory" (which actually sets the lower Vcore level for the GPU) in WattMan and crank the HBM from the very low 800 MHz to 950 MHz on a stock 56, and see well over a 10-15% performance boost without touching the GPU core speed. This lets a 56, even with immature launch drivers, dominate a 1070 and beat a 1070 Ti. I realize that you can OC a 1070/Ti, but the point was to show what VEGA would have done "stock" with the memory it was supposed to ship with in the first place. Crank that HBM to 1100 MHz, giving VEGA 563 GB/s of memory bandwidth, and the performance increase continues to climb.
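The bandwidth figures in this post follow directly from Vega's 2048-bit HBM2 bus at two transfers per clock; a quick sketch of the math:

```python
# HBM2 bandwidth on Vega's 2048-bit bus: two transfers per clock (DDR).
BUS_WIDTH_BITS = 2048

def hbm2_bandwidth_gbs(mem_clock_mhz):
    """Effective bandwidth in GB/s for a given HBM2 memory clock."""
    transfers_per_sec = mem_clock_mhz * 1e6 * 2  # DDR: 2 transfers/clock
    return transfers_per_sec * BUS_WIDTH_BITS / 8 / 1e9

for mhz in (800, 950, 1100):
    print(f"{mhz} MHz -> {hbm2_bandwidth_gbs(mhz):.0f} GB/s")
# 800 MHz  -> 410 GB/s (stock Vega 56)
# 950 MHz  -> 486 GB/s
# 1100 MHz -> 563 GB/s (the figure cited above)
```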

Imagine if SK Hynix had been able to give AMD the HBM they actually ordered (so the 56 launched at 950 MHz and, say, the 64 at 1050 MHz) 14 months ago. AMD could have actually spent time binning VEGA so it launched with a much lower default Vcore, you would have still had the 1070/Ti being soundly beaten (like they are now) at launch week by an even larger margin, and the one "drawback" of power usage would be a nonstarter. The competition between SK Hynix and Samsung, combined with much better yields, would have allowed AMD to launch a SKU with 1050-1100 MHz HBM from the factory, giving VEGA yet another nice performance boost.


I really think Raja lost his job based on his over-promising on yet another big project that he could not deliver on. Granted, the HBM supply was not his "fault," but AMD (aka Raja) was aware in the months leading up to launch week, while they were stockpiling VEGA dies, that SK Hynix had nothing to give them. AMD really should have made this much clearer, but it's possible that SK Hynix paid an even higher penalty to "admit no fault" and not contest AMD moving to Samsung in the last 2-3 months before launch. The problem is that this further restricted supply of an already highly anticipated part (BEFORE the mining performance even hit the interwebz) and made everyone mad at AMD.

Raja really messed up by promising TBR (tile-based rasterization) and the primitive shader features that never made it into the cards. That 25-40% improvement would have given AMD the grand slam they needed (relative to the 2x+ performance increase over Fury) and a nice home run over Nvidia, despite the late launch.

I, and several other medium-to-large mining operations, considered a class-action suit against AMD for screaming from the rooftops about these features and the performance uplift they would bring, but the fact that VEGA made me so much money kept me from really following through with it.

AMD really should have been sued over it, as it is no different than what Nvidia did with the 970 by failing to disclose an engineering "issue" to its consumer base.
 

Imagine if AMD pulled this off, and Nvidia just increased the performance of the GP104 in the above GPUs?
 

How are they going to do that when Pascal SKUs already run at close to their maximum boost clocks? This wasn't about saying AMD would destroy Nvidia, ha! I was making a point about why VEGA suffered due to various factors. If what I mentioned above had occurred, we would have had a nice price war, and the consumer would have won. Who could argue with that?
 
Well, Pascal can run better with better cooling, including the boost clocks. It would have meant building the parts a little stronger, of course, and non-mining-craze hypothetical pricing might have been under control, especially given that AMD might have had more flexibility in their margins.

But the overall point is that Vega suffered due to SK Hynix's HBM failures, and part and parcel of that was AMD overestimating what SK Hynix could actually produce, meaning AMD gambled poorly. A GDDR5 implementation of Vega would have been acceptable, though it would have required more fine-tuning, and it would have paid off massively, especially considering the mining craze.

As for the other experimental technologies: that's just AMD (and ATi). They've been doing that since before they adopted the Radeon name. Out-of-band features rarely pay off on first release for any ecosystem-bound ASIC; you have to be Apple to do that well, and AMD is perhaps the opposite of Apple. They exert near-zero leadership in graphics technology.
 