From ATI to AMD back to ATI? A Journey in Futility @ [H]

You completely forget cost, TDP and reason yet again.

If it were all that easy, we would be sitting with very different PCs than we do today.
 
You're presenting a false dichotomy. How so? It won't give the same amount of bandwidth; it'll give you more. And it'll fit onto the package. GDDR5X, and probably GDDR6, won't fit on the package and won't offer the same amount of bandwidth. It won't even be close if you tried. You could fit maybe 3-4 channels of GDDR5X or 6 on package if you really stretched and blew out the packaging size. How much bandwidth is that? Crap compared to HBM. They're not going to put it on the mobo and somehow tie it to the iGPU either; the CPU package pins aren't there. That's part of the same reason they don't do 4 channels of DDR4.
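To put rough numbers on that, here's a quick Python sketch of the aggregate bandwidth. The per-pin rates and channel/stack counts below are my own illustrative assumptions, not vendor specs:

# Rough aggregate-bandwidth sketch: on-package GDDR5X channels vs. HBM2 stacks.
# Per-pin rates and channel/stack counts are illustrative assumptions only.
def bus_gb_per_s(data_pins, gbps_per_pin):
    # total bandwidth in GB/s = data pins * per-pin rate (Gb/s) / 8 bits per byte
    return data_pins * gbps_per_pin / 8

gddr5x_4ch = bus_gb_per_s(4 * 32, 12)     # assumed 4 x 32-bit channels at 12 Gbps -> ~192 GB/s
hbm2_2stack = bus_gb_per_s(2 * 1024, 2)   # assumed 2 x 1024-bit stacks at 2 Gbps  -> ~512 GB/s

print(f"4-channel GDDR5X on package: ~{gddr5x_4ch:.0f} GB/s")
print(f"2-stack HBM2 on package:     ~{hbm2_2stack:.0f} GB/s")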


It will give you more at a higher cost though, and if that extra bandwidth isn't usable, then it's not cost-effective and won't be adopted.
I gave a 1-3 year timeline earlier for a reason. I haven't been posting in this thread for that long. If you're not going to read my posts and consider the thread context, please don't reply further.

That is too short a time frame to see that kind of change, because there are more cost-effective ways of getting what they need.

What other tech? GDDR6 is the only other real alternative if you want more bandwidth than what GDDR5X will offer.

Was GDDR5X even on the radar when HBM1 was being deployed?

Why did GP102 use GDDR5X with a 384-bit bus vs going with HBM2? If it's cost effective on a $1200 card, you'd better believe it will have even more impact on lower-end cards, right? At $1200 a pop, margins should be more than enough to cover the extra cost of HBM2..... An added benefit would have been that nV would not have needed to spend $20 million or more to tape out GP102.
 
You cannot directly compare how many parts have sold with HBM/HBM2 vs GDDR5/5X and say "the HBM/HBM2 numbers are nothing but a PR gimmick." They have not been out nearly as long, so they will only account for a small part of a company's balance sheet until (and if) more and more of them are used in place of "standard" GDDR5/5X/6 or whatever. That time is not here yet. And if HBM were truly a "dead end," AMD/SK Hynix/Samsung/Micron/JEDEC would not keep pursuing speed/spec optimizations for HBM2-3 and beyond; they would have dropped it for something else. Pretty sure no high-tech company likes to chase multi-million dollar "pranks."



I agree with most of what you stated outside of this. What Shitai is saying is that because it was not usable in the real world to give benefits over its competition, they just used it as PR fluff to show that something they did was better, but the end results didn't show that. When AMD started touting HBM for Fiji, many people were looking at its bandwidth and saying it was going to crush the Titan and later the 980 Ti, all because of its bandwidth. That was really a total misunderstanding of how AMD communicated their marketing, and to AMD's benefit, their marketing was geared toward people misunderstanding it.
 
It's not misleading at all.
When you know the GDDR6/GDDR5X bus will never be wide enough to offer the same or better bandwidth, yet you try to construe otherwise by constantly hyping "HIGHER PER PIN BW WOOOO", then yes, it is.

If anything the latency for the transfer is higher on HBM after the requested memory gets delivered.
Based on what? HBM is wide and fast remember. And latency is a tricky thing when issues like error correction come into play. GDDR in general is known for being fairly high latency.

1 pin at 10 Gbps or 10 pins at 1 Gbps. Exactly the same.
But the total bus bandwidth for GDDR5X/6 and HBM will be very different. That is where you're being facetious and how you're attempting to misrepresent things. As the end user, all that will matter is total bus bandwidth. Per-pin bus specs are technical minutiae.
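A minimal sketch of that point, with illustrative numbers only (the GDDR and HBM figures below are assumptions, not spec values):

# Per-pin rate alone tells you nothing; what the end user sees is pins * rate.
def total_gb_per_s(pins, gbps_per_pin):
    return pins * gbps_per_pin / 8  # GB/s

assert total_gb_per_s(1, 10) == total_gb_per_s(10, 1)  # 1 pin @ 10 Gbps == 10 pins @ 1 Gbps

# What differs in practice is bus width: an assumed 256-bit GDDR bus at 12 Gbps/pin
# vs. an assumed 2048-bit (2-stack) HBM bus at 2 Gbps/pin.
print(total_gb_per_s(256, 12), total_gb_per_s(2048, 2))  # 384.0 vs 512.0 GB/s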

Some things never become cheap enough for those segments. Plenty of examples in history.
And some things get a lot cheaper over time. Plenty of historical examples of that too. And there is plenty of reason to believe that HBM will be able to scale down in cost.

Just as people don't buy iGPUs for their graphics performance.
Have you already forgotten the Iris Pro? It wasn't high-performing vs a dGPU, but for the mobile segment it was quite impressive for an iGPU vs other discrete mobile solutions. The eDRAM cache it used wasn't even that big of a chunk of memory either. Again, the future won't be exactly like the past.
 
When NVidia has HBM on their gaming cards, the naysayers here will say HBM was the second coming with which NVidia blessed the world :angelic:. HBM advantages: smaller package, less latency, great bandwidth, lower power per a given bandwidth, makes AMD look good :troll:.
 
Well, it's about money and what these companies can get for less while delivering the same performance. For end users it doesn't matter, as long as the performance, power, whatever, is competitive with other products.

Look, if AMD comes out with an APU with HBM, even low-cost HBM, and goes up against an Intel part using DDR5 at the time, which company is going to get better margins?

Now, we can say HBM will give lower latency than DDR5, but latency can be hidden, and that is what iGPUs and APUs have been doing so far.
 
When you know the GDDR6/GDDR5X bus will never be wide enough to offer the same or better bandwidth, yet you try to construe otherwise by constantly hyping "HIGHER PER PIN BW WOOOO", then yes, it is.

How fast is 16 Gbps GDDR6 on a 512-bit bus? And 256-bit?


Based on what? HBM is wide and fast remember. And latency is a tricky thing when issues like error correction come into play. GDDR in general is known for being fairly high latency.

But the DRAM only operates at 500 MHz.


But the total bus bandwidth for GDDR5X/6 and HBM will be very different. That is where you're being facetious and how you're attempting to misrepresent things. As the end user, all that will matter is total bus bandwidth. Per-pin bus specs are technical minutiae.

Again, how fast is a 16 Gbps 512-bit GDDR6 bus? And 256-bit?


And some things get a lot cheaper over time. Plenty of historical examples of that too. And there is plenty of reason to believe that HBM will be able to scale down in cost.

HBM has everything against it due to static cost modifiers: interposer, TSVs, manufacturing loss, etc. And this is why some segments are simply out of reach.

Have you already forgotten the Iris Pro? It wasn't high-performing vs a dGPU, but for the mobile segment it was quite impressive for an iGPU vs other discrete mobile solutions. The eDRAM cache it used wasn't even that big of a chunk of memory either. Again, the future won't be exactly like the past.

But people didn't buy it. And it's gone with Cannon Lake. Now take a wild guess what would happen to your HBM parts.
 
HBM did nothing for last gen; GCN couldn't utilize all that extra bandwidth.

Unless you gamed at 4k or probably 1440p. My personal experience trumps your personal opinion in this any day of the week.
 
Why did GP102 use GDDR5X with a 384-bit bus vs going with HBM2? If it's cost effective on a $1200 card, you'd better believe it will have even more impact on lower-end cards, right? At $1200 a pop, margins should be more than enough to cover the extra cost of HBM2..... An added benefit would have been that nV would not have needed to spend $20 million or more to tape out GP102.

The elephant in the room!

While HBM may in time enter cards like the 102 and 104 series, and maybe the 106 and Polaris 10 type series if you're really pushing it, there is still a long way to go before it reaches iGPUs and lower-end dGPUs.

When NVidia has HBM on their gaming cards, the naysayers here will say HBM was the second coming with which NVidia blessed the world :angelic:. HBM advantages: smaller package, less latency, great bandwidth, lower power per a given bandwidth, makes AMD look good :troll:.

Congratulations, you completely missed everything, as hard as that must have been :)
 
Unless you gamed at 4k or probably 1440p. My personal experience trumps your personal opinion in this any day of the week.


It wasn't HBM that gave those benefits, because the moment you turn on AA and AF at those settings you see a much larger hit than expected, which means bandwidth isn't causing the differential. And I can show you overclocking results from different websites, comparing GPU vs memory overclocking, which show that too.
 
It will give you more at a higher cost though, and if that extra bandwidth isn't usable, then it's not cost-effective and won't be adopted.
How much higher though? If you're going to assume HBM costs won't ever drop much, then sure, it won't ever make sense, but there is no reason to believe that will be true. There also isn't any reason to believe the iGPU won't be able to use the extra bandwidth. They're already highly bandwidth-starved and always have been. AMD and Intel are already perfectly willing to dedicate ever more die space to the iGPU too.

That is too short a time frame to see that kind of change, because there are more cost-effective ways of getting what they need.
The first consumer GPUs to use HBM came out in mid-2015 and cost a lot more than they do now. They're also no longer hard to find like they were at launch. You don't think we've seen huge improvements over a relatively short amount of time here?

Was GDDR5X even on the radar when HBM1 was being deployed?
Of course. It was probably in development for a long time before it started getting press on enthusiast sites or was ratified by JEDEC.

Why did GP102 use GDDR5X with a 384-bit bus vs going with HBM2?
Ask Nvidia. The first thought that comes to mind is profit margins though. They're like Intel in that they'll do the minimum to improve their products if they can keep their profit margins or increase them. Any business is these days.
 
Ask Nvidia. The first thought that comes to mind is profit margins though. They're like Intel in that they'll do the minimum to improve their products if they can keep their profit margins or increase them. Any business is these days.

Or simply the benefits weren't there.
 
I'm wondering what the extent of the deal is. Is it something like what they had with Nvidia, or are we actually going to see an Intel CPU with an AMD GPU?
 
How much higher though? If you're going to assume HBM costs won't ever drop much, then sure, it won't ever make sense, but there is no reason to believe that will be true. There also isn't any reason to believe the iGPU won't be able to use the extra bandwidth. They're already highly bandwidth-starved and always have been. AMD and Intel are already perfectly willing to dedicate ever more die space to the iGPU too.

HBM costs will drop, but there is other tech that provides more bandwidth than what is used now, at lower cost than HBM. If the need is there then HBM will be adopted; otherwise the other tech is good enough in the short term. And 1 to 3 years is just one generation of CPUs/iGPUs/APUs; do you think one generation is enough?


The first consumer GPUs to use HBM came out in mid-2015 and cost a lot more than they do now. They're also no longer hard to find like they were at launch. You don't think we've seen huge improvements over a relatively short amount of time here?

Yet AMD (Raja) stated a quarter ago that they still haven't recovered the cost of creating the manufacturing pipeline for HBM?

Of course. It was probably in development for a long time before it started getting press on enthusiast sites or was ratified by JEDEC.

GDDR5X's first announcement of mass production was ahead of schedule by one quarter, which happened in the first quarter of this year. Tech like RAM takes about a year to bring to production, so they started around the beginning of 2015. HBM was in the works from 2010.


Ask Nvidia. The first thought that comes to mind is profit margins though. They're like Intel in that they'll do the minimum to improve their products if they can keep their profit margins or increase them. Any business is these days.


And you don't think AMD thinks about the bottom line too?
 
How fast is 16 Gbps GDDR6 on a 512-bit bus? And 256-bit?
1 TB/s on a 512-bit bus and 512 GB/s on a 256-bit bus. We both know you'll probably never see a 512-bit GDDR6 bus though, so stop being facetious. 256-bit or 384-bit are the most likely implementations. And that is really fast now, but in 2018? Against HBM2 or HBM3? Or even a cost-oriented HBM? Come on.

But the DRAM only operates at 500 MHz.
Which is pretty fast, so what is your point?

Again, how fast is a 16 Gbps 512-bit GDDR6 bus? And 256-bit?
Why don't you compare that to the 2 TB/s of HBM2 or the 4 TB/s+ of HBM3 (which will do more than an 8-stack, remember)? How do per-pin bandwidth numbers matter so much when the peak bandwidth numbers are so different, again?
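Running the same arithmetic on the figures in this exchange (per-pin rates and stack counts as assumed here, not taken from spec sheets), as a quick sanity check:

# Aggregate bandwidth for the configurations discussed above; assumed figures only.
def total_gb_per_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8  # GB/s

print(total_gb_per_s(512, 16))       # GDDR6, 16 Gbps/pin, 512-bit: 1024 GB/s (~1 TB/s)
print(total_gb_per_s(256, 16))       # GDDR6, 16 Gbps/pin, 256-bit: 512 GB/s
print(total_gb_per_s(8 * 1024, 2))   # HBM2, 2 Gbps/pin, 8 stacks: 2048 GB/s (~2 TB/s)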

HBM has everything against it due to static cost modifiers: interposer, TSVs, manufacturing loss, etc. And this is why some segments are simply out of reach.
But those have all improved massively just over the last year and a half. You have to show that those can't improve at all, or only very little, to make your point.

But people didn't buy it. And it's gone with Cannon Lake. Now take a wild guess what would happen to your HBM parts.
What? Tons of people bought it. Intel stopped making it; people didn't stop buying it. Especially in enthusiast circles, once it was found that the eDRAM cache improved general gaming performance a nice amount even if you weren't using the iGPU.
 
I'm wondering what the extent of the deal is. Is it something like what they had with Nvidia, or are we actually going to see an Intel CPU with an AMD GPU?


My understanding is it's kind of like what they had with nV, but because AMD's IP for memory systems is different (nV's deal didn't include this at that time), there are other benefits for Intel.
 
And 1 to 3 years is just one generation of CPUs/iGPUs/APUs; do you think one generation is enough?
If we keep seeing improvements continue at their current rate? Sure.

Yet AMD (Raja) stated a quarter ago that they still haven't recovered the cost of creating the manufacturing pipeline for HBM?
Which has nothing to do with what I said. It's quite possible for AMD to get costs down a lot and improve manufacturing volumes but still have financial issues due to mediocre sales + high development costs. You know this. You have to show that manufacturing costs and production volumes haven't improved much, if at all.

GDDR5X's first announcement of mass production was ahead of schedule by one quarter, which happened in the first quarter of this year. Tech like RAM takes about a year to bring to production, so they started around the beginning of 2015. HBM was in the works from 2010.
Development time has nothing to do with production ramping or product announcements. You know this too.

And you don't think AMD thinks about the bottom line too?
Did you not read this part? "Any business is these days." If you're not going to read my posts, or worse, selectively read them then what the heck is the point of posting at all? This is getting ridiculous!
 
If we keep seeing improvements continue at their current rate? Sure.

Err next gen products are here right now.....

Which has nothing to do with what I said. It's quite possible for AMD to get costs down a lot and improve manufacturing volumes but still have financial issues due to mediocre sales + high development costs. You know this. You have to show that manufacturing costs and production volumes haven't improved much, if at all.

Manufacturing and production volume: if yields were low, then they would improve in this short time (one year), but it is well known that they will not go into mass production unless they have acceptable yields. So no, cost is not going to go down much in a one-year time frame.


Development time has nothing to do with production ramping or product announcements. You know this too.

Development time for GDDR5X was about one year, but it wasn't talked about until much later.

Development time for HBM was five years, and it was talked about since its inception.

Did you not read this part? "Any business is these days." If you're not going to read my posts, or worse, selectively read them then what the heck is the point of posting at all? This is getting ridiculous!

But it all goes back to: is it cost effective when the need isn't there? No, it's not, and since next-gen products are coming out now for iGPUs and APUs, we won't see that in the 3-year life span of those products.
 
As far as this deal is concerned, is AMD licensing their actual graphics IP to Intel (such as GCN cores, Video encode/decode blocks, etc.), or are they just giving Intel a license to develop their own IP that would be covered by AMD's patents? I think I would lean toward the latter as Intel has probably run up against a patent wall in developing much of their graphics IP. If they no longer have to deal with patents as such there are probably a number of ways they can improve their own IP. Kyle, do you have more information regarding this?
 
Err next gen products are here right now.....
Which has what to do with costs coming down quite a bit on the Fury/X + improved availability of those same cards? Since when are lowered costs + improved availability not a sign of improvements in manufacturing and lowered production costs?

So no, cost is not going to go down much in a one-year time frame.
It does with any other fabrication process. There are some unique steps involved for HBM but some of the more expensive parts (the interposer itself) will get cheaper over reasonable time scales.

Development time for GDDR5X was about one year, but it wasn't talked about until much later. Development time for HBM was five years, and it was talked about since its inception.
OK, but how does this matter much? One was an iteration/evolution of an existing tech and the other was something entirely new. And we already know what is coming for the next few years. It's highly unlikely something new will pop out of nowhere, or that GDDR has much more scaling room left. If you want to go into fantasy land and do 'what ifs' based on nothing, then sure, you can create any scenario you want. But then what is the point? I can do the same and we can go back and forth and it'll all be BS.

But it all goes back to: is it cost effective when the need isn't there? No, it's not, and since next-gen products are coming out now for iGPUs and APUs, we won't see that in the 3-year life span of those products.
No it doesn't. Of course there is need there. By and large the PC community doesn't want to buy dGPUs. Only enthusiasts do that, for the most part. And there have been persistent rumors AMD will do an HBM APU using Zen for Raven Ridge in late 2017.

Also, you're just going to totally ignore that you misread/selectively read my comment or what? Blah forget it.
 
As far as this deal is concerned, is AMD licensing their actual graphics IP to Intel (such as GCN cores, Video encode/decode blocks, etc.), or are they just giving Intel a license to develop their own IP that would be covered by AMD's patents? I think I would lean toward the latter as Intel has probably run up against a patent wall in developing much of their graphics IP. If they no longer have to deal with patents as such there are probably a number of ways they can improve their own IP. Kyle, do you have more information regarding this?
Who knows... perhaps they just use AMD as a patent shield. Or they could be using GCN. Intel did mention a while back that they would support FreeSync; I wonder if they've known about the deal for a while. But yeah, I would like to know the answer to your question as well.
 
Which has what to do with costs coming down quite a bit on the Fury/X + improved availability of those same cards? Since when are lowered costs + improved availability not a sign of improvements in manufacturing and lowered production costs?

They haven't gone down though; you can see AMD's margins haven't improved...

It does with any other fabrication process. There are some unique steps involved for HBM but some of the more expensive parts (the interposer itself) will get cheaper over reasonable time scales.

Nodes don't work that way; they level off, and the interposer is manufactured on 65nm, a node whose pricing leveled off years ago. Added to this, the labor cost of putting the interposer and components together won't change much either; this is a skilled job and it can't be done solely by robots.

OK, but how does this matter much? One was an iteration/evolution of an existing tech and the other was something entirely new. And we already know what is coming for the next few years. It's highly unlikely something new will pop out of nowhere, or that GDDR has much more scaling room left. If you want to go into fantasy land and do 'what ifs' based on nothing, then sure, you can create any scenario you want. But then what is the point? I can do the same and we can go back and forth and it'll all be BS.

I'm not saying 'what if'; I'm saying there will be other tech that scales better from a price/performance standpoint. We have already seen it happen and we will continue to see it.

No it doesn't. Of course there is need there. By and large the PC community doesn't want to buy dGPUs. Only enthusiasts do that, for the most part. And there have been persistent rumors AMD will do an HBM APU using Zen for Raven Ridge in late 2017.

Also, you're just going to totally ignore that you misread/selectively read my comment or what? Blah forget it.

PC community vs gaming community. Why does the PC community need more performance? Is there a need for the general PC community to upgrade a computer because of an APU or iGPU? Did we see that as a driving factor for the general consumer? To this day we haven't. Perfect example: Apple. They have shit graphics, yet their sales improved while others declined. Most of their sales come from computers with iGPUs, not discrete.

It wasn't the graphics that caused that.

Most people don't buy computers to play games at higher settings year in and year out. Companies don't need more graphics power for most of their computers. Most of them don't even need a proper high-performance iGPU or APU. On my floor, where we don't have any artists or programmers, we have no computers with any need for GPU performance; that is like 300 computers right there. You can't generalize the need for high performance, even with an iGPU or APU, to the average consumer, because they don't need it. All they need is something that can run the Windows desktop, which doesn't require more than what is currently out there; for that matter, iGPUs and APUs that came out 10 years ago are enough.
 
Some of you really have way too much time to multi-quote and explain everything lol. I have a limit of about 2 short paragraphs... I always hated economics.
 
Some of you really have way too much time to multi-quote and explain everything lol. I have a limit of about 2 short paragraphs... I always hated economics.

Come on, lol. Actual econ courses taught by "Ben Stein voice" professors were the ones we all used to prop our textbooks up in front of our faces for, pretending to read/listen while we were busy snatching some sleep (hangovers/overnight study/gaming), doing work for another class, or playing games on a personal laptop. (Yes, lots of witnessing/experience) :D

There may be nitpicking here, but it's been pretty civil thus far. Neither too boring nor too out of hand. And the info is pretty interesting, for the most part.
 
I'm wondering what the extent of the deal is. Is it something like what they had with Nvidia, or are we actually going to see an Intel CPU with an AMD GPU?

The cool thing is that it does benefit AMD, and that I can get behind. :) The same folks are here bashing AMD as in every other AMD thread, video or CPU.
 
I have to admit, though, that every time someone brings up HBM APUs I can only laugh. I mean, an HBM APU for HPC may make sense, but then again, the Koenigsegg Regera makes sense too. For mainstream PCs? That's like slapping 700 bhp electric motors into a Prius.
 
As far as this deal is concerned, is AMD licensing their actual graphics IP to Intel (such as GCN cores, Video encode/decode blocks, etc.), or are they just giving Intel a license to develop their own IP that would be covered by AMD's patents? I think I would lean toward the latter as Intel has probably run up against a patent wall in developing much of their graphics IP. If they no longer have to deal with patents as such there are probably a number of ways they can improve their own IP. Kyle, do you have more information regarding this?
To my understanding patents for current technology are going to be extended and AMD teams will be working on tech to be used in future Intel products. I do not think it is an either/or situation.
 
Most likely it's not the GPU cores they would be using; just like with nV's contract, Intel used only the IP they needed to make sure they don't get dragged into court again.
 
Most likely it's not the GPU cores they would be using; just like with nV's contract, Intel used only the IP they needed to make sure they don't get dragged into court again.

Yep. Also gives Intel's iGPU team more flexibility by not being IP-restricted.
 
Most likely it's not the GPU cores they would be using; just like with nV's contract, Intel used only the IP they needed to make sure they don't get dragged into court again.
I would agree, except Kyle said this:
AMD teams will be working on tech to be used in future Intel products
Cheers
 
Intel has been making huge strides with their iGPU tech the past few years. It's hard to believe they are going to give up on it already.

I'm also wondering if it's possible that Intel only wants a certain portion of the technology and not actually AMD GPUs.
 
This actually makes a ton of sense from both sides. Maybe the APU market isn't going to be worth it with the die change, so they want to leverage things, or it would be worth more money to license it out than to make your own, or... there are a lot of things that make total sense for AMD here. They have technology Intel doesn't, and Intel wants it.
 
Well, that is like putting the final nail in AMD's coffin. The GPU in AMD's APU was the only thing that made AMD's processors better than Intel's.
 
Makes you wonder if this is why AMD split their CPU and GPU divisions, to keep Intel happy. I guess this means Intel is still going to waste more than half the CPU area on something we enthusiasts will never use (Or sell you a chip that costs the same to make without wasted space but 1-2 generations out of date for $1,000+).

I bet this will make for some more compelling small form-factor notebooks. I do feel bad for the Intel GPU guys; the Skylake integrated GPU is actually fairly competent, finally.
 
I was wondering why my AMD stock jumped up so high. I was pretty surprised to see Hardforum linked in a Forbes article quoting Kyle himself.
 