HBM death nail? GDDR5X and 8Mb chips sampling

  • Thread starter Deleted member 93354
  • Start date

Deleted member 93354

Guest
Found on two other major tech sites... just Google GDDR5X Micron. The articles suggested a 2016 time frame.

In order to surpass the 8 Gbps barrier of GDDR5, without completely building a new memory technology from the ground up, Micron doubled the pre-fetch of GDDR5 from eight data words for each memory access, to 16 data words per access. Doing this resulted in 10 to 12 Gbps on the first generation, and the company expects to be able to surpass 14 Gbps, and perhaps reach 16 Gbps and beyond as the technology is refined.
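The prefetch-doubling described above can be sketched with toy arithmetic. The 1 GHz array clock below is illustrative, not from the article; the point is that the per-pin rate doubles without the memory array running any faster.

```python
def per_pin_rate_gbps(array_clock_ghz, prefetch):
    """Effective data rate per pin: each memory-array access fetches
    `prefetch` data words, which are then serialized onto the pin."""
    return array_clock_ghz * prefetch

gddr5  = per_pin_rate_gbps(1.0, 8)   # 8n prefetch  -> 8 Gbps/pin
gddr5x = per_pin_rate_gbps(1.0, 16)  # 16n prefetch -> 16 Gbps/pin
print(gddr5, gddr5x)  # 8.0 16.0
```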
 
I checked out one of the articles and saw no mention of power consumption. Isn't that one of the selling points of HBM/HBM2? Theoretically, the stacked memory design, slower speed, but mega parallelism, and proximity to the GPU core(s) is supposed to make HBM more electrically efficient, isn't it? Total board power is a concern for the industry moving forward, so advancements that free up more power budget for the core should be the way to go. If the GDDR5X consumes more power than current GDDR5, then we just end up with more power needing to be delivered and more heat to dissipate.
 

OMG, let me pay $100 more for less performance, and less memory, so I can save 20 Watts.

Sign me up spanky.

The numbers and features are all that matter in the high end segment. Board power doesn't mean squat if you don't have the numbers.

BTW: 8Gb chips cut in half the number of packages necessary on the PCB. This makes board design simpler.
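The halving claim is just capacity arithmetic; a quick sketch, using an illustrative 8 GB board as the example:

```python
def packages_needed(total_gbit, chip_density_gbit):
    # One package per memory chip: total capacity / per-chip density.
    return total_gbit // chip_density_gbit

# Illustrative 8 GB (64 Gbit) board:
four_gbit  = packages_needed(64, 4)   # 16 packages with 4 Gbit chips
eight_gbit = packages_needed(64, 8)   #  8 packages with 8 Gbit chips
print(four_gbit, eight_gbit)  # 16 8
```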
 

HBM actually has the potential to be cheaper than GDDR5, let alone GDDR6, especially on a whole board basis. I'm not saying GDDR6 isn't going to become popular (should such a thing ever be released), just that HBM is not all about power savings (although power savings = cost savings).
 
Even with the number of packages cut in half, they're still on the PCB taking up space and complicating circuit design when designing the boards.
 

You do realize that an interposer is just a fancy PCB where the memory sits close to the GPU? Oh and it has lots and lots of traces because it's a wide bus.
 
HBM actually has the potential to be cheaper than GDDR5, let alone GDDR6, especially on a whole board basis. I'm not saying GDDR6 isn't going to become popular (should such a thing ever be released), just that HBM is not all about power savings (although power savings = cost savings).

So far that is NOT proving out to be true. If HBM was supposed to offer twice the bandwidth at half the clock speed, and GDDR5X hits 16 Gbps, then HBM no longer has an advantage.
 
So HBM has been out on consumer boards for, what? A month or so? And you're predicting its death at the hands of a slight revision of the exact kind of tech it's meant to replace?

Holy sensationalism, Batman.

You don't think it's going to keep on improving just like GDDR?
 
HBM2 is coming on the big guns next year. Not GDDR5X. So, I dunno why you're claiming it's the end of HBM.
 
So far that is NOT proving out to be true. If HBM was supposed to offer twice the bandwidth at half the clock speed, and GDDR5X hits 16 Gbps, then HBM no longer has an advantage.

How is it not proving out? Currently HBM provides a 100% increase in bandwidth with 30-50% savings in power, so what isn't it proving? HBM2 will be out soon. BTW, HBM is only expensive because it's new; GDDR5X/6 will also be expensive when it comes out, and until it becomes mainstream it will remain that way.
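The bandwidth numbers being argued back and forth are easy to check: total bandwidth is bus width times per-pin rate. Fury X's 4096-bit bus at 1 Gbps/pin and the typical 384-bit GDDR5 card at 7 Gbps are real configurations; the 256-bit GDDR5X card is a hypothetical built from the 16 Gbps figure in this thread.

```python
def bandwidth_gb_s(bus_width_bits, per_pin_gbps):
    # Total bandwidth = bus width * per-pin rate, Gbit/s converted to GB/s.
    return bus_width_bits * per_pin_gbps / 8

hbm1   = bandwidth_gb_s(4096, 1)    # Fury X: 4096-bit at 1 Gbps    -> 512 GB/s
gddr5  = bandwidth_gb_s(384, 7)     # 384-bit GDDR5 at 7 Gbps       -> 336 GB/s
gddr5x = bandwidth_gb_s(256, 16)    # hypothetical 256-bit at 16 Gbps -> 512 GB/s
print(hbm1, gddr5, gddr5x)  # 512.0 336.0 512.0
```

So a mainstream-width GDDR5X bus at the top projected speed would indeed land in first-gen HBM territory, which is what both sides here are really arguing about.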
 
Well I guess both Jen-Hsun and Lisa must be retards for incorporating HBM2 in Pascal/Arctic Islands then. Who knew.
 
The AMD Fury proves that despite having twice the bandwidth it is still slower in the end. It is ROP limited, not memory bandwidth limited.

Given the HUGE price premium and reportedly low yields of HBM + interposers, it doesn't seem cost effective against future variants. HBM's only trump card was its bandwidth and space savings. And don't give me that power envelope crap. 20 watts less power would only be of concern to someone on a laptop.

Will HBM2 be able to blow GDDR5X out of the water? Doubtful. HBM2's main attack point was its memory density, not speed. But I'm speculating.

It seems HBM is a technology in search of a problem that isn't there... yet.
 
HBM isn't just a better RAM technology. It's a platform-level technology for AMD for both their GPU and their APU business. I wouldn't be surprised if you saw AMD selling HBM-equipped APU interposers with their next generation to speed up most workloads. It's like their Crystalwell.

Oh, and also, HBM is going to go much faster in future generations. It's only at 500MHz on Fury.
 
Well I guess this has been scrapped.

[attached image: ubSnxzK.jpg]
 
HBM is the obvious future.
Do not let Fury X's 4GB cap fool you.
That was AMD's mistake for launching HBM so early, but with Gen 2 HBM that cap will be a thing of the past.
 
You really are just hating on HBM; its advantages are more than just bandwidth and power usage. When it goes mainstream, board designs get more refined, and GPUs are finally on 14nm, the card sizes and the tech advancements in notebooks and the like have huge potential. GDDR5 has its place, but NVIDIA and AMD are not crazy when both of the only gaming GPU manufacturers are going that route. HBM2 will have nothing that matches its advantages, as simple as that. It saves GPU manufacturers a lot of cost in the long run due to card sizes, efficiency, and the potential it has for small form factor and even laptop gaming. HBM will be the future in graphics, maybe only top end for now, but it is indeed going to take over when it goes mainstream.
 
HBM is certainly the future IMO. There are huge advantages to be had having the memory right next to the die. Fury is just a first-generation example, and let's be honest, it's not all bad; I'll bet most of the performance shortcomings of Fury are due to the GPU itself and not HBM. HBM is probably the only reason it's as competitive as it is.
 
The AMD Fury proves that despite having twice the bandwidth it is still slower in the end. It is ROP limited, not memory bandwidth limited.

Given the HUGE price premium and reportedly low yields of HBM + interposers, it doesn't seem cost effective against future variants. HBM's only trump card was its bandwidth and space savings. And don't give me that power envelope crap. 20 watts less power would only be of concern to someone on a laptop.

Will HBM2 be able to blow GDDR5X out of the water? Doubtful. HBM2's main attack point was its memory density, not speed. But I'm speculating.

It seems HBM is a technology in search of a problem that isn't there... yet.

My friend, you are half right. With AMD it's not just ROPs, it's the architecture itself; it was built for DX12 from the beginning. And those NV fanboys who say async shaders won't be used should think again, because all AAA multi-platform games will use it; even the upcoming PS4 games are using async.
 
So I buy a slower card now in hopes that it will perform better in the future, when I will be ready to upgrade to another card anyway? lol

And wasn't that sort of the claim for the "8 core" AMD cpus.

AMD is always needing/wanting people to wait for a future that never seems to get here.
 
HBM seems so specialized. The interposer also takes slow, wide data and crams it into a faster, narrower bus, right? It's not just a PCB with routing, I think. Maybe I am wrong.
 
Hate to be that guy, but fuck it I'm going to be that guy.

It's death "knell", not nail.

Anyway, as you were.
 
HBM is still an emerging "next generation" memory standard.

HBM will have to coexist with GDDR5 (and, with this news, possibly GDDR5X) in at least the near to mid-term. Economically it does not seem like HBM will make sense for all products currently using GDDR5 (note that even DDR3 is currently used in place of GDDR5 in some cases due to economics). I assume that until the package cost (not just the memory but the interposer, assembly, etc.) scales down significantly, it wouldn't make sense to use scaled-down designs (e.g. 1 stack) for cost reasons, as your costs wouldn't scale down linearly (or close to it) as with GDDR5.

If you look at the Samsung HBM presentation for instance they don't roadmap and expect HBM to scale down to mainstream applications until 2018 (http://www.computerbase.de/2015-08/idf-2015-samsung-fertigt-high-bandwidth-memory-ab-2016/).
 
I feel like this is just a shot at AMD rather than a legitimate criticism of HBM.

Yes, it is currently more expensive, but that's not a result of HBM tech - that's a result of the size and yields of AMD's chips and HBM implementation, and of course production volume. GDDR5 is manufactured in far larger quantities.

HBM offers much less complicated board design, power savings, size reductions and improved bandwidth. GDDR5X only offers improved bandwidth, and possibly smaller board size.

It's nice but I don't see NVIDIA and AMD backtracking and scrapping HBM plans.
 
I would say it's less about yields, and more about economies of scale.
 
I meant the yields of AMD's GPUs not HBM.

AMD has to manufacture the GPU itself.

Then they have to manufacture the interposer which is HUGE, it's the maximum physical size supported by the node they are manufacturing it on, they literally cannot make it any larger.

Then they have to package the GPU and HBM onto the interposer.

AFAIK the yields on HBM are good, but as we can see by market availability there are yield problems with Fiji somewhere.
 
Same thing was argued when SDRAM first came to the consumer arena. HBM will eventually win... it just has too many "good points" that are achievable compared to the effort required for GDDR6 to achieve the same. My guess is in 3-4 years HBM will have a very strong foothold.
 

Yeah, this is just major fabs trying to add more value to their long-term GDDR5 manufacturing investment, and to prolong the use of GDDR5 on mid-range and low-end parts.

If they can deliver the speeds and densities promised, then the massively faster 14nm midrange generation can still be powered by a paltry 128-bit bus (OEMs can reuse their existing board designs). They can also ship with 4GB of RAM standard, using only 4 chips :D
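That 128-bit / 4-chip scenario checks out arithmetically, given that GDDR5-family chips use a 32-bit interface per chip. The 14 Gbps per-pin rate below is one of the speeds projected in this thread, not an announced product:

```python
chips = 4
bus_width = chips * 32           # four 32-bit GDDR5X chips -> 128-bit bus
capacity_gb = chips * 8 // 8     # four 8 Gbit chips -> 4 GB total
bandwidth = bus_width * 14 / 8   # at a projected 14 Gbps/pin -> 224 GB/s
print(bus_width, capacity_gb, bandwidth)  # 128 4 224.0
```

224 GB/s on a 128-bit bus is roughly what today's 256-bit GDDR5 midrange cards deliver, which is the board-reuse appeal.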
 
how about a Mule point?

no need to make an ass of yourself...
 
If you make an ass out of an ass's ass by kicking its ass, do you also make an ass out of yourself?

:D
 
No no, I'm pretty sure that you guys mean a Mutt Point, of course!


Now, kidding aside, by the time they even begin to show actual GDDR5X products next year there will be HBM2, which doubles the density and bandwidth of current HBM, which already dwarfs GDDR5. What this means is that, no, HBM will not die; not that anyone other than the OP thought such a thing.

Btw I would laugh at 8 Mb chips... pretty sure that you meant 8 Gb chips...

Also, GDDR5 came out 7 years ago, so the fact that they could create GDDR5X surprises no one; after all, they should have been working on that and more over the last 7+ years (since GDDR5 was finalized). Heck, I'm more surprised that no such thing popped up earlier, which may be the reason why HBM was created...
 
Sure that's not 8 GB (gigabyte)? Gb is gigabit.

HBM is GB per chip, DDR is Gb per chip.
 
HBM however, is not.

that was my point.

HBM1 is 1GB per chip/stack
 

It's 1 gibibyte per STACK. The chips are 2 gibibit density, which is half the density of mainstream GDDR5. But you'll have to search for "Giga" to find any articles about density, because old people like their borrowed nomenclature.

I assume they use Bytes because the chips are already delivered in a manufactured block (stack), similar to a DIMM? I don't know for sure how they assemble the whole interposer, but I can't imagine Hynix not selling a complete stack to AMD. Similarly, DIMMs are also sold in Bytes, because they are sold as a manufactured set of chips.

It's stupid logic I agree, but it's the way things work :D
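The Gb-vs-GB tangle above resolves with the stack arithmetic: first-gen HBM stacks four 2 Gbit dies, so a stack comes out to 1 GB, and Fury X's four stacks give its 4 GB total.

```python
dies_per_stack = 4
die_density_gbit = 2                              # HBM1: 2 Gbit per DRAM die
stack_gbit = dies_per_stack * die_density_gbit    # 8 Gbit per stack
stack_gbyte = stack_gbit // 8                     # -> 1 GB per stack
fury_x_total = 4 * stack_gbyte                    # 4 stacks -> 4 GB
print(stack_gbyte, fury_x_total)  # 1 4
```

So quoting HBM "per chip" in GB and GDDR5 in Gb is really stack vs. individual die, exactly as the post above says.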
 
OMG, let me pay $100 more for less performance, and less memory, so I can save 20 Watts.

Sign me up spanky.

The numbers and features are all that matter in the high end segment. Board power doesn't mean squat if you don't have the numbers.

BTW: 8Gb chips cut in half the number of packages necessary on the PCB. This makes board design simpler.

But you're going to want twice as much RAM.
 