AMD HBM High Bandwidth Memory Technology Unveiled @ [H]

Will HBM have high performance gains( as in FPS)? 390x 8gb doesn't seem worth it over a 970 unless its not a rebrand.
 

It is a 290X. The 290X was about as powerful as a 970; the 390X has double the RAM and has been tweaked for higher clocks, so a 390X will beat a 970.

Currently only the Fury has HBM as an option, and it is within the margin of error of the Titan X.
 

So kind of you to give a proper reply.
You are a better man than me.
I bow in honor.
 

Damn, I didn't know HBM was only for the Fury. Oops.
 

It really shouldn't have been. HBM should have been the feature of the X designation: the R9 370X, R9 380X, and R9 390X should all have been HBM-equipped, but since they are rebrands I don't think they could run HBM... In the 300 series as I envision it, there would have been four Fiji cores: small, medium, large, and extreme. Any rebrands would have gotten non-X, R7, or R8 designations aimed at budget/OEM, while the 370X, 380X, 390X, and Fury X would have been new silicon for top-tier gamers.
 

That definitely seems a lot more appealing. I wonder why they went for another rebranding of the same chip.

Will be interesting to see how this turns out for AMD financially.
 

In theory my scheme would also allow each X designation to be designed to be faster than the corresponding Nvidia tier... my pricing would also be pretty mean to big green.

Unfortunately the CEO running AMD is not going to be aggressive; she is going to be frugal and defensive, where I would gamble it all on Fiji and Zen... I would also probably have killed the construction cores sooner and had Zen out this year, or at least had Carrizo with HBM, and made sure I had a flagship APU with every OEM...
 
Lunas--there is no pragmatic reason to tear apart a working architecture (the 2XX->3XX chips) to go to HBM unless you go whole hog. If it makes $$ sense for AMD/Nvidia/et al. to go to an interposer-based architecture in their lower-grade products, we'll see that happen over the next few development cycles.
 
After the AMD rep confirmed that the Fury X will only have HDMI 1.4a, not HDMI 2.0, I decided to go with a 980 Ti.

Too bad :(
 

And what does that have to do with HBM?
 

Tangentially related, as this thread discusses AMD's upcoming GPUs with HBM, and this is a comment on those GPUs' features.

Essentially, I'm bringing up a point people reading this thread and anticipating Fury may need to be aware of.

Conversations always have tangents. A laser focus on topic is neither feasible nor entirely desirable; we are all here to collect information, and this is relevant information.

It is reasonably related.

Who elected you the "on topic" police? :p
 

But the way AMD is hemorrhaging money, will they last more than a few more dev cycles? While yes, all three camps are pretty stagnant right now: Intel has done little more than improve IPC and catch up on the GPU side, Nvidia has done little more than IPC gains over previous-gen cards, and AMD has done nothing but refresh for the past four years, or three card generations... Bottom line is AMD needs to operate in the black, and this strategy is not going to accomplish that...

I wanted to buy an FX-8800-based laptop, then I saw the benches and the dGPU the OEMs were planning on pairing with it... The gains over an FX-7600 were very small, and only on the multi-core tests. Nothing but power savings were gained from the shrink and optimization; Excavator is bringing nothing, and the only notable feature was H.265 decoding...

On top of that, there is no OEM presence; every OEM favors Intel/Nvidia so heavily that the only chance AMD would have is to buy its way into the market space...
 
^ What does any of that have to do with HBM? I also would love for huge innovation and massive cost drops, but I try to live in reality, rather than telling companies whose internal state I know nothing about how to run their business (which is probably a fair bit of why they ended up in their present predicament). :D

There are certain architectural and systems benefits to HBM, especially around the memory controller, sure, which I'm sure allows them to arrange the system in a smarter way (but with no guarantee that they'll succeed in doing so, either). Those interposers cost $$, too, so it needs to make sense over continuing to use GDDR5 on their non-flagship products.

If you don't have a revised GCN 1.2-1.3 design to move throughout the product portfolio (and they're still parked on a 28 nm process), then it makes a whole lot of sense to continue building around these rather mature chips, where everything around fabrication is well established (see: cost structure and living in the red). Alls you can do is alls you can do.
 
UMM

Both AMD and Nvidia have released drivers that have given over 30% performance gains.

They weren't being honest, but rather making the statement based on others claiming the Fury was as good as it will get.

Anyway for HBM I saw this:

To substantiate my comment about driver trickery, this is a quote from TechReport's HBM article:

"When I asked Macri about this issue, he expressed confidence in AMD's ability to work around this capacity constraint. In fact, he said that current GPUs aren't terribly efficient with their memory capacity simply because GDDR5's architecture required ever-larger memory capacities in order to extract more bandwidth. As a result, AMD "never bothered to put a single engineer on using frame buffer memory better," because memory capacities kept growing. Essentially, that capacity was free, while engineers were not. Macri classified the utilization of memory capacity in current Radeon operation as "exceedingly poor" and said the "amount of data that gets touched sitting in there is embarrassing."

Strong words, indeed.

With HBM, he said, "we threw a couple of engineers at that problem," which will be addressed solely via the operating system and Radeon driver software. "We're not asking anybody to change their games.""


Certainly speaks to why we have seen VRAM usage grow in recent months, maybe years, though it seems more drastic this past year. Of course, I think Windows 10 will be where these drivers land.
 
I read somewhere AMD was in the process of completely rebuilding its drivers from the ground up... If that is true, the next set of drivers they issue should have a huge performance boost.
 
So we need to wait for a new series of cards every time to get working drivers?
 
An interesting thing: if we take CPUs as an example, when a new CPU architecture is made, say by Intel, only about 20% of the software code for it is reusable from the previous generation if they are lucky, and that's if the new processor is a mid-generation update. For GPUs I'm sure it's even less than that. So "drivers from the ground up" kind of goes out the window; every generation of cards, the drivers have to be rewritten substantially.
 

I doubt they normally need to be rewritten to a large degree, but it has been so long, and there are so many issues and weak areas in AMD's drivers right now, that they need to go back, redo them all, and streamline them. They are still using ATI code, for crying out loud. I am just hoping that rumor was true and they really are rewriting them from the ground up, and that they don't screw any of us over by dropping support for the HD 7000 series and older, or even dropping support for anything but the 300 series and newer in these new drivers...

Due to the number of rebrands and DirectX 12 support, the new drivers they are supposedly working on need to support the HD 7XXX, 8XXX, RX 2XX, and RX 3XX series cards, or at least the 2XX and 3XX series cards...
 
For a new generation of architecture they will have to rewrite from the ground up. Outside of the rebrands, Fury would probably need a good deal of rewriting; even though it is architecturally similar, there are quite a few changes, so the rewrite has already been underway.
 
So is there any actual tangible benefit from HBM right now? The R9 Nano is supposed to have it also.
 

The Fury CF reviews have shown that its 4GB is not a limitation for 4K gaming, and with the insane amount of bandwidth, it will be utilized better in the future.
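A quick back-of-the-envelope on why that bandwidth is "insane" (a sketch using the commonly cited per-pin data rates, not numbers from the reviews): peak theoretical bandwidth is just bus width times per-pin data rate.

```python
# Peak theoretical memory bandwidth: bus width (bits) x per-pin data rate (Gb/s),
# divided by 8 to convert to GB/s.
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

fury_x = bandwidth_gbs(4096, 1.0)     # four 1024-bit HBM1 stacks at 1 Gb/s/pin -> 512.0 GB/s
r9_290x = bandwidth_gbs(512, 5.0)     # 512-bit GDDR5 at 5 Gb/s/pin -> 320.0 GB/s
gtx_980_ti = bandwidth_gbs(384, 7.0)  # 384-bit GDDR5 at 7 Gb/s/pin -> 336.0 GB/s
print(fury_x, r9_290x, gtx_980_ti)
```

HBM gets there with a massively wide, slow bus rather than a narrow, fast one, which is exactly why it needs the interposer.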
 
Personally I'm curious to see how AMD might integrate this technology into its CPUs or APUs and future console chipsets.
 
A further announcement from Samsung on the HBM DRAM modules is anticipated over the coming months.

This might suggest that it would work out that way, but how do you square this with current plans for AM4? I can see the benefits, and Samsung is happy to make hardware for others.

But the whole ecosystem changes so dramatically. If it wants all these things, it has to be dirt cheap, because none of the products would (or could) use this as expensive cache...
 
How is it possible to do HBM on a DIMM??

Let's do some math...
A regular DDR chip is 8 bits wide. They use 8 of them on a normal DIMM, thus a normal DIMM is 64 bits wide. (ECC uses a 9th chip, so 72 bits wide, but let's not worry about that right now.) It's going to take at least 2 pins per bit (differential signalling), plus possibly ground shielding on each side. Let's just low-ball it and say 2 pins per bit, assuming you could do it without a ground shield on each bit. That means a SINGLE HBM chip (1024 bits wide) would take 2048 pins JUST for the data. Then you also have control pins, power, etc. Current DDR4 DIMMs are 288 pins; it would take something like 2500 pins to do a DIMM with ONE memory module on it. That's almost 10x as many pins as a DDR4 DIMM... there is not enough room for that many connections on a DIMM.

This isn't passing the sniff test....
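The arithmetic above as a quick script (the overhead pin count is my own rough guess for power/ground/control, chosen just to land near the ~2500 estimate):

```python
# Hypothetical HBM-on-a-DIMM pin count, using the low-ball assumptions above:
# 2 pins per data bit (differential pairs, no per-bit ground shields).
HBM_STACK_WIDTH = 1024   # data bits on a single HBM stack
PINS_PER_BIT = 2         # differential signalling, low-ball
OVERHEAD_PINS = 452      # rough guess: power, ground, control, address
DDR4_DIMM_PINS = 288

data_pins = HBM_STACK_WIDTH * PINS_PER_BIT   # 2048 pins just for data
total_pins = data_pins + OVERHEAD_PINS       # ~2500 pins for ONE stack
print(total_pins, round(total_pins / DDR4_DIMM_PINS, 1))
```

Even with generous rounding in HBM's favor, one stack on a DIMM needs nearly 9x the pins of a whole DDR4 DIMM.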
 

Not sure if I am directly understanding your point, but it appears that it would be a whole new setup. I get your concern about the pins. Maybe we will get more info later this year with the Zen APUs.
 
I'm not saying it's complete bullshit, but yeah, it would have to bring along some other changes in tech, or something.
 

And Rome was not built in a day. If it is a practical solution then it is viable; if it is just some boasting for stockholders, then it is just that :).
 
Their links seem broken, but I think it was a translation issue, as Samsung never mentioned DIMMs, only system memory. That reads to me more like AMD bolted a Nano onto a Zen MCM, and suddenly there is HBM system memory through a high-speed link. DIMMs, even if you only used one of HBM's dual channels, would still be nearly a foot long with a tiny stack in the middle. The other option would leave it hilariously dense. And that says nothing about the motherboard needed to accommodate it. It'd be easier to make CPU-like sockets for them if you went that direction.
 
And Rome was not built in a day. If it is a practical solution then it is viable; if it is just some boasting for stockholders, then it is just that :).

Yeah, I know what you mean, but there are some real technical and physical limitations here. See below for more...


Their links seem broken, but I think it was a translation issue, as Samsung never mentioned DIMMs, only system memory. That reads to me more like AMD bolted a Nano onto a Zen MCM, and suddenly there is HBM system memory through a high-speed link. DIMMs, even if you only used one of HBM's dual channels, would still be nearly a foot long with a tiny stack in the middle. The other option would leave it hilariously dense. And that says nothing about the motherboard needed to accommodate it. It'd be easier to make CPU-like sockets for them if you went that direction.

Yeah, I'd have to agree; it was probably a misinterpretation on the part of the article's author. I mean, think about it: even if you had LGA sockets for memory (LGA sockets are very expensive, and this would raise mobo costs a LOT), you still need all those pins to go to the CPU itself as well. Currently the biggest common LGA socket is 2011 pins, and it isn't big enough anymore; we would need a super LGA socket... So, now let's assume this CPU has dual-channel HBM support. You could get 16GB max, so you would still need DDR4 as well. On a 2011-class platform 16GB is not enough, and this is going to be a massive socket, so it's definitely not going to be the mainstream/desktop socket. For two channels of HBM we need 1024 bits each, x2 for differential signalling, x2 for dual channel: that's 4096 pins. We are at 6107 and we haven't even added any control lines or anything like that. There are just so many reasons this isn't going to happen. I would imagine we will see a multi-lane, high-speed serial bus (like PCIe) used on DIMMs instead. That makes MUCH more sense for an application like this. Leave the extremely wide bus on the silicon interposer, where you have the capability to support that many pins. I mean, the skew you would have between the different bits of a 1024-bit-wide bus going across two connectors (CPU -> mobo -> RAM)... just no way.
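The socket arithmetic from that paragraph, as a sketch (same low-ball 2-pins-per-bit assumption, data pins only):

```python
# Dual-channel HBM hung off an LGA-style CPU socket, added on top of
# today's 2011-pin socket. Control, power, and ground pins not counted.
BITS_PER_CHANNEL = 1024
PINS_PER_BIT = 2          # differential signalling
CHANNELS = 2
LGA_2011_PINS = 2011

hbm_data_pins = BITS_PER_CHANNEL * PINS_PER_BIT * CHANNELS  # 4096
socket_pins = LGA_2011_PINS + hbm_data_pins                 # 6107
print(socket_pins)
```

Over 6100 pins before any control lines, which is why keeping the wide bus on the interposer and using a narrower serial link off-package makes more sense.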
 