Shintai
Supreme [H]ardness
- Joined
- Jul 1, 2016
- Messages
- 5,678
There is no difference at all. It's just a different name.
HBM2 will basically double the bandwidth offered by HBM1 – quite an impressive feat considering that HBM1 is already around 4 times faster than GDDR5. Not only that, but power consumption will be reduced by another 8% – on top of HBM1's existing 48% reduction over GDDR5. But perhaps one of the most significant developments is that it will allow GPU manufacturers to seamlessly scale vRAM from 2GB to 32GB – which covers pretty much all the bases. As our readers are no doubt aware, HBM is 2.5D stacked DRAM (on an interposer). This means that the punch offered by any HBM memory is directly related to its stack (number of layers).
The impact of memory bandwidth on GPU performance has been underrated in the past – something that has finally started changing with the advent of High Bandwidth Memory. Where HBM1 could go as high as a 4-Hi stack (4 layers), HBM2 can go up to 8-Hi (8 layers). The HBM present on AMD's Fury series is a combination of 4x 4-Hi stacks – each contributing 1GB to the 4GB grand total. In comparison, HBM2's 4-Hi stack will offer 4GB on a single stack – so the Fury X configuration repeated with HBM2 would actually net 16GB of HBM2 with 1TB/s of bandwidth. Needless to say, those are very nice numbers, both in terms of real-estate utilization and the raw bandwidth offered by the medium.
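The stack arithmetic above can be sketched in a few lines of Python. This is a rough sanity check, not official figures: the die densities and per-pin rates used are the nominal HBM1/HBM2 spec numbers, and shipping products may clock lower.

```python
def stack_stats(dies, die_gbit, pin_gbps, bus_bits=1024):
    """Capacity (GB) and bandwidth (GB/s) of a single HBM stack."""
    capacity_gb = dies * die_gbit / 8          # dies per stack x die density
    bandwidth_gbs = bus_bits * pin_gbps / 8    # 1024-bit bus per stack
    return capacity_gb, bandwidth_gbs

# HBM1 as on the Fury X: four 4-Hi stacks of 2Gbit dies at 1 Gbps/pin
cap, bw = stack_stats(dies=4, die_gbit=2, pin_gbps=1.0)
print(4 * cap, 4 * bw)    # -> 4.0 GB, 512.0 GB/s

# HBM2 4-Hi: 8Gbit dies at the nominal 2 Gbps/pin, same four stacks
cap, bw = stack_stats(dies=4, die_gbit=8, pin_gbps=2.0)
print(4 * cap, 4 * bw)    # -> 16.0 GB, 1024.0 GB/s (= 1 TB/s)
```

The four-stack HBM2 case reproduces the 16GB / 1TB/s combination the article quotes.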
Of course, HBM2 is only as good as the graphics cards it's featured in. As far as use-case confirmations go, Nvidia at least, speaking at the Japanese version of GTC, confirmed that it will be utilizing HBM2 technology in its upcoming Pascal GPUs. Interestingly, however, the amount of vRAM revealed was 16GB at 1TB/s and not 32GB. The 1TB/s figure shows that Nvidia is going to be using 4 stacks of HBM – and the amount of vRAM tells us that it's going to be 4-Hi HBM2. They did mention, however, that as the memory standard matures they might eventually start rolling out 32GB HBM2 graphics cards. This isn't really surprising, considering 8-Hi HBM would almost certainly have more yield complications than 4-Hi HBM.
P100 is ~1.4Gbps HBM2 at 732GB/sec. Not 1TB/sec.
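The 732GB/sec figure can be turned into a per-pin rate with a quick check, assuming the standard 1024-bit bus per HBM2 stack and four stacks:

```python
# 732 GB/s spread over 4 HBM2 stacks (4 x 1024 = 4096 data pins)
bus_bits = 4 * 1024
bandwidth_gbs = 732
pin_gbps = bandwidth_gbs * 8 / bus_bits
print(round(pin_gbps, 2))   # -> 1.43 Gbps/pin, below HBM2's nominal 2 Gbps
```

So the P100 runs its HBM2 well under the 2 Gbps/pin needed for the 1TB/s headline number.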
And as you mention, 16GB. Another problem for HBM is the density.
HBM1/HBM2 isn't as effective as you think once GDDR5X/GDDR6 hit. And HBM2 didn't improve much over HBM1. I know you're just copying Hynix's PR numbers, but ask yourself why you find no Hynix slides comparing against GDDR5X or GDDR6. The sole fact that Nvidia made GP102 with GDDR5X instead of HBM2 says it all in terms of cost, performance and power. And then there is Samsung – an HBM2 maker 6-9 months ahead of Hynix – working hard on GDDR6.