Samsung Introduces Next Generation HBM2E


Mar 20, 2019
At NVIDIA's GPU Technology Conference, Samsung unveiled an even faster version of their HBM2 memory. A stack of "Flashbolt," as they call it, can deliver up to 410 GB/s of bandwidth, which they claim is 33% faster than previous offerings, and a single package can hold up to 16GB of memory. Samsung says the new memory is aimed at "supercomputers, graphics systems, and artificial intelligence (AI)" applications, though they didn't say which GPUs or accelerators will make use of it in the near future.
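The headline figures check out as back-of-envelope arithmetic: HBM2 uses a 1024-bit interface per stack, and the quoted bandwidths correspond to roughly 3.2 Gbps per pin for Flashbolt versus 2.4 Gbps for the previous-generation "Aquabolt" HBM2 (the per-pin rates are assumptions consistent with the announced numbers, not from the article itself):

```python
# Back-of-envelope check of Samsung's figures. Per-pin data rates
# are assumptions consistent with the announced stack bandwidths.
PINS_PER_STACK = 1024          # HBM2 interface width per stack
FLASHBOLT_GBPS_PER_PIN = 3.2   # assumed Flashbolt per-pin rate
AQUABOLT_GBPS_PER_PIN = 2.4    # assumed previous-gen (Aquabolt) rate

def stack_bandwidth_gbs(gbps_per_pin, pins=PINS_PER_STACK):
    """Aggregate stack bandwidth in GB/s (8 bits per byte)."""
    return gbps_per_pin * pins / 8

flashbolt = stack_bandwidth_gbs(FLASHBOLT_GBPS_PER_PIN)  # 409.6 GB/s
aquabolt = stack_bandwidth_gbs(AQUABOLT_GBPS_PER_PIN)    # 307.2 GB/s
print(f"Flashbolt: {flashbolt:.1f} GB/s per stack")
print(f"Speedup over Aquabolt: {flashbolt / aquabolt - 1:.0%}")
```

That works out to about 410 GB/s per stack and a one-third uplift, matching the claimed 33%.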

"Flashbolt's industry-leading performance will enable enhanced solutions for next-generation data centers, artificial intelligence, machine learning, and graphics applications," said Jinman Han, senior vice president of Memory Product Planning and Application Engineering Team at Samsung Electronics. "We will continue to expand our premium DRAM offering, and improve our 'high-performance, high capacity, and low power' memory segment to meet market demand."
Sounds like this is aimed at the HPC industry. I/O is still the bottleneck in that area.
That's right up around L3 cache speeds on my dual Xeon V4 servers. Faster than the L2 cache on a 7700k.
Will we see these on Navi?

No way Navi is getting this; all rumors report it will use GDDR6 and be targeted at the low/mid range. This memory will cost more than an entire Navi card... Maybe for the next-gen Instinct cards.

However, single-stack HBM cards are now a viable option, which could make mini-PCs and laptops built around an "i7-8709G with Radeon RX Vega" type chip even faster. So we could see single- and dual-stack HBM cards again.

This is the newest and most expensive HBM memory; think next-gen Volta successors (Ampere) in $6,000+ GPUs.

4x16GB (64GB) stacks of these at 1.64 TB/s (Radeon VII is 1 TB/s) tied into a next-gen Nvidia 7nm+ chip will be insane. In price and performance.
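For a quick sanity check of that hypothetical four-stack configuration (the per-stack capacity and bandwidth are from the announcement; the GPU pairing is the commenter's speculation):

```python
# Hypothetical 4-stack Flashbolt configuration. Per-stack figures
# are from Samsung's announcement; the 4-stack layout is speculative.
STACKS = 4
GB_PER_STACK = 16     # capacity per package
GBS_PER_STACK = 410   # bandwidth per stack, GB/s

total_capacity_gb = STACKS * GB_PER_STACK            # 64 GB
total_bandwidth_tbs = STACKS * GBS_PER_STACK / 1000  # 1.64 TB/s
print(f"{total_capacity_gb} GB at {total_bandwidth_tbs:.2f} TB/s")
```

Four stacks aggregate to 1.64 TB/s, i.e. roughly 64% more bandwidth than Radeon VII's 1 TB/s.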