Micron Says GDDR5X Updated and GDDR6 on the Way

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,602
Micron has told us that we should expect updates to GDDR5X in terms of throughput. GDDR5X launched at a 10Gbps spec, and we should now expect stock speeds to be upped to 11Gbps and 12Gbps configurations.

When we launched G5X just over a year ago, we were proud to deliver the world’s fastest discrete memory for NVIDIA’s highest performance gaming and workstation-class graphics cards. To keep up with the insatiable demands on memory from high performance GPUs for gaming, visualization and artificial intelligence, we continue to push the envelope for graphics memory data rates and yields.

As you likely know, NVIDIA has been the company truly bringing exposure to this G5X technology, while AMD has been firmly in the HBM corner. It has been rumored that HBM2 supply will hamper the launch quantities of AMD's new RX Vega GPU. It has also been surmised that this is the reason the Radeon Vega Frontier Edition has been pushed out in front of the enthusiast RX Vega card. Remember that AMD continually promised that Vega would launch for the gamer in Q2 of this year; that has now been pushed back to Q3. You could go back, parse the exact wording, and argue that AMD never explicitly promised an RX Vega launch in Q2, but it surely implied it.

What we know is that NVIDIA's product stack at the high end will have a much stronger supply channel than AMD's. NVIDIA's current-gen Titan Xp uses G5X at the 11.4Gbps specification.
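
For a rough sense of what those per-pin numbers mean in practice, peak bandwidth is just data rate times bus width. A quick back-of-the-envelope calc (the bus widths are the published specs for those cards; the 256-bit GDDR6 configuration is hypothetical):

```python
# Peak memory bandwidth in GB/s = per-pin rate (Gb/s) * bus width (bits) / 8
def bandwidth_gb_s(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(10.0, 256))  # GTX 1080, launch 10Gbps G5X: 320 GB/s
print(bandwidth_gb_s(11.4, 384))  # Titan Xp, 11.4Gbps G5X: ~547 GB/s
print(bandwidth_gb_s(16.0, 256))  # hypothetical 256-bit GDDR6 card: 512 GB/s
```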

Looking forward, GDDR6 is slated to push out 16Gbps! And it will be doing this in early 2018!

With regards to the status of Micron's G6 program, which we first announced in February, I am pleased to report that our product development efforts are on track and we expect to have functional silicon very soon. By leveraging our G5X-based high speed signaling experience from roughly 2 years of design, mass production, test and application knowledge, I am confident we are well positioned to bring the industry's most robust G6 to mass production by early 2018.

To that end, I am excited to announce that our Graphics design team in Munich has achieved 16Gbps data rates in our high speed test environment, another first for the memory industry. The left picture shows the data eye opening at 16Gbps based on a critical PRBS pattern sequence, with great timing and voltage margin. The right image below shows stable data timing margin (horizontally) versus data rate (vertically), from our base sort speed of 10Gbps up to an unprecedented 16Gbps. This result is based on measurements on a meaningful sampling size of our mass production G5X silicon, not theoretical simulation data.
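
For the curious, a PRBS is just a repeatable pseudo-random bit stream used to stress a link for eye-diagram measurements. Here is a minimal sketch of a PRBS-7 generator using the common x^7 + x^6 + 1 polynomial (Micron does not say which PRBS variant it used):

```python
def prbs7(length=127):
    """Generate a PRBS-7 sequence from a 7-bit LFSR (x^7 + x^6 + 1).

    A maximal-length PRBS-7 pattern repeats every 2^7 - 1 = 127 bits
    and exercises a wide mix of bit transitions on the link under test.
    """
    state = 0x7F  # any non-zero 7-bit seed works
    bits = []
    for _ in range(length):
        new_bit = ((state >> 6) ^ (state >> 5)) & 1  # taps at bits 7 and 6
        state = ((state << 1) | new_bit) & 0x7F
        bits.append(new_bit)
    return bits

pattern = prbs7()
print(len(pattern), sum(pattern))  # 127 bits, 64 ones / 63 zeros
```
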
So what does this all mean to the computer hardware enthusiast and gamer? It means that we will continue to see trickle-down memory technology making better mid-range products soon, and that new products will certainly be less constrained by bandwidth limitations, at least for the companies that use the technology. Also worthy of mention: it is being kicked around that not only are HBM2 quantities limited for RX Vega, but that it will also cost the company $160 for the two 4GB stacks needed on its new GPUs, while GDDR5X comes in at less than one third of the cost per GB.
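
Running those rumored numbers (the $160 figure and the one-third-per-GB ratio are scuttlebutt, not confirmed pricing):

```python
# Rumored: $160 for two 4GB HBM2 stacks; GDDR5X under a third of that per GB
hbm2_total = 160.0            # USD for 8GB of HBM2
hbm2_per_gb = hbm2_total / 8  # $20.00 per GB
g5x_per_gb = hbm2_per_gb / 3  # upper bound: ~$6.67 per GB

print(f"HBM2: ${hbm2_per_gb:.2f}/GB vs G5X: under ${g5x_per_gb:.2f}/GB")
print(f"8GB of G5X: under ${8 * g5x_per_gb:.0f} vs ${hbm2_total:.0f} for HBM2")
```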
 
So other than possible power savings and a reduced footprint on the video card, it looks like HBM is rapidly becoming irrelevant.
 
Until it gets "cheap" and in high production, it looks to be a loser in consumer parts.
 
HBM2 only has two advantages over GDDR5X/GDDR6: its size and ECC, nothing else. Even Hynix has dropped every form of power or bandwidth argument.

And HBM is close to losing to HMC when it can't reach the consumer space anyway.
 
Once again, cost is king. The minute HBM production hit a snag and GDDR5X came along, HBM was in trouble. It doesn't offer enough of a performance benefit to justify a 3x higher cost.

Maybe if initial production hadn't been so slow and GDDR5X hadn't come along, NVIDIA would have been forced to use it, and costs would be dropping as production ramped up. But that didn't happen. HBM is more or less dead at this point as a result.
 
Is this HBM thing something that could be seen a mile away, or was it the combination of HBM production issues AND GDDR5X progress that doomed it?
 

For the consumer it was more or less doomed from the start due to the static cost penalties alone. The entire success of HBM revolved around GDDR making no progress, leaving HBM the market without competition to offset its costs and other penalties.

HBM is simply sitting between a rock and a hard place: HMC on one side, GDDR on the other.

The GDDR5X/GDDR6 story I think we all know. Here is an example of HMC.

(An HMC specification table and figure were attached here as images.)


If something like GP100 used HMC, you could, at least in theory, add up to 256GB of memory via four memory controllers, equal to the four HBM2 controllers.
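
Spelling out that arithmetic (the eight-cubes-per-chain and 8GB-per-cube figures are assumptions that make the 256GB claim work; GP100's shipping config is four 4GB HBM2 stacks):

```python
# GP100 as shipped: four HBM2 controllers, one 4GB stack each
hbm2_gb = 4 * 4        # 16 GB

# The 256GB HMC claim implies chaining cubes behind each of four links,
# e.g. 8 cubes per chain at 8GB per cube (assumed figures)
hmc_gb = 4 * 8 * 8     # 256 GB

print(f"HBM2: {hbm2_gb} GB vs chained HMC: {hmc_gb} GB")
```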
 
Is this HBM thing something that could be seen a mile away, or was it the combination of HBM production issues AND GDDR5X progress that doomed it?

I feel it would have taken off if AMD hadn't shit the bed by offering it in an expensive Fury lineup of products that were already lacking features and specs compared to the competition, while targeting a small segment of the consumer market (enthusiasts), then an even smaller percentage of that (deeper-pocketed enthusiasts), and a sliver of that for their flagship (even deeper-pocketed enthusiasts with the gumption/capability to install a factory-watercooled GPU).

The Fury line was comparatively obsolete before it was even available to purchase, despite having promising HBM onboard. AMD may have fared better if they could have offered the Fury X with 8GB of HBM, but regardless, they should have also added a full Fury X without the water cooler to the mix.

The only saving grace in the entire lineup was the Nano, imo... but it suffered from the same limiting specs, while most of nVidia's AIB partners were rolling out itty-bitty-sized offerings of their own, ranging from dirt cheap to upper tier in the price/performance spectrum.

For AMD's sake, I hope that they have learned from that experience and don't repeat any aspect of it with their upcoming HBM2-equipped lineup.
 
So why is there a tombstone here already for HBM2? Is it not coming to Vega soon?
 

Cost and availability in the supply chain. We'll likely see some hefty price tags on HBM2 GPUs, when they can be found in stock...
 
So GDDR5 Vega soon? I mean, if memory is the thing holding Vega back...
 

Probably not, since AMD is likely already locked into contractually purchasing X quantity of HBM2 over Y timeframe from Z suppliers.
 
Just wait for AMD's more general management and product decision behavior from the last several years to reassert itself and give us an APU in the style of Intel's Iris Pro with a bunch of HBM, but offsetting the cost by using original Bulldozer cores and "optimized" GCN cores for graphics.

AMD: Just Wait™
 