AMD Working on GDDR6 Memory Technology for Upcoming Graphics Cards

Megalith

A rumor recently surfaced via the LinkedIn page of an AMD technical engineer, which listed the company as working on a GDDR6 memory controller. The company will be sticking with HBM2 for its high-end, next-generation graphics cards in 2018 (Navi), however.

Compared to current-generation GDDR5 DRAM, we are looking at both increased bandwidth and transfer speeds (8 Gbps vs. 16 Gbps) and a lower operating voltage (1.5 V vs. 1.35 V). The specifications can easily be compared to current DRAM standards. While GDDR5X can hit the same speeds as GDDR6, the latter comes with better optimizations and higher densities.
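As a rough back-of-the-envelope sketch of what those per-pin rates mean, peak bandwidth scales directly with the data rate; the 256-bit bus width below is just an assumed example, not a specific card's spec.

```python
# Peak memory bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8.
# Per-pin rates are from the article above; the 256-bit bus width is an assumed example.

def peak_bandwidth_gb_s(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth for a given per-pin rate and bus width."""
    return per_pin_gbps * bus_width_bits / 8

print(peak_bandwidth_gb_s(8, 256))   # GDDR5 at 8 Gbps:  256.0 GB/s
print(peak_bandwidth_gb_s(16, 256))  # GDDR6 at 16 Gbps: 512.0 GB/s
```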
 
Vega 20 with GDDR6? Is that what's being implied? And then Navi back to HBM? Or just the midrange of Vega 20 (Vega 21, or whatever they call the midrange) with GDDR6, and the top-end Vega 20 (assuming a 2018 refresh) still with HBM2?
 
Is the RAM bus often a bottleneck at this point? Perhaps suggest their GPUs are on a leash?
 
Is the RAM bus often a bottleneck at this point? Perhaps suggest their GPUs are on a leash?

Not as much as it once was. It's more likely cost savings, since HBM costs would eat into the already small profit margins of mid/low-end cards, while it's not as big of a deal on the high end.
 
Vega 20 with GDDR6? Is that what's being implied? And then Navi back to HBM? Or just the midrange of Vega 20 (Vega 21, or whatever they call the midrange) with GDDR6, and the top-end Vega 20 (assuming a 2018 refresh) still with HBM2?

More like Navi will still be HBM2, and then products in 2019 and beyond will most likely be using GDDR6.
 
If AMD were smart, they'd dump DDR4 memory in favor of GDDR6. This would be most beneficial for any APUs they make, since memory bandwidth is always the limiting factor there. Plus, GDDR6 memory has much higher bandwidth for Ryzen CPUs.
 
If AMD were smart, they'd dump DDR4 memory in favor of GDDR6. This would be most beneficial for any APUs they make, since memory bandwidth is always the limiting factor there. Plus, GDDR6 memory has much higher bandwidth for Ryzen CPUs.

Because it's always that easy and cost effective, right?
 
If AMD were smart, they'd dump DDR4 memory in favor of GDDR6. This would be most beneficial for any APUs they make, since memory bandwidth is always the limiting factor there. Plus, GDDR6 memory has much higher bandwidth for Ryzen CPUs.

You may want to do your research on the differences between graphics memory and system memory... it's not as simple as you make it sound, lol.
 
So we're looking at some mid-level cards some time next year?

Radeon RX-Vega 48 - 3072 Stream Processors, 1590 MHz Base / 1770 MHz Boost, 192 Texture Units, 8 GB GDDR6, 384-bit - $350 - GTX 1070 (vanilla) performance.

Radeon RX-Vega 40 - 2560 Stream Processors, 1433 MHz Base / 1660 MHz Boost, 160 Texture Units, 8 GB GDDR6, 256-bit - $250 - RX 580/GTX 1060 performance +15%

Radeon RX-Vega 32 - 2048 Stream Processors, 1502 MHz Base / 1680 MHz Boost, 128 Texture Units, 8 GB GDDR6, 192-bit - $179 - RX 570 performance.
 
You may want to do your research on the differences between graphics memory and system memory... it's not as simple as you make it sound, lol.
And yet this is something AMD has already done with the Xbox One X and the PS4. The eight Jaguar cores in those machines use GDDR5 as their main memory. So I don't understand how it's not so simple when it's already been done?
 
So we're looking at some mid-level cards some time next year?

Radeon RX-Vega 48 - 3072 Stream Processors, 1590 MHz Base / 1770 MHz Boost, 192 Texture Units, 8 GB GDDR6, 384-bit - $350 - GTX 1070 (vanilla) performance.

Radeon RX-Vega 40 - 2560 Stream Processors, 1433 MHz Base / 1660 MHz Boost, 160 Texture Units, 8 GB GDDR6, 256-bit - $250 - RX 580/GTX 1060 performance +15%

Radeon RX-Vega 32 - 2048 Stream Processors, 1502 MHz Base / 1680 MHz Boost, 128 Texture Units, 8 GB GDDR6, 192-bit - $179 - RX 570 performance.


And add $100 to $150 more to the price, because they will all be sold out like most ATI cards today.
 
And yet this is something AMD has already done with the Xbox One X and the PS4. The eight Jaguar cores in those machines use GDDR5 as their main memory. So I don't understand how it's not so simple when it's already been done?
Was just thinking this.

What's the difference?

The memory controller is all I can think of.
 
Was just thinking this.

What's the difference?

The memory controller is all I can think of.
It is actually the difference in uses that creates the issue. Graphics memory has high bandwidth but also high latency, and the high latency part is what would hinder desktop performance. If all you did was game, then maybe you could pull it off, but that would be a huge niche and a hindrance for what a typical desktop gets used for by most people.
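A minimal sketch of that latency-vs-bandwidth trade-off (all latency and bandwidth figures below are illustrative assumptions, not measurements): small dependent accesses are dominated by latency, while big streaming transfers are dominated by bandwidth.

```python
# Toy model: time per access ≈ latency + bytes_moved / bandwidth.
# 1 GB/s moves 1 byte per ns, so dividing bytes by GB/s yields nanoseconds.
# All figures are illustrative assumptions, not measured values.

def access_time_ns(latency_ns: float, bytes_moved: int, bandwidth_gb_s: float) -> float:
    return latency_ns + bytes_moved / bandwidth_gb_s

# CPU-style pointer chasing (one 64-byte cache line at a time) cares about latency:
print(access_time_ns(latency_ns=60, bytes_moved=64, bandwidth_gb_s=50))    # ~61 ns
print(access_time_ns(latency_ns=200, bytes_moved=64, bandwidth_gb_s=300))  # ~200 ns

# GPU-style streaming of a 1 MB block cares about bandwidth instead:
print(access_time_ns(latency_ns=60, bytes_moved=1_000_000, bandwidth_gb_s=50))    # ~20,060 ns
print(access_time_ns(latency_ns=200, bytes_moved=1_000_000, bandwidth_gb_s=300))  # ~3,533 ns
```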
 
Maybe now that Raja left, somebody is thinking.
Or it's just evolution, as Nvidia is working on the same thing right now! HBM2 yields are very low at the moment, so only the highest-end SKUs get it, like GV100 in those DGX-1 machines and the Vega GPUs. GDDR6 is coming to market out of necessity!
 
The way I'm seeing it, HBM has three main benefits:
  1. In its highest configuration, it provides tremendous bandwidth
  2. Because of the interposer configuration, footprint is reduced relative to standard memory configurations
  3. Power usage may be reduced (this is contested)

And it has some problems as well:
  • Even higher latency than GDDR memory
  • Configurations are not very modular, unlike GDDR
  • The largest implementations are very difficult to produce, and the equipment available is scarce
  • The vertical height of the product is increased (important for mobile applications)

Depending on application, these may be beneficial or may be detrimental to the end product:
  • For the high-end Nvidia compute cards, the extra bandwidth is actually put to use, and the price difference is meaningless
  • For Intel's new CPU with Radeon graphics, the smaller HBM implementation allows a significant savings in terms of footprint while providing (I expect) excellent performance
  • For Fury and Vega, the extra bandwidth has not been helpful for consumer performance, while the extra cost has been detrimental, and limited availability has kept costs high and end product availability low

Going forward, I expect HBM to become more competitive; Nvidia has leveraged it successfully in their compute products, and that will push economy of scale forward as foundries have a reason to invest in more capacity and improved production methods. And that's good, given that AMD/RTG may not stay on the HBM train for their consumer products.
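To put a rough number on the bandwidth and footprint points above: the per-pin rates, package widths, and the 500 GB/s target below are all assumed for illustration, not any particular card's spec.

```python
import math

# How many memory packages does it take to reach ~500 GB/s of peak bandwidth?
# Per-package width and per-pin rate are illustrative assumptions.

def packages_needed(target_gb_s: float, per_pin_gbps: float, pins_per_package: int) -> int:
    per_package_gb_s = per_pin_gbps * pins_per_package / 8
    return math.ceil(target_gb_s / per_package_gb_s)

# One HBM2 stack: 1024 pins at ~2 Gbps each -> ~256 GB/s per stack.
print(packages_needed(500, 2, 1024))  # 2 stacks on an interposer

# One GDDR6 package: 32 pins at ~14 Gbps each -> ~56 GB/s per chip.
print(packages_needed(500, 14, 32))   # 9 chips spread around the GPU
```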
 
They are going to continue with HBM2/HBM3, etc. for the high-end consumer cards like Vega and Navi. The midrange is getting GDDR6; it will most likely be the cut-down chips of Vega 64/56.
HBM has proved to be superior, but in order to extract that advantage the game needs to be coded masterfully. Good examples are Fury in BF3, BF4, and BF1, or Doom.
Rumor has it that the upcoming drivers from AMD have some great improvements for Vega cards. It was to be expected.
 
There will most likely be an improved Polaris with it as well. Mining has taken a shitload of cards; they will not just get rid of Polaris.
 
HBM has proved to be superior, but in order to extract that advantage the game needs to be coded masterfully.

This is not true. HBM provides more bandwidth, but in order to use that bandwidth the GPU has to have the workload, and games do not yet present such a workload. Further, the real-world gaming performance of Fury and Vega would likely not be affected at all by using GDDR in place of HBM.
 
So we're looking at some mid-level cards some time next year?

Radeon RX-Vega 48 - 3072 Stream Processors, 1590 MHz Base / 1770 MHz Boost, 192 Texture Units, 8 GB GDDR6, 384-bit - $350 - GTX 1070 (vanilla) performance.

Radeon RX-Vega 40 - 2560 Stream Processors, 1433 MHz Base / 1660 MHz Boost, 160 Texture Units, 8 GB GDDR6, 256-bit - $250 - RX 580/GTX 1060 performance +15%

Radeon RX-Vega 32 - 2048 Stream Processors, 1502 MHz Base / 1680 MHz Boost, 128 Texture Units, 8 GB GDDR6, 192-bit - $179 - RX 570 performance.
Come on now, you can't have 8 GB of VRAM on a 192- or 384-bit bus without using mixed-density memory, which of course AMD has never done. Nvidia tried it on a few cards over the years, but it's a stupid approach, as you have some memory running at full speed while another part of the memory can only run at a fraction of that bandwidth.
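A minimal sketch of why that asymmetry happens, using a GTX 660-style layout (2 GB on a 192-bit bus) as the illustration; the 6 Gbps per-pin rate is assumed for the example.

```python
# 2 GB on a 192-bit bus means one 64-bit channel carries twice as much memory
# as the other two, so only the first 1.5 GB interleaves across the full bus.
# Per-pin rate is an assumed GDDR5-era figure for illustration.

PER_PIN_GBPS = 6.0

fast_region_gb_s = PER_PIN_GBPS * 192 / 8  # first 1.5 GB, striped over all channels
slow_region_gb_s = PER_PIN_GBPS * 64 / 8   # last 0.5 GB, stuck behind one channel

print(fast_region_gb_s)  # 144.0 GB/s
print(slow_region_gb_s)  # 48.0 GB/s
```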
 
Maybe now that Raja left, somebody is thinking.
This HBM is dumb for consumer cards. GDDR5X, and now GDDR6, is more than fine and is MUCH cheaper; had they used GDDR5X on Vega, it likely would have been 100 bucks cheaper.
 
This HBM is dumb for consumer cards. GDDR5X, and now GDDR6, is more than fine and is MUCH cheaper; had they used GDDR5X on Vega, it likely would have been 100 bucks cheaper.

And, just as importantly, available.
 
If AMD were smart, they'd dump DDR4 memory in favor of GDDR6. This would be most beneficial for any APUs they make, since memory bandwidth is always the limiting factor there. Plus, GDDR6 memory has much higher bandwidth for Ryzen CPUs.

GDDR is not usable as main system memory; the latency is too high.
 
1½-2 years behind on GDDR6. They really bet on the wrong memory, and it just shows how few resources RTG has to work with.

AMD won't be making a GDDR6 controller; they'll just license one.
 
Great! However, those systems are optimized to deal with the latencies, with low overhead and lots of SoC cache. Desktop computers have much higher overhead (and are much, much faster).
I think if the architecture were designed around the latency, it wouldn't be an issue. Just add much more cache, I would assume?
 
I think if the architecture were designed around the latency, it wouldn't be an issue. Just add much more cache, I would assume?

Sort of; on consoles, the architecture and the applications can be designed around the latency quite well. However, on the desktop (or server!) side, that's a level of code customization that usually just doesn't get done, and it would need to be done for every level of the system stack, from UEFI to kernel to APIs to drivers to applications.

So it's not that it couldn't be done (or won't be), but that it's not really commercially feasible for most consumer and enterprise systems. It probably happens in the supercomputer world, though.
 
Come on now, you can't have 8 GB of VRAM on a 192- or 384-bit bus without using mixed-density memory, which of course AMD has never done. Nvidia tried it on a few cards over the years, but it's a stupid approach, as you have some memory running at full speed while another part of the memory can only run at a fraction of that bandwidth.

Good catch...so, figure 9GB, 8GB, and 6GB respectively.
 
Just because it's not on Nvidia consumer cards does not mean it's dumb. It makes the Vega and Fury do more, and better, than Nvidia offerings in the same price range. I know you kids have your panties bunched up, especially with the new drivers coming out this month that are focused on the Vega cards. You can't run from reality. Don't hate; accept the fact that Vega is a better buy than Nvidia.
 
If I'm wrong about the way the code is done plus the API used, how would you explain the performance gain with the latest patch on Vegas in Wolfenstein II: The New Colossus? Stop spreading fake news. There is a reason AMD pushed for Mantle and Vulkan: it simply works great with HBM and cuts a ton of latency.
 
Just because it's not on Nvidia consumer cards does not mean it's dumb. It makes the Vega and Fury do more, and better, than Nvidia offerings in the same price range. I know you kids have your panties bunched up, especially with the new drivers coming out this month that are focused on the Vega cards. You can't run from reality. Don't hate; accept the fact that Vega is a better buy than Nvidia.

Is this the real life, or is it fantasy?
 
If I'm wrong about the way the code is done plus the API used, how would you explain the performance gain with the latest patch on Vegas in Wolfenstein II: The New Colossus? Stop spreading fake news. There is a reason AMD pushed for Mantle and Vulkan: it simply works great with HBM and cuts a ton of latency.

I've seen no explanation for the performance increase. We know that id optimizes for AMD first, and that makes sense, but we also know that usually they figure out how to do the same stuff with Nvidia too. And with Vulkan, surprise!, now they have to optimize for each vendor and each hardware generation.

And no, HBM does not cut latency relative to GDDR; it increases it. But for graphics, latency doesn't matter much.
 
There is no need for an explanation. Vulkan allows them to work closer to the GPU and its memory, therefore cutting the API overhead, which means less latency. Get your facts straight so you can form informed opinions.
 
So we're looking at some mid-level cards some time next year?

Radeon RX-Vega 48 - 3072 Stream Processors, 1590 MHz Base / 1770 MHz Boost, 192 Texture Units, 8 GB GDDR6, 384-bit - $350 - GTX 1070 (vanilla) performance.

Radeon RX-Vega 40 - 2560 Stream Processors, 1433 MHz Base / 1660 MHz Boost, 160 Texture Units, 8 GB GDDR6, 256-bit - $250 - RX 580/GTX 1060 performance +15%

Radeon RX-Vega 32 - 2048 Stream Processors, 1502 MHz Base / 1680 MHz Boost, 128 Texture Units, 8 GB GDDR6, 192-bit - $179 - RX 570 performance.

Isn't that a bit expensive? I'd expect Nvidia to come out with a GTX 2060 with at least matching 1070 performance next year for $250.
 
There is no need for an explanation. Vulkan allows them to work closer to the GPU and its memory, therefore cutting the API overhead, which means less latency. Get your facts straight so you can form informed opinions.

Sorry bro, the one that needs to check the facts is you. Are you sure that you understand what you're talking about?
 