NVIDIA GeForce RTX 50-series and AMD RDNA4 Radeon RX 8000 to Debut GDDR7 Memory

erek ([H]F Junkie, joined Dec 19, 2005, 11,242 messages)
Speaking of Cadence, I just mentioned them today in another thread:

“While GDDR7 promises major performance increases without major increases of power consumption, perhaps the biggest question from technical audiences is when the new type of memory is set to become available. Absent a hard commitment from JEDEC, there isn't a specific timeframe to expect GDDR7 to be released. But given the work involved and the release of a verification system from Cadence, it would not be unreasonable to expect GDDR7 to enter the scene along with next generation of GPUs from AMD and NVIDIA. Keeping in mind that these two companies tend to introduce new GPU architectures in a roughly two-year cadence, that would mean we start seeing GDDR7 show up on devices later on in 2024.


Of course, given that there are so many AI and HPC companies working on bandwidth hungry products these days, it is possible that one or two of them release solutions relying on GDDR7 memory sooner. But mass adoption of GDDR7 will almost certainly coincide with the ramp of AMD's and NVIDIA's next-generation graphics boards.”



Source: https://www.anandtech.com/show/18759/cadence-derlivers-tech-details-on-gddr7-36gbps-pam3-encoding

Source 2: https://www.techpowerup.com/305676/...md-rdna4-radeon-rx-8000-to-debut-gddr7-memory
 
erek, really?
Don't you realize posts about memory don't count as "news", and totally belong in the Memory sub-group on here, how dare you. ;)

NVIDIA GeForce RTX 50-series and AMD RDNA4 Radeon RX 8000 to Debut GDDR7 Memory

by btarunr Today, 03:10 Discuss (18 Comments)
With Samsung Electronics announcing that the next-generation GDDR7 memory standard is in development, and Cadence, a vital provider of DRAM PHY IP, EDA software, and validation tools, announcing its latest validation solution, the decks are clear for the new memory standard to debut with the next generation of GPUs. GDDR7 would succeed GDDR6, which debuted in 2018 and has been around for nearly five years. GDDR6 launched at speeds of 14 Gbps, and its derivatives are now in production at speeds as high as 24 Gbps; it provided a generational doubling in speed over the preceding GDDR5.

The new GDDR7 promises the same, with starting speeds said to be as high as 36 Gbps, going beyond the 50 Gbps mark over its lifecycle. A MyDrivers report says that NVIDIA's next-generation GeForce RTX 50-series, probably slated for a late-2024 debut, as well as AMD's competing RDNA4 graphics architecture, could introduce GDDR7 at its starting speed of 36 Gbps. A GPU with a 256-bit wide GDDR7 interface would enjoy 1.15 TB/s of bandwidth, and one with a 384-bit interface would have a cool 1.7 TB/s to play with. We still don't know the codename of NVIDIA's next graphics architecture; it could be any of the ones NVIDIA hasn't used from the image below.
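The bandwidth figures quoted above can be sanity-checked with simple arithmetic: bus width in bits times the per-pin data rate, divided by 8 bits per byte. A minimal sketch (the function name is ours, not anything from the article):

```python
def gddr_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Aggregate memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * gbps_per_pin / 8

# 256-bit bus at 36 Gbps per pin
print(gddr_bandwidth_gbs(256, 36))  # 1152.0 GB/s, i.e. ~1.15 TB/s
# 384-bit bus at 36 Gbps per pin
print(gddr_bandwidth_gbs(384, 36))  # 1728.0 GB/s, i.e. ~1.7 TB/s
```

Both results line up with the 1.15 TB/s and 1.7 TB/s figures in the article.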
 
Your source doesn't mention "RTX 50-series" or "RDNA 4" at all. The editorialized thread title is clickbait. I take no issue with the speculation in the first paragraph of the post, but let's try not to mislead people with the thread title.
 
Your source doesn't mention "RTX 50-series" or "RDNA 4" at all. The editorialized thread title is clickbait. I take no issue with the speculation in the first paragraph of the post, but let's try not to mislead people with the thread title.
Agree. No evidence so far that RDNA 4 has GDDR7
 
"A MyDrivers report says that NVIDIA's next-generation GeForce RTX 50-series, probably slated for a late-2024 debut, as well as AMD's competing RDNA4 graphics architecture, could introduce GDDR7 at its starting speeds of 36 Gbps."

So all hypothetical.
 
So all hypothetical.
It is a one-year-old article; at the time, we can imagine NVIDIA and AMD did not yet know whether GDDR7 would be ready for launch. The thread title was embellished a bit to make it more fun.
 
I do not understand what these mean:
https://www.jedec.org/news/pressreleases/jedec-publishes-gddr7-graphics-memory-standard
  • Doubles the number of independent channels from 2 in GDDR6 to 4 in GDDR7.
  • Support for 16 Gbit to 32 Gbit densities including support for 2-Channel mode to double system capacity.
GDDR6 uses 32 data pins for 2 independent memory channels of 16 pins each. The two channels can be combined to function as a single 32-pin channel for higher bandwidth to a single chip, but that comes with drawbacks: it essentially becomes a single-lane road for huge trucks, so traffic flows one way until everything is delivered.

So if they expand GDDR7 to four 16-pin channels, it can run in a quad-channel configuration of 16 pins each, or a dual-channel configuration of 32 pins each.
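Under the reading above (GDDR7 as four 16-pin channels per chip, which is this poster's interpretation, not a quote from the JEDEC spec), per-chip peak bandwidth can be sketched as channels times pins per channel times per-pin rate:

```python
def chip_bandwidth_gbs(channels: int, pins_per_channel: int, gbps_per_pin: float) -> float:
    """Peak per-chip bandwidth in GB/s for a given channel layout and per-pin rate."""
    return channels * pins_per_channel * gbps_per_pin / 8

# GDDR6: 2 channels x 16 pins at today's top 24 Gbps
gddr6 = chip_bandwidth_gbs(2, 16, 24)   # 96.0 GB/s per chip
# GDDR7: 4 channels x 16 pins at the quoted 36 Gbps starting speed (assumed layout)
gddr7 = chip_bandwidth_gbs(4, 16, 36)   # 288.0 GB/s per chip
print(gddr6, gddr7)
```

The per-pin count and channel layout here are assumptions for illustration; the real devices may organize the same total pin count differently.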
 
Could that eventually lead to doubling the bandwidth? And/or doubling the number of memory chips driven by that same memory bus?
 
Could that eventually lead to doubling the bandwidth? And/or doubling the number of memory chips driven by that same memory bus?
Not likely the same bus; the channel layout is very different, far more so than GDDR5 to GDDR6. If each pin is 40% faster and there are twice as many pins for twice as many channels, but the number of pins per channel remains the same, then how much faster it gets depends a lot on what you are measuring and how the memory is being utilized. Going from 2 channels to 4 channels where each channel is 40% faster could be huge, and burst mode being 80% faster while going from half-duplex to full-duplex communication could be more so. But if software and architectures can't take advantage of it, then it means very little.
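The rough scaling arithmetic in the reply above can be sketched as follows. This is peak-rate math only; as the reply notes, realized gains depend entirely on how software uses the memory:

```python
# Peak throughput scaling if each pin is 40% faster and channel count doubles
# (pins per channel unchanged). These multipliers come from the post above.
per_pin_speedup = 1.4     # 40% faster signaling per pin
channel_multiplier = 2    # 2 channels -> 4 channels

peak_scaling = per_pin_speedup * channel_multiplier
print(peak_scaling)  # 2.8x theoretical peak over the previous generation
```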
 
AMD might be going for slower memory to avoid getting its cards banned in China; the 7900 XT just skirts under the limit, so much as a 5% performance increase there and it no longer makes the cut.

If it's true the RDNA4 GPUs are going to see significant increases in AI performance and much better FP8, then even something at the 8800 level could find itself on the wrong side of that ban. AMD sells too many cards in China to risk that, so they are going to improve the architecture so the smaller silicon can run cooler, and they will gimp the memory to stay on the happy side of the laws they're presented with.
 