SK Hynix Unveils DDR5 Memory Details – Up To 8400 MHz Speeds, 64 Gb Densities, DRAM Mass Production This Year

Looking forward to DDR5?

"DDR5 provides a power-efficient design and improved reliability features, while delivering increased performance compared to DDR4. First of all, with an operating voltage of 1.1V, lowered from DDR4’s 1.2V, DDR5 aims to reduce power consumption per bandwidth by more than 20% of its predecessor. According to the market research company International Data Corporation (IDC), demand for DDR5 was expected to rise from 2020 and account for 22% of the total DRAM market in 2021 and 43% in 2022.4 SK hynix is planning to lead the market by actively responding to customer demands for ultra-high-speed, high-capacity memory, starting with 10nm-class 16Gb DDR5."

https://www.guru3d.com/news-story/sk-hynix-to-has-superfast-ddr5-memory-on-its-roadmap.html

https://wccftech.com/sk-hynix-ddr5-...-speeds-64-gb-dram-mass-production-this-year/
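
For what it's worth, that >20% figure can't come from the voltage drop alone: if dynamic power scales roughly with the square of the supply voltage, going from 1.2V to 1.1V only buys about 16%, with the rest presumably coming from architectural changes and the higher bandwidth the power is normalized against. A quick sanity check (my arithmetic, not SK Hynix's):

Code:
# Dynamic power scales roughly with C * V^2 * f; holding C and f fixed,
# the DDR4 -> DDR5 voltage drop alone gives:
v_ddr4, v_ddr5 = 1.2, 1.1
savings = 1 - (v_ddr5 / v_ddr4) ** 2
print(f"~{savings:.0%} lower dynamic power from voltage alone")  # ~16%
# The remainder of the claimed >20% per-bandwidth saving presumably comes
# from architectural changes, since bandwidth nearly doubles per generation.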
 
looking forward to it :)

is it basically what we have on graphics cards today? or was that GDDR6? is there a fundamental difference between GDDR and just DDR?

either way, looks good IMO !
 
Good question. While similar in internal construction, they are tuned for opposite goals. GDDR is optimized for raw bandwidth: it streams large, sequential blocks of data and can handle a read and a write on the same clock cycle. DDR is closer to the polar opposite: it is designed to multi-task, servicing many small accesses from multiple sources with low latency, but its I/O interface can only read or write in a given clock cycle, not both, so taking in new data while reading out existing data is markedly slower.
Gamers Nexus did a breakdown of it back in 2017: https://www.gamersnexus.net/guides/2826-differences-between-ddr4-and-gddr5

Excerpt from Gamers Nexus:
DDR4 SDRAM
A higher speed and lower voltage successor to DDR3, DDR4 has been accepted as the current mainstream standard as many processors/platforms such as Skylake, Kaby Lake, Haswell-E, Z170, Z270, X99, and the upcoming Skylake-X and Ryzen have adopted DDR4. Much like a CPU, DDR4 is built to handle a bombardment of small tasks with low latency and a certain granularity. DDR4 is fundamentally suited to transferring small amounts of data quickly (comparatively speaking), at the expense of aggregate bandwidth. DDR4 bus width is 64 bits per channel, but is combinational; i.e., 128-bit bus width in dual channel. Additionally, DDR4 has a prefetch buffer size of 8n (eight data words per memory access), which means 8 consecutive data words (words can be between 8–64 bits) can be read and presciently placed in the I/O buffer. Also, the I/O interface is limited to a read (output from memory) or write (input to memory) per clock cycle, but not both. Below, we’ll discuss how these specs contrast with GDDR5.

GDDR5 SGRAM
GDDR5 is currently the most common graphics memory among the last couple generations of GPUs, but the newest version is GDDR5X, currently implemented on only two cards: the GeForce GTX 1080 and Titan X (soon, 1080 Ti). Worth mentioning is HBM (High-Bandwidth Memory), used in some of the high-end Fiji GPUs by AMD. HBM 2 was ratified by JEDEC in January of 2016, is used in the nVidia Tesla P100, and will presumably be used in the high-end Vega-based GPUs by AMD.

GDDR5 is purpose-built for bandwidth; e.g., moving massive chunks of data in and out of the framebuffer with the highest possible throughput. This is made possible by a much wider bus (anywhere from 256 to 512 bits across 4-8 channels), albeit at the cost of increased latency via much looser internal timings compared to DDR4. Latency isn't as pressing an issue for GPUs, as their parallel nature allows them to work across multiple calculations simultaneously. Although GDDR5 has the same 8n prefetch buffer size as DDR4, the newest GDDR5X standard surpasses that with a depth of 16n (16 data words per memory access). Moreover, GDDR can handle input and output on the same clock cycle, unlike DDR. In addition, GDDR5 operates at a lower voltage than DDR4, around ~1V, meaning less heat waste and higher-performing modules. In small packages that are packed together densely, like on a graphics card PCB, lower heat is critical. System memory has the entire surface area of the stick to spread heat, and is isolated from high-heat components (like the GPU).
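
To put the excerpt's bus widths in perspective: peak bandwidth is just the per-pin data rate times the bus width in bytes. A rough sketch (the DDR4-3200 dual-channel and 8 Gbps/256-bit GDDR5 configurations are my own assumptions, chosen as typical examples):

Code:
# Peak bandwidth = per-pin data rate (MT/s) * bus width in bytes.
def peak_gb_s(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits / 8) / 1000  # -> GB/s

# Assumed configurations, for illustration only:
print(f"DDR4-3200, dual channel: {peak_gb_s(3200, 64 * 2):.1f} GB/s")  # 51.2
print(f"GDDR5 8 Gbps, 256-bit:   {peak_gb_s(8000, 256):.1f} GB/s")     # 256.0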
 
I see, thanks a lot for the cool insight :)
 
I plan on using DDR5 in my next system, which will likely be in 2-4 years. I am still kicking it with a 6700k, and about the only thing that actually maxes out my CPU is my backup software (yes, really), so I feel no need to upgrade anytime soon.
 
If 16+ core CPUs are going to become the norm, we need higher data transfer rates for all of those cores, especially if all of them are fully loaded under SMP, depending on the tasks of course.
 
I have to build an AI-based system for our new AV/Network suite. What are the chances the next-gen Threadrippers will use it?
 
Looks like I'm gonna get primed for an upgrade. For some weird reason, my entire life I seem to upgrade computers when there is a new memory specification.

Not on purpose, mind you; it always just seems to happen that way.
 
Still not maxing out my 4790k on DDR3-1600. I will keep my 4790k until my rig starts to fail.
 
That's an older quad-core, and those CPUs were never memory-bound.
Having 16+ CPU cores all fighting over two channels of memory, though, starts to hurt.

Nice work keeping it going all these years, though. (y)
Aside from the hardware CPU exploits, Haswell has been one of Intel's better CPUs in the last decade.
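
For a rough sense of that contention, divide peak bandwidth by core count. A quick sketch (dual-channel DDR4-3200 assumed; the core counts are just illustrative):

Code:
# Per-core share of a dual-channel DDR4-3200 bus (assumed configuration).
dual_channel_gb_s = 3200 * 8 * 2 / 1000  # two 64-bit channels = 51.2 GB/s
for cores in (4, 8, 16, 32):
    print(f"{cores:>2} cores: {dual_channel_gb_s / cores:.1f} GB/s per core")
# 4 cores get 12.8 GB/s each; 16 cores are down to 3.2 GB/s each, which is
# why high-core-count platforms move to quad- or octa-channel memory.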
 
Typical speed and latency for each generation, once the RAM becomes mainstream and sees widespread use:

DDR1-400 @ CL2

DDR2-800 @ CL4

DDR3-1600 @ CL8

DDR4-3200 @ CL16

So, DDR5-6400 @ CL32?


Realistically, this translates to very little real-world gain (FPS in games) from generation to generation when testing with the same GPU and with CPUs specced as closely as possible, to keep the comparisons remotely fair.
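
That pattern works out to near-constant absolute latency, which is a big part of why the real-world gains are so small: first-word latency in nanoseconds is CL cycles divided by the I/O clock, and the I/O clock is half the transfer rate. A quick check using the pairings above (the DDR5 entry is the guess from the list, not a shipping spec):

Code:
# First-word latency: CL cycles divided by the I/O clock (half the MT/s rate).
pairings = [("DDR1-400", 400, 2), ("DDR2-800", 800, 4), ("DDR3-1600", 1600, 8),
            ("DDR4-3200", 3200, 16), ("DDR5-6400", 6400, 32)]  # last is a guess
for name, mt_s, cl in pairings:
    ns = cl / (mt_s / 2) * 1000
    print(f"{name} @ CL{cl}: {ns:.1f} ns")  # 10.0 ns for every generation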

That being said, with each new generation of RAM we usually get new processors with much better IMCs, which means more stability and compatibility with very high clocked and low latency modules populating every DIMM slot (for those that want or actually need that).
 