DRAM is stuck in a 10nm trap

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
5,502
Think we'll soon see 7nm DRAM?

"That’s for memory cells. But logic chips, built out of high-density logic gates, are entirely different. They are super-high density. They are incomparable to 1T1C repetitive building blocks, with relatively large capacitors in them, which is why they can scale down past the 10nm barrier."

https://blocksandfiles.com/2020/04/13/dram-is-stuck-in-a-10nm-process-trap/
 

power666

Weaksauce
Joined
Jun 23, 2018
Messages
108
This has been known for years, but we are only now hitting that barrier without any sort of materials-science breakthrough for the next generation of DRAM.

Back when this DRAM barrier was first discussed, there was a proposed solution that could still work: use SRAM. This is the same high-speed memory used in CPU caches, composed of six (or more) transistors per cell. SRAM can be built on the same logic process used for processors; the dirty little secret is that most of the transistors in a modern CPU go toward its SRAM-based caches.

The presumption at the time was that SRAM could reach the same density as the last generation of DRAM around the 5 nm logic node, though it wasn't stated whose 5 nm node that would be. (I would presume Intel's, as that is expected to be denser than TSMC's and at the time was expected to be first to market.) There is some discussion of going this route at 5 nm for some memory products, but they are all tied to HBM-like solutions, which will remain expensive to implement and carry a premium in the marketplace. It likely won't be until beyond the 3 nm node is on the market that 5 nm SRAM memory becomes more of a commodity (DDR6 or 7, perhaps?), as the number of fabs capable of such bleeding-edge processes is expected to be at capacity for a long while.
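As a rough sanity check on the density gap being discussed, here's a back-of-the-envelope Python sketch. The cell areas and array efficiencies are assumed ballpark figures for illustration (a ~1x-nm 1T1C DRAM cell vs. a 7nm-class 6T SRAM cell), not vendor-published numbers.

```python
# Rough, back-of-the-envelope comparison of DRAM vs SRAM bit density.
# Cell areas and array efficiencies below are illustrative assumptions.

def array_density_mb_per_mm2(cell_area_um2: float, array_efficiency: float) -> float:
    """Megabytes of storage per mm^2, given one cell's area in um^2
    and the fraction of die area actually usable for the cell array."""
    cells_per_mm2 = 1e6 / cell_area_um2          # 1 mm^2 = 1e6 um^2
    usable_bits = cells_per_mm2 * array_efficiency
    return usable_bits / 8 / 1e6                 # bits -> megabytes

# Assumed cell areas: ~1x-nm DRAM 1T1C cell vs 7nm-class 6T SRAM cell.
dram = array_density_mb_per_mm2(cell_area_um2=0.0026, array_efficiency=0.55)
sram = array_density_mb_per_mm2(cell_area_um2=0.0270, array_efficiency=0.70)

print(f"DRAM: ~{dram:.0f} MB/mm^2, SRAM: ~{sram:.0f} MB/mm^2, "
      f"ratio ~{dram / sram:.1f}x")
```

Even with generous assumptions for SRAM, the 1T1C cell comes out several times denser per mm², which is why SRAM-as-main-memory only pencils out a node or two ahead of DRAM's stall point.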
 

whateverer

[H]ard|Gawd
Joined
Nov 2, 2016
Messages
1,143
SRAM? I think even at 6T it's going to be a density decrease. Even at the 7nm process node, we're only fitting 32MB on the chiplet.

AMD's Chiplet is the same size as this 1GB DDR5 chip from Hynix:

https://www.anandtech.com/show/13999/sk-hynix-details-its-ddr56400-dram-chip

Even assuming the cache is 1/4 of the chiplet's total die space, you would still have only 128MB of SRAM on the same size die.
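Writing that comparison out (the absolute die size cancels, so only the post's assumed 1/4 cache fraction and the 1GB DRAM die matter):

```python
# Working through the comparison above: if 32 MB of L3 occupies ~1/4 of a
# 7nm chiplet, an all-SRAM die of the same size holds ~128 MB, vs ~1024 MB
# for a DRAM die of equal area. The 1/4 fraction is the post's assumption.
sram_on_chiplet_mb = 32
cache_fraction_of_die = 1 / 4
all_sram_die_mb = sram_on_chiplet_mb / cache_fraction_of_die
dram_die_mb = 1024                     # 1 GB DDR5 die of the same size

print(f"all-SRAM die: {all_sram_die_mb:.0f} MB")        # 128 MB
print(f"DRAM advantage: ~{dram_die_mb / all_sram_die_mb:.0f}x")  # ~8x
```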


I thought everyone was hoping that resistive RAM would take over for DRAM, but the closest one to mass production (Optane) has the same lifetime as NAND flash.
 

Revdarian

2[H]4U
Joined
Aug 16, 2010
Messages
2,598
I'll tell you that for cache, they have been testing MRAM for L2 since 2018, exactly because even the test iterations are much denser than SRAM at 7nm. (MRAM was also shipped to vendors last year in small 28nm DDR4 test configurations.)

Just saying that not so far down the road we might jump technologies exactly because of process scaling and other benefits (lower power, non-volatility). We knew this would eventually happen; it's just apparently going to finally be within this decade.
 

EniGmA1987

Limp Gawd
Joined
May 2, 2017
Messages
392
Do we really need higher-density DRAM right now, though? We can easily wait a few years while looking for alternate routes, IMO. Samsung just started making 12Gb dies at 10nm, which means with 8 chips per stick that is a single 12GB RAM stick.
Now make it a dual-rank DIMM, which is also very common, and use 16 chips: that's a 24GB single stick of RAM. Do we really need more density than that right now? That would be 192GB of RAM on a Threadripper platform, and 288GB of RAM in a dual-socket, single-rack-space server unit. If we take the new double-density DIMM specs, which allow up to 32 chips on a single DIMM, that would be 48GB on a single stick of RAM.
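The capacities above all follow from one conversion, Gb per die × chip count ÷ 8; a minimal sketch:

```python
def dimm_capacity_gb(die_gigabits: int, chips: int) -> float:
    """DIMM capacity in GB from per-die density (in Gb) and chip count."""
    return die_gigabits * chips / 8   # 8 bits per byte

print(dimm_capacity_gb(12, 8))    # single-rank, 8 chips:  12.0 GB
print(dimm_capacity_gb(12, 16))   # dual-rank, 16 chips:   24.0 GB
print(dimm_capacity_gb(12, 32))   # 32-chip double DIMM:   48.0 GB
```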
 

Chimpee

[H]ard|Gawd
Joined
Jul 6, 2015
Messages
1,531
EniGmA1987 said:
Do we really need higher-density DRAM right now, though? We can easily wait a few years while looking for alternate routes, IMO. Samsung just started making 12Gb dies at 10nm, which means with 8 chips per stick that is a single 12GB RAM stick.
Now make it a dual-rank DIMM, which is also very common, and use 16 chips: that's a 24GB single stick of RAM. Do we really need more density than that right now? That would be 192GB of RAM on a Threadripper platform, and 288GB of RAM in a dual-socket, single-rack-space server unit. If we take the new double-density DIMM specs, which allow up to 32 chips on a single DIMM, that would be 48GB on a single stick of RAM.
The answer is always "Yes"; this is [H], after all. But to answer seriously, increased density should in theory bring down the price of RAM, so while the average consumer may not need more RAM, they may benefit from a pricing standpoint, barring price collusion or higher initial production costs.
 

Ready4Dis

[H]ard|Gawd
Joined
Nov 4, 2015
Messages
1,321
Chimpee said:
The answer is always "Yes"; this is [H], after all. But to answer seriously, increased density should in theory bring down the price of RAM, so while the average consumer may not need more RAM, they may benefit from a pricing standpoint, barring price collusion or higher initial production costs.
Increasing density costs $... Ask AMD. It's not like it used to be at 32nm, where a shrink simply got you more chips per wafer and a lower cost. The wafer quality has to be higher and the process is a lot more costly, so even though the chips are denser, prices aren't going to move as much as they used to. That said, prices at the moment seem pretty decent, as long as they don't spike way up again.
 

whateverer

[H]ard|Gawd
Joined
Nov 2, 2016
Messages
1,143
Right, and it's harder to shrink a capacitor (while still maintaining any charge retention). You also can't 3D-stack the silicon structure like flash.

They're already running the largest wafers feasible.

There's just no easy way to make DRAM any cheaper; they need to come up with a new method of transistor storage, or they need a process-miracle breakthrough.
 

mashie

Mawd Gawd
Joined
Oct 25, 2000
Messages
4,205
EniGmA1987 said:
Do we really need higher-density DRAM right now, though? We can easily wait a few years while looking for alternate routes, IMO. Samsung just started making 12Gb dies at 10nm, which means with 8 chips per stick that is a single 12GB RAM stick.
Now make it a dual-rank DIMM, which is also very common, and use 16 chips: that's a 24GB single stick of RAM. Do we really need more density than that right now? That would be 192GB of RAM on a Threadripper platform, and 288GB of RAM in a dual-socket, single-rack-space server unit. If we take the new double-density DIMM specs, which allow up to 32 chips on a single DIMM, that would be 48GB on a single stick of RAM.
I guess you don't follow server hardware. They are already using 128GB DDR4 sticks.
 