Crucial Ships 128GB DDR4-2666 Modules for Servers at $3999 per Unit

Megalith
Staff member · Joined Aug 20, 2006 · Messages: 13,000
Crucial has started shipments of its fastest and highest density server-class memory modules to date, but because of their positioning as super-dense memory, the price is very high. While the 128GB DDR4-2666 LRDIMMs should be usable in both AMD EPYC systems and Intel Xeon systems, they are optimized for Intel’s Xeon Scalable CPUs (Skylake-SP) launched earlier this year.

According to Crucial, production of 128 GB LRDIMMs involves 34 discrete stages with over 100 tests and verifications, making them particularly expensive to manufacture. These costs are then passed on to customers buying such modules. The company sells a single 128 GB DDR4-2666 module online for $3,999, though server makers naturally get different rates based on quantity and support. At that price, fully populating a Xeon-SP socket to its 1.5 TB maximum (twelve modules) would cost roughly $48K, while filling an EPYC socket to its 2 TB maximum (sixteen modules) would come to roughly $64K.
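As a rough sanity check on those per-socket figures (assuming twelve DIMM slots per Xeon-SP socket and sixteen per EPYC socket, which is what the quoted totals imply):

```python
# Per-socket cost at the quoted $3,999 list price per 128 GB module.
# Slot counts are assumptions inferred from the article's own totals.
PRICE_PER_MODULE = 3999  # USD, 128 GB DDR4-2666 LRDIMM

xeon_slots = 12  # 12 x 128 GB = 1.5 TB per Xeon-SP socket
epyc_slots = 16  # 16 x 128 GB = 2 TB per EPYC socket

xeon_cost = xeon_slots * PRICE_PER_MODULE
epyc_cost = epyc_slots * PRICE_PER_MODULE
print(xeon_cost, epyc_cost)  # 47988 63984 -> the ~$48K and ~$64K figures
```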
 
Still a long ways off from hitting the theoretical memory limitation of 64-bit systems. And, I see no need for it at the consumer level at this time. Server level, well, that's another story.
 
I remember someone saying "Who needs more than 640k" lol
 
Still a long ways off from hitting the theoretical memory limitation of 64-bit systems. And, I see no need for it at the consumer level at this time. Server level, well, that's another story.
A 48-bit address bus, which was present back in 2003 with the AMD Athlon 64 Socket 754 processors, allows for 256TB of addressable system RAM.
Most of the current limitations are due to the hardware external to the CPUs.
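The address-bus figures in this thread fall out of simple powers of two; a quick sketch (function name and unit constants are mine):

```python
# Addressable memory for a given address-bus width: 2**bits bytes,
# shown in the binary units the thread uses.
def addressable_bytes(bus_bits: int) -> int:
    return 2 ** bus_bits

TB = 2 ** 40  # binary terabyte (TiB)
GB = 2 ** 30  # binary gigabyte (GiB)

print(addressable_bytes(48) // TB)  # 256 -> the 256TB figure for a 48-bit bus
print(addressable_bytes(32) // GB)  # 4   -> the 4GB limit of a 32-bit bus (80386)
```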

I remember someone saying "Who needs more than 640k" lol
Back in the late 1970s and early 1980s, with only 20-bit address buses on microcomputers of that era, it really didn't seem like we needed more.
By 1985, with the release of the Intel 80386 and its 32-bit address bus, the industry and software requirements pretty much put that "debate" to rest. :D

Granted, the 80286 had a 24-bit address bus in 1983, which could address up to 16MB of system RAM, but protected mode in DOS just wasn't quite there with the technology yet, and it acted more as a beta run on the hardware side of things during that time.

 
A 48-bit address bus, which was present back in 2003 with the AMD Athlon 64 Socket 754 processors, allows for 256TB of addressable system RAM.
Most of the current limitations are due to the hardware external to the CPUs.

Since the memory controller was moved into the CPU, it's often due to limits of the CPU on laptops & desktops.

You can look up the spec sheets on various Intel CPUs.
I have laptops that were bought 8 years ago, and the i5 chips only supported 8GB of RAM.
The next couple of generations only supported 16GB of RAM, although some models only officially supported 8GB due to the motherboard/BIOS.
At least the current generation now supports 32GB of RAM, so I still have an upgrade path if I need it.


Server CPUs, however, are usually limited by the motherboard and how many memory sockets they have.
Just upgraded one of my older servers to 64GB, and that's the limit for that server.
Upgraded a newer model last month to 256GB, and still have empty sockets :)
 
Since the memory controller was moved into the CPU, it's often due to limits of the CPU on laptops & desktops.

You can look up the spec sheets on various Intel CPUs.
I have laptops that were bought 8 years ago, and the i5 chips only supported 8GB of RAM.
The next couple of generations only supported 16GB of RAM, although some models only officially supported 8GB due to the motherboard/BIOS.
At least the current generation now supports 32GB of RAM, so I still have an upgrade path if I need it.


Server CPUs, however, are usually limited by the motherboard and how many memory sockets they have.
Just upgraded one of my older servers to 64GB, and that's the limit for that server.
Upgraded a newer model last month to 256GB, and still have empty sockets :)
That's an artificial limitation imposed by the external bus.
The 80386 was made in 1985 and had a 32-bit address bus capable of addressing 4GB of system RAM internally, yet it was normally limited to only 4-8MB on most motherboards (memory controllers) externally.

Internally, those i5 CPUs you purchased 8 years ago were capable of addressing 256TB of RAM, but externally, those motherboards (with the memory controller on the CPU, which is still external to the internal address bus) could only support 8GB of system RAM.
Just because the RAM limit of a motherboard (memory controller) is relatively small doesn't mean the RAM limit of the CPU itself is that small. ;)

btw, nice upgrade on those last two servers of yours, would be nice to have that much memory available for large projects and/or RAM disks.
 
Since the memory controller was moved into the CPU, it's often due to limits of the CPU on laptops & desktops.

The same limit applied when the memory controller was first integrated, before it was moved off the CPU for a time and then moved back in.

The limit is usually dictated by maximum module sizes.
 
The same limit applied when the memory controller was first integrated, before it was moved off the CPU for a time and then moved back in.

The limit is usually dictated by maximum module sizes.
Normally, but the external address bus is the actual limitation on these boards.
There have been times when an imposed limit reflected the maximum RAM modules available at the time; once higher-capacity modules became available later, the true maximum of the external address bus could be reached.

An example of this would be the Apple Quadra 950, which in 1992 had a maximum RAM capacity of 64MB.
Years later, higher-capacity FPM DRAM modules were released, and it became possible to expand the RAM to 256MB on that board. The system used a Motorola m68k 68040 CPU with a 32-bit address bus (very similar specs to the Intel x86 80486), which was internally capable of addressing 4GB of system RAM, but it was first limited by the RAM modules available at the time, and later by the external address bus' true capacity, that being 256MB.
 
So which is it?

It's the same thing. If you violate JEDEC, you can't expect it to be used. That's the purpose of standards.

So if JEDEC sets a maximum module size of a certain amount then that's the max.

DDR3 is a classic example. Mainstream DDR3 DIMMs scaled to 8GB, but a modified DDR3 specification (Load-Reduced DIMM) allows for an increase to 16GB or 32GB DIMMs.

So even there, AMD, which could have supported larger modules, ended up supporting only half the new spec because it was such a sudden move.
 
A 48-bit address bus, which was present back in 2003 with the AMD Athlon 64 Socket 754 processors, allows for 256TB of addressable system RAM.
Most of the current limitations are due to the hardware external to the CPUs.


Back in the late 1970s and early 1980s, with only 20-bit address buses on microcomputers of that era, it really didn't seem like we needed more.
By 1985, with the release of the Intel 80386 and its 32-bit address bus, the industry and software requirements pretty much put that "debate" to rest. :D

Granted, the 80286 had a 24-bit address bus in 1983, which could address up to 16MB of system RAM, but protected mode in DOS just wasn't quite there with the technology yet, and it acted more as a beta run on the hardware side of things during that time.


But I want my 16 exabytes of RAM, damn it!
 
Ok, this is great if you are building a vblock for, say... non-critical systems to run in. WOOT I CAN RUN HUNDREDS OF VMs WITH ALL THIS RAM! WHEEEE!!!

But once you start putting critical systems in, you need some hardcore planning to not have an outage happen. This here is silly talk.
 
Yeah, that's actually pretty low markup over the $200 16GB stick :D

The base price (with linear price scaling) for a 128GB server module would already be a smoking $1,600! The markup for fitting it all on a single module is just a measly 2.5x!
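That markup estimate works out like so (assuming the $200 / 16GB street price quoted above; the variable names are mine):

```python
# Linear-scaling baseline: eight $200 16 GB sticks vs one 128 GB module.
stick_price, stick_gb = 200, 16      # assumed consumer DDR4 street price
module_price, module_gb = 3999, 128  # Crucial's quoted LRDIMM price

linear_price = (module_gb // stick_gb) * stick_price  # price at linear scaling
markup = module_price / linear_price

print(linear_price, round(markup, 1))  # 1600 2.5 -> the $1,600 and 2.5x figures
```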
 
btw, nice upgrade on those last two servers of yours, would be nice to have that much memory available for large projects and/or RAM disks.

Server virtualization. The main limitation is usually server memory. It's pretty cheap to upgrade older servers with used memory. I bought 64GB of RAM for $65 :eek:

As for the 256GB of RAM, it's for a couple of virtualized SQL Servers. It really helps performance when most of your database fits in RAM :D
 
Can't wait for this to come to desktop.
Not because I need it, but because you can't ever have enough power when it comes to PCs.

1TB of RAM sounds like a nice round number to me.
 