Up to 144TB NVMe Storage in Action - "M.3 SSD"

FrgMstr

Samsung's NGSFF (Next Generation Small Form Factor) SSD for data centers is being demoed in Japan, and the folks over at AKIBA have some coverage for us. The demo unit is an AIC SB127-LX rack server that can house up to 36 Samsung NGSFF NVMe SSDs. You can see the new "M.3" SSD compared to an M.2 SSD that you might be more familiar with.


NGSFF SSD is a new SSD form factor for servers; ADATA and others call it "M.3". It is an elongated shape optimized for rack-mount cases, and when configuring storage in a 1U rack it can achieve more than four times the capacity possible with M.2 (NGFF) SSDs. The store will carry NGSFF SSDs, but the release date is undecided. The price is expected to be roughly 300 to 350 thousand yen for a 4 TB module.
 
crazy speed.
[CrystalDiskMark benchmark screenshot]
 
There's no way that benchmark represents more than just one of the devices. If that's the performance of all 36 populated, that's really shitty performance.

The theoretical aggregate read speed with all 36 slots populated is in the realm of 115,200 MB/s (assuming 36 x 3,200 MB/s per drive). That converts to 921,600 Mbit/s, and I'm very sure that server can't handle that much storage bandwidth and push it fully across the network.

I anticipate the CPU, RAM, and network will be the bottleneck for such a system, in a very massive way. LOL!
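
Quick sanity check of that arithmetic, as a back-of-the-envelope sketch in Python (the 3,200 MB/s per-drive figure is the same assumption as above, not a datasheet number):

# Theoretical aggregate read bandwidth of 36 fully populated slots,
# assuming ~3,200 MB/s sequential read per drive (assumption for illustration).
drives = 36
per_drive_mb_s = 3200

aggregate_mb_s = drives * per_drive_mb_s      # 115,200 MB/s
aggregate_mbit_s = aggregate_mb_s * 8         # 921,600 Mbit/s
print(f"{aggregate_mb_s:,} MB/s = {aggregate_mbit_s:,} Mbit/s (~{aggregate_mbit_s / 1000:.0f} Gbit/s)")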

What a time, when the storage performance can't even be fully utilized ;)
 
What a time, when the storage performance can't even be fully utilized ;)

A home user can saturate their 1Gb links with an old spinner...

And 10Gbit could be saturated by a pair of SATA SSDs.

While I agree that applications would still have a hard time making use of the storage speed, CPUs and RAM catch up pretty quick.

Network links? Not so much. Note the bleeding-edge 200 Gbit link mentioned up top: you'd need five of those just to hook this guy up to a backplane, and you'd need a backplane that could actually switch that at layer 2+/3 with LAG/LACP and distribute it across a network.

And here I am just trying to figure out how to do a modestly forward-looking 10GBase-T setup at home.
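
A rough Python sketch of that saturation math (the per-device throughput numbers are ballpark assumptions, not measurements):

import math

# Ballpark sequential throughput per device, in MB/s -- assumptions for illustration.
hdd, sata_ssd, ngsff = 180, 550, 3200

def link_mb_s(gbit):
    # Line rate only; ignores Ethernet/protocol overhead.
    return gbit * 1000 / 8

print(f"1 GbE  ~ {link_mb_s(1):.0f} MB/s  vs one spinner at ~{hdd} MB/s")
print(f"10 GbE ~ {link_mb_s(10):.0f} MB/s vs two SATA SSDs at ~{2 * sata_ssd} MB/s")

aggregate = 36 * ngsff                         # ~115,200 MB/s for the full chassis
links = math.ceil(aggregate / link_mb_s(200))  # 200 Gbit ports
print(f"Full chassis ~ {aggregate:,} MB/s -> {links} x 200 Gbit links, before any LACP hashing losses")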
 
And I've been saying it forever: access to storage and the network to share it will drive the need for better PCI-e much more than gaming will.

PCI-e 4.0 and 5.0 can't come fast enough for this stuff.

Also, just wait five years and you'll be seeing used and (relatively) inexpensive 40 and 100 Gb/s gear up on eBay as enterprise moves on to things like terabit.
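
For scale, a rough sketch of what each PCI-e generation can feed per x16 slot (per-lane figures are the usual post-encoding numbers; the 3,200 MB/s drive figure is carried over from the earlier posts):

# Approximate usable bandwidth per PCI-e lane after encoding overhead, in GB/s.
per_lane_gb_s = {"PCI-e 3.0": 0.985, "PCI-e 4.0": 1.969, "PCI-e 5.0": 3.938}
drive_gb_s = 3.2   # one NVMe drive at ~3,200 MB/s sequential read (assumed)

for gen, lane in per_lane_gb_s.items():
    x16 = lane * 16
    print(f"{gen}: x16 slot ~ {x16:.1f} GB/s, roughly {x16 / drive_gb_s:.0f} drives' worth of reads")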
 
There's no way that benchmark represents more than just one of the devices. If that's the performance of all 36 populated, that's really shitty performance.

The theoretical aggregate read speed with all 36 slots populated is in the realm of 115,200 MB/s (assuming 36 x 3,200 MB/s per drive). That converts to 921,600 Mbit/s, and I'm very sure that server can't handle that much storage bandwidth and push it fully across the network.

I anticipate the CPU, RAM, and network will be the bottleneck for such a system, in a very massive way. LOL!

What a time, when the storage performance can't even be fully utilized ;)
A single Optane 90xP drive smokes those 4K results.
 
The CrystalDiskMark screenshot shows drive E: with 10,731 GiB, which, based on the other pics before it in the article, is three of the 4 TB (3,576.86 GB) drives in a software RAID. That being the case, the sequential read and write scores are not good.

Note the title of the pic: Samsung製4TB×3枚構成 (7/14), i.e. "Samsung 4 TB x 3-drive configuration".
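
Quick check of that capacity math (the 3,576.86 per-drive figure is read straight off the screenshots):

# Three drives of 3,576.86 (as reported) striped in a software RAID.
per_drive = 3576.86
print(f"{3 * per_drive:,.2f}")   # 10,730.58 -> matches the ~10,731 shown for drive E: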
 
Too rich for my blood.

Spinners still serve my mass storage needs just fine, but in an enterprise environment where large database performance is king, I'd imagine this would be invaluable.
 
Makes me wonder just how fast the network on the back end is in order to accommodate such high transfer speeds.

I'm guessing very, very fast.

One of the major data centers (SwitchNAP) near my house has a WeWork-style office where you can rent desk space. Their network in the building is 1,000,000 Mbit/s (1 Tbit/s) because it shares the pipe with the data centers.
 
Too rich for my blood.

Spinners still serve my mass storage needs just fine, but in an enterprise environment where large database performance is king, I'd imagine this would be invaluable.

For sure, my NAS systems are still primarily spinning disk for cost reasons. SSD/NVMe is great for OS/cache drives but still too expensive for NAS systems.
 
A home user can saturate their 1Gb links with an old spinner...

And 10Gbit could be saturated by a pair of SATA SSDs.

While I agree that applications would still have a hard time making use of the storage speed, CPUs and RAM catch up pretty quick.

Network links? Not so much. Note the bleeding-edge 200 Gbit link mentioned up top: you'd need five of those just to hook this guy up to a backplane, and you'd need a backplane that could actually switch that at layer 2+/3 with LAG/LACP and distribute it across a network.

And here I am just trying to figure out how to do a modestly forward-looking 10GBase-T setup at home.

There is much more to storage speed than just copying files.
My largest SQL databases at work are limited by disk speed, and don't push the 1Gbit network more than 20-30% (except during backups)

I've been looking at upgrading to a server with NVMe drives, just need them to come down in price a bit more. This looks even better if the price is right.
 
There is much more to storage speed than just copying files.
My largest SQL databases at work are limited by disk speed, and don't push the 1Gbit network more than 20-30% (except during backups)

For sure, I wouldn't intend to imply otherwise! Though a stored database on spinners sounds like architected frustration ;).

[generally speaking, we use spinners, but mostly just to load the database into RAM; I'll personally advocate for 3D XPoint in the future, whether that be bus- or DIMM-attached; we've also mostly moved beyond SQL for non-housekeeping stuff in production, it's just too slow]
 
There is much more to storage speed than just copying files.
My largest SQL databases at work are limited by disk speed, and don't push the 1Gbit network more than 20-30% (except during backups)

I've been looking at upgrading to a server with NVMe drives, just need them to come down in price a bit more. This looks even better if the price is right.

Supermicro has a similar product...

https://www.servethehome.com/supermicro-36x-ngsff-ssd-server-offers-576tb-of-nvme-storage-in-1u/

https://www.supermicro.com/flyer/f_All-Flash_SSG-1029P-NMR36L.pdf

Base price is only $95,500. Once the 16TB drives are released, one can have half a petabyte in 1U space. If extreme network bandwidth is needed, install two dual-port ConnectX-6 cards - that's 800Gb/s.

How about some switch action to go along with it?

http://www.mellanox.com/page/products_dyn?product_family=263&mtag=qm8700

40 x 200 Gb/s ports, which works out to 16 Tb/s of aggregate switch throughput (that's with a "T"), 15.8 billion messages per second, and 90 ns switch latency. For those with the $$$, this is some pretty kule stuff.
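
The headline numbers from that combo, spelled out (figures taken from the linked Supermicro and Mellanox pages):

# Supermicro SSG-1029P-NMR36L: 36 NGSFF bays in 1U.
bays, drive_tb = 36, 16
print(f"Raw capacity with 16 TB drives: {bays * drive_tb} TB")                # 576 TB

# Two dual-port ConnectX-6 cards at 200 Gb/s per port.
print(f"NIC bandwidth: {2 * 2 * 200} Gb/s")                                   # 800 Gb/s

# Mellanox QM8700: 40 ports x 200 Gb/s.
per_direction_tbit = 40 * 200 / 1000
print(f"Switch: {per_direction_tbit:.0f} Tb/s per direction, {2 * per_direction_tbit:.0f} Tb/s aggregate")   # 16 Tb/s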
 
If you want real speed for the home, check out InfiniBand ;)

20 Gb/s and 40 Gb/s, similar in price to or cheaper than 10GigE, and bonding in IB is better than LACP ;D

Just a thought. May not work for all...


A home user can saturate their 1Gb links with an old spinner...

And 10Gbit could be saturated by a pair of SATA SSDs.

While I agree that applications would still have a hard time making use of the storage speed, CPUs and RAM catch up pretty quick.

Network links? Not so much. Note the bleeding-edge 200 Gbit link mentioned up top: you'd need five of those just to hook this guy up to a backplane, and you'd need a backplane that could actually switch that at layer 2+/3 with LAG/LACP and distribute it across a network.

And here I am just trying to figure out how to do a modestly forward-looking 10GBase-T setup at home.
 
If you want real speed for the home, check out InfiniBand ;)

20 Gb/s and 40 Gb/s, similar in price to or cheaper than 10GigE, and bonding in IB is better than LACP ;D

Just a thought. May not work for all...

Yep. You can get a 2 x 40 GbE port ConnectX-3 card for <$200 and a 36 x 40 GbE port Mellanox switch for <$500. If you want to connect just two computers together, with more bandwidth than you'll ever need, the switch is optional.
 
Also, at times IB switches can be super duper extra loud. But if it's two compies, you can connect them directly ;D

Just gotta run a Subnet Manager, but that can be done in software.

Yep. You can get a 2 x 40 GbE port ConnectX-3 card for <$200 and a 36 x 40 GbE port Mellanox switch for <$500. If you want to connect just two computers together, with more bandwidth than you'll ever need, the switch is optional.
 
Note: Link back to [H] news item is broken. Can't find the article itself.
 
For sure, I wouldn't intend to imply otherwise! Though a stored database on spinners sounds like architected frustration ;).

[generally speaking, we use spinners, but mostly just to load the database into RAM; I'll personally advocate for 3D XPoint in the future, whether that be bus- or DIMM-attached; we've also mostly moved beyond SQL for non-housekeeping stuff in production, it's just too slow]

My main SQL server has 120 GB of RAM assigned to it, so most of the activity is in RAM.
Still, the occasional poorly written search or report will hit a lot of data that isn't normally used and spike the disk activity.

With SSDs this fast, I could cut back on the RAM, or handle much larger databases without slowing down the server.
 
Supermicro has a similar product...

https://www.servethehome.com/supermicro-36x-ngsff-ssd-server-offers-576tb-of-nvme-storage-in-1u/

https://www.supermicro.com/flyer/f_All-Flash_SSG-1029P-NMR36L.pdf

Base price is only $95,500. Once the 16TB drives are released, one can have half a petabyte in 1U space. If extreme network bandwidth is needed, install two dual-port ConnectX-6 cards - that's 800Gb/s.

How about some switch action to go along with it?

http://www.mellanox.com/page/products_dyn?product_family=263&mtag=qm8700

40 x 200 Gb/s ports, which works out to 16 Tb/s of aggregate switch throughput (that's with a "T"), 15.8 billion messages per second, and 90 ns switch latency. For those with the $$$, this is some pretty kule stuff.

Don't think I could justify that cost to my boss :confused:

I'm still waiting for prices to come down so I can upgrade my servers to 10 Gb.

At least server SSDs have come down enough in price that I have been able to start using some.
 