Samsung Releases Z-NAND Based Datacenter SSD

AlphaAtlas

Following the SZ985 that was announced last year, Samsung has unveiled a new high-performance datacenter SSD aimed at the same market as Intel's Optane drives. Unlike Optane SSDs, which use Intel's unique 3D XPoint memory, the 983 ZET uses Samsung's 3D NAND in an SLC configuration that Samsung calls "Z-NAND". This gives the 983 ZET a massive 17,520 TB endurance rating, over 28 times that of the 960GB 970 Pro, as well as lower access latencies and higher sustained read/write performance than MLC drives. As consumer drives move toward TLC and even QLC flash, it's interesting to see Samsung go the opposite direction in the datacenter.
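For context, endurance ratings like that usually fall straight out of a drive-writes-per-day (DWPD) spec. A quick sanity check, assuming a 10 DWPD rating over a five-year warranty for the 960GB model (my assumption; the article doesn't spell it out):

```python
# Back-of-the-envelope check of the 17,520 TB endurance figure.
# Assumption: 10 drive-writes-per-day (DWPD) over a 5-year warranty
# for the 960GB model -- not stated in the article itself.
capacity_tb = 0.96            # 960 GB drive
dwpd = 10                     # assumed drive writes per day
warranty_days = 5 * 365       # assumed 5-year warranty period

endurance_tbw = capacity_tb * dwpd * warranty_days
print(f"Rated endurance: {endurance_tbw:,.0f} TB")   # Rated endurance: 17,520 TB
```

Under those assumptions the numbers line up exactly, which suggests the rating is a straight DWPD-times-warranty calculation rather than a measured figure.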


That demand is part of the reason Intel introduced 3D XPoint memory and began offering Optane SSDs for the data center. Companies like Aerospike were still purchasing some of the last SLC SSDs available as caching drives for their in-memory database applications, but supplies of those drives ultimately dwindled, opening the door for alternative products. The growth of applications that need both high performance and space for large data sets apparently prompted Samsung to begin making SLC NAND for the data center again.
 
it's interesting to see Samsung go the opposite direction in the datacenter.

Depends on what the life expectancy is for these drives. If data centers are expecting to only use them for a few years and then cycle them out for more density, going for extreme endurance isn't the play.

My guess is that data centers aren't cycling them out and want better endurance (or want the option of decent density and high endurance). A server with heavy data writes (or a really large database that doesn't fit in RAM) can quickly saturate the write-cycle endurance of TLC- and MLC-based drives. SLC has always been the higher-endurance option.
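To put a rough number on "quickly saturate", here's a sketch with hypothetical figures (the 200 MB/s write rate and the TLC/MLC TBW ratings are illustrative assumptions; only the 983 ZET figure comes from the article):

```python
# Rough drive-lifetime estimate under sustained writes.
# The write rate and the TLC/MLC TBW ratings below are hypothetical
# illustrations, not vendor specs; 17,520 TB is the article's figure.
def days_until_worn_out(tbw_rating_tb: float, write_rate_mb_s: float) -> float:
    """Days until a drive's TBW rating is exhausted at a constant write rate."""
    tb_per_day = write_rate_mb_s * 86_400 / 1_000_000  # MB/s -> TB/day
    return tbw_rating_tb / tb_per_day

# A busy database node writing 200 MB/s around the clock:
for label, tbw in [("TLC", 600), ("MLC", 1_200), ("983 ZET (SLC)", 17_520)]:
    print(f"{label:>14}: {days_until_worn_out(tbw, 200):7.0f} days")
```

At that (admittedly punishing) write rate, the hypothetical TLC drive is done in about a month, while the SLC drive lasts nearly three years.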

To be honest, I welcome more advancements in SLC-based drives. To me, TLC/MLC drives are more HDD-replacement storage for data that's mostly read and rarely written. If I use one for the OS, I never let it go above 50% capacity, so the controller has plenty of room to even out the writes per cell.
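The 50% rule of thumb is really about write amplification: the more spare area the controller has, the fewer background copy-and-erase writes it performs per host write. A toy model, with write-amplification values that are purely illustrative assumptions:

```python
# Toy model of how free space affects effective endurance via the
# write amplification factor (WAF). All WAF and P/E numbers here are
# illustrative assumptions, not measurements.
def effective_host_writes_tb(capacity_tb: float, pe_cycles: int, waf: float) -> float:
    """Host data writable before the NAND's P/E cycle budget is exhausted."""
    return capacity_tb * pe_cycles / waf

capacity_tb, pe_cycles = 1.0, 3_000   # hypothetical 1TB TLC drive
for fill, waf in [("90% full", 4.0), ("50% full", 1.5)]:
    tb = effective_host_writes_tb(capacity_tb, pe_cycles, waf)
    print(f"{fill}: ~{tb:,.0f} TB of host writes")
```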
 
I still prefer to have an old SLC drive for my temp/swap drive on my main system. It helps the boot/app drive last longer. I don't trust TLC, and definitely don't trust QLC, for any active data. I use it for ISO storage on VM hosts, but that's about it. The write endurance is way too low for any active use.
 
I still prefer to have an old SLC drive for my temp/swap drive on my main system. It helps the boot/app drive last longer. I don't trust TLC, and definitely don't trust QLC, for any active data. I use it for ISO storage on VM hosts, but that's about it. The write endurance is way too low for any active use.

I've had a 250GB Samsung 840 EVO in my home system for several years. It's the system/boot drive and has all my apps installed on it. Since I switched to Windows 10, it even holds my swap file.
After several years of use, it still has only about 4% of its life used.

I've also deployed many similar SSDs at work: 1TB drives used for demo VMs, and large server VMs that have SQL, IE, and our application installed.
After 4 years, the most heavily used drives are only at about 5-6% wear.

I don't have a problem with TLC for desktop/laptop usage.

Even the read-intensive enterprise drives should be fine for most of my servers, but I'm still using spinners for those, since they really don't need the extra performance.

It's only my SQL server that's a problem. The SSDs I'd need are still too expensive.
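For anyone who wants to pull these wear numbers themselves, here's a minimal sketch using smartmontools (assumes smartctl 7+ is installed; Wear_Leveling_Count is what Samsung SATA drives typically report, and other vendors use different attribute names):

```python
# Minimal sketch: read SSD wear from smartctl's JSON output.
# Assumes smartmontools 7+ is installed. Samsung SATA drives expose
# wear as the "Wear_Leveling_Count" attribute, whose normalized value
# counts down from 100; other vendors use different attributes.
import json
import subprocess

def percent_life_left(device: str = "/dev/sda") -> int | None:
    out = subprocess.run(["smartctl", "-A", "-j", device],
                         capture_output=True, text=True).stdout
    data = json.loads(out)
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr["name"] == "Wear_Leveling_Count":
            return attr["value"]   # normalized value, roughly % life left
    return None

print(percent_life_left())   # e.g. 97 on a lightly used drive
```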
 
It's a shame all the makers seem to have drifted away from SLC. At least initially you could explain it by the cost benefit (the amount of storage that can be produced per wafer) of MLC, then TLC, and now QLC (yuck).

Now that they have shrunk the process over the years and gotten better and better at reducing bad yields, it's a shame at least some of them didn't start drifting back to the speed and endurance SLC offers over the others, for datacenter and consumer alike.

They seemed to rush toward MLC and the like, as well as the steep price and debatable real-world "advantage" that M.2 and similar drives offer. I suppose a chunk of buyers always looks at the big numbers, which makes the sale, no matter how much of it you can actually use or what it costs to get.

I'm kind of hoping that at some point they figure out a way to make a SATA-based drive that links up a few ports to get the speed benefit that M.2/U.2/PCIe drives offer at a lower cost, and without being in a RAID-style format. I think it would be great, since very few motherboards I've seen put their M.2 slots in a place that's unlikely to throttle, whereas SATA drives usually sit in the perfect airflow spots. ^.^

That is a crazy amount of endurance; I imagine the cost will be crazy too.
 
I've had a 250GB Samsung 840 EVO in my home system for several years. It's the system/boot drive and has all my apps installed on it. Since I switched to Windows 10, it even holds my swap file.
After several years of use, it still has only about 4% of its life used.

I've also deployed many similar SSDs at work: 1TB drives used for demo VMs, and large server VMs that have SQL, IE, and our application installed.
After 4 years, the most heavily used drives are only at about 5-6% wear.

I don't have a problem with TLC for desktop/laptop usage.

Even the read-intensive enterprise drives should be fine for most of my servers, but I'm still using spinners for those, since they really don't need the extra performance.

It's only my SQL server that's a problem. The SSDs I'd need are still too expensive.
Dunno how you managed that. My 512GB 960 Pro is barely over a year old, and it's already at 4% of its 400 TBW spec, and that's with my temp files and swap on an Intel X25-E since I got it.

At 4% per year, that's still 25 years until it runs out, but not all cells would handle that many writes, and data would be lost with bad cells. I prefer to avoid that. TLC has one tenth the write endurance of MLC, and QLC is even lower. It just isn't worth the chance.
 
My 512GB 960 Pro is 5 months old and has 2.5 TB written. I need to check an old 840 Pro in my son's PC (previously mine) to see where it stands.
 
I've had three 256GB Samsung 840 PROs in three desktop PCs at home, running Windows 7 and now Windows 10, for five years, and they still have only around 1% to 5% of their life used. Once upon a time I even overprovisioned them to prolong their life, but I removed that a long time ago.

My main desktop now has two 1TB Samsung 850 EVOs in it, and I've sort of lost my fear of wearing them out in the foreseeable future.

Edit: One 840 PRO has 2.2 TB written while another has 5.6 TB written over five years of fairly light desktop usage. The third one is not installed right now, though. The first 850 EVO I bought a bit more than 1.5 years ago, and that one has seen 8.0 TB written.

The numbers I posted were from memory, since I wasn't home.
Just checked, and my 840 EVO still has 97% of its life left (I was close), with 7.6 TB written.
At this rate, I'll be over 150 years old before the drive wears out. :eek:

Guess I should stop worrying about SSDs wearing out on desktops/laptops.
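That projection roughly checks out; a quick sketch, assuming about five years of use so far (my assumption; the post only says "several years"):

```python
# Projecting remaining SSD life from the observed wear rate.
# Assumption: ~5 years of use so far (the post only says "several years").
years_so_far = 5
life_used_pct = 3          # 97% of life left per the SMART reading
years_remaining = years_so_far * (100 - life_used_pct) / life_used_pct
print(f"~{years_remaining:.0f} more years at this rate")   # ~162 more years
```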
 
Curious about the pricing. If the cost is 2-3x higher than current products, that would still put it well under Optane drive pricing right now, but (a) 3DXP Gen 2 should put a nice dent in that next year, and (b) there were no IOPS numbers in the referenced article.
 