Micron Intends to Make 10k HDDs Obsolete

AlphaAtlas

[H]ard|Gawd
Staff member
Joined
Mar 3, 2018
Messages
1,713
10,000 RPM hard drives have all but disappeared from consumer markets, but they still have a place in the enterprise hardware world. However, Micron intends to make those drives obsolete with their new 5210 ION QLC SSDs. The company claims the new drives have "MSRPs similar on a $/GB basis to 2.4TB 10K HDDs," while massively outperforming them and using less energy. While they're happy to tout read and write figures and cite deep learning applications, Micron doesn't go into much detail about the drive's durability specifications. The product page does list performance figures at various "DWPD for 5 years" wear levels, but it's pretty clear that durability will be a major drawback for these drives, as it has been for most QLC SSDs.

Compared to the largest 10K RPM HDDs available, the Micron 5210 ION delivers 175x faster random reads, 30x faster random writes, and 2x faster throughput. It's also 3x more energy efficient, fueling a lower terabyte-to-terabyte total cost of ownership. Compared to 3.5-inch 7200 RPM HDDs, you can pack twice as many 2.5-inch Micron 5210 SSDs into a typical 2U rack to save on power, cooling, licenses, and floor space. How much are your racks costing you?
 
Same with 15k.

Eventually we'll only have or need nearline, flash, and write optimized flash.
 
It feels like only yesterday two velociraptors in RAID 0 were the epitome of speed. Crazy how far we have advanced in the past 10-15 years in terms of storage.
Ran a pair of 150GB Velociraptors in RAID 0 for what seemed like forever. I remember being blown away at the speed. My first AHCI SATA SSD made me forget all about it.
 
It feels like only yesterday two velociraptors in RAID 0 were the epitome of speed. Crazy how far we have advanced in the past 10-15 years in terms of storage.
I remember buying my first 10k 36GB raptor and being amazed..... at how it felt no different from my old drive. Oh.. except it added a new high pitch whine to my machine. And then I remember buying an SSD and now working on any system with a mechanical drive is a pain. Although I hear the 36GB raptors weren't particularly fast.
 
Still have my 300gb velociraptor in my desktop. At this point I'm just curious to see how much longer it can last, since removing it from my case is a problem. Didn't think ahead, and securing the cage holding my pumps up with rivets meant I can't get to the screws keeping the velociraptor in.

Ten years since I bought it and running strong. According to crystal disk info, power on hours are only 5.35 years, power on count is 2,997.
 
Picked up two 6TB HGST helium-filled HDDs. Man, are these things noisy; every few seconds they make it sound like my case is popping metal popcorn.
 
If they don't have a SAS interface, the enterprise market will not be interested. And even if they do, they still may not be cost competitive if their durability is a lot less than the spinning drives. High durability and reliability are critical characteristics for enterprise drives.
 
SSD's are now cheap enough to even consider replacing older 7200 RPM SATA drives.


Quick look on Amazon for server level drives.

WD Gold 4TB 7200 RPM SATA $171 ($43 per TB)

HGST Ultrastar 1.2TB 10K RPM SAS Drive $75 ($62 per TB)

Intel S4510 1.9TB 2.5 inch SATA (7.1PBW) $590 ($310 per TB)

If I just need a large amount of storage, I'll still stick with the low-cost 7200 RPM SATA drives.

If I really need performance, I'd pick the SSD, as it's several times faster, more reliable, and a failed drive would rebuild the RAID much faster.

Edit: Fixed my math.
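Those $/TB numbers check out; for anyone following along, the arithmetic is just price divided by capacity (prices and capacities as listed in the post above):

```python
# Cost per terabyte for the drives quoted above.
drives = {
    "WD Gold 4TB 7200 RPM SATA": (171, 4.0),      # (price USD, capacity TB)
    "HGST Ultrastar 1.2TB 10K SAS": (75, 1.2),
    "Intel S4510 1.9TB SATA SSD": (590, 1.9),
}
for name, (price, tb) in drives.items():
    print(f"{name}: ${price / tb:.0f} per TB")
```

The SSD's $/TB is still several times the 7200 RPM drive's, which is exactly the gap Micron claims QLC closes against 10K drives.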
 
Yeah... the 10K drives can last for 10+ years. How long can these QLC drives last? .1 DWPD is not enough.
 
I remember buying my first 10k 36GB raptor and being amazed..... at how it felt no different from my old drive. Oh.. except it added a new high pitch whine to my machine. And then I remember buying an SSD and now working on any system with a mechanical drive is a pain. Although I hear the 36GB raptors weren't particularly fast.

I've used all of them: the 36GB Raptors, the 150GB Velociraptors, and 300GB Velociraptors in RAID 0. I still have my 300GB VR. Anyone want it?
 
SSD's are now cheap enough to even consider replacing older 7200 RPM SATA drives.


Quick look on Amazon for server level drives.

WD Gold 4TB 7200 RPM SATA $171 ($43 per TB)

HGST Ultrastar 1.2TB 10K RPM SAS Drive $75 ($62 per TB)

Intel S4510 1.9TB 2.5 inch SATA (7.1PBW) $590 ($310 per TB)

If I just need a large amount of storage, I'll still stick with the low-cost 7200 RPM SATA drives.

If I really need performance, I'd pick the SSD, as it's several times faster, more reliable, and a failed drive would rebuild the RAID much faster.

It's not just about acquisition price, but TCO and durability. SSDs are not yet cost effective for high write performance applications. I have a system that has several thousand spinning disks in it. We write A LOT of data. We tried some enterprise level SSDs and they died after a month... eventually the company that warrantied them (for 4 years) stopped honoring the warranty. Probably because we were costing them big $$$ on replacement disks....
 
If they can do this it will sell.
It's not just about acquisition price, but TCO and durability. SSDs are not yet cost effective for high write performance applications. I have a system that has several thousand spinning disks in it. We write A LOT of data. We tried some enterprise level SSDs and they died after a month... eventually the company that warrantied them (for 4 years) stopped honoring the warranty. Probably because we were costing them big $$$ on replacement disks....
I have had a very good experience with the Dell SSD’s for their servers but holy Jesus are they costly.
 
If they can do this it will sell.

I have had a very good experience with the Dell SSD’s for their servers but holy Jesus are they costly.

I would have to go back and look but I think these were in the 2-3k range each.
 
like others.. i used to have raptors.. great drives. Once i got an SSD.. never looked back. Now with my M2 drive.. LOL i will never look at "regular" SSD's again.


it's the slowest part of a computer.. and man it's been awesome to see them get so much faster.
 
This. I can't imagine using one for the OS.

What about for a server where the majority of disk IO is going to be on the non OS drive? The only thing the OS drive will really do is control the boot times in that case.
 
What about for a server where the majority of disk IO is going to be on the non OS drive? The only thing the OS drive will really do is control the boot times in that case.
Use cases.
If you run a server that rarely gets rebooted, it does not matter. People can run whatever they wish, really. I will enjoy my SSD.
 
Yeah... the 10K drives can last for 10+ years. How long can these QLC drives last? .1 DWPD is not enough.
Depends on your performance needs and work requirements. Low latency, fast access to storage has many applications (like AI, deep learning, etc.) and flash destroys the 10k drives in these scenarios.
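For scale, DWPD converts into total writes like this. The 7.68 TB capacity below is my assumption for illustration, not a figure from Micron's spec sheet; the 0.1 and 0.8 DWPD numbers are the ones mentioned in this thread:

```python
def endurance_tb(capacity_tb, dwpd, years=5):
    """Total data writable over the warranty: capacity x drive-writes-per-day x days."""
    return capacity_tb * dwpd * years * 365

# Illustrative 7.68 TB drive (capacity assumed):
print(endurance_tb(7.68, 0.1))  # 0.1 DWPD -> about 1,400 TB over 5 years
print(endurance_tb(7.68, 0.8))  # 0.8 DWPD -> about 11,200 TB over 5 years
```

So even the "low" endurance rating is a lot of data for a read-mostly workload, and nowhere near enough for a write-hammered one.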
 
It's not just about acquisition price, but TCO and durability. SSDs are not yet cost effective for high write performance applications. I have a system that has several thousand spinning disks in it. We write A LOT of data. We tried some enterprise level SSDs and they died after a month... eventually the company that warrantied them (for 4 years) stopped honoring the warranty. Probably because we were costing them big $$$ on replacement disks....
This isn't what Micron is targeting with QLC. Their press release specifically states (multiple times) they are going after read-intensive applications. 0.8 DWPD is poopy if you're write heavy
 
I'll agree they don't last as long as platters, but we're just starting to see wear leveling at the enterprise level. SSD's have strategies for leveling wear across all the chips, but using that same technology at the datacenter level to move static data (data that doesn't change often) to SSDs that are at 70% of their life expectancy should give us a similar drive replacement cadence to current hard drives.
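The idea above can be sketched in a few lines. This is my own toy illustration of wear-aware placement, not any vendor's actual algorithm: static data goes to drives near their endurance limit, hot data to the freshest drives.

```python
# Toy sketch of datacenter-level wear-aware placement (illustration only).
def place(data_is_static, drives):
    """drives: list of (drive_id, wear_fraction) tuples, wear from 0.0 (new) to 1.0 (worn out)."""
    if data_is_static:
        # Cold data is rarely rewritten, so the most-worn drive can hold it safely.
        return max(drives, key=lambda d: d[1])[0]
    # Hot data goes to the drive with the most endurance remaining.
    return min(drives, key=lambda d: d[1])[0]

fleet = [("ssd-a", 0.72), ("ssd-b", 0.15), ("ssd-c", 0.40)]
print(place(True, fleet))   # most-worn drive takes the static data
print(place(False, fleet))  # freshest drive takes the hot data
```

In effect it's the same strategy an SSD controller uses across its own NAND chips, lifted up a level to the whole fleet.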
 
I'm not sure why, but whenever I read articles like this I always think, "affordable, like buying a 10TB drive from 7-Eleven".
 
Huh. I had assumed data centers would have no interest in QLC. What am I missing?
 
Still have my 300gb velociraptor in my desktop. At this point I'm just curious to see how much longer it can last, since removing it from my case is a problem. Didn't think ahead, and securing the cage holding my pumps up with rivets meant I can't get to the screws keeping the velociraptor in.

Ten years since I bought it and running strong. According to crystal disk info, power on hours are only 5.35 years, power on count is 2,997.

There's a pair of 1TB Vraptors in my wife's rig. Took them out of RAID as it wasn't playing nice after years, but they've been in there for nearly 5 years now without a hiccup since. My kid's rig is using a 300GB as the primary drive that was bought about 8 years ago. Even have a pair of the 150's sitting on a shelf as I can't part with them. I've been platter-free for about 4 years now, but still have a soft spot for those drives. :)
 
I remember buying my first 10k 36GB raptor and being amazed..... at how it felt no different from my old drive. Oh.. except it added a new high pitch whine to my machine. And then I remember buying an SSD and now working on any system with a mechanical drive is a pain. Although I hear the 36GB raptors weren't particularly fast.

I'm pretty sure the Raptors were 12k drives. I still have my set of 36GB drives I was running in RAID 0, and when they came out there was nothing else in the consumer market that could touch their speed. This was many years before SSDs came out.
 
I'm pretty sure the raptors were 12k drives. I still have my set 36GB I was running in raid0, and when they came out there was nothing else in the consumer market that could touch their speed. This was many years before SSD's came out.
Nope they were 10k, a quick google tells me the 36gb were as well.
[attached screenshot: CrystalDiskInfo readout for the Raptor]
 
The product page does list performance figures at various "DWPD for 5 years" wear levels, but it's pretty clear that durability will be a major drawback for these drives, as it has been for most QLC SSDs.

I'm not going to count out QLC quite yet.

Samsung's 3D V-NAND TLC (which they have since renamed to 3-bit MLC, just to be confusing) has very good write endurance, well in excess of what most loads require. I could easily see a low-cost QLC version of this performing well and having more than sufficient write endurance for most consumer loads.

These Micron drives are targeted at enterprise server loads though. I would have my doubts about QLC in that environment.
 
Based on tests I can find for Intel's current QLC-based consumer SSDs, they survive 6 to 8 months of 24/7 write-and-wipe before a failure. If Micron can double that and work with Dell and HP on their RAID controllers to ensure optimal drive utilization (hot disks are bad), then I could see real-world durability being 5+ years in most environments. These are optimized for read-heavy environments after all, and as long as they are used for the intended purposes they shouldn't have any reliability issues.
 
I'm not going to count out QLC quite yet.

Samsung's 3D V-NAND TLC (which they have since renamed to 3-bit MLC, just to be confusing) has very good write endurance, well in excess of what most loads require. I could easily see a low-cost QLC version of this performing well and having more than sufficient write endurance for most consumer loads.

These Micron drives are targeted at enterprise server loads though. I would have my doubts about QLC in that environment.

I’m not too worried; for the intended job you would be seeing 8+ of these installed in a RAID 10 array. For distribution servers where data is just uploaded and fed out, reporting servers where data is entered but not changed, or even accounting systems with large amounts of static data that have lots of reports run against them, these types of drives are perfect. But put one in a write-heavy environment where data is coming in and constantly changing, like a caching server, and you would destroy it. It would also underperform, as QLC sustained write speeds are, relatively speaking, painfully slow.
 
it would be nice to retire my 1.2TB 4x15k RPM Seagate RAID 0 game drive.

I had 2 10k's when WOW first launched, it was great how fast I could instance in while everyone else was still loading.


The noise on the other hand... yeeesh. Switch to SSD, you'll thank me.
 
Yeah... the 10K drives can last for 10+ years. How long can these QLC drives last? .1 DWPD is not enough.
From my experience with the Raptor drives, they died every 3 years. I RMA'd both drives once and the replacements both lasted another 3 years. Afterwards I moved on to an SSD. I don't even see enterprise 10k drives lasting 10 years.
 
Yeah... the 10K drives can last for 10+ years. How long can these QLC drives last? .1 DWPD is not enough.

I use a mirrored pair of Samsung 850 Evo drives as my boot drives and VM data store drives on my server. Based on the rate at which the SMART values are declining, they will be with me in this role for almost 15 years.

I don't know how much less write endurance QLC has compared to TLC, but I am guessing it's just 4 bits instead of 3? This would imply 25% less write endurance, so we are talking 11.25 years.

Write endurance just isn't that bad on SSD's anymore.
 
WD Raptor and Velociraptors are quiet compared to the original Seagate Cheetah 15k drives.

Anyway, the QLC durability can be weighed against the higher random failure rate of a spinning disk.
 
I don't know how much less write endurance QLC has compared to TLC, but I am guessing it's just 4 bits instead of 3? This would imply 25% less write endurance, so we are talking 11.25 years.

Write endurance just isn't that bad on SSD's anymore.

You just need to buy the right SSD for the job.

I work for a software company and we use lots of VM's for testing/demos that put a heavy load on the drive.
I started using 1TB SSD in our engineer's laptops 3 years ago, and have recently started replacing some of the laptops.
The oldest drive I've checked so far was a Samsung 840 EVO, and it was less than 10% used.
That means it would be good for another 27 years :eek:

However, if I were to try and run our main SQL server on the same drive, it would be dead in about a month or two.
An enterprise drive like the Intel s4610 would last 10 years under the same load.

(assuming the Samsung rated at 150TB, and the Intel at 10PB)
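A rough way to sanity-check those lifetime claims. The 150 TBW and 10 PBW endurance ratings are the assumptions stated above; the ~2.7 TB/day write load is a number I made up purely for illustration:

```python
def lifetime_years(endurance_tbw, daily_writes_tb):
    """Years until the rated write endurance is exhausted at a constant daily load."""
    return endurance_tbw / (daily_writes_tb * 365)

# Hypothetical heavy SQL workload writing ~2.7 TB/day:
print(lifetime_years(150, 2.7))      # consumer drive: roughly two months
print(lifetime_years(10_000, 2.7))   # enterprise drive: about a decade
```

Same arithmetic, wildly different outcomes, which is why matching the drive's endurance class to the workload matters more than the interface or the NAND type alone.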
 
If they don't have a SAS interface, the enterprise market will not be interested. And even if they do, they still may not be cost competitive if their durability is a lot less than the spinning drives. High durability and reliability are critical characteristics for enterprise drives.

There is also recovery from failure. We have yet to move to any SSDs in our servers, as no one can assure recovery of the data is possible in the event of a failure. In the 30+ years I have managed servers, I have had two hard drive failures I had to send in for data recovery. They were both in the same RAID array. The nature of the failure was bizarre (dead short in the motherboard). The data was 100% recovered. I have seen too many reports of failed SSDs where no data could be recovered. Personally I would not want to be responsible for that.

My first deployment of an SSD in a server would be for temporary storage of volatile data where a failure would be irrelevant. Lots and lots of file writes and deletes. Not a happy place for an SSD.
 