8TB HGST Disks Show Top Reliability, Racking Up 45 Years Without Failure

Megalith

Backblaze is back with its hard drive findings for 2016, and Ars’ analysis points out the impressive performance of certain Toshiba and HGST disks, which made it through a year without failure. One Seagate drive also managed that honor, although it remains to be seen just how reliable it really is, as it was introduced only fairly recently. Analysis of that model should be pretty interesting because it is an enterprise drive—the results may show whether hard drives meant for professional settings are truly worth the additional cost.

The standout finding: three 45-disk pods using 4TB Toshiba disks, and one 45-disk pod using 8TB HGST disks, went a full year without a single spindle failing. These are, respectively, more than 145 and 45 years of aggregate usage without a fault. The Toshiba result makes for a nice comparison against the drive's spec sheet. Toshiba rates that model as having a 1-million-hour mean time to failure (MTTF). Mean time to failure (or mean time between failures, MTBF—the two measures are functionally identical for disks, with vendors using both) is an aggregate property: given a large number of disks, Toshiba says that you can expect to see one disk failure for every million hours of aggregated usage. Over 2016, those disks accumulated 1.2 million hours of usage without failing, healthily surpassing their specification.
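As a rough sanity check on those figures (a back-of-envelope sketch in Python; it assumes every drive ran around the clock for all of 2016):

    HOURS_IN_2016 = 366 * 24            # 8,784 hours; 2016 was a leap year
    drive_years = 145                   # the Toshiba figure quoted above
    aggregate_hours = drive_years * HOURS_IN_2016
    print(f"{aggregate_hours:,} hours")        # 1,273,680 -> ~1.2 million

    mttf_hours = 1_000_000              # Toshiba's rated MTTF
    print(f"{aggregate_hours / mttf_hours:.2f} failures expected, 0 observed")

In other words, the spec predicts roughly one failure over that much aggregate run time, and Backblaze saw none.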
 
This sounds like an airplane company having 10 employees who have each been there for 20 years, and the company claiming "We have 200 years of experience in aviation." Really? You have 200 years of experience in an industry that has only been around a bit over 100 years?

I get the statistic. But trying to liken this to "145 years without a fault" is... shall I say... a stretch.

I think it would be more helpful to say something like: we give this drive a 98% chance of running for a year without any issue. That's more useful to me as a consumer.
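For what it's worth, the spec sheet already implies a number like that. If you take the 1-million-hour MTTF at face value and assume a constant failure rate (a big assumption for mechanical drives), the one-year survival odds fall right out. A quick sketch in Python:

    import math

    mttf_hours = 1_000_000              # vendor-rated MTTF
    hours_per_year = 24 * 365.25        # running 24/7
    p_survive = math.exp(-hours_per_year / mttf_hours)
    print(f"{p_survive:.1%}")           # ~99.1% chance of making it through a year

So by the vendor's own math it's about 99%, not 98%, but that's the kind of figure I'd rather see too.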
 
Those Hitachi and Toshiba drives are just tanks.

Really giving the duopoly a run for its money (I am fully aware that WD owns HGST).
 
I love that Backblaze uses consumer HDDs in their storage pods. Then they even compile and release the results from the drives they buy! The only downside is the small number of models they buy, but they do have a good sample size of each model.

Disclaimer: I subscribe to their service and it's saved my butt multiple times.
 
This sounds like an airplane company having 10 employees who have each been there for 20 years, and the company claiming "We have 200 years of experience in aviation." Really? You have 200 years of experience in an industry that has only been around a bit over 100 years?

I get the statistic. But trying to liken this to "145 years without a fault" is... shall I say... a stretch.

I think it would be more helpful to say something like: we give this drive a 98% chance of running for a year without any issue. That's more useful to me as a consumer.

I agree. Or if they were to show what they do for aging and stress testing to qualify the drives, that would be useful as well.

I'll see your aviation experience and raise you some strings of Xmas lights.
 
If I had something I needed extreme reliability on, I'd hands down go with Hitachi.

For my home storage ZFS pool, however, I've stopped caring about reliability at all. I have excellent redundancy (ZFS RAID 60 equivalent), and swapping out a failed or failing drive is extremely simple and almost risk-free.

Because of this, I shop for drives on price and performance these days. Reliability (within reason, I don't want multiple failed drives at the same time) just isn't as much of a concern for me anymore.
 
This sounds like an airplane company having 10 employees who have each been there for 20 years, and the company claiming "We have 200 years of experience in aviation." Really? You have 200 years of experience in an industry that has only been around a bit over 100 years?

I get the statistic. But trying to liken this to "145 years without a fault" is... shall I say... a stretch.

I think it would be more helpful to say something like: we give this drive a 98% chance of running for a year without any issue. That's more useful to me as a consumer.

This is how they stay relevant.
 
One Seagate drive also managed that honor, although it remains to be seen just how reliable it really is, as it was introduced only fairly recently.
One Seagate Drive.

One.


 
I have a super old 40GB Hitachi drive (I know, unrelated) that's slowly dying but in the most graceful way I've seen.

It has these moments where it'll click, then have what sounds like a seizure, waving its head around like mad while screeching, and it sounds like a battle cry to the rest of the spindle.
After 5-7 seconds it returns to normal, never even causing an OS-wide freeze. SMART is pristine. :D
I have since disconnected it because it has been doing this danse macabre more and more often over the last year.
 
If I had something I needed extreme reliability on, I'd hands down go with Hitachi.
For my home storage ZFS pool, however, I've stopped caring about reliability at all. I have excellent redundancy (ZFS RAID 60 equivalent), and swapping out a failed or failing drive is extremely simple and almost risk-free.
Because of this, I shop for drives on price and performance these days. Reliability (within reason, I don't want multiple failed drives at the same time) just isn't as much of a concern for me anymore.

Which drives have you been getting? I managed to get a pretty decent price on WD Red 2TB drives that were marked down, but should I build another ZFS RAID box, I just wonder how "cheap" I can really get.

Of course, everyone should know RAID is not a backup. But ZFS puts it damn close to the ultimate backup filesystem.
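To be fair about why it gets close (a quick sketch; the pool, dataset, and "backupbox" names are all made up): snapshots give you cheap point-in-time copies, and send/receive replicates them to another machine:

    zfs snapshot tank/data@2017-02-05        # instant point-in-time copy
    zfs send tank/data@2017-02-05 | ssh backupbox zfs receive backup/data
    zfs rollback tank/data@2017-02-05        # undo everything since the snapshot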

Shucked a bunch of 8TB WD My Book externals that had lovely HGST 8TB He drives inside them to mirror the ZFS pool. Man, they are quiet and fast. Hitting 200MB/sec even over USB3 for a single drive.
 
So it's the same as having 1.2 million disks all surviving for one hour? Great. Good to know how the tests are done.

No, this would give you an MTBF of only one hour. The headline is terrible, and you really have to look at the failure rates and the number of drives they used.
 
ZFS puts it damn close to the ultimate backup filesystem.

Until you need to expand it.

Anyhoo, thumbs up for Hitachi -- I've been appreciating them for 10+ years, since before it was cool... back when every idiot was still derping "Deathstar lol" whenever they came up in a thread. And who can forget the people swearing HGST "Deathstars" were more likely to fail than competing drives because they had more platters per drive and thus more points of failure. Total horseshit.
 
Until you need to expand it.

Not as bad as it could be. Replace each drive in the pool, one at a time, with a larger drive. Once the last small drive has been swapped out for a larger one, the pool will expand to use the new available space across all drives.
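From memory, it goes something like this (a sketch; "tank" and the device names are made up, and autoexpand needs to be on for the pool to grow on its own):

    zpool set autoexpand=on tank
    zpool replace tank ada0 ada6        # swap one small drive for a bigger one
    zpool status tank                   # let the resilver finish before the next swap
    # ...repeat the replace/resilver for each remaining small drive...
    zpool online -e tank ada6           # manual expand, if autoexpand was off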
 
one disk failure for every million hours of aggregated usage

I've gotten that one lucky disk a few times.
I envy the other guys who are part of the other million hours.
 
Which drives have you been getting? I managed to get a pretty decent price on WD Red 2TB drives that were marked down, but should I build another ZFS RAID box, I just wonder how "cheap" I can really get.

Of course, everyone should know RAID is not a backup. But ZFS puts it damn close to the ultimate backup filesystem.

Shucked a bunch of 8TB WD My Book externals that had lovely HGST 8TB He drives inside them to mirror the ZFS pool. Man, they are quiet and fast. Hitting 200MB/sec even over USB3 for a single drive.

Nice find on those HGSTs.

For me, if I were buying drives today, I'd just look for the cheapest ones with TLER.

I still have the same drives in my server that I bought in early 2014, though: 12x WD Red 4TB.
 
I've had good luck with the WD Reds. I used to use the Greens, some of the "energy efficient" green Seagates, etc. I have a pile of dead drives from that era. I know, these drives weren't meant to be used in a NAS. I learned my lesson.
I'd love to have a stack of the 8TB drives...
 
Until you need to expand it.

What's wrong with expanding? You can add another group of disks (or even just one) in any combination you want. You could even use ZFS as a straight-up JBOD with no redundancy and just keep adding disks one at a time.
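For example (hypothetical pool and device names; zpool will warn about mismatched redundancy unless you force it with -f):

    zpool add tank raidz2 da6 da7 da8 da9 da10 da11    # a second raidz2 vdev next to the first
    zpool add -f tank da12                             # or one bare disk, no redundancy at all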
 
Sample sizes of 45 drives compared to 34,738 (Seagate 4TB Desktop) are almost comical. That's 772 times the number of drives! A 2.67% failure rate could just be a bad batch of drives from the factory.
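To put a number on it: the statisticians' "rule of three" says that zero failures in n trials is still consistent, at 95% confidence, with a true failure rate of roughly 3/n. A quick sketch for the zero-failure case:

    # 95% upper bound on the true annual failure rate after seeing
    # zero failures in a sample of n drives (rule of three: ~3/n)
    for n in (45, 34_738):
        print(f"n = {n:>6,}: upper bound ~ {3 / n:.2%}")

So even a spotless year from 45 drives can't rule out a true annual failure rate of over 6%.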
 
No, this would give you an MTBF of only one hour. The headline is terrible, and you really have to look at the failure rates and the number of drives they used.
Sorry, 180 drives lasting for ONE year is not a big enough sample to predict the performance of many thousands of others for a million hours. That kind of testing is how we wind up with airbags that fail five years down the line and other dumb mistakes. I realize that sometimes scientists get their brains messed up by living in a laboratory, but the rest of us live in the real world, where we actually have to live with the stupid projections that numbskulls make in their tests. The wear and breakdown of things in a disk drive over a million hours IS NOT going to be the same as in one that lasted for 8,000 hours. Anyone who believes it is, is an imbecile and should turn in their engineering degree.
 
I swapped to HGST after reading their info. I lost 3 Seagate 4TB drives within 2 years; my other 3 Seagate 4TB drives seem to be chugging along. Not sure if it was a bad batch or what. Either way, I started grabbing the HGST ones. Only 1 right now that's a year old, and another sitting there waiting to get installed.
 
My Quantum Bigfoot from 1996 is still going today. Maybe in a perfect machine Toshiba drives work that long, but so far I have seen nothing but bad sectors with those drives for the past 6 years. I went through 4 of them in a year. I also tried Seagate; many failures there as well. My server has four 2TB WD Black drives and they have never failed once. They just keep going and going. My main gaming computers all have SSD RAIDs in them.
 
It's a shame HDD reliability numbers of any kind don't matter to me... because I can't bloody justify the prices of them here! It is RE-DIC-YOU-LUSS!!
 
My Quantum Bigfoot from 1996 is still going today. Maybe in a perfect machine Toshiba drives work that long, but so far I have seen nothing but bad sectors with those drives for the past 6 years. I went through 4 of them in a year. I also tried Seagate; many failures there as well. My server has four 2TB WD Black drives and they have never failed once. They just keep going and going. My main gaming computers all have SSD RAIDs in them.

I think the Seagate drives hate heat. Once I had fewer drives in my comp, I had no more failures. It's really all just a guess, though. I could have just been unlucky and got some bad drives.
 
I think the Seagate drives hate heat. Once I had fewer drives in my comp, I had no more failures.

I have seen just the opposite happen with 7200.X drives. I mean, when the AC was working in the server room (and the room temp was 65F), we had more failures than when it was not (the AC is off now because it is unreliable). That said, with the AC on, the temp was less stable than with it off.
 
My server is currently humming happily along with two newer HGST 4TB drives in RAID1 and two 55k+ hour 500GB WD drives in RAID1. I need to upgrade/replace the 500GB drives, and when I do, I plan to go with HGST. I'm kinda biding my time and waiting for a good sale on 1TB or 2TB drives before I pull the trigger. My point is, I am a big WD and HGST fan; never really cared for Seagate drives.
 
Shucked a bunch of 8TB WD My Book externals that had lovely HGST 8TB He drives inside them to mirror the ZFS pool. Man, they are quiet and fast. Hitting 200MB/sec even over USB3 for a single drive.

I'm curious. Were you able to enable TLER on these?
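(For anyone else wondering: on drives that expose it, you can usually check and set the error-recovery timeout with smartctl. /dev/sdb here is just a placeholder, and shucked drives don't always honor the setting.)

    smartctl -l scterc /dev/sdb          # query the current ERC/TLER setting
    smartctl -l scterc,70,70 /dev/sdb    # set read/write timeouts to 7.0 seconds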
 
I have seen just the opposite happen with 7200.X drives. I mean, when the AC was working in the server room (and the room temp was 65F), we had more failures than when it was not (the AC is off now because it is unreliable). That said, with the AC on, the temp was less stable than with it off.

*shrugs* Not sure, then. Maybe it didn't like the heat-up/cool-down cycles. I'm leaning more towards just a bad batch of drives.

My server is currently humming happily along with two newer HGST 4TB drives in RAID1 and two 55k+ hour 500GB WD drives in RAID1. I need to upgrade/replace the 500GB drives, and when I do, I plan to go with HGST. I'm kinda biding my time and waiting for a good sale on 1TB or 2TB drives before I pull the trigger. My point is, I am a big WD and HGST fan; never really cared for Seagate drives.

Seagate used to be great. If you wanted a reliable drive, you just picked up a Seagate. Nowadays, it seems a bit iffy. They've randomly had mess-ups here and there. I feel they've gotten worse since they bought up Maxtor.
 
Seagate used to be great. If you wanted a reliable drive, you just picked up a Seagate. Nowadays, it seems a bit iffy. They've randomly had mess-ups here and there. I feel they've gotten worse since they bought up Maxtor.

I was going to say something similar but didn't want to start something, haha. But I do agree: I used to be somewhat of a Seagate fan 12+ years ago when I got into computers, but now not so much.
 
What was it I used to have in my old tower? I want to say a SCSI-3 150GB 15K RPM IBM drive. I remember it sounding like a supercharger when it was running, lol. (stumbles away using his walker)
 
I've got 5 HGST SSDs running in RAID 0 in my main rig, about 1TB of data :p Probably not a great idea, but they are going strong after 2 years (plus whatever the previous owner did with them). As for Seagate drives, I have had about 5 consumer-level drives go down and refuse to buy those; however, I have never had an issue with the Seagate 15K drives and Cheetah NS drives.
 
What's wrong with expanding? You can add another group of disks (or even just one) in any combination you want. You could even use ZFS as a straight-up JBOD with no redundancy and just keep adding disks one at a time.

Must. resist. offtopic tangent that belongs in Storage subforum.

Let's just say someone who doesn't know better and has bought into the "zomg ZFS is the best" rhetoric is in for a rude awakening when they're out of space on their raidz2 pool and want to simply add a single drive to increase space. With hardware RAID, or better yet SnapRAID, it's a piece of cake - I guess I'm spoiled by the latter.
 
Must. resist. offtopic tangent that belongs in Storage subforum.

Let's just say someone who doesn't know better and has bought into the "zomg ZFS is the best" rhetoric is in for a rude awakening when they're out of space on their raidz2 pool and want to simply add a drive to increase space.
That's why one uses mirrors :)
 
I've got 5 HGST SSDs running in RAID 0 in my main rig, about 1TB of data :p Probably not a great idea, but they are going strong after 2 years (plus whatever the previous owner did with them). As for Seagate drives, I have had about 5 consumer-level drives go down and refuse to buy those; however, I have never had an issue with the Seagate 15K drives and Cheetah NS drives.

Ya, their consumer products can be iffy, but their Enterprise drives should still be spot on. But at those prices and sizes, I could just get Enterprise SSDs.
 