Hard Drive Reliability Stats From 1B Drive Hours

Looks like the big three all have sweet-spot products in terms of reliability. My server's been running 24/7 for years on its mix of Samsung and WDC Red 4TBs. The biggest thing that made it more reliable: a UPS.
 
With such low defect rates it amazes me that warranty replacements are refurbished/previously broken drives. I usually order 10 at a time and have occasionally seen batches with a 60% failure rate within the warranty period; other times it's 1-2%. Certain models within a manufacturer's lineup just have poor reliability.
 
Seagate, nuff said.

Don't forget the 12.57% and 5.31% for WDC, the 8.63% from Toshiba, and only 9.63% for Seagate. Yes, HGST looks great, but Seagate performed very well. Time to get out of the "Seagate is crap" mindset. They are as reliable as all of the others.
 
The average failure rate is 1.84%.
All of the HGST drives had a lower failure rate than the average; no other manufacturer can claim that.
2 of the 4 Seagate drives had a higher failure rate, and one of them is at 9.63%! How can you be proud of that?

You cannot buy a new Seagate drive without some fear unless you have specific knowledge of a particular drive's reliability.
You take much less of a risk buying HGST.
This matters much more with these super-sized drives because it's a lot of data to potentially lose (for a typical home user), and the downtime plus the time to repopulate is significant.
 
5 failures out of 207 drives isn't a very big sample, nor is 1 drive out of 47, especially when you look at the number of ST4000DM000 drives; even with 198 failures, that's still 2.54%.
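To put rough error bars on those figures (a minimal sketch: it treats the quoted numbers as simple proportions of drives failed and uses a Wilson score interval, which is not exactly how Backblaze annualizes over drive-days, but it shows how wide the uncertainty is for small fleets):

```python
from math import sqrt

def wilson_interval(failures, drives, z=1.96):
    """Approximate 95% confidence interval for a failure proportion."""
    p = failures / drives
    denom = 1 + z**2 / drives
    center = (p + z**2 / (2 * drives)) / denom
    half = z * sqrt(p * (1 - p) / drives + z**2 / (4 * drives**2)) / denom
    return center - half, center + half

# First two pairs are the figures quoted above; the last is a made-up
# larger fleet just to show how the interval tightens with sample size.
for failures, drives in [(5, 207), (1, 47), (200, 8000)]:
    lo, hi = wilson_interval(failures, drives)
    print(f"{failures}/{drives}: {failures/drives:.2%} (95% CI ~{lo:.2%} to {hi:.2%})")
```

With only a couple hundred drives, the plausible range spans several percentage points either way, which is why single-model numbers from small deployments shouldn't be read too literally.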
 
Yes and no, Nenu.
To be fair, Backblaze pods are hard drive torture chambers. The fact that the failure rates on ALL drives are that low is a testament to drive durability in general.

In a standard desktop setup, your reliability numbers are likely to be an order of magnitude better than what you see from Backblaze.
 
I've had bad luck with several Seagate drives (2TB and 3TB), which really makes me shy about buying another one.

That said, even the last 4TB drive I got was a WD Black, for the warranty.

I was looking for a couple of new drives to RAID, and based on Backblaze's last several reports, HGST is the go-to ATM.
 
Yes and no, Nenu.
To be fair, Backblaze pods are hard drive torture chambers. The fact that the failure rates on ALL drives are that low is a testament to drive durability in general.

In a standard desktop setup, your reliability numbers are likely to be an order of magnitude better than what you see from Backblaze.

This is my exact thought. Drives are more reliable than ever. Sure, all of us have drives that have died quickly, or have lasted way longer than we thought possible. Out of all of the drive manufacturers I have used, has any particular one been better than the others? NO! (Quantum Bigfoot not included.)

I assume all drives are going to fail, and make sure I have multiple backups. I couldn't care less if the failure rate is 1% or 50%.
 

It depends on the model and then the manufacturer. For a particular use, a particular drive from a particular manufacturer is needed. I have used Maxtor 512MB drives fully knowing they had a 100% failure rate; for the purpose, they were what was needed. If backup data storage is what I require, I would get a particular drive; right now that would be Toshiba or HGST. If it is for NAS or RAID systems, then it would be the cheapest, like the 4TB Seagates. Multiple factors play a role, and concern over reliability depends on the particular use. Not everyone has a NAS under their bed and a fiber connection for cloud storage...
 
When it comes to HDDs I use either HGST or WD. Seagate still scares me. None of the HDDs I've had over the years have ever failed, even the Quantum Fireballs and Maxtors back in the '90s.
 
As noted, if Seagates were so bad, Backblaze would not be using them, and yet they make up most of the drives, do they not? Hence the higher numbers in production vs. other brands.
 
I'll still avoid Seagate drives due to the failure rates I've seen on both server and desktop drives.
I've had a few WD 4TB server drives (REs) fail this year, but they were OEM drives and over 3 years old. Plus they are in a room without quite enough air conditioning and are operating closer to their max temperature than I would prefer, so I can't complain too much.

As for desktops, I still have an old 8.4GB IBM Deskstar that works, unlike all those old Maxtor drives. It's been in an old test box that I'm getting ready to retire, but long ago it was my main drive for a few years.
 
I've been running a mix of Seagate and Western Digital -- and I have a few Seagate 4TBs that are up there in age and still alive and kicking rather well. My Seagate 8TB external has been solid as a rock.

I know, I know -- everyone has their horror stories, though. After I purchase my GTX 1080 here in the coming weeks, the next little project will probably be replacing or consolidating drives to newer models. I have about 20TB worth of stuff that would really suck to redownload (meticulously cataloged movies/TV series).
 
I'll still avoid Seagate drives due to the failure rates I've seen on both server and desktop drives.
I've had a few WD 4TB server drives (REs) fail this year, but they were OEM drives and over 3 years old. Plus they are in a room without quite enough air conditioning and are operating closer to their max temperature than I would prefer, so I can't complain too much.

As for desktops, I still have an old 8.4GB IBM Deskstar that works, unlike all those old Maxtor drives. It's been in an old test box that I'm getting ready to retire, but long ago it was my main drive for a few years.

I have a horror story about Maxtor drives -- the first time I ever tried running a RAID array, with two Maxtor 40GB drives, both crashed and burned hard. This was probably 18 years ago; I swore off Maxtor after that point.
 
I think CDC made the fastest drives originally; they were bought out by Maxtor, who also bought out DEC and Quantum, and then Seagate bought them out, hence the sudden performance jump of Seagate drives, which before were the slowest of the lot, with even WDC being much faster. The original Seagates had stepper motors for the heads, so you heard the click-click-click whenever the head moved; CDC had the current type of servo head control. But even though they were much slower, the original Seagates, when they did not have stiction problems, were reliable at storing data. That reliability drop came after they got Maxtor.
 
9GB Seagate ST410800N used as a doorstop in my office. :D

It is a 5.25" full-height drive (two CD-ROM drives stacked on top of each other, for the youngins) with 9 platters and 1GB per platter. I've still got an Adaptec 2940 SCSI controller and I'm tempted to hook it up one of these days for the heck of it.

BTW, I've got a Quantum Bigfoot at home too. I've been hanging on to it because I'd love to take the lid off, cut most of it away, and glue in a plexi window to see it in action.


BP
 

HGST/Hitachi is the only manufacturer I put any level of trust in anymore. Got a Hitachi 750GB 7200 RPM in my laptop, a Hitachi 500GB 7200 RPM in the wife's laptop, a Hitachi 1TB 5400 RPM for backups, and a few other smaller-capacity Hitachis for general use. They are the only brand/manufacturer I've owned, over the past 4 decades of using storage devices, that has never failed on me personally, not once. In fact, not one of the dozens upon dozens of Hitachi drives I've owned personally or worked with in client machines has ever failed, nor have I ever had a single issue with any of them, not even one bad sector for that matter.

I've only had one Seagate that I personally owned go bad on me - had it on an eSATA connection and the one time I decided to just unplug it (since eSATA supports hot plugging) it died and never worked again; still can't figure that one out. I've had a few defective Seagates owned by clients over the decades, but just that one instance of Seagate failure for me. Toshiba? Every damned Toshiba that crosses my path fails or has failed. Western Digital? Same thing, aside from just one drive - the 500GB Caviar Blue the wife had in her laptop (since replaced with the Hitachi I mentioned above) - which is about 3 years old, has 12K power-on hours, and still works without a single bad sector. Of course I'm always concerned about when it will start to fail, but it hasn't happened yet.

Fujitsu, fails/failed. Samsung, fails/failed. Maxtor? Oddly enough I have 3 "OneTouch" USB external Maxtors, very old IDE drives inside USB enclosures. All three of them are 12+ years old - a 120GB and two 250GB drives - and there's one bad sector on the 120GB, but they still work, so I keep 'em around for raw storage when needed.

Still haven't moved to SSD but I'm getting close to making a purchase, probably will get a Samsung EVO or EVO Pro based on reviews, haven't really seen any negatives so far.

But hard drives ain't going anywhere and Hitachi still makes the most reliable ones in my experience, nice to see they work just as well for others too.
 
Good to see HGST staying this reliable even after WD's acquisition.

Seagate can burn in the lowest level of the 9 hells for all the trouble they've put me through.
 
Hitachis are basically upgraded Deathstars. I have only had one: I hooked it up as a slave to a WDC drive and the next day it started developing errors; within a couple of days half the drive had errors. I was so traumatised by it that it took almost a decade for me to get another Deathstar, which for some reason has been working fine for a decade. But I have used 1MB IBM drives a couple of decades old that still worked; you could sit on them and crash the heads into the platters and they would still hum. So reliability changes. I spend a lot of time finding current drives with high reliability, and this changes with vendor and year. I got fooled by the Seagate 3TB drives, but the 2TB drives were really good. The Toshiba 3TB drives also had high failure rates, but very low for the 4TB and 5TB drives. Because of Seagate fooling their customers I don't have any confidence in them now, but all the others are fine for now, although WDC does change parts between drives with the same part number. Using 5 800GB platters for a 4TB drive gives vastly different reliability and performance stats than 4 1TB platters or 3 1.3TB platters. Just look at the Seagate 4TB drives: the DM has very low failure rates, the DX has the highest. Find the specific model/firmware and performance/parts you want and then hope you get it; a slight difference means a 1% failure rate or a 10% failure rate.
 
I have around 50 Hitachi / HGST drives here at work (mostly 2TB 7200 RPM Deskstar models). Most of these have around 46 thousand power-on hours. The only failure I have had with these was a DOA drive that I sent back to the vendor for a replacement. All other drives are running 24/7/365 without issue. I do not even have a single reallocated sector on any of them, although I admit this is a small sample, so luck could play a part.
 
I have 2x 1TB Hitachi drives still running in my system.
One has 36,000 hrs, the other 56,000 hrs.
Both are in perfect working order, on 24x7 with no power saving.

I had 3 Samsung drives walk the plank in the last 4 years - 2x 1TB, 1x 2TB.
The only replacements I have bought in that period have been HGST, a 4TB external and a 4TB internal. Both are flawless so far.
 
As noted, if Seagates were so bad, Backblaze would not be using them, and yet they make up most of the drives, do they not? Hence the higher numbers in production vs. other brands.

They're "bad" in that they have a higher failure rate, but they're inexpensive enough that Backblaze doesn't view the higher failure rate as problematic (see how redundant their setups are). For an individual user, it might represent more of an issue.

Backblaze has even said they've tried to bulk-purchase WD and Samsung drives but have had no luck so they stick with Seagate knowing they'll fail more often than HGST units but also figuring they're inexpensive enough that it isn't a problem.
 
Perhaps WD and Samsung realise the value in having competitors' drives seen to fail publicly.
:p
 
Samsung sold its hard drive business to Seagate, who ended up killing it. I mean, I have not seen any new hard drive products under the Samsung brand in years.
 

They were smart and stuck to SSDs instead. I think it is working out well for them.
 
I can't help but wonder if drive age plays a role here.


Note how the highest failure rate among WD drives was on their 2TB drives. I'm going to go out on a limb and guess that any 2TB drives Backblaze is still using are, on average, going to be older than the larger-sized drives.

I'm still happy with my WD drives. I have 12 4TB Reds that have been running 24/7 for two years now without any issue.

I did have a 2TB Green fail on me a couple of years back, but I also abused that drive by running it in a ZFS array without changing the IntelliPark timer. It had something like 600k head parks when it finally shit the bed.

Either way, all my drives are arranged in striped groups of 6 drives in RAIDZ2 configurations, so I have two-drive redundancy. If I do have a failure I'll just Amazon Prime a replacement, and when the warranty drive comes back I'll just keep it as a spare.
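As a rough sanity check on why that redundancy makes the per-drive failure rate almost a non-issue (a minimal sketch: it assumes independent failures at a guessed annual failure rate, over a whole year, and ignores rebuild windows, correlated batch failures, and unrecoverable read errors):

```python
from math import comb

def p_vdev_loss(n_drives=6, parity=2, afr=0.05):
    """P(more than `parity` of `n_drives` fail within a year), i.i.d. binomial model."""
    return sum(
        comb(n_drives, k) * afr**k * (1 - afr)**(n_drives - k)
        for k in range(parity + 1, n_drives + 1)
    )

# Even a pessimistic 5% AFR makes losing 3+ drives of a 6-wide RAIDZ2
# vdev in the same year unlikely; at 1% it is vanishingly unlikely.
for afr in (0.01, 0.05):
    print(f"AFR {afr:.0%}: P(3+ of 6 fail in a year) ~ {p_vdev_loss(afr=afr):.1e}")
```

Real pools are worse than this model suggests (drives from the same batch, under the same heat and rebuild stress, don't fail independently), but the basic point stands: with two-drive redundancy and prompt replacement, brand-to-brand AFR differences stop being the thing that loses your data.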

In my 25 years of tinkering with computers, I've only ever had data loss due to one drive failure. That was in college in 2001, when my media drive, a Maxtor, failed and I lost my mp3 collection :p
 
I can't help but wonder if drive age plays a role here.

2015Q3:
informare

2016Q1:
informare: Survival analysis of hard disk drive failure data: Update to Q1 2016


They take Backblaze's raw data and create survival charts, which are graphs of the fraction of HDDs remaining alive as a function of days in operation. I think they are a good way of visualizing how drive age affects failure rate.

They have two graphs -- one of them is by manufacturer, and the other is for specific drive models. I wish they had done a better job of making the curves distinct. But if you look very carefully at the key and compare it to the curves, you can puzzle it out (you can download the images and zoom way in; also, the order of the lines in the graph key lines up, more or less, with the curves on the right side of the graph).
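For anyone who wants to roughly reproduce those curves from Backblaze's raw daily CSVs, a product-limit (Kaplan-Meier) estimate is enough; this is just a sketch that assumes you've already reduced the data to one (days observed, failed?) pair per drive, which is not the layout of the files themselves:

```python
from collections import Counter

def kaplan_meier(lifetimes):
    """lifetimes: (days_observed, failed) pairs; failed=False means the drive
    was still alive when observation stopped (censored)."""
    failures = Counter(d for d, failed in lifetimes if failed)
    exits = Counter(d for d, _ in lifetimes)
    at_risk = len(lifetimes)
    surv, curve = 1.0, []
    for day in sorted(exits):
        if failures[day]:
            surv *= 1 - failures[day] / at_risk
        curve.append((day, surv))
        at_risk -= exits[day]
    return curve  # fraction of drives surviving past each observed day

# Toy data: two failures, three drives still running at day 900.
demo = [(400, True), (700, True), (900, False), (900, False), (900, False)]
for day, s in kaplan_meier(demo):
    print(f"day {day}: {s:.0%} surviving")
```

Plot one curve per manufacturer or per model and you get the same kind of chart; the slope at any given age is what a single annualized failure rate is trying to summarize.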

Some things worthy of note:

1) The (negative) slope of the survival curve is proportional to the failure rate at a given age (time) of the drive. Since the MTBF methodology assumes a constant failure rate with age, it would only be valid if the survival curves were straight lines (see the short derivation after this list). While a few of the curves do approximate a straight line, most of them have slopes that vary significantly over time.

2) On the manufacturer graph, Seagate starts out with a slightly lower failure rate than WDC, but at around 300 to 450 days the curves cross and WDC ends up with a significantly lower overall failure rate than Seagate. Hitachi has a steady and very low failure rate out to about 750 days and then has a slight increase in failure rate but still quite low. It is amazing how much better Hitachi looks on this graph as compared to WDC and Seagate.

3) On the model graph, the ST3000DM001 actually looks pretty good from 0 to about 200 days, but then starts failing badly from about 200 to about 350 days, and the curve plummets even more steeply after that, except that the plummet eases after about 550 days into a less steep drop (but still pretty bad). The three worst models are all from Seagate. The ST4000DM000, WD10EADS, and WD30EFRX models all have similar survival rates and look to perform pretty well out to the end of the graph. The models with data past 750 days with the best survival rates are all HGST or Hitachi.
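To spell out the reasoning in point 1 (a minimal derivation, not specific to these charts): if S(t) is the fraction of drives surviving to age t and λ(t) is the failure (hazard) rate at age t, then

```latex
\lambda(t) = -\frac{d}{dt}\,\ln S(t),
\qquad
S(t) = \exp\!\left(-\int_0^{t} \lambda(\tau)\, d\tau\right).
```

A constant failure rate, which is what an MTBF figure assumes, gives S(t) = exp(-λt) ≈ 1 - λt for the small rates involved here, i.e. an approximately straight line; a survival curve whose slope changes with age is direct evidence that the failure rate itself changes with age.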
 
Thank you for the explanation; I had a hard time understanding it since it mostly all looks like red lines to me, so I'm not sure which is which. But I see the SGT 500MB drives all failed after 500-900 days, just like the 3TB drives. That is another thing people don't consider about BB: BB replaces drives every few years once the warranty runs out, so they don't care about a drive lasting past 3 years or so. And we can see in the graph that Seagate drives drop like a rock at that point. I have mostly Seagates, but knowing this scenario I just use them for cold storage. When I need a file I hook up the USB to get it, but they are not used otherwise. So most of my drives have less than 10k hours on them, and I swap them out every year or so with a new one. It is using them that creates the failures, although you do have the dead-on-start-up problem, like with Maxtor where I lost half the drives on power-up. The graph is a must-read for those who have their NAS up and running 24x7. For the rest of us, just using drives for bulk storage, it's a matter of knowing they only last a year, or 2 or 3, and should be changed at those points. The Seagate 1.5TB drives run very hot and also seem to have high failure rates. I had the same 1TB model, which ran just as fast but far cooler, so adding an extra platter with a smaller motor that can't handle the load seems to be the culprit there; but the 1TB model also failed after 4 years, and so did the 1.5TB one.
 
As noted, if Seagates were so bad, Backblaze would not be using them, and yet they make up most of the drives, do they not? Hence the higher numbers in production vs. other brands.

Uh. No. Wrong.

Backblaze's criteria are (in no particular order):
  • Sufficient availability in quantity
  • Price
  • Size
Reliability is something they discover through their ongoing data collection, but it's not one of their purchase criteria.
 
I have a horror story about Maxtor drives -- the first time I ever tried running a RAID array, with two Maxtor 40GB drives, both crashed and burned hard. This was probably 18 years ago; I swore off Maxtor after that point.

Basically hard drive history looks like this:

Seagate
Maxtor: Bought by Seagate
Quantum: Bought by Maxtor

Seagate's high-end enterprise drives are still worth a damn, but their consumer offerings were never anything particularly special.
Maxtor had a history of crap drive quality.
Quantum had a history of crap drive quality.

Is it any surprise, with that sort of lineage, that Seagate drives are considered untrustworthy?
 

Yeah, but for a short time in history they were desirable. This was in the fallout of the IBM 75GXP Deathstar, of which I had a failed drive after 14 months.

Western Digital got a bad rap for reliability in the late 1990s (drive failures increased overall, and then came their recall in 1999), and I wasn't yet willing to brave that road. Nor were a lot of people. And as you pointed out, Maxtor always sucked.

Western Digital Recalls 400,000 Disk Drives

But for about five years there (2001-2007) Seagate was a good drive maker with a high reliability rating. They also had quieter drives than Western Digital at the time, which meant a lot in the still noisy days of 3.5" 7200 RPM drives. They weren't as fast though, which did lose them customers who switched to Western Digital.

After the Maxtor purchase the writing was on the wall, and I moved to WD. Unfortunately due to consolidation and falling sales I don't foresee any tumultuous changes happening, but it wasn't like this twenty years ago :D
 
Why do some brands seem to take the torture better than others?

There will be some drives that do better under extreme conditions than others, no doubt.

The problem is that this does not necessarily scale linearly with normal use conditions.

The temptation when looking at the Backblaze data is to say "Drive A lasts 20% longer than Drive B in torture conditions so it must also last 20% longer under normal conditions in my rig."

This is not necessarily the case. The two are not necessarily related at all.

You could have two drives, one which fails instantly under torture conditions and one that lasts for a good while under torture conditions, and both could perform fairly equivalently under normal conditions.

Now, we obviously know there is some correlation, because we have also seen that Seagate drives have a higher failure rate among us enthusiasts, but that is really just anecdotal at this point, as I have not seen any controlled studies.
 
But why? Higher dust levels in the final assembly room? Different final testing? Quality of bearings? Mechanical design? Electronic design? I can't find answers even from people who open up drives to do data recovery.
 

Because the cost of data recovery is like 100x the cost of the drive. BB has their own custom setups with enough redundancy that they don't need to restore the data from a failed drive; they just plop in a new drive and keep going. The only people that pay for data recovery are the ones that have all their critical data on a single drive that died. They don't care how, just that they get the data back.
 