What Drives Are We Putting in Our NAS Systems These Days?

And the thread is back from the dead! :p

As far as temperature goes, I think Backblaze disagrees.

I also vaguely recall a Google study from several years ago that curiously found the opposite, that warmer drives last longer than cooler ones, but I could be misremembering.

From my own anecdotal evidence: I have been running my NAS server in an un-air-conditioned basement for 4 years, where summer temps often reach 90°F or above, and I have not seen an excessive drive failure rate.

Sure, and I have a Hitachi 2.5" 500GB drive, now 7.5 years old, in a Dell laptop that runs every day (knock on wood) at 45-52°C. Doesn't mean I don't prefer my HDDs to be around 30-40°C.
 
Samsung: the only spinning-drive brand that hasn't failed on me. I have 3 of them from back in the day running 24/7.
[image: plex-crystaldisk-samsung.jpg]
 

I just sold 4 of these.
I dismantled my 2 RAID-Z1 arrays, one with these Samsungs and one with 3TB Seagate NAS HDDs.
The Samsung ones were brilliant. Not enough for statistics, but they ran 24x7 for more than 6 years. One of the Seagates, which had less runtime, failed. A 2TB WD Green, running in a mirror for even less time, already has some reallocated bad sectors.

Now I'm running 4 WD white-label 8TB drives in the NAS, pulled from 8TB WD My Books.
They are pretty noisy.
 
These, I suppose:

[image: 22-172-025-s03.jpg]


Replacing 48x 2TB Seagate ST32000444SS drives with 6x 12TB Seagate ST12000VN0007 drives, and downgrading from SAS to SATA. The 48x 2TB will become a big-ass backup server instead of being split between my NAS and my backups.
 
So, I have 2x 8TB WD Reds in my Synology box. Only using about 4TB.

I happened to pick up a few more 8TB Reds when they were on sale. You guys think I should keep them around in case some drives fail? I.e., will the 8TB drives still be good in 5 years? Lol, we might move to a different storage format in that time.
 
I got a bunch of Reds. They're OK.

But TLER is a moot point with ZFS: TLER just keeps a hardware RAID controller from dropping a drive that spends too long in error recovery, and ZFS will simply wait out the recovery and handle the error itself.
 

Unless you're running a mirror in that 2-disk machine, you're going to lose 100% of your data on a failure. A spare disk is useless to you unless you run a RAID 5 or 10.
 
I just dropped two 12TB Seagate IronWolf drives into my disaster-proof NAS, and then I read a ton of posts about how people hate Seagate and their failure rates :oops:
 
This is really model-to-model; I did some extensive (retail) research on the current IronWolfs and don't see much issue with them today. Though obviously you'll want to monitor their health and run them with some level of redundancy.
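
For the "monitor their health" bit, here's a minimal sketch of the kind of periodic check I mean, assuming smartmontools is installed; the device path is just a placeholder:

```python
import subprocess

def smart_health(device: str) -> bool:
    """Return True if smartctl's overall health self-assessment passed."""
    result = subprocess.run(
        ["smartctl", "-H", device],  # -H asks only for the overall health verdict
        capture_output=True, text=True, check=False,
    )
    return "PASSED" in result.stdout

# Hypothetical device path; run from cron and alert when this returns False.
print(smart_health("/dev/sda"))
```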
 

I wouldn't be too worried. Seagate did indeed have a period of excessive failure rates in their early 1.5 to 3TB consumer drives, but looking at Backblaze's most recent data, that's mostly a historical note at this point. People are way too quick to take one point in time, or their own anecdotal evidence, and project it over an entire brand for all eternity, but that's just not how the real world works.

Remember the IBM Deskstar ("Deathstar") click of death? IBM sold that division to Hitachi, who renamed it HGST (and later sold it to WD), and a few years later their drives were widely regarded as the best, most reliable drives money could buy.

Last time I loaded up my storage server, the Seagate 1.5TB failure rates were still pretty recent, so I avoided them then. I went with 12x 4TB WD Reds.

This time I did my research again and found that Seagate's enterprise drives were equivalent to WD's failure-rate-wise (and better in some studies), so I went with 12x 10TB Seagate enterprise drives.

I wouldn't worry about your IronWolfs. They are probably fine.

That said, remember the golden rules of data storage:

1.) Always use redundant RAID configurations.*
2.) Redundant RAID configurations are not a replacement for backups. Still do regular backups.**
3.) Those regular backups should be offsite.***



*RAID 5 is mostly considered obsolete. You really want RAID 6 or better, so that you still have redundancy during a rebuild if you have to replace a drive (see the back-of-the-envelope sketch at the end of this post).

**RAID protects against drive failure. It does not protect against fat-finger deletions, file system corruption, ransomware, controller failure, or fire & flood.

***If you have a pipe burst in your house, and your backups are right next to your main storage, both are getting destroyed. Offsite offsite offsite.
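
To put a rough number on that RAID 5 footnote, here's a back-of-the-envelope sketch of the odds of hitting an unrecoverable read error (URE) during a rebuild. It assumes the commonly quoted 1-per-10^14-bits consumer spec and independent errors; enterprise drives are typically rated 10^15, so treat it as illustrative, not gospel:

```python
import math

def p_ure_during_rebuild(tb_to_read: float, bits_per_ure: float = 1e14) -> float:
    """Probability of at least one URE while reading tb_to_read terabytes."""
    bits = tb_to_read * 8e12               # 1 TB = 8e12 bits
    return 1.0 - math.exp(-bits / bits_per_ure)

# Rebuilding a 6x12TB RAID 5: the 5 surviving drives (60TB) must all read cleanly.
print(f"{p_ure_during_rebuild(60):.0%}")        # ~99% at the 1e14 consumer spec
print(f"{p_ure_during_rebuild(60, 1e15):.0%}")  # ~38% at the 1e15 enterprise spec
```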
 
Thanks. Those two disks are for my waterproof/fireproof NAS, which is the backup for my regular NAS and runs RAID 1, as it's a two-disk system. If the Earth opens up below me and swallows my NAS, I'll be SOL (and probably dead myself).
 
Being in IT for 25 years, I've had more Seagate and Maxtor drives fail than Western Digital or HGST. I personally will never buy another Seagate unless I am forced to. I use WD Reds in my Synology 1515+, going on 3 years with no issues.

Our EMC storage has Seagates, and they have failed a lot over its 7-year life.
 
Ahh, Maxtor. I was warned, but the deal was so good at the time for a 160GB PATA drive. Replaced twice in less than 3 years. Quantum, of Bigfoot/Fireball fame, was bought out by Maxtor; Maxtor was later bought out by Seagate.
 
I lost my entire digital life up to that point in 2001 due to a Maxtor drive. I'll never have that experience again.

I've never been in IT professionally, but I have dabbled with enterprise hardware at home for many years.

With my 12 WD Reds, over 4 years in my basement, I had 3 failures. Not completely dead disks, mind you, but they started logging a few read and write errors in ZFS, so I swapped them out and replaced them under warranty.

I've only had the 12 Seagate enterprise drives in my server for 7-8 months, and thus far it is smooth sailing, but 7-8 months is not enough time to collect useful data. Time will tell. Either way, with the kind of backups I take and the RAID redundancy, I don't feel like it is a big deal. If a drive starts failing, I'll pop it out, RMA it, and resilver with the replacement. With my ZFS setup and backups, drive reliability has become a matter of minor nuisance, not one of massive data loss.

I wish I had exercised better practices back in 2001 with that Maxtor drive. I had a lot of nostalgic shit on that drive.
 
Well, I've given up on Seagate. I used to work as the admin in a server software test lab, and the Seagate Constellations had by far the highest failure rate: I was replacing 10-20 per month while we were running about 30,000 of them. When we started a new project with their 4TB drives, every single one of the first shipment, 96 drives, went bad in the first month. They claimed it was a firmware problem and sent us a whole new set, but even the replacements had a very high failure rate: 16 in the first month and 30 over the next 6 months, up until I left. My company ended up switching to HGST drives because of it. Because of those experiences over the course of the 6 years I was employed in that lab, I will not trust Seagate drives again.

Fun point: while I was admin at that lab, we had just over 2,000 Hitachi drives that were all put into place before I was even employed there, and I only ever had to replace 8 of them over the 6 years I was there. When I left, those storage arrays were still in use at just over 10 years old. I trust HGST drives because of that.
 
Sounds like they might be some variant of the WD80EFAX? If I plan on putting a few 8TB drives in a ZFS pool, is there something better-performing and more reliable I should go with in the $150-180 range per drive?
 
Helium drives use a little less power. But to guarantee getting a helium drive you would have to step up to 10TB.
 
How about you look at the report? See how many failures there were compared to all the other brands?

Are we looking at the same report?

Because my takeaway from looking at the Q2 data is as follows:

Toshiba came away with a perfect score. Pretty damned impressive.

WD looked pretty good on their 3TB and 4TB drives, but their 6TB drives sucked, so on the whole they came out pretty average.

HGST also looked pretty average, though far more consistent across the board.

Seagate, too, looked pretty average to me.

I don't know how they calculated their annualized failure rate, though (see the sketch at the end of this post).


Now, if we look at the lifetime chart, the current worst offenders are the 3TB and 6TB WD drives. The older 4TB Seagate drives are a little high, but not terrible. The newer 10TB Seagate drives come out looking absolutely fantastic.

I'm hoping those ST10000NM0086 results hold up. My 12 drives downstairs are ST10000NM0016s, the same drive hardware-wise but with built-in encryption for easy "wiping". (I didn't need the feature, but they were the same price when I was shopping.)
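
On the AFR question: as I understand it, Backblaze annualizes from drive-days rather than drive counts, roughly like this sketch (the counts here are made up for illustration):

```python
def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """Backblaze-style AFR: failures per drive-year, as a percentage."""
    return 100.0 * failures / (drive_days / 365.0)

# Hypothetical: 30 failures across 10,000 drives that each ran a 91-day quarter.
print(f"{annualized_failure_rate(30, 10_000 * 91):.2f}%")  # ~1.20%
```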
 
Did you even look at the sample sizes? Seagate and HGST are the only ones with enough drives to make any reasonably educated failure assumptions.

All the large Seagate 8+TB drives sit at about a 1.1-1.2% average failure rate, which seems on par with acceptable manufacturing error for HDDs; the HGST 8TB HUH728080ALE600s are at a comparable 1.18%.

Sure, the Seagate 4TB drives' 2.84% failure rate is high compared to the 4TB HGSTs, but that's also comparing enterprise CoolSpin drives to desktop-model drives. Could that be a factor? Absolutely.
They also list the drives by cost, and the Seagates are the cheapest; you get what you pay for sometimes.

I'm not disputing that the 4TB Seagates had the highest failure rates, just saying you have to look at the whole picture with all the information, not just take the stats at face value.
 
I looked at it. You're wrong.

No, I am not. You need to look at it.


Yes, we are. You aren't looking at the big picture. Seagate had 27k drives and 134 failures. Compare that against the next brand: 15k drives and only 10 failures. No other brand even got near that many failures compared to Seagate. Even if you bump the drive count up to 27k and double the failures, it still isn't close. That just proves how unreliable their drives are.


See my response above. Even if you double or quadruple the drive counts from the other brands, their failures still won't touch how many Seagate had.

Like I previously said, I've had 25 years of IT experience, and Seagates are the worst drives for failures. Backblaze continues to prove this, as does my own documentation from our enterprise-grade EMC storage.
 
Clearly you didn't take statistics; you can't just "bump it up" and call it a larger sample size. I personally would consider samples above 1,000 drives a good basis for analysis in this scenario (500 at minimum); see the sketch below.

With your 25 years of experience, I would expect you to be able to recognize the ebbs and flows of brands, new technologies, and varying data points due to differences in environments. You've got multiple people telling you you're wrong; perhaps "you need to look at it," as you put it.
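
To make the sample-size point concrete, here's a minimal sketch using a normal-approximation 95% confidence interval on a failure proportion. The 134-of-27k figure is the one quoted upthread; the 5-of-1,000 fleet is made up, and this still ignores drive-days and model mix:

```python
import math

def failure_rate_ci(failures: int, drives: int, z: float = 1.96):
    """Normal-approximation 95% confidence interval for a failure proportion."""
    p = failures / drives
    se = math.sqrt(p * (1.0 - p) / drives)
    return p - z * se, p + z * se

print(failure_rate_ci(134, 27_000))  # ~(0.0041, 0.0058) -> 0.41% to 0.58%
print(failure_rate_ci(5, 1_000))     # ~(0.0006, 0.0094) -> far less certain
```

The raw counts alone can't tell you where in that much wider range a small fleet's true rate actually sits.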
 
Funny, I see others telling them they're wrong. You can believe whatever you want, fanboys, but I have the experience and the facts backing me. I won't touch a Seagate, as they have proven many times that they fail, and I've got the work orders and lost data that show it.
 
Your name suits you.
By all means, don't improve your argument; insult his username instead.
That's totally going to impress the masses :rolleyes:.

What others? Experience is different for everyone; don't you think a workload on NetApp equipment might be different than on a Compellent or VNX one?

We get it, you had bad experiences with Seagate; that doesn't change the "facts" of Backblaze's reports.
 
Your facts are flawed. Backblaze's report obviously shows Seagate being the worst. Sorry that you are too dumb to figure that out, kid.
 
We'll just have to agree to disagree then; in the meantime, keep using your skewed punch-card computing logic on false pretenses.
 