Seagate Laying Off 2,217 Employees

LOL time for some warm fuzzies.

Ok so RAID 5 has an unrecoverable read error (URE) rate of about 1 per every 12 TB read. So if you have three 3 TB drives (i.e. one of them parity), you have a 50% chance in hell of not losing ALL of your data. If you have three 6 TB drives, you have an almost 100% chance of losing all your data. Hell, move to RAID 6, then you will at least have a 50% chance.

Raid for protection died a long time ago. You raid for speed now :)
WTF? RAID isn't generating the errors, your disks are. Also, I tend to use RAID 10 as it provides 100% redundancy and increased speed - theoretically 200% write and 400% read.
 

That's not how UREs work.

If no drive dies, the only way you'll have data loss due to a URE is if it happens in exactly the same spot on your primary as well as all of your redundancy. A modern 4TB drive with a 4k sector size has over a billion sectors. The chance of exactly the same two (or more, depending on RAID level) sectors needed for data loss failing across multiple billion-sector drives is almost infinitesimal.

If you have a RAID5 array and one drive dies, the replacement and rebuild process is when you are exposed to UREs, as you have no redundancy left, and since drives are so large these days, UREs resulting in data loss are all but guaranteed during this process. Thus you have data loss. This is why RAID5 is no longer recommended.
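To put rough numbers on that, here's a quick back-of-the-envelope sketch. It assumes the commonly quoted consumer-drive spec of one URE per 10^14 bits read (roughly one per 12.5 TB); the array sizes are only examples, and real drives often do better than the spec.

```python
import math

# Odds of hitting at least one URE while rebuilding a degraded RAID5 array,
# assuming the commonly quoted spec of one URE per 1e14 bits (~12.5 TB read).
URE_PER_BIT = 1e-14
BITS_PER_TB = 8e12

def p_ure_during_rebuild(drives, tb_per_drive):
    """A RAID5 rebuild has to read every surviving drive end to end."""
    bits_read = (drives - 1) * tb_per_drive * BITS_PER_TB
    return -math.expm1(bits_read * math.log1p(-URE_PER_BIT))

print(f"3 x 3TB RAID5:  {p_ure_during_rebuild(3, 3):.0%}")   # ~38%
print(f"3 x 6TB RAID5:  {p_ure_during_rebuild(3, 6):.0%}")   # ~62%
print(f"10 x 6TB RAID5: {p_ure_during_rebuild(10, 6):.0%}")  # ~99%
```

So at spec rates a small array has a coin-flip-ish chance during a rebuild, and a big one is all but guaranteed to hit at least one URE somewhere.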

This is why RAID6 is recommended these days. If one drive fails and you have to rebuild, you still have one drive of redundancy left and are almost invulnerable to UREs, unless you lose another drive, in which case you are guaranteed to have data loss due to UREs.

Now keep in mind: if in RAID5 you lose one drive, or in RAID6 you lose two drives, and thus have guaranteed data loss, the overwhelming majority of your data will still be good. It will only be damaged in one, or a few, spots, and a good RAID system will be able to tell you exactly which file was damaged, so you can attempt to repair or replace it.

And this only gets better with modern self-healing systems like ZFS, which actively recover and repair any latent UREs they encounter during normal operation, so you don't have UREs adding up over time and increasing that infinitesimal risk of two or more UREs happening in the exact same spot.


On the other hand, RAID for speed is IMHO of limited use. It tends to increase sequential speeds, but decreases all other performance, and in the case of striping (RAID0) the reliability hit is rather large.
 


Raid is still used for protection, and I wouldn't want a drive failure to shut down a server.

Most of my office servers are Raid 6 (for max capacity), Raid 10 (for write performance), or Raid 1 if I don't need much disk space (like the OS drive).
I also have my raid controllers set to do read scans every week, so any bad spots should be mapped out, which decreases the odds of a URE during a rebuild.

I do still have a Raid 5, which consists of ten 6TB drives :eek:

However, this is on my backup server, and the raid is just a temporary holding place for the data before it's copied off to tape.
Figured Raid 5 was better than Raid 0, since Raid 5 would catch any disk errors.
Not worried about rebuild times, since if a disk failed, it would be faster to replace the disk, wipe the array, and run a script to rebuild the backup data from the original servers.
 

My issue now is that rebuild times are so long that by the time you get it backed up again, the system you are backing up may have failed too! Most big data houses already have a redundant backup. The really big ones have many redundant copies.
 
Wow. How times change. I remember when Seagate was the gold standard. I remember Maxtor too and owned drives from each.

I remember Seagate being the gold standard, buying Maxtor, Maxtor being shit and improving slightly, to the point where Seagate was using Maxtor-branded drives in Seagate-branded externals... and then Seagate retiring the Maxtor brand and becoming worse than either of them had ever been.

Now? Won't touch Seagate. Period. I will go out of my way to avoid using anything from them.
 
This is why RAID6 is recommended these days. If one drive fails and you have to rebuild, you still have one drive of redundancy left and are almost invulnerable to UREs, unless you lose another drive, in which case you are guaranteed to have data loss due to UREs.

Now keep in mind: if in RAID5 you lose one drive, or in RAID6 you lose two drives, and thus have guaranteed data loss, the overwhelming majority of your data will still be good. It will only be damaged in one, or a few, spots, and a good RAID system will be able to tell you exactly which file was damaged, so you can attempt to repair or replace it.

And this only gets better with modern self-healing systems like ZFS, which actively recover and repair any latent UREs they encounter during normal operation, so you don't have UREs adding up over time and increasing that infinitesimal risk of two or more UREs happening in the exact same spot.

Even Raid 6 isn't good enough anymore. The new hotness is ZFS, as you mentioned, because it is self-healing, so Raid Z2 (aka Raid 6 with ZFS) is the bare minimum, especially as pool sizes get into the double-digit TB range. By that point, if you have drive bays to spare, it's a good idea to go for Raid Z3 (technically "Raid 7") for triple-drive redundancy. That's how I plan to build my NAS.
 
Seagate was pretty good until maybe around the Barracuda 7200.7.
Once they bought Maxtor (crap reliability) and started manufacturing in China (I used to only get their made-in-Singapore drives), everything started going to the shitter.

Also, the Barracudas were never the fastest, coolest or quietest, but never the worst either, sort of middle-of-the-road.

Some of Seagate's drives nowadays are still good, but mainly the enterprise stuff, and anything Samsung-derived and/or non-SMR.
Pretty much. Seagate still makes good and even great drives, but some of their product line is just tainted meat. That's actually true of both of the mechanical HDD manufacturers these days; for some reason they are fine with turning out turds for certain segments of the market, usually the consumer level.
 
I am absolutely terrified to buy Seagate.

Not that it's a complete sample, but I have a SAN full of 8TB Seagate Enterprise Capacity drives, and for about a year now, no issues. Keeping the ol' fingers crossed, but they were the fastest I tested at the time. Been happy with them. I was burned by their 3TB drives a while back, though.
 
ZFS RAID 5 and 6 is just plain cool. The ZFS file system is just plain cool for any kind of NAS.

I haven't needed to mess with this stuff personally until very recently, but my discovery of FreeNAS and the relatively easy setup of offsite automated backups has made some serious backup options available to me.

I'm putting together a ZFS RAID5 with 3 drives that does real time cloud sync with all my systems and devices... AND sends the data to an offsite 8TB HDD that has two layers of file history. Frankly, that's enough redundancy for most of us.

It's just amazing what you can do these days. FreeNAS, the ZFS filesystem and NextCloud are just plain an impressive combo.
 
You can actually configure ZFS to take snapshots and have like 30 or more layers of file history with minimal wasted space. Pretty cool stuff.
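For anyone curious what that looks like outside the FreeNAS GUI, here's a minimal sketch of rolling snapshots from a script. It assumes a hypothetical dataset called tank/data and the standard zfs command line; FreeNAS's built-in periodic snapshot tasks do the same thing more conveniently.

```python
import subprocess
from datetime import datetime

DATASET = "tank/data"   # hypothetical dataset name
KEEP = 30               # how many snapshot "layers" of history to keep

def zfs(*args):
    """Run a zfs subcommand and return its stdout."""
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout

# Take a new timestamped snapshot.
zfs("snapshot", f"{DATASET}@auto-{datetime.now():%Y%m%d-%H%M%S}")

# List this dataset's snapshots oldest-first and prune everything beyond KEEP.
snaps = [s for s in zfs("list", "-H", "-t", "snapshot", "-o", "name",
                        "-s", "creation", "-d", "1", DATASET).splitlines()
         if s.startswith(f"{DATASET}@auto-")]
for old in snaps[:-KEEP]:
    zfs("destroy", old)
```

Because snapshots are copy-on-write, the older layers only consume space for blocks that have since changed, which is why keeping 30+ of them costs so little.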
 
Seagate sucks. Weak sales from garbage products.

Same applies for WD? Or what's their excuse for losing sales? :)

The truth is both Seagate and WD held on to HDDs way too long while the SSD segment exploded in volume. And both companies are nobodies in the SSD segment.
 

Protip.

WD owns Hitachi now. In Backblaze's results, WD and HGST drives are among the most reliable.

WD also owns Sandisk.

Pretty much only Seagate looks likely to remain irrelevant.
 

It's like trying to argue about who is the best of two dying dinosaurs in the HDD segment. SSDs now account for 30-35% of all storage sales. In the client segment it's even over 50%.
 
Protip.

WD owns Hitachi now. In Backblaze's results, WD and HGST drives are among the most reliable.

WD also owns Sandisk.

Pretty much only Seagate looks likely to remain irrelevant.
Inaccurate pro-tip.

HGST was divvied up between WD and Toshiba.
Some Toshiba models are rebadged HGST = most reliable
Some WD models are HGST-developed = most reliable (e.g. He8, He10, He12)

A fair number of WD models have poor reliability (e.g. the 6TB in Backblaze's stats)
 
I remember Seagate being the gold standard, buying Maxtor, Maxtor being shit and improving slightly, to the point where Seagate was using Maxtor-branded drives in Seagate-branded externals... and then Seagate retiring the Maxtor brand and becoming worse than either of them had ever been.

Now? Won't touch Seagate. Period. I will go out of my way to avoid using anything from them.

Maxtor never did me wrong. Good cheap drives. Maybe I either got lucky, or I got out before they went bad?

It's funny how HDDs fluctuated in terms of quality - I even remember a stretch where people wouldn't touch WD, and now they appear to be the last gold standard HDD left - hell, even I owned a 1 TB Caviar Black drive.
 
I've owned one Seagate HDD, and it died within a year. I've owned many WD HDDs, and only had a couple die, but only after several years. I know that everybody has a different story about their tech experiences, maybe I've just been lucky with WD, but I stick with what works for me.

Also, I just don't like hard drives with capacities above 1 TB. These newfangled multi-terabyte drives just give me the heebie-jeebies. Too much potential for huge amounts of data loss. I would rather have several 1 TB drives. And I still do quarterly backups to DVDs. The above-mentioned Seagate HDD taught me that lesson. And it hurt. A lot.


If I'm being silly about not trusting HDDs above 1 TB, please explain why. Until then, you kids, with your multi-TB hard drives, get off my lawn!
 
It's funny how HDDs fluctuated in terms of quality - I even remember a stretch where people wouldn't touch WD, and now they appear to be the last gold standard HDD left - hell, even I owned a 1 TB Caviar Black drive.
Hell no! Backblaze's stats have WD Red/Green 3TBs and 6TBs dropping like flies.
Their 4TBs seem to be rock solid though (I have about 12 of them running 24/7 after patching with wdidle - no SMART errors / reallocs / pending reallocs yet)
 

Meh, I have 32 3TB Reds in a SAN, been that way for 3+ years, and never had even a reallocated sector count go up.
 
Even Raid 6 isn't good enough anymore. The new hotness is ZFS, as you mentioned, because it is self-healing, so Raid Z2 (aka Raid 6 with ZFS) is the bare minimum, especially as pool sizes get into the double-digit TB range. By that point, if you have drive bays to spare, it's a good idea to go for Raid Z3 (technically "Raid 7") for triple-drive redundancy. That's how I plan to build my NAS.

When I set mine up I debated going with all 12 drives in one RAIDz3 vdev, or splitting them into two RAIDz2 vdevs (in the same pool) for a ZFS equivalent of RAID60. In the end the latter is what I did.

IMHO, RAIDz3 is a little over the top. It will impact performance, and RAIDz2 is already very good protection.

For you to have UREs resulting in minor corruption with RAIDz2 you'd have to lose two drives at the same time (or - you know - lose one, and have a second fail before the rebuild is complete). This is certainly possible, but rather unlikely once you get past the early downward slope of the bathtub curve on your hard drives. Do a thorough test on new drives, and it shouldn't be a problem at all.

For complete data loss you'd need two more drives to fail before the rebuild of the first one completes. That would be a pretty bad day, and also unlikely to happen.

The thing is, RAID is not a backup. You should have a backup of your data whether you RAID it or not, as RAID doesn't protect against accidental deletions/overwrites, file system corruption, ransomware, etc., and if you already have a backup, RAID isn't really protecting you against data loss. RAID is protecting you against the inconvenience of having to restore your backup (which, in my case, with 12TB on Crashplan, would be a rather large inconvenience unless I miraculously get good bandwidth from them during the restore).

So, with this in mind, you then have to decide: if I have a backup already - which I should - how much added protection over a good RAIDz2 configuration do I really need?

You can use this ZFS Reliability calculator (unfortunately no support for multiple vdevs) to see what the real risk is.

Let's do an example:

As we all know, there are optimal drive counts for different ZFS configurations (as I summarized here, years ago).

So for comparing RAIDz2 to RAIDz3 we wouldn't optimally have the same number of drives (the quick sketch after the list below shows the rule behind these counts).

We would want:

RaidZ: 3, 5 or 9 drives. (17 drives also fits the pattern, but this is above the 12 recommended as a max by the ZFS documentation.)

RaidZ2: 4, 6 or 10 drives. (and 18, which is above 12, as above and not recommended)

Raidz3: 7 and 11 drives (and 19 which is above 12, as above and not recommended)
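
The rule behind those counts, as I understand it, is simply a power-of-two number of data disks plus the parity disks for the level. A quick illustrative sketch (it also spits out 5-disk RAIDZ3 and the over-12-disk widths, which I'd skip for the reasons above):

```python
# "Optimal" vdev widths: data disks as a power of two, plus parity disks.
for parity, label in [(1, "RAIDZ"), (2, "RAIDZ2"), (3, "RAIDZ3")]:
    widths = [2 ** n + parity for n in range(1, 5)]
    print(f"{label}: {widths}")
# RAIDZ: [3, 5, 9, 17]
# RAIDZ2: [4, 6, 10, 18]
# RAIDZ3: [5, 7, 11, 19]
```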

We don't want RAIDz at all, as we prefer to avoid UREs during rebuild if one drive fails.

So let's compare a 6-drive RAIDz2 to a 7-drive RAIDz3 configuration. This has the benefit of providing the same amount of available storage.

Let's assume we are going to use 4TB drives, the MTBF for these drives in years is 0.5, and the mean time to replace is 72 hours if one goes bad. Let's also assume a 600MB/s resilver speed once we replace the drives.

The RAIDz2 configuration would be predicted to have data loss due to hard drive failure once every 1.7*10^11 hours.

The RAIDz3 configuration would be predicted to fail once every 1.6*10^14 hours.

So, you are correct. RAIDz3 is a lot better. 942 times better in fact.

But is it relevant? Probably not.

The 1.7*10^11 hours for the RAIDz2 is 19.4 million years.

The 1.6*10^14 hours for the RAIDz3 is 18.2 billion years.

I don't plan on living long enough for this difference to be relevant to me :p
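
If anyone wants to check the unit conversion, here's the arithmetic; the two hours-to-data-loss figures are just the calculator outputs quoted above, and the rounding comes out a hair different from the numbers I quoted, but close enough:

```python
# Convert the calculator's predicted hours-to-data-loss into years and
# compare the two layouts. Input figures are the ones quoted above.
HOURS_PER_YEAR = 24 * 365.25

raidz2_hours = 1.7e11
raidz3_hours = 1.6e14

print(f"RAIDz2: {raidz2_hours / HOURS_PER_YEAR / 1e6:.1f} million years")  # ~19.4
print(f"RAIDz3: {raidz3_hours / HOURS_PER_YEAR / 1e9:.1f} billion years")  # ~18.3
print(f"Ratio:  {raidz3_hours / raidz2_hours:.0f}x")                       # ~941
```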
 
Meh, I have 32 3TB Reds in a SAN, been that way for 3+ years, and never had even a reallocated sector count go up.
Good to know. I have a few very old 2TB Greens that are still going strong. Those were supposedly problematic as well.
 

I replaced my 2TB and 3TB greens with 4TB reds 2-3 years ago.

The reds have been OK. One started having read errors after 2.5 years of 24/7 use so I RMA'd it and resilvered without problems. The rest are continuing to work just fine.

The reason Greens had problems was their IntelliPark power-saving feature. It was set way too aggressively for RAID use, which would result in a park and a reactivation a few times a minute. It was possible to change this value and lower the park count using a firmware tool (wdidle3.exe), but many (myself included) were unaware of this at the time.

Mine started failing after about 600k head parks.

Properly wdidle'd green drives could last for a long time in a NAS. I still wouldn't recommend it due to lack of TLER, but it could work.
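
If you want to keep an eye on that yourself, here's a small sketch that pulls Load_Cycle_Count (SMART attribute 193, the head-park counter) via smartmontools. The device path is just an example, and some drives format the raw value differently, so treat it as a starting point.

```python
import subprocess

def load_cycle_count(device="/dev/sda"):
    """Return the raw Load_Cycle_Count (head parks) reported by smartctl -A."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Load_Cycle_Count" in line:
            return int(line.split()[-1])  # raw value is the last column
    return None

print(load_cycle_count("/dev/sda"))
```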
 
Holy crap! 600K parks?!
Good thing I've never used a Green or Red drive without first modding it with wdidle (300s).
Yes, the 4TB Reds somehow also had the default park interval set too low!
 
For you to have UREs resulting in minor corruption with RAIDz2 you'd have to lose two drives at the same time (or - you know - lose one, and have a second fail before the rebuild is complete). For complete data loss you'd need two more drives to fail before the rebuild of the first one completes. That would be a pretty bad day, and also unlikely to happen.

We don't want RAIDz at all, as we prefer to avoid UREs during rebuild if one drive fails.
I don't plan on living long enough for this difference to be relevant to me :p

If you have been bitten by bad drives as often as I have... there's no precaution that is too over the top for large amounts of data. I'm also thinking about future upgrades within the same setup... so while one may have a pool size small enough to not worry about Raid Z2 rebuild issues, I still went with Raid Z3 to allow for safer rebuilding when expanding capacity, on top of redundancy. I figure by the time the 2TB or 3TB drives need replacing, 6 or 8TB drives will be about as cheap. (I also planned to do a 7-drive stripe, leaving the 8th bay available for rebuilds or capacity expansion without degrading the pool by removing a drive to do it first.)

But yes, you're right, Raid is not a backup method. My FreeNAS build will be doing weekly rsync backups to a 5-bay Raid 5 external (Raid 5 with five drives is the same pool size as a Raid Z3 with seven drives), along with an offsite "cloud" backup service getting snapshots. Even with Raid Z3, you can't be too careful. It doesn't even have to come down to drive reliability: theft, fire, plane from the sky...
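
The weekly job itself doesn't need to be anything fancy. A minimal sketch, assuming the pool is mounted at /mnt/tank and the external box at /mnt/external (both paths hypothetical; FreeNAS's own rsync tasks or a cron entry would do the same):

```python
import subprocess

SRC = "/mnt/tank/"           # trailing slash: copy the contents, not the dir
DST = "/mnt/external/tank/"

# Archive mode, preserve hard links, delete files removed from the source,
# and print a transfer summary when done.
subprocess.run(["rsync", "-aH", "--delete", "--stats", SRC, DST], check=True)
```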

Unlikely? Perhaps. But I also said the same thing about 3 Seagate 2TB drives from 3 different batches all failing in the exact same week. Some of the stuff on that pool was a bitch and a half to replace, and what didn't get replaced... can't be. Given some of the stuff that I had lost, yeah. It was heartbreaking and I'm not going to let that happen again.
 

That almost sounds like something else is to blame... Bad PSU, power spike, something like that...
 
I call BS.
Conner hard drives are the future.
Screw Conner. Now Micropolis was the shiznit back in the day!

 