RAID Strategy with 8x 6TB Disks

Planning on using 8x 6TB WD Red NAS disks in RAID on a Synology NAS. Because of their size/speed I'd like to opt for RAID6 or SHR-2 for two-disk redundancy (they're essentially the same when all disks are identical, but SHR-2 would let me upgrade to larger disks later). Do you think a hot spare is also a good idea, or, given my use case (non-mission-critical home data - anything mission-critical is backed up to a local disk and then to CrashPlan), would that be overkill?

With 8x 6TB in RAID6 it'll be ~36TB (6 disks) of storage and ~12TB (2 disks) of parity. The other option would be RAID10: ~24TB (4 disks) of storage and ~24TB (4 disks) of redundancy, but no expansion capability. Unless it's an absolutely terrible idea to do RAID6 or SHR-2, I'd prefer to stick with that plan.
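
To make the trade-off concrete, here is a rough back-of-the-envelope sketch (plain Python; the drive count and size are just the numbers from this post, and real usable space will be a bit lower after filesystem overhead and TB-vs-TiB accounting):

```python
# Rough capacity / fault-tolerance comparison for 8 x 6TB drives.
DRIVES = 8
SIZE_TB = 6

# RAID6: two drives' worth of parity; any two drives may fail.
raid6_usable = (DRIVES - 2) * SIZE_TB          # 36 TB
raid6_guaranteed_failures = 2

# RAID10: half the drives are mirror copies; in the worst case,
# losing both drives of one mirror pair kills the array.
raid10_usable = (DRIVES // 2) * SIZE_TB        # 24 TB
raid10_guaranteed_failures = 1

print(f"RAID6 : {raid6_usable} TB usable, survives any {raid6_guaranteed_failures} failures")
print(f"RAID10: {raid10_usable} TB usable, survives any {raid10_guaranteed_failures} failure (more if you're lucky)")
```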

Please refrain from bashing Synology and their lack of bitrot protection, ECC RAM (unless you get like a top-end $3k unit), etc.
 
Your redundancy calculations are off: they assume a PERFECT failure case that causes minimal damage. It's just not likely that only one disk in each mirror will fail.

RAID6 is likely your best bet. Unless you're running a database or a highly loaded website from it, it will give acceptable reliability and speed.

The problem with RAID6 comes in at this disk size. With 6TB disks I would keep a cold spare handy to plug in within 24h of a failure, and make sure you run a scrub/parity check on the RAID6 every week or so. That will detect and fix bitrot after the fact, but the main purpose is to stress the disks a bit, so you know they will be able to withstand the stress of a rebuild when a disk fails.
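
For what it's worth, on a plain Linux md setup a weekly scrub is just a write to sysfs; something like the minimal sketch below (the md2 device name is only an example, and on a Synology box the supported route is the scheduled Data Scrubbing task in Storage Manager rather than poking sysfs directly):

```python
# Minimal sketch: kick off a consistency check ("scrub") on a Linux md
# RAID array by writing to sysfs. Requires root; "md2" is only an example
# device name - check /proc/mdstat for the real one.
from pathlib import Path

def start_scrub(md_device: str = "md2") -> None:
    action = Path(f"/sys/block/{md_device}/md/sync_action")
    action.write_text("check\n")   # "repair" also rewrites mismatched blocks

if __name__ == "__main__":
    start_scrub()
```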
 
Your redundancy calculations are off: they assume a PERFECT failure case that causes minimal damage. It's just not likely that only one disk in each mirror will fail.

RAID6 is likely your best bet. Unless you're running a database or a highly loaded website from it, it will give acceptable reliability and speed.

The problem with RAID6 comes in at this disk size. With 6TB disks I would keep a cold spare handy to plug in within 24h of a failure, and make sure you run a scrub/parity check on the RAID6 every week or so. That will detect and fix bitrot after the fact, but the main purpose is to stress the disks a bit, so you know they will be able to withstand the stress of a rebuild when a disk fails.

Thanks - I'll definitely plan on having a cold spare handy.

I did not post any calculations on redundancy. Those figures just outline the number of disks that will be needed and the approximate capacity sacrificed to provide the redundancy.
 
I travel quite a bit for work (60%) - although I'm typically only gone 2-3 days mid-week - I'm thinking I will do a hot spare, as well...for those random times when I won't be able to get there within 24h.

With RAID6 plus the hot spare, that still gives me 5 drives' worth of storage (~30TB), which is plenty. I still think RAID6 is worth it over RAID10 (~24TB) for my use case.

Thoughts?
 
Instead of a hot spare, can you add the drive as additional redundancy? For example, in ZFS you can have RAIDZ2, which is equivalent to RAID-6, or RAIDZ3, which adds a third disk of redundancy. It's better to have that drive already working in the array than to run RAID-6 with a hot spare.
 
Instead of a hot spare, can you add the drive as additional redundancy? For example, in ZFS you can have RAIDZ2, which is equivalent to RAID-6, or RAIDZ3, which adds a third disk of redundancy. It's better to have that drive already working in the array than to run RAID-6 with a hot spare.

Unfortunately, Synology does not support anything between RAID6 and RAID10 for my 8 drive setup. Those are the only viable options (RAID5 would be suicide).

RAID 10: better performance, no parity penalty, and much faster rebuild times in case of a failed drive.

http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-count/

I'm tempted. ~24TB is plenty for me. Currently living on a ~6TB NAS (with a secondary ~10TB NAS for backups/etc.) and at about 70% utilization. The more that I think about it, RAID6 with a hot spare is pretty dumb. I'll probably start doing iSCSI backups of my Windows boxes - so the RAID10 will come in handy.
 
Instead of a hot spare, can you add the drive as additional redundancy? For example, in ZFS you can have RAIDZ2, which is equivalent to RAID-6, or RAIDZ3, which adds a third disk of redundancy. It's better to have that drive already working in the array than to run RAID-6 with a hot spare.

I'd say that with RAID-10, hot spares are important since most people will use 2-disk mirrors. You could use 3, 4 or more, but then you're looking at 33%, 25% or lower usable capacity. Hot spares offer a good option to get the array rebuild started as soon as possible to minimize the risk of the dead drive's mirror dying before you can manage the rebuild.

With ZFS, I'd say do RAID-Z3 before you do RAID-Z2 + 1 hot spare. But a hot spare is still better than no hot spare if you're considering the same RAID level.
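
With 8 drives the usable capacity works out the same either way, which is the point - the extra drive is contributing redundancy instead of sitting idle (quick sketch below; capacities are raw, before ZFS overhead):

```python
# 8 x 6TB: RAID-Z2 + hot spare vs RAID-Z3.
drives, size_tb = 8, 6

z2_plus_spare_usable = (drives - 1 - 2) * size_tb   # 7-wide Z2 -> 30 TB raw
z2_plus_spare_tolerates = 2                          # then the spare must resilver

z3_usable = (drives - 3) * size_tb                   # 8-wide Z3 -> 30 TB raw
z3_tolerates = 3                                     # no resilver race after a failure

print(z2_plus_spare_usable, z3_usable)               # 30 30
```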

I'd also take RAID-6 over a same-capacity RAID-5 w/ hot spare.

Also important with any RAID topology is that you should not use all drives from the same manufacturer and batch. Mix your manufacturers, drive models and/or batches (manufacture date). This way you minimize the chance that they die at the same time. Note that mixing any of these things - including brand and RPM - is fine as long as all of your disks are compatible with your controller. SATA/SAS/PCIe are MUCH better about this than IDE and SCSI were, though I didn't have many issues mixing different drives on those, either. RAID cards back in the day could be a bit more sensitive. Not a problem today. At work we have our storage vendors ship different brand and RPM drives pretty regularly as well.
 
I'd say that with RAID-10, hot spares are important since most people will use 2-disk mirrors. You could use 3, 4 or more, but then you're looking at 33%, 25% or lower usable capacity. Hot spares offer a good option to get the array rebuild started as soon as possible to minimize the risk of the dead drive's mirror dying before you can manage the rebuild.

With ZFS, I'd say do RAID-Z3 before you do RAID-Z2 + 1 hot spare. But a hot spare is still better than no hot spare if you're considering the same RAID level.

I'd also take RAID-6 over a same-capacity RAID-5 w/ hot spare.

Also important with any RAID topology is that you should not use all drives from the same manufacturer and batch. Mix your manufacturers, drive models and/or batches (manufacture date). This way you minimize the chance that they die at the same time. Note that mixing any of these things - including brand and RPM - is fine as long as all of your disks are compatible with your controller. SATA/SAS/PCIe are MUCH better about this than IDE and SCSI were, though I didn't have many issues mixing different drives on those, either. RAID cards back in the day could be a bit more sensitive. Not a problem today. At work we have our storage vendors ship different brand and RPM drives pretty regularly as well.

Well, in my case, if I'm doing an 8 drive RAID10 with no hot spares - aren't I protected fairly well? Especially for a home use scenario? Better performance and protection than an 8 disk RAID6 with one hot spare - right?

Too late on the drives. Good point. Never thought about it that way. I have 8 WD Red 6TB NAS drives coming.
 
Well, in my case, if I'm doing an 8 drive RAID10 with no hot spares - aren't I protected fairly well? Especially for a home use scenario? Better performance and protection than an 8 disk RAID6 with one hot spare - right?

Wrong on both points.

With RAID 6, any two drives can fail without data loss. Not so with RAID 10.

And for sequential reads, an 8-drive RAID 6 will be about 6 times as fast as a single HDD, while an 8-drive RAID 10 will only be about 4 times as fast. (The RAID 10 will be faster for most small writes).
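
A rough way to see where those multipliers come from (simplified streaming-read model; real numbers depend on the implementation):

```python
# Simplified sequential-read scaling for an 8-drive array,
# expressed as a multiple of a single HDD's throughput.
drives = 8

# RAID6: parity blocks are skipped on reads, so large sequential reads
# stream from the (n - 2) data drives' worth of spindles.
raid6_read_multiplier = drives - 2        # ~6x

# RAID10: a basic implementation streams from one drive per mirror pair.
raid10_read_multiplier = drives // 2      # ~4x

print(raid6_read_multiplier, raid10_read_multiplier)
```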
 
Wrong on both points.

With RAID 6, any two drives can fail without data loss. Not so with RAID 10.

And for sequential reads, an 8-drive RAID 6 will be about 6 times as fast as a single HDD, while an 8-drive RAID 10 will only be about 4 times as fast. (The RAID 10 will be faster for most small writes).

Thanks - so I'm back to building a RAID6 then? :) What is your opinion - based on what I posted in OP?
 
Hot spares for RAID 10 or RAID 6 really aren't needed; set up alerts on your box for array errors and just swap out a drive when you need to.

Incorrect on RAID 10: two drives CAN fail, they just have to be in separate mirror groups. But again, the rebuild time of a RAID 10 is going to be a HELL of a lot faster than a RAID 6. RAID 10 literally just has to copy the data from the surviving mirror to the new drive, while RAID 6 has to rebuild and recalculate parity, twice over. While that RAID 6 rebuilds, you may as well just restore from a backup and rebuild the array. Not to mention the chance of another drive failing during the rebuild, or a single bad parity bit causing the rebuild to fail...
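
For a sense of scale either way, the rebuild time is bounded below by how fast the replacement drive can be written; the numbers in this sketch are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope rebuild-time floor for one 6TB replacement drive.
# 150 MB/s is an assumed average sustained write rate for a 6TB WD Red;
# a busy or underpowered NAS can take much longer in practice.
drive_bytes = 6e12            # 6 TB
avg_write_mb_s = 150          # assumed sustained rate

hours = drive_bytes / (avg_write_mb_s * 1e6) / 3600
print(f"~{hours:.0f} hours minimum to refill the replacement drive")   # ~11 hours
```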
 
the rebuild time of a RAID 10 is going to be a HELL of a lot faster than a RAID 6

I say only on very poor RAID implementations.

Incorrect on RAID 10: two drives CAN fail, they just have to be in separate mirror groups

The difference is that with RAID6 any two drives can fail with no data loss. With RAID10, if two from the same mirror group fail, the array is lost.
 
So if I set up a single RAID10 volume comprised of all 8 drives - if 2 fail = poof? Sounds like RAID6 is a better option, then.

The hot spare for RAID6 was only because I tend to travel for work about 60% of the time - I'm often gone 2-3 days at a time, so the chances of me being there within 24h for a disk swap are lower than normal.
 
So if I set up a single RAID10 volume comprised of all 8 drives - if 2 fail = poof? Sounds like RAID6 is a better option, then.

In the worst case, yes. If you are lucky, with RAID 10 you can have two (or more) drives fail and still be okay, but if you are unlucky with the two drives that fail, as you put it, "poof". And when planning for trouble, you should emphasize the worst case scenario. So RAID 6 beats RAID 10 for fault tolerance.
 
In the worst case, yes. If you are lucky, with RAID 10 you can have two (or more) drives fail and still be okay, but if you are unlucky with the two drives that fail, as you put it, "poof". And when planning for trouble, you should emphasize the worst case scenario. So RAID 6 beats RAID 10 for fault tolerance.

Thank you. I will implement RAID6 with 1 hot spare.
 
I say only on very poor RAID implementations.



The difference is that with RAID6 any two drives can fail with no data loss. With RAID10, if two from the same mirror group fail, the array is lost.

You can have a $1000 LSI RAID card and RAID 6 rebuilds are still slower than RAID 10 - fact - since RAID 6 has to calculate parity, while RAID 10 has no parity to calculate.

Not to mention performance: a RAID 6 rebuild is going to kill performance to the point where the array is almost unusable; you won't see that massive performance hit on RAID 10.
 
You can have a $1000 LSI RAID card and RAID 6 rebuilds are still slower than RAID 10 - fact - since RAID 6 has to calculate parity, while RAID 10 has no parity to calculate.

That is incorrect.

With decent hardware, the parity computation can run at well over 1000 MB/sec. It can easily keep up with the read and write speed of the HDDs. Since you are limited only by HDD speed, the RAID 6 rebuild speed can be just as fast as RAID 10 rebuild speed.

That assumes decent hardware -- enough CPU power and enough bus bandwidth to read all the HDDs in parallel and write to the new HDD. I don't have any experience with the specific NAS being discussed in this thread. It is possible that it is underpowered. But just because some cheap hardware might not be up to the task, you cannot claim that it is a fundamental limitation that RAID 6 rebuilds slower than RAID 10. That is simply not true for any decent RAID 6 implementation running on decent hardware (whether a hardware RAID card, or an HBA with a decent CPU and software RAID implementation).
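
As a crude illustration of how cheap the parity math itself is, here is a toy benchmark of just the XOR ("P") half of RAID 6 parity in numpy; the Reed-Solomon "Q" parity is heavier, but SIMD-accelerated implementations (Linux md prints its measured raid6 algorithm speeds at boot) still run at several GB/s on modern CPUs:

```python
# Toy parity-throughput benchmark: XOR six 64 MiB "stripe chunks"
# together, as a stand-in for the P-parity computation in RAID 6.
import time
import numpy as np

CHUNK = 64 * 1024 * 1024   # 64 MiB per simulated data drive
data = [np.random.randint(0, 256, CHUNK, dtype=np.uint8) for _ in range(6)]

start = time.time()
parity = data[0].copy()
for d in data[1:]:
    np.bitwise_xor(parity, d, out=parity)
elapsed = time.time() - start

processed_mb = CHUNK * len(data) / 1e6
print(f"XORed {processed_mb:.0f} MB in {elapsed:.3f}s (~{processed_mb / elapsed:.0f} MB/s)")
```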
 
That is incorrect.

With decent hardware, the parity computation can run at well over 1000 MB/sec. It can easily keep up with the read and write speed of the HDDs. Since you are limited only by HDD speed, the RAID 6 rebuild speed can be just as fast as RAID 10 rebuild speed.

Yep. Just rebuilt one of my SnapRAID arrays today with its new 12 x 5TB data disks and 2 x 5TB parity disks connected to a single M1015 -- it was flying at 2000MiB/s until it got far enough into the inner part of the disks to start tapering off from the saturation point. It probably could've gone even faster if it weren't bottlenecked by a 3G (SATA II) SAS expander. 58TB hashed and RAID6-style double parity generated in a little under 8 hours. Even my Areca 1882i couldn't have done it that fast. Smoking. And best of all, if more data disks than parity disks were to suddenly die, only those additional disks would be lost, not the entire array as with striping-based RAID.

Unless it's for business, where uptime and performance matter, I just don't see the point in risking home data on striped arrays anymore.
 
So if I set up a single RAID10 volume comprised of all 8 drives - if 2 fail = poof? Sounds like RAID6 is a better option, then.

The hot spare for RAID6 was only because I tend to travel for work about 60% of the time - I'm often gone 2-3 days at a time, so the chances of me being there within 24h for a disk swap are lower than normal.

It's important to note that it's not just two drives. In theory an eight-drive RAID10 can survive four drive failures, but they have to be the "right" four drives. If one drive fails and then its mirror fails, there is data loss. If one drive fails and then another drive that is NOT its mirror fails, you're fine - you can lose one drive from every mirror pair. With RAID6, ANY two drives can fail; it's the third that causes the problems.

I prefer RAID6 so I don't have to worry about which drive fails next. IMHO.
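
To put a number on "the right drives": after the first failure in an 8-drive RAID10, only 1 of the 7 remaining drives (the dead drive's mirror partner) is fatal if it fails next. A tiny sketch of that arithmetic:

```python
# Chance that a random second failure kills an 8-drive RAID10
# (four mirror pairs): only the failed drive's partner is fatal.
drives = 8
remaining = drives - 1
fatal_second_failures = 1

print(f"{fatal_second_failures / remaining:.0%}")   # ~14%

# RAID6, by contrast, survives ANY second failure; it's the third that hurts.
```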
 
For a similarly sized internal RAID6 array (6x 6TB drives), what RAID controller card would you guys recommend? I've been looking at the Areca 1882i, and I know there are a few others out there that can do a similar job. Or is it even worth getting a RAID6 card these days?

Thanks!
 