4x 6 TB WD Red - RAID 5 or 6? And which controller?

Should anyone else care to help, this is for home use. It's not business-critical. A second drive failure soon after the first is an acceptable risk. I am willing to trade speed of rebuild for resilience. It doesn't really matter if the rebuild takes all night.
 
Again you don't understand that UREs only affect parity RAID.

RAID10 doesn't magically remain unaffected by UREs. If one drive fails and its mirror partner has a permanent URE in an area that holds data, you will have some data loss.

I'm wondering if RAID 5 will be sufficient to cope with uncorrectable bit error rates and failures, or should I go for RAID 6?

RAID6 offers significantly more protection against data loss than RAID5. The problem with RAID5 is that you have no redundancy at all after a drive failure, so any problem with any of the 3 remaining drives is likely to cause some data loss. RAID10 has some of the same weaknesses, except that with RAID10 the risk is limited to problems with the drive that was being mirrored by the failed drive. The likelihood of additional failures is high enough that I would personally not use RAID5 for storing important work or personal data.
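To put a rough number on that risk, here's a back-of-envelope sketch of the chance of hitting at least one URE while reading the surviving drives during a rebuild. The 1-in-1e14-bits URE rate is an assumption (the usual consumer-drive spec; check the WD Red datasheet), and it treats bit errors as independent, which real drives aren't:

```python
def p_ure_during_rebuild(drive_tb: float, surviving_drives: int,
                         ure_rate: float = 1e-14) -> float:
    """Probability of at least one URE while reading every bit of the
    surviving drives. Assumes independent bit errors at ure_rate per bit."""
    bits_read = drive_tb * 1e12 * 8 * surviving_drives
    p_clean = (1 - ure_rate) ** bits_read
    return 1 - p_clean

# 4x 6 TB array: a RAID5 rebuild must read all 3 surviving drives...
print(f"RAID5 rebuild URE risk:  {p_ure_during_rebuild(6, 3):.0%}")
# ...while a RAID10 rebuild only reads the failed drive's mirror partner.
print(f"RAID10 rebuild URE risk: {p_ure_during_rebuild(6, 1):.0%}")
```

The numbers are pessimistic (vendor URE specs are worst-case ceilings, and a single URE usually costs you one sector, not the array), but they show why a parity rebuild across all surviving drives is riskier than a mirror rebuild from one drive.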

RAID is no substitute for backups. If your data is important, you must have backups. Having backups is more important than having a RAID if you have irreplaceable data, so you will need at least two drives for backups as well. Do not skip this; if cost is a concern, you should rather use the two parity drives for backups and not use RAID at all, or (better) use RAID5 with full backups. RAID protects you against some problems with the HDDs; backups will protect you against many other types of failures and mistakes.
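For the 4x 6 TB drives in question, the raw capacity tradeoff between the layouts works out as follows (ignoring filesystem overhead and TB/TiB differences):

```python
# Usable capacity for 4x 6 TB drives under each layout, in raw TB.
drives, size_tb = 4, 6

raid5 = (drives - 1) * size_tb    # one drive's worth of parity
raid6 = (drives - 2) * size_tb    # two drives' worth of parity
raid10 = (drives // 2) * size_tb  # every drive mirrored

print(raid5, raid6, raid10)  # 18 12 12
```

With 4 drives, RAID6 and RAID10 cost the same capacity; the choice between them is about which failure modes you want to protect against.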
 
RAID is no substitute for backups.

Oh yes indeedy.

I've got the Microserver and 16 GB RAM on order. I'll put a scratch 250 GB drive or two in and have a play and see what I can get working under ESXi or Microsoft's Hyper-V. But rather than RAID 0+1, I might go for 2x RAID 1, with the VMs for the OSs on one pair and the data on the other.
 
Another thing I haven't seen mentioned is that with RAID6 you can expand the array one drive at a time, assuming the card or software supports online expansion. Expansion also forces a rebuild of the entire array, which redistributes all of the data and exposes any parity or drive corruption in the process.

RAID10 would require adding two drives at a time, assuming the card or software supports it at all. In some implementations this will not rebuild the existing array, and only new data will end up using all of the new spans.
 
I like running RAID 10 on my virtualization testbench/sandbox environment. I find that the VMs feel more responsive in this configuration, especially when I have several guests running at once. Of course, this is purely subjective and I am factoring in the potential of a double-disk failure ruining my fun, but I consider it a worthwhile tradeoff since it isn't really a production environment.

At work we're mostly using RAID6, with a few mirrored arrays still around for OS drives. All of our virtual machines are stored on a NAS that is based on RAID6. The RAID6 arrays all have more than 4 drives as well. We're slowly moving toward a full-virtualization model, but it takes time and we are still stuck supporting some legacy applications that don't play nicely with virtual environments.
 