RAID 5+1 or RAID 6?

I'm stuck in a bit of a pickle here. I've got 4 solid drives in RAID 5. I just got one more in, and I'm ordering the other 3 shortly for a total of 8 on my Areca 1220. Except that you can't convert from RAID 5 to RAID 6 (downgrade only, it seems). This leaves me with either expanding by the 3 extra drives and keeping a hot spare (5+1), or loading 4 drives as RAID 6 (640 GB), copying all the data over, deleting the old array, and then expanding the RAID 6 array, which I think is quite a bit riskier.

I'm not really worried about the performance loss with RAID 6; I'm mainly worried about losing all my data. Any input or suggestions?
 
If you're really worried about your data, then what is your backup solution? The point of RAID 5 is uptime, being able to throw in a drive if another goes south, not using it as an alternative to a backup.
 
All my critical data is backed up through CVS or on another drive that's not hooked up and is known to be good. I did, however, find out that you can actually upgrade an array to RAID 6, but you have to have a spare drive first. So you have to expand and modify the array at once, or, in my case, not know you can do that, expand first, and then modify. It's about 11% done after 30 minutes, so in another 5 hours or so I should have a better idea of whether it works or not.
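
For what it's worth, the "5 hours or so" is just straight-line extrapolation from that 11% figure; a quick sketch in Python (the constant-rate assumption is mine, and migrations rarely hold a steady pace):

```python
# Straight-line ETA from migration progress. Assumes a constant rate,
# which real RAID migrations rarely hold; treat this as a rough guess.
elapsed_min = 30
fraction_done = 0.11

total_min = elapsed_min / fraction_done           # ~273 minutes for the whole job
remaining_hours = (total_min - elapsed_min) / 60
print(f"~{remaining_hours:.1f} hours remaining")  # ~4.0, so "5 or so" is a safe guess
```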
 
With RAID 6, you can lose up to 3 drives before you lose data.

With RAID 5 + 1, if you lose a drive you have to wait for the rebuild, and if you lose another drive during that, you've lost data.
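
To put rough numbers on that rebuild-window argument, here's a back-of-the-envelope sketch; the MTBF and rebuild-time figures are placeholders I made up, and the model assumes independent, exponentially distributed failures and ignores unrecoverable read errors during the rebuild:

```python
from math import exp, comb

# Back-of-the-envelope rebuild-window risk. All numbers are illustrative
# guesses, not measurements; failures are assumed independent.

def p_fail_within(hours, mtbf_hours):
    """Probability a single drive fails within `hours` (exponential model)."""
    return 1 - exp(-hours / mtbf_hours)

n = 8            # drives in the array (the OP's target)
mtbf = 500_000   # per-drive MTBF in hours (made up)
rebuild = 10     # hours to rebuild onto the spare (made up)

p = p_fail_within(rebuild, mtbf)

# RAID 5 + hot spare: one drive is already dead; losing any 1 of the
# remaining n-1 drives during the rebuild kills the array.
p_raid5_loss = 1 - (1 - p) ** (n - 1)

# RAID 6: after the first failure you can still absorb one more, so the
# array only dies if 2+ of the remaining n-1 drives fail in the window.
p_raid6_loss = sum(comb(n - 1, k) * p**k * (1 - p) ** (n - 1 - k)
                   for k in range(2, n))

print(f"RAID 5 + spare, loss risk per rebuild: {p_raid5_loss:.1e}")
print(f"RAID 6,         loss risk per rebuild: {p_raid6_loss:.1e}")
```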
 
[h]ardc[h]ris said:
With RAID 6, you can lose up to 3 drives before you lose data.
Fixed for accuracy: RAID 6 keeps working after two concurrent disk failures; RAID 5 survives only one. I've already moved to RAID 6 and posted some benchmarks elsewhere (~214 MB/s on XFS).
 
RAID 6 allows for arbitrarily complex data + parity schemes. As HPA puts it:
Reed-Solomon coding can be exploited further to allow for any combination of n data disks plus m redundancy disks, allowing any m failures to be recovered. However, with an increasing amount of redundancy comes higher overhead in both CPU time and I/O. The Linux RAID-6 work has focused on handling the m = 2 case efficiently so that it is practically useful.
I believe the Areca card is also specifically locked to m = 2. However, it's a general term: just like RAID 5 means any number of data disks and one parity disk, RAID 6 means any number of data disks and any number of parity disks.

Now that I've made my point, yeah, it's two ;)
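
If anyone wants to see what the m = 2 case actually computes, here's a toy sketch of P/Q parity over GF(2^8); this is just the textbook math, not how the Areca firmware (or the Linux driver) actually implements it (real code uses lookup tables and SIMD):

```python
# Toy P/Q dual parity over GF(2^8): the m = 2 Reed-Solomon case HPA
# describes. Sketch only, for illustrating the math.

def gf_mul(a, b):
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return r

def pq_parity(data):
    """P is the plain XOR; Q weights disk i by g**i with generator g = 2,
    which is what lets you solve for any two missing disks."""
    p = q = 0
    g_i = 1                      # g**i, starting at g**0
    for d in data:
        p ^= d
        q ^= gf_mul(g_i, d)
        g_i = gf_mul(g_i, 2)
    return p, q

# Six data "disks" (one byte each) + P + Q = any 2 of the 8 can die.
data = [0x11, 0x22, 0x33, 0x44, 0x55, 0x66]
p, q = pq_parity(data)
print(f"P = {p:#04x}, Q = {q:#04x}")
```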

 
hokatichenci said:
Fixed for accuracy: RAID 6 keeps working after two concurrent disk failures; RAID 5 survives only one. I've already moved to RAID 6 and posted some benchmarks elsewhere (~214 MB/s on XFS).

Do you have a link to the benchmarks? I'd like to see that.
 
IMO RAID 6 is overkill. I would rather have one big RAID 5 array and get the capacity of an extra disk. In 6 years, I have only seen one instance of a two-disk failure. Only under rare circumstances would I ever integrate RAID 6 into a server.
 
IMO RAID 6 is overkill. I would rather have one big RAID 5 array and get the capacity of an extra disk. In 6 years, I have only seen one instance of a two-disk failure. Only under rare circumstances would I ever integrate RAID 6 into a server.

In an enterprise environment you see it more often than you would like. Definitely more than one two-disk failure in 6 years. Over the past few years, I've seen countless power supply issues take out some number of drives, with the RAID array having to be restored completely. That's with all different hardware vendors, mostly IBM and Sun.

It's definitely a case of keeping uptime figures high, but it's no replacement for a proper backup.
 