8x 2TB drives, RAID 5, 6, 50, or 10?

ben805 (Limp Gawd; joined Nov 27, 2006; 188 messages)
8x Hitachi 2TB drives connected to an LSI 9265-8i controller, and I'm debating which RAID configuration to go with. RAID 5 only allows one drive failure; rebuilding will take ages, and it would be disastrous if I lost a second drive during the rebuild. RAID 6 allows two simultaneous drive failures, but having two parities and losing two drives' worth of capacity takes a big hit on write speed. RAID 10 is supposed to have great read and write speed, but it also has the highest overhead (it would use up 50% of the storage space), and I did not see any read/write performance gain over RAID 5 on the LSI 9265-8i. That leaves RAID 50: it would also lose two drives to parity, but both read and write speeds are similar to RAID 10 while the overhead isn't as high. The array will be used primarily for storing Blu-ray rips, ISOs, pr0n, and HD video/photo editing files. Which RAID do you recommend? I have about 10x 1TB drives on the side to be used for data backup with an eSATA docking station.
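For reference, the capacity/redundancy trade-off being weighed above can be sketched in a few lines of Python. The drive count and size are from the post; the layouts are the standard definitions (for RAID 50, two 4-drive RAID 5 spans is assumed):

```python
# Usable capacity and guaranteed fault tolerance for 8 x 2 TB drives.
N, SIZE_TB = 8, 2

layouts = {
    # name: (usable drives, failures survived in the worst case)
    "RAID 5":  (N - 1, 1),
    "RAID 6":  (N - 2, 2),
    "RAID 10": (N // 2, 1),   # 1 guaranteed; more if failures hit different mirrors
    "RAID 50": (N - 2, 1),    # 2 spans of 4, one parity drive each; 1 guaranteed
}

for name, (usable, survives) in layouts.items():
    print(f"{name:8s} usable = {usable * SIZE_TB:2d} TB, "
          f"guaranteed failures survived = {survives}")
```

This matches the figures in the thread: RAID 6 and RAID 50 both land at 12 TB usable, RAID 10 at 8 TB, and only RAID 6 is guaranteed to survive any two failures.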
 
"rebuilding data will take ages and it would be disastrous"

I know old cards used to take a long time to rebuild, but does it really take ages on that card? If so, can't you increase the rebuild rate via settings? On Linux software RAID at work, a rebuild with the same number of drives takes less than 9 hours on a Core 2 Quad, after coaxing the RAID subsystem to rebuild as fast as it can by increasing the buffers and the minimum desired rebuild rate.
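For a ballpark on rebuild times: a rebuild has to write one full replacement drive, so time is roughly drive size divided by sustained rebuild rate. A quick sketch (the rates are assumptions for illustration, not from any spec sheet):

```python
# Rough rebuild-time estimate: time ~ drive size / sustained rebuild rate.
# Real rates vary with controller, drive, and concurrent I/O load.
drive_bytes = 2 * 10**12              # one 2 TB drive to reconstruct

for rate_mb_s in (30, 65, 100):       # assumed sustained rates in MB/s
    hours = drive_bytes / (rate_mb_s * 10**6) / 3600
    print(f"{rate_mb_s:3d} MB/s -> {hours:4.1f} h")
```

At around 65 MB/s this lands near the "less than 9 hours" figure above, while 30 MB/s gives the 17-18 hour rebuilds reported later in the thread.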

"which raid do you recommend?"

At work, even though everything is backed up to LTO tape, I use RAID 6 for 8 to 14 drives. With RAID 5 and drives this large, the chance of a URE during a rebuild isn't worth saving a single drive's worth of capacity, even though I have good backups.
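The URE concern can be put in rough numbers. A minimal sketch, assuming the 1-in-10^14 bits-read error rate commonly quoted for consumer drives and independent errors (both assumptions; real-world behavior varies):

```python
# RAID 5 rebuild of 8 x 2 TB: the 7 surviving drives (14 TB) must be read
# end to end with no spare redundancy left. With an assumed URE rate of
# 1e-14 per bit read, the chance of a clean rebuild is (1 - rate)^bits.
bits_read = 7 * 2 * 10**12 * 8          # 14 TB expressed in bits
ure_rate = 1e-14                        # errors per bit read (assumption)

p_clean = (1 - ure_rate) ** bits_read
print(f"P(at least one URE during rebuild) ~ {1 - p_clean:.0%}")
```

Under these assumptions the probability of hitting at least one URE during a full RAID 5 rebuild comes out around two thirds, which is why RAID 6 (which can still correct a URE during a single-drive rebuild) keeps coming up in this thread.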
 
RAID6, since you're worried about the risk of drives failing.

Your use is primarily mass storage, with video/photo editing listed last. Photo editing won't be very intensive on your array, and HD video editing will be slow enough regardless that you're probably not willing to spend extra money just to shave a few minutes off your processing time.

If you're building a new array, test out the transfer rates and see what you prefer.

FWIW I've had 8x1TB in RAID5 with a drive failure rate of about once per year.
When I went up to 8x2TB in RAID6, I had one drive fail in two years. I've also had drives drop out for no apparent reason at the time (I think it was a bad molex splitter) and had to rebuild.

To give you an idea of rebuild times, the latest two in my event log show 17h03m and 18h02m.
This is with RAID6 8x2TB WD20EADS on ARC-1231ML w/2GB RAM, rebuilding from losing (or disconnecting by accident) one drive.
 
With a good controller (likely with that LSI), sequential writes will actually be faster in RAID 6 than in RAID 10 (RAID 5 would likely be even faster). Also, you are probably looking at a 6-10 hour rebuild time with that controller in RAID 6 or RAID 5, unless the machine is being I/O-thrashed.
 
TBH, RAID 5 with 8x 2TB drives is getting a little too risky. Better safe than sorry; go with RAID 6, IMO.

Unless you need IOPS, RAID 10 is expensive on overhead, and while RAID 50 might seem an attractive compromise, it can only survive certain double-disk failures (though that can also be true of RAID 10).
 
RAID 6 will be far more reliable than RAID 10. We have hundreds of arrays where I work, and I have seen probably 15 cases of RAID 10 failures that would not have happened had they been RAID 6. I have seen maybe one case where RAID 10 survived where RAID 6 would not have (a 3-disk failure).
 
Won't see any high IOPS; the array is being used locally on a workstation. Looks like I would end up with the exact same storage capacity with either RAID 50 or RAID 6. The difference in redundancy is that RAID 6 tolerates any two drive failures, while RAID 50 also tolerates two drive failures as long as they are not in the same span (groups of 4 drives per span). Still very hard to decide between the two: RAID 50 wins slightly on speed, while RAID 6 is more bulletproof.
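That span constraint can be counted out exhaustively. A sketch assuming two 4-drive RAID 5 spans for RAID 50 and one fixed mirror pairing for RAID 10 (the particular pairing is an assumption; any pairing gives the same count):

```python
from itertools import combinations

# Which two-drive failures does each layout survive? Drives numbered 0..7.
drives = range(8)
spans = [{0, 1, 2, 3}, {4, 5, 6, 7}]        # RAID 50: two RAID 5 spans
pairs = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]    # RAID 10: assumed mirror pairs

def survives_r50(failed):
    # Each RAID 5 span tolerates at most one failure.
    return all(len(failed & s) <= 1 for s in spans)

def survives_r10(failed):
    # No mirror pair may lose both members.
    return all(len(failed & p) <= 1 for p in pairs)

doubles = [set(c) for c in combinations(drives, 2)]
print("RAID 6 :", len(doubles), "/", len(doubles))   # survives every double
print("RAID 50:", sum(map(survives_r50, doubles)), "/", len(doubles))
print("RAID 10:", sum(map(survives_r10, doubles)), "/", len(doubles))
```

With 8 drives this prints 28/28 for RAID 6, 16/28 for RAID 50, and 24/28 for RAID 10, which backs up both the "certain double disk failures" caveat and the reliability anecdote earlier in the thread.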
 
I would still go for raid6. The performance difference would not be that noticeable (if at all) in your usage pattern.
 
An LSI 9265 is among the most powerful RoCs and does its RAID 6 calcs faster than real time, so 8 drives = RAID 6, no-brainer, assuming you value the data. An array rebuild should only take 6-8 hrs. F*ck the nested RAID levels (RAID 50 and RAID 60); those are only relevant for very specific usage patterns, which yours is not, and they introduce unnecessary risk. And there are many other risks to RAID 5 and a single stripe of parity besides "what if a second drive fails during rebuild". Statistically that won't happen, but all sorts of other things can and do happen: electrical issues, dumb end users, bit rot, the list goes on.
 
RAID 6. Nothing else will give you reliable protection. And don't worry about write speed.
 
OK, if I go with RAID 6, are the following parameters optimal? If not, how would you change them?


Rebuild Rate: 30%
Patrol Rate: 30%
BGI Rate: 30%
Check Consistency Rate: 30%
Reconstruction Rate: 30%


Aren't rebuild and reconstruction the same thing?
 