HW raid - how many drives = too many?

ALpHaMoNk

OK so I just went through the whole 10+ TB sticky thread and man... some really amazing setups out there. :eek:

I haven't posted my setup in the thread, although I am just over 10TB of usable space.

As I was going through I noticed a lot of different setups... now, for those using HW raid!

I know raid6 is the way to go (I am currently on raid5 with 8x 2TB Hitachi drives with an Areca 1680 - no more available ports).

What is the max number of drives that is deemed safe for even a raid6 in a single array?

The main use of my server is media storage - many HD movies - so it makes it easy for me to have one large drive letter vs 2.

Is it safe to go raid 6 20x2TB as one full array, one volume, in Windows?

Some of you guys are all out at 100TB plus... I am using Windows 2008 R2 as the OS. Many are using a Linux base. Just trying to gather all that I have seen.
 
I am currently on raid5 with 8x 2TB hitachi drives with an Areca 1680 - no more available ports

Which is a good thing. 8 drives is too many for a raid 5 array.

is it safe to go raid 6 20x2TB as one full array one volume in windows?

No. I would use raid60 for that many disks.
 
What is the max number of drives that is deemed safe for even a raid6 in a single array?

I usually recommend no more than 14 drives per RAID6 array including spares
 
Which is a good thing. 8 drives is too many for a raid 5 array.



No. I would use raid60 for that many disks.
OK, I have been fine for over 2 years with the raid 5 and 8 drives. I wouldn't plan to go past that with raid 5 - I see where it can get scary, especially since these are consumer drives.
With raid 60 I will now have 4 parity drives instead of just the 2 of raid 6, striped to create one volume? Correct me if I am wrong, but then the only major drawback would be the capacity lost to parity.

I usually recommend no more than 14 drives per RAID6 array including spares
My raid card holds 8 drives; I plan on joining this with an HP expander, and my case holds 20 drives (Norco 4020), so 10 drives raid6 x2 striped raid 0 is the best way for me to go?

That would be about my limit for raid6 and I try to backup everything at least 2 times to tape in addition to the raid array.
Backup is not possible for me with that much data; I can't duplicate the server for backup, nor do I have the funds for a tape system that would suit.

The most important data will be kept on a mirrored raid and backed up offsite (docs, pics and music collection).
8TB worth of unusable disk space sucks, but I guess the fault tolerance tradeoff is worth it.
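Rough math on that parity cost, just to sanity-check myself (my own quick sketch - nominal TB, ignoring filesystem overhead and hot spares):

```python
# Usable capacity for 20 x 2TB drives in one RAID 6 set vs a RAID 60
# (two RAID 6 spans striped together). Nominal TB, no spares counted.

def raid6_usable(drives, size_tb):
    return (drives - 2) * size_tb            # RAID 6 loses 2 drives to parity

def raid60_usable(drives, size_tb, spans=2):
    per_span = drives // spans                # each span is its own RAID 6
    return spans * (per_span - 2) * size_tb   # so each span loses 2 drives

print("Raw:    ", 20 * 2, "TB")                       # 40 TB
print("RAID 6: ", raid6_usable(20, 2), "TB usable")   # 36 TB (2 parity drives)
print("RAID 60:", raid60_usable(20, 2), "TB usable")  # 32 TB (4 parity drives)
```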
 
Typically, for my home stuff, when it comes to backups (especially huge media files), instead of tape I just use the occasional large extra drive that I don't need anymore (or if I see a great sale somewhere) as a stopgap backup, transfer whatever files will fit, and stash it somewhere just in case (and then repeat next time you have another extra or cheap disk). Maybe even compress them to save space. That way if your raid storage ever dies and isn't recoverable, you're not looking at a full loss. Probably an easier option once HD prices come back down to a sane level.
 
If you plan to fill 20 drives, it all comes down to how much fault tolerance you want between RAID 6 and 60, and whether you can afford 4 drives (plus hot spares) for parity. I think the problem you will be facing is that you'll probably be buying 3TB (or 4TB, if you take a long time to fill the drives) to fill the remaining 12 bays. You pretty much will be forced to build 2 different arrays unless you plan to buy only 2TB HDDs, which doesn't make any sense since you can increase density so easily with newer drives.

Or go zfs/unRAID/FlexRAID and not have to deal with these problems...
 
with raid 60 i will now have 4 parity drives instead of just 2 of raid 6 and striped to create one volume?

With raid 60, yes, you will have 4 parity drives instead of 2, and yes, it's basically the same as 2 raid 6 arrays striped.

OK i have been fine for over 2years with the raid 5 and 8 drives
For that I recommend weekly scrubs to lessen the chance of a URE + dead drive causing data loss.
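Here's a rough sketch of why the URE + dead drive combination is the scary part on big consumer-drive arrays. It assumes the commonly quoted consumer spec of one unrecoverable read error per 10^14 bits and treats errors as independent - a simplification, but it shows the trend:

```python
# Rough odds of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded array, i.e. reading every surviving drive end to end.
# Assumes the often-quoted consumer spec of 1 URE per 1e14 bits read.

URE_RATE = 1e-14  # errors per bit read (typical consumer drive spec sheet)

def rebuild_ure_probability(surviving_drives, drive_tb):
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - (1 - URE_RATE) ** bits_read

# 8 x 2TB RAID 5 with one dead drive: the rebuild reads the 7 survivors in full.
print(f"{rebuild_ure_probability(7, 2):.0%} chance of at least one URE during rebuild")
# Regular scrubs help because they find and remap bad sectors before a rebuild does.
```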
 
Been running a RAID-5 with 16 x 1TB drives (15 + 1 spare) and RAID-6 with 24 x 3TB drives (24 drives, no spare) on a 9650 and 9750 respectively for 3-4 years, no issues yet.

Yes the data is backed up elsewhere as well.
 
Instead of tape I just use the occasional large extra drive that I don't need anymore as a stopgap backup... that way if your raid storage ever dies and isn't recoverable, you're not looking at a full loss.
I hear what you are saying, but that would be quite a few extra drives, and scattered drives at that. One of the main reasons I built the server was to consolidate all my loose data from various drives onto one machine, accessible from any other machine in the house or offsite. Thanks for the idea though.

You pretty much will be forced to build 2 different arrays unless you plan to buy only 2TB HDDs...

Or go zfs/unRAID/FlexRAID and not have to deal with these problems...
It took me about a year and a half to fill up 12.7TB, and I have been pinched for a few months now, unable to add any more HD movies. I like the tolerance that 60 provides, but the issue I see is that I already have data on my array. For me to move to 60, wouldn't I have to rebuild the array? Again, I can't move the data, rebuild, then move it all back. I have a spare raid controller, just not the spare drives needed to perform this task, unless there is a way to do this expansion without losing data.
You are right about the upcoming chance of buying 3TB/4TB drives when the prices come back down to earth, which again raises new problems for me... how would I again upgrade to higher capacity drives without having to rebuild the array and move data? Otherwise I would focus on larger drives vs trying to find 2TB drives; the Hitachi HDS722020ALA330 that I have been using are already out of production and I can't find them anywhere, but I have been told that the 7K3000 will mix in with what I already have with no issues.
Although I do plan on filling the case with drives, I try not to do so all in one shot - a waste of drives spinning, noise, warranty, and electricity. I try to expand when I fall short on space, but it seems that drives are being replaced faster than I can expand. I try to expand 3-4 drives at a time.
zfs = I know nothing about it; I would have to really study and understand it. I figured if I have a HW raid card it just makes more sense to use it for raid.
unRAID/FlexRAID = although I did look at both at one time, neither would work for me as I use my server for more than just file sharing; I run software on it as well - FTP server, AirVideo, torrents, newsgroups, etc...

For that I recommend weekly scrubs to lessen the chance of a URE + dead drive causing data loss.
Yes, the card runs weekly scrubs for me.

Been running a RAID-5 with 16 x 1TB drives (15 + 1 spare) and RAID-6 with 24 x 3TB drives (24 drives, no spare) on a 9650 and 9750 respectively for 3-4 years, no issues yet.

Yes the data is backed up elsewhere as well.
Good to hear of your success with both raid 5 and 6. What drives are you using? I really wanted to run 20 drives with raid6, but now I am not so sure if that is the best way for me to go.
 
> Good to hear of your success with both raid 5 and 6. What drives are you using? I really wanted to run 20 drives with raid6, but now I am not so sure if that is the best way for me to go.

For RAID-5: 16 x WD 1TB RE-3 (7200RPM)
For RAID-6: 24 x Hitachi 7K3000 (7200RPM)
 
[Attached screenshot: drive space]
As you can see, I'm completely out of space... I never let my drives get this dry :(
 
You are right about the upcoming chance of buying 3TB/4TB drives when the prices come back down to earth... how would I again upgrade to higher capacity drives without having to rebuild the array and move data?

More than likely, you will just need to build and run two separate arrays like war9200 has. Based on the above, I'd say scrap the RAID 60 idea and go with another RAID 6 using 3TB drives. Consider expanding 6 drives at a time with 3TB drives, and you are looking at 12TB of available space. The harder part is whether you can wait for hard drive prices to go down, or suck up the costs now if you need the space. When you are out of room again, hopefully 4TB drives are the rage and you can ditch the 2TBs.
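A quick sketch of why one big mixed-size array doesn't pay off - a striped RAID set typically treats every member as the size of its smallest drive, so keeping the 2TB and 3TB drives in separate RAID 6 arrays comes out ahead (my own rough numbers, nominal TB):

```python
# Usable capacity: one 20-drive RAID 6 mixing 2TB and 3TB members vs two
# separate RAID 6 arrays. A striped set sizes every member to its smallest
# drive, so the 3TB disks get truncated in the mixed case.

def raid6_usable(sizes_tb):
    return (len(sizes_tb) - 2) * min(sizes_tb)

mixed = [2] * 8 + [3] * 12        # existing 2TB drives plus new 3TB in one set
separate = [[2] * 8, [3] * 12]    # keep each drive size in its own set

print("One mixed RAID 6:    ", raid6_usable(mixed), "TB")                     # 36 TB
print("Two separate RAID 6s:", sum(raid6_usable(s) for s in separate), "TB")  # 42 TB
```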

zfs = I know nothing about it; I would have to really study and understand it. I figured if I have a HW raid card it just makes more sense to use it for raid.
unRAID/FlexRAID = although I did look at both at one time, neither would work for me as I use my server for more than just file sharing; I run software on it as well.

zfs is pretty amazing if you are interested in data integrity, as it's designed for that specific purpose. There is an all-in-one version that does more than file serving, too. FlexRAID runs on top of a filesystem like Windows so you can do all the things you wanted: data can be pooled into a massive hard drive, and each drive's data is accessible even if the pool fails. It doesn't have as much support, and really, the whole project is practically still in beta since quite a few features are locked (e.g., you can't do multiple pools yet) and real-time protection isn't 100% yet.
 
Number of drives is dependent on the controller. Cheap controllers can only handle a smaller number of drives.
 
"How many drives is too many" is too subjective a question for a simple answer. It all depends on the type of data, how its being used, how much its worth, business or home environment, presence or absence of separate physical copy of the data, model of controller and disks, and ultimately your tolerance for risk.

I've been running 24-disk RAID6 arrays just fine, one of them has been running since 2007, but A) I have a separate physical copy of the data (backup), B) The data is moved around a lot so the performance boost of the striping holds a benefit. But I *have* been slowly migrating away from hardware (striping) raid as a means to pooling multiple disks and moving toward software based non-striping solutions (i.e. FlexRAID) as its been maturing. Granted, its taking many years, since progress on the software based solutions is slow.

FlexRAID runs on top of a filesystem like Windows so you can do all the things you wanted: data can be pooled into a massive hard drive, and each drive's data is accessible even if the pool fails. It doesn't have as much support, and really, the whole project is practically still in beta since quite a few features are locked (e.g., you can't do multiple pools yet) and real-time protection isn't 100% yet.

I think FlexRAID is where things are headed, at least when it comes to the storage of HD home movie libraries, and other storage scenarios with vast amounts of write-once, read-infrequently data. There really is no need to stripe this type of archival data, when a single disk can handle playback with ease. Striping is a performance multiplier, not a redundancy or fault tolerance multiplier. So it's far more efficient in this archival scenario to have a set of disks independently formatted as NTFS and pooled into a single virtual volume and then several more disks designated as parity disks, such as the way FlexRAID operates.

Why? Because you can lose more drives than you have parity drives to failure, and the data on whatever disks remain is still intact and readable - after all they're simply formatted NTFS. In the case of hardware (striped) RAID6, after you lose more disks than you have parity disks, you have almost nothing left except the files on the remaining disks that were smaller than the volume blocksize/stripesize -- so maybe some JPG's or documents -- and much more complexity than simply "plugging in a drive" to recover those files.

With striped arrays, the weakest link really is the link itself - the fact that the drives are dependent on one another and interconnected, whereas with a non-striped, virtualized pool of independent NTFS disks, that's not the case. One is a house of cards, the other is just cards.
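To make the "cards vs house of cards" point concrete, here's a toy single-parity example in the spirit of FlexRAID's snapshot parity (my own illustration only, not FlexRAID's actual code or on-disk format): each data disk stays an ordinary, independently readable blob, and one parity blob is XORed across them.

```python
# Toy non-striped, single-parity protection: every data "disk" remains an
# ordinary, independently readable blob; one parity blob is XORed across them.

def xor_parity(blobs):
    size = max(len(b) for b in blobs)
    parity = bytearray(size)
    for blob in blobs:
        for i, byte in enumerate(blob):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(lost_index, disks, parity):
    # XOR of the survivors and the parity reproduces the one missing disk.
    survivors = [d for i, d in enumerate(disks) if i != lost_index]
    return xor_parity(survivors + [parity])

disks = [b"movie-A", b"movie-B", b"movie-C"]    # think: independent NTFS drives
parity = xor_parity(disks)

assert rebuild(1, disks, parity) == b"movie-B"  # one failure: fully recoverable
# Lose two disks and the single parity can't rebuild them -- but the surviving
# disk still holds complete, readable files, unlike members of a striped array.
print(disks[2])                                 # b'movie-C'
```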
 
Number of drives is dependent on the controller. Cheap controllers can only handle a smaller number of drives.
I have an Areca 1680 2-port card and will be combining it with a RES2SV240 expander (if all works well).
Sorry for the late response - lost this thread for a bit there.
 
"How many drives is too many" is too subjective a question for a simple answer. It all depends on the type of data, how its being used, how much its worth, business or home environment, presence or absence of separate physical copy of the data, model of controller and disks, and ultimately your tolerance for risk.

I've been running 24-disk RAID6 arrays just fine, one of them has been running since 2007, but A) I have a separate physical copy of the data (backup), B) The data is moved around a lot so the performance boost of the striping holds a benefit. But I *have* been slowly migrating away from hardware (striping) raid as a means to pooling multiple disks and moving toward software based non-striping solutions (i.e. FlexRAID) as its been maturing. Granted, its taking many years, since progress on the software based solutions is slow.



I think FlexRAID is where things are headed, at least when it comes to the storage of HD home movie libraries, and other storage scenarios with vast amounts of write-once, read-infrequently data. There really is no need to stripe this type of archival data, when a single disk can handle playback with ease. Striping is a performance multiplier, not a redundancy or fault tolerance multiplier. So it's far more efficient in this archival scenario to have a set of disks independently formatted as NTFS and pooled into a single virtual volume and then several more disks designated as parity disks, such as the way FlexRAID operates.

Why? Because you can lose more drives than you have parity drives to failure, and the data on whatever disks remain is still intact and readable - after all they're simply formatted NTFS. In the case of hardware (striped) RAID6, after you lose more disks than you have parity disks, you have almost nothing left except the files on the remaining disks that were smaller than the volume blocksize/stripesize -- so maybe some JPG's or documents -- and much more complexity than simply "plugging in a drive" to recover those files.

With striped arrays, the weakest link really is the link itself - the fact that the drives are dependent on one another and interconnected, whereas with a non-striped, virtualized pool of independent NTFS disks, that's not the case. One is a house of cards, the other is just cards.

The data consists of mostly HD movies and TV shows. I have pics, music and docs on a separate raid 1 and backed up offsite. This server is in a home environment and uses an Areca 1680 2-port card. I was going to combine this with the HP expander but then changed my mind and picked up the Intel RES2SV240. Disks are 13x2TB Hitachi 7K2000 and 7K3000.

You are another that is running a 24-disk raid 6, like war9200 and many others I've seen in the mega servers thread. Although everyone places a different value on their data and has their own means of backup, it still makes me wonder how far it should be pushed. I see some users that will not go past 8 drives per raid 6, but in the end doesn't that leave you with a bunch of volumes? I was thinking anywhere between 15-20 drives per raid 6?

I have looked into FlexRAID as well, as it seems very fit for home use (after I already had my raid 5 up and running). Windows 8 "Storage Spaces" also looks interesting. I am not sure if this feature will also be available in Server 8 or not. In the end I really want lots of space but fewer pools and, like you said, no need to spin up so many disks just to watch one movie. I also like the idea of not losing all the data if more disks die than parity (not a total loss). I went through a bad raid 5 failure some years back, so once I am full on this chassis hopefully FlexRAID or Windows Storage Spaces will be a bit more mainstream for the home server.
 