two 3ware cards together for one giant raid mothership

iansilv

Limp Gawd
Joined
Jun 10, 2004
Messages
335
I have been looking at Ockie's nuclear powered galaxy 4.0, and I want to build one, but with RAID. I am a "raid noob," but I think I have my facts straight. Can someone here correct me if I am wrong:

1. I can link two 3ware RAID cards together to have a total of 30 drives in one giant chunk of space? I notice that they make a 24-port card, a six-port card, an eight-port card, etc. So if I wanted to have 30 individual drives, I could configure the two cards to work together as one giant cluster, even if there were not 30 ports on one card. Am I right?

2. RAID 5 would let me hot-swap a drive on one of these cards, correct? So: a movie is running, I have 29 drives set up, and I just plug in a drive cage with a hard drive in it and BOOM, I now have 30 drives, and the whole setup will configure itself, moving data, formatting, etc. Same thing if I pull out a single drive: everything keeps humming, then I replace that drive and everything rebuilds.

3. RAID 6 will let me do all of the above, but if one drive fails, I still have a second parity drive as a failsafe, so I can replace the bad one, and while I am replacing it, I still have protection if another drive goes down.

Please point me in the right direction if I am wrong about RAID. Thank you. :)
 
As I understand it, RAID 5 and RAID 6 are basically the same in principle; RAID 5 uses the equivalent of one drive in the array for parity storage and can absorb one drive failure. RAID 6 uses the equivalent of two drives for parity data, which lets it absorb two simultaneous drive failures.
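To make the parity idea concrete, here's a toy sketch (plain XOR over a few hypothetical 4-byte blocks, nothing like real controller firmware): parity is the XOR of all the data blocks, so any single missing block can be rebuilt by XOR-ing the parity with the surviving blocks.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR same-length byte blocks column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three hypothetical data "drives", each holding a 4-byte block:
data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\x0a\x0b\x0c\x0d"]
parity = xor_blocks(data)

# "Lose" drive 1 and rebuild it from the parity plus the other drives:
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
```

RAID 6 works on the same principle but stores a second, independently computed parity block, which is what lets it tolerate two failures.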

Your add-a-drive/remove-a-drive scenario would not be as painless as you surmise. Rebuilding a 30-drive array would be extremely time consuming. Besides, why would you be removing and replacing drives all the time? Constant failures? Many controllers support online capacity expansion (OCE), but removing a drive would simply be seen as a drive failure, not something most controllers are set up to treat as routine.

Edit: no, you can't just plug a drive into an array and have the system automagically copy the data over and absorb the drive. Adding a drive to the array is exactly that: adding a device and increasing the capacity of the array, and without controller support for online expansion the data would most likely be destroyed in the process. You can have a hot spare in RAID 5; the array will fail over to it automatically.
 
But can I use two cards for one giant storage array if one card does not have enough ports? I want one giant D: drive where I will put all of my media, and I want to be able to use up to 30 hard drives. The best I can find is a 24-port controller card.
 
I know you can with Highpoint cards, but I'm not sure about 3ware. Let's see what U_M says.
 
If you were to take a drive out of a RAID 5 array you would have zero protection; in other words, if any of the remaining 29 drives went bad, you would lose ALL of your data.

To add a drive you would need a card with online expansion, but I don't think it's something you could really do while you have people over for movies.

Even if this were to work, having 30 drives in one array would be a very bad idea. Besides the near-certain terrible performance, when (not if) a drive goes bad you will have to rebuild the entire array onto a new drive, which will take forever with that many drives, and while it is rebuilding you have no protection on the array. If you acquire 30 drives at once you will almost certainly have some from the same manufacturing batch (unless you start at 3 and slowly build your way up to 30); if you get a bad batch, you're going to lose all of your data. Also, unless you use fairly small drives (and then what would be the point of spending the crazy amount of money this setup will cost?), it's going to be hard to keep track of all the partitions in that setup.
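To put a rough number on that rebuild time, here is a back-of-the-envelope calculation; the 750 GB drive size and 50 MB/s sustained rebuild rate are illustrative guesses (rebuilds are usually throttled well below raw disk speed while the array stays online), not measured figures:

```python
# Hypothetical figures for a single-drive rebuild estimate:
drive_gb = 750           # capacity of the replaced drive
rebuild_mb_per_s = 50    # assumed sustained rebuild throughput

rebuild_hours = drive_gb * 1000 / rebuild_mb_per_s / 3600
print(f"~{rebuild_hours:.1f} hours to rebuild one drive")  # roughly 4 hours
```

And that is the optimistic case: under real load the rebuild rate drops further, stretching the unprotected window accordingly.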

In other words, you are going through a crazy amount of hassle to avoid giving up the space of a couple of extra drives. You could also do a RAID 50 setup of some sort if you really wanted performance.
 
Get a 4-port SCSI RAID card. That would give you a maximum of 52 drives.
 
Rally-
Wouldn't I just have one large partition? My goal here is to use RAID 5 or 6 on a group of 30 of the same drive, as in that Hitachi 1 TB drive, or 30 750 GB drives. I would have the OS on a separate hard drive, and the block of 30 drives would just host ripped movies (that I own).
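For reference, here is what the usable space works out to for 30 of those hypothetical 750 GB drives under the layouts discussed in this thread (a simplified calculation that ignores filesystem overhead and GB/GiB rounding):

```python
n, size_gb = 30, 750  # hypothetical drive count and size

jbod = n * size_gb          # no redundancy: every byte usable
raid5 = (n - 1) * size_gb   # one drive's worth of capacity goes to parity
raid6 = (n - 2) * size_gb   # two drives' worth of capacity goes to parity

print(jbod, raid5, raid6)   # 22500 21750 21000 (GB)
```

So the parity overhead on an array this wide is small in percentage terms; the objections in this thread are about rebuild risk and performance, not lost space.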
 
Try the Wikipedia article on RAID. I think it does a nice job explaining it in a slightly more accurate way than, say, this thread, and the diagrams on the page are excellent for visualizing the material being discussed.

When you're talking about that many drives, the controller and power supplies alone will run a few thousand dollars. At that point, you're probably better off getting a dedicated SAN device. I don't even want to know how you're going to cool it or pay for all the electricity. :)

This is a pretty decent HOWTO covering cobbled-together large disk clusters, even if it's a little dated and very Linux-oriented.
 
You don't want a 30-disk RAID array. With RAID 5, if two disks fail, *all* your data is lost. Break it up into 6- or 8-disk arrays, independent of each other.

If the potential for failure doesn't deter you, consider the write performance of such a system. Any write smaller than a full stripe triggers a read-modify-write cycle: the controller must read the old data and old parity, then write the new data and new parity. With a 30-disk stripe, very few writes will cover a full stripe, so write latency takes a huge hit.
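A rough sketch of that classic small-write penalty, counting physical I/Os per logical write (a simplified model; real controllers cache and coalesce writes, so treat these as upper-bound illustrations):

```python
def raid5_small_write_ios(logical_writes):
    # read old data + read old parity + write new data + write new parity
    return logical_writes * 4

def raid6_small_write_ios(logical_writes):
    # same cycle, but with two parity blocks: 3 reads + 3 writes
    return logical_writes * 6

print(raid5_small_write_ios(100), raid6_small_write_ios(100))  # 400 600
```

A plain disk does 100 small writes in 100 I/Os; the parity levels multiply that by 4x or 6x, which is why the advice below favors several smaller arrays or simple concatenation for media storage.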

Your best bet is probably an OS-based volume manager to concatenate (or simply mount) several 6/8-disk arrays in one location: Solaris' ZFS, Linux's EVMS, Windows' dynamic disks, etc.
 
I have been looking at Ockie's nuclear powered galaxy 4.0, and I want to build one, but with RAID. I am a "raid noob," but I think I have my facts straight. Can someone here correct me if I am wrong:

Haha, I have a following.

1. I can link two 3ware RAID cards together to have a total of 30 drives in one giant chunk of space? I notice that they make a 24-port card, a six-port card, an eight-port card, etc. So if I wanted to have 30 individual drives, I could configure the two cards to work together as one giant cluster, even if there were not 30 ports on one card. Am I right?

Yes, you are right. Most higher-end RAID cards support teaming, and some support quite a few cards... the system will see them as one card depending on the software setup, thus giving you one fat array.

2. RAID 5 would let me hot-swap a drive on one of these cards, correct? So: a movie is running, I have 29 drives set up, and I just plug in a drive cage with a hard drive in it and BOOM, I now have 30 drives, and the whole setup will configure itself, moving data, formatting, etc. Same thing if I pull out a single drive: everything keeps humming, then I replace that drive and everything rebuilds.

Some cards have auto-expanding, but you still have to be careful; I tried this and it was basically a crap shoot. Obviously, I was using different hardware than now.

Anyway, I would stay away from this type of setup. Here is what I would recommend for you, since you want a HUGE storage drive:

JBOD - Shows all your drives as one logical drive; most proper RAID controllers work with this... so you can have 10 or you can have 20 drives, and all of them will appear as one big fat drive.

Windows Home Server - This OS is now in beta but available and working well. It has a feature very similar to JBOD, EXCEPT it has a function known as the "Storage Pool": an auto-expanding single logical drive. So, like you want, 29 drives showing up as one drive, but the second you plug the next one in, it automatically fattens that one logical drive to accommodate the 30th drive.

3. RAID 6 will let me do all of the above, but if one drive fails, I still have a second parity drive as a failsafe, so I can replace the bad one, and while I am replacing it, I still have protection if another drive goes down.
Keep in mind, with RAID drive failures you are loading the stress onto the other drives, so when a large array rebuilds, the stress falls on all the remaining drives; this is a prime moment for a second or weaker drive to fail.

This is really why I just run plain disks: if something fails, I am only out one drive. I back up the really important stuff; the rest I can afford to lose. RAID is great, but the price-to-benefit ratio for my needs (read: my own needs) just doesn't balance out.
 
Hi Ockie!
You do have a following- I have posted a link to your work log in several other forums where people have asked about building a monster storage center. I want it to be as large as possible because I plan on ripping HD discs (that I own) so I can run a media center in the living room that networks to it. So you are saying: download Windows Home Server, use the Storage Pool, and configure my RAID cards to use JBOD instead of RAID 5 or 6?

thank you for your input man,
Ian
 
To answer your original questions:
1. As stated by others, verify with 3ware for the particular cards/driver versions you are thinking of using.
2. You are talking about 2 separate things: online expansion and drive-failure rebuild. To the first (adding an extra drive to an existing array): yes, you can add an extra drive and do an online expansion, provided it is supported by the card + driver/software. As Ockie stated, it can be a crap shoot. Best case it works perfectly; medium case you need to do it in the BIOS and it takes forever, keeping your server down; worst case it hoses your array. To the second (drive rebuild): yes, if you have a drive failure (or unplug or remove a drive) the system will handle rebuilding in the background. You will take a major hit in performance while it does this, and it can take a very, very long time to complete.
3. Yes, with RAID 5 you must manually replace the failed drive and wait for a rebuild before you are protected against a second failure. With RAID 5 + hot spare, as soon as a failure is detected the controller will start rebuilding onto the hot spare, and you can replace the failed drive at your leisure; it then becomes the new hot spare. With RAID 6 you can lose 2 drives before you are without protection, so yes, if you lost one drive, you are still protected while it rebuilds. Again, there is a major performance hit while that rebuild happens. I haven't looked recently, but when RAID 6 controllers first came out they were reported as slow, too slow to be of much use over RAID 5.

The question to ask yourself is what is your budget, what is your desire to home-brew this, and what is your desire to power it, listen to it, and maintain it. A 30-drive system will be loud, hot, and will use a lot of power. It will also be very expensive, especially with high-capacity drives.

Not knowing your budget, but if you have the money and want a system that works, buy a SAN/DAS (storage area network/direct attached storage) setup. Call up Dell and get a quote for what you're looking for. Also Aberdeen advertises a lot for storage servers, I guess they do SAN/DAS too. Haven't used either Dell or Aberdeen for these, just to give you somewhere to start looking...

That said, if you have the budget of a mere mortal and want one giant drive, do as Ockie said: set up a software JBOD pool/dynamic disks. Then you can add drives as often as you want, there is no real rebuild time to speak of, and you can buy disks as you need them, enjoying both lower cost over time and the higher-capacity disks of the future. Back up just the important stuff using whatever method you want (tape, removable disk, DVD, disk in the same or another system, etc.).

I think for HD movies that you own on disc, you do not need redundancy: you can re-rip them easily enough, and you do not need the expense and complexity of a RAID system. To get 30 drives in one system you'd still end up buying a very expensive chassis/PSU and cards, however.

One last thought I'll leave you with: if you want something less power-hungry, less expensive, just as expandable, still fast enough for your needs, and much quieter and cooler, consider external drive enclosures over USB, FireWire, or eSATA. With the first two, just leave all drives hooked up and power them on as needed. With the latter, get an external chassis and swap drives as needed (available controller-card ports are the limitation there, pretty much 4 per card max). The disadvantage is that everything isn't always available all the time, which is either a deal breaker or a minor inconvenience, but it definitely has many advantages over the expensive system you are discussing.
 
Hi Ockie!
You do have a following- I have posted a link to your work log in several other forums where people have asked about building a monster storage center. I want it to be as large as possible because I plan on ripping HD discs (that I own) so I can run a media center in the living room that networks to it. So you are saying: download Windows Home Server, use the Storage Pool, and configure my RAID cards to use JBOD instead of RAID 5 or 6?

thank you for your input man,
Ian

HD discs are HUGE. You are going to need a lot of storage :)

My suggestion was to use JBOD with your RAID controllers... OR run Windows Home Server, which will do the same thing (this eliminates the need for RAID controllers or matching controllers, and you can use your onboard ports).

Since you have your stuff pretty much backed up already (the HD discs), you shouldn't worry about the added pain of RAID 5 or 6. The worst-case scenario is that you lose a drive, which you already have on hard media.
 
Guys- thank you for the responses. I want to make sure I am right on with this stuff, so I am going to do some more research. I will post pics when I start, Ockie style! :)
 