Multiple SATA controllers?

TType85

Does anyone know of any issues with running multiple SATA cards in one system?

I have one Syba 3114-based PCI card and want to add at least one more (10 SATA ports total, including what's on the mobo). I'll also be running this alongside a Promise IDE controller.
 
There shouldn't be an issue with multiple controllers as long as they don't conflict with each other. The only problem I've had with multiple controllers is with booting: whichever add-on card gets initialized first tends to be the only one the system wants to boot from. But if you boot off the motherboard controller, it should be a non-issue.
 
I know some people have used a couple of controllers with Linux to do RAID 5 over all the drives connected to both controllers, but I can honestly say I do not know how to do it. I'm pretty sure unhappy_mage does. ;)
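
For what it's worth, on Linux this is usually done with the kernel's md driver (mdadm), which doesn't care which controller each disk hangs off of. A rough sketch, assuming four disks spread across the onboard ports and an add-in card; the device names below are placeholders, not anyone's actual setup:

Code:
# Sketch only: builds one RAID 5 array out of disks that can sit behind
# different controllers. /dev/sdb ... /dev/sde are made-up device names;
# substitute whatever your onboard ports and add-in cards actually expose.
import subprocess

member_disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=5",
     f"--raid-devices={len(member_disks)}",
     *member_disks],
    check=True,
)

# The kernel exposes the whole array as a single block device (/dev/md0);
# /proc/mdstat shows the array state and the initial sync progress.
print(open("/proc/mdstat").read())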
 
Let me give my favorite answer: it depends.

I sure hope that most "modern-day" add-in SATA controllers will play nice with others, but it is not guaranteed. I own a PATA controller that will not work in my Dell system unless the boot HDD is connected to it.

Also, since you will have all of these controllers hanging off a single PCI bus, be aware that the performance will not be stellar. In all likelihood, your total throughput will be <= 100 MB/s.
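
To put a number on that, here is a quick back-of-the-envelope for plain 32-bit/33 MHz PCI; the overhead factor is just a rough guess, not a measurement:

Code:
# Plain PCI is 32 bits wide at 33 MHz and is shared by every card on the bus.
bus_width_bytes = 4
clock_hz = 33000000
theoretical_mb_s = bus_width_bytes * clock_hz / 1000000   # ~132 MB/s peak

# Arbitration and protocol overhead (plus the NIC and anything else on the
# bus) mean sustained disk transfers usually top out well below the peak.
realistic_mb_s = theoretical_mb_s * 0.75                  # rough assumption
print(f"peak: {theoretical_mb_s:.0f} MB/s, realistic ceiling: ~{realistic_mb_s:.0f} MB/s")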

The Syba website is here: http://www.syba.com/product/43/02/05/index.html
The user manual does not say anything about multiple controllers, and there is no FAQ entry either.

Bottom line: it's likely going to work, but no guarantees.
 
I would suggest making sure the SATA add-on cards you get are the same kind. I'm sure most hardware manufacturers would have tested more than one of their own cards in the same box at once.
 
Thanks for the answers everyone.

I am not really concerned about speed, as this is a home-use server. It will mainly be streaming DVDs and music to a few HTPCs and desktops. The box boots off a PATA drive connected to the mobo and will continue that way.
 
You can use multiple RAID cards from different vendors, but since the cards are different you are left with two choices in terms of the array(s) you can create.

1) Use software RAID in the OS: This lets you create a single large volume; the controllers just present all the disks as JBOD, and the OS builds whatever arrays it wants out of them. It also makes the arrays more portable: if, say, some of your motherboard SATA ports die, you can move those disks to one of the cards and the array should keep working (there's a short sketch of this at the end of this post).

2) Create separate arrays per controller: For each controller you add, you would use its proprietary software to create an array out of just the disks attached to that controller. Personally I would recommend against this, since it creates a maintenance nightmare, and heaven forbid one of the cards dies and you have to track down that exact card again to restore the array. (Vendor RAID implementations are rarely "to spec" or compatible with anyone else's.) The one benefit is that it frees up some bandwidth on the PCI bus, since the OS driver only sends commands to the controllers and they send back the data; with software RAID the OS talks to each drive individually because it is the one running the array.

As someone mentioned, though, if you start throwing in a lot of cards, that PCI bus will get saturated quickly, and the saturation will affect things like your Ethernet card as well. Just some things to think about.
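
On the portability point in #1, the nice thing with Linux md is that each member disk carries its own superblock, so the array reassembles no matter which controller the disks end up behind. A minimal sketch, again assuming an existing md array and a made-up array name:

Code:
# Sketch: after moving disks from a dead controller to a different one,
# md finds them again by scanning for superblocks. /dev/md0 is a placeholder.
import subprocess

# Scan all block devices for md superblocks and bring known arrays back up.
subprocess.run(["mdadm", "--assemble", "--scan"], check=True)

# Show which member devices the array was rebuilt from this time around.
subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)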
 
Nostradamus said:
You can use multiple RAID cards from different vendors, but since the cards are different you are left with two choices in terms of the array(s) you can create.

1) Use software RAID in the OS: This lets you create a single large volume; the controllers just present all the disks as JBOD, and the OS builds whatever arrays it wants out of them. It also makes the arrays more portable: if, say, some of your motherboard SATA ports die, you can move those disks to one of the cards and the array should keep working (there's a short sketch of this at the end of this post).

2) Create separate arrays per controller: For each controller you add, you would use its proprietary software to create an array out of just the disks attached to that controller. Personally I would recommend against this, since it creates a maintenance nightmare, and heaven forbid one of the cards dies and you have to track down that exact card again to restore the array. (Vendor RAID implementations are rarely "to spec" or compatible with anyone else's.) The one benefit is that it frees up some bandwidth on the PCI bus, since the OS driver only sends commands to the controllers and they send back the data; with software RAID the OS talks to each drive individually because it is the one running the array.

As someone mentioned, though, if you start throwing in a lot of cards, that PCI bus will get saturated quickly, and the saturation will affect things like your Ethernet card as well. Just some things to think about.

The only problem with #1 is that not many OSes support RAID in software. With Windows XP you "can," but that would violate the EULA, and I can't see a person spending $600+ on a license for W2K3 either. Then we come to Linux: not too many people know it well enough to do this on their own, but that's why forums become so helpful. :D Also, software RAID takes up CPU cycles, which may become an issue if you are using an older processor.

I suppose I'm more of #2's line of thought. I much prefer hardware RAID just because it takes the strain off the processor. You do bring up a good point: if the controller dies, you need to find the exact same card again to restore your array without losing data (which is why we all know to BACK UP YOUR DATA). Of course, not all of us practice what we preach.
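
If you do go the software route and want to see what the parity math actually costs, something like this is an easy way to keep an eye on it (Linux only, and it assumes an md array already exists):

Code:
# Sketch: /proc/mdstat shows array health and resync progress for Linux
# software RAID; watching it during a big copy gives a feel for the overhead.
with open("/proc/mdstat") as f:
    print(f.read())

# The parity work itself shows up as kernel threads (e.g. md0_raid5) in top,
# which is where an older CPU will start to feel the strain.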
 