Best way to connect 20 drives?

3one5 (Gawd, joined Jan 6, 2005, 579 messages)
I currently have a tower case that will hold 20 3.5" drives. Though I do not have 20 drives yet, it is something I would like to work towards. However, I'm trying to figure out the best way to hook them up. I've been looking at the MegaRAID cards from LSI and have found an 8-port card for about $140. I believe this is a slightly older card, but I'm putting this together for home use so that's not a huge issue. However, when searching around I ran into SATA port multipliers. From what I am reading, these allow you to run up to 5 drives off of a single SATA port. I realize that a single SATA port has 300 MB/s of bandwidth and a standard drive can use up to 90 MB/s of that, so there may be a bottleneck there. However, if I could put together something for significantly less money it may be a better option.
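Here's the rough back-of-the-envelope math I'm going off of (just a sketch using the nominal 300 MB/s and 90 MB/s figures above; real-world numbers will be lower):

```python
# Rough port-multiplier bandwidth estimate (nominal figures, not benchmarks).
SATA_PORT_MBPS = 300      # SATA II host port, ~300 MB/s usable
DRIVE_MBPS = 90           # sequential throughput of a typical 3.5" drive
DRIVES_PER_PM = 5         # drives hung off one port multiplier

aggregate_demand = DRIVE_MBPS * DRIVES_PER_PM      # what 5 drives could push at once
per_drive_share = SATA_PORT_MBPS / DRIVES_PER_PM   # each drive's share if all 5 stream

print(f"Demand behind one PM: {aggregate_demand} MB/s vs {SATA_PORT_MBPS} MB/s available")
print(f"Worst-case per-drive share: {per_drive_share:.0f} MB/s (vs {DRIVE_MBPS} MB/s solo)")
```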

What do you guys think?
 
What OS are you planning to run? The latest Supermicro card, the SASLP I think, does not play nice with Linux.
 
I'm kinda interested in those port multipliers.

If you're not looking for insane speed, would those not work for the OP? (and for some stuff that I'm looking at that is similar)
 
You need a controller that will play nice with port multipliers. Most multipliers use Silicon Image chips, thus a controller card with a Silicon Image chipset would probably guarantee compatibility.
 
What OS are you planning to run? The latest Supermicro card, the SASLP I think, does not play nice with Linux.

Where did you hear that?

It doesn't work with BSD, but it does work with Linux, and it is supported for Linux environments.
 

All of that will cost almost as much as three of the controllers I suggested, and you will be strangled for bandwidth.
Seriously, three SASLPs will far outperform the above, and it will be much easier to deal with any issues that arise.
Cable management will also be way better.
 
All of that will cost almost as much as three of the controllers I suggested, and you will be strangled for bandwidth.
Seriously, three SASLPs will far outperform the above, and it will be much easier to deal with any issues that arise.
Cable management will also be way better.

Thanks for all the additional reading and the hardware suggestions. You were correct that going with a setup that does not include multipliers is the way to go. I just need to verify that the hardware will work with Linux.
 
I guess the thing that I'm trying to get past now is the PCI-X slots. They don't have nearly the bandwidth of a PCI-E slot. Would running 8 SATA drives on a PCI-X slot cause a huge bottleneck?
 
I guess the thing that I'm trying to get past now is the PCI-X slots. They don't have nearly the bandwidth of a PCI-E slot. Would running 8 SATA drives on a PCI-X slot cause a huge bottleneck?

Depends on how you're using them. If you're using the machine as a network file server, well Gigabit Ethernet will probably pull 90MB/sec, so assuming your PCI-X bus has around 533MB/sec of bandwidth, you'll be OK.

If the machine is going to be a workstation, the likelihood is that the PCI-X bus will hold you back, especially if you choose multiple controller cards as opposed to a single one. Bus contention is your enemy.
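To put rough numbers on that (a quick sketch; the 90 MB/s GigE and 533 MB/s PCI-X figures are just the ballpark values from the post above):

```python
# Quick check: for a file server, the network sets the ceiling long before the bus does.
GIGE_MBPS = 90          # realistic GigE throughput quoted above
PCIX_BUS_MBPS = 533     # 64-bit/66MHz PCI-X bus, rough figure quoted above
DRIVE_MBPS = 90         # one modern drive's sequential rate

if GIGE_MBPS < PCIX_BUS_MBPS:
    print("File server case: GigE is the bottleneck, the PCI-X bus is fine.")

# Workstation (local I/O) case: how many drives before the bus itself saturates?
drives_to_saturate_bus = PCIX_BUS_MBPS // DRIVE_MBPS
print(f"Local transfers: roughly {drives_to_saturate_bus} drives streaming at once fill the bus.")
```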
 
PCI-X really isn't the issue when dealing with network transfers.

With a GbE network you will have a theoretical max of 125MB/s transfers, and the 64-bit PCI-X bus is 1066MB/s IIRC.

I prefer the SASLPs because of cable management, and PCI Express is more common than PCI-X on mobos.
Although the SAT2-MV8 will run in a standard PCI slot, it will do so at 133MB/s maximum, and if you are doing local (non-network) transfers it will possibly become the bottleneck.

I don't think you ever mentioned: are you using some sort of RAID or just JBOD?
Tell us a little more about what you are trying to do and what components you already have, and we can help you a little more with this.
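Rough illustration of that local-transfer cap (a sketch using the nominal bus figures above; real PCI overhead makes the 133MB/s case a bit worse in practice):

```python
# SAT2-MV8 in a plain PCI slot: fine for serving over GbE, a hard cap for local copies.
GBE_MBPS = 125            # theoretical GbE max quoted above
PCI_SLOT_MBPS = 133       # 32-bit/33MHz PCI, shared with everything else on that bus
PCIX_SLOT_MBPS = 1066     # 64-bit/133MHz PCI-X
ARRAY_MBPS = 8 * 90       # eight drives reading sequentially at once

for slot, mbps in [("32-bit PCI", PCI_SLOT_MBPS), ("PCI-X 133MHz", PCIX_SLOT_MBPS)]:
    keeps_up_with_gbe = mbps >= GBE_MBPS
    local_cap = min(ARRAY_MBPS, mbps)
    print(f"{slot}: keeps up with GbE: {keeps_up_with_gbe}; "
          f"local 8-drive throughput capped at ~{local_cap} MB/s")
```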
 
PCI-X really isn't the issue when dealing with network transfers.

With a GbE network you will have a theoretical max of 125MB/s transfers, and the 64-bit PCI-X bus is 1066MB/s IIRC.

I prefer the SASLPs because of cable management, and PCI Express is more common than PCI-X on mobos.
Although the SAT2-MV8 will run in a standard PCI slot, it will do so at 133MB/s maximum, and if you are doing local (non-network) transfers it will possibly become the bottleneck.

I don't think you ever mentioned: are you using some sort of RAID or just JBOD?
Tell us a little more about what you are trying to do and what components you already have, and we can help you a little more with this.

I have a Tyan server board on the way. It has:
2 133MHz PCI-X slots
1 100MHz PCI-X slot
2 standard PCI slots
1 x16 PCI-E slot

I will be using software RAID, which is part of the FreeNAS software. This will be for home use. The biggest transfers will be done while streaming files to the HTPC.
 
I have a Tyan server board on the way. It has:
2 133MHz PCI-X slots
1 100MHz PCI-X slot
2 standard PCI slots
1 x16 PCI-E slot

I will be using software RAID, which is part of the FreeNAS software. This will be for home use. The biggest transfers will be done while streaming files to the HTPC.

You'd only be able to use one SASLP anyhow, only one available slot for it!
 
Correct; however, if it were going to make a huge difference I would be willing to buy a different motherboard.

I'd suggest four $40 4-port SATA cards combined with a motherboard that has 4 spare ports to make an even 20.

You can find plenty of cards that have built-in Linux support. You should have no problem saturating GigE.
 
I'd suggest four $40 4-port SATA cards combined with a motherboard that has 4 spare ports to make an even 20.

You can find plenty of cards that have built-in Linux support. You should have no problem saturating GigE.

I could go with 3 PCI-X 4-port cards and 2 PCI 4-port cards. That would give me the 20 needed ports and spread the bandwidth around. This would be around $200 delivered to my door. Would there be any disadvantage to going this route?
 
I could go with 3 PCI-X 4-port cards and 2 PCI 4-port cards. That would give me the 20 needed ports and spread the bandwidth around. This would be around $200 delivered to my door. Would there be any disadvantage to going this route?

For this application don't buy any PCI-X cards; your future motherboard will not likely have any slots for them! You can put a 32-bit card in a PCI-X slot with no problem.

It's too bad we don't have more experience with the SASLP in Linux; 3 of those would give you choices for the next few years.
 
For this application don't buy any PCI-X cards; your future motherboard will not likely have any slots for them! You can put a 32-bit card in a PCI-X slot with no problem.

Incorrect. The PCI-X card that Nitrobass24 recommended in his first post in this thread is backwards compatible with PCI slots. So as long as motherboards have a PCI slot (I don't see this going away anytime soon), the PCI-X SuperMicro SAT2-MV8 is a good choice. Pair two of those with a mobo that has 4 spare SATA ports and you have 20 ports for $200. The SuperMicro SAT2-MV8 should have at least slightly better performance than the $40 4-port controller cards, which use Silicon Image chips.
 
Incorrect. The PCI-X card that Nitrobass24 recommended in his first post in this thread is backwards compatible with PCI slots. So as long as motherboards have a PCI slot (I don't see this going away anytime soon), the PCI-X SuperMicro SAT2-MV8 is a good choice. Pair two of those with a mobo that has 4 spare SATA ports and you have 20 ports for $200. The SuperMicro SAT2-MV8 should have at least slightly better performance than the $40 4-port controller cards, which use Silicon Image chips.

Yes, the SuperMicro SAT2-MV8 is a great card and will work in 32-bit PCI slots, assuming there are no caps etc. in the way. However, a $40 Syba card that fits nicely and supports expanders if ever needed might be a good alternative.
 
Would it not be better to spread the drives over the 3 PCI-X and 2 PCI slots rather than concentrating them all on two PCI-X slots?
 
I have 2 of the SAT2-MV8 PCI-X cards and a single $20 4-port Silicon Image PCI card running 20 drives. I started with a dual Xeon mobo that had 2 PCI-X slots before migrating down to a single-core Turion64 for lower power usage. Since it is a file server, I only care about 100Mb network speeds, and the dual Xeons were slightly snappier than the Turion64. When I upgrade everything to gig ethernet, I will be migrating to the SASLP cards in PCI Express slots on some low-wattage mATX board.

I preferred the 8-port cards because they were easier to cable manage, used up fewer slots (important on a mATX mobo), cost less than $100 through eBay/Newegg open box/forums, work fine in Linux, and can fit 8 drives on one slot. Someone noted better performance than using 3 SiI 4-port cards, but I don't remember the details.

I looked at using PMs (port multipliers) and do consider them an option if performance is not that critical. I'm using some SuperMicro PM cards in a Linux backup system. It took a while to get set up, but it works fine. I wouldn't stream videos off of it, but it's fine for running in the background or for a basic file server.

I would still recommend the SuperMicro 8-port cards though.
 
I would still be wary of compatibility, especially if you're going to be using an appliance distro like OpenFiler where it may not be easy to upgrade the kernel or build third-party modules. FreeNAS is probably out completely at this point; the best support for this card apparently comes via the SCST beta drivers (though an older version of this code is merged with the kernel proper), which have not been ported to FreeBSD as far as I can tell. It seems like it can be made to work properly in Linux, but if you're not a fairly proficient Linux user already you may have trouble.

The SAT2-MV8 works perfectly in both Linux and FreeBSD (as well as OpenSolaris), and if you've got a board with PCI-X it's slightly cheaper as well and using 2 or 3 of them seems like a no brainer to me. You're not going to be using those PCI-X slots for anything else...

What NICs does your Tyan board have? Some of them have terrible ones, even though they're 'server' boards.

Would it not be better to spread the drives over the 3 PCI-X and 2 PCI slots rather than concentrating them all on two PCI-X slots?
No. A single disk can nearly saturate PCI. You're much better off with 8 disks on a PCI-X slot than 4 on PCI.
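To put numbers on that (a quick sketch using the nominal bus figures mentioned earlier in the thread):

```python
# Why 8 drives on a PCI-X slot beat 4 drives on plain PCI (nominal figures, shared bus).
DRIVE_MBPS = 90

def per_drive(bus_mbps, drives):
    """Each drive gets the smaller of its own speed or an equal share of the bus."""
    return min(DRIVE_MBPS, bus_mbps / drives)

print(f"8 drives on PCI-X (1066 MB/s): ~{per_drive(1066, 8):.0f} MB/s each")
print(f"4 drives on PCI (133 MB/s):    ~{per_drive(133, 4):.0f} MB/s each")
```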
 
I've been doing all kinds of research on OpenFiler and FreeNAS and think I'll just go with Ubuntu Server 9.04. I'm not a huge fan of BSD so FreeNAS is definitely out. Still looking at OpenFiler but will probably just stick with Ubuntu Server.
 
I've been doing all kinds of research on OpenFiler and FreeNAS and think I'll just go with Ubuntu Server 9.04. I'm not a huge fan of BSD so FreeNAS is definitely out. Still looking at OpenFiler but will probably just stick with Ubuntu Server.

You're that against BSD that you'll avoid using it just because it's BSD?

Debian > Ubuntu Server, if you're just going with a vanilla Linux.
 
I've been doing all kinds of research on OpenFiler and FreeNAS and think I'll just go with Ubuntu Server 9.04. I'm not a huge fan of BSD so FreeNAS is definitely out. Still looking at OpenFiler but will probably just stick with Ubuntu Server.

All three of those are great choices, pick what best fits your hardware and comfort level.

Ubuntu will have the newest kernels of that pool and most likely support the newer hardware.
 
You liked FreeNAS when you thought it was Linux-based and now don't because you learned the back end was FreeBSD? FreeNAS didn't change, just your preconceptions. Doesn't make sense to me.....

The AOC-SASLP-MV8 doesn't work with *BSD (yet) and has limited yet functional support in Linux (you can compile your kernel, right?)

Read the thread http://hardforum.com/showthread.php?t=1397855.
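If you do end up with that card under Linux, something like this is a quick sanity check that it is detected and has a driver bound. The 'mvsas' module name is what Marvell SAS parts commonly use, so treat it as an assumption and adjust for whatever controller you actually buy:

```python
# Quick sanity check, assuming a Marvell-based card (e.g. AOC-SASLP-MV8) and the
# 'mvsas' kernel module; adjust the names for your actual controller.
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Is the controller visible on the PCI bus at all?
marvell_lines = [l for l in run(["lspci"]).splitlines() if "Marvell" in l]
print("Controller seen by lspci:" if marvell_lines else "No Marvell device found by lspci.")
for line in marvell_lines:
    print(" ", line)

# Is the expected driver module actually loaded?
loaded = any(l.startswith("mvsas") for l in run(["lsmod"]).splitlines())
print("mvsas module loaded:", loaded)
```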
 
You liked FreeNAS when you thought it was Linux-based and now don't because you learned the back end was FreeBSD? FreeNAS didn't change, just your preconceptions. Doesn't make sense to me.....

The AOC-SASLP-MV8 doesn't work with *BSD (yet) and has limited yet functional support in Linux (you can compile your kernel, right?)

Read the thread http://hardforum.com/showthread.php?t=1397855.

No, I liked FreeNAS when I knew it was BSD. That is, until I started to look more into BSD and didn't like how they handle network adapters (the naming, mainly). I currently work with Ubuntu, SUSE and Red Hat and like consistency. I'm not saying FreeNAS is completely out, just looking at all of my options.
 
I suppose that is a reason... but you could always 'ifconfig bge0 name eth0' if it is such a concern. I normally don't have cause to look at the adapter name on my BSD box, so whatever they get named is fine as long as it works ;)

Of course this is off topic. For your controller question, you could always just add a cheap 2-port PCIe x1 card for each pair of new HDDs after the mobo is full. Unless you are sure you are going to need 20 disks, that is the easiest way to do it.
 
Now things are getting expensive. :) I've decided that if I want to do this half right I need to have a proper backup. Because I'm not terribly concerned with uptime, as this is a home solution, I'm not going to bother with RAID other than JBOD. Instead I'm going to build a semi-duplicate server that can archive everything on the primary. I have a second Lian Li PC-V2100 on the way along with another Tyan server board. I figure I'll get the base systems together and then slowly start filling them with drives.
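For what it's worth, here's a minimal sketch of the kind of nightly mirror job the backup box could run (assuming rsync over SSH between the two machines; the 'primary' hostname and the paths are placeholders, not anything mentioned in this thread):

```python
# Minimal nightly mirror from the primary server to the backup box.
# Assumes rsync and SSH key auth are already set up; host and paths are placeholders.
import subprocess

SOURCE = "primary:/storage/"    # hypothetical share on the primary server
DEST = "/backup/storage/"       # local path on the backup server

result = subprocess.run(
    ["rsync", "-a", "--delete", "--stats", SOURCE, DEST],
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("Mirror failed:", result.stderr)
```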
 
I have a second Lian Li PC-V2100 on the way along with another Tyan server board. I figure I'll get the base systems together and then slowly start filling them with drives.

Wait, you bought a second Lian Li PC V2100 right now? How much did you pay for it? If it's $200+, is it too late to cancel the order?
 
Wait, you bought a second Lian Li PC V2100 right now? How much did you pay for it? If it's $200+, is it too late to cancel the order?

Wanted a silver one but could only find one in black. Got it on eBay for $145.
 
Wanted a silver one but could only find one in black. Got it on eBay for $145.

Ahh, KK. Well, the cheapest place I can find the Lian Li 2100 via Froogle is $293. For that much you could have gotten the awesome Norco 4020 case, hence why I asked whether or not you could cancel the order. The Norco 4020 would have been a better use of that $293 for a file server.

Though for $145, that's a decent enough price.
 
Ahh, KK. Well, the cheapest place I can find the Lian Li 2100 via Froogle is $293. For that much you could have gotten the awesome Norco 4020 case, hence why I asked whether or not you could cancel the order. The Norco 4020 would have been a better use of that $293 for a file server.

Though for $145, that's a decent enough price.

Yeah, it will hold 12 drives below and then 8 more in the 5.25" bays. Besides, I really like Lian Li cases! :)

[Images: LianLiPC60s.JPG, LianLiPCV800b.JPG, LianLiServers.JPG]

The PC-60USB on the right had its parts scavenged for another project. Still need to replace it all.
 