Seeking input on adding a 16 port Sata expansion card

psy81

Gawd
Joined: Feb 18, 2011
Messages: 605
I have an older system and the parts are as follows:

FX-8350
GA-990FXA-UD3 (revision 3)
16 GB DDR3 @1600MHz
250GB Samsung SSD
2 X GTX 670s
750 watt Corsair PSU
Antec 1200

Question/Input: (I apologize if the answer to my question is obvious, but I've never added a SATA expansion card before and I tried searching online with no luck.)

Since the Antec 1200 can accommodate a bunch of hard drives, I was thinking of adding 4 Icy Dock Tray-Less 5-bay (3.5") cages for a total of 20 hot-swap hard drives.
link: https://www.amazon.ca/DOCK-Flexcage-Hot-Swappable-Tray-Less-Backplane/dp/B06X9P4BYL/ref=sr_1_3?crid=2BSJS9PNDTWPZ&keywords=icy+dock+trayless&qid=1706231207&sprefix=icy+dock+trayless,aps,106&sr=8-3&th=1

In order to accommodate that many hard drives, I was thinking of adding a 16-port SATA card.
link: https://www.amazon.ca/MZHOU-16-Port-Controller-Expansion-Bracket(ASM1064/dp/B098QPJJFF/ref=sr_1_7?crid=1AJ7A20WGUF7N&keywords=16+port+sata&qid=1706231227&sprefix=16+port+sata+,aps,129&sr=8-7&th=1)

Given that my motherboard is running the GTX 670s in SLI, will I have enough PCIe lanes left over for the 16-port SATA expansion card?

The motherboard has a total of 38 lanes (https://www.modders-inc.com/gigabyte-990fxa-ud3-rev-4-0-motherboard-review/2/)
Motherboard specs: https://www.gigabyte.com/Motherboard/GA-990FXA-UD3-rev-30#ov

I'm also open to suggestions on parts (SATA expansion card and HDD bays)

P.S. I recognize I would have to upgrade the PSU to run 20 hard drives.

Thanks in advance!
 
That's a x1 card, so it will work in an open slot.

But jeebus, 16 drives with a total bandwidth capacity of 500MB/s (less than a single SATA 3 port!) means you are not going to have a good time if you are intending anything even slightly intensive, and I don't see in the specs whether or not it supports hotplugging.

May I instead recommend going to fleabay and getting an LSI/Broadcom/Avago 92xx or 93xx HBA ("IT mode") with a -16i suffix and 4 SAS to 4xSATA forward breakout cables. Hotplug will work, it's not a sketchy Chinesium card, and even though you'll be restricted to only using 4 lanes of the 8 the card offers, that's still 2GB/s to play with.

You can also probably get 2 -8i cards for less than a single -16i and it will double the bandwidth potential between the sets.
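
To put rough numbers on the difference (a quick Python sketch; the ~500MB/s-per-PCIe-2.0-lane figure and the link widths are the assumptions here, and real-world throughput lands lower after overhead):

# Rough per-drive bandwidth if all 16 drives are hammered at once.
# Assumes ~500 MB/s per PCIe 2.0 lane; real-world throughput is lower.
PCIE2_LANE_MBPS = 500
DRIVES = 16

options = {
    "16-port SATA card, PCIe x1": 1 * PCIE2_LANE_MBPS,
    "LSI -16i HBA in a x4 slot": 4 * PCIE2_LANE_MBPS,
    "2x -8i HBAs, x4 each": 8 * PCIE2_LANE_MBPS,
}

for name, total in options.items():
    print(f"{name}: {total} MB/s total, ~{total / DRIVES:.0f} MB/s per drive")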
 
Forgive my lack of knowledge, but based on the description the card supports 6Gbps transfer speeds. Where are you getting 500MB/s? Is it because of the limited bandwidth of the PCIe lane?

With my two GTX 670s installed I believe I would be using 32 (16x2) out of the available 38 lanes on my motherboard.

I would still have available 2 x PCI Express x16 slots (running at x4) and 2 x PCI Express x1 slots.

Note the motherboard uses PCI Express 2.0

Would this be enough?
 
Forgive my lack of knowledge, but based on the description the card supports 6Gbps transfer speeds. Where are you getting 500MB/s? Is it because of the limited bandwidth of the PCIe lane?
Indeed. The card is PCIe 3.0, which would mean a theoretical ~1GB/s, but because your board is only PCIe 2.0, you're limited to the ~500MB/s lane speed. That's still enough for a transfer between two spinning drives, but beyond that, or with an SSD, things will get choked. Whether that's a problem really depends on your intended use case for all the storage.
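
For reference, here's where those per-lane numbers come from (a quick sketch; encoding overhead is included, and actual transfers land a bit below these theoretical figures):

# Per-lane PCIe throughput: transfer rate (GT/s) times encoding efficiency.
# PCIe 2.0 uses 8b/10b encoding, PCIe 3.0 uses 128b/130b.
def lane_mbps(gt_per_s, efficiency):
    return gt_per_s * efficiency * 1000 / 8  # GT/s -> MB/s

print(f"PCIe 2.0 x1: ~{lane_mbps(5, 8/10):.0f} MB/s")     # ~500 MB/s
print(f"PCIe 3.0 x1: ~{lane_mbps(8, 128/130):.0f} MB/s")  # ~985 MB/s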

With my two GTX 670s installed I believe I would be using 32 (16x2) out of the available 38 lanes on my motherboard.

I would still have available 2 x PCI Express x16 slots (running at x4) and 2 x PCI Express x1 slots.
I'm not sure how you get 38. From the spec sheet, you have 42 lanes total across the slots (2x16 + 2x4 + 2x1), and using the extra slots will not reduce any lane counts or disable onboard devices anywhere else (I miss those days of board designs...). You're free to go nuts with AICs :) .
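
A quick tally of the slots from that spec sheet (a sketch; all slots are PCIe 2.0, so ~500MB/s per lane is assumed):

# Electrical slot widths on the GA-990FXA-UD3, all PCIe 2.0 (~500 MB/s/lane).
slot_widths = [16, 16, 4, 4, 1, 1]

total_lanes = sum(slot_widths)
print(f"Total lanes across slots: {total_lanes}")  # 42
print(f"x4 slot bandwidth: {4 * 500} MB/s")        # plenty for a SAS HBA
print(f"x1 slot bandwidth: {1 * 500} MB/s")        # the 16-port card's ceiling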
 
Would this be enough?
It really depends on what you want to do: if it's to create a high-bandwidth ZFS pool, no; if it's to have 16 drives that you access mostly one at a time, it could be quite enough, yes.
 
Thanks guys! This was helpful.

I was thinking of using it to add hard drives for storage as my older hard drives fill up. In all honesty, I don't need 20 hard drives in a PC, but I thought it would be a way to future-proof and just keep adding hard drives.

I'm on the fence about whether it's worth it. It may be better to just clone/copy the older, smaller hard drives onto newer, larger-capacity hard drives.

Installing and removing hard drives from the Antec 1200 is super tedious due to the number of screws, so I was looking at the Icy Dock Tray-Less 5-bay, which led down this rabbit hole.
 
If you have a bunch of old hard drives around, or see a bunch of them being sold used, it could be.

But if it is to buy new, you could look at buying a small number of larger drives.

14TB Toshiba drives are about $0.013 per GB and 18TB Seagate Exos are about $0.014 per GB, cheaper than a lot of 4TB-6TB options:
https://pcpartpicker.com/products/internal-hard-drive/#sort=ppgb&page=1

And it can make your case/SATA connection setup much simpler for a lower total price.
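
As a rough illustration of that math (a sketch; the per-GB prices for the large drives come from the pcpartpicker sort above, while the 4TB price is a placeholder assumption):

# Cost to reach a similar raw capacity with many small drives vs. a few big ones.
# The $0.030/GB figure for 4TB drives is an assumed placeholder; check listings.
configs = {
    "18x 4TB": (18, 4, 0.030),
    "6x 14TB": (6, 14, 0.013),
    "4x 18TB": (4, 18, 0.014),
}

for name, (count, size_tb, usd_per_gb) in configs.items():
    cost = count * size_tb * 1000 * usd_per_gb
    print(f"{name}: {count * size_tb} TB raw for roughly ${cost:.0f}")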
 
I'm on the fence about whether it's worth it. It may be better to just clone/copy the older, smaller hard drives onto newer, larger-capacity hard drives.

Also think of the power consumption. Running 16 or more hard drives 24/7 adds up.

I think a "high" drive count would be desirable if you want to run raidz3 or so; then 8 drives is a good array size. If you want to run the drives with no automatic redundancy, I would also think that a smaller number of bigger drives is better.
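
To put a rough figure on the power point (a sketch; the ~6 W idle per 3.5" drive and the $0.15/kWh rate are assumptions, adjust for your drives and local rates):

# Rough annual cost of keeping spinning drives powered 24/7.
WATTS_PER_DRIVE = 6      # assumed idle power per 3.5" HDD
USD_PER_KWH = 0.15       # assumed electricity rate
HOURS_PER_YEAR = 24 * 365

for drives in (4, 8, 16, 20):
    kwh = drives * WATTS_PER_DRIVE * HOURS_PER_YEAR / 1000
    print(f"{drives} drives: ~{kwh:.0f} kWh/year, ~${kwh * USD_PER_KWH:.0f}/year")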
 
When talking about 20 drives, even regular HDDs, one could have been talking about 3800-4000 MB/s type of speeds, no?

Not in practice I think, unless you use them independently and hammer them all at the same time. Although, I did not try 20 drives in RAID0, ever.
 
Not in practice I think, unless you use them independently and hammer them all at the same time. Although, I did not try 20 drives in RAID0, ever.
I was thinking more of a large ZFS pool with a good enough CPU; even at 20% of the theoretical max, with PCIe 2.0 x1 being about 500MB/s, 10 drives in a 2x striped 5-drive raidz setup can go over 1000MB/s in reads:
https://calomel.org/zfs_raid_speed_capacity.html

It's common now for a single HDD to be capable of over 250MB/s, and 20 drives is a lot.
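
Back-of-envelope version of why a x1 link chokes there (a sketch; the ~200MB/s sequential per-HDD figure and the raidz1 layout are assumptions):

# Sequential read potential of 2 striped 5-drive raidz1 vdevs vs. a PCIe 2.0 x1 link.
# raidz1 reads scale roughly with the data drives (4 per 5-drive vdev).
MBPS_PER_HDD = 200                      # assumed sequential throughput per HDD
vdevs, drives_per_vdev, parity = 2, 5, 1

pool_read = vdevs * (drives_per_vdev - parity) * MBPS_PER_HDD
link_ceiling = 1 * 500                  # PCIe 2.0 x1

print(f"Pool sequential read potential: ~{pool_read} MB/s")   # ~1600 MB/s
print(f"PCIe 2.0 x1 link ceiling: {link_ceiling} MB/s")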

The OP just wants a lot of drives and not high bandwidth at the moment, I think, so it's a moot point anyway.
 
Unless you're REALLY constrained for PCIe lanes, I would not waste money on that SATA card. Going with ASMedia and especially JMicron chipsets on that thing is a recipe for pain.

Second-hand LSI SAS HBAs can be had for less, even the -16i (16 internal lanes across 4 ports) variety. Attach an SFF-8087 (SAS2/6Gb) or SFF-8643 (SAS3/12Gb) to quad-SFF-8482 breakout cable to each port (yes, they'll fit on SATA drives and you then only have to worry about one combined data/power plug), and there's your 16 drives, no expanders necessary - but unlike SATA, you can use expanders if needed to go beyond 16 drives.

Just make sure you've got at least a x4 slot to not bottleneck the card too much; they're usually x8, but that tends to get covered up by stupid thick GPU heatsinks nowadays. Also, put a fan on the HBA's heatsink - they're designed with server airflow in mind, not desktop cases without a constant static pressure wind tunnel running through.

This isn't a matter of sheer bandwidth, but stability. These are enterprise-grade controllers with rock-solid driver support and the capability to use SAS disks if you ever wanted to, and if you ever intend to use ZFS, you're better off with an LSI HBA with IT mode firmware than anything else.
 