SATA port HDD distribution for maximum performance

SirMaster (2[H]4U) · Joined Nov 8, 2010 · Messages 2,122
Hey guys.

So I finished migrating to my new ZFS machine a few days ago, and I was thinking about where to plug in my disks to maximize my SATA performance.

So I am using this board:
http://www.supermicro.com/products/motherboard/Xeon/C220/X10SL7-F.cfm

Which has a total of 14 SATA ports.

2x SATA (6Gbps)
4x SATA (3Gbps)
8x SAS2 (6Gbps) via LSI 2308 onboard.
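To put rough numbers on those ports: here's a quick sketch comparing each controller's link bandwidth against what the spinners could demand. The ~150 MB/s per-disk sequential rate and the per-port usable figures (after 8b/10b encoding) are ballpark assumptions on my part, not measurements.

```python
# Rough per-controller bandwidth budget for the X10SL7-F ports, versus what
# spinners can push. ~150 MB/s per disk is an assumed outer-track rate.

SPINNER_MBPS = 150

controllers = {
    # name: (ports, approx usable MB/s per port after 8b/10b encoding)
    "Intel SATA 6Gbps": (2, 600),
    "Intel SATA 3Gbps": (4, 300),
    "LSI 2308 SAS2 6Gbps": (8, 600),
}

for name, (ports, per_port) in controllers.items():
    demand = ports * SPINNER_MBPS   # all ports streaming at once
    supply = ports * per_port       # aggregate link bandwidth
    print(f"{name}: ~{demand} MB/s demanded vs {supply} MB/s of link")
```

Even fully loaded, no single port here would choke a spinner; the question is really the shared uplink behind the ports.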

I also have 2 PCIe 1x 2-port SATA cards just sitting in a drawer, and this motherboard does have 2 PCIe slots I can use, as I don't have any other use for them.

I have 14 disks in total to connect, and they consist of:

300GB 10K drive for OS
1.5TB 7200RPM for misc/temp stuff

6x2TB 5400RPM in a RAIDZ2 vdev
6x3TB 7200RPM in a RAIDZ2 vdev

Both vdevs are in the same pool.
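As a quick sanity check on the pool above: RAIDZ2 stores data on (n - 2) disks per vdev, so the usable space works out like this (before ZFS metadata and slop-space overhead, which I'm ignoring here).

```python
# Usable capacity of a pool of two 6-disk RAIDZ2 vdevs, ignoring
# ZFS metadata/slop overhead: each vdev keeps (n - 2) data disks.

def raidz2_usable_tb(disks, size_tb):
    return (disks - 2) * size_tb

pool_tb = raidz2_usable_tb(6, 2) + raidz2_usable_tb(6, 3)
print(pool_tb)  # 8 + 12 = 20 TB of raw usable space
```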

I currently have all 14 ports on the motherboard used, and everything is mostly plugged in randomly. I'm looking for suggestions on how to spread out the data load to maximize performance for things like scrubbing, resilvering, and concurrent users.

I really don't know much about the maximum interface bandwidth of these shared SATA controllers.

I was thinking to do this:

From each vdev, put 2 of the disks on the onboard SATA, and 3 of the disks on the onboard SAS, and 1 disk on each of the PCIe 1x cards.

And then connecting my OS drive and temp drive to the other 2 onboard SATA ports.

This would use all 6 onboard SATA ports (though the OS and temp disks are not used as much really)

And would use 6 of the 8 ports on the onboard SAS.

And would have 1 drive each on PCIe 1x cards.
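Written out, that proposed split looks like this (the disk labels are my own placeholders, not real device names, so the per-controller counts are easy to check):

```python
# The split described above: from each vdev, 2 disks on onboard SATA,
# 3 on the onboard SAS, 1 on each PCIe 1x card, plus OS/temp on the
# remaining Intel ports. Labels are hypothetical.

layout = {
    "intel_sata": ["os_300g", "temp_1.5t",
                   "2t_1", "2t_2", "3t_1", "3t_2"],  # all 6 onboard SATA
    "lsi_2308":   ["2t_3", "2t_4", "2t_5",
                   "3t_3", "3t_4", "3t_5"],          # 6 of 8 SAS2 ports
    "pcie_card_a": ["2t_6"],
    "pcie_card_b": ["3t_6"],
}

assert sum(len(disks) for disks in layout.values()) == 14  # all disks placed
for ctrl, disks in layout.items():
    print(f"{ctrl}: {len(disks)} disk(s)")
```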


What do you guys suggest? I'm probably overthinking this, but this is [H]ardforum, so why not? :p
 
Joined Oct 30, 2013 · Messages 55
I would stay away from random PCIe > SATA boards.

Also, realize that in normal usage your array will be slowed to the speed of the slowest device. So you may want to avoid splitting vdevs across multiple controllers, because then you will have effectively made every vdev run at the speed of the slowest controller.
 

Aesma ([H]ard|Gawd) · Joined Mar 24, 2010 · Messages 1,850
Yeah, I wouldn't complicate things. Intel SATA2 and LSI are plenty fast for spinners, and I would only put single non-OS drives on a random PCIe card.
 

plugwash ([H]ard|Gawd) · Joined Sep 17, 2010 · Messages 1,540
Honestly I don't think it's going to make much difference.

AIUI, with hard drives in servers the limiting factor is generally random access performance. Even if each individual user's requests are sequential, when multiple users pile up the access pattern becomes effectively random, and in random access the limiting factor is pretty much always mechanical.

The LSI 2308 controller is apparently a PCIe 3.0 x8 device connected directly to the processor, so in theory at least it should be able to provide full bandwidth on all ports at once.

The Intel PCH has more limited upstream bandwidth, and its upstream bandwidth is shared with the network connections and one of the PCIe slots, but I still think you will really struggle to saturate it with spinners.
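To put a rough number on that: the C220-era PCH uplink is DMI 2.0, which behaves like x4 PCIe Gen2, roughly 500 MB/s usable per lane after 8b/10b encoding. A sketch, again assuming ~150 MB/s sequential per spinner:

```python
# Back-of-envelope: can six drives on the PCH saturate its DMI uplink?
# DMI 2.0 ~= x4 PCIe Gen2: ~500 MB/s usable per lane each direction.
# The ~150 MB/s per-disk figure is an assumed sequential rate.

DMI_MBPS = 4 * 500        # x4 lanes * ~500 MB/s usable per Gen2 lane
pch_disks = 6             # OS + temp + 4 vdev disks on the Intel ports
demand = pch_disks * 150  # worst case: all six streaming at once

print(f"~{demand} MB/s demanded vs ~{DMI_MBPS} MB/s of DMI uplink")
```

Even with all six Intel ports streaming flat out, demand comes to well under half the uplink, and that's before the network traffic sharing it is counted.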
 

SirMaster (2[H]4U) · Joined Nov 8, 2010 · Messages 2,122
Thanks guys. I will not use the cheap PCIe 1x SATA controllers then.

I know spinners are "slow". But I was mainly just concerned about overall interface limits to maximize scrub and resilver performance.
 