Need more SATA 6Gb/s ports

jyi786

My motherboard only has 2 native SATA 6Gb/s ports. I need more.

I did some research, but am still lost. I don't need RAID, but I do need the speed of a native Intel chipset SATA 6Gb/s port. Yes, I care about performance; no cheap solutions allowed. My motherboard actually has an additional 2 "SATA 6Gb/s" (note the quotes) ports...that are Marvell controller based. For anyone who doesn't know, Marvell SATA 6Gb/s controllers are pure garbage, especially the version of the controller I have on my motherboard. Unfortunately, I didn't know this at the time (back in 2012).

What add on card should I get? I guess LSI or Intel, right?
 
I believe most of them are LSI rebrands (or a very similar design) so it does not make much difference.
 
Are you planning to connect SSDs and require TRIM? If so, carefully check support with LSI first -- some of my Samsungs won't TRIM when connected to LSI controllers but work fine on the Intel PCHs.
 
Are you planning to connect SSDs and require TRIM? If so, carefully check support with LSI first -- some of my Samsungs won't TRIM when connected to LSI controllers but work fine on the Intel PCHs.

Yep, any solution I get must have TRIM.

Any reason why yours doesn't work? Is it a hardware or software issue?
 
Oh man, this really sucks. After research:

http://hardforum.com/showthread.php?t=1676117

Looks like no matter what, I will not be able to achieve what I want. Any solution will NOT have TRIM support, because the drives will be presented to the OS as SCSI devices rather than ATA devices. So even when a TRIM command is sent toward the drive, the middle man (the controller) will not know what to do with it.

So unless I can find a controller that has native TRIM support, looks like I am SOL. :(
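
For anyone who wants to sanity-check how a drive is being presented under Linux, something like this works (a sketch; /dev/sda is a placeholder for whatever your SSD enumerates as):

    # Non-zero DISC-GRAN / DISC-MAX means the kernel will issue discards (TRIM)
    lsblk --discard /dev/sda

    # ATA identify data; this only works if the controller passes ATA commands through
    hdparm -I /dev/sda | grep -i trim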
 
After doing some quick Googling, it seems that LSI's SAS HBAs do support TRIM as long as the SSDs aren't part of a RAID, so that could be an option.

http://docs.oo-software.com/en/oodefrag17/trim-incompatibility
http://thread.gmane.org/gmane.linux.scsi/88189/focus=88248
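
If you have sg3_utils installed, you can also ask the drive (as the HBA presents it) whether UNMAP is advertised at the SCSI level. A sketch, with /dev/sdX as a placeholder:

    # Logical Block Provisioning VPD page: "Unmap command supported (LBPU): 1" is the line to look for
    sg_vpd --page=lbpv /dev/sdX

    # Block Limits VPD page: check "Maximum unmap LBA count"
    sg_vpd --page=bl /dev/sdX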

Yeah, I saw all this. Support apparently is spotty at best with no real answers. You have to have a very specific environment for TRIM to work with any LSI controller, be it HBA or RAID. :(

For all the trouble, it looks like I'll have to upgrade my platform. :(
 
After doing some quick Googling, it seems that LSI's SAS HBAs do support TRIM as long as the SSDs aren't part of a RAID, so that could be an option.

http://docs.oo-software.com/en/oodefrag17/trim-incompatibility
http://thread.gmane.org/gmane.linux.scsi/88189/focus=88294

For the LSI to pass TRIM/UNMAP commands through to the SSD, the SSD must support deterministic reads after TRIM (DRAT). hdparm -I should show: * Deterministic read ZEROs after TRIM
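
For example (a sketch; /dev/sdX is whatever the SSD enumerates as, and the "limit N blocks" value varies by drive):

    hdparm -I /dev/sdX | grep -i trim
    #    *    Data Set Management TRIM supported (limit 8 blocks)
    #    *    Deterministic read ZEROs after TRIM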

That being said, the support is still total crap. Apparently in LSI firmware 14 and 15 the discard granularity is misread, which requires a firmware update to fix. Having done that, it still doesn't work right, because the TRIM sector ranges are not sent to the drive properly. fstrim errors out with "FITRIM ioctl failed: Input/output error". The kernel shows an error about accessing an LBA which is within the valid range but which the controller/drive report as out of range. This is with firmware 19; I'm not brave enough to try 20 due to reported data corruption issues :)
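
To reproduce that, you can run fstrim by hand and watch the kernel log. A sketch, with /mnt/ssd as an example mount point:

    # Trim all free space on a mounted filesystem, verbosely
    fstrim -v /mnt/ssd

    # On failure, the kernel log shows the offending LBA range
    dmesg | tail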

TL;DR don't try and TRIM/UNMAP on LSI controllers if you value your data... :mad:
 
TL;DR don't try and TRIM/UNMAP on LSI controllers if you value your data... :mad:

Yep, way more trouble than it's worth. I do indeed value my data.

That being said, I've already begun to plot out my upgrade path. This is going to be a big one, as I'll need new monitors too.
 
Yep, way more trouble than it's worth. I do indeed value my data.

That being said, I've already begun to plot out my upgrade path. This is going to be a big one, as I'll need new monitors too.

I'll be in the same boat soon enough. Probably going to go with a motherboard that has LOM and at least 10 SATA 6Gb/s (Intel chipset) ports.
 
I looked at the Asus ROG flagship board. It has 8x SATA, but does something odd with the config: 4 shared with SATA-E? What does that mean?

https://www.asus.com/us/Motherboards/MAXIMUS-VIII-EXTREME/

It means you have 2 SATA Express connectors, each taking up 2 SATA ports. If you are not planning on using the SATA Express connectors, you will have all 8 SATA ports.

The connector looks like 2 SATA ports plus a 3rd, smaller port; a SATA Express device would plug into all 3.

Nobody uses this, it never really took off, so ignore it.
 
Not sure what your exact use case is, but you may want to look at the ASRock Rack workstation motherboards and the E3 v5 series Xeons. They have lots of PCIe lanes/slots and SATA ports, and they support DDR4.
 
I wouldn't worry about running a native 6Gb/s SATA drive at only 3Gb/s. You probably won't notice, unless you're constantly copying files between drives or doing something write-intensive like video editing.
 
I wouldn't worry about running a native 6Gb/s SATA drive at only 3Gb/s. You probably won't notice, unless you're constantly copying files between drives or doing something write-intensive like video editing.

I noticed it immediately.

I do almost everything with my workstation: video editing, DAW work (I am a musician), gaming, VMs, everything. I noticed when copying files to a new 1TB 850 Pro that it wasn't as fast as my 830 Pro and my 512GB 850 Pro, but then I remembered that I had those two hooked up to the gray ports on my board, which are the native SATA 6Gb/s ports, while the rest were hooked to the blue SATA 3Gb/s ports.

So I switched them around, and immediately got back the speed I was expecting. Since I use the 1TB more than the 512GB one, I just left the 512GB on the slower port for now.
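
For anyone wanting to confirm which link speed a drive actually negotiated instead of going by port color: on Linux the kernel logs it at boot, and smartctl (from smartmontools) reports it too. A sketch, with /dev/sda as a placeholder:

    # 6.0 Gbps = SATA 6Gb/s port, 3.0 Gbps = SATA 3Gb/s port
    dmesg | grep -i 'SATA link up'

    # Shows the drive's maximum speed and the currently negotiated one
    smartctl -i /dev/sda | grep -i sata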
 
Not sure what your exact use case is but you may want to look at the asrock rack workstation motherboards and E3v5 series xeons. They have lots of pcie lanes/ports and data ports and support ddr4.
Early Xeon E3s had 20 PCIe lanes. Sandy Bridge (E3 v1) was PCIe 2.0, while Ivy Bridge (E3 v2) moved up to 3.0 with 2x the bandwidth per lane. E3 v3 and newer have only 16 PCIe 3.0 lanes, so the E3 v2 was actually better in this one area.

Of course, an LSI SAS2008 doesn't need more than 8 lanes of PCIe 2.0. You're looking at just about 4GB/s, which effectively matches 8x 6Gb/s SSDs.
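
Rough numbers behind that (PCIe 2.0 carries ~500MB/s per lane each way after 8b/10b encoding; SATA 6Gb/s tops out around 550MB/s usable per port):

    PCIe 2.0 x8:    8 lanes x ~500 MB/s ≈ 4.0 GB/s
    SATA 6Gb/s x8:  8 ports x ~550 MB/s ≈ 4.4 GB/s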
 
Right, but the C236 PCH has 20 PCIe 3.0 lanes instead of the anemic 8 PCIe 2.0 lanes in prior chipset series. This may be important for his use case, since he mentioned running numerous SSDs and not being thrilled about upgrading (future-proofing).

LSI was ruled out earlier in this thread for shoddy TRIM/UNMAP support.

Personally, I'm impatiently waiting for the E5 v5s to come out for my next upgrade. Massive PCIe lane counts and memory bandwidth :)
 
Right, but the C236 PCH has 20 PCIe 3.0 lanes instead of the anemic 8 PCIe 2.0 lanes in prior chipset series. This may be important for his use case, since he mentioned running numerous SSDs and not being thrilled about upgrading (future-proofing).

LSI was ruled out earlier in this thread for shoddy TRIM/UNMAP support.

Personally, I'm impatiently waiting for the E5 v5s to come out for my next upgrade. Massive PCIe lane counts and memory bandwidth :)

How do the lanes compare to "consumer level" chipsets like the new Z170? Even though personally I wouldn't mind going for a Xeon, I do quite like the ability to OC (my rig is a 2600K OC'd to 4.4GHz). ;)
 
Same number on the Z170, so it might be a good direction if you want to OC. It only has 6 native SATA 6Gb/s ports, though. The next big bump in lanes comes when you get into the -E, -EX, or E5 series of processors, but only the Haswell-based v3 parts are available so far. The Broadwell-based Xeon D-1500s are showing up, but they're a different beast (SoC).
 