Which PCIe -> 8 x SATA HBA card for Ubuntu 9.10 mdadm software RAID?

davros123

Hi folks,
I have been working on building a NAS for the last month or so... perhaps longer?

It seems that under my chosen OS (Ubuntu 9.10) the driver for the Supermicro AOC-USASLP-L8i has some conflict with smartmontools which makes the card unstable (it can drop disks on repeated SMART requests and produces SMART sense errors in dmesg).
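
For reference, repeated SMART polls are all it takes to trigger it; a loop along these lines (the device name is just an example) while watching dmesg in another terminal shows the problem:

    # hammer a disk on the card with SMART requests (sdb is an example)
    while true; do smartctl -a /dev/sdb > /dev/null; sleep 5; done

    # in another terminal, watch for the sense errors
    dmesg | tail -n 20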

I am trying to find a workaround, but in case I cannot, what would you suggest I replace these cards with?

Requirements:
1) Proven to work in Ubuntu 9.10 with mdadm soft RAID 6 (see the sketch after this list)
2) Proven to work with WD 1.5TB EADS GP drives (with TLER on and WDIDLE3 disabled)
3) 8 ports minimum
4) Full-speed SATA II 3Gb/s
5) PCIe (naturally, given 4)
6) Reasonably priced, say <$500 USD... after all, it's just giving me 8 SATA ports!
7) Non-RAID - I want to use software RAID (yes, I know the pros and cons; I like mdadm)
8) I am not keen on LSI for obvious reasons, but am willing to be convinced
9) For a home NAS/media server... so let's not go crazy
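
To be concrete about 1), this is the sort of mdadm RAID 6 I'm building (a sketch only; device names are placeholders for whatever the HBA exposes):

    # create an 8-disk RAID 6 (sdb..sdi are example device names)
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

    # watch the initial sync
    cat /proc/mdstat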


I am hoping someone here is successfully running a similar system.

Your advice, as always, is valued and appreciated.
 
The USASLP-L8i should work fine; it's LSI-based. I think you're thinking of the USASLP-MV8.

Why don't you like LSI?
 
Thanks for the reply, keenan.

I am most definitely referring to the AOC-USASLP-L8i - I am holding it in my hand now and I know it's LSI-based.

I have tested the cards (I have 2) on 2 PCs with different mobos/configs and operating systems (including Ubuntu desktop and server editions, and CentOS 5.4). I do not get the errors when using the onboard SATA, but I do when using the Supermicro card. They are most obvious when resyncing the array.
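
If anyone wants to see that part for themselves, forcing a verification pass on an assembled array is enough to surface it here (md0 is an example name):

    # kick off a check/resync pass and watch the errors roll in
    echo check > /sys/block/md0/md/sync_action
    cat /proc/mdstat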

See this thread for the issues a fellow Hard member had... they are also well documented on the web:
Potential issue with AOC-USAS-L8i in Ubuntu?

and here

I have also verified that the same sense errors appear under CentOS 5.4.

I saw a bug report on this but I cannot locate it at the moment... been doing too much reading.
 
Seems the bug report is here: http://bugzilla.kernel.org/show_bug.cgi?id=14831

Thanks for the heads up on this one. I've used plenty of LSI cards on Linux without ever having issues, and I was actually planning to buy one of these in the next few months. Hopefully the Marvell driver matures or this gets fixed first. Not using SMART or other passthrough commands apparently resolves the issue, but obviously that isn't an ideal solution.
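
If you did have to live with that workaround for a while, I'd guess just keeping smartd away from the disks behind the card would do it, e.g. in /etc/smartd.conf (device names are examples):

    # monitor only the onboard-SATA disks; leave the HBA's disks alone
    /dev/sda -a
    /dev/sdb -a
    # make sure the default DEVICESCAN line stays commented out
    # DEVICESCAN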

I'm afraid I'm not aware of any options other than the Marvell-based and LSI-based ones, though. You may have to bite the bullet on one of those two (OpenSolaris apparently works well with the 1068E cards, so you could jump on the ZFS bandwagon), or go with a RAID card and a bunch of single-disk JBOD or RAID 0 containers.
 
Yep, might need to go the RAID-0-per-drive route... I do not understand why it's so hard to get a decent HBA?
 
I've been thinking about the LSI 9211-8i for use with Linux/mdadm, but I have not yet found a single report of anyone having used it successfully. From the reports I have seen, the mpt2sas module shipping with most kernels does not work well with the 9211. I did see that it got a patch a few days ago, but I have yet to see anyone report success with the 9211.

http://groups.google.com/group/linu...6577f0200e?lnk=gst&q=mpt2sas#b78d2b6577f0200e
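
If anyone wants to check what they're actually running before buying, the in-kernel driver version is easy to query (output will obviously vary by kernel):

    # version of the mpt2sas module the running kernel ships
    modinfo mpt2sas | grep -i version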
 
"Yep, might need to go the RAID-0-per-drive route... I do not understand why it's so hard to get a decent HBA?"

I think it's because so many motherboard manufacturers include a dozen or two SATA ports onboard, so there's much less of a market for it. Plus there's considerably less profit to be made than on, say, a RAID card.
 
I've used a 3ware 9650-series card (the 24-port version) with individual disks in a RAID 5 (mdadm) without issue. You can get SMART info, but only for the first 16 drives.
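
For anyone trying the same, smartctl can talk to disks behind a 3ware controller directly; roughly like this (the port number and device node are examples):

    # SMART data for the disk on 3ware port 0
    smartctl -a -d 3ware,0 /dev/twa0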

I then switched to an onboard LSI 1068E. Performance is great and SMART info works fine; the only potential issue is some weirdness I've seen when hotplugging new drives. I didn't really investigate further as I don't care much.

BTW, the problem I seemed to have: I'd have 4 drives plugged in, hotplug a 5th, and it looked like one of the first 4 got disconnected for a moment... dmesg read as if I had unplugged disk 3 and then added disks 3 and 5. I've seen this happen a few times when hotplugging. It's never been a problem because I don't access the drives until I've plugged them all in; then I assemble the array and it works. I've never hotplugged drives while my array was assembled.
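
For what it's worth, that last step is just the usual assemble once everything is connected:

    # let mdadm find and assemble the array from all connected members
    mdadm --assemble --scan
    cat /proc/mdstat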

Recently I've moved to a Norco case and added a HighPoint something-or-other card. With the latest drivers from HighPoint the card works, except that there's no SMART info and expanders cause the system to hang. I currently just have 6 disks plugged in directly, with no issues for the past few weeks. If a drive already has a partition table on it, the HighPoint card will add it in legacy mode and you can easily use it in mdadm.
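
So presumably for a brand-new disk you'd want to write a partition table first on a controller that sees it raw, something like (sdX is a placeholder):

    # label a blank disk so the HighPoint card will pass it through in legacy mode
    parted -s /dev/sdX mklabel msdos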
 
Thanks guys - keep the info coming. I am doing a load of investigation in the background as a result of your suggestions... so the guidance is appreciated.

Heck, it never occurred to me that I could get a mobo with 12+ SATA ports!
 
I've been using two RocketRAID 2680s in Debian Lenny for a few months just fine. One thing is that SMART info isn't accessible to the OS; you have to use HighPoint's management utility to view it, unless that's been added in a newer driver revision.

I am using 10 WD 1.5TB EADS drives with TLER.


EDIT: Forgot to mention that I'm using mdadm RAID 6.
 
Excellent...thanks for that post.

I've just grabbed a Perc 5i and will try it in a few days when I get some cables for it. I'll run it as a series of single-disk RAID 0 arrays and use mdadm to handle the RAID 6.
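
The plan is just the usual mdadm RAID 6 over whatever the Perc exports, then keeping an eye on it with something like (md0 is an example name):

    # sanity-check the array and watch the build
    mdadm --detail /dev/md0
    watch -n 5 cat /proc/mdstat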

Will let you know how that goes.
 
I also have a HP2680 (under 9.10) and it works okay as a dumb card. No JBOD BIOS for it, sadly. I also get kernel panics when trying to use smartmontools or hdparm, which sucks.
 