AOC-USAS2-L8E

I haven't seen too many threads or mentions of this specific card. It is a Supermicro UIO card, but the SAS 2.0 / PCI-E 2.0 version. I went with this model (vs. the L8i) because I will be using it with OpenSolaris, and I've heard the L8E uses the mpt_sas driver (plain HBA, no RAID) instead of the buggy mr_sas (for the cards that support RAID0, RAID1, RAID10, etc...).

I don't think I have seen anyone actually post any reviews or benchmarks of these new SAS2 UIO cards, especially the AOC-USAS2-L8E.

Anyway, I gotta say I am pretty impressed with this sucker. I am still waiting for my HP SAS expander, but I hooked the card up in the system I plan on using it in, which is a Core 2 Duo E6600 (2.4 GHz) that only supports PCI-E 1.0. I used some spacers to fit the card correctly in the case.

Here is what lspci shows:

Code:
02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)

For now my setup was 8x 2TB Hitachi drives using Linux software RAID (mdadm).
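(The post doesn't show the array creation; as a minimal sketch, a RAID0 md array across those eight disks would be built along these lines - device names taken from the dmesg output below, chunk size left at the default:)

Code:
# assumed invocation - not the exact command used in this post
mdadm --create /dev/md127 --level=0 --raid-devices=8 /dev/sd[a-h]
cat /proc/mdstat   # confirm the array assembled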

Here is the driver loading (dmesg):

Code:
[    3.863130] mpt2sas version 03.100.03.00 loaded
[    3.863206] scsi0 : Fusion MPT SAS Host
[    3.863711] ACPI: PCI Interrupt Link [LNEA] enabled at IRQ 19
[    3.863715]   alloc irq_desc for 19 on node -1
[    3.863717]   alloc kstat_irqs on node -1
[    3.863723] mpt2sas 0000:02:00.0: PCI INT A -> Link[LNEA] -> GSI 19 (level, low) -> IRQ 19
[    3.863731] mpt2sas 0000:02:00.0: setting latency timer to 64
[    3.863735] mpt2sas0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (8191576 kB)
[    3.863802]   alloc irq_desc for 28 on node -1
[    3.863803]   alloc kstat_irqs on node -1
[    3.863807] mpt2sas 0000:02:00.0: irq 28 for MSI/MSI-X
[    3.863823] mpt2sas0: PCI-MSI-X enabled: IRQ 28
[    3.863825] mpt2sas0: iomem(0xfeafc000), mapped(0xffffc90011d70000), size(16384)
[    3.863827] mpt2sas0: ioport(0xd000), size(256)
[    3.936006] mpt2sas0: sending message unit reset !!
[    3.938005] mpt2sas0: message unit reset: SUCCESS
[    3.974436] mpt2sas0: Allocated physical memory: size(1028 kB)
[    3.974438] mpt2sas0: Current Controller Queue Depth(435), Max Controller Queue Depth(4287)
[    3.974439] mpt2sas0: Scatter Gather Elements per IO(128)
[    4.032369] mpt2sas0: LSISAS2008: FWVersion(05.00.00.00), ChipRevision(0x02), BiosVersion(07.05.01.00)
[    4.032371] mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    4.032430] mpt2sas0: sending port enable !!
[    4.033114] mpt2sas0: port enable: SUCCESS
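(Side note: the driver version printed above can also be read straight off the module without trawling dmesg - standard Linux tooling, assuming the module is named mpt2sas as shown:)

Code:
modinfo mpt2sas | grep -i version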

And detecting drives:

Code:
[    4.461219] scsi 0:0:0:0: Direct-Access     ATA      Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[    4.461229] scsi 0:0:0:0: SATA: handle(0x0009), sas_addr(0x4433221100000000), device_name(0x0000000000000000)
[    4.461231] scsi 0:0:0:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(0)
[    4.461313] scsi 0:0:0:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    4.461317] scsi 0:0:0:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[    4.461602] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    4.463184] sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    4.663051] ata2: SATA link down (SStatus 0 SControl 300)
[    4.780951] sd 0:0:0:0: [sda] Write Protect is off
[    4.780953] sd 0:0:0:0: [sda] Mode Sense: 73 00 00 08
[    4.783492] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    4.960666] scsi 0:0:1:0: Direct-Access     ATA      Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[    4.960671] scsi 0:0:1:0: SATA: handle(0x000a), sas_addr(0x4433221101000000), device_name(0x0000000000000000)
[    4.960674] scsi 0:0:1:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(1)
[    4.960754] scsi 0:0:1:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    4.960757] scsi 0:0:1:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[    4.961028] sd 0:0:1:0: Attached scsi generic sg1 type 0
[    4.962605] sd 0:0:1:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    4.977075] ata4: SATA link down (SStatus 0 SControl 300)
[    5.058463]  sda: unknown partition table
[    5.278351] sd 0:0:1:0: [sdb] Write Protect is off
[    5.278353] sd 0:0:1:0: [sdb] Mode Sense: 73 00 00 08
[    5.280896] sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    5.348184] ieee1394: Host added: ID:BUS[0-00:1023]  GUID[00023c041104188c]
[    5.358423] sd 0:0:0:0: [sda] Attached SCSI disk
[    5.408179] ieee1394: Host added: ID:BUS[1-00:1023]  GUID[000010dc0106cb79]
[    5.460934] scsi 0:0:2:0: Direct-Access     ATA      Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[    5.460939] scsi 0:0:2:0: SATA: handle(0x000b), sas_addr(0x4433221102000000), device_name(0x0000000000000000)
[    5.460942] scsi 0:0:2:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(2)
[    5.461023] scsi 0:0:2:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    5.461026] scsi 0:0:2:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[    5.461298] sd 0:0:2:0: Attached scsi generic sg2 type 0
[    5.462855] sd 0:0:2:0: [sdc] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    5.555863]  sdb: unknown partition table
[    5.775175] sd 0:0:2:0: [sdc] Write Protect is off
[    5.775177] sd 0:0:2:0: [sdc] Mode Sense: 73 00 00 08
[    5.777723] sd 0:0:2:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    5.855823] sd 0:0:1:0: [sdb] Attached SCSI disk
[    5.960564] scsi 0:0:3:0: Direct-Access     ATA      Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[    5.960569] scsi 0:0:3:0: SATA: handle(0x000c), sas_addr(0x4433221103000000), device_name(0x0000000000000000)
[    5.960572] scsi 0:0:3:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(3)
[    5.960652] scsi 0:0:3:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    5.960655] scsi 0:0:3:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[    5.960923] sd 0:0:3:0: Attached scsi generic sg3 type 0
[    5.962487] sd 0:0:3:0: [sdd] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    6.052677]  sdc: unknown partition table
[    6.275578] sd 0:0:3:0: [sdd] Write Protect is off
[    6.275580] sd 0:0:3:0: [sdd] Mode Sense: 73 00 00 08
[    6.278136] sd 0:0:3:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    6.352647] sd 0:0:2:0: [sdc] Attached SCSI disk
[    6.461019] scsi 0:0:4:0: Direct-Access     ATA      Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[    6.461024] scsi 0:0:4:0: SATA: handle(0x000e), sas_addr(0x4433221104000000), device_name(0x0000000000000000)
[    6.461027] scsi 0:0:4:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(4)
[    6.461107] scsi 0:0:4:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    6.461110] scsi 0:0:4:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[    6.461385] sd 0:0:4:0: Attached scsi generic sg4 type 0
[    6.462942] sd 0:0:4:0: [sde] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    6.553093]  sdd: unknown partition table
[    6.777994] sd 0:0:4:0: [sde] Write Protect is off
[    6.777996] sd 0:0:4:0: [sde] Mode Sense: 73 00 00 08
[    6.780538] sd 0:0:4:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    6.853051] sd 0:0:3:0: [sdd] Attached SCSI disk
[    6.960460] scsi 0:0:5:0: Direct-Access     ATA      Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[    6.960465] scsi 0:0:5:0: SATA: handle(0x000d), sas_addr(0x4433221105000000), device_name(0x0000000000000000)
[    6.960468] scsi 0:0:5:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(5)
[    6.960548] scsi 0:0:5:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    6.960551] scsi 0:0:5:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[    6.960818] sd 0:0:5:0: Attached scsi generic sg5 type 0
[    6.962384] sd 0:0:5:0: [sdf] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    7.055503]  sde: unknown partition table
[    7.278799] sd 0:0:5:0: [sdf] Write Protect is off
[    7.278802] sd 0:0:5:0: [sdf] Mode Sense: 73 00 00 08
[    7.281345] sd 0:0:5:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    7.355472] sd 0:0:4:0: [sde] Attached SCSI disk
[    7.460671] scsi 0:0:6:0: Direct-Access     ATA      Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[    7.460676] scsi 0:0:6:0: SATA: handle(0x0010), sas_addr(0x4433221106000000), device_name(0x0000000000000000)
[    7.460679] scsi 0:0:6:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(6)
[    7.460759] scsi 0:0:6:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    7.460762] scsi 0:0:6:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[    7.461040] sd 0:0:6:0: Attached scsi generic sg6 type 0
[    7.462595] sd 0:0:6:0: [sdg] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    7.556310]  sdf: unknown partition table
[    7.779789] sd 0:0:6:0: [sdg] Write Protect is off
[    7.779792] sd 0:0:6:0: [sdg] Mode Sense: 73 00 00 08
[    7.782346] sd 0:0:6:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    7.856267] sd 0:0:5:0: [sdf] Attached SCSI disk
[    7.960361] scsi 0:0:7:0: Direct-Access     ATA      Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[    7.960366] scsi 0:0:7:0: SATA: handle(0x000f), sas_addr(0x4433221107000000), device_name(0x0000000000000000)
[    7.960368] scsi 0:0:7:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(7)
[    7.960448] scsi 0:0:7:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    7.960451] scsi 0:0:7:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[    7.960719] sd 0:0:7:0: Attached scsi generic sg7 type 0
[    7.962280] sd 0:0:7:0: [sdh] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    8.057306]  sdg: unknown partition table
[    8.272079] sd 0:0:7:0: [sdh] Write Protect is off
[    8.272081] sd 0:0:7:0: [sdh] Mode Sense: 73 00 00 08
[    8.274638] sd 0:0:7:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    8.357265] sd 0:0:6:0: [sdg] Attached SCSI disk
[    8.549594]  sdh: unknown partition table
[    8.849557] sd 0:0:7:0: [sdh] Attached SCSI disk
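(A quicker way to enumerate what the HBA presented, without reading through dmesg - assumes the lsscsi utility is installed:)

Code:
lsscsi   # one line per device, e.g. [0:0:0:0] disk ATA Hitachi ... /dev/sda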

Here are some benchmarks I got in RAID0. Amazingly enough, I got higher read speeds than, and about equal write speeds to, my ARC-1280ML (granted, it's in RAID6):

hdparm:

Code:
/dev/md127:
 Timing buffered disk reads:  2824 MB in  3.00 seconds = 941.18 MB/sec

Very nice result... My Areca caps out at around 820-830 MB/sec. This could likely go further, as hdparm on a single disk showed 120 MB/sec, and 120 * 8 = 960, so this is scaling almost perfectly.
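(For reference, that buffered-read figure is what hdparm prints for a run like the following - the exact flags aren't shown in the post, so this is assumed:)

Code:
hdparm -t /dev/md127   # -t times sequential reads from the device itself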

Here is a 50 gigabyte read using dd with this command:

dd bs=1M count=50000 if=/dev/md127 of=/dev/null

result:

Code:
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 56.7052 s, 925 MB/s
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 56.6312 s, 926 MB/s

I ran each dd test twice to be sure. Again, very good.


Here is a 30 GB write test using:

dd bs=1M count=30000 if=/dev/zero of=/dev/md127

One thing to note is that dd sat around 90% CPU usage and sometimes hit 100% during the test, which could be limiting this a bit.
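(If the page cache is part of what's eating the CPU, a variant worth trying - not tested in this thread - is direct I/O, which bypasses the cache:)

Code:
dd bs=1M count=30000 if=/dev/zero of=/dev/md127 oflag=direct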

The results:

Code:
30000+0 records in
30000+0 records out
31457280000 bytes (31 GB) copied, 41.8887 s, 751 MB/s
30000+0 records in
30000+0 records out
31457280000 bytes (31 GB) copied, 42.9038 s, 733 MB/s

Random access reads....

1 thread:

Code:
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/md127 [31256231936 blocks, 16003190751232 bytes, 14904 GB, 15261832 MB, 16003 GiB, 16003190 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 76 seeks/second, 13.032 ms random access time (4970178045 < offsets < 16002929280578)

64 threads:

Code:
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/md127 [31256231936 blocks, 16003190751232 bytes, 14904 GB, 15261832 MB, 16003 GiB, 16003190 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 859 seeks/second, 1.163 ms random access time (52879863 < offsets < 16002049683371)
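(The seeker invocation isn't shown in the post; judging from its usage line it takes the device plus an optional thread count, so presumably something like:)

Code:
./seeker /dev/md127      # single-threaded random seeks
./seeker /dev/md127 64   # 64 concurrent seek threads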


Now for the raid6 tests, first up hdparm:

Code:
/dev/md6:
 Timing buffered disk reads:  2120 MB in  3.00 seconds = 706.64 MB/sec

This was right around what I was expecting after losing two disks' worth of sequential read bandwidth to parity.

Here are the big reads (this time using a 40 GB read, as I knew it would go slower):

command:

dd bs=1M count=40000 if=/dev/md6 of=/dev/null


Code:
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 59.6086 s, 704 MB/s
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 59.6698 s, 703 MB/s

Again right around what I was expecting and very impressive for software raid6. This is completely I/O limited.
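(One way to tell disk-bound from CPU-bound during a test like this is to watch per-device utilization in a second terminal - standard sysstat tooling:)

Code:
iostat -x 1   # %util pegged on the disks = I/O bound; idle disks + busy CPU = CPU bound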

Big write test. I gotta say I was really surprised by this. When I tested a PCI-X controller I only got around 110 MB/sec, and it wasn't CPU or I/O limited. Maybe it was the newer kernel, but this write test left my system only 2-5% idle, and I saw:

dd use about 60-70% CPU, md6_raid6 77-85%, and a flush process 25-35%, which was almost all the CPU (including %sys and other misc stuff) that this dual-core system could muster. The write speeds were much better than what I saw in the past:

command:
dd bs=1M count=30000 if=/dev/zero of=/dev/md6

Code:
30000+0 records in
30000+0 records out
31457280000 bytes (31 GB) copied, 92.8034 s, 339 MB/s
30000+0 records in
30000+0 records out
31457280000 bytes (31 GB) copied, 92.7274 s, 339 MB/s

This is very good, and I would say it is performing better in software RAID than an ARC-1220 and close to an ARC-1222.

Here are the random read/seek tests for raid6.

1 thread:

Code:
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/md6 [23442172416 blocks, 12002392276992 bytes, 11178 GB, 11446373 MB, 12002 GiB, 12002392 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 77 seeks/second, 12.848 ms random access time (7743477092 < offsets < 11998198790842)

64 threads:
Code:
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/md6 [23442172416 blocks, 12002392276992 bytes, 11178 GB, 11446373 MB, 12002 GiB, 12002392 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 851 seeks/second, 1.174 ms random access time (641617286 < offsets < 12001136411994)

All these results were impressive all around, I would say - very close to what I have seen from some hardware RAID controllers.

I do plan to test out 20 drives (Seagate 7200.11 AS 1TB) in the future, possibly on a faster CPU and hopefully on a PCI-E 2.0 motherboard, but I am not making any promises about the faster CPU or the PCI-E 2.0 slot (I will definitely test with 20 disks, though). I am very impressed with the speeds this card gave me, especially since I am not using it under the most optimal circumstances. This seems like a good card for anyone going the software RAID/replication route. I think it will give me excellent ZFS performance.
 
Yes, very nice card; too bad it doesn't work yet on FreeBSD, at least that's what I heard. It uses the LSI SAS2008 chipset, while the USAS-L8i uses an older 3Gbps LSI controller.

I suspect it will be supported by FreeBSD soon, though; and it's nice to hear it already works with OpenSolaris.

Nice speeds you got!
 
Does it suffer from any dropouts if you repeatedly scan the SMART parameters?
 
I didn't test that. Should I check SMART status as fast as I can, one query after another, while doing read (or write) operations? I don't know how the 'dropouts' would look on software RAID, but I can definitely test it for you.
 
The bug has been reported for SAS1068-based controllers. Hopefully it is not present in the newer controllers, but it would be good to test it.

Here is the bug report:

https://bugzilla.kernel.org/show_bug.cgi?id=14831

Comment #7 has a nice script attached to stress test writing and smartctl.
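(The script itself is attached to comment #7; as a rough sketch of the pattern it uses - hammer the disk with writes while polling SMART in a tight loop - with a hypothetical device name:)

Code:
#!/bin/sh
# hypothetical sketch; the real script is attached to the bug report
dd if=/dev/zero of=/dev/sdX bs=1M &      # sustained writes in the background
while true; do
    smartctl -a /dev/sdX > /dev/null     # repeated SMART queries
done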
 
Ok, I have to move a 45U rack to my new house, which will take a bit, so I might not be able to test it before I go to bed (I work nights, so I will likely be sleeping in 3-4 hours), but I will definitely test this for you by tomorrow.
 
Actually my dad ended up needing some time for stuff, and moving the rack is a two-person job, so I went ahead and did the testing (I only have 6 drives hooked up to the controller ATM). I started running the script on two drives at once and got a smartctl failed message once or twice on both, but with one drive alone the script just produced tons of output like:


ssssDDsssssDD

I let it go for 5+ minutes. A few times the machine became really unresponsive for a few seconds. Here is some top output:

Code:
top - 03:59:26 up 11 min,  0 users,  load average: 5.29, 3.08, 1.51
Tasks: 132 total,   5 running, 127 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.3%us,  6.9%sy,  0.0%ni, 54.4%id, 38.4%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8192160k total,  2802616k used,  5389544k free,   107024k buffers
Swap:        0k total,        0k used,        0k free,  2427544k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                      
 1593 root      20   0     0    0    0 R  101  0.0   0:28.43 fw_event0                                    
    1 root      20   0  3820  648  548 S    0  0.0   0:00.57 init                                         
    2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd

Also in dmesg I saw tons of:
Code:
[  735.136334] sd 0:0:1:0: [sdb] ASC=0x0 ASCQ=0x1d
[  741.126420] sd 0:0:0:0: [sda] Sense Key : 0x1 [current] [descriptor]
[  741.126424] Descriptor sense data with sense descriptors (in hex):
[  741.126426]         72 01 00 1d 00 00 00 0e 09 0c 00 00 00 00 00 00
[  741.126433]         00 4f 00 c2 00 50
[  741.126437] sd 0:0:0:0: [sda] ASC=0x0 ASCQ=0x1d
[  741.319938] sd 0:0:1:0: [sdb] Sense Key : 0x1 [current] [descriptor]
[  741.319941] Descriptor sense data with sense descriptors (in hex):
[  741.319943]         72 01 00 1d 00 00 00 0e 09 0c 00 00 00 00 00 00
[  741.319950]         00 4f 00 c2 00 50
[  741.319953] sd 0:0:1:0: [sdb] ASC=0x0 ASCQ=0x1d
[  741.411615] sd 0:0:1:0: [sdb] Sense Key : 0x1 [current] [descriptor]
[  741.411618] Descriptor sense data with sense descriptors (in hex):
[  741.411619]         72 01 00 1d 00 00 00 0e 09 0c 00 00 00 00 00 00
[  741.411626]         00 4f 00 c2 00 50
[  741.411630] sd 0:0:1:0: [sdb] ASC=0x0 ASCQ=0x1d
[  741.593100] sd 0:0:0:0: [sda] Sense Key : 0x1 [current] [descriptor]
[  741.593104] Descriptor sense data with sense descriptors (in hex):
[  741.593106]         72 01 00 1d 00 00 00 0e 09 0c 00 00 00 00 00 00
[  741.593113]         00 4f 00 c2 00 50

I think this was just when SMART was being accessed. Several times I saw output like this (pasted twice so you can see from the timestamps how often it was repeating):

Code:
[  743.552026] mpt2sas0: fault_state(0x0d04)!
[  743.552029] mpt2sas0: sending diag reset !!
[  744.282006] mpt2sas0: diag reset: SUCCESS
[  744.343342] mpt2sas0: LSISAS2008: FWVersion(05.00.00.00), ChipRevision(0x02), BiosVersion(07.05.01.00)
[  744.343345] mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[  744.343399] mpt2sas0: sending port enable !!
[  752.197132] mpt2sas0: port enable: SUCCESS
[  752.197222] mpt2sas0: _scsih_search_responding_sas_devices
[  752.197891] scsi target0:0:0: handle(0x0009), sas_addr(0x4433221103000000), enclosure logical id(0x500304800070cc80), slot(3)
[  752.197967] scsi target0:0:2: handle(0x000a), sas_addr(0x4433221105000000), enclosure logical id(0x500304800070cc80), slot(5)
[  752.197969]  handle changed from(0x000b)!!!
[  752.198045] scsi target0:0:3: handle(0x000b), sas_addr(0x4433221106000000), enclosure logical id(0x500304800070cc80), slot(6)
[  752.198047]  handle changed from(0x000c)!!!
[  752.198122] scsi target0:0:1: handle(0x000c), sas_addr(0x4433221104000000), enclosure logical id(0x500304800070cc80), slot(4)
[  752.198124]  handle changed from(0x000a)!!!
[  752.198424] scsi target0:0:4: handle(0x000d), sas_addr(0x4433221107000000), enclosure logical id(0x500304800070cc80), slot(7)
[  752.198505] mpt2sas0: _scsih_search_responding_raid_devices
[  752.198507] mpt2sas0: _scsih_search_responding_expanders
[  752.198509] mpt2sas0: _base_fault_reset_work: hard reset: success



[  762.198047] mpt2sas0: fault_state(0x0d04)!
[  762.198049] mpt2sas0: sending diag reset !!
[  762.926005] mpt2sas0: diag reset: SUCCESS
[  762.988338] mpt2sas0: LSISAS2008: FWVersion(05.00.00.00), ChipRevision(0x02), BiosVersion(07.05.01.00)
[  762.988340] mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[  762.988394] mpt2sas0: sending port enable !!
[  770.842132] mpt2sas0: port enable: SUCCESS
[  770.842222] mpt2sas0: _scsih_search_responding_sas_devices
[  770.842890] scsi target0:0:0: handle(0x0009), sas_addr(0x4433221103000000), enclosure logical id(0x500304800070cc80), slot(3)
[  770.842966] scsi target0:0:1: handle(0x000a), sas_addr(0x4433221104000000), enclosure logical id(0x500304800070cc80), slot(4)
[  770.842969]  handle changed from(0x000c)!!!
[  770.843044] scsi target0:0:2: handle(0x000b), sas_addr(0x4433221105000000), enclosure logical id(0x500304800070cc80), slot(5)
[  770.843046]  handle changed from(0x000a)!!!
[  770.843122] scsi target0:0:3: handle(0x000c), sas_addr(0x4433221106000000), enclosure logical id(0x500304800070cc80), slot(6)
[  770.843124]  handle changed from(0x000b)!!!
[  770.843434] scsi target0:0:4: handle(0x000d), sas_addr(0x4433221107000000), enclosure logical id(0x500304800070cc80), slot(7)
[  770.843514] mpt2sas0: _scsih_search_responding_raid_devices
[  770.843516] mpt2sas0: _scsih_search_responding_expanders
[  770.843519] mpt2sas0: _base_fault_reset_work: hard reset: success

I think those resets are when the machine became seemingly completely unresponsive for a few seconds.

So it looks like this card suffers from the same problem? This was on kernel 2.6.33.
 
I will be getting my HP SAS expander tomorrow and will have more benchmarks of speeds through the SAS expander by Wednesday.
 
Speeds through the SAS expander with 20 drives ended up being almost identical to what I got before (slightly lower, actually). I think it's limited by the SAS expander or PCI-E 1.0. I am going to test it on PCI-E 2.0 very soon: my nForce 650i board is having problems with OpenSolaris snv_134 (2009.06 works fine), which is what I need to run to use the LSI controller, so I bought an open-box Supermicro board off Newegg that has a PCI-E 2.0 slot.

Depending on what time it is delivered I might have some more benchies tomorrow.
 
Well, I am having problems running Linux after upgrading the motherboard, but I finally got OpenSolaris working OK. Other than losing more disk space than I should, I am pretty happy with it and am getting excellent read/write performance:

writes:
Code:
root@opensolaris: 11:36 AM :/data# dd bs=1M count=100000 if=/dev/zero of=./100gb.bin
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 233.257 s, 450 MB/s

reads:
Code:
root@opensolaris: 11:44 AM :/data# dd bs=1M if=./100gb.bin of=/dev/null
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 131.051 s, 800 MB/s

The zpool:

Code:
root@opensolaris: 11:46 AM :/data# zpool status data
  pool: data
 state: ONLINE
 scrub: scrub completed after 0h8m with 0 errors on Sun May 30 10:38:37 2010
config:

        NAME                         STATE     READ WRITE CKSUM
        data                         ONLINE       0     0     0
          raidz2-0                   ONLINE       0     0     0
            c4t5000C500028BD5FCd0p0  ONLINE       0     0     0
            c4t5000C50009A4D727d0p0  ONLINE       0     0     0
            c4t5000C50009A46AF5d0p0  ONLINE       0     0     0
            c4t5000C50009A515B0d0p0  ONLINE       0     0     0
            c4t5000C500028A81BEd0p0  ONLINE       0     0     0
            c4t5000C500028B44A1d0p0  ONLINE       0     0     0
            c4t5000C500028B415Bd0p0  ONLINE       0     0     0
            c4t5000C500028B23D2d0p0  ONLINE       0     0     0
            c4t5000C5000CC3338Dd0p0  ONLINE       0     0     0
            c4t5000C500027F59C8d0p0  ONLINE       0     0     0
            c4t5000C50009DBF8D4d0p0  ONLINE       0     0     0
            c4t5000C500027F3C1Fd0p0  ONLINE       0     0     0  
            c4t5000C5000DAF02F3d0p0  ONLINE       0     0     0
            c4t5000C5000DA7ED4Ed0p0  ONLINE       0     0     0
            c4t5000C5000DAEF990d0p0  ONLINE       0     0     0
            c4t5000C5000DAEEF8Ed0p0  ONLINE       0     0     0
            c4t5000C5000DAEB881d0p0  ONLINE       0     0     0
            c4t5000C5000A121581d0p0  ONLINE       0     0     0
            c4t5000C5000DAC848Fd0p0  ONLINE       0     0     0  
            c4t5000C50002770EE6d0p0  ONLINE       0     0     0

errors: No known data errors

I will need to re-create it though, as I messed up my 18/19 disk order; I found this after installing smartmontools so I could pull the serial numbers and verify against my old setup. The unintelligible device names made it kind of a pain to make sure the order was correct.
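(For reference, re-creating the pool is a single command; a minimal sketch with generic device names - the disks are simply listed after "raidz2" in the desired order:)

Code:
# hypothetical sketch; list all 20 devices in slot order
zpool create data raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0   # ...and so on for all 20
zpool status data                                      # verify the layout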

The writes are significantly better than mdadm on Linux. Also, this is pretty much maxing out the CPU on a dual-core 2.66 GHz Core 2 Duo. I am sure a faster CPU could give even more impressive speeds.

I am curious what I can get with Linux/mdadm now that it's on PCI-E 2.0, but the machine isn't wanting to boot from my USB CD-ROM drive (I used a USB stick for Solaris).
 
Yes, since ZFS is heavily threaded, a quad-core or octo-core would put all those threads to work. UNIX is generally quite SMP-oriented.

houkouonchi, not sure whether it is appropriate to ask, but would it be possible for you to boot a FreeBSD kernel and send me the "dmesg" output (kernel messages) that detects your USAS2-L8E? That way, I could create a PR (Problem Report) to get it working on FreeBSD also. Would probably need a "pciconf -l" too.

The easiest way to get this done would be to download a "memstick.img" disk image, which you then copy to a USB stick from a Linux PC with:
dd if=/path/to/memstick.img of=/dev/sdc bs=1M
NOTE: change /dev/sdc to the name of your USB pendrive; or this might kill data on one of your HDDs!
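(One way to double-check which device node the stick registered as before running dd - a standard Linux check:)

Code:
dmesg | tail   # after plugging the stick in, the new sdX device is logged here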

The images can be downloaded here:
ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/ISO-IMAGES/8.1/

You would need at least a 1GB USB memory stick. Once booted, you get into the sysinstall menu; go to the 'Fixit' menu and start the Fixit shell. Then you can type:

dmesg > dmesg.txt
pciconf -l > pciconf.txt

then transfer those two files to another PC via SFTP:

sftp username@hostname
(works like SSH)

If you could do that, it would be great! But if you'd rather not, then I accept that of course. :)
 
Where did you buy the card? It's not available from my usual Supermicro source.
 
I would use scp instead of sftp (unless BSD doesn't have that), but yeah, when I am done migrating my RAID (in a couple of days) I can definitely do that, as I have spare USB sticks laying around.

Where did you buy the card? It's not available from my usual Supermicro source.

I got it from wiredzone:

http://www.wiredzone.com/itemdesc.a...=SATA,+SAS+RAID+Controllers&source=googleProd

If I recall, they shipped it pretty fast too. So far things have been flawless on OpenSolaris once I got b134 (which supports the card) installed.

I did a scrub when I had about 300 GB of data on the raidz2 pool and it completed in 14 minutes, which was CPU limited. Also, I have been copying data non-stop to it for about the last day and it's been going so far without any problems:

Code:
root@opensolaris: 09:46 PM :~# df -H /data
Filesystem             Size   Used  Avail Use% Mounted on
data                    18T   8.4T   9.2T  48% /data

Still got a bit until I get everything offloaded:

Code:
root@sabayonx86-64: 09:44 PM :~# df -H /data
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdc3               18T    15T   3.6T  80% /data

So likely another 20-24 hours.
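(Side note on the scrub mentioned above - it is kicked off and checked with the standard commands:)

Code:
zpool scrub data    # start the scrub
zpool status data   # shows scrub progress and completion time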
 
If houkouonchi doesn't have a chance to do this, I'll do it once I'm back at my place (currently on the road)...

I have the same card, and the lack of BSD support has been quite troublesome ;)... I had intended to use FreeNAS, but I am going to dive into a Solaris variant (like EON) instead because of the driver support.
 
Alright, when one of you guys has the dmesg and pciconf information, I'll create a PR (Problem Report) on the FreeBSD bug tracking system to get some devs working on this.

Would be quite cool if both USAS-L8i and the newer USAS2-L8e work. :)
 
Hmm, sorry to ask this question, but are you using this particular HBA with a SuperMicro board?

The reason I ask is that this seems a great controller card for WHS and the like, but UIO is SM-only... They have the AOC-SASLP-MV8 8-port PCIe 1.1 HBA, which seems "da bomb" as far as "dumb" controllers go, but if this card is UIO-only, then, well, it sucks... I assume there isn't a way of turning a UIO card into regular PCIe, right?

Also, if I may ask, how much did it cost? I believe the AOC-SASLP-MV8 is around $130 (a great per-port price); how does this one compare?

Great numbers, btw. It seems a wicked setup.

Thanks for the help.
 
I am using it in an SM board ATM, but wasn't originally, and the board I have does not have any UIO slots. I just bought some cheap spacers at Home Depot so I was able to get the bracket to fit in a regular slot. You can see them in this picture.

 
I am using it in an SM board ATM, but wasn't originally, and the board I have does not have any UIO slots.
Which means UIO is basically a PCIe slot with the wrong placement. Well, at least it's not an inverted PCIe slot, as I've seen elsewhere (I believe it was HP that had that "brilliant" idea).

Damned OEMs and their "let's use a completely standard technology in a non-standard way and call it something different so we can fleece more money out of the consumer, say we're the only ones who have this hardware, and lock them to the brand". SM is not the only one doing this, however; I'm not really ranting about them specifically.

It's just, standards are there for a reason: to make everything cross-compatible. If you don't want to support something working outside your brand, you're free to do so. But using standard tech in non-standard implementations on purpose is akin to sending us back almost three decades, to when "compatibility" was absent from the IT dictionary...

Oh, well, /RANT. Thanks for the input; it's a good thing you can make it work on regular motherboards. I'll try to check prices on the thing, but as always availability will be a MAJOR problem here in Portugal... And if I manage to get a quote, it will probably be heart-attack-inducing... lol
 
Just got my AOC-USAS2-L8e today and am a little confused about how you got the LED lights on your card not to rub on another slot - the LEDs are currently resting on the slot above and I'm worried they may snap.

(attached image: photojc.jpg)


EDIT: Sorted out that I can simply push the LED plastic through :D
 
The simple fix was popping it out; I didn't realize there wasn't any 'internal' wiring. Also fixed the bracket issue by popping along to the local hardware store and grabbing some longer screws + bolts.

My next question is for AOC-USAS2-L8e owners - is this card passing the 'sleep' command through the HP SAS Expander? I can't for the life of me get my HDDs to spin down using the "Turn off Hard Drive after" setting in Windows Server 2008 R2.

Thoughts?
 
Hello, does this have dual-link support on the HP SAS Expander?

And does it work for FreeBSD now?
 
LSI SAS2008 controllers like the SuperMicro USAS2-L8i or USAS2-L8e work on the latest FreeBSD and ZFSguru - but only with IT-mode firmware. If you got a controller that can only operate in IR mode (I believe an IBM model?) then you cannot use it with FreeBSD; it has to operate in IT mode.

The older 1068E chip from LSI can work in both IR and IT mode, though IT mode is always preferred.
 
So which one would be the better choice for me, for use with FreeBSD, RAID-Z and a HP SAS Expander?

This one or the LSI SAS 9211-8i?

And is it hard to get a card from IR mode to IT mode?
What do "IR" and "IT" stand for in this case?
 
IT is HBA mode (a plain disk controller, no RAID)
IR is RAID mode (0, 1, etc...)

For ZFS the best thing is to flash to IT mode, since hardware RAID is of no use and IT mode also allows more direct access to the HDDs.
I have the IT-mode firmware available; ask me if you can't get it. You should, however, be able to get it from the Supermicro FTP servers somewhere.
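For what it's worth, flashing IT firmware on these SAS2008 cards is usually done with LSI's sas2flsh utility from a DOS or EFI boot disk; a rough sketch (the file names are assumptions and vary by firmware package):

Code:
REM rough sketch - exact file names vary by firmware package
sas2flsh -listall
REM -o = advanced mode, -f = firmware image, -b = boot ROM (BIOS)
sas2flsh -o -f 2008it.bin -b mptsas2.rom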
 
In this post someone mentioned they found a replacement bracket for the SuperMicro AOC-USAS-L8i:
http://opensolaris.org/jive/thread.jspa?threadID=128819

The bracket is a PCI replacement for the UIO bracket the card comes with: http://www.mouser.com/ProductDetail/Keystone-Electronics/9203/?qs=sGAEpiMZZMsQtlBhqKq43bjBhDwhG44Z

Could I do the same with the AOC-USAS2-L8e? Could I use basically any bracket off of an old PCI card?

Hi RavenShadow,

The bracket is pretty standardized and should fit any standard non-low-profile PCI/PCIe card.

Check the holes:
http://www.overclockers.com.au/pix/image.php?id=caf7h&t=1

It would be best if somebody who owns the card could measure the holes on it. After that you can check the fit ...

This is also interesting:
http://blog.agdunn.net/wp-content/uploads/2009/10/spacers.JPG

It looks like the owner there just raised the UIO bracket with spacers.

If you really want an original bracket, what about this? - PERC5i PERC 5i 5/i 6i SAS Controller PCI Bracket | eBay

The manufacturer that you found is pretty expensive, and shipping from that shop is steep - 40 EUR to my country.

Ivan
 
Thanks Kiwwiaq, I hadn't thought of extending the UIO bracket over to the next slot. That seems cheaper.

The Mouser site shows $6 shipping for me in the US - still extremely steep for such a cheap piece. If I end up needing to purchase a bracket I'll definitely have to look elsewhere.

Another question: Has anyone used the AOC-USAS2-L8E with a P35-based board? I have an ABIT IP35 Pro left over that I want to leverage.
http://www.abit.com.tw/page/en/motherboard/motherboard_detail.php?pMODEL_NAME=IP35+Pro&fMTYPE=LGA775

It has a secondary PCIe physical x16 / electrical x4 slot that I'm hoping the USAS2-L8E will work with (I know I'll be bandwidth limited but I only need to put two drives on it for now).
 
Hi,

Has anybody tried this AOC-USAS2-L8e on Solaris 11 Express?

I just installed the OS; I can see the cards, but no disks. I have not checked why yet - just want to know if there is a successful user, or I'll go for the latest OI ..

Thanx ...
 
I run 3 of them on SE11; they work fine.

Hi!

Any special setup or just plug and play? Very strange that I can't see any disks... Ctrl+S during card BIOS initialization is not working - probably disabled by the IT firmware...

Thanx !

Ivan
 
Issue solved! Next time I should check which connector I am using - the wrong port was used. I connected the proper SATA port on the backplane and the disks are visible and working fine.
 
This actually confused me as well. If you want to go in there (to change staggered spinup and other settings) you have to change your boot order.

It's really weird how the 'BIOS utility' works on these cards. Basically, when you hit Ctrl+S the card presents an INT13 disk to the BIOS ('LSI management disk' or something along those lines) - it shows up as a disk instead of invoking an actual utility. Thus, if your machine is not set up to boot from the disk the card creates when you hit Ctrl+S, you can't access the utility.

So if you hit Del to go into your mobo's BIOS and also hit Ctrl+S, you should see a disk for the LSI controller utility. From there, change your boot order; on the next reboot, hitting Ctrl+S should bring you into the utility.

It was really random that I even figured this out, as this is a really ghetto way to implement a management utility (IMHO).
 
Hmm, very nice bug, good to know. I hit the same bug, but I did not have time to research the workaround.

What is your card's BIOS version? Mine is 7.11 as far as I remember; can't check it now. The latest is 7.17.

I am going to flash the cards to the latest firmware and share the findings ...

A USB disk is already prepared with the latest BIOS and firmware upgrade. Just need some free time. :)

What mobo do you use?

I am using a standard desktop PC - Asus M4A78T-E with the latest BIOS.
 
It's not a bug, it's a feature! :)
 
Sorry to necro an old thread - interested to see if people have had any luck with 4-6TB HDDs on this controller.

Wondering if I should upgrade or if my trusty old AOC-USAS2-L8e will keep kicking on :)
 