houkouonchi (RIP)
Joined: Sep 14, 2008 · Messages: 1,622
I haven't seen too many threads or mentions of this specific card. It is the Supermicro UIO card, but the SAS 2.0 and PCI-E 2.0 version. I went with this model (vs. the L8I) because I will be using it with OpenSolaris, and I heard the L8E (plain HBA, no RAID) uses the mpt_sas driver instead of the buggy mr_sas driver used by the RAID-capable cards (RAID 0, 1, 10, etc.).
I don't think I have seen anyone actually post reviews or benchmarks of these new SAS2 UIO cards, especially the AOC-USAS2-L8E.
Anyway, I gotta say I am pretty impressed with this thing. I am still waiting for my HP SAS expander, but I hooked the card up in the system I plan on using it in: a Core 2 Duo E6600 (2.4 GHz) that only supports PCI-E 1.0. I used some spacers to fit the card correctly in the case.
Here is what lspci shows:
Code:
02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)
For now my setup is 8x 2TB Hitachi drives using Linux software RAID (mdadm).
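For reference, an 8-disk mdadm stripe like the one benchmarked below can be created with something along these lines (the chunk size is an assumption on my part, not necessarily what was used here):

```shell
# Hypothetical creation of the 8-disk RAID0 (/dev/md127, sda..sdh per
# the dmesg output below); --chunk=64 is an assumed value, not confirmed
mdadm --create /dev/md127 --level=0 --raid-devices=8 --chunk=64 /dev/sd[a-h]
```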
Here is the driver loading (dmesg):
Code:
[ 3.863130] mpt2sas version 03.100.03.00 loaded
[ 3.863206] scsi0 : Fusion MPT SAS Host
[ 3.863711] ACPI: PCI Interrupt Link [LNEA] enabled at IRQ 19
[ 3.863715] alloc irq_desc for 19 on node -1
[ 3.863717] alloc kstat_irqs on node -1
[ 3.863723] mpt2sas 0000:02:00.0: PCI INT A -> Link[LNEA] -> GSI 19 (level, low) -> IRQ 19
[ 3.863731] mpt2sas 0000:02:00.0: setting latency timer to 64
[ 3.863735] mpt2sas0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (8191576 kB)
[ 3.863802] alloc irq_desc for 28 on node -1
[ 3.863803] alloc kstat_irqs on node -1
[ 3.863807] mpt2sas 0000:02:00.0: irq 28 for MSI/MSI-X
[ 3.863823] mpt2sas0: PCI-MSI-X enabled: IRQ 28
[ 3.863825] mpt2sas0: iomem(0xfeafc000), mapped(0xffffc90011d70000), size(16384)
[ 3.863827] mpt2sas0: ioport(0xd000), size(256)
[ 3.936006] mpt2sas0: sending message unit reset !!
[ 3.938005] mpt2sas0: message unit reset: SUCCESS
[ 3.974436] mpt2sas0: Allocated physical memory: size(1028 kB)
[ 3.974438] mpt2sas0: Current Controller Queue Depth(435), Max Controller Queue Depth(4287)
[ 3.974439] mpt2sas0: Scatter Gather Elements per IO(128)
[ 4.032369] mpt2sas0: LSISAS2008: FWVersion(05.00.00.00), ChipRevision(0x02), BiosVersion(07.05.01.00)
[ 4.032371] mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[ 4.032430] mpt2sas0: sending port enable !!
[ 4.033114] mpt2sas0: port enable: SUCCESS
And detecting drives:
Code:
[ 4.461219] scsi 0:0:0:0: Direct-Access ATA Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[ 4.461229] scsi 0:0:0:0: SATA: handle(0x0009), sas_addr(0x4433221100000000), device_name(0x0000000000000000)
[ 4.461231] scsi 0:0:0:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(0)
[ 4.461313] scsi 0:0:0:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[ 4.461317] scsi 0:0:0:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[ 4.461602] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 4.463184] sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[ 4.663051] ata2: SATA link down (SStatus 0 SControl 300)
[ 4.780951] sd 0:0:0:0: [sda] Write Protect is off
[ 4.780953] sd 0:0:0:0: [sda] Mode Sense: 73 00 00 08
[ 4.783492] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 4.960666] scsi 0:0:1:0: Direct-Access ATA Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[ 4.960671] scsi 0:0:1:0: SATA: handle(0x000a), sas_addr(0x4433221101000000), device_name(0x0000000000000000)
[ 4.960674] scsi 0:0:1:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(1)
[ 4.960754] scsi 0:0:1:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[ 4.960757] scsi 0:0:1:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[ 4.961028] sd 0:0:1:0: Attached scsi generic sg1 type 0
[ 4.962605] sd 0:0:1:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[ 4.977075] ata4: SATA link down (SStatus 0 SControl 300)
[ 5.058463] sda: unknown partition table
[ 5.278351] sd 0:0:1:0: [sdb] Write Protect is off
[ 5.278353] sd 0:0:1:0: [sdb] Mode Sense: 73 00 00 08
[ 5.280896] sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 5.348184] ieee1394: Host added: ID:BUS[0-00:1023] GUID[00023c041104188c]
[ 5.358423] sd 0:0:0:0: [sda] Attached SCSI disk
[ 5.408179] ieee1394: Host added: ID:BUS[1-00:1023] GUID[000010dc0106cb79]
[ 5.460934] scsi 0:0:2:0: Direct-Access ATA Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[ 5.460939] scsi 0:0:2:0: SATA: handle(0x000b), sas_addr(0x4433221102000000), device_name(0x0000000000000000)
[ 5.460942] scsi 0:0:2:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(2)
[ 5.461023] scsi 0:0:2:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[ 5.461026] scsi 0:0:2:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[ 5.461298] sd 0:0:2:0: Attached scsi generic sg2 type 0
[ 5.462855] sd 0:0:2:0: [sdc] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[ 5.555863] sdb: unknown partition table
[ 5.775175] sd 0:0:2:0: [sdc] Write Protect is off
[ 5.775177] sd 0:0:2:0: [sdc] Mode Sense: 73 00 00 08
[ 5.777723] sd 0:0:2:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 5.855823] sd 0:0:1:0: [sdb] Attached SCSI disk
[ 5.960564] scsi 0:0:3:0: Direct-Access ATA Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[ 5.960569] scsi 0:0:3:0: SATA: handle(0x000c), sas_addr(0x4433221103000000), device_name(0x0000000000000000)
[ 5.960572] scsi 0:0:3:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(3)
[ 5.960652] scsi 0:0:3:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[ 5.960655] scsi 0:0:3:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[ 5.960923] sd 0:0:3:0: Attached scsi generic sg3 type 0
[ 5.962487] sd 0:0:3:0: [sdd] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[ 6.052677] sdc: unknown partition table
[ 6.275578] sd 0:0:3:0: [sdd] Write Protect is off
[ 6.275580] sd 0:0:3:0: [sdd] Mode Sense: 73 00 00 08
[ 6.278136] sd 0:0:3:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 6.352647] sd 0:0:2:0: [sdc] Attached SCSI disk
[ 6.461019] scsi 0:0:4:0: Direct-Access ATA Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[ 6.461024] scsi 0:0:4:0: SATA: handle(0x000e), sas_addr(0x4433221104000000), device_name(0x0000000000000000)
[ 6.461027] scsi 0:0:4:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(4)
[ 6.461107] scsi 0:0:4:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[ 6.461110] scsi 0:0:4:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[ 6.461385] sd 0:0:4:0: Attached scsi generic sg4 type 0
[ 6.462942] sd 0:0:4:0: [sde] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[ 6.553093] sdd: unknown partition table
[ 6.777994] sd 0:0:4:0: [sde] Write Protect is off
[ 6.777996] sd 0:0:4:0: [sde] Mode Sense: 73 00 00 08
[ 6.780538] sd 0:0:4:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 6.853051] sd 0:0:3:0: [sdd] Attached SCSI disk
[ 6.960460] scsi 0:0:5:0: Direct-Access ATA Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[ 6.960465] scsi 0:0:5:0: SATA: handle(0x000d), sas_addr(0x4433221105000000), device_name(0x0000000000000000)
[ 6.960468] scsi 0:0:5:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(5)
[ 6.960548] scsi 0:0:5:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[ 6.960551] scsi 0:0:5:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[ 6.960818] sd 0:0:5:0: Attached scsi generic sg5 type 0
[ 6.962384] sd 0:0:5:0: [sdf] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[ 7.055503] sde: unknown partition table
[ 7.278799] sd 0:0:5:0: [sdf] Write Protect is off
[ 7.278802] sd 0:0:5:0: [sdf] Mode Sense: 73 00 00 08
[ 7.281345] sd 0:0:5:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 7.355472] sd 0:0:4:0: [sde] Attached SCSI disk
[ 7.460671] scsi 0:0:6:0: Direct-Access ATA Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[ 7.460676] scsi 0:0:6:0: SATA: handle(0x0010), sas_addr(0x4433221106000000), device_name(0x0000000000000000)
[ 7.460679] scsi 0:0:6:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(6)
[ 7.460759] scsi 0:0:6:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[ 7.460762] scsi 0:0:6:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[ 7.461040] sd 0:0:6:0: Attached scsi generic sg6 type 0
[ 7.462595] sd 0:0:6:0: [sdg] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[ 7.556310] sdf: unknown partition table
[ 7.779789] sd 0:0:6:0: [sdg] Write Protect is off
[ 7.779792] sd 0:0:6:0: [sdg] Mode Sense: 73 00 00 08
[ 7.782346] sd 0:0:6:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 7.856267] sd 0:0:5:0: [sdf] Attached SCSI disk
[ 7.960361] scsi 0:0:7:0: Direct-Access ATA Hitachi HDS72202 A28A PQ: 0 ANSI: 5
[ 7.960366] scsi 0:0:7:0: SATA: handle(0x000f), sas_addr(0x4433221107000000), device_name(0x0000000000000000)
[ 7.960368] scsi 0:0:7:0: SATA: enclosure_logical_id(0x500304800070cc80), slot(7)
[ 7.960448] scsi 0:0:7:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[ 7.960451] scsi 0:0:7:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
[ 7.960719] sd 0:0:7:0: Attached scsi generic sg7 type 0
[ 7.962280] sd 0:0:7:0: [sdh] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[ 8.057306] sdg: unknown partition table
[ 8.272079] sd 0:0:7:0: [sdh] Write Protect is off
[ 8.272081] sd 0:0:7:0: [sdh] Mode Sense: 73 00 00 08
[ 8.274638] sd 0:0:7:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 8.357265] sd 0:0:6:0: [sdg] Attached SCSI disk
[ 8.549594] sdh: unknown partition table
[ 8.849557] sd 0:0:7:0: [sdh] Attached SCSI disk
Here are some benchmarks I got in raid0. Amazingly enough, I got higher read speeds and about equal write speeds compared to my ARC-1280ML (granted, it's in raid6):
hdparm:
Code:
/dev/md127:
Timing buffered disk reads: 2824 MB in 3.00 seconds = 941.18 MB/sec
Very nice result... my Areca caps out at around 820-830 MB/sec. This could likely go further, as hdparm on a single disk showed 120 MB/sec; 120 * 8 = 960, so this is scaling almost perfectly.
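That scaling claim is easy to sanity-check with a quick bit of shell (numbers taken from the hdparm results above):

```shell
# Sanity-check the RAID0 scaling: 8 disks at ~120 MB/s each vs. the
# 941 MB/s hdparm measured on the array
per_disk=120   # single-disk hdparm, MB/s
disks=8
measured=941   # array hdparm, MB/s (rounded)
ideal=$((per_disk * disks))
# awk for the floating-point division
eff=$(awk -v m="$measured" -v i="$ideal" 'BEGIN { printf "%.1f", 100 * m / i }')
echo "ideal ${ideal} MB/s, measured ${measured} MB/s => ${eff}% efficiency"
```

About 98% of the theoretical aggregate, which is about as good as striping gets on a PCI-E 1.0 x8 link.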
Here is a 50 gigabyte read using dd with this command:
dd bs=1M count=50000 if=/dev/md127 of=/dev/null
result:
Code:
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 56.7052 s, 925 MB/s
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 56.6312 s, 926 MB/s
I ran each dd test twice to be sure. Again, very good.
Here is a 30 GB write test using:
dd bs=1M count=30000 if=/dev/zero of=/dev/md127
One thing to note is that dd sat around 90% CPU usage and sometimes hit 100% during the test, which could be limiting this a bit.
The results:
Code:
30000+0 records in
30000+0 records out
31457280000 bytes (31 GB) copied, 41.8887 s, 751 MB/s
30000+0 records in
30000+0 records out
31457280000 bytes (31 GB) copied, 42.9038 s, 733 MB/s
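Since dd itself was pegged near 100% CPU here, one way to take some of that overhead out of the picture is a larger block size plus O_DIRECT. This is just a sketch of a variant run (oflag=direct is a GNU dd flag), not something from the tests above:

```shell
# Variant 30 GB write test with lower dd overhead: 4 MiB blocks (fewer
# syscalls) and O_DIRECT to bypass the page cache.
# WARNING: destroys any data on /dev/md127.
dd bs=4M count=7500 if=/dev/zero of=/dev/md127 oflag=direct
```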
Random access reads....
1 thread:
Code:
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/md127 [31256231936 blocks, 16003190751232 bytes, 14904 GB, 15261832 MB, 16003 GiB, 16003190 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 76 seeks/second, 13.032 ms random access time (4970178045 < offsets < 16002929280578)
64 threads:
Code:
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/md127 [31256231936 blocks, 16003190751232 bytes, 14904 GB, 15261832 MB, 16003 GiB, 16003190 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 859 seeks/second, 1.163 ms random access time (52879863 < offsets < 16002049683371)
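A per-spindle view makes the 64-thread number less surprising: with 64 outstanding requests, NCQ lets each of the 8 drives reorder its own queue, so each disk can beat the single-threaded 76 seeks/sec:

```shell
# Per-disk IOPS implied by the 64-thread seeker result
total=859
disks=8
per_disk=$(awk -v t="$total" -v d="$disks" 'BEGIN { printf "%.0f", t / d }')
echo "${per_disk} seeks/second per disk (vs. 76 single-threaded)"
```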
Now for the raid6 tests, first up hdparm:
Code:
/dev/md6:
Timing buffered disk reads: 2120 MB in 3.00 seconds = 706.64 MB/sec
This was right around what I was expecting after losing two disks' worth of sequential read bandwidth to parity.
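That expectation works out numerically, too: with 6 of the 8 disks carrying data, sequential reads should land near 6/8 of the RAID0 figure:

```shell
# RAID6 sequential-read expectation derived from the RAID0 measurement
raid0=941   # MB/s from the RAID0 hdparm run
expected=$(awk -v r="$raid0" 'BEGIN { printf "%.0f", r * 6 / 8 }')
echo "expected ~${expected} MB/sec"
```

Which lines up almost exactly with the 706.64 MB/sec hdparm measured.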
Here are the big reads (this time using a 40 GB read since I knew it would go slower):
command:
dd bs=1M count=40000 if=/dev/md6 of=/dev/null
Code:
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 59.6086 s, 704 MB/s
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 59.6698 s, 703 MB/s
Again right around what I was expecting and very impressive for software raid6. This is completely I/O limited.
Big write test. I gotta say I was really surprised by this. When I tested a PCI-X card I only got around 110 MB/sec, and it wasn't CPU or I/O limited. Maybe it was the newer kernel, but this write test left my system only 2-5% idle: dd used about 60-70%, md6_raid6 77-85%, and a flush process 25-35%, which was almost all the CPU (including %sys and other misc stuff) that this dual-core system could muster. The write speeds were much better than what I saw in the past:
command:
dd bs=1M count=30000 if=/dev/zero of=/dev/md6
Code:
30000+0 records in
30000+0 records out
31457280000 bytes (31 GB) copied, 92.8034 s, 339 MB/s
30000+0 records in
30000+0 records out
31457280000 bytes (31 GB) copied, 92.7274 s, 339 MB/s
This is very good, and I would say it is performing better in software RAID than an ARC-1220 and close to an ARC-1222.
Here are the random read/seek tests for raid6.
1 thread:
Code:
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/md6 [23442172416 blocks, 12002392276992 bytes, 11178 GB, 11446373 MB, 12002 GiB, 12002392 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 77 seeks/second, 12.848 ms random access time (7743477092 < offsets < 11998198790842)
64 threads:
Code:
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/md6 [23442172416 blocks, 12002392276992 bytes, 11178 GB, 11446373 MB, 12002 GiB, 12002392 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 851 seeks/second, 1.174 ms random access time (641617286 < offsets < 12001136411994)
All around, I would say these results were impressive; they are very close to what I have seen from some hardware RAID controllers.
I do plan to test with 20 drives (Seagate 7200.11 AS 1TB), possibly on a faster CPU, and hopefully on a PCI-E 2.0 motherboard in the future, though I am not making any promises about the faster CPU or the PCI-E 2.0 slot (I will definitely test with 20 disks). I am very impressed with the speeds this card gave me, especially since I am not using it under the most optimal circumstances. This seems like a good card for anyone going the software RAID/replication route, and I think it will give me excellent ZFS performance.
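For the planned OpenSolaris/ZFS build, a dual-parity pool roughly matching the raid6 layout tested here would be created along these lines (the c7tXd0 device names are placeholders, not the real ones on this box):

```shell
# Hypothetical raidz2 pool over the same 8 drives on OpenSolaris;
# device names are placeholders -- use `format` to find the real ones
zpool create tank raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0 \
                         c7t4d0 c7t5d0 c7t6d0 c7t7d0
zpool status tank
```

raidz2 gives the same two-disk parity overhead as the md raid6 array, so the sequential numbers above should be a reasonable ballpark for what ZFS can do on this hardware.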