Interesting cheap JBODs available

WTH!

Please tell me I'm doing something wrong.

Make raid0 with 6 Hitachi 2TB 3Gbps drives:

Code:
# zpool create rack1 c6t5000CCA221D42102d0 c6t5000CCA221D2E20Fd0 c6t5000CCA221DA8BD0d0 c6t5000CCA221DC3FA6d0 c6t5000CCA222C8C6BDd0 c6t5000CCA221DFD972d0

# zfs set primarycache=none rack1

Note: purposely disabled the ZFS read cache so the reads actually hit the disks.
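(To sanity-check that it stuck, and to put things back afterwards:)

Code:
# zfs get primarycache rack1        (should report "none")
# zfs set primarycache=all rack1    (restores the default once you're done benchmarking)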

Code:
# time dd if=/rack1/1gb.img of=/dev/null bs=1048592000 count=1
0+1 records in
0+1 records out

real    0m6.120s
user    0m0.001s
sys     0m2.027s
# time dd if=/rack1/1gb.img of=/dev/null bs=1048592000 count=1
0+1 records in
0+1 records out

real    0m6.012s
user    0m0.001s
sys     0m2.008s
# time dd if=/rack1/1gb.img of=/dev/null bs=1048592000 count=1
0+1 records in
0+1 records out

real    0m5.904s
user    0m0.001s
sys     0m1.994s

1GB/6s = ~175MB/s

You're kidding...right?

It should be 500-600MB/s with that many drives.
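Back-of-envelope (using a ballpark ~100MB/s sustained sequential per drive, which is an assumption rather than a measured figure):

Code:
expected:  6 drives x ~100MB/s = ~600MB/s
observed:  1GB / ~6s           = ~170MB/s  (under 30MB/s per drive)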

Please tell me I'm doing something stupid.

Has anyone else done perf testing with these Rackable Systems enclosures yet?
A quick test with 2 drives in RAID0 yielded an average of 320MB/sec using HDTune (seems too high to me, though).
 
Bizarre...write throughput is as expected:

Code:
# time dd of=/rack1/10g_2.img if=/dev/zero bs=1048576 count=10000
10000+0 records in
10000+0 records out

real    0m15.003s
user    0m0.008s
sys     0m5.599s


# zpool iostat -v 10 10


               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rack1       17.6G  10.9T      3  4.59K  1.84K   571M
  c6t5000CCA221D42102d0  2.94G  1.81T      1    786    658  95.0M
  c6t5000CCA221D2E20Fd0  2.93G  1.81T      0    784    329  94.8M
  c6t5000CCA221DA8BD0d0  2.94G  1.81T      0    784     94  95.0M
  c6t5000CCA221DC3FA6d0  2.94G  1.81T      0    770     94  93.9M
  c6t5000CCA222C8C6BDd0  2.92G  1.81T      0    782    188  95.9M
  c6t5000CCA221DFD972d0  2.98G  1.81T      1    791    517  96.4M
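(One caveat when writing from /dev/zero: if compression were enabled on the pool, those write numbers would be meaningless, so it's worth a quick check that it's off:)

Code:
# zfs get compression rack1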

But read throughput blows monkey chunks:

Ouch!

Code:

# time dd if=/rack1/10g.img of=/dev/null bs=1048576 count=10000
9536+1 records in
9536+1 records out

real    0m50.764s
user    0m0.022s
sys     0m5.261s


               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rack1       20.0G  10.9T  1.97K      0   185M      0
  c6t5000CCA221D42102d0  3.33G  1.81T    323      0  30.8M      0
  c6t5000CCA221D2E20Fd0  3.32G  1.81T    316      0  31.0M      0
  c6t5000CCA221DA8BD0d0  3.33G  1.81T    317      0  31.5M      0
  c6t5000CCA221DC3FA6d0  3.32G  1.81T    363      0  30.1M      0
  c6t5000CCA222C8C6BDd0  3.32G  1.81T    358      0  30.1M      0
  c6t5000CCA221DFD972d0  3.39G  1.81T    339      0  31.8M      0
 
Maybe the PMC Sierra expander isn't as well behaved as the LSI one?
What SAS HBA/raid card are you using?

I've only used the Intel and Chenbro SAS expanders, which are both based on LSI chipsets.
Chenbro was rather finicky, but the Intel has been fairly well behaved.

As for RAID cards, I've had nothing but problems with Adaptec (kernel panics, etc).
For the most part the Areca works well, but sometimes it hard-locks the system
or drops all the freaking disks if one disk is being marginal
(so much for RAID providing improved availability). At least the arrays don't go to
a degraded state when that happens; they are detected as normal on the next boot.
 
Anyone who tests these on an ARC-1882/1880 can check this.

Go to Information -> SAS Chip Information in the web interface and it will list, under attached expander, the number of lanes and their speed. I had two SFF-8087 cables hooked up to this expander with 6-gig SAS drives, so it lists 8x6 G: 8 lanes (4 per cable) at 6 gigabit:

http://box.houkouonchi.jp/archttp1882/arc1882_2.png

Here is my home machine with an 1880X hooked up to two HP SAS expanders with 6Gb SATA 3TB disks:

http://box.houkouonchi.jp/sas_chip_information.png

Only one cable each so 4x6 G.

Interestingly enough, the disk info itself says the current SATA mode is only SATA300 for the disks, so I'm not sure if it's doing multiplexing or not. I am bottlenecked by the speed of the card, though, so it's hard for me to tell. I get around 2 gigabytes/sec read, which is about 1000 megabytes/sec per SAS expander (each hooked up to 15 disks):

http://box.houkouonchi.jp/disk_info.png

Hi, did you get this from digitalmind2000 or mrackables?
 
Yeah, same question for you packetboy - which model? From digitalmind2000 or mrackables?
 
For anyone still interested in purchasing an inexpensive enclosure, there is an SGI InfiniteStorage 220 for sale on eBay as well for $215 plus shipping. It is 12 drives instead of 16, but it looks a lot more aesthetically pleasing. The seller is also including cabling. Not sure if linking it is against the rules, but I'll err on the side of recklessness and remove the link if asked to do so.

http://www.ebay.com/itm/380405415953
 
Yeah, same question for you packetboy - which model? From digitalmind2000 or mrackables?

From mrRackables.

Enclosure shows up like this:

Code:
scsi 4:0:16:0: Enclosure         RACKABLE SE3016-SAS       0227 PQ: 0 ANSI: 5
scsi 4:0:16:0: SSP: handle(0x001b), sas_addr(0x50019400009e823e), phy(24), device_name(0x0000000000000000)
scsi 4:0:16:0: SSP: enclosure_logical_id(0x50019400009e823f), slot(24)
scsi 4:0:16:0: qdepth(254), tagged(1), simple(1), ordered(0), scsi_level(6), cmd_que(1)
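(Side note: if you want to poke at the enclosure itself from Linux, sg3_utils will usually talk SES to it; the /dev/sg node below is just an example, find the right one with lsscsi:)

Code:
# lsscsi -g                (the enclosure gets its own /dev/sgN node, device type "enclosu")
# sg_ses /dev/sg16         (example node; lists the SES diagnostic pages the SE3016 reports)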



Something must be screwing with OI151 and the LSI 9200-8e HBA.

Today, I installed an LSI 9205-8e into one of the Hadoop blades (CentOS 6.2) and read throughput is much closer to what is expected...and NO, it looks like these enclosures do NOT do SAS multiplexing...perf results in a moment.
 
OK..here we go.

First the setup.

Supermicro X8DTT-HIBQF+
Dual Intel Xeon 3GHz 6-core
48GB ECC RAM
LSI 9205-8e HBA
(LSISAS2308: FWVersion(09.00.02.00), ChipRevision(0x01), BiosVersion(07.17.03.00))

OS: CentOS 6.2

Two Rackable Systems SE3016 SAS2 enclosures w/ built-in expander
Mix of SATA 1.5TB and 2.0TB drives (all 3Gbps)
8 drives connected to each enclosure via SAS wide cable.

DD throughput test, adding one drive at a time:

First I just kept adding one drive at a time to a single enclosure..I stopped at 10 drives for obvious reasons:

Drives  Seq. Read Throughput
[Enclosure #1]
01 - 122MB/s
02 - 247MB/s
03 - 368MB/s
04 - 484MB/s
05 - 613MB/s
06 - 737MB/s
07 - 854MB/s
08 - 932MB/s
09 - 966MB/s
10 - 952MB/s
(flatline)
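(For anyone wanting to reproduce this kind of scaling test, parallel raw-device reads along these lines give the same sort of aggregate number; a rough sketch only, the device names are placeholders:)

Code:
# for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do dd if=$d of=/dev/null bs=1M count=10000 & done; wait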

Next, I reverted to 8 drives in the first enclosure and started adding one drive at a time to the second enclosure:

Drives  Seq. Read Throughput
[Enclosure #1]
01 - 122MB/s
02 - 247MB/s
03 - 368MB/s
04 - 484MB/s
05 - 613MB/s
06 - 737MB/s
07 - 854MB/s
08 - 932MB/s

(add second enclosure)
09 - 1080MB/s
10 - 1174MB/s
11 - 1284MB/s
12 - 1384MB/s
13 - 1486MB/s
14 - 1556MB/s
15 - 1655MB/s
16 - 1741MB/s


Conclusions:

No way does this enclosure support SAS multiplexing...or if it does, it's a moot point, as the max throughput of the expander seems to cap out at around 950MB/s...unless my math is wrong.

SAS 3.0Gbps * 4 (for a wide port) = 12Gbps * 80% (account for 8B/10B SAS encoding overhead) = 9.6Gbps = ~1200MB/s
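(Same arithmetic laid out for both link speeds; the 6G line is only what the wide port would be worth if the expander actually negotiated 6Gbps, which it clearly isn't doing here:)

Code:
3Gbps x 4 lanes = 12Gbps, x 0.8 (8b/10b) =  9.6Gbps = ~1200MB/s
6Gbps x 4 lanes = 24Gbps, x 0.8 (8b/10b) = 19.2Gbps = ~2400MB/s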

The expander is obviously only doing 3Gbps, but even at that we're only getting about 80% of the 1200MB/s I'd expect.

That is disappointing...why build a SAS 6Gbps enclosure that holds 16 6Gbps drives (nearly 2000MB/s of throughput potential), yet use an expander with only half that capability...grrrrr.

Still...for the price and density, it's a pretty decent deal...certainly fine for home storage use, just not ideal for a high-performance application.

I'm thinking about swapping out the SAS expanders in these things for something that would give more throughput...I know there was one expander that advertised support for SAS multiplexing, but I can't seem to find it now...anyone remember which one it was?

Alternately, I'm looking at just ripping the expanders out of these and wiring each 4-drive group to a SAS wide port on the 9205. I see some of the cable makers make cables with an SFF-8087 on one end and an SFF-8088 on the other for about $50 each...I'd need 4 of them per enclosure, but that should enable me to get 2000MB/s out of a single enclosure (presuming I don't hit a cap on the LSI 9205 HBA). Thoughts?
 

That's too bad. Should still suffice for a home file server and ESXi store connected to an all-in-one?
 
OK..here we go.

First the setup.

Supermicro X8DTT-HIBQF+
Dual Intel Xeon 3GHz 6-core
48GB ECC RAM
LSI 9205-8e HBA
(LSISAS2308: FWVersion(09.00.02.00), ChipRevision(0x01), BiosVersion(07.17.03.00))

Thoughts?

There are few people that I know that have more hardware sitting around at home than I do... and you just made me drool.
 
@packetboy

Maybe you are doing the math for SAS multiplexing.... But you also have to account for the SATA encoding inside the SAS protocol.

Controllers doing SAS-to-SATA encoding... might be manageable, and with a controller that has enough ports you might get linear improvements in speed?

But when an expander has to do the encoding workload, it might be a different kettle of fish? They are not really designed for that type of workload.

Only way to tell would be to use SAS drives instead of SATA and see if your score is different?

.
 
@packetboy

Do you have any SATAIII drives to test with? I would like to make sure that they do in fact have 6Gbps expanders before I buy anything.
 
Do you have any SATAIII drives to test with? I would like to make sure that they do in fact have 6Gbps expanders before I buy anything.

Sadly, no...so frustrating...I have bunches of them, but they are in my production ZFS arrays...and there's just no way I'm paying a premium for SATAIII drives right now, especially when I have gobs of SATAII lying around.

If there's someone in Atlanta with a pile of SAS2 or SATA3 drives who is interested in playing in the sandbox, hit me up on IM.

Anyone figure out where there are manuals for these enclosures? Someone said SGI wanted $$$ for a support contract in order to get manuals...I think it's even worse than that...a friend of mine has an SGI support login and we spent an hour going through their system...I see NOTHING on the InfiniteStorage 1116...no manuals, no mention of firmware updates...nothing.

Anyone figure out the pinout for the console port...I presume it's a serial connection?
 
Maybe you are doing the math for SAS multiplexing.... But you also have to account for the SATA encoding inside the SAS protocol.
.

You might be on to something there. It's actually called STP - SATA Tunneling Protocol.

I knew it was at play, but presumed the overhead would be nominal...however, when I look at the Wiki:

http://en.wikipedia.org/wiki/Serial_attached_SCSI

Note this bullet point in the Nearline SAS section:

* Faster interface compared to SATA, up to 30%, no STP (Serial ATA Tunneling Protocol) overhead

So they are talking about special SATA drives that are fitted with SAS interfaces...one of the benefits is that it eliminates STP, which they are implying has "up to 30%" overhead. Yikes.
 
I would have to agree; using real 6G expanders and 3G SATA drives, I have capped out at just around 1000MB/sec +/- 50MB/sec.

I believe my 6G expander (Intel) is doing the SAS padding, because lsiutil says the link from the card to the expander is 6G.

When I built my next system, using another Intel 6G expander but this time with 6G SATA drives, I managed to get 1800MB/sec, but if I remember right, I think I started to max out the CPU at that rate.
 
Is it possible to use both the SAS In and SAS Out ports to go to the SAS card, or are they dedicated input/output ports?
 
Is it possible to use both the SAS In and SAS Out ports to go to the SAS card, or are they dedicated input/output ports?

It's worked with either port, though I haven't verified that perf is the same.

Guess it's also worth a try to connect the expander to the HBA with BOTH ports and see if it actually does multi-link (or whatever it's called).
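(On the Linux side, the SAS transport class in sysfs will show how wide the HBA-to-expander port actually negotiated; a rough check, paths vary by system:)

Code:
# grep . /sys/class/sas_port/port-*/num_phys    (num_phys shows whether each port came up 4-wide or 8-wide)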
 
It's worked with either port, though I haven't verified that perf is the same.

Guess it's also worth a try to connect the expander to the HBA with BOTH ports and see if it actually does multi-link (or whatever it's called).

Hey - did you manage to test this out yet? I'd be interested to see if you get past ~950MB/sec that way...
 
a friend of mine has an SGI support login and we spent an hour going through their system...I see NOTHING on the InfiniteStorage 1116...no manuals, no mention of firmware updates...nothing.

What about 'OmniStor' or 'SE3016'?

You know what... I think the eBay seller has just labelled them incorrectly and they are not 6G expanders.

The output you pasted mentions SE3016 and according to this press release:
http://www.sgi.com/company_info/newsroom/press_releases/rs/2007/08072007.html
...OmniStor SE3016 supports up to 16 SAS or SATA II drives per system with 1.2 Gigabytes per second of bandwidth between the server and the storage device....

No mention of SAS2 or SATAIII, and the quoted bandwidth is only 3G-class. Also, that's from 2007...before 6G expanders were out, I think?
 
Officially all gone, I think; can't find any more. That's too bad, I finally decided to get one.
 
@doofoo, I have tested one of these expanders to work with an SAS 6i, so it should work with the 6e. At least one drive was recognized in OI through this expander and an SAS 6i; waiting on screws to mount the rest of the drives before I can test fully.

EDIT: My apologies, I did not read the question properly (no coffee), I realize my answer was pointless.
 
With 8 Hitachi drives in RAID 6 config with arc1880x:

arc-1880x_with_se3016-sas_raid6_8drives.png
 
I just changed the RAID set to RAID0 to see how much throughput I can get out of it, but surprisingly, it's also around 500MB/sec. Anyone know what's limiting it to 500MB/sec? I am using Windows 2008 R2. Thanks.

arc-1880x_with_se3016-sas_raid0_8drives.png
 
The output you pasted mentions SE3016 and according to this press release:
http://www.sgi.com/company_info/newsroom/press_releases/rs/2007/08072007.html


No mention of SAS2 or SATAIII, and the quoted bandwidth is only 3G-class. Also, that's from 2007...before 6G expanders were out, I think?

My conclusion:
The InfiniteStorage 1116 and the SE3016 are the same beast, just rebranded, and support only 3G.
It could be that the current InfiniteStorage 1116 (if SGI still sells it today) uses a different SAS expander chipset that supports 6Gb.
 
I just changed the RAID set to RAID0 to see how much throughput I can get out of it, but surprisingly, it's also around 500MB/sec. Anyone know what's limiting it to 500MB/sec? I am using Windows 2008 R2. Thanks.

To my understanding, that's reasonable for 3Gb.

I have an Adaptec 4805SAS with write cache and read cache (don't remember the exact wording) enabled.
I can only max 400MB to 480M raw data rate on 1Gb Ethernet (Broadcom NIC).
 
I just changed the RAID set to RAID0 to see how much throughput I can get out of it, but surprisingly, it's also around 500MB/sec. Anyone know what's limiting it to 500MB/sec? I am using Windows 2008 R2. Thanks.

Well, that 500 MB/sec is about what you can get out of 2 PCIe 1.0 lanes or 1 PCIe 2.0 lane.
But that seems like a really low number of lanes to be run to an 8x slot (I assume that's what the card is?). Might be worth verifying, though.
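(If there's a Linux environment handy, lspci will show both what the slot can do and what the card actually negotiated; the bus address below is just a placeholder:)

Code:
# lspci | grep -i areca                              (find the card's bus address)
# lspci -vv -s 03:00.0 | grep -iE 'LnkCap|LnkSta'    (compare capable vs negotiated link width/speed)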
 
I just changed the RAID set to RAID0 to see how much throughput I can get out of it, but surprisingly, it's also around 500MB/sec. Anyone know what's limiting it to 500MB/sec? I am using Windows 2008 R2. Thanks.

Can you paste an image of your RAID controller's web interface on Information -> SAS Chip Information?

Also did you try using two SFF-8088 cables to the chassis to see if you can get more throughput?
 
Can you paste an image of your RAID controller's web interface on Information -> SAS Chip Information?

Also did you try using two SFF-8088 cables to the chassis to see if you can get more throughput?
arc-1880x_with_se3016-sas.png

arc-1880x_with_se3016_sas-info.png


No, I didn't use 2 cables, as the labels on the enclosure say one is "In" and the other is "Out". I had assumed that's for daisy-chaining. Let me know if this assumption is wrong.
 
No, I didn't use 2 cables, as the labels on the enclosure say one is "In" and the other is "Out". I had assumed that's for daisy-chaining. Let me know if this assumption is wrong.
Yes, that is most likely incorrect. AFAIK ALL expanders and cards have all ports act the same - just like an Ethernet switch. Most of these things have their ports labelled, but the labels don't actually matter.
 
Attached to the 2nd port, no difference. The card doesn't even recognize the extra port.
 
Someone may be able to help with checking on the Areca - but I believe on my LSIs I don't see a second link as such; the connection just became 8-wide instead of 4-wide.

So you're still getting the same max speed?
 
I just changed the RAID set to RAID0 to see how much throughput I can get out of it, but surprisingly, it's also around 500MB/sec. Anyone know what's limiting it to 500MB/sec? I am using Windows 2008 R2. Thanks.

Depends on the XOR chip on the card...
i.e. PERC5 cards top out at about 500-550MB/sec.

So, in summary, the little chip that does the RAID calculations on the card can only process so much... and you have found the limit.

.
 
I just changed the RAID set to RAID0 to see how much throughput I can get out of it, but surprisingly, it's also around 500MB/sec. Anyone know what's limiting it to 500MB/sec? I am using Windows 2008 R2. Thanks.

If you want to rule out your card's performance, you could just set up all the drives as single-disk JBODs, run a benchmark on each at the same time, and measure your overall throughput.
 