Interesting cheap JBODs available

I'll let everyone know how they work with my setup when it arrives (Adaptec 5805 and Chenbro CK13601 expanders)
 
Turned down my offer of $150 with free shipping. He wants $160 to ship to Canada, or $35 to ship it an hour away from me at the border. Still deciding.

Hey, you mean he wanted $160 total, including shipping, to ship to Canada? Or $160 on top of the $150?
 
Mine was delivered today from digitalmind2000 via FedEx Ground (only took 1 day!) and they left it at my front door. I can't test it yet, but I'm impressed with the build quality. There is zero dust in mine or any sign of wear besides a couple of scratches on the top from when it was slid out of the rack.

FedEx attempted redelivery around 5PM, so I got lucky today, plus mine arrived next day too! Mine is also in very good condition; it's definitely not new, but the scratches and such are quite minimal. I plugged it into the 2nd port of my 5805 (SFF-8087 to SFF-8088 external PCI bracket + SFF-8088 to SFF-8088 cable from server to enclosure). ASM screenshot of enclosure
 
Here is a picture of the enclosure plugged into the external port of my Areca ARC-1680IX. All the sensors appear to be working just fine. I'm planning on testing some drives here in a bit.

rackable-expander.jpg
 
Wish I had some cash right now; I'd order a couple of them. Waiting for packetboy to get his and see what compatibility is like with LSI cards.

They shipped yesterday...should arrive on Feb 10th. I've got enough spindles to populate 3 of these enclosures, so I should be able to put this through its paces.

What I am really curious about is whether the expander included with these supports SAS Multiplexing....that's what enables you to put 3Gbps drives in the enclosure but still make full use of the 6Gbps uplink from the enclosure to the HBA.

In other words, 16 3Gbps SATA drives would be about 1600MB/s. If the enclosure uplink is limited to 3Gbps * 4 (SAS wide port), the uplink would be limited to about 1200MB/s...thus you wouldn't be able to achieve the full potential of the enclosure.

OTOH, if the enclosure will function at 6Gbps, then you've got 2400MB/s to play with and thus no bottleneck.

We'll see.
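The back-of-envelope math above can be sketched out like this (assumptions mine: 8b/10b encoding, so each Gb/s of line rate yields roughly 100MB/s usable, and ~100MB/s sequential per 3Gbps-era SATA disk):

```python
# Rough SAS wide-port bandwidth math (assumption: 8b/10b encoding,
# so usable throughput per lane is roughly line rate in Gb/s * 100 MB/s).

def wide_port_mb_s(line_rate_gbps, lanes=4):
    """Usable MB/s for a SAS wide port after 8b/10b overhead."""
    return line_rate_gbps * 1000 / 10 * lanes

drives = 16
per_drive_mb_s = 100  # assumed sequential rate of a 3Gbps-era SATA disk

print(drives * per_drive_mb_s)  # 1600 MB/s aggregate from the disks
print(wide_port_mb_s(3))        # 1200.0 MB/s uplink at 3Gb/s -> bottleneck
print(wide_port_mb_s(6))        # 2400.0 MB/s uplink at 6Gb/s -> no bottleneck
```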
 

Anyone who tests these on an ARC-1882/1880 can check this.

Go to Information -> SAS Chip Information in the web interface and it will list, under attached expander, the number of lanes and their speed. I had two SFF-8087 cables hooked up to this expander with 6 gig SAS drives, so it lists 8x6 G for 8 lanes (4 per cable) of 6 gigabit:

http://box.houkouonchi.jp/archttp1882/arc1882_2.png

Here is my home machine with an 1880X hooked up to two HP SAS expanders with 6Gb/s SATA 3TB disks:

http://box.houkouonchi.jp/sas_chip_information.png

Only one cable each so 4x6 G.

Interestingly enough, the disk info itself says that the current SATA mode is only SATA300 for the disks, so I'm not sure if it's doing multiplexing or not. I am bottlenecked by the speed of the card though, so it's hard for me to tell. I get around 2 gigabytes/sec read, which is about 1000 megabytes/sec per SAS expander (each hooked up to 15 disks):

http://box.houkouonchi.jp/disk_info.png
 

Thanks, I was curious what expander chip was in this thing, and it looks like an LSI SAS2X38 :)
 

Ah, sorry for the confusion; this is not the cheap JBOD thing. I was just showing examples of Areca controllers showing the SAS chip info =P

That screenshot is from an ARC-1882 hooked up to a Supermicro chassis with a built-in 6Gb/s SAS expander (24-slot 4U chassis), not the chassis in this thread.
 

Ah, gotcha.
 

I can only speak for the SC847 JBOD chassis...while the LSI expander chipset *does* support SAS Multiplexing, Supermicro chose NOT to implement it. I have two SC847 deployments, both hooked to LSI 9200-8e HBAs with a mix of 3Gbps and 6Gbps Hitachi drives. The SAS uplinks all negotiate to 6Gbps...however, the effective throughput across the uplinks is only 3Gbps, as empty "ALIGN primitives" are inserted every other "frame".

Ridiculous.

Good explanation of how SAS Multiplexing works when actually implemented by both the HBA and expander:

http://www.serialstoragewire.net/Articles/2008_10/pmcsierra.html

See pages 34-35 here for how it works with the SC847:

http://www.scsita.org/sas_library/tutorials/SAS_Link_layer_2_public.pdf

I just love how they make "link rate matching" sound like a fantastic feature.

Grrrr....
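To make the difference concrete, here's a toy model (mine, not from either linked doc) of one 6Gb/s uplink phy feeding 3Gb/s drives under the two schemes:

```python
# Toy model: one 6Gb/s phy talking to 3Gb/s devices under
# link rate matching vs. SAS multiplexing.

def rate_matched_gbps(link_gbps, device_gbps):
    """Link rate matching: ALIGN primitives pad the fast link down to
    the slow device's rate, so the connection moves data at the
    device rate no matter how fast the link negotiated."""
    return min(link_gbps, device_gbps)

def multiplexed_gbps(link_gbps, device_gbps, connections=2):
    """SAS multiplexing: the phy time-slices between two slow
    connections, so the link itself stays fully utilized."""
    per_slot = link_gbps / connections
    return connections * min(device_gbps, per_slot)

print(rate_matched_gbps(6, 3))  # 3 - half the 6Gb/s link is ALIGN padding
print(multiplexed_gbps(6, 3))   # 6.0 - full link rate across 2 connections
```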
 
Generally it seems people are happy with theirs and the speed is adequate. I know one poster from Canada was not.
 
Check out post #40
I'm still waiting on some more hardware so I can test it in another system but I doubt the enclosure is causing the low speeds, I'm thinking the motherboard is being wonky with the PCI-e speeds or there's something funky going on with my Adaptec. If anyone else could post some HD Tune results that would be awesome though. Since I'm limited by gigabit ethernet I'm not overly concerned about the speeds but I'll definitely make sure to post my results on my other motherboard.

Other than the weird speed bottleneck and the odd enclosure status, I'm very happy with it though. Tested hot swapping, expanding the array, multiple arrays; everything seems to be working smoothly. I really dig the all-metal trays and whatnot; a lot of the expensive arrays at my work, like the NetApp and Sun arrays, feel cheap in comparison with their flimsy plastic trays and cases.
 
Does anyone have a good source for the drive mounting screws needed for these drive caddies? Mine came with no mounting screws, and some of the caddies were a bit out of whack, but nothing too serious.

Also, I just want to make sure, if I try to source them locally (hardware store or such), that these are 6-32 x 1/4" countersunk head machine screws... I think.
 

Countersunk 6-32x1/4" is perfect (that's what I use on all my hot swap carriers).
 
Got my stuff a few days ago and installed them last night with 2 Seagate drives. As soon as I get the screws to install more drives, I will put them through some testing. Here is what it shows for my ARC-1880X for those interested (so it doesn't have SAS mux):

 

So the enclosure uses a PM8399, which is 3Gb/s. Oh well, there goes the notion of having a 6Gb/s SAS enclosure. Thanks for posting up the screenshot!
 
Is it possible that they are shipping with different expanders, so some people are getting ones with 6Gb controllers and others with 3Gb?
 
Hrm... that's a bit lame if that is the case. The first eBay listing says:
Host Interfaces: Two external 6Gb/s SAS 2x wide ports via two SFF-8088 mini-SAS interfaces (front cabled)
 
I would tend to think it is possible that the expander boards are different between units, though most of the ones purchased from the same supplier (likely the same batch, installed then decommissioned at the same time) should have the same board installed. I will collect what information I can from the one I received when I get everything connected and have time to poke around.
 

I think this may be true. It may very well be that the OMNISTOR SE3016 is different from the SGI InfiniteStorage 1116, because the ones we got from digitalmind2000 have firmware version 0102 while the OP has a different firmware version (0227).
 

Could be yes or no.

The "green" boards on both look identical, with the same revision number on the board.

This is my understanding (could be wrong, since there are no public datasheets for the PM8399 and PM8004):

The PM8399 is 3G, and the PM8004 is 6G with ethernet support. My assumption is that the PM8399 and PM8004 are not pin-compatible.

On the digitalmind2000 units, the firmware (112) is older than 227; the question is how we can get the 227 firmware and try to flash to it.

I can plug it into an HP P410, but it complains about high temperature; the boot stops at the high temperature warning.

When I unplug it, power on my N40L with the P410, and replug it after the OS finishes loading, I can see the SE3016 expander:

es3106onp410.png

Screenshot from HP array configuration utility
 
So all I would need to drive this is, say, an IBM M1015 with an 8087-to-8088 external adapter? Hell, even an old AOC-SASLP-MV8 would work, wouldn't it?
 
$100 is expensive? Unless you already have them, you're going to be paying $50 for a bracket and $30 for cables, which take up an extra slot and room inside your case. IMO the extra $20 is worth not having to deal with that, let alone getting an LSI chip instead of the Marvell one on the AOC.
 
Sorry, I didn't mean new - I've seen them for $100 on eBay multiple times. I figured you weren't worrying about new if you were already looking at the refurb JBOD. Here are two available for $100..
 
Given that these JBOD arrays (some of them, it seems, anyway) only support 4x 3Gb links, a frugal (read: cheap) builder such as myself could go with a used Dell SAS6. It uses the LSI 1068e chipset and can be fairly easily flashed to IT firmware for software RAID. I managed to pick one off eBay for $20, albeit without a PCI bracket, but you can salvage a bracket or cobble one together fairly easily. If anyone has a reason not to use a 1068e, or specifically the Dell controller, please weigh in and correct me; I could use the guidance as well.
 
Can anyone with one of these already confirm that spindown works ok? Does spindown work with all expanders as long as the raid card supports it?
 
Enclosure attached to LSI-9200-8e using LSIutil 1.63


Code:
Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 8

SAS2008's links are 3.0 G, 3.0 G, 3.0 G, 3.0 G, down, down, down, down

 B___T___L  Type       Vendor   Product          Rev      SASAddress     PhyNum
 0  17   0  Disk       ATA      Hitachi HDS72202 A28A  50019400009e8200     0
 0  18   0  Disk       ATA      Hitachi HDS72202 A3EA  50019400009e8201     1
 0  19   0  Disk       ATA      Hitachi HDS72202 A28A  50019400009e8204     4
 0  20   0  Disk       ATA      Hitachi HDS72202 A3EA  50019400009e8205     5
 0  21   0  Disk       ATA      Hitachi HDS72202 A28A  50019400009e8208     8
 0  22   0  EnclServ   RACKABLE SE3016-SAS       0227  50019400009e823e    24
 0  23   0  Disk       ATA      Hitachi HDS72202 A3EA  50019400009e820c    12

Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 16

SAS2008's links are 3.0 G, 3.0 G, 3.0 G, 3.0 G, down, down, down, down

 B___T     SASAddress     PhyNum  Handle  Parent  Type
        500605b0021a1c40           0001           SAS Initiator
        500605b0021a1c40           0002           SAS Initiator
        500605b0021a1c40           0003           SAS Initiator
        500605b0021a1c40           0004           SAS Initiator
        500605b0021a1c40           0005           SAS Initiator
        500605b0021a1c40           0006           SAS Initiator
        500605b0021a1c40           0007           SAS Initiator
        500605b0021a1c40           0008           SAS Initiator
 0  16  50019400009e823f     0     0010    0001   Edge Expander
 0  17  50019400009e8200     0     0011    0010   SATA Target
 0  18  50019400009e8201     1     0012    0010   SATA Target
 0  19  50019400009e8204     4     0013    0010   SATA Target
 0  20  50019400009e8205     5     0014    0010   SATA Target
 0  21  50019400009e8208     8     0015    0010   SATA Target
 0  22  50019400009e823e    24     0016    0010   SAS Initiator and Target
 0  23  50019400009e820c    12     0017    0010   SATA Target

Type      NumPhys    PhyNum  Handle     PhyNum  Handle  Port  Speed
Adapter      8          0     0001  -->   19     0010     0    3.0
                        1     0001  -->   18     0010     0    3.0
                        2     0001  -->   17     0010     0    3.0
                        3     0001  -->   16     0010     0    3.0

Expander    25          0     0010  -->    0     0011     0    3.0
                        1     0010  -->    0     0012     0    3.0
                        4     0010  -->    0     0013     0    3.0
                        5     0010  -->    0     0014     0    3.0
                        8     0010  -->    0     0015     0    3.0
                       12     0010  -->    0     0017     0    3.0
                       16     0010  -->    3     0001     0    3.0
                       17     0010  -->    2     0001     0    3.0
                       18     0010  -->    1     0001     0    3.0
                       19     0010  -->    0     0001     0    3.0
                       24     0010  -->   24     0016     0    3.0

Enclosure Handle   Slots       SASAddress       B___T (SEP)
           0001      8      500605b0021a1c40
           0002     25      50019400009e823f    0  22

Note:
SAS2008's links are 3.0 G, 3.0 G, 3.0 G, 3.0 G,

This shows that the expander uplink connection to the LSI HBA is negotiating at 3Gbps...NOT 6Gbps. So it looks like dreams of 6Gbps SAS multiplexing are out the window....THAT really sucks. I'm assuming as soon as you put a 3Gbps drive in the enclosure, it slaps down the uplink. I need to see if I have a 6Gbps drive just to verify that this enclosure will do 6Gbps.
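For anyone scripting this, the negotiated rate can be pulled straight out of that LSIutil summary line; here's a quick (hypothetical) helper to turn it into aggregate uplink bandwidth:

```python
# Parse an LSIutil link summary line into aggregate usable MB/s
# (assumption: 8b/10b encoding, so Gb/s * 100 MB/s per active lane).

def uplink_mb_s(lsiutil_line):
    """Sum usable MB/s across active phys in an LSIutil summary line."""
    links = lsiutil_line.split("links are ")[1].split(", ")
    total = 0.0
    for link in links:
        if link.strip() != "down":
            total += float(link.split()[0]) * 100  # "3.0 G" -> 300 MB/s
    return total

line = "SAS2008's links are 3.0 G, 3.0 G, 3.0 G, 3.0 G, down, down, down, down"
print(uplink_mb_s(line))  # 1200.0 - the 3Gbps wide-port ceiling seen above
```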
 
WTH!

Please tell me I'm doing something wrong.

Make a raid0 pool with 6 Hitachi 2TB 3Gbps drives (that's `zpool create`, not `zfs create`):

Code:
# zpool create rack1 c6t5000CCA221D42102d0 c6t5000CCA221D2E20Fd0 c6t5000CCA221DA8BD0d0 c6t5000CCA221DC3FA6d0 c6t5000CCA222C8C6BDd0 c6t5000CCA221DFD972d0

# zfs set primarycache=none rack1

Note: I purposely disabled the ZFS read cache.

Code:
# time dd if=/rack1/1gb.img of=/dev/null bs=1048592000 count=1
0+1 records in
0+1 records out

real    0m6.120s
user    0m0.001s
sys     0m2.027s
# time dd if=/rack1/1gb.img of=/dev/null bs=1048592000 count=1
0+1 records in
0+1 records out

real    0m6.012s
user    0m0.001s
sys     0m2.008s
# time dd if=/rack1/1gb.img of=/dev/null bs=1048592000 count=1
0+1 records in
0+1 records out

real    0m5.904s
user    0m0.001s
sys     0m1.994s

1GB/6s = ~175MB/s

You're kidding...right?

It should be 500-600MB/s with that many drives.

Please tell me I'm doing something stupid.

Has anyone else done perf testing with these Rackable Systems enclosures yet?
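Double-checking the arithmetic on those dd runs (bytes per run divided by wall time; the ~100MB/s-per-disk figure is my assumption for these Hitachi drives):

```python
# Sanity-check the dd numbers above: bytes read per run / elapsed time.

bs = 1048592000                  # the bs= value used in the dd runs above
times_s = [6.120, 6.012, 5.904]  # the three "real" times reported

rates = [bs / t / 1e6 for t in times_s]
print([round(r) for r in rates])  # roughly 171-178 MB/s per run

# What six striped disks at an assumed ~100MB/s each should manage:
print(6 * 100)                    # 600 MB/s - hence the suspicion
```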
 