HP SAS Expander Owner's Thread

Well, I just got mine and have started testing it with the LSI 9280-8e. So far it isn't working too well. It will recognize all the drives during boot (and takes forever to do so, mind you), but Windows cannot start the LSI card after booting if the SAS expander is attached. If I attach the SAS expander after Windows has booted and the LSI card has started, the LSI storage manager freezes (as will scanning for PnP devices in Device Manager to see if the card died)...actually, Windows seems to have died altogether now.
 
I can't seem to get my Areca 1680ix-24 to recognize it either (I did change the SES2 settings). It sits on the initialization page during boot and gets nowhere (and instantly finishes initialization if I unplug the expander). No luck with plugging it in after the card initializes either.
 
I assume you're running the latest LSI firmware. Also, is there an option in the LSI BIOS to disable SES2 (enclosure management) support? I had to disable that on my Areca 1680 cards before they would see all the drives on the expander.

Also, are you attaching to the 1680ix-24 via the external SFF-8088 connector? That's what I'd recommend. I attached my 1680ix-24 to the HP expander last night via SFF-8088, and once SES2 was disabled it saw all 32 drives on the expander.
 
I'll see if I can do that on the LSI card. As for the Areca, I have it attached via the external port.
 
I also assume you're using the latest 1.47 Areca firmware. Here is a screenshot of my settings page. Again, my 1680ix-24 is working fine with HP expander firmware 2.02 when attached to the external port. Also, what is the length of the SFF-8088 cable? I assume it's not something crazy like 3 meters+. A longshot, but figured I'd ask.

http://f.imagehost.org/0013/1680x_Config.jpg
 
Either 1.46 or 1.47 on the Areca card. I'll check it/update it when I put the card back in (board only has 2 x PCIe x8/16 slots). The LSI card doesn't appear to have any settings like the Areca when it comes to expanders. The expander isn't always recognized (but when it is, all the drives are recognized), and occasionally it will hang on the boot screen saying that it can't even find the controller itself (if it even gets that far). If I try to scan for drives under the LSI BIOS config utility, it freezes. When the expander and drives are recognized, they seem to disappear shortly after entering the LSI BIOS. As for the cables, they are only 1 meter long.
 
Odditory, if you disabled the SES2 support, what do you use to control the backplane LEDs?
I assume you use an enclosure with SAS backplane(s) that have drive activity and status LEDs, and you actually use those to indicate the status of each drive?
 
Odditory, if you disabled the SES2 support, what do you use to control the backplane LEDs?
I assume you use an enclosure with SAS backplane(s) that have drive activity and status LEDs, and you actually use those to indicate the status of each drive?
You'll only really find that in backplanes that have some kind of logic in them (think Supermicro for example). The Norcos that are popular don't have that anyway.
 
You'll only really find that in backplanes that have some kind of logic in them (think Supermicro for example). The Norcos that are popular don't have that anyway.

I figured as much. I have a Chenbro chassis and it has logic on every single one of its backplanes. So I am trying to find a solution that will work for my setup, including SES2 enclosure management!
 
Odditory, if you disabled the SES2 support, what do you use to control the backplane LEDs?
I assume you use an enclosure with SAS backplane(s) that have drive activity and status LEDs, and you actually use those to indicate the status of each drive?

You don't need SES2 for drive LEDs. I'm just using it in a Norco 4220, but that doesn't mean the expander wouldn't be compatible with an SES2-capable backplane. I would think the expander would pass the SES2 data between the RAID card and your backplane, but the only way to be sure is to test...
 
Well, I updated the firmware on the 1680ix-24 from 1.46 to 1.47. With the expander plugged in (either with or without drives), it still freezes on the initialization screen and eventually reboots the system after saying the firmware has timed out. I give up. :(
 
Blue, if you want to bring your Areca over to my place, we can test it out to see if there's some other issue?
I dunno man, that's weird.

Did you update all the BIOS files? I think there were four of 'em when I did it the other day. I did all of them except the MBR0 file and then it worked.

Just curious, what mobo are you running this on?
 
BlueFox? Over? Did you say over? Nothing is over until we decide it is! Was it over when the Germans bombed Pearl Harbor? Hell no! And it ain't over now! Check your PM.
 
Blue, if you want to bring your Areca over to my place, we can test it out to see if there's some other issue?
I dunno man, that's weird.

Did you update all the BIOS files? I think there were four of 'em when I did it the other day. I did all of them except the MBR0 file and then it worked.

Just curious, what mobo are you running this on?
Thanks for the offer. If you aren't busy tonight, I could just bring over my entire server. I updated everything on the RAID card (all 4 files). Motherboard is an Asus P5BV-C/4L.
BlueFox? Over? Did you say over? Nothing is over until we decide it is! Was it over when the Germans bombed Pearl Harbor? Hell no! And it ain't over now! Check your PM.
I guess it's not over until we say it's over. :D

Let's see if I can get this all working if nitrobass has time; that way you don't have to mail me more stuff quite yet. I can even bring over my Chenbro cards to see if it's just me having problems.
 
Hey BlueFox -- I found out who the bum was that bought that Chenbro 50-bay chassis -- my man treadstone here.

@treadstone: we were waiting for that case to come down to $1500, but noooo.. you had to go and buy it out from under us. :) you *BETTER* post a build log. That is one BAD ASS case.

[Image: Chenbro_RM-91250.jpg]
 
DARN, I've been found out!!!!

I better go hide under a rock :)

And yes I will post a build log...

Last week I bought 52 WD20EADS drives for my little server...
I still need a motherboard along with a controller and two SAS expanders to start the actual assembly of the server.
 
@odditory: PM replied. I think the expander could still be worthwhile for daisy-chaining out of my first Norco 4220 and into a second. I think I even have the equipment to test that scenario...

@treadstone: Grab a brew. Don't cost nothin'.


My three PCIe slots will be taken: video, quad-port Intel NIC, Areca 1300ix.

The only use for the expander is to attach another Norco down the road.
 
If the SAS expander only requires power, I wonder how difficult and expensive it would be to have a PCB made with a PCIe slot and a Molex power connector? Both are through-hole, so no SMD soldering would be required; I may have to do some research.
 
I could whip one up in no time, I design PCBs for a living :)

There are a few things to be considered, though. Depending on the power supply you want to use, if there isn't enough load on it, it will not stay in regulation, and that can be very bad for the rest of the components in your system. A couple of things I would need to know first are what voltages the board requires and on what pins. Once I have one of those HP SAS Expanders, I can easily find that out myself. But first I have to get one :)
Another thing to consider is whether the board should also contain a bit of logic to control power on/off from a case button.
 
Alright, so I went over to nitrobass' place and we figured a few things out. First off, the LSI card just sucks. We finally got it to recognize the expander, but it could not create an array no matter what. Even if the card worked properly, I still wouldn't use it due to the god-awful management program. It would freeze for 10 minutes if you unplugged the expander and plugged it back in while it recognized all the new drives (and no, the expander wasn't to blame). The new SAS expander that I just received does work...with nitrobass' 1680x. My 1680ix-24 doesn't like it at all (so that kinda narrows it down). The Chenbro cards were still having drive timeouts on nitrobass' card (though I'm not sure if the expander or the drives are to blame), so who knows what is up with them. He certainly saved me a lot of trouble. Thanks again. :)

So, the LSI card is getting returned tomorrow morning and the Chenbro cards are probably going to be sold. As for the 1680ix-24, I would like to know why it is having issues. Fortunately I have more time for testing, and maybe the built-in expander is to blame. If I wind up selling the 1680ix-24 and 1280ML, I'm wondering if I should just get a 1680x like nitrobass or wait for the new 1880 series...maybe both? Too bad those don't come out until next month (and who knows if they work). I guess I'll probably be the first person around here to test one and give some feedback on it. Now to think everything over...
 
Well, I think the drives are an issue no matter what RAID card you end up with.

So I would get new drives, and possibly sell the Chenbros.
Even with my card we had better results with the HP.

At least with the HP and some Hitachis we know we have two parts working.

Then your 1680 should work; unless you and odditory have different revisions, I'm not sure why yours is not working.

One thing we didn't try was using the internal ports, so you could try that to eliminate a bad external connection.
 
BlueFox, return the LSI, and sell the 1680ix-24 -- even if it did work, who cares? Even if you get back 75 cents on the dollar you could buy a 1680X and have money left over for an 1880X in April, which I happen to think *will* play well with the HP expander since they're both SAS-2 and the ROC is by Marvell.
 
Yeah, I do plan on selling the drives as well. As for the external port being bad, it did work (to a point) with the Chenbro cards, so I'm not sure that's the issue. I will be trying the internal ports soon though (just to see if that was the issue). Even if they do work, it's still more cost-effective to sell the 1680ix-24 and buy a 1680x like you have.
 
BlueFox, return the LSI, and sell the 1680ix-24 -- even if it did work, who cares? Even if you get back 75 cents on the dollar you could buy a 1680X and have money left over for an 1880X.
I can probably sell both my 1280ML and 1680ix-24 for a profit since I got such a good deal on them. :p
 
That may be true, unless you keep an eye out on eBay and price search engines for the non-RAID HBAs that have been tested compatible; that's why I'm trying to build a list. For example, the Adaptec 1045 sitting on eBay right now for $50. If that card ends up compatible, then for $225 total you have 32 ports of connectivity together with the expander.

I MAY be bidding on that one.

@PigLover: Asus P6T7 WS Supercomputer = lots of PCIe slots :)

And treadstone, I'm fairly certain that you could use the power supply's switch to turn things on/off. Just need to short the PS_ON (green) pin to ground on the ATX connector of a 400W power supply.
 
Here's a pinout of an x4 connector.
http://www.interfacebus.com/Design_PCI_Express_4x_PinOut.html
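For anyone laying out the power-only adapter being discussed, here's a quick reference sketch of the power rails on the edge connector, pulled from that pinout (double-check the pin numbers against the PCIe CEM spec before fabbing anything):

# Power pins on a PCIe edge connector, per the linked x4 pinout.
# Verify against the PCIe CEM spec before laying out a board.
pcie_power_pins = {
    "+12V":     ["A2", "A3", "B1", "B2", "B3"],
    "+3.3V":    ["A9", "A10", "B8"],
    "+3.3Vaux": ["B10"],   # standby rail; probably unused by the expander
}

for rail, pins in pcie_power_pins.items():
    print(f"{rail}: {', '.join(pins)}")

If I remember the CEM spec right, an x4/x8 slot only has to supply 25W total, so an ~11W expander is comfortably within what the connector is rated for.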

Given that one would be using the PCB in a second file server, it should be drawing ~11 watts for the SAS expander, plus x watts for the chassis fans and y watts per hard drive connected.

I don't know how true the minimum load requirement is nowadays, given that computers can go to sleep and draw single-digit wattage.

On a tangent, am I correct in assuming that not all SAS cabling is directional? I was going to try and test a PERC 5/i, but I'd need to acquire an SFF-8484 to SFF-8087 cable, and the cheap ones I can find are labeled 8087 host to 8484 backplane. I really don't feel like dropping $50 on a cable marked as 8484 host to 8087 backplane for an experiment :x
 
I could whip one up in no time, I design PCBs for a living :)

There are a few things to be considered, though. Depending on the power supply you want to use, if there isn't enough load on it, it will not stay in regulation, and that can be very bad for the rest of the components in your system. A couple of things I would need to know first are what voltages the board requires and on what pins. Once I have one of those HP SAS Expanders, I can easily find that out myself. But first I have to get one :)
Another thing to consider is whether the board should also contain a bit of logic to control power on/off from a case button.

That's great! Saves me the trouble of learning how to do PCB work :) I can solder just fine but have no experience with the design aspect of it.

I would think that having the drives on would provide enough of a draw. The Seagate LP drives are spec'd to draw 4.3 watts at idle; assuming a full complement in a Norco, that'd be about 86 watts, which along with xnoodle's estimate of 11 watts for the SAS expander leaves us at about 97 watts at idle. I wonder if that's enough to keep a power supply in the safe zone? I also don't have an HP SAS expander, so I can't tell you either :p

As for the power button, I was thinking of just jumpering the ATX pins and using the power supply switch. No need to make the SAS expander power adapter board needlessly complex when a simple jumper will do :)
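For what it's worth, here's that budget as a quick sketch; the 4.3W and 11W figures are the estimates quoted above, and the fan draw is a placeholder you'd fill in for your chassis:

# Rough idle power budget for an expander-only Norco build.
# Per-drive and expander figures are the estimates quoted above;
# the fan draw is a placeholder assumption.
drive_idle_w = 4.3    # Seagate LP idle, per the spec sheet
num_drives = 20       # full complement in a Norco 4220
expander_w = 11.0     # xnoodle's estimate
fans_w = 0.0          # fill in your chassis fans' draw

total_idle = num_drives * drive_idle_w + expander_w + fans_w
print(f"Estimated idle draw: {total_idle:.0f} W")   # 97 W with these numbers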
 
You know an ITX mobo with a Celeron would pull less than 50W, right?
Probably easier to do than a custom PCB.
 
You know an ITX mobo with a Celeron would pull less than 50W, right?
Probably easier to do than a custom PCB.

I'm assuming that since treadstone does PCB design work, designing a custom PCB with only power traces _shouldn't_ be difficult. It's not a crazy multi-layer PCB. I'm mostly just feeling out the idea, since it seems building a Norco with just an HP SAS expander would be quite a nice minimal DAS enclosure. Also, with a mobo + Celeron, that's 50W that you can save with a simple PCB! :D

I'm also a bit of an electronics geek, so I can't help but think up stuff like this.


FWIW, I've been browsing Mouser/Digi-Key for a PCIe x8 socket that's stocked; I never realized just how many varieties there are!
 
A right-angle riser with a few bits and pieces attached to it should do the trick. It probably only uses the 12V line and not the 3.3V anyway. I'll see what I can whip up when I have some free time.
 
Wouldn't it be easier to just use the x16 PCIe power board linked on the 2nd page of this thread? Seems like that would work perfectly.
 
Wouldn't it be easier to just use the x16 PCIe power board linked on the 2nd page of this thread? Seems like that would work perfectly.

Totally missed that link; it looks like it's worth a try at the very least.
 
^ I've got one of those on the way. I fully expect that mounting it in the case will be a huge pain. We'll see.
 
...and the LSI card is history. Means I get $800 back to spend on other shiny things.

^ I've got one of those on the way. I fully expect that mounting it in the case will be a huge pain. We'll see.
I really am going to see about creating a custom backplane for just PCIe slots that is mATX sized. Shouldn't be too hard (I'm supposed to be finishing my EE degree after all...gotta put it to use somehow :p ).
 
Just a suggestion: mATX is too big. Ideally you could put the PCIe power board in the last expansion slot, and then use the rest of the case for more drives. Use the SFF-8088 external port for connectivity and cram an additional 10 drives in a Norco RPC-4020. 32 internal ports on the SAS expander, 32 internal drives in a Norco case = low port costs (you could probably do ~$20/port to house the drives). Then all we need is a PSU that will power 32 drives + an expander. Probably a 400-450W unit if you had staggered spin-up.
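Rough math behind that PSU estimate, assuming typical 3.5" drive figures (~2A at 12V during spin-up, ~5W at idle; both are assumptions, so check your drives' datasheets):

# Worst-case draw mid-stagger: the last group spinning up while
# the rest already idle. All per-drive numbers are assumptions.
spinup_w = 24.0    # ~2 A @ 12 V while spinning up
idle_w = 5.0       # typical 3.5" idle draw
expander_w = 11.0
drives = 32
group = 4          # drives spun up at a time with staggered spin-up

peak = group * spinup_w + (drives - group) * idle_w + expander_w
print(f"Peak draw: {peak:.0f} W")   # ~247 W, so a 400-450W unit has headroom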
 
Well, what I meant by mATX-sized is 4 expansion slots. Since only the expansion slots are useful, it wouldn't occupy the entire space that a normal board would. It also means it could fit in smaller cases, not to mention bigger PCBs just wind up costing more. Might be a week or two before I can do this, however (not to mention a bit more time for prototyping).
 
...and the LSI card is history. Means I get $800 back to spend on other shiny things.

I really am going to see about creating a custom backplane for just PCIe slots that is mATX sized. Shouldn't be too hard (I'm supposed to be finishing my EE degree after all...gotta put it to use somehow :p ).

Assuming we can all agree on a form factor, I think a group PCB fab buy wouldn't cost too much per person. I personally was thinking in terms of single adapters; if you keep the adapter PCB small and light, the locking mechanism on the PCIe slot should be enough to keep it stable. Then, as long as the card is properly mounted to the case, there won't be any issue.


^ I've got one of those on the way. I fully expect that mounting it in the case will be a huge pain. We'll see.
I was thinking that might be an issue, since it definitely isn't designed for standard ATX mounting holes :p

Just a suggestion: mATX is too big. Ideally you could put the PCIe power board in the last expansion slot, and then use the rest of the case for more drives. Use the SFF-8088 external port for connectivity and cram an additional 10 drives in a Norco RPC-4020. 32 internal ports on the SAS expander, 32 internal drives in a Norco case = low port costs (you could probably do ~$20/port to house the drives). Then all we need is a PSU that will power 32 drives + an expander. Probably a 400-450W unit if you had staggered spin-up.
I have also thought of doing this, but a main drawback is that fitting drives inside the case would take away one of the main benefits of having the Norco: the hot-swap cages :(
 