Is there a way to delay the firmware initialization with an LSI RAID card?

I recently moved my RAID array to a Norco 4224 case, and every time I cold boot it, something like 8 drives won't show up and then the beeper starts going off.

I have to scan the foreign config to get my array back up, and then it starts a consistency check.

It only happens on a cold boot, so I think the firmware is initializing too fast and the drives aren't completely spun up.

Anyone know how to delay the start time?

LSI 8888ELP RAID card and 16 Toshiba DT01ACA200 drives.
 
Sounds more like a lack-of-power issue.

But the drives should be powering up in standby and waiting for the LSI card to spin them up. I have never had an LSI card fail to see the disks; it has always waited (too long, in my opinion) to spin them up and check them.

Is something not cabled correctly, so that all your disks are spinning up at once instead of being staggered by the LSI card, and the PSU you have isn't large enough to handle it?
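
For a rough sense of how big that simultaneous spin-up load can be, here is a back-of-the-envelope sketch. The per-drive figures are typical ballpark numbers for 3.5" desktop drives, not measurements from this build.

# Rough load estimate for 16 drives all spinning up at once.
# Per-drive currents are assumed ballpark figures for 3.5" desktop
# drives, not values taken from this thread.
drives = 16
amps_12v_spinup = 2.0   # assumed peak 12 V draw per drive during spin-up
amps_5v = 0.7           # assumed 5 V draw per drive

peak_12v = drives * amps_12v_spinup   # roughly 32 A on 12 V
total_5v = drives * amps_5v           # roughly 11 A on 5 V

print(f"Peak 12 V draw at spin-up: ~{peak_12v:.0f} A")
print(f"5 V draw: ~{total_5v:.0f} A")

Staggered spin-up exists precisely to keep that 12 V peak from hitting all at once.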
 

I thought it was a power issue too, but it's a 1000 W Cooler Master modular PSU with 80 amps on a single rail.

The other server I am running has 17 drives and a 750 W OCZ power supply with a single rail, and it's fine.

I think it has to do with the RAID controller and the IBM SAS expander I am using.

The RAID card is 3 Gb/s and the SAS expander is 6 Gb/s, but it's the same 4 drives every time that are not picked up by the controller.

I tried using the 8 ports on the RAID controller, 4 to the expander and 4 directly to those 4 drives, and it was fine.

Since that worked, I thought it was a bad port on the expander, so I switched the two around; it worked fine as long as only 12 drives were plugged into the expander.

I even moved the drives to a different backplane, and it was the same issue.

I'm really at a loss.

I have a 9260-4i coming to see if that solves my problem.

Hope it does.
 
Try changing which port on the expander the drives are plugged into, and see if the problem is the expander port.

If the drives that aren't working follow that port, I think it still sounds like a power issue: when the drives on that port are told to power up, something goes wrong.

How exactly did you connect the PSU to the drive backplanes? That 18 AWG wire can only handle so much current, no matter how large the PSU behind it is, and adding adapters on top will only limit the power further. You will want to spread the backplane load over as many wires from the PSU as possible.
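
To put a rough number on that, here's a sketch comparing the spin-up current flowing through a single power chain against a commonly cited continuous rating for 18 AWG chassis wiring. The per-drive draw, drive count per chain, and the ~10 A rating are assumed ballpark figures, not specs from this thread.

# Rough check of how much spin-up current one 18 AWG chain carries.
# Per-drive 12 V draw and the ~10 A wiring rating are assumed ballpark
# figures for illustration, not values from this build.
drives_on_chain = 8
amps_12v_spinup = 2.0     # assumed peak 12 V draw per drive at spin-up
wire_rating_amps = 10.0   # commonly cited continuous rating for 18 AWG

chain_current = drives_on_chain * amps_12v_spinup   # roughly 16 A on one chain
print(f"Spin-up current on one chain: ~{chain_current:.0f} A "
      f"vs ~{wire_rating_amps:.0f} A rating for 18 AWG")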
 
The LSI cards I have support a logical-group spinup delay and staggered disk spinup. Maybe give those options (if available) a try. Staggered spinup will definitely help if it's a power issue, since you won't have all the drives drawing power at the same time.
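
If the card is being managed with LSI's MegaCLI utility (the thread doesn't say which tool is in use), staggered spinup is typically exposed as adapter properties along these lines, where the first command sets how many drives spin up at a time and the second the delay in seconds between groups; exact property names and values can vary by firmware and MegaCLI version, so treat this as a sketch rather than exact syntax:

MegaCli -AdpSetProp SpinupDriveCount 2 -a0
MegaCli -AdpSetProp SpinupDelay 6 -a0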

Depending on the motherboard, some have an option to set how long to wait at the enter-BIOS prompt (i.e. "Press F2 or Del to enter BIOS"). I believe this precedes add-on card firmware initialization (I'm not at home to test). If you can set that prompt delay longer, it might give the drives enough time.
 
Thanks for the suggestions, guys.

I tried moving the Molex connectors around (there are 6 from the power supply), so I tried 2 from each chain for 4 drives.

No luck.

So I tried just swapping SAS cables, and that did it.

I think one SAS port is slow, because after I moved the input to a different one it works every time from a cold boot.

Thanks, guys.
 
It sounds like you have found a workaround, but another consideration is related to power. I have a Norco case which was pre-wired with one long 4-pin 'Molex' extension splitter providing power from the bottom backplane to the top.

If I added more than a foot or so of additional extensions/splitters between the 4-pin from the power supply and the start of the backplane's 4-pin splitter chain, the voltage drop on the 5 V pin of the top two backplanes was severe enough, with the high current draw of system startup, that the drives in those top two backplanes would not start reliably. Additionally, drives would periodically drop out of the array during times of peak load.

Changing the wiring to reduce the voltage drop (lower-gauge wires, reduced distance) fixed the issue once and for all. The general idea is to use the plugs closest to the power supply and minimize the length of the path where possible.
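
To illustrate how quickly that drop adds up, here is a rough sketch using the nominal resistance of 18 AWG copper (about 6.4 milliohms per foot); the extension length, drive count, and per-drive current are assumptions for illustration, not measurements from this thread.

# Rough 5 V voltage-drop estimate over an 18 AWG extension chain.
# The resistance figure is nominal for 18 AWG copper; length, drive
# count, and current are assumed for illustration only.
ohms_per_ft = 0.0064      # ~6.4 milliohms per foot of 18 AWG copper
extension_ft = 3.0        # assumed extra extension/splitter length
drives_on_chain = 8       # assumed drives fed through that chain
amps_5v_per_drive = 0.7   # assumed 5 V draw per drive at startup

current = drives_on_chain * amps_5v_per_drive    # ~5.6 A
round_trip_ft = 2 * extension_ft                 # supply plus return path
v_drop = current * round_trip_ft * ohms_per_ft   # ~0.2 V

print(f"Drop over the extension: ~{v_drop:.2f} V "
      f"({v_drop / 5.0:.0%} of the 5 V rail)")

A couple tenths of a volt is already a meaningful chunk of the usual 5% tolerance on the 5 V rail, and that is before any connector resistance is counted.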

Some power supplies also provide too few amps on the 5 V rail to reliably power large numbers of HDDs. The maximum wattage spec of the power supply is not necessarily a good indicator of its suitability.
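
As a quick sanity check of that point, here is a sketch comparing the drives' 5 V draw against a 5 V rail rating; the 20 A rating and per-drive current are assumed example figures, so check the label on the actual PSU.

# Why total wattage isn't the whole story: budget the 5 V rail itself.
# The rail rating and per-drive draw are assumed example figures,
# not specs from the PSU in this thread.
rail_5v_amps = 20.0        # assumed 5 V rail rating from the PSU label
drives = 16
amps_5v_per_drive = 0.7    # assumed 5 V draw per drive

drive_draw = drives * amps_5v_per_drive   # ~11 A just for the drives
headroom = rail_5v_amps - drive_draw
print(f"5 V draw from drives: ~{drive_draw:.1f} A of {rail_5v_amps:.0f} A, "
      f"leaving ~{headroom:.1f} A for everything else on 5 V")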
 