*Official* Norco data storage products thread

I've got a couple of questions.

Best location to mount an internal SSD for the OS?

How loud are the fan-wall fans with the case closed up? I'm switching to the 120mm fan wall but am thinking of getting some very high-volume fans to make sure there's plenty of airflow through the hot-swap bays.

Specifically looking at something like these.

http://www.frozencpu.com/products/1...25mm_2000RPM_High_Speed_Fan.html?tl=g36c15s60

I'm thinking I could remove the 80mm fans from the back, plug the new fans straight into the motherboard to let it control the fan speed, and swap the expansion slot covers for vented ones.

Thanks in advance.

You could get one of these for the OS drive: http://www.amazon.com/StarTech-com-2-5-Inch-Removable-Expansion-S25SLOTR/dp/B002MWDRD6
It uses a single expansion slot and is hot-swappable.
 
About that: are 4×80mm fans really louder than 3×120mm? Because I've got a dilemma:

-Norco RPC-4224:
pro: 3×120mm fans, cheaper
con: terrible quality control

-Chenbro RM41824:
pro: quality should be a lot better
con: 4×80mm fans, more expensive

What do you think?
The Norco first appeared to be the ultimate home-server case for a reasonable price, but all these quality problems worry me.
 
Go with the Norco if it's for home use; if you get some bad backplanes or other stuff is missing, I guess you can file an RMA or get some kind of support to replace the missing parts.

Now, with that said: if money is no issue at all and you don't care at all how much it costs, go with the Supermicro.
 
The Norco first appeared to be the ultimate home-server case for a reasonable price, but all these quality problems worry me.

What quality problems? Somebody ranted about getting a bad backplane, and therefore all Norcos have bad backplanes? Their customer service has always been excellent and no-questions-asked... If you end up with a bad backplane, or develop one later, they'll generally send a replacement out free. Sure, cases like Supermicro and Chenbro are built to tighter tolerances, and if that makes a difference for your home media server then by all means spend the extra cash, but the tech support of those two companies, if and when an issue arises, is like pulling teeth at best. I've dealt with all of them, and in Supermicro's case it's sometimes days until you receive a response.

The only time I ever had a Norco backplane go bad was when I fried it through my own stupidity, using a Corsair modular PSU power cable on an Antec modular PSU, where the pinouts turned out to be different. I sent Mike at Norco an email; the next day I had a free replacement, and they didn't care about me returning the bad one.
 
About that: are 4×80mm fans really louder than 3×120mm? Because I've got a dilemma:

-Norco RPC-4224:
pro: 3×120mm fans, cheaper
con: terrible quality control

-Chenbro RM41824:
pro: quality should be a lot better
con: 4×80mm fans, more expensive

What do you think?
The Norco first appeared to be the ultimate home-server case for a reasonable price, but all these quality problems worry me.

I wouldn't say bad quality control. Mine is fine. Granted, some of the screw holes appeared to be off, but that was my own doing and not Norco's. As my first and second server cases, I have to say I'm pleased with them. Also, the 4224 comes with six 80mm fans, but you can remove the back two without heat issues arising. I've been folding on that server non-stop for about two weeks now, and I even have a 560 video card in it that's folding. The only spot that gets warm is where the video card is; the rest of the case is cool to the touch.

Now, with that said: I removed the back two fans to test the cooling of the case, as I plan on switching over to three 120mm fans with some insane static-pressure stats, and I wanted to see how cool the system would run with only the fan wall going. I haven't looked back, and I'm eager to get the new fan wall and fans.

Chenbro does make a lot of good cases. Right now I think they're the only ones who make a 50-bay case, which treadstone used to build his 100TB home server. They also make the cases I'm looking at buying for my gaming systems.

Supermicro is generally considered king, but that's only because their cases tend to come with a lot of things the others don't.


Oh, one thing, and I'm not sure if it was my fault or not: I did notice a drive drop out of my array. I pulled it and pushed it back in to reconnect it, and I haven't had a problem since. That could be called a quality issue, since the drive was securely locked into place and still seemed to have a connection error. Reconnecting it did cause the drive on the opposite side to "fail," but reseating that one fixed it as well. I'd say the outside bays may need a better brace for the backplane to prevent this from happening. Mind you, no data was lost, and the array picked the drives up right away with no issues. So OK, I could call that one quality-control issue, but it's a minor one that just takes reseating a drive to fix.

Odditory:
Could you talk to Norco about that? They just need to brace the ends somehow to prevent it happening in the future. It's probably not a common problem, so I'm not sure they'll worry about it, but I'd appreciate it if you could bring it up with your contacts. Thanks!
 
Now, with that said: I removed the back two fans to test the cooling of the case, as I plan on switching over to three 120mm fans with some insane static-pressure stats, and I wanted to see how cool the system would run with only the fan wall going. I haven't looked back, and I'm eager to get the new fan wall and fans.

We fold on our S2011 CPU in a 4224 with 18 green drives in front and three Corsair SP120 fans with the quieting adapter in place. It keeps the system board/CPU/drives plenty cool (drives above 30°C but not above 37°C or so, CPU around 55°C) with no rear fans... definitely the way to go :D There may be a better choice of fan, but they were what we had and they do the job well enough!
 
Chenbro does make a lot of good cases. Right now I think they're the only ones who make a 50-bay case, which treadstone used to build his 100TB home server. They also make the cases I'm looking at buying for my gaming systems.

I actually almost bought that very same case mere moments before Treadstone grabbed it from the same site we were both eyeing, but my gut told me weight would be an issue and it'd be too cumbersome to deal with, and that turned out to be right: the damn thing needs two people to move even when it's empty.

My advice is to stay away from cases that massive; break your storage up into 24-bay units instead, unless of course you don't mind the hassle of needing a second person whenever you have to move it around. It's awkward, and the fans in the three PSUs are screamers, since they were intended for data centers and server rooms where noise usually isn't a constraint, similar to the SMs.
 
In other news they let me know they're still trying to get an 8-bay mini-ITX case out before EOY.
 
In other news they let me know they're still trying to get an 8-bay mini-ITX case out before EOY.

I so very much wish they would. I think that's the direction things are heading. Did they say if it would be rackmount or desktop?
 
No, but I wouldn't expect a rackmount form factor with only 8 drive bays. I've been harping for more than a year on the idea of a DIY Drobo/Synology-type case, but unlike those embedded systems running peashooter hardware, it would have to have room for a mini-ITX board (which increases the size, but shouldn't by too much).
 
We fold on our S2011 CPU in a 4224 with 18 green drives in front and three Corsair SP120 fans with the quieting adapter in place. It keeps the system board/CPU/drives plenty cool (drives above 30°C but not above 37°C or so, CPU around 55°C) with no rear fans... definitely the way to go :D There may be a better choice of fan, but they were what we had and they do the job well enough!

Good to hear. That's actually awesome, thanks!!! ;)
 
In other news they let me know they're still trying to get an 8-bay mini-ITX case out before EOY.

Is the form factor still supposed to be similar to the NSC-800 by Yu Technologies? That chassis looks fantastic for the home enthusiast who wants more than 2-4 drives but still has size/power/noise constraints (WAF).

I have been trying to arrange a small batch order, but have had little to no luck getting a response back from the Chinese vendors that claim to carry it. If any more details come out on a Norco version, I'd gladly hold off my search for a couple of months to have the option to buy locally from a place that will actually sell me a few.
 
What quality problems? Somebody ranted about getting a bad backplane, and therefore all Norcos have bad backplanes? Their customer service has always been excellent and no-questions-asked... If you end up with a bad backplane, or develop one later, they'll generally send a replacement out free. Sure, cases like Supermicro and Chenbro are built to tighter tolerances, and if that makes a difference for your home media server then by all means spend the extra cash, but the tech support of those two companies, if and when an issue arises, is like pulling teeth at best. I've dealt with all of them, and in Supermicro's case it's sometimes days until you receive a response.

The only time I ever had a Norco backplane go bad was when I fried it through my own stupidity, using a Corsair modular PSU power cable on an Antec modular PSU, where the pinouts turned out to be different. I sent Mike at Norco an email; the next day I had a free replacement, and they didn't care about me returning the bad one.

Yeah, about bad backplanes, and the crude-looking soldering even on good ones. Also about rails, and the poor consistency in exactly which product you'll get, since they change small things without notice. That could easily be improved by keeping a changelog on their website telling us which version corresponds to which serial numbers.

But you're probably right: as with all other products, most Norcos probably work perfectly while we only hear about the problems, and it's likely still the best choice. But when it comes to the security of our precious data, stored in redundant RAID systems, it's logical to be a little more picky.

I wouldn't say bad quality control. Mine is fine. Granted, some of the screw holes appeared to be off, but that was my own doing and not Norco's. As my first and second server cases, I have to say I'm pleased with them. Also, the 4224 comes with six 80mm fans, but you can remove the back two without heat issues arising. I've been folding on that server non-stop for about two weeks now, and I even have a 560 video card in it that's folding. The only spot that gets warm is where the video card is; the rest of the case is cool to the touch.

Now, with that said: I removed the back two fans to test the cooling of the case, as I plan on switching over to three 120mm fans with some insane static-pressure stats, and I wanted to see how cool the system would run with only the fan wall going. I haven't looked back, and I'm eager to get the new fan wall and fans.

Chenbro does make a lot of good cases. Right now I think they're the only ones who make a 50-bay case, which treadstone used to build his 100TB home server. They also make the cases I'm looking at buying for my gaming systems.

Supermicro is generally considered king, but that's only because their cases tend to come with a lot of things the others don't.


Oh, one thing, and I'm not sure if it was my fault or not: I did notice a drive drop out of my array. I pulled it and pushed it back in to reconnect it, and I haven't had a problem since. That could be called a quality issue, since the drive was securely locked into place and still seemed to have a connection error. Reconnecting it did cause the drive on the opposite side to "fail," but reseating that one fixed it as well. I'd say the outside bays may need a better brace for the backplane to prevent this from happening. Mind you, no data was lost, and the array picked the drives up right away with no issues. So OK, I could call that one quality-control issue, but it's a minor one that just takes reseating a drive to fix.

Odditory:
Could you talk to Norco about that? They just need to brace the ends somehow to prevent it happening in the future. It's probably not a common problem, so I'm not sure they'll worry about it, but I'd appreciate it if you could bring it up with your contacts. Thanks!

That's not very reassuring, knowing the drive connectors can disconnect with only the small pressure from swapping a drive!?! :eek:
Did you manage to reconnect the drive without having to rebuild the entire array?


Also, one question: how do you guys buy your HDDs?
I mean, it's recommended not to have HDDs all from the same batch, for better safety, but how do you achieve this without buying the HDDs one by one, each from a different reseller?

Are there shops that do this kind of mixing before shipping?
 
Two words: power supply. I'd lay money it was either cheap, faulty, underpowered, miswired, or all of the above.

The guy concludes "3TB drives fry Norco backplanes." Yeah, okay. There's more to the story. There may be merit to the idea that the backplanes could use beefier MOSFETs, but the way he went about drawing that conclusion was anything but scientific.
 
I was thinking about buying a few 3TB drives for my 4224, but now I'm not sure I want to do that... :eek: Though I have a quality Corsair PSU, there's always that "but"... :confused: :confused:
 
Lol, my man, do you know how many people have 3TB and 4TB drives in Norco cases worldwide? Not only that, but many 3TB drives actually draw less power at startup than their older 2TB brethren, for the simple reason that the 1TB/platter models have 40% fewer platters. But even at the same platter density, not all 3TB drives are created equal; there are drastic differences from one make/model to another. So for someone to conclude "Norco cases cannot handle 3TB drives due to high drive current and shitty engineering" is less than scientific at best. That doesn't mean it didn't happen with his unique combination of components; it just means he hasn't gone far enough in determining how and why it happened, or at least failed to include that in his post.

This is FUD until the guy gives a few more details; the story as it stands right now is very vague. And the statement about being on a "shoestring budget" makes me doubly suspicious about the PSU. I sent him a message; we'll see if he bothers responding. I'll also ask Norco if they have any comment. It will be interesting to get to the bottom of this.
 
The problem was obviously not the 3TB drive capacity itself but the current needed to spin the drives up.

That current can be quite high when a lot of power-hungry drives are powered up at the same time without staggered spinup, and a weak PSU can be overloaded, especially one with several small 12V rails. But any good PSU has overload protection and shuts itself down before destroying anything.
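To put rough numbers on that, here's a quick back-of-the-envelope script comparing the 12V peak with and without staggered spinup. The per-drive currents are assumptions for illustration only; check your drives' datasheets.

```python
# Rough 12 V rail budget: peak current with vs. without staggered spinup.
# Per-drive figures below are assumed, not measured: typical 3.5" drives
# pull around 1.5-2.0 A at 12 V during spinup and well under 1 A idle.

SPINUP_A = 1.8   # assumed 12 V spinup current per drive
IDLE_A = 0.6     # assumed 12 V current per drive once spinning

def peak_12v_current(n_drives, stagger_group=None):
    """Worst-case 12 V draw. With staggered spinup, only one group
    spins up at a time while the already-started drives idle."""
    if stagger_group is None:          # everything spins up at once
        return n_drives * SPINUP_A
    # worst moment: last group spinning up, the rest already idling
    return stagger_group * SPINUP_A + (n_drives - stagger_group) * IDLE_A

print(round(peak_12v_current(24), 1))                   # 43.2 (A, simultaneous)
print(round(peak_12v_current(24, stagger_group=4), 1))  # 19.2 (A, in groups of 4)
```

Same 24 drives, less than half the peak draw, which is exactly why enterprise gear staggers spinup.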

we bought good servers and host adapters, but skimped on the easiest part: the case. If a computer runs fine splayed out on the test bench, it should work fine in any metal box, right?

If I understand this correctly, they successfully tested the hardware before putting it into the Norco case, so the PSU was able to handle all the drives booting without a problem.

I believe we had a 500 or 600W Antec PSU in there; the 3U cases are roomy enough to fit a standard ATX supply, so we picked a good consumer-level brand. None of the drives seemed to have issues when they were plugged into the power supply directly (except for the ones that were already dead).

They even tried plugging in only one drive and managed to kill the port each time, so it's definitely not a PSU overload.

This was tested to exhaustion: by plugging in one of our new 3TB drives into any case, we could cause that port to die. We killed at least 8 ports on different backplanes by plugging in a single drive.


That's one more problem with the backplane electronics; Norco should REALLY use quality components and fix these serious safety and design issues.

-Bad backplanes found: yeah, shit happens sometimes.
-Crude soldering even on good backplanes: disappointing, but if it works...
-Drives disconnecting from the small pressure of swapping a drive: not very encouraging for quality and data safety; hopefully no data was lost.
-MOSFETs frying when overloaded, destroying the drive and its data: now that's a serious issue. The component should be able to handle ANY drive available on the market, with a safety margin, and it should never kill a connected drive even in case of overload.

Seriously, it's very annoying, because Supermicro and Chenbro are very expensive and don't support 120mm fans, and this Norco would be the perfect product if it had quality backplanes.
All these issues really kill confidence in the product, which is very sad.
 
If I understand this correctly, they successfully tested the hardware before putting it into the Norco case, so the PSU was able to handle all the drives booting without a problem.

Nothing is obvious to me, because the details are vague. It's hard to imagine they plugged in ALL the drives outside the case unless they were using a zillion Y-splitters; not enough information is given about how they conducted their testing.

They even tried plugging in only one drive and managed to kill the port each time, so it's definitely not a PSU overload.

All things being equal, no, not a result of overload. However, that's not proof positive it's the backplane's fault; there are too many other variables that could have interfered here, and likely did: a ground/short, a defective PSU, improper cabling, the wrong modular cable, a loose connector elsewhere on the same rail. We don't know, and he doesn't go into details. The fact that it was reproducible across several backplanes makes me think it was something OTHER than the backplanes. Maybe they were all part of a bad batch; anything's possible, but I wouldn't lay money on it. All that is not to say Norco shouldn't re-evaluate their backplane design and components. By all means they should err on the side of caution and take the guy's claims at face value, but my gut tells me there's more to the equation than what the guy has bothered to detail.

I'll give the example that I once fried a Norco backplane myself. I was fortunate in that I quickly determined I'd mistakenly used an Antec modular PSU cable on a Corsair PSU; the cables had gotten mixed up in the spares box, and as it turned out the pin-outs differed between the two brands. Had I not discovered the problem early, I might have continued connecting more backplanes, fried them too, and then gone online to rant and rave about Norco backplanes frying, complete with photos of the entire ordeal, and all aboard the FUD train.

So I'm not saying this guy is wrong, but I have doubts about his testing methodology. There are more variables at play than he may realize or has bothered to test and document. And if the conditions that led to his particular problem were commonplace, I imagine complaint threads would have been cropping up every day, here and elsewhere, for the past five years people have been running Norcos. One man's opinion.
 
Odditory, I don't know, man. I'm having a very similar issue with my Norco chassis: I cannot get it to address more than 8 hard drives. If I boot with 20 hard drives connected, it only addresses 8 and ignores the rest. If I boot with only 8 plugged in, I see 8 hard drives; if I then start plugging in drives one by one, they either are not detected at all, or they are detected and cause some of the first 8 drives to fall off.

It can't be the power supply; I have a Seasonic X-750, and I've swapped it for an older PSU I had lying around, with the same effect.
It's not the M1015s I have; those 8 hard drives work regardless of which SAS card they're connected to.
It's not the hard drives; if I bypass everything and connect them straight to the motherboard's SATA ports, they work fine.

The only thing left is the backplanes, and I know one of mine is blown entirely, because the power half of one of its SATA connectors is physically warped, and any hard drive I slot into that particular bay gets its power connector bent and destroyed (thank the heavens I found that out with a cheap 80GB hard drive and not one of my 2TBs).

The one and only thing I haven't tried is bypassing the backplanes and connecting the hard drives directly to the SAS cards with some forward breakout cables, but I don't want to spend any more money at this point. This is maddening; I spent about $1300 on this system and I can't even get more than 8 hard drives slotted.
 
I'm not sure your issue is necessarily similar to what the guy we were discussing on the blog was experiencing, but the backplanes being involved is a possibility.

First: how many M1015s? Which firmware on them, IT or IR? Which motherboard? Which model Norco?

Also, have you tested each backplane one by one, connecting each to the same M1015 card? The backplane with the warped connector I would get replaced for sure; they usually send replacements free of charge (I'll PM you the link). Test the backplanes one by one and figure out whether it's just the one or you need additional replacements.

Lastly, note that you don't need two Molex connectors plugged into each backplane; one is fine. The second is there for dual-PSU setups.
 
First: how many M1015s? Which firmware on them, IT or IR? Which motherboard? Which model Norco?

Three, cross-flashed to IT mode on P14 firmware, no BIOS. I had P11 on them and upgraded to see if it would fix the problem.

A Supermicro X9SCM-F. It previously had the 1.1 BIOS, but I upgraded that as well, to 2.0, to see if it would fix my issues.

A Norco RPC-4224. The SAS backplanes are connected via Monoprice SAS cables; I tried Norco's SAS cables and got the same result, so the cables can't be the problem.

Also, have you tested each backplane one by one, connecting each to the same M1015 card?

I can always get 4 hard drives working, and 8 too; it's when I try to connect more than 8 that I have an issue. I can have all 8 on one card, or 4 on one and 4 on another, or 4-4-2; it doesn't matter. Plugging in more than 8 causes the new drives either to not be addressed at all or to push the older ones off. It *seems* that the remaining 5 backplanes are fine.

This is such a bizarre problem; all I want is 24 ports of hot-swappable goodness.
 
I am currently re-initializing (using Hard Disk Sentinel) each of the disks that "don't work" when plugged into the Norco backplanes, bypassing the backplane AND the M1015 and connecting each drive directly to the motherboard. This will confirm the drives are good, free of bad sectors, and mechanically sound. I have 8 disks to do and they take about an hour each, so I'll have an update tomorrow or so.

If I were convinced it was a hard drive issue, I would have bought four 3TB Reds and called it a day, but since I fear the simple act of inserting them into the backplane is causing the disks problems, I'd rather wait.
 
Too bad you don't have a SAS expander to test with just a single M1015, to rule out the backplanes and power. I vaguely recall hearing about issues with three M1015s on certain boards, but that could also have been people who hadn't bothered to disable the BIOS on the M1015s. You might try running it by the guys at forums.servethehome.com; some sharp dudes hang out there. Mobilenvidia might have an idea.

Based purely on symptoms, in my experience this was always a PSU issue, but you said Seasonic, so it's hard to doubt that unit. Any other PSU you can try for kicks? Or maybe an additional PSU feeding the backplanes temporarily, just to rule it out.
 
Odditory, I don't know, man. I'm having a very similar issue with my Norco chassis: I cannot get it to address more than 8 hard drives. If I boot with 20 hard drives connected, it only addresses 8 and ignores the rest. If I boot with only 8 plugged in, I see 8 hard drives; if I then start plugging in drives one by one, they either are not detected at all, or they are detected and cause some of the first 8 drives to fall off.

It can't be the power supply; I have a Seasonic X-750, and I've swapped it for an older PSU I had lying around, with the same effect.
It's not the M1015s I have; those 8 hard drives work regardless of which SAS card they're connected to.
It's not the hard drives; if I bypass everything and connect them straight to the motherboard's SATA ports, they work fine.

The only thing left is the backplanes, and I know one of mine is blown entirely, because the power half of one of its SATA connectors is physically warped, and any hard drive I slot into that particular bay gets its power connector bent and destroyed (thank the heavens I found that out with a cheap 80GB hard drive and not one of my 2TBs).

The one and only thing I haven't tried is bypassing the backplanes and connecting the hard drives directly to the SAS cards with some forward breakout cables, but I don't want to spend any more money at this point. This is maddening; I spent about $1300 on this system and I can't even get more than 8 hard drives slotted.

Note that consumer PSUs are geared more toward providing a lot of power to video cards than to hard drives. I'm not familiar with that specific PSU, but it could be that it doesn't provide enough 12V current on the Molex connectors to handle that many drives at once. A single drive at startup can pull over 1A on the 12V rail; most are closer to 1.5A. This is why enterprise gear uses staggered spinup, so as not to overtax the PSU.

A 750W supply seems a little on the low side for a 24-drive system.
 
What about going straight from the motherboard into one of the problematic backplane connections? Might it be that the backplane is OK but just doesn't want to play nicely with the cards (as configured)? I'm guessing you'd have to pick up a reverse SAS-to-SATA breakout cable for the purpose.

I'm leaning toward it being a card-related issue.
 
A 750W supply seems a little on the low side for a 24-drive system.
It cannot be the PSU; Seasonic is top quality, and 750W is well above what I need. Considering I can't even get more than 8 hard drives working, it can't be the wattage.

What about going straight from the motherboard into one of the problematic backplane connections? Might it be that the backplane is OK but just doesn't want to play nicely with the cards (as configured)? I'm guessing you'd have to pick up a reverse SAS-to-SATA breakout cable for the purpose.

I'm leaning toward it being a card-related issue.

I honestly hope it is, but I'd have to buy additional cables, and I don't really feel I should have to. Additionally, I only have 6 SATA ports on the motherboard, 4 of which are free, so I don't see how that proves anything, since my problem is that I cannot get more than 8 hard drives working.
 
It cannot be the PSU; Seasonic is top quality, and 750W is well above what I need. Considering I can't even get more than 8 hard drives working, it can't be the wattage.



I honestly hope it is, but I'd have to buy additional cables, and I don't really feel I should have to. Additionally, I only have 6 SATA ports on the motherboard, 4 of which are free, so I don't see how that proves anything, since my problem is that I cannot get more than 8 hard drives working.

While I'm no defender of Norco backplanes, and it probably is the backplane, you can't say it's not the PSU.
Yes, Seasonic makes great supplies, but they are a consumer PSU maker; as such, they put more into powering the video card than anything else.

750W is most likely not going to be enough for 24 drives, particularly if you are not staggering the spinup. While it should be enough for 8, it wouldn't surprise me if the 12V rail can't keep up; again, I don't know how much current they allot to the 12V rail. Also, you don't mention the rest of the hardware: do you have a massive video card? Which CPU? Etc.

Regardless, I think you'll need more than 750W when you fill that case with 24 drives. Look at chassis makers that sell 24-drive cases with an included PSU; they all start at 900+ W, many with 1200W redundant supplies.
 
While I'm no defender of Norco backplanes, and it probably is the backplane, you can't say it's not the PSU.
Yes, Seasonic makes great supplies, but they are a consumer PSU maker; as such, they put more into powering the video card than anything else.

750W is most likely not going to be enough for 24 drives, particularly if you are not staggering the spinup. While it should be enough for 8, it wouldn't surprise me if the 12V rail can't keep up; again, I don't know how much current they allot to the 12V rail. Also, you don't mention the rest of the hardware: do you have a massive video card? Which CPU? Etc.

Regardless, I think you'll need more than 750W when you fill that case with 24 drives. Look at chassis makers that sell 24-drive cases with an included PSU; they all start at 900+ W, many with 1200W redundant supplies.

Considering the fan on my PSU has literally never kicked on, it's not under heavy load, at startup or any other time.

The rest of my specs:
Operating System: Windows Home Server 2011
Storage Platform: StableBit Drive Pool BETA (1.2.0.6989)
CPU: Intel Core i3-2100T Sandy Bridge 2.5GHz LGA 1155 35W Dual-Core Desktop Processor
Motherboard: Supermicro X9SCM-F
Chassis: Norco RPC-4224
Drives: 4x WDC WD5000AAKS 500GB, 4x WDC WD10EACS 1TB, 4x WDC WD20EADS 2TB, 2x WDC WD20EARS 2TB, 2x WDC WD20EARX 2TB
RAM: 2x Kingston ValueRAM 4GB 240-Pin DDR3 SDRAM ECC Unbuffered Server Memory
Add-in Cards: 3x IBM M1015 running in IT mode (P14)
Power Supply: SeaSonic X750 750W 80 PLUS GOLD Certified Modular Active PFC Power Supply
Other Bits: 6x Monoprice 1m 30AWG Internal Mini-SAS 36-pin cables (SFF-8087), 3x Scythe SY1225SL12M 120mm "Slipstream" (fan wall), 2x MASSCOOL FD08025S1M4 80mm Case Fan (exhaust)
 
Considering the fan on my PSU has literally never kicked on, it's not under heavy load, at startup or any other time.

The rest of my specs:
Operating System: Windows Home Server 2011
Storage Platform: StableBit Drive Pool BETA (1.2.0.6989)
CPU: Intel Core i3-2100T Sandy Bridge 2.5GHz LGA 1155 35W Dual-Core Desktop Processor
Motherboard: Supermicro X9SCM-F
Chassis: Norco RPC-4224
Drives: 4x WDC WD5000AAKS 500GB, 4x WDC WD10EACS 1TB, 4x WDC WD20EADS 2TB, 2x WDC WD20EARS 2TB, 2x WDC WD20EARX 2TB
RAM: 2x Kingston ValueRAM 4GB 240-Pin DDR3 SDRAM ECC Unbuffered Server Memory
Add-in Cards: 3x IBM M1015 running in IT mode (P14)
Power Supply: SeaSonic X750 750W 80 PLUS GOLD Certified Modular Active PFC Power Supply
Other Bits: 6x Monoprice 1m 30AWG Internal Mini-SAS 36-pin cables (SFF-8087), 3x Scythe SY1225SL12M 120mm "Slipstream" (fan wall), 2x MASSCOOL FD08025S1M4 80mm Case Fan (exhaust)

The fan kicking on has nothing to do with it; the fan won't kick on until the PSU gets hot enough, not simply under load. Sure, load will make it hotter, but a short, high load won't.
Trust me, 750W is not enough for a server with 24 drives. It should, though, be enough for 8.
 
I'm not sure why your first suspect isn't the M1015 driver and/or firmware. The figure of 8 drives is highly suspicious, since that's exactly the number of drives a single M1015 can handle.

That would be the first thing I'd investigate. First, I'd make sure I had the most recent drivers and firmware available (from LSI; they often seem to be ahead of IBM). Then, to simplify a bit, I'd drop down to two M1015 cards and try connecting 5 drives to one and 4 to the other (and vice versa). If you still only get 8 drives, is the drive that gets left off the same serial number in both cases (5/4 and 4/5)? Is it on the same quad-drive backplane board?

By the way, I'd ignore the person suggesting the X750 doesn't have enough power (assuming it isn't defective). Here is what it's specified to supply:

+3.3V@25A, +5V@25A, +12V@62A, -12V@1A, +5VSB@3A
 
I'm not sure why your first suspect isn't the M1015 driver and/or firmware. The "8 drives" figure is highly suspicious, since that's exactly the number of drives a single M1015 can handle.

That would be the first thing I would investigate. First, I'd make sure I had the most recent drivers and firmware available (from LSI, they seem to often be ahead of IBM). Then, to simplify a bit, I'd shift down to two M1015 cards, and try connecting 5 drives to one and 4 drives to the other (and vice versa with which gets 5 and 4). If you still only get 8 drives, is the drive that gets left off the same serial number in both cases (5/4 and 4/5)? Is it on the same quad-backplane board?

By the way, I'd ignore the person suggesting that the X750 does not have enough power (assuming it is not defective). Here is what it is specified to supply:

+3.3V@25A, +5V@25A, +12V@62A, -12V@1A, +5VSB@3A

62A on the 12V rail is not going to be enough to cold start 24 drives at once, considering he also has five fans on that 12V rail plus whatever else in the system needs 12V.

Each drive will probably pull 1.5A at startup, if not more, so the drives alone are 36A. His fans probably pull close to, if not more than, 10A combined at spin-up (note that fans, like all motors, draw much more current at startup), so we're at 46A. That leaves 16A for the rest of the 12V rail, which is not much, particularly since I was being conservative with my estimates.
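For what it's worth, that back-of-the-envelope math is easy to play with yourself. A quick Python sketch using the same per-device figures (estimates, not measurements):

```python
def twelve_volt_startup_amps(drives=24, amps_per_drive=1.5,
                             fans=5, amps_per_fan=2.0, misc=2.0):
    """Rough 12V start-up budget: spinning drives plus fans plus a
    small allowance for everything else. All figures are estimates."""
    return drives * amps_per_drive + fans * amps_per_fan + misc

total = twelve_volt_startup_amps()
print(total)       # 48.0 A estimated at cold start
print(62 - total)  # 14.0 A of headroom left on a 62A rail
```

Swap in your own per-drive spin-up numbers from the drive spec sheets and the conclusion can flip either way, which is really the point of the argument here.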
 
Well, I have 24 HDDs in my Norco 4224, I have three M1015s, and I have a Seasonic X750. Cold starts fine.

Please ignore the nonsense about the X750 not having enough power.
 
You need roughly 50 amps on the 12V line to spin up 24 drives; generally less if they are green drives.

I have no issues on mine doing exactly that with a 650W PSU that has 60 amps on the 12V line. Luckily I don't have a video card in it, and only a low-power CPU, so there isn't much additional draw on that 12V line.

Also, if you have a power supply with multiple rails, it's probably going to be a bitch to balance the drives against the video/CPU draws across the rails so it all works.
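To illustrate why multi-rail balancing is a pain, here's a toy Python sketch (every amp figure below is made up purely for illustration) that tries to first-fit start-up load groups onto per-rail budgets:

```python
def fits(rail_limits, group_amps):
    """Greedy first-fit of start-up loads onto rails. Returns the rail
    index chosen for each group (largest groups placed first), or None
    if some group exceeds every rail's remaining budget."""
    remaining = list(rail_limits)
    placement = []
    for amps in sorted(group_amps, reverse=True):
        for i, left in enumerate(remaining):
            if amps <= left:
                remaining[i] -= amps
                placement.append(i)
                break
        else:
            return None
    return placement

# Four hypothetical 18A rails vs. drive cages, fans, CPU, misc loads:
print(fits([18, 18, 18, 18], [12, 12, 12, 10, 8, 6]))  # [0, 1, 2, 3, 3, 0]
print(fits([18, 18], [12, 12, 12]))  # None: no rail has room for the third cage
```

A single beefy 12V rail (like the X750's) sidesteps the whole packing problem, which is one reason single-rail units are popular for drive-heavy builds.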
 
Considering the PSU's fan has literally never kicked on, at start-up or any other time, it's not a load problem.

The rest of my specs:
Operating System: Windows Home Server 2011
Storage Platform: StableBit Drive Pool BETA (1.2.0.6989)
CPU: Intel Core i3-2100T Sandy Bridge 2.5GHz LGA 1155 35W Dual-Core Desktop Processor
Motherboard: Supermicro X9SCM-F
Chassis: Norco RPC-4224
Drives: 4x WDC WD5000AAKS 500GB, 4x WDC WD10EACS 1TB, 4x WDC WD20EADS 2TB, 2x WDC WD20EARS 2TB, 2x WDC WD20EARX 2TB
RAM: 2x Kingston ValueRAM 4GB 240-Pin DDR3 SDRAM ECC Unbuffered Server Memory
Add-in Cards: 3x IBM M1015 running in IT mode (P14)
Power Supply: SeaSonic X750 750W 80 PLUS GOLD Certified Modular Active PFC Power Supply
Other Bits: 6x Monoprice 1m 30AWG Internal Mini-SAS 36pin cables (SFF-8087), 3x Scythe SY1225SL12M 120mm "Slipstream" (Firewall), 2x MASSCOOL FD08025S1M4 80mm Case Fan (Exhaust)

There are a few posts on the net regarding this combination - I haven't had time to digest them, but they may be worth a look. (There are more than this; these were just the first few links that looked vaguely relevant.)

http://forums.servethehome.com/show...ed-with-M1015-in-3rd-or-4th-slot-on-X9SCM-IIF
http://lime-technology.com/forum/index.php?topic=22057.0
http://hardforum.com/showthread.php?p=1039229554 - could ask this person how he got on

In the first post, it seems a BIOS upgrade on the mobo was required - I know you have done this already, but maybe you need to ask for something "beta"?

Btw, where do you live? If it's in the UK I can lend you an expander if you want to test that option out.
 
I'm not sure why your first suspect isn't the M1015 driver and/or firmware. The "8 drives" figure is highly suspicious, since that's exactly the number of drives a single M1015 can handle.

That would be the first thing I would investigate. First, I'd make sure I had the most recent drivers and firmware available (from LSI, they seem to often be ahead of IBM). Then, to simplify a bit, I'd shift down to two M1015 cards, and try connecting 5 drives to one and 4 drives to the other (and vice versa with which gets 5 and 4). If you still only get 8 drives, is the drive that gets left off the same serial number in both cases (5/4 and 4/5)? Is it on the same quad-backplane board?

You make a good point, but the reason I don't suspect the HBAs is that regardless of where I put the 8 drives (all on one card, or split amongst two or three), those same 8 drives work. I already know one of my backplanes is shot because it literally physically damages the SATA power port on any hard drive I insert, so I was leaning towards the backplanes because of Norco's notorious QA issues.

There are a few posts on the net regarding this combination - I haven't had time to digest them, but they may be worth a look. (There are more than this; these were just the first few links that looked vaguely relevant.)

http://forums.servethehome.com/show...ed-with-M1015-in-3rd-or-4th-slot-on-X9SCM-IIF
http://lime-technology.com/forum/index.php?topic=22057.0
http://hardforum.com/showthread.php?p=1039229554 - could ask this person how he got on

In the first post, it seems a BIOS upgrade on the mobo was required - I know you have done this already, but maybe you need to ask for something "beta"?

I've done hours upon hours of research and I can't find a single other person that has my exact problem.

I don't have the fault issue, I don't have any issues with Windows seeing the card, and I don't have any IRQ conflicts.

Btw, where do you live? If it's in the UK I can lend you an expander if you want to test that option out.

I picked this particular combo (4224/X9SCM-F/M1015) because it was extremely popular; I did so for the express purpose of NOT having the one-in-a-million compatibility issue I seem to be having.

That is very generous of you, but I live in Southern California.
 