*Official* Norco data storage products thread

My original post said I was using lacing bars to support it. Lacing bars are handy for adding support underneath amps, receivers, anything that's heavy. They go across adjacent posts, placed on the front posts and the back posts.

Are you mounting it just using the rack wings? The case is designed for 4-post rail mount. It would sag terribly if just mounted off the wings.

Has anyone contacted Norco about this issue? Are there small aluminum handles they can ship? Everything lines up just fine, it's just that the middle support on the handle is extra thick since they used plastic instead of metal. If aluminum was used, the handle could be thinner.
 
I called Norco and they don't seem to care about the poor design/location of the two holes.

What a nice case otherwise though! Here are some pictures of the new version RPC-4216:
IMG_20130518_002809.jpg


IMG_20130517_185156.jpg
 
I seriously hope you are not front-mounting that case. It's definitely not designed for that.
You should be using 4-post rails.
 
I seriously hope you are not front-mounting that case. It's definitely not designed for that.
You should be using 4-post rails.

He's apparently using cable routing bars to support the weight:

PS: Don't worry, you can't see them but I put a few middle atlantic lacing bars to hold the weight of the unit. The screws are just to hold it from sliding out. Although I'd really like to either order a new front or a thinner handles so I can put a screw in the empty hole.

I don't think those are designed to be load-bearing. :D
 
It's common to use them to support vertical loads at the rear of a rack (depending on which style routing bars you buy). I'm holding up a 120 lb amp with them too. You can even bolt things to them (IR controllers, port servers, network switches etc...) depending on which bars you buy. This makes for a cleaner and more compact install and saves the cost of extra shelves.

The Middle Atlantic "L" shaped "lacer" bars I linked to are heavy gauge and made in the US. The formed "L" shape gives the bars a lot of added strength in the vertical direction (the only direction of the load in my install). The LBP-1A or LBP-2A are what I use. There is no reason to not use these if you already have them laying around (they come in 10 packs so I had some to spare).

You need one going across the front and one going across the back. For the front, I ended up using an even beefier flanged steel faceplate (EB1):
http://www.middleatlantic.com/rackac/panels/bpanels.htm#2

If I put another case below the RPC-4216, I will not have room for lacer bars, and will have to buy the slides at that point in time. This is doubtful as 16 4TB drives is plenty for my storage needs.

Being an engineer, I have no problem skipping the manufacturer-recommended slides (folks on here have stated they are the wrong width anyway), since lateral support across the bottom of the chassis is plenty; the bottom of the chassis will not flex, due to its design. However, I agree that if folks can't look at something and tell where the load will be carried and where it needs to be carried, they shouldn't do stuff like this.

For me, I'm not doing a seismically qualified install as this is for my home... Then again if it was an industrial setting with seismic concerns, I wouldn't be using the Norco case in the first place.
 
So I have 10 HDDs in the case using 3 backplanes and it is working great. I just ordered 10 more HDDs to fill the case, but found that I cannot power up the server whenever I have more than 3 backplanes connected. It is not specific to any backplane, as I tried many combinations. I don't even have any HDDs plugged in yet.

This leads me to believe it is a PSU problem, but according to this:

http://hardforum.com/showthread.php?t=1790992

my 900W PSU should be enough, especially considering I have removed all HDDs and SSDs. Thus it seems to be a backplane problem. Anyone running an RPC-4224 with all 6 backplanes connected and able to power up? What is your PSU?

My case was bought in Nov 2012. Not sure if the backplane design matters.
 
So I have 10 HDDs in the case using 3 backplanes and it is working great. I just ordered 10 more HDDs to fill the case, but found that I cannot power up the server whenever I have more than 3 backplanes connected. It is not specific to any backplane, as I tried many combinations. I don't even have any HDDs plugged in yet.

This leads me to believe it is a PSU problem, but according to this:

http://hardforum.com/showthread.php?t=1790992

my 900W PSU should be enough, especially considering I have removed all HDDs and SSDs. Thus it seems to be a backplane problem. Anyone running an RPC-4224 with all 6 backplanes connected and able to power up? What is your PSU?

My case was bought in Nov 2012. Not sure if the backplane design matters.

I use a Seasonic 400W Platinum PSU with all six backplanes running and it powers up fine. In the past I've used a 1000W Platinum and a 560W X-series Gold with no issues. Are you running everything off one rail, perhaps?
 
No I separated them. Well, by separating I mean using different lines coming from the PSU. I hope each line is one rail?
 
Just curious. Is there a certain type of drive that fits in an RPC-4220 slim optical bay? I purchased a slim bluray drive but it seems just a hair too small for that slot. I was starting to wonder if there are desktop slim drives and laptop slim drives perhaps?
 
No I separated them. Well, by separating I mean using different lines coming from the PSU. I hope each line is one rail?

I had a similar issue with an 850W PSU; I can't recall the brand. It turned out that PSU could not supply the required wattage across its hard drive power rails (IIRC it had 3). I switched to a Corsair 750W PSU running 24x 7200RPM drives across all 6 backplanes with no issue.
 
What is everyone using for cable management arms with their Norco chassis?

I have my lovely RPC-4220 and RPC-470 in my 12u rack but I am in dire need of some cable management.

I'm surprised Norco doesn't make something to complement its line.
 
While I will not stand by Norco backplanes, and it probably is the backplane, you cannot say it is not the PSU.
Yes, Seasonic makes great supplies, but they are a consumer PSU maker. As such, they put more into providing power to the video card than anything else.

750W is most likely not going to be enough for 24 drives, particularly if you are not staggering the spin-up. While it should be enough for 8, it would not surprise me if its 12V rail cannot keep up. Again, I do not know how much current they allot to the 12V rail. Also, you do not mention the rest of the hardware: do you have a massive video card? Which CPU? Etc.

Regardless, I think you will need more than 750W when you go to fill that case with 24 drives. Check out chassis makers that offer 24-drive units with an included PSU; they all start with 900+W supplies, many pushing 1200W redundant supplies.
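For a rough sense of why 24 drives strain a 12V rail, here is a back-of-envelope sketch in Python. The per-drive current figures are assumptions for illustration (typical 7200RPM 3.5" drives pull roughly 2A at 12V during spin-up and well under 1A once spinning); check your drive's datasheet for real numbers.

```python
# Back-of-envelope 12V budget for a 24-drive chassis.
# Assumed figures: ~2.0 A at 12 V per drive during spin-up,
# ~0.6 A at 12 V per drive once spinning. Real values vary by model.
DRIVES = 24
SPINUP_A = 2.0     # assumed 12 V surge per drive at spin-up
RUNNING_A = 0.6    # assumed 12 V steady-state per drive

# All drives spinning up at once (no staggered spin-up):
simultaneous = DRIVES * SPINUP_A * 12

# Staggered spin-up: worst moment is one drive spinning up
# while the other 23 are already running.
staggered = (SPINUP_A + (DRIVES - 1) * RUNNING_A) * 12

print(f"simultaneous spin-up: {simultaneous:.0f} W on the 12 V rail")
print(f"staggered spin-up:    {staggered:.0f} W on the 12 V rail")
```

Under these assumptions, simultaneous spin-up alone wants roughly 576W from the 12V rail before the CPU, board, or video card get anything, which is why staggered spin-up (or a bigger supply) matters so much here.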

This is old, but just to give a heads up about the Seasonic X-750 PSU: it is a top-notch, server-grade PSU, and it supplies enough current on its single rail to the Molex connectors for disks. I have seen a lot of 4224-based systems running full of 2-4TB drives for years without a single issue off a Seasonic 750. About the same as the Corsair HX750.
 
This is old, but just to give a heads up about the Seasonic X-750 PSU: it is a top-notch, server-grade PSU, and it supplies enough current on its single rail to the Molex connectors for disks. I have seen a lot of 4224-based systems running full of 2-4TB drives for years without a single issue off a Seasonic 750. About the same as the Corsair HX750.

Yeah, not sure if you saw my follow-up post, but my issue was the terrible-quality 1-to-7 Molex splitter that Norco sold me. It was shorting out the backplanes and ended up costing me 5 hard drives and 2 backplanes.

The power supply was always fine, and top-notch no less.

http://hardforum.com/showpost.php?p=1039528017&postcount=1002
 
Sorry, but if you even for one moment thought you could feed 24 drives off a single Molex feed, or even 12 if you doubled them up, you were an idiot.

As for the shorting claim, I'm skeptical.

Feed the backplanes from the PSU; if you have a decent common-rail supply, then use the Norco cable to tie the second set of sockets together to load-balance the power draw.
 
Sorry, but if you even for one moment thought you could feed 24 drives off a single Molex feed, or even 12 if you doubled them up, you were an idiot.

As for the shorting claim, I'm skeptical.

Feed the backplanes from the PSU; if you have a decent common-rail supply, then use the Norco cable to tie the second set of sockets together to load-balance the power draw.
Hey, don't be an asshole. My PSU (Seasonic X-750) pumps out 62A on a single rail regardless of how many Molex connectors are used, so that is irrelevant.

The splitter that Norco supplied had very thin wire gauge, and the Molex pins were loose; I'm assuming they were shorting out because of the loose connectors.

It's also not just me, read the reviews at Newegg: http://www.newegg.com/Product/Product.aspx?Item=N82E16816133040
 
It's not a case of being an arse to you; it's a case of you spending a great deal of money on gear, then making a big scene blaming drives for issues that would have been clear to anyone who should be anywhere near a system.
I couldn't give a hoot about the PSU; look at the wire gauge and then figure out its current-carrying capacity. You have just stated yourself that the wire gauge looks thin, so add more circuits to carry more total current.
A short means just that, a SHORT. Loose pins in the plugs mean you haven't inserted them far enough, which is very likely, as they do need a lot of pressure to get them home. If you want to see a short, look around for threads and forums with bad backplanes; you might even find me in them, as I have built a lot of these chassis and repaired them too.
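The wire-gauge point can be made concrete with a quick sketch. The ratings below are assumptions for illustration only (a Molex pin is often rated around 11A, and 18 AWG chassis wire around 10A); the per-drive spin-up current is an assumed figure too.

```python
# How many drives can one 12 V feed realistically spin up?
# Assumed ratings for illustration only -- check your actual hardware.
PIN_RATING_A = 11.0      # assumed rating of a single Molex pin
WIRE_18AWG_A = 10.0      # assumed rating of 18 AWG chassis wire
SPINUP_A = 2.0           # assumed 12 V surge per drive at spin-up

def max_drives_on_one_lead(limit_a: float, per_drive_a: float) -> int:
    """Drives a single feed can spin up without exceeding the current limit."""
    return int(limit_a // per_drive_a)

# The weakest link (wire or pin) sets the limit:
limit = min(PIN_RATING_A, WIRE_18AWG_A)
print(max_drives_on_one_lead(limit, SPINUP_A))  # 5
```

So even with generous assumptions, a single lead is good for a handful of drives at spin-up, nowhere near 24 — which is the point about adding more circuits rather than blaming the PSU.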
 
These single-12V-rail PSUs effectively have a built-in splitter inside them, meaning all the 12V lines already come from the exact same source.
On a good PSU like a Seasonic, the wires and connectors are well built and thick enough.
 
These single-12V-rail PSUs effectively have a built-in splitter inside them, meaning all the 12V lines already come from the exact same source.
OFFS, it has nothing to do with being a single rail UNLESS you are joining multiple rails together at the backplane (the two sockets on each backplane are joined, making a short if you feed two different rails into it).

On a good PSU like a Seasonic, the wires and connectors are well built and thick enough.
Not if you are like knuckles, who tried running 24 farken drives off a single lead... Read the rest of the posts, please.
 
Good afternoon. I was hoping someone has run across this or has an idea of what might be happening. I have an RPC-4224 case that I got from a friend who upgraded to a really nice Supermicro case. I also managed to get a great deal in a trade for a Dell PERC H710P card, which is supposed to be an LSI 9265-8i 1GB-cache card with Dell firmware. It seems a lot of people have used Dell PERC cards because of the price; however, I can't find much data on the H710P that I got. I purchased 5 4TB WD RE hard drives to start with.

So here is the problem I am having. I got 2 SFF8087-to-SFF8087 cables to go from each port of the RAID card to the top 2 backplanes in the server case. I installed the drives, 4 on the top backplane and 1 on the second backplane. When I go into the RAID BIOS, it lists the drives on the first backplane with disk IDs 28, 29, 30, and 31. Then it lists the drive on the second backplane as 28 (or 29, 30, or 31, depending on which slot I stick it in). It was really strange behavior; no matter how I changed things, this always happened. So I was thinking it was some incompatibility between the backplanes and the Dell card.

So here is what I just tried. I bought 2 cables that plug into the Dell card with SFF8087 on one end and split out to 4 individual SATA ports each, for a total of 8 ports. I also had a surplus of 2TB drives. So I hooked up the new cables and 8 2TB drives, 1 to each SATA plug; no backplane this time. In the RAID BIOS it lists the disk IDs as 28, 28, 29, 29, 30, 30, 31, and 31. So in short, it is doing the same thing as the other cables and the backplanes with the 4TB drives.

So I am kind of at a stopping point. I am not sure if this is just some incompatibility from the Dell card not being in a Dell system. Does this Dell RAID card need some kind of proprietary Dell cable? (It would seem it should take a standard cable, since it is basically an LSI card.) Or lastly, is it just a bad card? If anyone has experience with this card or has run into a similar situation, please let me know. Thanks so much for taking the time to read this.
 
I've had a 4220 for a few years now, and for some reason 3 of the 5 backplanes have gone haywire. The 3 that are not working properly only power the very last drive slot and not the rest. Has anyone seen this behavior before?
 
I've had a 4220 for a few years now, and for some reason 3 of the 5 backplanes have gone haywire. The 3 that are not working properly only power the very last drive slot and not the rest. Has anyone seen this behavior before?

Flaky backplanes seem to be a common complaint about the Norcos.
 
I just read the rest of the thread, didn't realize that people were having so many issues! Mine has been working pretty well since day one, but guess I was just one of the rare lucky ones. Looks like I'll have to see if I can get one of the new backplanes.
 
It is hard to say how common backplane failures are on these units. Obviously, some people are having problems. But likely there are a lot of people who have had no problems (I am in the latter group). Unfortunately, I do not know of anyone collecting failure rate data on this sort of hardware.

I see on the newegg reviews that the 4220 has 15% below-average reviews, and the 4224 has 19% below average reviews. That is NOT a failure rate, since we can guess that people with problems are more likely to post reviews. However, by comparing with newegg reviews of other hardware, we can see that it is not atypical. My rule of thumb for newegg reviews is that less than 5% below-average reviews is excellent, less than 10% is good, 10-20% is fair, 20-30% is poor, and more than 30% is terrible.
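That rule of thumb is easy to write down; here is a small Python sketch of it (the bucket boundaries are just the ones stated above, with the edge cases assigned arbitrarily):

```python
def review_grade(pct_below_avg: float) -> str:
    """Bucket a product by its share of below-average reviews (rule of thumb)."""
    if pct_below_avg < 5:
        return "excellent"
    if pct_below_avg < 10:
        return "good"
    if pct_below_avg <= 20:
        return "fair"
    if pct_below_avg <= 30:
        return "poor"
    return "terrible"

print(review_grade(15))  # the 4220's 15% -> fair
print(review_grade(19))  # the 4224's 19% -> fair
```

By this yardstick, both cases land in the "fair" bucket, consistent with the not-atypical reading above.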

So, if I had to guess (obviously this is not a conclusive study), I would say that the Norcos have their share of problems, but they are not among the most troublesome hardware that you can buy.
 
I am contemplating building a storage server with a 4224 case; what do I need to know? I have never done something like this before, but I am comfortable with hardware and have many years of IT experience building servers, etc. I already have an Areca RAID card that can be used in JBOD mode, and I am thinking about picking up a second M1015 card, planning on using SnapRAID. I already have a board, CPU, and RAM I can put into the case. How difficult is the case to work with?
 
I am contemplating building a storage server with a 4224 case; what do I need to know? I have never done something like this before, but I am comfortable with hardware and have many years of IT experience building servers, etc. I already have an Areca RAID card that can be used in JBOD mode, and I am thinking about picking up a second M1015 card, planning on using SnapRAID. I already have a board, CPU, and RAM I can put into the case. How difficult is the case to work with?

I built my system with a 4220, but it's close enough.

First thing to do is watch out for sharp edges.

Second: take out the center fan wall to be able to connect to the backplanes.

Third: pull out every single backplane and look at them. My particular case came with a backplane with the capacitors soldered in reverse polarity. Others have seen backplanes with missing components.

I replaced the fans with Noctuas to reduce the noise a little, but the included fans (80mm) are decent.

Not sure if this is just me or not, but I had no success wiring 2 backplanes with the same power cable. I currently have 1 power cable per backplane, with the 2 extra power connectors sitting unused.

Lastly, their cases do not fit in a standard-width rack with their recommended RL-26 rails. Most people just use the shelf-type rails and let the case sit on them.
 
I'm still trying to figure out if the Norco 4220 can be mounted in a rack cabinet (I seem to see everywhere that the case is too wide?!).
If the 4220 can be mounted in a rack cabinet: which one, and with what rails? I would love to get the answer, as I've been searching this forum and no real positive answer could be found.

Thanks.
 
Of course it can. It's a rackmount case after all. I have mine in an APC cabinet using generic newegg rails. They're terrible and a bit tight, but they do work.
 
I'm still trying to figure out if the Norco 4220 can be mounted in a rack cabinet (I seem to see everywhere that the case is too wide?!).
If the 4220 can be mounted in a rack cabinet: which one, and with what rails? I would love to get the answer, as I've been searching this forum and no real positive answer could be found.

Thanks.

Most of those posts where people claimed not to be able to fit the case are due to mounting the rails incorrectly in the rack.

I had a 4020 and a 4220 mounted in a Dell rack using generic iStarUSA rails I bought at a local store.
 
Don't know if this is an appropriate place to post this or not (and apologies if it's not)

Rather than professing ignorance of the rules and apologizing, you should have checked the easily available rules of these forums before making this inappropriate post.
 
@Blue Fox & @nitrobass24

Thanks, both of you; so you confirm that any generic rails should work for the Norco RPC-4220. Cool.
So does that mean the RPC-4220 is EIA compliant?
 
Looks like I just joined the club and ordered an RPC-4216.

I have no need for 24 slots, and would rather put fan controllers and such in the 5.25 bays.

Speaking of which, I just noticed the 5.25 bays have some sort of spring-loaded covers on them. Anyone know if they are easy to remove and replace with accessories (such as fan controllers)?
 
They are oversize doors, so you'd have to make a bracket of some sort if you wanted to access the bay from the outside without having a gap around it. Stupid design decision; it made the thing seem like an old Packard Bell PC. Other than that, it's a nice case. I did the fan wall mod and put 3 high-power 120mm fans in there (it's in a server room, so I didn't care about noise). It moves a ton of air, and the drives stay very cool.
 
They are oversize doors, so you'd have to make a bracket of some sort if you wanted to access the bay from the outside without having a gap around it. Stupid design decision; it made the thing seem like an old Packard Bell PC. Other than that, it's a nice case. I did the fan wall mod and put 3 high-power 120mm fans in there (it's in a server room, so I didn't care about noise). It moves a ton of air, and the drives stay very cool.

Thank you.

After I posted the above, I read through the thread and found posts suggesting that new versions of the 4216 have standard blanks instead of the doors. I can't find any pictures online, though.

I guess I will know when mine gets here.

Oversized holes I can deal with using some electrical tape. It won't be pretty, but this thing is going in my basement, so I don't necessarily care about pretty.
 
Sorry, but if you even for one moment thought you could feed 24 drives off a single Molex feed, or even 12 if you doubled them up, you were an idiot.

As for the shorting claim, I'm skeptical.

Feed the backplanes from the PSU; if you have a decent common-rail supply, then use the Norco cable to tie the second set of sockets together to load-balance the power draw.

Just to be clear about best practice for connecting up power here.

Are you suggesting running a separate strand from the PSU out to the backplane for each of the Molex connectors? That seems a little bit overkill. My PSU only has three strands in total, and only two with 4-pin Molex connectors on them... I would have thought that one strand should at least be able to handle 8 drives...

Also, do the secondary Molex connectors on the backplanes really need to be connected? I thought those were only there for redundant PSUs?

Please advise!
 
I only connected a single molex on each backplane (I think each goes back to the PSU, although it might be 2 & 2). Been a while since I opened it up.
 
Just received my new 4216.

As mentioned previously, they no longer come with the swinging doors on the 5.25" bays.

They ship with ventilated blanks installed (see left side) and the holes are no longer oversized (see how the touch fan controller fits on the right).

14880277219_ab02d76002_c.jpg


Certainly not airtight, as many of you would have liked, but they will do the job. The trays behind them have space and holes to mount four 2.5" drives on each side, or one 3.5" drive, or one 5.25" device.


Close up to show the fit of the bay:

14880389658_8d0194425e_c.jpg


Mine came with a small bend in the rear wall (where the two fans are attached, next to the PSU mount hole). Hopefully this won't be a big deal; I might be able to bend it back by hand.

Otherwise, this seems like a fantastic server case. There are some sharp edges, though; I may wear gloves during the build, as I already lost some skin.

Still waiting for parts, but I am excited to get started on this build.
 
Nice to see they fixed the stupid flappy doors. I've been pretty pleased with mine otherwise. BTW, make sure you open/close the vent on each tray depending on whether it's occupied; it's kind of hidden.
 
BTW, make sure you open/close the vent on each tray depending on whether it's occupied; it's kind of hidden.

Thank you for the heads up.

Can you give me details on how to do this?

I pulled out a caddy, and don't see any flap on the caddy that can be closed. Peeked inside the backplane as well, and also didn't see anything apparent.

Much appreciated,
Matt
 