*Official* Norco data storage products thread

I found the direct link to those fans on their website but, as stated above, I didn't see any static pressure info. Do you know where that might be, and do you maybe have that info for the S-FLEX series as well?

It's possible I bought the wrong fans due to static pressure differences, but in my experience, the S-FLEX fans have undoubtedly been the best. For a while, I debated the 38mm-deep Ultra Kaze series, which are stated to be "static pressure type" fans. However, they use sleeve bearings and their MTBF is 30,000 hours, compared to 150,000 hours for the S-FLEX series.
 
I haven't measured it myself (other than simple empirical comparisons to other fans) but I've read quite a few analytical reviews that were impressed with the static pressure. I don't want to misquote one though.

Allow me to rephrase my statement: the Gentle Typhoon AP-15 is the best all-around fan if you care about noise per unit of 1) airflow, 2) pressure, and 3) cost combined. You can find a better fan for any one or two of those things, but not all three. Or rather, if you can find a better fan, let me know :)
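To make "noise per unit X" concrete, here's roughly how I think about scoring it. The numbers below are placeholder figures purely for illustration, not datasheet values:

[CODE]
# Rough sketch of the "noise per unit airflow/pressure" idea.
# Spec numbers are placeholders -- substitute real datasheet values.
fans = {
    # name:   (dBA,  CFM, mmH2O,  USD)
    "Fan A": (28.0, 58.0, 1.9, 15.0),
    "Fan B": (33.0, 70.0, 1.4, 6.0),
}

for name, (dba, cfm, mmh2o, usd) in fans.items():
    print(f"{name}: {dba / cfm:.2f} dBA per CFM, "
          f"{dba / mmh2o:.1f} dBA per mmH2O, ${usd:.2f}")
[/CODE]

The AP-15's trick is that it sits near the top on the first two ratios without costing much more than the budget options.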
 
Mine was wrapped in a USB PCI expansion card box.
I got 2 of the fan wall brackets delivered from them yesterday. Mine were wrapped together in clear plastic inside a 'keyboard' box. Hehe.

Just like several people here, I just need to find some decent fans now.
 
My fan selections:

3 x Noctua NF-P12: http://www.noctua.at/main.php?show=productview&products_id=12&lng=en
2 x Noctua NF-R8: http://www.noctua.at/main.php?show=productview&products_id=9&lng=en
2 x Noctua NH-U9DX: http://www.noctua.at/main.php?show=productview&products_id=20&lng=en

Power supply is a Corsair AX1200 which is 140mm and very quiet in the middle of its power range (which is where I expect to be with this). This will be in my living room, so it needs to be as quiet as possible. My current ESX machine + external array are my benchmark, hoping to end up quieter if at all possible.

Viper GTS
 
My fan selections:

3 x Noctua NF-P12: http://www.noctua.at/main.php?show=productview&products_id=12&lng=en
2 x Noctua NF-R8: http://www.noctua.at/main.php?show=productview&products_id=9&lng=en
2 x Noctua NH-U9DX: http://www.noctua.at/main.php?show=productview&products_id=20&lng=en

Power supply is a Corsair AX1200 which is 140mm and very quiet in the middle of its power range (which is where I expect to be with this). This will be in my living room, so it needs to be as quiet as possible. My current ESX machine + external array are my benchmark, hoping to end up quieter if at all possible.

Viper GTS

I'm a Noctua fan (heh, pun) as well. I'm running dual Opteron 6128 CPUs, so I have a pair of NH-U9DO A3. I've never seen the CPU temperature register over ambient. If only I could overclock Opterons...

How are the NF-R8s? I'm seeing them at $16 each, which seems awfully high for 80mm fans. I still have a stack of decent generic MassCool 80mm fans that cost me less than $1 each, so it's hard to justify spending that much, but part of me wants to Noctua the crap out of my RPC-4224.
 
I have never used the 80mm Noctuas, my experience thus far has all been 120mm. I am hoping for good things.

The Noctua part of my order totalled ~$215, which is certainly a lot. Yate-Loon or similar are a far better value. I opted to cut corners on CPU & RAM rather than fans. CPU & RAM will change over the years I'll use this; I have to live with the fans from day one, so I wasn't going cheap there.

Out of curiosity what motherboard did you use? I really wanted to find a competitive AMD option but all the SSI EEB dual socket G34 boards were really limited on PCI-e slots. All of the good stuff was EATX and most physically too large (13x16" range). I ended up going with a Tyan S7025WAGM2NR for sheer bandwidth reasons. Dual E5606 + 24 GB for now & eventually I'll upgrade them to dual hex cores + 48/64 GB.

Viper GTS
 
I have never used the 80mm Noctuas, my experience thus far has all been 120mm. I am hoping for good things.

The Noctua part of my order totalled ~$215, which is certainly a lot. Yate-Loon or similar are a far better value. I opted to cut corners on CPU & RAM rather than fans. CPU & RAM will change over the years I'll use this; I have to live with the fans from day one, so I wasn't going cheap there.

Out of curiosity what motherboard did you use? I really wanted to find a competitive AMD option but all the SSI EEB dual socket G34 boards were really limited on PCI-e slots. All of the good stuff was EATX and most physically too large (13x16" range). I ended up going with a Tyan S7025WAGM2NR for sheer bandwidth reasons. Dual E5606 + 24 GB for now & eventually I'll upgrade them to dual hex cores + 48/64 GB.

Viper GTS

I posted a pretty thorough build log in my ZFS Thoughts thread, including all parts and photos :)

I went for the Supermicro H8DG6-F with integrated IPMI and LSI-based SAS controller, terminating in SFF-8087 ports. Comes in just below $600, but it includes dual Northbridges, for a TON of PCI-e lanes: 3 x16 slots, 3 x8 slots, and that integrated controller occupies the last 8 lanes. I absolutely love the board.
 
How can you modify the NORCO 4220 backplane to support SGPIO? (I've read in the Newegg comments section that it's possible.)

Alternatively, how much does a replacement backplane cost that does support SGPIO?
 
@jmk396: I'm not familiar with the 4220 backplane; maybe you can post a high-res picture somewhere so that I can have a look at it.

The 4224 (Rev 3.5) backplanes are 'upgradeable' or 'modifiable' in the sense that Norco designed the backplane PCB with SGPIO in mind. There is a footprint for a backplane management IC (I assume it's one of the AMI backplane management chips, from the looks of it). The backplanes come without this chip and you could solder one on there if you wanted to; however, getting these chips will be tricky, as I believe they are only available to OEMs. The backplanes do, however, support the failed/locate red LEDs with individual control. There is a header (I don't remember the actual reference designator of this connector off the top of my head right now) that lets you control each red LED separately. You could connect that to your controller, if the controller supports this. Maybe the 4220 backplane was designed the same way?!?
 
On the 4224 case, which way are the middle bracket fans supposed to face?

Should they be pointing away from the hard drives and toward the mobo and back of the case?

If so, how do the hard drives get cooled?
 
Hoping for a quick DIY or cheap buy.

The Norco RPC-4224 case comes with these ribbon cables for the power/indicator headers. Any idea on a quick way to 'extend' them? Normally I would just cut the wire, splice on a longer length, and heat-shrink it (or just electrical-tape it if I'm lazy).

These are ribbons, so I can't exactly cut and splice them. Any way to easily extend them? For those who own this case, there is a power LED extender (I think) which would be perfect if I had an extra 12 of them (6 each; I have 2 cases, with a 3rd coming soon).

http://www.frozencpu.com/products/1...Motherboard_Reset_Switch_Extension_Cable.html

How about a bag of the ends? Do they sell those? I can make them fine and would rather not pay $36 plus shipping for just some simple wire.

BTW, has anyone else noticed that these same jumper headers are kinda 'fat'? They are a very tight fit on my motherboard headers when bunched together. I tried on 2 different Supermicro boards and both were a very tight fit. [Power/HDD/NIC1/NIC2/Reset/Power]
 
On the 4224 case, which way are the middle bracket fans supposed to face?

Should they be pointing away from the hard drives and toward the mobo and back of the case?

If so, how do the hard drives get cooled?

I pull air from the front towards the rear. So the wall has fans blowing away from the hard drives.
 
If anyone has been considering a 4224, I noticed NewEgg not only has 10% off, but a free BlackBerry HS-300 Bluetooth headset with purchase of one right now. I would have posted in the Deals thread, but I guess I need more posts. If anyone else feels it is worth it, you can post it over there.
 
I've been debating on buying one of these norco enclosures (probably this one: http://www.newegg.com/Product/Produ...ction-_-cables-_-na-_-na&Item=N82E16811219033) or a supermicro equivalent.

I read that with the Norco cases, when you pull a drive out, all the drives on that channel go offline. Is this really the case? So it's not true hot swap then?

Also, is anyone running one of these full of 1 or 2TB consumer-grade drives? Are there any issues with weird I/O errors due to vibration during rebuilds or intense operations? I really want to move my stuff to a SAN-based environment. I'd probably put OpenFiler on it.
 
I run a 4220 with Samsung F3s and F4s and so far, no problems. If I pull out one drive, all the others keep running. So far, I also have 0 I/O errors (at least as far as ZFS and OI show). Never had to do a rebuild...

If you go the SAN route, think about OpenIndiana (or any other Solaris clone) and napp-it. I love it much more than OpenFiler. I get better performance and no space limit, and the main bonus: ZFS. Once you use ZFS, you don't want anything else :) Or at least I don't :)

Anyway, take a look at the 10TB+ system builds thread in this forum and see what people are running in the case you mentioned.
 
I've been debating on buying one of these norco enclosures (probably this one: http://www.newegg.com/Product/Produ...ction-_-cables-_-na-_-na&Item=N82E16811219033) or a supermicro equivalent.

I read that with the Norco cases, when you pull a drive out, all the drives on that channel go offline. Is this really the case? So it's not true hot swap then?

Also, is anyone running one of these full of 1 or 2TB consumer-grade drives? Are there any issues with weird I/O errors due to vibration during rebuilds or intense operations? I really want to move my stuff to a SAN-based environment. I'd probably put OpenFiler on it.

No, that is not true. I had one full of 1TB drives; now it's half full with 2TB drives.

I have not had any dropouts or I/O issues due to vibration during intense I/O operations, rebuilds, or anything like that.

I am using 10x Hitachi 2TB 5K3000s currently.

I wouldn't waste your time with OpenFiler. It does not perform very well in my testing.

If you want iSCSI, the MS iSCSI target software is now free. Just install it on Server08.
 
OK, good to know that vibration isn't an issue and that the drives dropping out is false. I think I will go ahead and buy one soon. I may cannibalize my IBM SAN for the drives; undecided yet. I could add bigger ones as needed. The IBM SAN I have is cool and all, but I can't get replacement drives/parts and can't put my own drives in, so it's basically a very cool-looking dinosaur.

I'd rather go with something Linux-based, so I will look at my options as far as software goes. Lots of options.
 
Definitely at least take a look at the ZFS feature set. I've been using Linux for the last 15 years, but I don't regret switching to ZFS for NAS/SAN. And since you already have knowledge of Linux (I assume), the learning curve for Solaris won't be very steep. Well, you can actually do most of the work from the GUI, but sometimes the CLI comes in handy...

The things I love the most: RAID at the FS level, snapshot support, snapshot cloning, NFS and CIFS integrated into the FS (no need for Samba), no fsck, read & write caching, copy-on-write, ...
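As a small taste of the "integrated into the FS" point: sharing is literally just a dataset property. A minimal sketch (the dataset name tank/media is made up; assumes the standard zfs CLI on an OpenIndiana/Solaris-type box):

[CODE]
# Turn on NFS and CIFS sharing as plain dataset properties -- no separate
# Samba/NFS config files to maintain. The dataset name is hypothetical.
import subprocess

for prop in ("sharenfs=on", "sharesmb=on"):
    subprocess.run(["zfs", "set", prop, "tank/media"], check=True)
[/CODE]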

Anyway, take a look; it doesn't hurt. Better to do it now, while you are rebuilding the server anyway. As I said before, I tried OpenFiler, but the performance and features just weren't on the level of ZFS.

Matej
 
Yeah, ZFS DOES look nice, so I'm sure I'll experiment with it. I was reading up on it and I like the caching feature. The beauty of this kind of stuff at home is that I actually get to play with it before I put it into production.
 
Yeah, caching is sweet... My server is running with 8GB of memory and currently my cache hit rate is around 95%, serving web pages really fast...

On the other hand, snapshots are the thing I need the most at the moment, since backing up 10TB of data daily/hourly would take a lot of time. A snapshot only takes a few seconds and no extra space, which is great! And recovering old files from snapshots is even easier: just enter the .zfs/snapshot folder, locate the snapshot and copy the file from it, or use Properties-History in Windows. No need to untar a backup archive :)
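If you ever want to script the snapshots instead of clicking around in napp-it, a minimal sketch could look like the one below (it assumes the standard zfs command-line tool is installed and uses a made-up dataset name, tank/data; adjust the name and retention to your own pool):

[CODE]
#!/usr/bin/env python3
# Minimal hourly-snapshot-with-pruning sketch (hypothetical dataset name).
import subprocess
from datetime import datetime

DATASET = "tank/data"   # made-up name; point this at your own dataset
KEEP = 24               # keep the last 24 hourly snapshots

# Create a snapshot named after the current hour.
stamp = datetime.now().strftime("hourly-%Y%m%d-%H00")
subprocess.run(["zfs", "snapshot", f"{DATASET}@{stamp}"], check=True)

# List this dataset's snapshots, oldest first, and prune the old hourlies.
out = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-s", "creation",
     "-r", DATASET],
    capture_output=True, text=True, check=True).stdout.split()
hourly = [name for name in out if "@hourly-" in name]
for snap in hourly[:-KEEP]:
    subprocess.run(["zfs", "destroy", snap], check=True)
[/CODE]

Dropped into cron, that gives you a rolling hourly history without any tarballs.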

Matej
 
By the way, if you will be using NFS, set the sync property to disabled in napp-it, otherwise the performance will be VERY bad (you need an SSD log device for sync writes to perform as they should). By default in ZFS, all writes over NFS are synced...

Matej
 
I just thought about something: these Norco 4U cases could easily become very nice home storage server towers that could be put under a desk, if stood vertically, on wheels, and with a nice cover the same color as the front bays.
The space between the rack chassis itself and the better-looking panels could also be filled with damping foam to help lower the noise.

Some sort of vertical cabinet / dampened enclosure with optional wheels, sold as a kit.

I say this because the tower format is much easier to live with for home users like me who don't have or want big, tall rackmounts, don't want wide, space-eating racks, and don't want to put them on their table because of the vibration.

And of all the other alternative standard desktop, non-rackmount cases, none of them allow this kind of storage capacity and ease.

What do you think of this, Norco guy?
 
Another request: it would be a good and cheap idea to drill holes in the side of the case for mounting a few 2.5" boot/OS HDDs on the RPC-4224.
 
Another request: it would be a good and cheap idea to drill holes in the side of the case for mounting a few 2.5" boot/OS HDDs on the RPC-4224.

Yeah, people do it all the time. I have even drilled holes in mine to accommodate 1/2" tubing.
 
Another request: it would be a good and cheap idea to drill holes in the side of the case for mounting a few 2.5" boot/OS HDDs on the RPC-4224.

If you are using SSDs for the boot drive (things that don't vibrate or generate much heat) then you don't need holes. Velcro tape works wonderfully.

If you don't want to risk your warranty by putting Velcro on the drive, get a mounting bracket for the SSDs, screw them into it normally, and then use Velcro to secure the mount to the case.

Don't do this for spinning drives. It might take a year or more, but the small vibrations from the drive will eventually cause the Velcro to let go.
 
Is there any way to mount a 3.5" drive on the inside of the case without drilling holes? The mounting kit doesn't come with any instructions, as we know.
 
@zeroARMY: I assume you are referring to the RPC-4224?

Some of the brackets are CPU cooler supports (they go on the backside of the MB between the MB and the chassis) and the rest of them are for mounting different power supply units, namely redundant power supply modules. I do agree with you that Norco should at least include a single sheet with the brackets to indicate what these parts are for...
 
I've just read through most of this thread, but I still have some questions.

I was looking at getting the Norco 4224, however, now I am not so sure as they have released other models.

I have very limited understanding of rackmountable servers (usually looked after by the infrastructure guys at my work) so please bear with me.

As far as I can tell,

1. Norco 4224 is a basic unit, with just the chassis and a backplane with 6 SFF-8087 ports.

2. In order to use these, I'd need a PCI Express card with 6 SAS ports, each connected to one of the corresponding ports on the backplane with an SFF-8087 cable. Each SAS port can control 4 drives only. I think?

Now I was pretty much set on the Norco 4224, however I am unsure what to purchase with the introduction of the newer products.

The new DS-24xx series come with a SAS Expander. Being new to this, I wasn't exactly sure what this was. From what I can gather,

1. this has an on-board logic chip that controls the drives, and
2. this allows single cable connectivity using an SFF-8088 cable.

How would I connect the drives within that chassis to a mainboard with an SFF-8088 cable? Do I need a special PCI Express expansion card?

Other than less cable clutter, what are the advantages of going with the DS-24xx series? Are the various PCI Express controller cards not as good as using on board controllers? Why/why not?

Thanks for your time :)
 
@brendanz:

You are correct for the most part; here is some additional information for you:

1. The RPC-4224 has 6 backplanes each with one SFF-8087 connector. A single SFF-8087 carries 4 individual SATA/SAS connections, hence 6 x 4 = 24 drive bays.

2. There are a few RAID controller cards on the market that have a SAS expander chip on-board to give you the 24 internal ports you need for the RPC-4224. Those cards would usually have 6 SFF-8087 connectors. These types of cards are fairly expensive (you didn't mention if you have a particular budget or price point in mind). There are cheaper alternatives you can use to achieve the same goal. Do a search based on SAS expanders and you will find lots of different builds on this site...

3. The new DS-24 expander chassis that Norco recently introduced is rather on the expensive side if you are looking at this for home use. I think the price point of this chassis is geared more towards enterprise use.

4. Think of an expander chip like an Ethernet switch. With an appropriate controller, you can talk to as many HDDs as the expander has ports (that is, if you don't cascade the expander chips). For example, if you take a 36 port expander chip, you would normally use 4 ports (a single SFF-8087 connection) to connect the expander chip to the controller, which leaves you with 32 ports to connect to HDDs. The limitation of such an architecture is throughput, as the combined read or write throughput to all drives will be limited to the combined throughput of the 4 connections you have between the controller and the expander chip. Some controller cards as well as expanders will allow you to dual link (use up to 8 ports/connections) and hence double your throughput (see the quick numbers sketched right after this list).

5. SFF-8087 connections are used within a chassis and SFF-8088 cables are for inter-chassis connections (e.g. you would use a SFF-8088 cable to connect a HBA or RAID controller to an external chassis filled with additional HDDs). Both SFF-8087 and SFF-8088 carry 4 SATA/SAS connections. There are some other minor differences between these cables but I don't want to confuse you with those and they are for the most part irrelevant for the majority of home type servers/storage setups.

6. Most chipset based SATA ports are not capable of communicating with expander chips and hence you will need a controller card that has the ability to talk to those chips.
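To put rough numbers on point 4 (treating a 6Gbps SAS lane as roughly 600 MB/s usable after encoding overhead; a back-of-the-envelope sketch only):

[CODE]
# Back-of-the-envelope port/bandwidth budget for a 36-port expander.
MB_PER_LANE = 600               # ~usable MB/s per 6Gbps SAS lane

expander_ports = 36
for uplink_lanes in (4, 8):     # single SFF-8087 link vs. dual link
    drive_ports = expander_ports - uplink_lanes
    shared = uplink_lanes * MB_PER_LANE
    print(f"{uplink_lanes}-lane uplink: {drive_ports} drive ports "
          f"share ~{shared} MB/s back to the controller")
[/CODE]

That shared figure is the combined ceiling for all drives hanging off the expander, which is why dual linking matters once you start filling the bays with fast drives.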

Hope this helps...
 
If you are using SSDs for the boot drive (things that don't vibrate or generate much heat) then you don't need holes. Velcro tape works wonderfully.

If you don't want to risk your warranty by putting Velcro on the drive, get a mounting bracket for the SSDs, screw them into it normally, and then use Velcro to secure the mount to the case.

Don't do this for spinning drives. It might take a year or more, but the small vibrations from the drive will eventually cause the Velcro to let go.


Yeah, I also thought about using a PCI-slot bracket like the Scythe Slot Rafter for 4 x 2.5" drives or an 80mm fan (useful for helping the overheating ARC-1880), but the case could come with pre-drilled mounting holes; that would be a cheap upgrade for future revisions.
And why not holes that could also safely handle a 3.5" HDD?
 
@treadstone

Thanks very much for the explanation. Very helpful. I've got a few other things for clarification if you don't mind though :) :p

@brendanz:

You are correct for the most part; here is some additional information for you:

2. There are a few RAID controller cards on the market that have a SAS expander chip on-board to give you the 24 internal ports you need for the RPC-4224. Those cards would usually have 6 SFF-8087 connectors. These types of cards are fairly expensive (you didn't mention if you have a particular budget or price point in mind). There are cheaper alternatives you can use to achieve the same goal. Do a search based on SAS expanders and you will find lots of different builds on this site...

I see :) As far as a budget goes, well, I'm fairly open. I need to build a whole new system to go inside the server chassis. I was originally just going to put in a standard Asus or Gigabyte board, though I'm starting to think differently. Essentially I'm just after basic storage for my media. I doubt I'd ever have more than, say, 4 drives in use at once, and even then the load would be fairly light (copying, extracting, watching, that's about it). I thought I would research the controller card first, though I am now assuming I can get boards with SAS on board. As far as CPU and memory go, nothing too drastic would be required.

So, for instance, I was originally looking at the HighPoint RocketRAID 2760:

http://www.highpoint-tech.com/USA_new/series_rr276x-rr274x.htm

So from your comments,

1. There is a maximum of 16 channels that can be used by one card, hence the need for expanders. I assume this is due to the PCI Express maximum of 16 lanes?
2. In the case of the RocketRAID 2760, it is likely using some sort of on-board expander chip to provide access to the additional 8 ports (16+8=24). So normally each SAS port has 4 dedicated channels; however, in this case they are being split/shared.

3. The new DS-24 expander chassis that Norco recently introduced is rather on the expensive side if you are looking at this for home use. I think the price point of this chassis is geared more towards enterprise use.


4. Think of an expander chip like an ethernet switch. With an appropriate controller, you can talk to as many HDD as the expander has ports (that is if you don't cascade the expander chips). For example, if you take a 36 port expander chip, you would normally use 4 ports (a single SFF-8087 connection) to connect the expander chip to the controller which leaves you with 32 ports to connect to HDDs. The limitation of such an architecture is throughput as the combined read or write throughput to all drives will be limited to the combined throughput of the 4 connections you have between the controller and the expander chip. Some controller cards as well as expanders will allow you to dual link (use up to 8 ports/connections) and hence double your throughput.

So in regards to the Norco DS-24E, for example

http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ds-24e#

it says "Built in 6Gb/s SAS Expander for single SFF-8088 cable connectivity, LSISAS2x36/28 expander IC embedded" and "Connector One SFF-8088 "IN" connector for connection to the host, two SFF-8088 "OUT" connectors for expansion to an additional JBOD enclosure"

This is where I was getting confused before. So you actually physically connect the controller to the expander with a standard SFF-8087 cable, providing access to 4 channels' worth of throughput. So, with the 36 ports of the LSISAS2x36, 4 ports go to the input, 8 ports go to the outputs for an external JBOD enclosure, and the remaining 24 ports are distributed amongst the drives in that case. This means that they share 4 x 6.0Gbps of bandwidth between them all, provided by the input cable connected to the controller. Some quick maths and the numbers seem like they wouldn't be too bad (with mechanical drives). I suppose I/O would become a factor as well.
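For what it's worth, the quick maths works out roughly like this (again treating a 6Gbps lane as ~600 MB/s usable; a rough sanity check, not a benchmark):

[CODE]
# Rough sanity check: 24 drives behind a single 4-lane 6Gbps uplink.
uplink_mb_s = 4 * 600      # ~usable MB/s of one 4-lane 6Gbps connection
drives = 24

print(f"~{uplink_mb_s / drives:.0f} MB/s per drive if all {drives} stream at once")
# A mechanical drive doing ~100-150 MB/s sequential only becomes
# uplink-limited when many drives are sequentially busy at the same time.
[/CODE]

So for a media box where only a handful of drives are active at a time, the shared uplink is rarely the limit.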

So in summary, the main advantages of using a SAS expander:

1. One cable from the controller card to the SAS expander
2. It acts sort of like an Ethernet switch, so that you can use an inexpensive SAS controller card to control many drives

Disadvantages:

1. A more expensive outlay, in that you need to purchase a case with a built-in expander OR buy an additional expander (often fitting into a 5.25" bay, from what I've read).

Think I'm starting to understand it now.

A last few questions if it's OK :)

1. Can you use any standard controller card with a standard SAS SFF-8087 port to control the SAS expander?
2. What is the purpose of the Norco DS-24D? http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ds-24d

It says it has 6 external SFF-8088 connectors, but makes no mention of a SAS expander being included. Does it just come as the standard 24-bay enclosure plus external connectors for hooking up to other enclosures? I'm assuming this would require either running individual SAS connections directly to a controller or using a third-party internal SAS expander.

3. Assuming the controller card supports it, there's no problem in using 3TB drives?

I might ask for some help later in choosing a motherboard, but I think I'll keep that in a new thread (don't want to derail this even further!!)


5. SFF-8087 connections are used within a chassis and SFF-8088 cables are for inter-chassis connections (e.g. you would use a SFF-8088 cable to connect a HBA or RAID controller to an external chassis filled with additional HDDs). Both SFF-8087 and SFF-8088 carry 4 SATA/SAS connections. There are some other minor differences between these cables but I don't want to confuse you with those and they are for the most part irrelevant for the majority of home type servers/storage setups.

6. Most chipset based SATA ports are not capable of communicating with expander chips and hence you will need a controller card that has the ability to talk to those chips.

Hope this helps...

Absolutely has been helpful. Thanks again.
 
@brendanz: PCI-E lanes have nothing directly to do with how many SATA or SAS ports a controller has; that's apples and oranges! However, the number of PCI-E lanes a controller uses will dictate what your maximum throughput (drives to CPU) will be. For example, if you take a RAID controller that has a 4-lane PCI-E bus interface and you use dual links between your expander and controller (especially if they are 6Gbps links), that would not do you much good, since the PCI-E bus will be the bottleneck between the HDDs and your CPU.
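To illustrate with rough per-lane figures (about 500 MB/s per PCIe 2.0 lane and about 600 MB/s usable per 6Gbps SAS lane after encoding overhead; a ballpark sketch only):

[CODE]
# Ballpark illustration of the PCIe-vs-SAS bottleneck described above.
PCIE2_MB_PER_LANE = 500    # ~MB/s per PCIe 2.0 lane
SAS6_MB_PER_LANE = 600     # ~usable MB/s per 6Gbps SAS lane

pcie_x4 = 4 * PCIE2_MB_PER_LANE       # host-side ceiling of an x4 card
dual_link = 8 * SAS6_MB_PER_LANE      # expander-side ceiling with dual links

print(f"PCIe 2.0 x4 ceiling: ~{pcie_x4} MB/s")
print(f"Dual-link 6Gbps SAS: ~{dual_link} MB/s")
print("Bottleneck:", "the PCIe slot" if pcie_x4 < dual_link else "the SAS link")
[/CODE]

In other words, an x4 card tops out around 2000 MB/s while a dual-linked 6Gbps expander can feed roughly 4800 MB/s, so the slot becomes the limit.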

DS-24E (with built-in expander): You would use a controller with an SFF-8088 connector and an SFF-8088 cable to connect to the DS-24E (controller to DS-24E IN). You can add additional DS-24E enclosures by connecting one of the OUT ports of the first DS-24E to the IN port of the second DS-24E with an SFF-8088 cable. Again, this is an easy and flexible way of expanding your storage; however, keep in mind that you are still limited to the 4-lane throughput, which you are now sharing among ALL drives. A better setup with two DS-24E units would be a controller with two SFF-8088 connectors, hooking up each DS-24E to its own SFF-8088 port on the controller; that way you are dedicating 4 lanes per 24 drives.

DS-24D (no expander): This is obviously a lot cheaper alternative to the DS-24E; however, you need to provide individual connections, either from a controller or an external expander, to each drive in the chassis, hence the 6 SFF-8088 connectors on the back of the DS-24D (6 x 4 = 24). You do need a lot more cables compared to the DS-24E, but it gives you more flexibility, especially if you are looking for higher throughput. Personally I wouldn't buy either, but if I had to choose between the two, I would probably buy the DS-24D and mod the chassis by sticking a cheaper SAS expander alternative into it...

You would have to do some research on the controller you would like to use and see if it supports SAS expanders. Most decent RAID and SAS HBA controllers support them; however, there are compatibility issues between some combinations. Have a look at some of the threads on this forum and you will find a wealth of information... Happy reading :)

It's up to the controller if it supports 3TB drives. Again, you would have to do your own research on whatever controller you want to use for your project.

You didn't mention if this is for your own (home) use or if this is for a business.
 
@treadstone: Thanks for clarifying. I think I misread this extract from the review of that card - http://www.tweaktown.com/reviews/35...24_port_sas_sata_6gb_s_controller/index2.html

The flagship 2760 that we are looking at today is the only 24 drive controller being offered with the technology that makes it possible to utilize all 16 lanes of PCIe bandwidth.

Anyway, I see now :) thanks for clearing all that up.

This is really only for home usage :)
 
I use the 2740 16-port controller and it does a good job with single disks. I haven't tried RAID arrays yet, but it is already fast enough for the things I do. It easily outperforms the mainboard controller with single disks, so it does a good job.

I just ordered the 4220, the 120mm fan bracket, a slimline DVD drive and cables. The 4220 is in production right now and will be shipped in a container in the coming weeks, between the beginning of June and the end of July. Norco could not tell me the exact date, as shipping can take weeks, so the order is on backorder for now. Hopefully it will arrive in Holland soon; I really need a big upgrade, as my current case is a big mess and the side panels can't even close because of all the cable clutter, lol.

I will post some material here as soon as things have arrived.
The Norco contact in Holland is pretty good!


edit: I'm now thinking of using a Corsair H70 water cooler for the CPU, as my current Prolimatech is excellent but a bit too tall for a 4U case, and tower coolers like that are a huge airflow blocker in a server chassis. So it might be a good solution to go with an H70; it is well tested and even performs slightly better than the Prolimatech when both fans are used.
But... it will require some modding of the case. There is not much space left for this cooler on the left or right side of the case, and I really don't know if it will fit while keeping the radiator inside the case, which would be a much friendlier setup than a radiator outside the case; with the fan mounted, it would stick out 7.5 cm from the case...
Has anyone used an H70 in the 4220 or any other 4U case? I would like to see examples of possible setups.
I could go with the smaller H50, but its performance is lower, and I don't want to go down in cooling performance as room temps here can get high.
Any ideas are very welcome, and pictures if possible.

Thanks!
 