24-drive RAID setup: requesting help

Hi, I'm planning to build a noise-free 24×3TB RAID6 home server setup. I've already done some research but I need some help:

The case & racks:

I was first thinking of using a LianLi PC343B cube like here, filled with 4in3 LianLi HDD racks, but when fitted with EX36B racks (with silent 120mm fans) the case will not be able to fit long controller cards like Areca's 24-port ones.
I don't know if there are any silent 4in3 SFF8087 backplanes that would fit?
Are the iStarUSA racks used here silent?

I saw the very interesting Norco RPC-4224; it's not as pretty as the LianLi but seems very functional.
The only (BIG) drawback for me is the very high noise output.
I saw here that there is an $11 3×120mm bracket upgrade available for this case, very interesting, but:

- If I put some silent fans there (like 3× Noctua 1000/1200RPM), would that be enough to cool the 24 drives? I don't want noise, but I don't want them to overheat...

- What about the two (VERY NOISY) counter-rotating 80mm rear fans?
If I replace them with silent 80mm Noctuas, same question: would the cooling and exhaust still be sufficient?
I heard the 1880 runs very hot and needs good cooling despite its fanless stock heatsink.

I think there are also some SuperMicro & Chenbro cases; any good references?
I also noted the Akiwa GHS-2000 but can't find much information or pricing.

About drives: do these 4U cases include an extra 2.5"/3.5" bay for the boot/OS drive?

About all these 4U cases: is there any cover available, for dust protection and better looks?
I have no server rack and will place it on a table, like any normal desktop PC.


The HDDs:

I was planning to use some 3TB Hitachi or WD drives, 5400 or 7200RPM.
Hitachi drives seem to have good compatibility with Areca cards (though the 3TB is "scheduled" in the Areca compatibility list). I don't know how much 5400 vs 7200RPM changes performance/heat output/power consumption?

I will use it mainly as a storage server, but I also need some speed; I wish it could easily handle a dozen clients each downloading random files over 1Gb Ethernet without slowdowns.


The card:

What to choose: the good old ARC-1280ML, or the newer, faster, and also cheaper ARC-1880ix-24?

About boot time: how many seconds do they need to boot with 24 drives?
I saw that the ARC-1680 is very slow to boot, and I'm wondering if the 1880 is similar/better/worse?
I read that boot time increases when the ports are not fully populated, so how much time does the 1880 need with all 24 ports filled with 24 drives?

Do you think there are safety issues with putting 24 drives in a single 60TB RAID6 array?
- About having "only" 2 parity drives for 24.
- About having such a huge single volume.


The PSU:

How much power will I need? Will the Corsair AX850 be enough?
There will be:
24 HDDs, 7200RPM at worst
1-2 boot drives (an SSD, perhaps another 2.5" for temp/data)
Sandy Bridge H67 mobo + i3
Areca 24-port controller, 1880 at worst
4/6/8GB DDR3

I know the drives pull a lot of power at spin-up; do you think it's OK without using staggered spin-up?
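For a rough idea of the surge, here is a back-of-envelope sketch; the 2A @ 12V per-drive spin-up draw is an assumed typical figure for 3.5" drives, not a spec for any particular model:

Code:
# Spin-up surge sketch. The 2A @ 12V per-drive figure is an assumed
# typical value for 3.5" drives; check the actual drive datasheet.
drives = 24
amps_12v = 2.0                      # assumed per-drive 12V spin-up draw
surge_w = drives * amps_12v * 12    # watts on the 12V rail
print(f"Simultaneous spin-up: ~{surge_w:.0f} W on the 12V rail alone")
# ~576 W before the CPU, board, and fans, which is why staggered
# spin-up (or a very strong 12V rail) matters with 24 drives.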


The motherboard:
What mobo should I use?
I planned to use something like a Gigabyte H67 board with a small i3 and integrated graphics.
Are there compatibility problems with Areca cards?
Is there a better mobo to use? Perhaps something with ECC memory?

And finally, do you have any advice, remarks, or thoughts on things to modify or enhance, or flaws in this project that I didn't see?
 
I'll let some of the others comment on some of the more detailed points, but I would advise against a 24-drive array. It would be better to break it up into several smaller ones.
 
Like 2×12?
Yeah, but that means less usable space and less convenience: sorting files across several arrays instead of several directories.

What are the risks of a large array? A greater error rate? A greater risk of losing everything instead of "only" 50% of the data?

I read about the unrecoverable error rate of HDDs, but isn't that per drive rather than per RAID array?
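To put a rough number on that, here is an illustrative sketch; it assumes the commonly quoted consumer-drive spec of one unrecoverable error per 10^14 bits read and a uniform error model, both of which are simplifications:

Code:
# Illustrative URE math: assumes a uniform error rate of one
# unrecoverable read error per 1e14 bits (a common consumer spec).
URE_PER_BIT = 1e-14

def p_no_ure(bytes_read: float) -> float:
    """Probability of reading bytes_read bytes without a single URE."""
    return (1 - URE_PER_BIT) ** (bytes_read * 8)

# Rebuilding one failed drive in a 24x3TB array means reading the
# 23 survivors end to end.
print(f"P(no URE during a full rebuild read): {p_no_ure(23 * 3e12):.1%}")
# ~0.4% under this naive model. Note that with RAID6 a URE during a
# single-drive rebuild is still covered by the second parity; it only
# costs data once two drives are already down, which is the usual
# argument for 2 parity drives over 1.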
 
The Akiwa GHS-2000 is extremely rare and probably not what you want. I used to own one and it is gigantic, heavy, and not quiet. Filled, it will easily get close to 200lbs and is the size of a small fridge. It has 8 x 92mm fans in the middle and 2 x 120mm on the back... none of which are quiet. I can give you all the details on the case that you want as I'm very familiar with it.

The Lian Li is a poor choice as pointed out before. For the Areca cards, get an 1880i and SAS expander. 24 drive arrays are no problem with RAID 6 even though all the naysayers will point you to articles on why you shouldn't (if you think they're bad, just wait for the ZFS zealots to show up).

Your power supply is fine (even 650W would be adequate when not using staggered spin-up), but with the other components that you're after, you might as well spend a bit more and get a proper server board. Supermicro as usual is recommended, and you should get ECC RAM while you're at it too due to the negligible cost difference.
 
This has come up a few times before. My own system has 28 disks in RAID 60, but I wouldn't do that if I didn't have backups. In the 1½ years it's been running I did have 3 disks go at the same time. Had it been the wrong 3, the array would have been lost.

Hard disks do fail, and other things can happen too: a cable falling out, a tray that doesn't connect, and so on. If a disk loses its connection and gets reconnected, it needs to rebuild before it works again, at least in my setup. That takes 8-9 hours with my 28 disks on my Adaptec 52445 controller.
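That 8-9 hour figure lines up with a simple estimate of member capacity divided by sustained rebuild rate; the 65MB/s rate below is an assumption, since real rebuild rates depend on the controller and its load:

Code:
# Rebuild-time estimate: member capacity / sustained rebuild rate.
# 65 MB/s is an assumed mid-range rate, not a measured controller spec.
capacity = 2e12            # one 2TB member drive, in bytes
rate = 65e6                # assumed rebuild rate, bytes/sec
print(f"~{capacity / rate / 3600:.1f} hours")   # ~8.5 hours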
 
Thanks!

Do you think it's possible to build a relatively silent but well-cooled server using the Norco and silent 120mm fans?

About Supermicro Sandy Bridge mobos: I see the Supermicro X9SCM with 2× PCI-E 8x + dual LAN, or the X9SCI-LN4 with only one PCI-E 16x but quad LAN. I don't know if I will need the IPMI...
Dual/quad LAN: does that mean it can aggregate the 2/4 Gb links?
I don't know; perhaps it's good to have another PCI-E 8x slot for future upgrades.


Perhaps a smaller 16-drive setup in a LianLi A77A/A77FA + 4× EX36 would be easier?
Do you think these cases are good enough? They seem to have more space for the cards.
 
The case you are looking for (24 bays, non-rackmount form factor, quiet, pretty, modern cooling) does not exist. At least, that has been my conclusion based upon my own search. I concluded that the choices are:
- drop to about 14 bays and many options open up
- get the lian li cube and deal with the consequences
- get a 4u unit and live with the noise/aesthetic issues
- look at multi-unit configs, aka two servers, or a server + external enclosure
 
- get the lian li cube and deal with the consequences
You mean the difficulties with assembly?
If there are silent SFF8087 4in3 backplanes available, it could work.

- get a 4u unit and live with the noise/aesthetic issues
But isn't it possible to get rid of the noise by using 120mm fans on the Norco?

I wish LianLi made cases as functional as the 4U :p


Another question: the Supermicro mobos have VGA out but no DVI?
Can the VGA handle 2560×1600?
 
With any backplanes, you'll have a hell of a time fitting any expansion cards in the case. Why on earth would you need such a high resolution video output on a server anyway? Any work you'd need to do on it could be done over the network. If you get a board with IPMI, you won't even need to hook up a monitor to it...ever.

Honestly, I found the Norco 4020s I had to not be that bad. I had my full 22U server rack in my bedroom for the better part of a year and had no issues sleeping with it there. Even my girlfriend didn't mind too much. Norco does in fact make a 120mm bracket for the cases. Just don't get some fan rated to be completely silent. You need a reasonable amount of airflow to keep the drives and other components cool.
 
Anyone know about the NetStor backplanes?
http://www.netstor.com.tw/_02/02_detail.php?MzA=
4in3, SFF8087, three-position fan, but I don't know about the noise/cooling on the low setting, or whether they fit in the PC343B in front of the Areca card...

Why on earth would you need such a high resolution video output on a server anyway? Any work you'd need to do on it could be done over the network. If you get a board with IPMI, you won't even need to hook up a monitor to it...ever.
To plug a 30" display, but you say it' s possible to pilot the server from the network from another computer itself plugged to the monitor ?

About the Norco + 120mm: so you're saying it's possible to combine a large array and silence? With some 1000RPM 120mm fans?
 
No large Areca card will fit in the case if you have any devices in the 5.25" bays, no matter what they are.

Yes, that's how all servers are managed: Remote Desktop (and IPMI for stuff outside the OS).

I didn't say silence and performance. I said to not get a silent fan, as it won't be sufficient. Get something in the middle of the range and you should be fine and it won't be too loud. No idea if a 1000rpm fan will be sufficient. RPM alone doesn't mean anything, as it's really a combination of all the specs. I bet you'll find some decent suggestions in the large Norco thread as to what fans work well.
 
If you are going for quiet, you will also want to go for minimum power / heat. So get 5400rpm HDDs instead of 7200rpm, and use the lowest power CPUs you can, and the least amount of RAM that you can.

I have a Norco 4224, with the 120mm fan bracket. I put in 3 Noctua NF-P12 fans, running with a low noise adapter (series resistor that cuts the voltage and RPM). Make sure you get the "P" version (NOT the "S" version), since the "P" is optimized for better static pressure performance, and there is quite a load sucking the air through the HDDs in front since they are really packed tight in there.

I also replaced the rear two 80mm fans with Silverstone FN81:

http://www.silverstonetek.com/products/p_spec.php?pno=fn81&area=usa

I settled on those since they had decent static pressure and relatively low noise. I also put a low noise adapter on those.

Even so, the box is not silent. I can hear a quiet hum from all the spinning platters, and a quiet whooshing sound from the air being sucked in past the HDDs. But it is reasonably quiet, I think.

I have it filled with mostly 2TB Samsung F4EG drives. Right now they range in temperature from 30C to 35C, with the max recorded temperature for any drive (smartctl) being 38C.
 
Hi, I'm planning to build a noise-free 24×3TB RAID6 home server setup. I've already done some research but I need some help:

...

And finally, do you have any advice, remarks, or thoughts on things to modify or enhance, or flaws in this project that I didn't see?

It really appears that you did little to no research into your project. Others have pointed out enough mistakes that you might want to abandon the whole project. But you asked for help ...

It is easy enough to hang hard drives from two strips of plywood. Makes a tidy installation.

Rather than RAID, just use individual hard drives. Much easier to recover from a drive failure.
 
If you want it noiseless, you'll want to dunk the whole thing in mineral oil and run it through a rad. Look at Taco pumps. They're made for water, though, so I'm not sure how they'd do with oil.
 
@GeorgeHR
If I were already omniscient I wouldn't have posted asking for help. The point here is precisely this: helping me clear up some points and catch some bad choices I could have made. Why should I abandon the project when the answers are coming?


Silence is so subjective...

I know it's not going to be absolutely fanless and perfectly dead silent, and I know there is an obvious need for cooling. I already built a 16-HDD RAID6 with a Tyan K8W / 2× Opteron 250 / ARC-1160 in an awfully cooled V2100, but I want this new project to be better cooled while avoiding a 747 at take-off on my desk like the stock fans.

In fact I'm just looking for a good balance between performance, cooling, and noise.


Some 80mm backplanes are less noisy, and I've seen here that it is possible to fit iStarUSA racks + an Areca 1880ix in the PC343B, even if it's tight:
[photos: iStarUSA racks and the ARC-1880ix fitted inside the PC343B]


If the SFF8087 4in3 NetStor racks, with their 3-position fans, fit, that would make the PC343B option possible with even fewer cables.

Here we can see the longer LianLi racks, preventing the installation of any long card:
[photos: the longer LianLi EX36 racks blocking the installation of long controller cards]

The last pic shows an ARC-1170 that fits in front of some shorter racks.

And finally, john4200 proved (thanks to him) that it is possible to build a quiet Norco 4224-based server.

So get 5400rpm HDDs instead of 7200rpm, and use the lowest power CPUs you can, and the least amount of RAM that you can.

I have a Norco 4224, with the 120mm fan bracket. I put in 3 Noctua NF-P12 fans, running with a low noise adapter (series resistor that cuts the voltage and RPM). Make sure you get the "P" version (NOT the "S" version), since the "P" is optimized for better static pressure performance, and there is quite a load sucking the air through the HDDs in front since they are really packed tight in there.

Yes, I planned to use a 35W TDP CPU like an i3, with integrated graphics.
The NF-P12 were the fans I was targeting, but without the resistor, so if they work even undervolted, that's perfect.

About 5400 vs 7200RPM: I know the 7200 will run hotter, but by how much? And will I lose a lot of performance?

About static pressure: there are also 120×38mm fans; perhaps their increased static pressure could allow the use of 7200RPM HDDs without sacrificing too much silence?
 
About static pressure: there are also 120×38mm fans; perhaps their increased static pressure could allow the use of 7200RPM HDDs without sacrificing too much silence?

I don't think it is worth it, but if you really want to go that route, there is the Triebwerk TK-121. I've never heard one, so I cannot vouch for the noise. And if you have a large motherboard, it may be difficult to fit the fan in since it is 55mm thick.

http://www.feser-one.com/site/product_info.php?products_id=328
 
5400RPM drives lose out in terms of IOPS; data transfer rates aren't significantly worse. Chances are home streaming applications will be limited by transfer rate, with mostly sequential reads, which mechanical drives aren't too bad at.
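To put rough numbers on that IOPS gap, single-drive random IOPS can be approximated as 1 / (average seek time + average rotational latency); the seek times below are assumed ballpark values, not measured specs:

Code:
# Approximate random IOPS for a mechanical drive:
# 1 / (average seek time + average rotational latency).
def approx_iops(rpm: int, avg_seek_ms: float) -> float:
    rotational_ms = 60_000 / rpm / 2   # half a revolution on average
    return 1000 / (avg_seek_ms + rotational_ms)

# Seek times are assumed ballpark figures for 3.5" desktop drives.
print(f"5400RPM: ~{approx_iops(5400, 12.0):.0f} IOPS")   # ~57
print(f"7200RPM: ~{approx_iops(7200, 8.5):.0f} IOPS")    # ~79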
 
To the OP: I think you really need to figure out your priorities first!

From reading some of your replies, you want high throughput, but low noise and low heat/power.

CPU: If you are opting for the 35W TDP i3, this may limit your overall network throughput, especially if you are looking at bonding multiple gig-E ports together.

GPU: You asked if the on-board VGA on the SM X9SCM will support 2560×1600; that sounds to me like you are trying to use this as a workstation as well as a server!?! The on-board VGA is sufficient to run a server desktop, but as BlueFox pointed out, you would most likely manage the server via RDP, so you wouldn't even need a monitor connected to the server!

I would highly recommend splitting these two functions into separate systems!
A workstation and a separate server!

HDD: If you want a silent system, get the 5400rpm drives; they are a bit quieter and generate less heat, which also saves you from buying a more powerful power supply.

PSU: Here is a general guideline for picking a PSU:
(Replace any of the following with whatever components you are going to pick; some of the numbers below are from another server I am currently assembling.)

Code:
TYPE   MODEL               QTY   TDP (W)   TOTAL TDP (W)
CPU    E3-1240               1     80.00          80.00
MB     X9SCM-F               1     75.00          75.00
MEM    KVR1333D3E9SK2/4G     2      2.05           4.10
CTRL   AOC-SASLP-MV8         3     10.00          30.00
HDD    WD20EARS             24      6.00         144.00
OSD    SSDSA2CW080G310       1      0.50           0.50

                                 Sub Total       333.60

                 Minimum Recommended PSU Wattage 417.00

I am using a Norco 4224 and picked the Corsair AX750 as the power supply, as it has its highest efficiency at around 375W; in other words, it is most efficient right where this system puts the most load on it.
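A quick script version of the guideline above (the 25% headroom factor is inferred from the 333.60 W to 417 W numbers in the table):

Code:
# PSU sizing sketch: sum the component TDPs, then add headroom.
# The 1.25 factor is inferred from the 333.60 W -> 417 W table above.
components = {                        # name: (qty, TDP in watts)
    "CPU E3-1240":           (1, 80.00),
    "MB X9SCM-F":            (1, 75.00),
    "MEM KVR1333D3E9SK2/4G": (2, 2.05),
    "CTRL AOC-SASLP-MV8":    (3, 10.00),
    "HDD WD20EARS":          (24, 6.00),
    "OSD SSDSA2CW080G310":   (1, 0.50),
}
subtotal = sum(qty * tdp for qty, tdp in components.values())
print(f"Sub total: {subtotal:.2f} W")                # 333.60 W
print(f"Recommended PSU: {subtotal * 1.25:.0f} W")   # 417 W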

I will be running WHS 2011 on this server, combined with FlexRAID. I opted for 3× AOC-SASLP-MV8 as the interface, as this gives me 24 ports total and I do not need (nor do I want) a hardware RAID setup.

I may post some additional info later, have to run...
 
As an aside, I would be interested in your thoughts on WHS 2011 with FlexRAID in a separate thread. I am designing a system similar to the OP's and do not feel comfortable with ZFS at this time. So I have been vacillating between WHS with something like FlexRAID vs. 2008R2 with a beefy RAID controller, and your thoughts would be appreciated.
 
Thanks for your feedback!

If you are opting for the 35W TDP i3, this may limit your overall network throughput, especially if you are looking at bonding multiple gig-E ports together.

What do you advise?
Is a quad-core Xeon really useful?

You asked if the on-board VGA on the SM X9SCM will support 2560×1600; that sounds to me like you are trying to use this as a workstation as well as a server!?!

No, or only as a very light one: just playing/checking some stored 1080p videos, FTP server, configuration... Nothing really power hungry, no real workstation use.
I would like to use the full resolution of my display when tweaking the server, and not have a degraded/blurred picture (VGA and/or upscaling).

I saw there is the C206 chipset, which supports HD Graphics, but the only such motherboard (ASUS P8B WS) offers only single-link DVI, up to 1920×1200.
I can always add a small fanless PCI-E 1x video card with a dual-link DVI port later.

About IPMI: is it like some sort of KVM?


Current config I'm thinking about now:

RPC-4224 + 120mm bracket
Noctua NF-P12 + some quiet rear 80mm
X9SCM-F or Tyan alternative (S5510?)
4/8GB DDR3 ECC
i3-2100T or Xeon E3
ARC-1880ix-24 /BBU /RAM
24× 5K3000 or 7K3000
AX750
Boot/OS SSD + temp 2.5" HDD
In a Scythe Slot Rafter + an 80mm in front of the Areca to cool it.
...
 
RPC-4224 + 120mm bracket
Noctua NF-P12 + some quiet rear 80mm
X9SCM-F or Tyan alternative (S5510?)
4/8GB DDR3 ECC
i3-2100T or Xeon E3
ARC-1880ix-24 /BBU /RAM
24× 5K3000 or 7K3000
AX750
Boot/OS SSD + temp 2.5" HDD
In a Scythe Slot Rafter + an 80mm in front of the Areca to cool it.
...

This is almost exactly what I have. I just need time to put this damn thing together.
X9SCM-F, 16GB ECC, E3-1220, ARC-1680i/BBU, 4224 with the 120mm fan wall, 2 HP SAS expanders, and over 48 2TB drives.

One thing I didn't like is that the motherboard front-panel headers are on the opposite side of the board from where I would expect them. This makes the case leads from the 4224 a tight fit to reach them. As soon as I can make some LED header extenders, it should be fine.
 
Hi, I'd recommend against using H67-based mobos from Asus/Gigabyte. I acquired an Asus P8P67 Pro and it doesn't work with the LSI-based Intel RS2WC080 controller.

Supermicro mobos work well, at least the Atom-based X7SLA-H: no issues whatsoever. If you are considering building a server, get an enterprise-class mobo, e.g. the P67-based Supermicro C7P67:

http://www.shopbot.ca/m/?m=C7P67

And Norco cases are amazing; I have a Norco 4020 case.
 
Hi, I'd recommend against using H67-based mobos from Asus/Gigabyte. I acquired an Asus P8P67 Pro and it doesn't work with the LSI-based Intel RS2WC080 controller.

Supermicro mobos work well, at least the Atom-based X7SLA-H: no issues whatsoever. If you are considering building a server, get an enterprise-class mobo, e.g. the P67-based Supermicro C7P67:

http://www.shopbot.ca/m/?m=C7P67

And Norco cases are amazing; I have a Norco 4020 case.

I have the P8P67 Pro and had an LSI-based Intel SRCSATAWB seemingly die on me.

Has anyone got a reasonable explanation as to exactly why the board doesn't like these controllers?
 
What do you think about the Gentle Typhoon 1850RPM as a 120mm wall fan replacement? Enough power to cool the whole backplane?
I also have the NF-P12 in mind in the 120×25mm category; I don't know which one provides the most static pressure.
 
What do you think about the Gentle Typhoon 1850RPM as a 120mm wall fan replacement? Enough power to cool the whole backplane?
I also have the NF-P12 in mind in the 120×25mm category; I don't know which one provides the most static pressure.

Why don't you flip a coin?

...or, here's a thought, you could do some research....
 
I have the P8P67 Pro and had an LSI-based Intel SRCSATAWB seemingly die on me.

Has anyone got a reasonable explanation as to exactly why the board doesn't like these controllers?

Ross1: I called both Intel and Asus; they just uselessly kept pointing at each other. Intel said that they can look into compatibility issues only on Intel boards, but that could be riskier than Supermicro. Were you able to get anywhere with the P8P67?
 
Ross1: I called both Intel and Asus; they just uselessly kept pointing at each other. Intel said that they can look into compatibility issues only on Intel boards, but that could be riskier than Supermicro. Were you able to get anywhere with the P8P67?

My situation:

Got the new mobo; everything seemed fine for a couple of weeks.

Rebooted last month, and suddenly the card doesn't POST. ****. Tried it on other systems; the card isn't POSTing on them either, so I figured it had died.

I spent a while trying to source an affordable controller with the same RoC. Went for an LSI 8708ELP on eBay from America. It finally arrived, and that one doesn't POST either.

So obviously I was starting to wonder WTF was up, because it's very unlikely for a RAID card with an MTBF of millions of hours to suddenly die, and then to receive a new dud card with the same problem.

At any rate, I'm glad you have confirmed that I wasn't being a complete dolt and overlooking something obvious.

Edit: ATM I'm in the position of trying to recover the 6×2TB RAID5 array I had on that controller. I've spent a huge amount of my time on it over the last week.
 
About the 5400/7200 dilemma:

I know the 5400 will be quieter and slower than the 7200, but by how much?
So how will 24 of them perform in an RPC-4224? How does this case handle vibrations?

Are the 7200RPM HDDs only a little hotter/noisier while offering far better IOPS and handling bigger loads?
Or can the 5400RPM drives handle sufficient loads without slowdown while being a lot less hot/noisy?


Do you think a 24×5400RPM drive setup could handle something like 10-12 people browsing/downloading data, each at 1Gb speed? (Say with 10Gb/1Gb Ethernet to remove the network bottleneck.)
Or at least at 60MB/s each?

Hi, I'd recommend against using H67-based mobos from Asus/Gigabyte. I acquired an Asus P8P67 Pro and it doesn't work with the LSI-based Intel RS2WC080 controller.

I also plan to build a far smaller, 24/7 powered-on internet/workstation/home server with minimal noise/heat/power consumption, using an Areca 1210/ML, but I will need the dual-link DVI of an H67 motherboard.

I first thought about a Gigabyte H67 mobo + a 35 or 18W TDP Core i3, with an ARC-1210 + 4× 5K3000 3TB in RAID5. Do you think there are incompatibilities between the Areca and the mobo?

I could also get the same Supermicro/Tyan/ECC setup but with an added graphics card; too bad the integrated HD3000 doesn't work...
 
10-12 people downloading @ 1GbE speed (~100MB/sec) means ~1GB/sec of array throughput; that's probably a reasonable expectation for 20-24 spindles in the real world.

That holds if they are all doing streaming sequential reads. If they are all hitting random reads, you might run into an IOPS issue, and I don't know that 7200 vs. 5400 would make a huge difference in that situation. Really you need some sort of caching/tiering at that point; ZFS + an L2ARC would help there, though I'm not sure what the analogue would be for an Areca setup.
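As a rough sanity check on that aggregate figure (the 70MB/s per-spindle sequential rate is an assumption for 3TB-class drives, and the sketch ignores RAID/controller overhead):

Code:
# Rough aggregate sequential throughput check. The 70 MB/s sustained
# per-spindle rate is an assumption, and RAID/controller overhead is
# ignored, so treat the result as an upper-bound sketch.
clients = 12
per_client = 100e6         # ~1GbE per client, bytes/sec
spindles = 22              # counting only the 22 data spindles in
                           # RAID6 is conservative; reads touch all 24
per_spindle = 70e6         # assumed sustained sequential rate

demand = clients * per_client
supply = spindles * per_spindle
print(f"demand ~{demand/1e9:.1f} GB/s, supply ~{supply/1e9:.1f} GB/s")
# demand ~1.2 GB/s, supply ~1.5 GB/s: plausible for sequential reads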
 
..............
Do you think a 24×5400RPM drive setup could handle something like 10-12 people browsing/downloading data, each at 1Gb speed? (Say with 10Gb/1Gb Ethernet to remove the network bottleneck.)
Or at least at 60MB/s each?
.......

I don't think you really have to worry about array speed. 24 drives, whether they are 5400 or 7200, will be insanely fast: well over 1GB/s, which is as fast as a 10Gb/s connection. Sure, if they are all doing something insanely IO-intensive you might have problems, but most large companies get by with much less for a lot more people, and more controller cache will help if it really is that big of a problem.

10Gb/s network cards, switches, etc. are not very cheap, so assuming you are not going down that path, you would be limited to a 4Gb/s trunk (again pretty much negating array performance).

Is this going to be a LAN box of some sort? (Quiet when you are at home, performance when you are at a LAN?)
If so, I think you will be fine. My PC can easily max out a 2Gb/s trunk with only 8× 5400RPM drives and an older card than yours (if you get the 1880); IOPS are not really a huge issue at LANs.

As a reference, 5400 vs 7200 (both RAID5 on an Adaptec 51645):
8× 2TB F4EG (5400): 300-940MB/s
8× 2TB 7K2000 (7200): 500-980MB/s

At the top end they are very similar; however, the speed drops off fairly quickly on the 5400RPM drives (but how much of your 70TB+ will you really fill before you want to upgrade?)
 