Project: Disk space expansion.

Funnily enough, it wasn't from any forums; I randomly found a Norco 4020 case on eBay (not that it was labeled as such), and after searching for the model by the dimensions listed in the auction I identified the case. It was a great deal: I had been looking to go from my CM Stacker to a hot-swap setup, which would have cost about as much as this case, and instead I got an entirely new case with 8 extra bays!

First things first, my original setup:

CM Stacker Case
MSI P6N SLI Platinum (nForce 650i)
Core 2 Duo E6600 @ 3.5 GHz
8 GB DDR2-800 RAM
MSI GeForce 8800 GTX
Rosewill 850 W PSU
Areca ARC-1231ML RAID controller
12x 1 TB Seagate 7200.11 (RAID 5)

Pics:

I ordered the case and she was a beauty:

Once I had it, I just knew I had to fill those 8 extra bays. I always felt I should have gone RAID 6 instead of RAID 5, but I was being a bit greedy; luckily it never came back to bite me in the ass.

Planned new setup:

Norco RPC-4020
MSI P6N SLI Platinum (nForce 650i)
Core 2 Duo E6600 @ 3.5 GHz
8 GB DDR2-800 RAM
MSI GeForce 8800 GTX
Rosewill 850 W PSU
Areca ARC-1680ix-24 RAID controller
20x 1 TB Seagate 7200.11 (RAID 6)


Unfortunately, while I had already received my new hard drives and was waiting for the RAID controller to ship, I found out from some friendly forum members here that this controller does not like Seagate AS-series drives. After looking for fixes and getting my hopes up over a firmware release, I ended up just going with the ARC-1280ML, which was exactly like my old card except it supports 24 drives instead of 12.

Actual new setup:

Norco RPC-4020
MSI P6N SLI Platinum (nForce 650i)
Core 2 Duo E6600 @ 3.5 GHz
8 GB DDR2-800 RAM
MSI GeForce 8800 GTX
Rosewill 850 W PSU
Areca ARC-1280ML RAID controller
20x 1 TB Seagate 7200.11 (RAID 6)

Here is a look at the drive bays. They are quite sturdy, though a couple of the bottom ones don't like to slide in very well. They appear to allow plenty of airflow:

I figured it would be easiest to get things hooked up by first removing the piece of metal that holds the fans in, so I could hook up the SATA and power cables to the backplane. I used Y-adapter cables to make sure I had enough length and to make things a bit easier to work with:

Went ahead and used the same Rosewill PSU that I had before:

Right after I got the mobo installed, but with no cables run yet:

Got all the drives loaded up. Oh man, this thing is heavy; I think I should have waited until I was done to put the drives in!

Got everything hooked up to the mobo, cables tied down as much as I could:

Finally got the system powered on (some pics at different angles):

Decided to take some pics with all the drives slightly pulled out =)

System is finally complete... except I was still waiting for the new RAID controller, so at that point it was just the new case:

About 2 weeks later I *finally* got the new RAID controller. The two controllers side by side:

Got the new RAID controller installed, so all 20 drives are hooked up! I also bought a BBU and installed that as well:

The comp finally in its final resting place, right outside my room. The noise doesn't bother me nearly as much as the heat this thing generates:

As you can see, it uses quite a bit of power at idle and thus generates quite a bit of heat. I can knock 30-35 watts off the power usage if I remove my CPU overclock.

Yeah, I know the cable job is not pretty/sucks, but I didn't want to spend money *just* on a modular power supply to save myself a few cables that are in the way.

And for those curious, the temps on my drives went up only slightly (1-3 °C) from my old CM Stacker case, and my CPU temp dropped about 13 °C! My CPU/mobo now runs at around 37 °C, which is very good IMHO for a 2.4 GHz Core 2 Duo OC'd to 3.5 GHz.

I also found out my PSU doesn't like a 0.4-second staggered spin-up and requires a 1.5-second one, so it takes a good 30-40 seconds from pushing the power button before my OS starts loading =( I realized this when I got the new RAID controller and the system kept turning off while the firmware was initializing (it was fine with only 12 drives hooked up).

Loving the space though =)

Code:
root@sabayonx86-64: 01:38 PM :~# df -Hl
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdc2               35G    32G   3.8G  90% /
udev                   4.2G   263k   4.2G   1% /dev
none                   1.1M   324k   725k  31% /lib/rcscripts/init.d
/dev/sdc1              100M    59M    42M  59% /boot
/dev/sdc3               18T   6.9T    11T  39% /data
/dev/sdd1              751G   559G   192G  75% /750
/dev/sdb1               21G   6.7G    15G  32% /mac
/dev/sda1               84G    53G    32G  63% /winxp
tmpfs                  4.2G      0   4.2G   0% /dev/shm

I didn't really trust online migration, so I brought my system into work and off-loaded all my data onto a pair of 2U servers with 8x 750 GB drives in RAID 6. I have to say the 3ware controllers in them suck for writes: it took 2 days to off-load my data to them and only 1 day to bring it back.
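
For anyone wondering what the offload itself looks like, it's really just a copy in each direction; something along these lines, with placeholder hostnames/paths rather than my real ones:

Code:
# push everything on /data to one of the loaner 2U boxes (hostname/path are made up)
rsync -aH --progress /data/ 2userver1:/array/offload/

# ...rebuild the array, then pull it all back
rsync -aH --progress 2userver1:/array/offload/ /data/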

I also noticed that initializing an array goes *much* slower when you choose background init. A foreground init of my 11 TB RAID 5 took about 4.5 hours, and it took about the same for an 8-drive RAID 6 array (I had two arrays for a while to stress-test the new 8 drives). In about 24 hours the 18 TB RAID 6 array was only about 50% initialized, and it's still going:

Code:
Volume Set Information
Volume Set Name 	LINUX VOLUME
Raid Set Name 	18TB RAID6
Volume Capacity 	17895.0GB
SCSI Ch/Id/Lun 	0/0/2
Raid Level 	Raid 6
Stripe Size 	128KBytes
Block Size 	512Bytes
Member Disks 	20
Cache Mode 	Write Back
Tagged Queuing 	Enabled
Volume State 	Initializing
Progress 	71.3%

I can't do benchmarks until the init is complete, but it is quite fast even while initializing.
 
I've got Norco 4020 fever too, but my wife would kill me if I bought one right now.
 
It is a nice case. My only complaint is the disk activity LEDs are way too faint. I can only see two of them with the lights on, and I have to be an inch away with the lights off to see the others.

I set up some real-time power usage monitoring on my system here:

http://66.159.214.70/power/

You can see when I took it into work to migrate my data off and then back on to the system and also when I got the new raid-controller and 20 drives were being powered instead of 12.

5-minute averages:
power-day.png

30-minute averages:
power-week.png
 
nice. how much was that new case? i need something like that :). do the harddrives stay cool enough? what are ur average temps?
 
I see there is also a fairly decent video card as well... given that, and the fact that the CPU is overclocked, is this your gaming machine? Or do you use the extra OC'ed clocks to render video?
 
I see there is also a fairly decent video card as well... given that, and the fact that the CPU is overclocked, is this your gaming machine? Or do you use the extra OC'ed clocks to render video?

Yup, this is basically my gaming machine which just happens to have a crapload of drives in it. I also use it as a file-server for members of my family as well.

nice. how much was that new case? i need something like that :). do the harddrives stay cool enough? what are ur average temps?

They run in the low 40s (°C). In my old CM Stacker they ran around 35-40 °C (depending on the drive), and now they run 39-43 °C or so. Definitely an acceptable temp in my book.
 
OK, some benchies. Just Linux ones for now since I am at work, and I saw the array finally finished initializing, which took some time!

Code:
Time                 Device          Event Type           Elapse Time  Errors
2008-10-18 1:43:46   LINUX VOLUME    Complete Init        013:32:29
2008-10-17 12:11:35  LINUX VOLUME    Start Initialize
2008-10-17 12:11:14  H/W Monitor     Raid Powered On
2008-10-17 12:10:4   LINUX VOLUME    Stop Initialization  000:00:45
2008-10-17 12:9:41   RS232 Terminal  VT100 Log In
2008-10-17 12:9:18   LINUX VOLUME    Start Initialize
2008-10-17 12:9:15   H/W Monitor     Raid Powered On
2008-10-17 10:8:6    LINUX VOLUME    Stop Initialization  026:24:18
2008-10-16 7:44:14   LINUX VOLUME    Start Initialize
2008-10-16 7:40:34   LINUX VOLUME    Stop Initialization  000:04:04
2008-10-16 7:36:30   LINUX VOLUME    Start Initialize
2008-10-16 7:36:7    H/W Monitor     Raid Powered On
2008-10-16 7:34:41   LINUX VOLUME    Stop Initialization  001:41:58
2008-10-16 5:52:58   LINUX VOLUME    Start Initialize
2008-10-16 5:52:12   H/W Monitor     Raid Powered On
2008-10-16 5:50:46   LINUX VOLUME    Stop Initialization  000:05:12
2008-10-16 5:45:34   LINUX VOLUME    Start Initialize
2008-10-16 5:45:34   LINUX VOLUME    Create Volume

So it took about 31-32 hours to init the array (in background). Write bench:

Code:
sabayonx86-64 data # dd count=20000 bs=1M if=/dev/zero of=./20gb.bin
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 36.3067 s, 578 MB/s

Read bench:

Code:
sabayonx86-64 data # dd bs=1M if=/data/20gb.bin of=/dev/null
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 26.796 s, 783 MB/s

Reads have stayed about the same. I expected writes to go down, and they did, by about 70 MB/s compared to RAID 5, which I think is still quite good.
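
One thing to keep in mind if you want to repeat this on your own array: with an 8 GB box and a 21 GB test file most of the I/O does hit the disks, but the numbers above still go through the page cache. A variant of the same test that takes the cache out of the picture (not what I ran above, just a sketch) would be:

Code:
# write test: don't report the rate until the data has actually been flushed to disk
dd count=20000 bs=1M if=/dev/zero of=./20gb.bin conv=fdatasync

# read test: bypass the page cache entirely
dd bs=1M if=./20gb.bin of=/dev/null iflag=direct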

bonnie++

Code:
sabayonx86-64 bonnie # bonnie++ -u sandon
Using uid:1008, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
sabayonx86-64   16G  2119  99 459147  47 234714  19  2348  66 745469  37  1130   5
Latency              3914us     898ms     554ms     295ms   24945us   40156us
Version 1.93c       ------Sequential Create------ --------Random Create--------
sabayonx86-64       -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 14996  20 +++++ +++ +++++ +++ 22984  96 +++++ +++ 26894  78
Latency               431ms      34us    4968us    1929us      35us   33601us
1.93c,1.93c,sabayonx86-64,1,1224316576,16G,,2119,99,459147,47,234714,19,2348,66,745469,37,1130,5,16,,,,,14996,20,+++++,+++,+++++,+++,22984,96,+++++,+++,26894,78,3914us,898ms,554ms,295ms,24945us,40156us,431ms,34us,4968us,1929us,35us,33601us
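
If you want that machine-readable line at the bottom turned into a readable table, bonnie++ ships with bon_csv2html; something like this (output filename is just an example):

Code:
# -q sends only the csv summary to stdout, which bon_csv2html turns into an HTML table
bonnie++ -u sandon -q | bon_csv2html > bonnie.html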

I will run some of the Windows benchies (such as HD Tach / HD Tune) probably in 8-9 hours when I get home from work. Here was HD Tach / HD Tune on my old RAID 5 array:

hdtach.png

hdtune.png
 
Great project! How are you using all of that space? Is the box very loud when it's powered on?

Do you really need the 8800GTX in there? You could save yourself some power with something small and fanless.
 
Finally ran some benchies from Windows on the array. For some reason HD Tach doesn't want to give me a nice solid line and shows spikes in different sections of the array. I didn't really have this problem before, but back then I was using an eSATA drive as the boot drive, and now I am booting off the array.

HDTACH:
hd_tach.png


HDTUNE raw:
hdtune_raw.png


HDTUNE file:
hdtune_file.png
 
houkouonchi, did that watts up pro collect that data and generate the graphs?
 
houkouonchi, did that watts up pro collect that data and generate the graphs?

The Watts up? Pro basically has a serial interface so you can have a computer read its values. I have it hooked up to my closet box, which acts as a load balancer/router between my multiple internet connections, via a serial-over-Ethernet adapter.

The graphs are generated with MRTG using a simple script I wrote, which reads 30 values (about 30 seconds' worth of data) every 5 minutes and prints out the averages:

Code:
#!/bin/bash
# Called by MRTG every 5 minutes (see the Target line in the config below).
# Samples the Watts up? Pro for ~30 seconds and prints the average watts and
# volts, which MRTG graphs as its two values.

tmpfile=/tmp/power.tmp
powerfile=/tmp/power.txt

/bin/echo -n > $tmpfile

# grab 30 readings of watts and volts from the meter on ttyS0
/bin/wattsup -c 30 ttyS0 watts volts 2> /dev/null 1> $tmpfile

count=`/bin/cat $tmpfile | /usr/bin/wc -l`

# average each column and round to the nearest whole number
watts=`/bin/cat $tmpfile | /bin/gawk -F', ' '{ sum+=$1}; END { printf ( sum/'$count' )}' | /bin/gawk '{ round=($1)}; END { printf ("%.0f", round )}'`
volts=`/bin/cat $tmpfile | /bin/gawk -F', ' '{ sum+=$2}; END { printf ( sum/'$count' )}' | /bin/gawk '{ round=($1)}; END { printf ("%.0f", round )}'`

echo -n > $powerfile
echo $watts
echo $volts

mrtg config:

Code:
WorkDir: /var/www/html/power

Target[power]: `/bin/powermeasure.sh`
Options[power]: nobanner,nopercent,growright,gauge,noinfo,nolegend,pngdate,integer
Title[power]: Power Usage of Desktop Box
PageTop[power]: <H1>Power Usage of Desktop Box </H1>
MaxBytes[power]: 50000
YLegend[power]: Watts/Volts
ShortLegend[power]: &nbsp;
YSize[power]:160
XSize[power]:500
LegendI[power]: Watts:
LegendO[power]: Volts:
Legend1[power]: voltwatts:
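
MRTG itself typically just gets kicked off by cron every 5 minutes against a config like that; the usual sort of /etc/crontab entry would be (the config path here is just an example, mine may differ):

Code:
# poll the meter and redraw the power graphs every 5 minutes
*/5 * * * * root env LANG=C /usr/bin/mrtg /etc/mrtg/power.cfg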

I also do graphs for my disk usage:

http://66.159.214.70/stats/df/index.html

and a bunch of other stuff:

http://66.159.214.70/stats/
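
The disk usage graphs work the same general way as the power ones: MRTG just calls a little script that prints two numbers. A stripped-down sketch of the idea (mount point and units are only an example, not my exact script):

Code:
#!/bin/bash
# print used and total size of /data in GB; MRTG graphs them as its two values
used=`/bin/df -B 1G /data | /usr/bin/tail -n 1 | /bin/gawk '{ print $3 }'`
total=`/bin/df -B 1G /data | /usr/bin/tail -n 1 | /bin/gawk '{ print $2 }'`
echo $used
echo $total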
 
Very cool. That Norco has made a name for itself here, and I'm definitely thinking of getting one for a storage build.

 
Holy poop that's fast! 1.2GB per second!
Nice build log thingie, I always forget to take pics...
Couldn't you save power/heat by setting the drives to turn off after 10 minutes of no usage?
 
best [486] said:
Holy poop that's fast! 1.2GB per second!
Nice build log thingie, I always forget to take pics...
Couldn't you save power/heat by setting the drives to turn off after 10 minutes of no usage?

Yes, but I don't really trust doing that, and it means a good 40+ second wait when the drives spin back up. Also, due to logging and other stuff, I don't know how often the drives would even get a chance to spin down in the first place.

I did upgrade from my 8800 GTX to a GeForce GTX 260, which shaved about 30 watts off my idle power usage.
 
Love that case. Wish I had the cash for one / to fill it up.

*have the same table designs, I only have the end tables not the coffee table*
 