houkouonchi (RIP) | Joined: Sep 14, 2008 | Messages: 1,622
Funnily enough it wasn't from any forum; I randomly found a Norco 4020 case on eBay (not that it was labeled as such), and after searching for the model by the dimensions listed on eBay I found the case. It was a great deal: I was looking to go from my CM Stacker to a hot-swap setup, which would have cost about as much as this case, and instead I got an entirely new case with 8 extra bays!
First things first, my original setup:
CM Stacker Case
MSI P6N SLI Platinum (nforce 650i)
Core 2 Duo E6600 @ 3.5 Ghz
8GB DDR2-800 Ram
MSI GeForce 8800 GTX
Rosewill 850watt PSU
ARECA ARC-1231ML Raid controller
12x1TB Seagate 7200.11 (raid5)
Pics:
I ordered the case and she was a beauty:
Once I had it I just knew I had to fill those 8 extra bays. I always felt I should have gone raid6 instead of raid5, but I was being a bit greedy; luckily it never came back to bite me in the ass.
Planned new setup:
Norco RPC-4020
MSI P6N SLI Platinum (nforce 650i)
Core 2 Duo E6600 @ 3.5 Ghz
8GB DDR2-800 Ram
MSI GeForce 8800 GTX
Rosewill 850watt PSU
ARECA ARC-1680ix-24 Raid controller
20x1TB Seagate 7200.11 (raid6)
Unfortunately, while I had received my new hard drives and was waiting for the raid controller to be shipped, I found out from some friendly forum members here that this raid controller does not like Seagate AS-series drives. After looking for fixes and getting my hopes up from a firmware release, I ended up just going with the ARC-1280ML, which was exactly like my old card except it supports 24 drives instead of 12.
Actual New Setup:
Norco RPC-4020
MSI P6N SLI Platinum (nforce 650i)
Core 2 Duo E6600 @ 3.5 Ghz
8GB DDR2-800 Ram
MSI GeForce 8800 GTX
Rosewill 850watt PSU
ARECA ARC-1280ML Raid controller
20x1TB Seagate 7200.11 (raid6)
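For anyone doing the math on usable space: raid6 spends two drives' worth of capacity on parity, so 20x1TB nets roughly 18 TB before filesystem overhead. A quick sketch of the arithmetic (my own helper, using decimal "marketing" terabytes):

```python
def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAID 6 dedicates two drives' worth of capacity to parity."""
    if num_drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (num_drives - 2) * drive_tb

# 20 x 1 TB drives -> 18 TB usable, matching the 18T /data volume below
print(raid6_usable_tb(20, 1.0))   # 18.0
# The work servers mentioned later: 8 x 750 GB in raid6 -> 4.5 TB usable
print(raid6_usable_tb(8, 0.75))   # 4.5
```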
Here is a look at the drive bays. They are quite sturdy but a couple of the bottom ones don't like to slide in very well. They appear to allow plenty of airflow:
I figured it would be easiest to get things hooked up by first removing the piece of metal which holds the fans in, so I could hook up the SATA cables and power cables to the backplane. I used Y adapter cables to ensure I had enough length and to make it a bit easier to work with:
Went ahead and used the same Rosewill PSU that I had before:
Right after I got the mobo installed, but with no cables run yet:
Got all the drives loaded up. Oh man, this thing is heavy; I should have waited until everything else was done before putting the drives in!
Got everything hooked up to the mobo, cables tied down as much as I could:
Finally got the system powered on (some pics at different angles):
Decided to take some pics with all the drives slightly pulled out =)
System is finally complete... except I was still waiting for the new raid controller, so at that point it was just the new case:
About 2 weeks later I *finally* got the new raid controller. The two raid controllers side-by-side:
Got the new raid controller installed, with all 20 drives hooked up! I also bought a BBU and installed that as well:
The comp finally at its final resting place, right outside of my room. The noise doesn't bother me like the heat generated from this thing does:
As you can see it uses quite a bit of power at idle and thus generates quite a bit of heat. I can bring the power usage down by 30-35 watts if I remove my CPU overclock.
Yeah I know the cable job is not pretty/sucks but I didn't want to spend the money *just* to buy a modular power supply to save myself some cables which are in the way.
And for those curious, the temps on my drives went up only slightly (1-3°C) from my old CM Stacker case, and my CPU temp dropped about 13°C! My CPU/mobo now runs at around 37°C, which is very good IMHO for a 2.4 GHz Core 2 Duo OC'd to 3.5 GHz.
I also found out my PSU didn't like a 0.4 second staggered power-up and requires a 1.5 second one, so it takes a good 30-40 seconds from pushing the power button before my OS starts loading =( I realized this when I got the new raid controller and it kept turning off while the firmware was initializing (it was fine with only 12 drives hooked up).
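The 30-40 second boot lines up with the stagger math: with 20 drives spun up one step at a time, the delay setting multiplies out. A rough sketch (assuming the controller takes one full delay step per drive, which may be off by one):

```python
def total_stagger_seconds(num_drives: int, delay_s: float) -> float:
    """Total spin-up time for a staggered power-up: one delay step per drive."""
    return num_drives * delay_s

# At the 0.4 s setting the PSU choked on, spin-up would have been quick:
print(total_stagger_seconds(20, 0.4))  # 8.0
# At the 1.5 s setting the PSU needs, it stretches to half a minute:
print(total_stagger_seconds(20, 1.5))  # 30.0
```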
Loving the space though =)
Code:
root@sabayonx86-64: 01:38 PM :~# df -Hl
Filesystem Size Used Avail Use% Mounted on
/dev/sdc2 35G 32G 3.8G 90% /
udev 4.2G 263k 4.2G 1% /dev
none 1.1M 324k 725k 31% /lib/rcscripts/init.d
/dev/sdc1 100M 59M 42M 59% /boot
/dev/sdc3 18T 6.9T 11T 39% /data
/dev/sdd1 751G 559G 192G 75% /750
/dev/sdb1 21G 6.7G 15G 32% /mac
/dev/sda1 84G 53G 32G 63% /winxp
tmpfs 4.2G 0 4.2G 0% /dev/shm
I didn't really trust the online migration, so I brought my system into work and off-loaded all my data onto a pair of 2U servers with 8x750 GB drives in raid6. I have to say the 3ware controllers in them suck for writes: it took 2 days to off-load my data to them and only 1 day to bring it back.
I also noticed that initializing an array goes *much* slower when you choose background init. A foreground init of my 11TB raid5 took about 4.5 hours, and it took about the same for an 8-drive raid6 array (I had two arrays for a while to stress-test the new 8 drives). In about 24 hours the 18 TB raid6 array was only about 50% initialized, and it's still going:
Code:
Volume Set Information
Volume Set Name LINUX VOLUME
Raid Set Name 18TB RAID6
Volume Capacity 17895.0GB
SCSI Ch/Id/Lun 0/0/2
Raid Level Raid 6
Stripe Size 128KBytes
Block Size 512Bytes
Member Disks 20
Cache Mode Write Back
Tagged Queuing Enabled
Volume State Initializing
Progress 71.3%
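As a ballpark sanity check on that foreground init figure: a full init has to write every sector of every member drive, so 1 TB per drive in 4.5 hours implies a per-drive sequential rate of roughly 60 MB/s, which is plausible for these disks. A quick sketch (decimal units, and my own assumption that all members are written concurrently):

```python
def init_rate_mb_s(drive_tb: float, hours: float) -> float:
    """Per-drive write rate implied by a full-surface foreground init."""
    bytes_written = drive_tb * 1e12        # each member is written end to end
    return bytes_written / (hours * 3600) / 1e6

print(round(init_rate_mb_s(1.0, 4.5)))  # 62 (MB/s per drive)
```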
I can't run benchmarks until the init is complete, but the array is already quite fast even while initializing.