Build-Log: 100TB Home Media Server

treadstone

Gawd
Joined
Jan 25, 2010
Messages
567
Thank you for checking out my build-log.

Server name:
TBD (I'm still looking for a cool name...)​
Total storage capacity:
100TB​
Available storage capacity:
90TB+ (Will depend on configuration, more about that later...)​

The following picture will be replaced with a picture of my actual server once it is completed and up and running



The complete parts list for my new centralized home media server:

I know it has taken me quite some time to finally get around to posting this, so my apologies to those who have been waiting so patiently this entire time.

Note: Click on any image for a larger version.

Well, here we go... The build log...



11 JAN 2010

Items ordered:


20 JAN 2010

The server chassis arrived on its own 26 x 40 inch skid...



I decided to assemble the server in the test lab at my company, and once it's all put together I'm going to transport it home.



The shipping box measured 25W x 36D x 26H (without the skid) and weighed in at 168 pounds (about 76.5kg)!!!



Located right under the shipping carton lid were 4 boxes containing the chassis accessories.



A sneak peek at the chassis after taking it out of the shipping box. The entire chassis was protected by a strong and rather large plastic bag.



At the bottom of the shipping box are the two 24 inch slide rails that come standard with this chassis!



The four boxes containing the accessories.



The content of accessory Box #1:
  • 8 Hot-swap drive trays
  • An assortment of cables
  • Mounting screws for the HDD, motherboard, etc.
  • ODD retaining bracket and interface PCB


The content of accessory Box #2, #3 and #4:
  • 14 Hot-swap drive trays


A single hot-swap drive tray with the white 'air-blocking' plastic clip still in place.



Front view of the Chenbro RM91250 50-bay server chassis.
The top two drive slots are SATA only and are intended to be connected directly to the motherboard for use as system boot drives.
On the top left is the optical disk drive bay and above that are two USB connections, the reset, alarm mute and power button as well as the HDD activity, power, alarm, LAN1 and LAN2 LEDs.
Each of the 12 rows below contain a 4 port SAS/SATA backplane with a single SFF-8087 connector for connection to a storage controller.



Side/Back view.
Those built-in handles are EXTREMELY useful! Even in this state, the chassis is quite heavy and requires at least two people to lift it out of the box and onto the work bench!



The back of the server.
The top portion contains the removable motherboard tray. Right below are four high-performance and hot-swappable 80mm fans and at the bottom, are two high-performance and hot-swappable 120mm fans along with the four hot-swappable redundant power modules.



The four hot-swappable power modules after I removed them from their cage.
Each power module has a 600W output rating and two Sunon PMD1238PQBX-A 38x38x28mm 15,000RPM 19CFM 5.8W 52.0dBA fans. Right below the handle on the left edge of each power module is a status LED (green = power ok, red = faulty module). The latch next to the IEC-320 power input connector in combination with the thumb screw is used to secure each module in the power module cage.



The power module cage and power distribution backplane. This cage is designed as a 3+1 redundant setup with a total output of 1620W.
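The 3+1 redundancy math can be sketched out as follows. Note that the 540W per-module figure below is simply inferred from the quoted 1620W cage total divided by three active modules (a derating from the 600W label rating), not something from a Chenbro spec sheet:

```python
# Sketch of the 3+1 redundant PSU math. The 540W combined per-module
# output is inferred from the 1620W cage total, not from a datasheet.
MODULE_RATING_W = 600   # label rating of each power module
CAGE_TOTAL_W = 1620     # quoted redundant cage output

def redundant_output(n_modules: int, n_redundant: int, per_module_w: float) -> float:
    """Usable output of an N+M redundant cage: only the
    non-redundant modules count toward the total."""
    return (n_modules - n_redundant) * per_module_w

# 4 modules, 1 redundant, each contributing 1620/3 = 540W when load-sharing
per_module_combined = CAGE_TOTAL_W / 3
print(redundant_output(4, 1, per_module_combined))  # 1620.0
```

The takeaway: any one module can fail and the remaining three still deliver the full 1620W.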



One of the two high-performance and hot-swappable 120mm fans.
This is a Delta AFB1212SHE-F00 120x120x38mm 4,100RPM 190.5CFM 15.0W 55.5dBA fan with TACH output signal, but unfortunately no PWM control input.



The four high-performance and hot-swappable 80mm fans.
These are Delta FFB0812EHE-7N66 80x80x38mm 5,700RPM 80.2CFM 10.8W 52.5dBA fans with TACH output and PWM control input signals.



Each of the fan modules has its own little 'mini-backplane' with a 10-pin card edge connector. Unfortunately, they don't utilize the PWM control signals even though the backplanes have all the necessary connections.



Looking inside the chassis from the back to the front after lifting the lid.
No tools required, just loosen two thumb screws and slide the lid back about 3/4 of an inch to lift it up.



A large sticker on the inside of the lid as a 'Chassis Quick Reference' guide.



The internal hot-swap fan tray. The tabs on top of the blue hot-swap fan modules are used to un-latch and pull-out the individual fan modules. The metal bracket actually consists of two brackets. One is mounted to the chassis and the second (to which the fan slots are attached) is mounted to the first via rubber mounts to isolate vibrations. There is also a chassis intrusion switch mounted to the fan tray bracket on the right hand side.



One of the internal hot-swap high-performance 80mm fans after removal from the fan tray. It's the exact same 80mm fan as those found in the back.



After removing 9 screws and taking off the lid that covers the front portion of the chassis, the front panel PCB (LEDs, USB ports, switches and a processor that monitors ambient temperature and backplane signals and controls the alarm beeper) as well as the fan monitor PCB (on the left) became accessible.
The connections of the front panel PCB (left to right):
  • USB 1 & 2 (from motherboard)
  • Motherboard control connections (power switch, reset switch, HDD activity LED, LAN 1 & 2 activity LED, power LED)
  • Fan monitor board interconnect cable (provides power and alarm signal to the front panel PCB)
  • Power supply alarm mute
  • Power supply alarm status input
The fan monitor PCB monitors the TACH output of the 10 chassis fans and a 10 position DIP switch provides the ability to disable the monitoring of any fan. The yellow wires are the individual TACH signals from the 10 fans.



A different view of the optical drive bay with the front panel PCB mounted on top of it. Also visible are the SFF-8087 connectors on the right hand side of the first three HDD backplanes.



The two-slot SATA backplane for the system drives.



Top down view between the internal fan tray and the backplanes (it's a bit messy in there with all those cables flying about).



Here is something I found rather interesting, they have the model number sticker on the INSIDE! Makes total sense doesn't it :)





28 JAN 2010

Items ordered:
Items ordered and picked-up:
In case you are wondering why I bought 52 drives, well here is how I intend to put them to use:
  • 2 drives (4TB capacity) in RAID 1 (2TB usable) as system drives; whatever space the system doesn't occupy will be used for music storage
  • 48 drives (96TB capacity) in either 2 x RAID 6 (88TB usable) or 3 x RAID 5 (90TB usable) configurations for the storage pool
  • 2 drives (4TB capacity) as spares in case any of the other 50 drives fail
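The usable-capacity figures above can be checked with simple RAID arithmetic. The sketch below assumes "2 x RAID 6" means two 24-drive sets and "3 x RAID 5" means three 16-drive sets over the 48 pool drives (my reading, but it is the split that reproduces the quoted 88TB and 90TB):

```python
# Checking the usable-capacity figures. The 24-drive RAID 6 sets and
# 16-drive RAID 5 sets are my interpretation of "2 x RAID 6" /
# "3 x RAID 5" over 48 drives; the math matches the quoted numbers.
DRIVE_TB = 2

def raid6_usable(drives: int, size_tb: int = DRIVE_TB) -> int:
    return (drives - 2) * size_tb   # RAID 6: two drives' worth of parity

def raid5_usable(drives: int, size_tb: int = DRIVE_TB) -> int:
    return (drives - 1) * size_tb   # RAID 5: one drive's worth of parity

two_raid6 = 2 * raid6_usable(24)    # 2 sets of 24 drives each
three_raid5 = 3 * raid5_usable(16)  # 3 sets of 16 drives each
print(two_raid6, three_raid5)       # 88 90
```

So RAID 5 buys 2TB more usable space, at the cost of only single-drive fault tolerance per set.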
And this is what 52 x 2TB drives (104TB total) look like still individually packaged and boxed up :D



One of the WD20EADS 2TB green drives still in its protective anti-static bag.



I mounted one of the drives in a drive tray to have a look at the fit and finish.



A close-up shot of the front of one of the drive trays. Located on the right side are two light pipes for the blue power LED (top) and the green activity/red fault LED (bottom). The actual LEDs are mounted on the backplanes.

 

treadstone

Gawd
Joined
Jan 25, 2010
Messages
567
03 FEB 2010

More outside views of the chassis:

Front/Side view. The front aluminum handles not only look nice, they are also incredibly strong!
Note the laser engraved HDD row numbers on each side of the chassis, visible in the following two pictures.



Back/Side view.



Back/Side/Top view.



Side/Top view.
The add-on card retainer that goes across all of the expansion cards is a nice touch. More about that later.



The motherboard cage can be pulled out of the chassis after removing 10 small screws (5 on each side).



The empty motherboard cage.



View from the back without the motherboard cage.



The internal fan tray. The white connector is used to supply 12V power to the fans and the black connector with the yellow wires carries the fan TACH signals to the fan monitor board.



Internal view of the backplanes. The SFF-8087 connectors are on the right, the power supply connections on the left. Each backplane has two Molex power connectors.
There are a total of 12 dedicated power cable connections coming from the power supply for the SAS/SATA HDD backplanes and a separate connection for the two-port SATA backplane!
Each SAS/SATA power connection feeds one connector on two separate backplanes. A neat setup, but somewhat messy the way it came shipped from the factory.



All of the different power connections converging on the power supply. I don't like the fact that they bunched up the excess cable at the bottom of the chassis.



Different views of the backplane power connections.



Since I was redoing the power cable management, I took the opportunity to take one of the backplane support brackets that holds two backplanes out of the chassis to have a closer look at them.
This is the front side that the HDDs plug into. Each slot has its own dedicated LEDs to the right of the SAS/SATA connectors.
The top LED is blue and indicates power status, the bottom lights up green for activity and red to indicate a drive fault.



On the back side there is a microprocessor (Atmel AT89S51) and a CPLD (Xilinx) to decode the SGPIO and/or I2C control, plus 5 connectors to power fans in smaller enclosures.
An LDO regulator on the left edge of the PCB drops the 5V down to 3.3V, and a few configuration jumpers set the backplane ID and a few other features.



The 100 female Molex crimp terminal power pins I bought to redo the backplane power connections.



The backplane power connections AFTER I cleaned them up.
I need to order and mount the optical drive with a slimline adapter before I can finish the power cable management.



The 14 Molex 600mm SFF-8087 to SFF-8087 Mini SAS cables arrived at the same time as the Molex power pins.
12 cables will be used to connect the two SAS expander cards (6 per expander) to the backplanes and two to connect the RAID controller to the two SAS expander cards.
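A quick port-budget check shows why a 36-port expander comfortably covers this layout. The 4-PHYs-per-SFF-8087/8088-connector figure below is my assumption (it is the standard mini SAS lane count), not something taken from an HP datasheet:

```python
# Port-budget sanity check for each 36-port HP SAS expander,
# assuming 4 PHYs per SFF-8087/SFF-8088 connector (standard mini SAS
# fan-out; an assumption here, not an HP spec).
LANES_PER_CONNECTOR = 4

backplane_links = 6   # 6 backplanes cabled to each expander
uplink_links = 1      # 1 cable up to the Areca RAID controller
external_links = 1    # the external SFF-8088 port

phys_used = (backplane_links + uplink_links + external_links) * LANES_PER_CONNECTOR
print(phys_used)  # 32 of the 36 ports accounted for
```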





08 FEB 2010

Items ordered:


09 FEB 2010

Items ordered:
Items ordered and picked-up:


16 FEB 2010

Items ordered:


05 MAR 2010

Finally received the remaining parts to start assembling the server.



The Areca ARC-1680i PCI-E x8 SAS RAID adapter.



The Sapphire ATI Radeon HD 5450 1GB GDDR3 1600MHz DVI/HDMI/D-SUB PCI-E 2.0 graphics card.
I picked this graphics card for its low power consumption while still providing decent performance!



The ASUS P7P55 WS SUPERCOMPUTER motherboard.
It wasn't easy to find a motherboard with the right combination of PCIE slots needed to put all of these cards together.



The motherboard's PCIE slots (from left to right):
  • PCIEX16_1 (blue - x8 or x16 link) is for the Sapphire Radeon HD5450 Graphics card
  • PCIEX1 (white - x1 link) currently not used (possible future use: dual channel TV tuner card #1)
  • PCIEX16_2 (black - x8 link) is for the first HP SAS expander card
  • PCIEX16_3 (white - x4 link) currently not used (possible future use: dual channel TV tuner card #2)
  • PCIEX16_4 (blue - x8 or x16 link) is for the Areca ARC-1680i RAID controller card
  • PCI1 (white) currently not used (no future plans for this slot)
  • PCIEX16_5 (black - x8 link) is for the second HP SAS expander card


One of the HP 36-Port SAS expander cards (I didn't realize that they came with SFF-8087 cables, oh well, now I have lots of spares).



The G.Skill F3-12800CL7D-4GBRM Ripjaw DDR3 1600MHz 4GB dual channel memory kit.



The Intel i7-860 2.8GHz LGA 1156 95W Quad-Core processor.



The Lite-On DL-8ATS SATA slot load black slim CD/DVD rewritable drive.





On to the actual assembly...
The motherboard cage before assembly.



The motherboard mounted in the cage with installed memory, CPU and fan.
I have a spare Arctic Cooling Alpine 11 Pro CPU cooler that I may use instead of the cooler/fan that came with the CPU.



The graphics card, RAID controller and SAS expander cards are installed. From left (with an empty slot in between each card):
  • HP SAS expander card #2
  • Areca ARC-1680i RAID controller card
  • HP SAS expander card #1
  • Sapphire Radeon HD 5450 graphics card


Back view of the motherboard cage. From right:
  • HP SAS expander card #2 with external SFF-8088 port
  • Areca ARC-1680i RAID controller card with 10/100Mb/s Ethernet connection
  • HP SAS expander card #1 with external SFF-8088 port
  • Sapphire Radeon HD 5450 graphics card with VGA, HDMI and DVI outputs
  • The motherboard has two 10/100/1000Mb/s Ethernet, 8 x USB 2.0, 1394a, SPDIF Out, Analog Audio, PS/2 Keyboard and Mouse ports
Chenbro provided mounting holes for two additional 80mm fans above the I/O panel, if needed.



A different view of the rear hot-swappable 80mm fans from the inside of the chassis, located just below the motherboard cage.



The optical disk drive bracket after removal from the chassis.



Test fitting the ODD in the mounting bracket after removing the perforated ODD cover.



The accessories included a latch type metal bracket that inserts into two mounting holes on one side of the ODD and holds the drive in place once inserted into the mounting bracket.
The latch is the part that protrudes from the bracket on the right. Note the mounting hole support tab in the center.



The Startech SLSATACDADAP female slimline SATA to SATA adapter with SP4 power. Front and back of the adapter.



The slimline adapter mounted to the ODD.



And here is where I ran into my first problem...
After mounting the adapter to the ODD and inserting the drive into the bracket, the adapter's power connector hit the mounting hole support tab and prevented me from pushing the drive all the way in!
It needed another 3/8 inch to latch into the bracket. I figured I'd try replacing the adapter's vertical (straight) 4 pin power connector with a horizontal (right angled) 4 pin power connector.
Since the right angled connector covered up the mounting screw, I had to modify the connector by drilling a hole slightly off-center into the latch portion.
The original connector is on the left, the modified right angled connector on the right.



I de-soldered the original and soldered the right angled modified power connector into the adapter and mounted it onto the ODD.



I inserted the drive into the bracket and... Well it ALMOST latched into the bracket, another 1/16 inch and it would have...
On to plan B. I soldered the original vertical 4 pin power connector back into the adapter and instead removed about half of the bracket's mounting hole support tab.
This finally did the trick and the drive latched into the bracket!



The ODD bracket with drive mounted in the chassis and power cable attached to the slimline adapter.



I put the motherboard cage back into the chassis and hooked everything up:
  • The 24 pin ATX power as well as the 8 pin 12V power connection to the motherboard
  • The front panel control and USB connections to the motherboard
  • The 12 SAS backplanes to the two SAS expander cards via Molex mini SAS cables
  • The two SAS expander cards to the RAID controller via two Molex mini SAS cables (one cable per expander)
  • The two system drives via SATA cables to the motherboard
  • The optical disk drive via a SATA cable to the motherboard


The two system drive SATA cables are a bit on the short side and will need to be replaced by longer cables so they can be routed differently, once the internal fan tray bracket is back in place.



A 'pocket' above the power supply came in quite handy to store the extra/spare power connections that were not needed by my current setup.
I just coiled them up and tucked them into the extra space available for future use and out of the way so they won't block any airflow.



A better view of the SAS expander and RAID controller connections.



The chassis came with a really neat add-on card retainer bracket.



Loosening a thumbscrew on top and sliding it back, pushes the bottom retainer clip down onto an expansion card to provide extra support.



The 7 SAS cables put a lot of strain on each SAS expander card and these retainer clips help to relieve some of it.



The add-on card retainer bracket also came in handy to tie down the CPU's 12V power supply connection.

 

treadstone

Gawd
Joined
Jan 25, 2010
Messages
567
06 MAR 2010

Time to mount the hard disk drives in the drive trays...



Lots of empty bags and boxes. The four boxes at the bottom contained the drive trays and the top three boxes contained the HDDs.



It took over two and a half hours just to mount the 50 hard disk drives in the drive trays and configure them with the WDIDLE3 and WDTLER programs.
I could only configure two disks at a time since only the two system drive slots are directly connected to the motherboard.





07 MAR 2010

First time the chassis is fully assembled and loaded with all 50 x 2TB drives.
After I powered it up, the noise coming from the fans sounded like a jet plane taking off! I figured it was time for some SPL (sound pressure level) measurements.



Measurements taken in front of the chassis reached 72.4dBC!



The back measurement setup was basically the same, the SPL meter was centered at a distance of 1 meter behind the server.



Measurements at the back reached 84.2dBC!! That's a bit on the LOUD side and definitely something I need to look into...
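To put the front/back difference in perspective, decibel readings can be compared on a power basis. A rough sketch (plain dB arithmetic applied to the two readings above; the dBC weighting is ignored since both readings use it):

```python
# Rough dB math on the SPL readings: 72.4 dBC front, 84.2 dBC rear.
def db_power_ratio(db_a: float, db_b: float) -> float:
    """How many times more acoustic power db_a represents vs db_b."""
    return 10 ** ((db_a - db_b) / 10)

ratio = db_power_ratio(84.2, 72.4)
print(round(ratio, 1))  # ~15.1x more acoustic power out the back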





It was time to boot up the system and get the BIOS configuration started!
I downloaded and flashed the motherboard BIOS with the latest available version, and completed the BIOS configuration without any major issues; only minor changes were necessary.
Only the memory configuration required a few reboots to get the BIOS to use the memory modules' XMP (extreme memory profile).
After rebooting and entering the RAID controller BIOS setup, I ran into the second problem of this build:

The McBIOS RAID controller software as well as the McRAID web based configuration tool (accessed via the RAID controller's Ethernet port) reported 'NO IDE DRIVES AVAILABLE' !!!

That's obviously a BIG problem! What good is a storage server without any storage drives!

To figure out what was going on, I disconnected the SAS cable from the HP SAS expander #1 and connected the RAID controller directly to the first backplane.
After booting into the McBIOS screen, the first four hard disk drives appeared in the 'PHYSICAL DRIVES' information page.
This test confirmed that the RAID controller, cables, backplanes and drives all worked as intended but for some reason the HP SAS expander cards did not.

I remembered having read online that disabling SES2 (enclosure management) on the Areca RAID controllers was necessary to get the HP SAS expander to cooperate with the controller.
After disabling the SES2 support (only available via the McRAID web based interface) and rebooting the server, neither the drives nor the HP SAS expanders were detected by the RAID BIOS.
I tried connecting the controller and backplanes to different ports on the HP SAS expander, but still no drives or expander were recognized by the RAID controller.
I downloaded and updated the ARC-1680i with the latest available firmware (1.48) and transport (4.7.3.0) versions but unfortunately, that did not change anything either.

So I had a closer look at the HP SAS expander cards:
There are 6 green LEDs on the expander right next to the mounting bracket and just below the SFF-8088 external SAS port.

On both expander cards the LEDs were lit as follows (top to bottom):

CR6 : ON
CR5 : OFF
CR4 : OFF
CR3 : ON
CR2 : ON
CR1 : ON



For a quick test, I moved the first HP SAS expander card from the PCIEX16_2 slot (black connector) to the adjacent PCIEX16_3 slot (white connector).
To my surprise, all 24 drives connected to this expander appeared in the 'PHYSICAL DRIVES' information page!
There is obviously a difference between the blue, black and white PCI express slots.
Since I liked my original setup I needed to figure out what was causing this issue and if there was something I could do to make it work!
I also noticed that after I moved the expander to the PCIEX16_3 slot, the expander's top most LED (CR6) started to blink.
When the card was plugged into the PCIEX16_2 slot that same LED would always be on. I assume that LED CR6 is used as a heartbeat/status type indicator.

The HP SAS expander cards obviously seemed to use more than just the power connections from the PCI express connector.
I looked at the HP SAS expander's card edge connector and took some measurements with a Tektronix MSO4104 Mixed Signal Oscilloscope.
I moved the expander card to different slots and repeated my measurements during boot up.
That helped me to figure out what caused the expander cards not to work in the black PCI express slots.

Every PCI express connector has a PWRGD (Power Good) signal on Side A pin 11 (A11).
On the HP SAS expander cards, this signal appears to be connected to the expander's processor (most likely connected to or used as a #RESET signal).

The PWRGD signal in the black and blue PCI express slots is by default low, while on the white PCI express slots it goes high (+3.3V) as soon as the motherboard powers up.
During the expansion slot boot process, the CPU queries configuration registers on the expansion cards to determine what kind of card it is and what resources it requires.
If no card is plugged into the black or blue slots, the PWRGD signal is released (goes high) for a short period of time and returns low if a card does not respond with any data during the query/scan process.
If proper resource data is received by the CPU, it will set the PWRGD signal high for that particular slot.
My guess is that since the HP SAS expander cards don't need any resources from the motherboard other than power,
they don't respond with any data, and hence the motherboard figures there is no card in the slot and no need to release the PWRGD signal!
Which in turn puts the HP SAS expander back into a reset state, so the expander will not pass any traffic via any of its SAS ports!
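The behavior described above can be condensed into a toy decision model. To be clear, this is just my encoding of the measurements and reasoning in this post (the function name and structure are hypothetical), not anything from an ASUS or PCI-SIG document:

```python
# Toy model of the observed PWRGD behavior per slot type -- an
# interpretation of the scope measurements above, not a spec.
def slot_pwrgd_after_scan(slot_kind: str, card_answers_config_query: bool) -> bool:
    """True = PWRGD ends up high (card held out of reset)."""
    if slot_kind == "white":
        return True  # white slots: PWRGD goes high at power-up, unconditionally
    # black/blue slots: PWRGD is released briefly during the scan and
    # stays high only if the card responds with configuration data
    return card_answers_config_query

# The HP expander never answers the query, so in a black slot it stays in reset:
print(slot_pwrgd_after_scan("black", False))  # False
# Move it to a white slot (or rewire PWRGD to the global signal) and it runs:
print(slot_pwrgd_after_scan("white", False))  # True
```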

I decided to perform a couple of simple tests and used some Kapton tape to cover the PWRGD signal pin on the expander cards edge connector.
Once the PWRGD signal was disconnected from the motherboard (essentially floating), the expander card's processor started working and the drives appeared in the RAID controller's BIOS screen.
So I figured that all I had to do was to simply disconnect the PWRGD signal by cutting the trace just above the A11 pin.
However after a few more measurements, I found that the card does not pull-up this signal.
Now leaving a signal floating is bad enough, leaving a RESET signal floating can be absolutely disastrous!
A simple solution would be to add a pull-up resistor on the card, however that usually does not satisfy a processor's power-up reset timing.
A capacitor could be added to control the signal rise time during power up.
However, the proper or best way would be to have the PWRGD signal connected to the processor to have a controlled reset process.
This would also provide the ability to reset the card without having to power down the entire system.
If a reset is issued by the motherboard, the PWRGD signal is pulled low momentarily to reset the I/O card.

So I had a look at the motherboard to see what I could do to change the behaviour of the PWRGD signal for the two black PCI express slots.
After tracing the signal I found that apparently there are different stuffing options on the Asus P7P55-WS-SC motherboard for the PWRGD signal next to each of the PCI express connectors!
Currently there is a zero ohm jumper resistor that connects the PWRGD pin of each slot to a dedicated (programmable) power good signal.
Right next to each of those resistors is an unpopulated spot for a jumper resistor that would connect the PWRGD pin to a global power good signal.
I verified the operation of the 'global' PWRGD signal with the scope before undertaking the modification.
I figured that all I needed to do was move the jumper resistors for those two black PCI express connectors over and I should be good to go.

Please note that I am only listing this here as reference for those that have the knowledge, ability and proper tools to perform this kind of modification!
I do NOT recommend this to everyone! This will also void your warranty!


The location of the PWRGD configuration jumpers circled in red for PCIEX16_2 and PCIEX16_5 (black connectors) on the ASUS P7P55 WS SUPERCOMPUTER motherboard.
The picture was taken from the back of the board.



Close-up view of the PWRGD configuration jumper for PCIEX16_2, the picture was taken from the front of the board.
The jumper needs to be moved from the location marked by the red arrow (dedicated PWRGD signal) to the new location marked by the green arrow (global PWRGD signal).



Close-up view of the PWRGD configuration jumper for PCIEX16_5 AFTER I moved the 0402 zero ohm jumper to the alternate location marked by the red arrow.
The picture was taken from the front of the board.




S U C C E S S !!!!

After modifying the motherboard, putting the server back together and booting into the RAID BIOS, finally all 48 2TB drives appeared in the 'PHYSICAL DRIVES' information page!!!


TO BE CONTINUED
 

odditory

Supreme [H]ardness
Joined
Dec 23, 2007
Messages
5,942
FINALLY! I thought this build log would never come. Nice job treadstone. And to think I missed buying this case by a matter of minutes!
 

odditory

Supreme [H]ardness
Joined
Dec 23, 2007
Messages
5,942
The ASUS P7P55 WS SUPERCOMPUTER motherboard.
It wasn't easy to find a motherboard with the right combination of PCIE slots needed to put all of these cards together.
By the way did you take a look at any Supermicro Xeon-based motherboards? I would've put nothing else but SM in that chassis - it *will* take an EATX motherboard, you know. Think 5-7 PCIe slots.
 

treadstone

Gawd
Joined
Jan 25, 2010
Messages
567
Simply Awesome project with an equally awesome writeup. Thanks!
Thanks and sorry for the long delay in posting this. I dislocated my shoulder on the 11th so I can't do much anyway and figured it was time to get this out of the way :)

FINALLY! I thought this build log would never come. Nice job treadstone. And to think I missed buying this case by a matter of minutes!
Yep, I remember... Sorry mate :)

Great project man, keep it up :D
There is more coming. Just have to get the pictures organized. I literally have over 1000 so far...

By the way did you take a look at any Supermicro Xeon-based motherboards? I would've put nothing else but SM in that chassis - it *will* take an EATX motherboard, you know. Think 5-7 PCIe slots.
I did have a look at ALL of the SM motherboards. There were two that I had in mind but I really didn't want to go the dual CPU route and all of the power, cooling, RAM, etc that goes with it. I must have looked at well over 400 motherboards to the point where my head was buzzing :)
I finally settled on something that was easy to get, low cost and I could drop into another chassis if it didn't work for this application.
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
11,644
Might have been easier just to swap the cards in the blue slots with the cards in the black, not as cool though :D Since the blue slots are always getting lanes no matter what, you would have been fine with the expanders there, then your video card and areca should have triggered the black slots, just a thought.

I think by doing that mod, you permanently turned the blue slots into 8x slots as well, whether or not the black slots are occupied. Any reason you didn't just go with an X58 board that would have had all the PCIe slots you needed without resorting to nForce200 chips? It'll be interesting to see how the 16x PCIe connection (8 GB/s) to the CPU handles all that though, rather than using a QPI link (25.6 GB/s) like the X58 would have.
 

treadstone

Gawd
Joined
Jan 25, 2010
Messages
567
Might have been easier just to swap the cards in the blue slots with the cards in the black, not as cool though :D Since the blue slots are always getting lanes no matter what, you would have been fine with the expanders there, then your video card and areca should have triggered the black slots, just a thought.

I think by doing that mod, you permanently turned the blue slots into 8x slots as well, whether or not the black slots are occupied. Any reason you didn't just go with an X58 board that would have had all the PCIe slots you needed without resorting to nForce200 chips? It'll be interesting to see how the 16x PCIe connection (8 GB/s) to the CPU handles all that though, rather than using a QPI link (25.6 GB/s) like the X58 would have.
I did have a look at X58 based boards along with every other new and older Intel chipsets.

The modification I did to the motherboard did not change the setup or configuration of the slots in terms of their lane width. The board is still booting up the exact same way as before. The only difference is that the processor on the HP expander cards will now start operating since the PWRGD signal goes high right after the power up.
 

nicolas9510

Limp Gawd
Joined
Sep 21, 2006
Messages
169
*drools*
I cannot wait to see the rest of this worklog :)
and good job on messing with the mobo to get the cards to work correctly
 

pissboy

Gawd
Joined
Feb 2, 2003
Messages
514
As someone who's used Arecas before, I have to ask why you're not using Hitachi drives?

Also, if you plan on moving the chassis with the drives removed, invest in a label maker.
 

dajet24

2[H]4U
Joined
Jun 23, 2004
Messages
2,296
all very impressive, but damn, is that chassis really worth the $3000 price I see it goes for online?
 

treadstone

Gawd
Joined
Jan 25, 2010
Messages
567
As someone who's used Arecas before, I have to ask why you're not using Hitachi drives?

Also, if you plan on moving the chassis with the drives removed, invest in a label maker.
The Hitachis are slightly taller (thicker) than the WD drives, which blocks the already quite restricted air flow. Also, the Hitachi drives consume more power, and at 50 drives even 1W per drive makes a difference. Back when I purchased the WD drives, I got a really good deal on them and the Hitachi drives were still a bit more expensive. The prices for both drives have come down since then and I have seen some great pricing on both.
I was aware of the issues with the WD drives and as it turns out the Hitachi drives are not without issues either...

The chassis actually came with a sticker sheet that can be applied to each drive tray to show the disk number :)
 

treadstone

Gawd
Joined
Jan 25, 2010
Messages
567
all very impressive but damn is that Chassis really worth the $3000 price i see it goes for online ?
I do have to say that the chassis is quite impressive and very well built.

My local dealer had it listed for $4000 ... I paid less than half that :D ... That's what odditory was talking about earlier ;)
 

ToddW2

2[H]4U
Joined
Nov 8, 2004
Messages
4,019
Currently I think I average about 25 Blu-rays (complete discs stored as .ISO images) per TB.

So this should give me room for about 2250+ movies... :)
Pushing 200 BR I've been debating about doing something like this but not as big, and 25 per TB is a pretty good fit! Only $4 extra per BR really.
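ToddW2's "$4 extra per BR" checks out with a quick back-of-envelope. The ~$100/TB drive price below is an assumption on my part (it is the figure implied by $4 x 25 movies, and is in line with 2010-era 2TB pricing), not a number from the thread:

```python
# Back-of-envelope for the "$4 extra per BR" figure: 25 Blu-ray ISOs
# per TB at an assumed ~$100/TB drive cost (my assumption, consistent
# with 2010 2TB street prices).
movies_per_tb = 25
price_per_tb = 100.0  # assumed drive cost in USD per TB

cost_per_movie = price_per_tb / movies_per_tb
print(cost_per_movie)  # 4.0
```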
 

SeanG

n00b
Joined
Oct 13, 2005
Messages
43
Great build, although I'm curious if you already have enough data to fill most of those drives now? I find it more cost effective to purchase a drive when I approach filling my array and expand as I go. Reason is that the prices drop over time and eventually, larger capacity drives will come out down the road. When the 2.5, 3, 4TB drives come out and become price effective, I will purchase those instead and migrate the data from the 2TB drives over. I find this usually happens within the last 1-1.5 years of the warranty of the existing drives so it helps the value of the drive when I put them up for auction on eBay to help recoup the cost of upgrading to higher capacity drives. Another plus is that I'm not spinning as many empty drives so there is a power savings. I did take advantage of Fry's $125 price point on the Hitachis and ordered 2 more but I think I will need to add a 2TB drive about every month at the rate I'm going.

Right now, my chassis has 30 hotswap bays and I kept five 1TB Seagates from my last server in there for my regular data as a separate RAID 6 array. I have 10 2TB Hitachis in another array for my movie collection. I keep the arrays separate cuz the data array is accessed more often, while the movie array not so much, giving them a chance to spin down more often. At ~10 watts per drive that's 100 watts currently.
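Rough idle-power sketch of that split-array setup (the ~10W per spinning drive is SeanG's own figure above; the 20h/day spin-down is my assumption for illustration):

```python
# Idle power for a split-array setup: data array mostly spinning,
# movie array spun down when idle. ~10 W per spinning drive (assumed).
w_per_drive = 10
data_drives = 5      # 1 TB Seagates, RAID 6, regular data -- mostly spinning
movie_drives = 10    # 2 TB Hitachis, movie array -- spun down when idle

all_spinning = (data_drives + movie_drives) * w_per_drive  # 150 W worst case
movies_asleep = data_drives * w_per_drive                  # 50 W

# Monthly savings if the movie array sleeps ~20 h/day (assumed):
kwh_saved = movie_drives * w_per_drive * 20 * 30 / 1000
print(all_spinning, movies_asleep, kwh_saved)
```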
 

jaypeezee

Gawd
Joined
Jun 11, 2003
Messages
990
Holy shnikes... impressive. If you don't mind me asking, how much has all of it set you back, so far?

You should name it - (in deep manly voice) Minotaur :)
 

pjkenned

[H]ard|Gawd
Joined
Jan 8, 2010
Messages
1,971
Holy shnikes... impressive. If you don't mind me asking, how much has all of it set you back, so far?
I hope this question isn't answered. I'm at 50TB+ right now and growing (+4TB this weekend). It is basically inevitable that I'm going to hit a similar total and I really don't like keeping an accounting of what I've spent on the servers. :)

BTW treadstone: what is your current capacity and growth rate? I'm only in the 250-350GB/week range but I'd imagine it would be higher if I had THAT much extra capacity.

Edit: 52TB... I forgot I had a bunch of drives on onboard controllers... I'm halfway to having to do something like this :/
 

calamar

n00b
Joined
Apr 13, 2010
Messages
7
Very Impressive!
Given the amount of money you spent on HDs/case/PSU, why that mobo? I would go for an Intel 3420 or 5500 series with integrated graphics and better PCIe lane handling.
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
11,644
I did have a look at X58 based boards along with all the other new and older Intel chipsets.

The modification I did to the motherboard did not change the setup or configuration of the slots in terms of their lane width. The board is still booting up the exact same way as before. The only difference is that the processor on the HP expander cards will now start operating since the PWRGD signal goes high right after the power up.
Interesting, so if you boot up without the black slots occupied, the blue slots still show full 16x bandwidth? I was under the impression that once you force the black slot into thinking it's occupied, it's going to get 8 lanes no matter what. The way the lane config works on that board, whenever the black slots are "occupied", they get 8 lanes from the blue slots and that's when the blue slots revert to 8x bandwidth.

The point I was trying to make with the X58 board is, you wouldn't need the nForce200 chip to "add" lanes (really just a switch chip) and you'd have greater bandwidth available to your cards, 36 lanes from the X58 chipset instead of 16 lanes from the CPU. But that may not really matter, since everything should use the areca's 8x connection.

It'll be interesting to see how you get this home and into your house, maybe I missed it, but where is it going in the house?
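A rough bandwidth check on the lane debate (the per-lane figures below are my assumed round numbers for PCIe 1.x and 2.0, and ~100 MB/s sustained per 7200rpm drive is likewise an assumption):

```python
# Back-of-the-envelope PCIe bandwidth vs. aggregate drive throughput.
# Assumed: PCIe 1.x ~250 MB/s per lane, PCIe 2.0 ~500 MB/s per lane,
# ~100 MB/s sustained per 7200 rpm drive.
def pcie_bw_mbs(lanes, per_lane_mbs):
    return lanes * per_lane_mbs

x8_gen1 = pcie_bw_mbs(8, 250)   # 2000 MB/s
x8_gen2 = pcie_bw_mbs(8, 500)   # 4000 MB/s

# Even an x8 gen-1 link keeps up with ~20 drives streaming flat out;
# in practice the SAS expander uplink saturates before the slot does.
drives_to_saturate = x8_gen1 // 100
print(x8_gen1, x8_gen2, drives_to_saturate)
```

Which is why, as ND40oz says, the extra X58 lanes may not matter much once everything funnels through the Areca's x8 connection anyway.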
 

ToddW2

2[H]4U
Joined
Nov 8, 2004
Messages
4,019
I'm curious what you will be using it for.... (and other ppl who use 50TB+ in their HOME)

I can see handling 300-800 blu-rays and some DVDs... (Or biz usage)

But 2k+ is a lot of movies, what else do you have a lot of data of? Home movies? Security Cameras?
 

treadstone

Gawd
Joined
Jan 25, 2010
Messages
567
Great build, although I'm curious if you already have enough data to fill most of those drives now? I find it more cost effective to purchase a drive when I approach filling my array and expand as I go. Reason is that the prices drop over time and eventually, larger capacity drives will come out down the road. When the 2.5, 3, 4TB drives come out and become price effective, I will purchase those instead and migrate the data from the 2TB drives over. I find this usually happens within the last 1-1.5 years of the warranty of the existing drives so it helps the value of the drive when I put them up for auction on eBay to help recoup the cost of upgrading to higher capacity drives. Another plus is that I'm not spinning as many empty drives so there is a power savings. I did take advantage of Fry's $125 price point on the Hitachis and ordered 2 more but I think I will need to add a 2TB drive about every month at the rate I'm going.

Right now, my chassis has 30 hotswap bays and I kept five 1TB Seagates from my last server in there for my regular data as a separate RAID 6 array. I have 10 2TB Hitachis in another array for my movie collection. I keep the arrays separate cuz the data array is accessed more often, while the movie array not so much, giving them a chance to spin down more often. At ~10 watts per drive that's 100 watts currently.
True, it might be more cost effective to buy as you go, but the model life span of these HDDs is getting shorter and shorter, and by the time I need to buy more there are different drives out there. Since my initial intent was to set up a RAID system, which more or less dictates using the same capacity/type of drive, I figured I might as well fill it up right now.

Currently I have about 400+ Blu-ray movies and REALLY had to slow down collecting those over the past few months since my HTPC (currently at 9TB+) and a whole bunch of external drives are all full. At the rate I'm going, I will most likely add about 25+ discs a month. I also have 1450+ DVDs, however I don't think I'll be transferring those onto the server as they are currently in 400-disc changers.

Holy shnikes... impressive. If you don't mind me asking, how much has all of it set you back, so far?

You should name it - (in deep manly voice) Minotaur :)
Thanks for the naming idea, I'll add it to the list...

I haven't really added it all up but I figure somewhere around $16k+

I hope this question isn't answered. I'm at 50TB+ right now and growing (+4TB this weekend). It is basically inevitable that I'm going to hit a similar total and I really don't like keeping an accounting of what I've spent on the servers. :)

BTW treadstone: what is your current capacity and growth rate? I'm only in the 250-350GB/week range but I'd imagine it would be higher if I had THAT much extra capacity.

Edit: 52TB... I forgot I had a bunch of drives on onboard controllers... I'm halfway to having to do something like this :/
pjkenned, 52TB, that's pretty damn nice too!!
I used to add at least 1 Blu-ray movie a day to my collection. See above...

Very Impressive!
Given the amount of money you spent on HDs/case/PSU, why that mobo? I would go for an Intel 3420 or 5500 series with integrated graphics and better PCIe lane handling.
I find it interesting that everyone seems to get hung up on the motherboard choice I made. If I could, I'd stick an even smaller motherboard in there to bring the power consumption down. The server is built for storage capacity and NOT for speed. The problem I faced when looking for a motherboard was that very few motherboards out there have the right combination of PCIe slots to support the cards I needed/wanted to put into the system.

I needed a x16 slot for the HD5450 graphics card. I need this card for its HDMI interface, as most motherboard-based graphics chips don't support HDMI. Those motherboards with the i5-based graphics engine that do have HDMI on board don't have the right PCIe slots... believe me, I looked!!

Another x8 PCIE slot for the Areca ARC-1680i controller card.

Two x8 PCIE slots for the HP expander cards (although they don't really need to have any lanes on them as the slots are just used for power and physical support).

I was also looking to keep an empty slot in between the ARC-1680i and the HP expander cards to have adequate air flow to keep them cool.

I also wanted to have at least one x1 slot for a future dual TV tuner card.

So based on my above listed criteria, I set out to find a motherboard... and I looked and looked and looked... for 3 weeks. I also looked at the Intel and Supermicro server boards and two or three of them almost made the list. But I really didn't want to go the dual Xeon CPU route...

Interesting, so if you boot up without the black slots occupied, the blue slots still show full 16x bandwidth? I was under the impression that once you force the black slot into thinking it's occupied, it's going to get 8 lanes no matter what. The way the lane config works on that board, whenever the black slots are "occupied", they get 8 lanes from the blue slots and that's when the blue slots revert to 8x bandwidth.

The point I was trying to make with the X58 board is, you wouldn't need the nForce200 chip to "add" lanes (really just a switch chip) and you'd have greater bandwidth available to your cards, 36 lanes from the X58 chipset instead of 16 lanes from the CPU. But that may not really matter, since everything should use the areca's 8x connection.

It'll be interesting to see how you get this home and into your house, maybe I missed it, but where is it going in the house?
The motherboard is not AWARE that there is anything plugged into the black slots, as the config query at the beginning goes unanswered (which was my original problem). So all I did was modify the PWRGD signal that goes to the two black slots. The motherboard configuration hasn't changed, which means the blue slots still get their x16 bandwidth.

As to the X58 motherboard, I wanted to use an i7 based processor and along with my above listed criteria for the slots/card distribution, there wasn't much to choose from.

I have the server at home already; it's been sitting in my garage for the last two weeks and I haven't been able to move it into my basement and hook it up :(

I'm curious what you will be using it for.... (and other ppl who use 50TB+ in their HOME)

I can see handling 300-800 blu-rays and some DVDs... (Or biz usage)

But 2k+ is a lot of movies, what else do you have a lot of data of? Home movies? Security Cameras?
Just Blu-ray movies... ripped as complete discs in ISO image format. I hate having to get up and look through all the damn cases to find a movie I want to watch. It's so much nicer to be able to select a movie from the comfort of your home theater chairs ... :D

Based on the UPS load how much power does it pull?
I will post some power consumption figures soon. During testing and assembly I used a Kill-A-Watt type meter to monitor power consumption. That's how I figured out that I should definitely use at least two of the power modules when all 50 HDDs are plugged in, just to spin them up... The first time I powered up the system I had only a single power module plugged in; when I started the server and watched the meter, it read 1156W... Oops, those are 600W power supplies ;)
So I plugged them all in and tried again. Depending on the configuration, whether the OS is running or just the BIOS screen, and how many drives are active, it can vary from below 300W to over 800W!
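The spin-up math explains that 1156W reading (the ~25W peak per drive is my assumption; check your drive's datasheet for the real 12V spin-up figure):

```python
# Why a single 600 W module isn't enough at power-on: a 3.5" drive
# peaks around 25-30 W during spin-up (assumed figure), so 50 drives
# starting at once draw roughly double what one module can supply.
drives = 50
spinup_w = 25                       # assumed peak per drive at spin-up
est_peak = drives * spinup_w        # 1250 W estimated
measured_peak = 1156                # W, from the Kill-A-Watt reading above

module_w = 600
modules_needed = -(-measured_peak // module_w)   # ceiling division -> 2
print(est_peak, modules_needed)
```

Staggered spin-up on the controller would also tame that initial surge, at the cost of a slower boot.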

I will post more on that subject when I finish up my build log for the following days from where I stopped...

Read this:
http://blogs.zdnet.com/storage/?p=162

You're pretty much guaranteed to have at least one disk fail within the first year. You're also likely to have a RAID 5 rebuild fail.

With that in mind, you should go RAID 6 with a hot spare.
I am contemplating different configurations and I will post some questions and want to get some more input on that subject once I get the server moved into my basement and I can work on it...
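For anyone curious, the math behind that ZDNet article can be sketched like this (the 1-per-1e14-bits URE rate is the typical consumer-drive spec-sheet number, and the 10-drive array is hypothetical):

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while rebuilding a RAID 5 array of 2 TB drives. Assumed spec-sheet
# URE rate: 1 error per 1e14 bits read (typical consumer drives).
drive_bytes = 2e12
ure_per_bit = 1e-14

def p_ure_during_rebuild(surviving_drives):
    bits_read = surviving_drives * drive_bytes * 8
    # chance of at least one URE over all bits read
    return 1 - (1 - ure_per_bit) ** bits_read

# Rebuilding a hypothetical 10-drive RAID 5 reads 9 surviving drives:
print(round(p_ure_during_rebuild(9), 2))   # roughly 0.76
```

RAID 6 changes the picture because a single URE during rebuild is still covered by the second parity drive rather than killing the array.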
 
Last edited:

nitrobass24

[H]ard|DCer of the Month - December 2009
Joined
Apr 7, 2006
Messages
10,462
Man I have been waiting a long time for this thread.

Looks great treadstone!

BTW is SES2 enabled and working with your Expander?
 