Build-Log: 100TB Home Media Server

Again, it depends on what you mean by real time. ZFS doesn't bother with checksums unless you tell it to scrub or it reads a bad checksum. Having your entire pool constantly reading checksums is wasted IO.
 
Will FlexRAID detect bit rot the same way ZFS does, while a file is being read? When people mention SnapRAID and FlexRAID in the same post, it gets confusing as to whether or not they work the same way.
 
ZFS doesn't bother with checksums unless you tell it to scrub or it reads a bad checksum. Having your entire pool constantly reading checksums is wasted IO.

What does that even mean? How would it know a checksum is bad unless it reads and verifies it?

Of course ZFS reads and verifies the checksum when a block is accessed. Anything else would be pointless.

Real-time in this case means when a file is accessed. It's not going through the pool constantly; that's scrubbing, which is a manually kicked-off process.
 
No, you made it sound as if ZFS never even looks at checksums until it somehow reads a bad checksum. What "bother" could ZFS have with checksums except reading and verifying them? That's their only purpose.

Also, since there is a manual scrub process, what else could "constantly reading checksums is wasted IO" mean, other than reading checksums at every block access?

If you really think what you and I said have the same meaning, you need to work on your wording.
 
With FlexRAID you need to run a process on the array called "validate" to check if the file checksums match. Most people run this process once a month.
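
To make the distinction being argued about here concrete, below is a toy Python sketch of verify-on-read versus a separate full scrub. It is only an illustration of the two behaviours, not how ZFS (or FlexRAID's validate) is actually implemented; the block store, the SHA-256 checksum and every name in it are my own assumptions.

Code:
# Toy model of checksum handling: verify-on-read vs. a periodic scrub.
# This is NOT how ZFS works internally; it only illustrates the difference
# being argued about above.
import hashlib

class ChecksumError(Exception):
    pass

class ToyPool:
    def __init__(self):
        self.blocks = {}  # block_id -> (data, checksum)

    def write_block(self, block_id, data: bytes):
        # The checksum is computed once, at write time, and stored with the block.
        self.blocks[block_id] = (data, hashlib.sha256(data).hexdigest())

    def read_block(self, block_id) -> bytes:
        # "Real-time" verification: every read recomputes and compares the
        # checksum before the data is handed back to the caller.
        data, stored = self.blocks[block_id]
        if hashlib.sha256(data).hexdigest() != stored:
            raise ChecksumError(f"block {block_id} failed verification")
        return data

    def scrub(self):
        # A scrub is the separate, manually started pass that reads and
        # verifies *every* block, including ones nobody has accessed.
        return [bid for bid, (data, stored) in self.blocks.items()
                if hashlib.sha256(data).hexdigest() != stored]

if __name__ == "__main__":
    pool = ToyPool()
    pool.write_block(1, b"movie data")
    pool.read_block(1)                                  # verified on access
    pool.blocks[1] = (b"bit rot!", pool.blocks[1][1])   # simulate silent corruption
    print(pool.scrub())                                 # the scrub catches it: [1]

The point is simply that per-read verification only ever touches blocks you actually access, while a scrub is the separate pass that walks everything.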
 
Thank you for checking out my build-log.

Server name:
TBD (I'm still looking for a cool name...)
Total storage capacity:
100TB
Available storage capacity:
90TB+ (Will depend on configuration, more about that later...)

The following picture will be replaced with a picture of my actual server once it is completed and up and running.



The complete parts list for my new centralized home media server:

I know it has taken me quite some time to finally get around to posting this, so my apologies to those who have been waiting so patiently this entire time.

Note: Click on any image for a larger version.

Well, here we go... The build log...



11 JAN 2010

Items ordered:


20 JAN 2010

The server chassis arrived on its own 26 x 40 inch skid...



I decided to assemble the server in the test lab at my company, and once it's all put together I'm going to transport it home.



The shipping box measured 25W x 36D x 26H inches (without the skid) and weighed in at 168 pounds (about 76.5kg)!!!



Located right under the shipping carton lid were 4 boxes containing the chassis accessories.



A sneak peek at the chassis after taking it out of the shipping box. The entire chassis was protected by a strong and rather large plastic bag.



At the bottom of the shipping box are the two 24 inch slide rails that come standard with this chassis!



The four boxes containing the accessories.



The content of accessory Box #1:
  • 8 Hot-swap drive trays
  • An assortment of cables
  • Mounting screws for the HDD, motherboard, etc.
  • ODD retaining bracket and interface PCB


The content of accessory Box #2, #3 and #4:
  • 14 Hot-swap drive trays


A single hot-swap drive tray with the white 'air-blocking' plastic clip still in place.



Front view of the Chenbro RM91250 50-bay server chassis.
The top two drive slots are SATA only and are intended to be connected directly to the motherboard for use as system boot drives.
On the top left is the optical disk drive bay and above that are two USB connections, the reset, alarm mute and power button as well as the HDD activity, power, alarm, LAN1 and LAN2 LEDs.
Each of the 12 rows below contains a 4-port SAS/SATA backplane with a single SFF-8087 connector for connection to a storage controller.



Side/Back view.
Those built-in handles are EXTREMELY useful! Even in this state, the chassis is quite heavy and requires at least two people to lift it out of the box and onto the work bench!



The back of the server.
The top portion contains the removable motherboard tray. Right below it are four high-performance and hot-swappable 80mm fans, and at the bottom are two high-performance and hot-swappable 120mm fans along with the four hot-swappable redundant power modules.



The four hot-swappable power modules after I removed them from their cage.
Each power module has a 600W output rating and two Sunon PMD1238PQBX-A 38x38x28mm 15,000RPM 19CFM 5.8W 52.0dBA fans. Right below the handle on the left edge of each power module is a status LED (green = power ok, red = faulty module). The latch next to the IEC-320 power input connector in combination with the thumb screw is used to secure each module in the power module cage.



The power module cage and power distribution backplane. This cage is designed as a 3+1 redundant setup with a total output of 1620W.



One of the two high-performance and hot-swappable 120mm fans.
This is a Delta AFB1212SHE-F00 120x120x38mm 4,100RPM 190.5CFM 15.0W 55.5dBA fan with TACH output signal, but unfortunately no PWM control input.



The four high-performance and hot-swappable 80mm fans.
These are Delta FFB0812EHE-7N66 80x80x38mm 5,700RPM 80.2CFM 10.8W 52.5dBA fans with TACH output and PWM control input signals.



Each of the fan modules has its own little 'mini-backplane' with a 10-pin card edge connector. Unfortunately, they don't utilize the PWM control signals even though the backplanes have all the necessary connections.



Looking inside the chassis from the back to the front after lifting the lid.
No tools required, just loosen two thumb screws and slide the lid back about 3/4 of an inch to lift it up.



A large sticker on the inside of the lid serves as a 'Chassis Quick Reference' guide.



The internal hot-swap fan tray. The tabs on top of the blue hot-swap fan modules are used to unlatch and pull out the individual fan modules. The metal bracket actually consists of two brackets: one is mounted to the chassis, and the second (to which the fan slots are attached) is mounted to the first via rubber mounts to isolate vibrations. There is also a chassis intrusion switch mounted to the fan tray bracket on the right-hand side.



One of the internal hot-swap high-performance 80mm fans after removal from the fan tray. It's the exact same 80mm fan as those found in the back.



After removing nine screws and taking off the lid that covers the front portion of the chassis, the front panel PCB (LEDs, USB ports, switches and a processor that monitors ambient temperature and backplane signals and controls the alarm beeper) as well as the fan monitor PCB (on the left) became accessible.
The connections of the front panel PCB (left to right):
  • USB 1 & 2 (from motherboard)
  • Motherboard control connections (power switch, reset switch, HDD activity LED, LAN 1 & 2 activity LED, power LED)
  • Fan monitor board interconnect cable (provides power and alarm signal to the front panel PCB)
  • Power supply alarm mute
  • Power supply alarm status input
The fan monitor PCB monitors the TACH output of the 10 chassis fans, and a 10-position DIP switch provides the ability to disable the monitoring of any fan. The yellow wires are the individual TACH signals from the 10 fans.



A different view of the optical drive bay with the front panel PCB mounted on top of it. Also visible are the SFF-8087 connectors on the right-hand side of the first three HDD backplanes.



The two-slot SATA backplane for the system drives.



Top down view between the internal fan tray and the backplanes (it's a bit messy in there with all those cables flying about).



Here is something I found rather interesting: they put the model number sticker on the INSIDE! Makes total sense, doesn't it? :)





28 JAN 2010

Items ordered:
Items ordered and picked-up:
In case you are wondering why I bought 52 drives, here is how I intend to put them to use (with a quick check of the usable-capacity numbers right after the list):
  • 2 drives (4TB capacity) in RAID 1 (2TB usable) as system drives; whatever space the system doesn't occupy will be used for music storage
  • 48 drives (96TB capacity) in either 2 x RAID 6 (88TB usable) or 3 x RAID 5 (90TB usable) configurations for the storage pool
  • 2 drives (4TB capacity) as spares in case any of the other 50 drives fail
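Here is a quick Python sanity check of those usable-capacity numbers (nothing more than "drives minus parity" arithmetic with 2TB drives; it ignores filesystem overhead and TB-vs-TiB accounting, and it isn't a recommendation for either layout):

Code:
# Quick sanity check of the usable-capacity numbers above.
# Assumes 2TB drives and simple "drives minus parity" arithmetic per array.
DRIVE_TB = 2

def usable_tb(arrays, drives_per_array, parity_per_array, tb_per_drive=DRIVE_TB):
    """Usable capacity of N identical parity arrays, ignoring filesystem overhead."""
    return arrays * (drives_per_array - parity_per_array) * tb_per_drive

# 48 pool drives, option 1: 2 x RAID 6 (24 drives each, 2 parity drives per array)
print("2 x RAID 6:", usable_tb(2, 24, 2), "TB")         # -> 88 TB

# 48 pool drives, option 2: 3 x RAID 5 (16 drives each, 1 parity drive per array)
print("3 x RAID 5:", usable_tb(3, 16, 1), "TB")         # -> 90 TB

# The 2 system drives in RAID 1 (mirror) leave one drive's worth of space
print("RAID 1 system pair:", usable_tb(1, 2, 1), "TB")  # -> 2 TB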
And this is what 52 x 2TB drives (104TB total) look like, still individually packaged and boxed up :D



One of the WD20EADS 2TB green drives still in its protective anti-static bag.



I mounted one of the drives in a drive tray to have a look at the fit and finish.



A close-up shot of the front of one of the drive trays. Located on the right side are two light pipes for the blue power LED (top) and the green activity/red fault LED (bottom). The actual LEDs are mounted on the backplanes.




OP,

Do you mind if I ask how the reliability of those WD green drives turned out ???
 
Will FlexRAID detect bit rot the same way ZFS does, while a file is being read? When people mention SnapRAID and FlexRAID in the same post, it gets confusing as to whether or not they work the same way.

FWIW, FlexRAID has a couple of modes. One mode is just like SnapRAID, where it is not possible to verify checksums when doing reads. FlexRAID, however, offers 'Real-Time RAID' where it would be possible, although I do not know if it does it. And then its new thing is F-RAID or something, which again might be different from the older 'Real-Time RAID'. I don't actually have an answer offhand, but make sure you get an answer for the mode you're thinking about.

Also FWIW, you can just put SnapRAID on a bunch of individual disks, each with its own BTRFS/ZFS filesystem on it. Let the filesystem do checksums on reads, and it will fail if the data is bad. Then use SnapRAID, or FlexRAID, to fix it.
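
For what that workflow could look like in practice, here is a rough Python sketch around SnapRAID's command-line interface. sync, scrub and fix are real SnapRAID commands, but the wrapper itself, the nightly schedule, the 5% scrub plan and the assumption that snapraid.conf is already set up are all mine, and FlexRAID's tooling works differently:

Code:
# Rough maintenance wrapper around the SnapRAID CLI. sync/scrub/fix are real
# SnapRAID commands; the scheduling, percentages and paths are illustrative
# assumptions, not a recommended policy.
import subprocess
import sys

SNAPRAID = "snapraid"  # assumes snapraid is on PATH and snapraid.conf already lists the data/parity disks

def run(*args):
    # Print and run a snapraid invocation, returning its exit code.
    print("+", SNAPRAID, *args)
    return subprocess.run([SNAPRAID, *args], check=False).returncode

def nightly():
    # Update parity to cover files added or changed since the last run.
    if run("sync") != 0:
        sys.exit("snapraid sync reported a problem; check its output")
    # Read back part of the array and verify it against parity/checksums,
    # so silent corruption is noticed without waiting for a full pass.
    run("-p", "5", "scrub")   # scrub ~5% per night (an arbitrary choice)

def repair():
    # If a scrub (or a checksumming filesystem on one of the data disks)
    # reports bad data, parity can be used to rebuild the affected files.
    run("-e", "fix")          # -e: fix only the blocks already flagged as bad

if __name__ == "__main__":
    repair() if "--repair" in sys.argv else nightly()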
 
Oz,

How do you have your drives set up? I'm about to set up a very similar system - 96TB (24 Seagate 4TB NAS drives) - and I have an Areca 1680ix 24-4 RAID card.

Initially I was thinking of doing RAID 60 with 2x 12-drive RAID 6 arrays, or 3x 8-drive RAID 6 arrays. I've never done RAID 60, but I have had 2x 16-drive x 1.5TB Seagate Barracuda RAID 6 arrays on Areca 1260D controllers that have worked great for 4+ years.

I'm also looking at FlexRAID and ZFS but don't have experience with them. I was running an unRAID array for a little over a year, but the performance when copying data to the "array" was slow (15-20MB/s).

I have 2 video capture systems and do a ton of M2TS video captures, and I also have a very extensive DVD and Blu-ray collection that I rip for playback via SageTV.

Any info would be great.

Thanks!



Amount of total storage: 192TB
Amount of storage in the following system: 2 systems, exact same specs, 96TB each

Case: Norco 4224
PSU: Silverstone 750w
Motherboard: ASUS P8C WS
CPU: Intel Xeon E3-1230 V2
RAM: 16GB Kingston ECC (2x8GB)
GPU: 7200gs
Controller Cards: Areca 1882-24-ix 4GB
Hard Drives (include full model number): 24x Hitachi Coolspin 4TB (5K4000), 1x SanDisk Extreme 120GB
Battery Backup Units: Areca BBU
Operating System: Windows 2008 R2 Enterprise

I'll upload pics shortly
 
Amount of total storage: 192TB
Amount of storage in the following system: 2 systems, exact same specs, 96TB each

Case: Norco 4224
PSU: Silverstone 750w
Motherboard: ASUS P8C WS
CPU: Intel Xeon E3-1230 V2
RAM: 16GB Kingston ECC (2x8GB)
GPU: 7200gs
Controller Cards: Areca 1882-24-ix 4GB
Hard Drives (include full model number): 24x Hitachi Coolspin 4TB (5K4000), 1x SanDisk Extreme 120GB
Battery Backup Units: Areca BBU
Operating System: Windows 2008 R2 Enterprise

I'll upload pics shortly

We are waiting for pics :D
 
I'd be curious what the failure rate on the drives in these cases is. How is the vibration?
How long have you been running them?
 
I'd be curious what the failure rate on the drives in these cases is. How is the vibration?
How long have you been running them?

Vibrations? - LoL
I use two of these Chenbro boxes as my main ZFS backup systems. They are as loud as a couple of cheap turbo vacuum cleaners and so heavy that you need 2-4 people to carry them away. If you worry, you should worry about the desk you put these boxes on, not about vibrations.

chenbro50b.jpg
 
How many TB _Gea? :eek:

The main reason for these 50-bay boxes was the ability to grow seamlessly up to around 200 TB with SATA disks (6TB some day) without the need for expanders (I use 6x IBM 1015 in each).

Currently I use 2 and 3 RAID-Z2 vdevs of 10 disks each (vdevs made from 2 and 4 TB disks) with a capacity of about 40 TB each. Capacity can grow either by adding more vdevs or by replacing the 2 TB disks with 4TB (or 6TB in 2014) disks.

Max capacity some day: 36 disks + 10 disks for redundancy + 2 hot spares = about 200 TB usable with 6 TB disks (140 TB with 4 TB disks)
 
Vibrations? - LoL
I use two of these Chenbro boxes as my main ZFS backup systems. They are as loud as a couple of cheap turbo vacuum cleaners and so heavy that you need 2-4 people to carry them away. If you worry, you should worry about the desk you put these boxes on, not about vibrations.

The main reason for these 50-bay boxes was the ability to grow seamlessly up to around 200 TB with SATA disks (6TB some day) without the need for expanders (I use 6x IBM 1015 in each).

Currently I use 2 and 3 RAID-Z2 vdevs of 10 disks each (vdevs made from 2 and 4 TB disks) with a capacity of about 40 TB each. Capacity can grow either by adding more vdevs or by replacing the 2 TB disks with 4TB (or 6TB in 2014) disks.

Max capacity some day: 36 disks + 10 disks for redundancy + 2 hot spares = about 200 TB usable with 6 TB disks (140 TB with 4 TB disks)
You have confirmed why the 50-bay boxes are not really a great investment.
Too heavy to work with in most cases, wasted space in them, and still noisy.
Norcos with expanders would, in hindsight, have been and would still be the best option.

I have a chuckle when I see rack chassis but no rack.

Not having a shot at you; I've just seen proof from an end user that these overpriced boxes don't really suit.
 
I would also prefer two 4224s, the second connected by SFF-8088 to the first (either via an expander or 6 direct lanes) and put into a 19" rack, instead of using these overpriced Chenbro 48-bay cases... but having them on top of a table, a *single* box is maybe less risky :)
 