Build-Log: 100TB Home Media Server

Well, looks like I have been living under a rock called FlexRAID. :p

This is my first time seeing this thread. :eek:
Ridiculous build you have there treadstone.
I might need to use screenshots of your build instead of the puny 8TB currently being showcased for FlexRAID 2.0 (http://www.flexraid.com/).

Sent you a PM at the Openegg.org forum.
 
Have you lost any WD20EADS drives yet treadstone? Based on the reliability reports on that drive, even with WDIDLE and WDTLER run on it, I would expect out of 50 of them you would have seen some go down.

Cool build... I'm at 24TB and landlocked on an old 16-port Areca controller. To go any bigger I'd have to invest in a new controller and external drive cases, etc.
 
@spectrumbx: I'm eager to try FlexRAID 2.0!!
And I am more than happy to provide you with screen shots or feedback for the new version...

@shoek: I haven't lost a drive yet (knock on wood) :)

Mind you, I don't run them in a RAID array. I actually put WDTLER back since I run each drive individually and use FlexRAID instead. FlexRAID gives me similar protection (against a drive failure) and also lets me see and share the 48 2TB drives in my storage pool as a single 96TB drive. The benefit of FlexRAID is that I could lose, for example, 5 drives and still have access to most of my data (less what was on the failed drives, of course), and even if the server died on me, I could simply take the individual drives, plug them into another Windows machine and access all my data!
If a single drive fails, I can recover the data I lost on the failed drive via FlexRAID. I will be switching to a different FlexRAID configuration that should allow for up to two drives to fail before I can no longer recover the data on the failed drives.
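For anyone curious how this kind of single-parity protection works in principle, here's a minimal sketch (purely illustrative, not FlexRAID's actual code): the parity drive simply stores the XOR of the corresponding blocks on the data drives, so any one missing drive can be rebuilt from the survivors plus the parity.

```python
# Toy single-parity "snapshot RAID" sketch -- illustrative only, not FlexRAID's engine.
# Each "drive" is just a byte buffer; the parity drive holds the XOR of all data drives.
import os

BLOCK = 64 * 1024  # small block size, just for the example


def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(blocks[0])
    for blk in blocks[1:]:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)


# Simulate 4 independent data drives with random contents.
drives = [os.urandom(BLOCK) for _ in range(4)]

# Snapshot step: compute the parity "drive" once.
parity = xor_blocks(drives)

# Pretend drive 2 died: rebuild it from the survivors plus the parity.
survivors = [d for i, d in enumerate(drives) if i != 2]
rebuilt = xor_blocks(survivors + [parity])

assert rebuilt == drives[2]  # the lost drive's contents are recovered
```

Lose two drives with only one parity unit and those two are gone, but (as above) the other drives are still readable on their own, which is the big difference from a striped RAID set.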

I am actually looking at replacing the motherboard, CPU and RAM along with the Areca 1680i (currently only used as a dumb JBOD controller) with something different. The i7-860 I have in there is WAY overkill and I'm thinking of moving that into one of my HTPCs instead... I'm looking at and waiting on the new Sandy Bridge type server boards from SM...
 

Can I assume you selected the i7-860 to handle the parity calculations? I read recently on zfsbuild.com (actually it was the related AnandTech article) about how a ZFS filesystem was using up to 70% CPU due to the checksumming that ZFS does.

In your case, though, I suppose the only time the CPU is involved is when a) you're ripping a Blu-ray onto the filesystem and b) you're trying to read from the filesystem while it's degraded.

You could probably downspec to an i3-530, to be honest.
 
I don't use ZFS. I use Windows Server 2008 R2 as the OS and use FlexRAID for parity and combining all drives into a single view that gets shared across the network. I use one of my HTPCs to rip the Blu-rays and transfer them via the network to the server. So the only time the CPU really gets used is during the parity calculations for FlexRAID.

Originally I had a lot of other things planned to run on this server, but that's not going to happen anymore and hence my intention of downgrading the server and moving the i7-860 into one of my HTPCs. Most likely the one I currently use to rip the Blu-rays.
 
Yes, I know you don't use ZFS. :) My point was that parity calculations eat CPU cycles, and FlexRAID uses parity. :)
 
FlexRAID does make use of all 8 cores (if you configure it accordingly and correctly); however, if memory serves me right (I haven't really looked at that in a while), none of the cores were anywhere near maxed out. So a slower CPU will probably do just fine, one with at least 4 cores, but I think I will end up with a CPU with 8 cores again...
 
Why would you put an 8 core Xeon in a server that you're removing a 4-core processor (i7-860) from?
 
Even old CPUs can do >5GB/s of XOR operations; XOR is one of the easiest calculations a CPU can do. So I do not see why you need 8 cores for that; is even more than one core utilized at a time? If it's just a single-threaded implementation, a single Sempron/Celeron should do XOR fast enough that you wouldn't care.

If you boot an Ubuntu Linux livecd you can see in the dmesg output how fast XOR works on your system. Generally between 5GB/s and 20GB/s; using one CPU core.

Of course, XOR is only a small part of the overhead of a RAID driver; most of the overhead comes from memory copies, not the XOR itself. XOR is super easy for your CPU; its needs are hugely overrated.
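If anyone wants to put a rough number on that outside of Linux, here's a quick user-space sketch using numpy (the figure won't match the kernel's tuned routines that dmesg reports, but the order of magnitude is the point):

```python
# Rough user-space XOR throughput check -- not the kernel's tuned routines,
# but enough to show that XOR is cheap.
import time
import numpy as np

SIZE = 256 * 1024 * 1024  # 256 MiB per buffer
a = np.random.randint(0, 256, SIZE, dtype=np.uint8)
b = np.random.randint(0, 256, SIZE, dtype=np.uint8)

start = time.perf_counter()
np.bitwise_xor(a, b)  # single pass over both buffers on one core
elapsed = time.perf_counter() - start

print(f"~{SIZE / elapsed / 1e9:.1f} GB/s XOR on one core")
```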
 
@parityboy: Sorry, I meant a 4-core / 8-thread CPU, not an 8-core CPU!

@sub.mesa: Thanks, I am very well aware that XOR is a simple logic function that most CPUs can easily handle. But I am also looking at more than just the processing power of the CPU (e.g. power consumption...), and older CPUs usually draw more power than a newer CPU generation for the same throughput.

FlexRAID does make use of all 8 cores if you configure it accordingly. Brahim (spectrumbx) can answer your question in regards to the actual CPU utilization during its parity calculation a lot better than I can, since he is the designer/brains behind the FlexRAID system.

I am considering one of the new 2nd-generation Xeon CPUs as an alternative since it looks like they will draw even less power than what I currently use; combine that with a new Sandy Bridge MB, and it should reduce the idle power consumption by a bit.

I have to talk to Brahim and see what kind of CPU utilization/requirements the new FlexRAID v2.0 (realtime RAID engine) has.
 
@treadstone
The CPU requirements will depend on the tolerance level you pick and your I/O throughput.
The I/O throughput is the amount of data your system I/O can feed to the FlexRAID engine.
In essence, it ties into the performance of your disk controllers in relation to your total number of disks.

Any dual-core above an Atom should do fine with a T2+ (RAID 6) configuration.
My dual-core 2GHz Opteron VM had no issue doing triple parity.

For your setup, I'd say any decent quad-core will be more than enough.
The real-time RAID feature is not complete and stable yet, but its CPU requirements are even less.

If you want to give me some numbers based on your current configuration, we might be able to focus on the key components and configurations that will impact performance the most.
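To turn that into a ballpark, here's the kind of napkin math involved (the per-disk speed and the "parity work scales with tolerance" assumption are mine for illustration, not FlexRAID internals):

```python
# Napkin sizing sketch -- assumed numbers, not FlexRAID internals.
data_disks = 15     # disks covered by one parity run
disk_mb_s  = 90     # assumed sequential read for a 2TB green drive
tolerance  = 2      # T2+ style, two parity units

input_mb_s  = data_disks * disk_mb_s   # what the controllers can feed the engine
parity_mb_s = input_mb_s * tolerance   # rough parity workload if it scales with tolerance

print(f"Input stream: ~{input_mb_s} MB/s")
print(f"Parity work : ~{parity_mb_s} MB/s of parity-style calculations")
# A modern quad-core handles several GB/s of this kind of math,
# which is why the disks and controllers, not the CPU, usually set the pace.
```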
 
@spectrumbx: Thanks for the info.

So last night I installed FlexRAID v2.0, recreated my snapshot RAID and storage pool (-view), and also decided to recreate the parity from scratch. The parity creation just finished (it took a little over 12h) and here is what I found so far:

Running parity across 15 drives seems to load the CPU with about 40 to 50% with 8 threads.
Running parity across 8 drives seems to load the CPU with about 20 to 30% with 8 threads.
Running parity across 5 drives seems to load the CPU with about 13 to 20% with 8 threads.
 
It's across all threads; however, it is not continuous. It's at about 40 to 50% for about 10 seconds while it reads the data on the 15 drives and computes all the parity. It also starts to write the parity to the parity drive, but from the looks of it, it takes longer to write the parity, so the reads and parity calculations are interrupted while the write process to the parity drive completes. This takes a few more seconds, during which the process load goes down to pretty much 0%. So the average load is actually less (probably more around the <30% mark).

I was considering replacing the parity drives with Black drives, but honestly this type of load only happens when you recreate the entire parity data from scratch (hence the 12 hours it took to complete this process). Normally the update process doesn't take that long to cover the changes I made (when I add a few movies). And since the update process runs late at night, it doesn't matter to me how long it really takes :)
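Putting rough numbers on that duty cycle (the length of the write pause is a guess on my part):

```python
# Duty-cycle estimate for the average load -- timings are eyeballed/assumed.
burst_load = 0.45   # ~40-50% across the 8 threads while reading and computing
burst_secs = 10     # roughly how long each read/compute burst lasts
idle_load  = 0.02   # near 0% while the parity write catches up
idle_secs  = 5      # "a few more seconds" -- assumed value

cycle_secs = burst_secs + idle_secs
average = (burst_load * burst_secs + idle_load * idle_secs) / cycle_secs
print(f"Average CPU load over a full cycle: ~{average:.0%}")  # ~31%
```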
 
From your description, it looks like you are getting great throughput on the input, which is good. It is a better scenario to have the input wait on the output than the reverse.
If the output were doing the waiting, that would mean system I/O saturation, which can be resolved by replacing or adding disk controllers.

I wonder if RAID-0'ing the parity drives might be worthwhile.
I would not recommend introducing the risk of a RAID-0 just to shave a few hours off initialization.
However, if you were to sync large data changes daily, that might be something to explore as you can always rebuild if the RAID-0 fails.
 
I've got some ideas on how to improve FR write performance to parity disks based on my own benches w/ FR 2.0 + 72 disks. I'll continue the discussion at FlexRAID's forum so as not to co-opt this thread.

@Treadstone good to see you on board beta-testing FlexRAID 2.
 
@spectrumbx: I think I'm ok with the throughput right now. Here is a recap on how I have the drives connected:

48 2TB drives connected via 2 HP SAS expanders to an Areca 1680i (in JBOD mode).
2 2TB drives connected directly to the MB via SATA connectors.

All drives (except for the OS SSD drive) are identical 2TB green (WD20EADS) drives.

Out of the 15 drives that were used during the parity calculation (input), 13 of them are on one HP SAS expander, basically connected via 4 lanes to the HBA, and the remaining 2 drives are on the other HP SAS expander, connected to the second 4-lane port of the HBA. The CPU accesses the parity drive (output) via the ICH10R on the MB.
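As a sanity check on that x4 link (assuming 3Gb/s SAS lanes at roughly 300MB/s usable each, and a ballpark sequential-read figure for the green drives):

```python
# Rough x4 SAS link check -- lane and drive speeds are assumed ballpark figures.
lanes          = 4
lane_mb_s      = 300   # usable per 3Gb/s SAS lane (assumed)
drives_on_link = 13
drive_mb_s     = 90    # assumed sequential read for a WD20EADS

link_capacity = lanes * lane_mb_s            # ~1200 MB/s
worst_case    = drives_on_link * drive_mb_s  # ~1170 MB/s if all 13 stream at once

print(f"Link capacity: ~{link_capacity} MB/s, worst-case demand: ~{worst_case} MB/s")
# Close, but still enough headroom for the input side to outrun the single parity drive.
```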

Currently I am running the T1+ RAID engine. I am in the process of moving my music collection (~500GB) off the 2nd drive that is directly connected to the MB, so that I can use that drive as a parity drive as well and then switch to the T2+ RAID engine.

@odditory: Thanks and I have to say I like the GUI a LOT better :)
 
treadstone, replying to your PM about 2nd gen Xeons.

For anyone else that is interested, STH tomorrow morning.
 
Unless I've got some issue here, are the FlexRAID forums down? I can't even find a download link to FlexRAID 2.0. I do admit I have not been looking overly hard.

Nicely built server though. Your skills to build such a beast are admirable!
 
I am afraid of 2 things: the cost of the build, and your electric bill.

Would you mind putting out an approximate price for the build though?
 

Regarding the forums: I'm not sure if they're currently down, but I do think that Brahim needs to look into this, since his site and/or forum does appear to have been unavailable quite a few times lately :(

And thanks for the compliment. An update to the build log should hopefully come soon if I find the time for it. I've made some changes and I am currently building another server and added more storage...


As for the cost and the electric bill: I've already answered that in the build log (look at post 39) ;)
 
Treadstone, first and foremost, props on your rig; extremely interesting writeup and you certainly know your stuff!

A couple of questions: when you were modifying the system, i.e. drilling holes and changing motherboard components, did it not scare you that it could potentially degrade the quality of the board, and that Intel has already designed the board in its best configuration without the need for modifications?

Also, I understand you must love your Blu-rays, but do you not think it could've been done cheaper in terms of electricity bills, build and maintenance? I'm a firm believer in multiple arrays for storage at this level, not to mention security and disaster recovery.

What would happen if you were burgled, or the house got blown away?
Or if the RAID controller crashed the system?

I know these are rare and unlikely scenarios, but I would put this forward: wouldn't a bigger array in a consolidated state be far more prone to failure than multiple arrays sitting mostly idle?
 
Ah, okay, my bad, I didn't know that.

However, there are very real risk factors to this setup; although it's mega impressive, I'm not sure it's genius in terms of logistics.
 

Great. A one-day member jumps in on a thread about a GREAT build, doesn't even bother to read enough of the thread to "know that" about what he did, and then starts bashing the "risk factors" of this setup. You didn't even read it well enough to know he wasn't using RAID (which pretty much means you didn't read any of it). You haven't read the forums long enough to recognize Treadstone as a very experienced system builder. You have no standing to cast judgement. The least you could have done is actually read the thread well enough to understand what (and who) you were critiquing before you started criticizing it. Get a life.

Treadstone: one more time - GREAT BUILD and thank you for sharing as it developed.
 
Guys, internet wars aren't my bag; any build like this will raise healthy debate. I have already stated that it is an excellent build, and since those previous posts I have read the entire thread and have a better understanding of the setup. Is it obsolete? Yes. Technically inspirational? Yes. Just because I have only been on here for a couple of hours does not mean I lack experience.

Treadstone has far superior knowledge to mine, no doubt, and it wasn't meant to be a direct insult to the setup. I would find it hard to sleep at night with this setup, and the potential maintenance costs would make me cry, but nonetheless it's creative and the most notable build I've seen to date.
 
Just trying to help out, not argue, but people generally like it when you back up your statements with a healthy bit of fact around here. For example, you keep mentioning how "dangerous" this setup is and how high the maintenance costs will be, but offer no reasoning as to why either should be an issue.
 

Note taken Jesse. Thanks.

I will emphasise again that this is an incredible setup, certainly the most notable I've seen.

Maybe my comments are predictable, but for someone like myself who is extremely aware of data backup, not just storage, my data is priceless. To elaborate: if, say, a drive went down and the data was irretrievable, even if I did have the originals I wouldn't have the time to start backing up my discs again; it's not the most efficient way of housing all my media. Worst-case scenario, the original discs and the storage get taken out by some disaster, leaving no originals to put on the replaced drives. Going to the store to buy them again through insurance and then burning them again is possible but not practical. That in turn would cost money, and money + time = nobody's ideal solution.

If I had the money to spend on a solution like this I would have to consider some type of backup, it's in my nature.

This of course is just my way, and I'm not knocking treadstone in any way at all; whatever the solution, there will be countless opinions on what the best one is. It's a tricky one to pinpoint, and they all have their pros and cons. The guy is technically up there with the minority (if you ever read this, big virtual high five), but for me part of the fun is also finding solutions to keep the data safe on top of a storage solution. I think these are fair comments.
 
First guys, please chill... no need to get angry/upset with each other.

I appreciate Odditory and PigLover answering some of mwhq's questions.

I do have to agree with PigLover though, most if not all answers to your questions have either been previously asked and answered or I have already posted about them throughout this thread.

I do not run a RAID array because of certain issues I found with running multiple RAID arrays and the potential for losing data when drives drop out of an array. Hence my choice of running each drive as a separate entity and using FlexRAID to combine them into one virtual drive, as well as using FlexRAID to provide me with some level of data redundancy. So far (knock on wood) I have never had any issues with this setup.

Power consumption: this is something I am still working on. I am trying to reduce the overall power draw and I am in the middle of replacing the motherboard, CPU, memory, etc... Pictures (and maybe another writeup) to follow sometime soon.

I also assembled a second server for other tasks that I used as a test bed for optimizing settings that I hopefully can apply to this monster to bring the power consumption down a bit.

In terms of theft, I doubt that any 'normal' burglar would be able to steal this thing. It is HEAVY and I do mean HEAVY. Fully loaded, I can't budge this server on my own. To move it around I have to first remove all the drives and even then it takes two strong guys to move it!

If the house were to get blown away, I don't think it would matter what kind of equipment you have, it will be gone either way :)

Maintenance: there's not really anything special I need to do on the server, so this is not an issue.

I have no issues modifying motherboards, have done this many times over the years...

Hope this answers some of your questions.
 
Treadstone, cheers for your response. As discussed, I now have a much more thorough understanding of your setup and was in no way insulting you or your knowledge; I would take no pleasure in doing that.

It has answered some of my questions; I was just outlining some very low-probability scenarios. Day in, day out I have DR to consider, so it has become almost second nature to think through every element of a system.

A burglar, unless they have some serious muscles wouldn't be able to budge the server.

Apologies if I have missed this in the thread and you have your Blu-rays and DVDs stored elsewhere. Just out of pure curiosity though, what would you do in the event of a fire/natural disaster that could compromise your house (touch wood this would never happen)?

I'm glad you're working on a solution for the power consumption and, even better, making progress.
 
@mwhq: I built this server primarily to have an easier way of accessing my Blu-ray collection (675+ titles last time I checked). As per my previous post, I use FlexRAID to at least give me some data redundancy. My current setup allows for two drives to fail before I am no longer able to recover my data. The advantage of FlexRAID over a normal RAID setup is that, because each drive is handled as a separate disk, even if I were to lose 3 drives I would only be unable to recover the data on those 3 drives; all the data on the remaining drives would still be intact and accessible. In other words, I would only lose the data on those 3 drives and not the entire array, as would be the case in a normal RAID setup!

The other reason I went with FlexRAID is the fact that only the drive I am accessing to play back a movie or store a new movie needs to be active; all the remaining drives can stay in standby mode, reducing the power consumption! For example, my original setup (hardware-based RAID arrays) required all 50 drives to be active: 50 x 6W (per drive) = 300W! Now with FlexRAID, all but the active drive are in standby: 49 x 0.8W = 39.2W, plus 6W for the active drive = 45.2W!
That's nearly a 255W saving! Not to mention less heat, which in turn means I can run the fans at a lower rpm, etc...
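For anyone who wants to redo the math with their own drive count or wattages, here is the same calculation generalized (the per-drive figures are the rough numbers quoted above, not measurements):

```python
# Standby vs. spinning power arithmetic -- per-drive wattages are rough figures.
total_drives  = 50
active_drives = 1
active_watts  = 6.0   # ~6W for a spinning 2TB green drive
standby_watts = 0.8   # ~0.8W in standby

all_spinning = total_drives * active_watts
mostly_idle  = (total_drives - active_drives) * standby_watts + active_drives * active_watts

print(f"All drives spinning: {all_spinning:.1f} W")                # 300.0 W
print(f"Mostly in standby  : {mostly_idle:.1f} W")                 # 45.2 W
print(f"Savings            : {all_spinning - mostly_idle:.1f} W")  # 254.8 W
```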
 
@mwhq: I think if the house were to burn down, I would have bigger issues than to worry about my movie collection :)
I mean, sure I put a LOT of time and effort into my collection and I would hate to see it destroyed one way or another, but if something major were to happen to my house, there would be so many things that take precedence over this server and/or my Blu-ray, DVD and music collections...
 
Understood. FlexRAID is definitely a solution I would look into now after reading your posts, so I have taken something away from this and will most certainly be re-assessing how I store my media in the future as it inevitably grows larger.

*Edit* Just checked your post in response. If that doesn't worry you, then fair play, you are far more laid back than me. It's that, or I sadly value my music and movie collection more than my house ;)
 
Really impressive! Bravo! I'm on my way to building something equivalent...
A question: does the ARC-1680i work with the HP expander at firmware level 2.06?
I've no way to upgrade mine to level 2.08.
 
Fantastic mobo troubleshooting and engineering work there. What would really piss me off would be if WD/Seagate/Samsung released a 10TB drive in a year or two... for a hundred bucks, lol.
 
LOL, yeah that would bite, but then I guess I would just replace all the drives and have a 500TB server :p

However, I doubt you will see 10TB drives anytime soon. Seeing as storage capacity has only increased from 1TB drives in early 2007 to the current 3TB drives over 4 years, it's still going to take a little while until we see capacities as large as 10TB in a single drive...

Mind you though, it took the storage manufacturing industry 5 years to reach 1GB back in 1991, then another 14 years to reach 500GB in 2005, but only two more years to reach 1TB in 2007...
 
Also, I've heard Brahim is back and has started developing again. Just wanted to let you know since you did want to test version 2.0 :))
 
@Jeroen1000: Oh I've been using FlexRAID v2.0 pretty much from the time he released it. It's a lot easier to use now than it has been in the past. I'm just hoping that he will have a specific version for WHS 2011 sometime soon :)
 

My guess is a combo between one of the new DE's (drivebender, stablebit, etc) and FlexRAID will be the answer. Will have to see though.
 