Build-Log: 100TB Home Media Server

@Trepidati0n: Don't really need any type of DE since FlexRAID does everything I need. It provides protection via its different RAID engines and a single volume via the storage pool option.

What I was referring to earlier with regard to WHS 2011 is that I hope Brahim will release a dedicated version that can be controlled via the dashboard and also offer the storage pool as an NTFS-type volume, so that you can use WHS folders like Movies, Music or Recorded TV on the storage pool.

What I am after is the ability to combine drives into one large 'storage pool' and then use it in WHS 2011 and share it across my network as, say, the Movies folder. Basically, use the WHS 2011 default folder structure but use a FlexRAID-created storage pool as the underlying storage mechanism. In addition, FlexRAID provides data protection via its snapshot RAID feature. At the moment, I can't take the volume created by the storage pool and assign it to one of the WHS default folders (e.g. Movies). It would also be a cleaner and neater implementation if FlexRAID were controlled via the WHS 2011 dashboard...
 
I think if FlexRAID had such a feature it would bring it into the prime time, but there has been little evidence suggesting that it will ever happen. At first he said that it wasn't really possible, but in a recent response to you (treadstone) he said something would be coming within a month. Until then I'm still using WHS V1 with DE. If and when he comes out with a V2 plugin I'll make the switch then and there. And I'm sure I won't be the only one.
 
Why hasn't anyone suggested the new FreeNAS 8.0 with RAID-Z3? It uses ZFS and has a valid upgrade path.
 

Because it isn't in line with what the OP is going for. Server 2008 or Vail provide a lot of features that FreeNAS can't touch (and they don't have to be file related).
 
ZFS is nice and it most certainly has its place, but if you read through this build log, you'll find that one of my issues was power consumption. If I switched to a ZFS-type setup, I would have to have all the drives running at the same time (e.g. accessing a single movie would require all 50 drives to be active), which would bring me back to my original setup in regards to power consumption, and that's just not going to happen...

In that sense, FlexRAID is the perfect solution.
 

Indeed, it's a perfect solution if everything works,
but I have my doubts since so many people are fighting with FlexRAID bugs,
especially with folder names.
If at some point FlexRAID is a nearly bug-free solution I might give it a try again.
My first try wasn't successful; I had some errors after creating an array, and I really don't know how to solve them, the logs are extremely long and don't tell me much.
When I start my new build I think I'll still go for some normal RAID arrays,
a RAID 0 or 1 for my downloads and 2 x RAID 5 for the shares or so.
But for that I need to buy some disks again, probably more WD20EARS, or maybe go for 2.5 or 3TB disks? I don't know how disks larger than 2TB behave, since MBR can't handle them anymore (MBR tops out at 2^32 x 512-byte sectors, about 2.2TB, so bigger disks need GPT).
My controller is new and SATA 600 (Highpoint 2740), so maybe bigger disks won't be a problem.
 
@Parcifal: I have no issues with folder generation, etc. with FlexRAID since I use it in a somewhat unique way. I only use FlexRAID to combine the drives (via the storage pool function) and share the result as a READ ONLY volume across the network; that way I can be sure that nobody and nothing can mess with my movies or folder structure, deliberately or by accident.

To store new movies on the server, I have another network share which is password protected and is the root of the empty folder structure for all my drive mount points. When I want to transfer a new movie to the server, I log into this 'special' share and pick which drive I want to store the new movie on. That way I have direct control over which movie gets stored on which drive. The storage pool picks up the additional folder as soon as I add it via the other share, and everyone else has access to the new movie.

Works flawlessly for me and it's exactly what I wanted. Originally I thought I wanted something that would do the file distribution for me on its own, but I have found that I like it way better the way I use it now.

I bought another twelve 2TB WD20EARS drives for my other server since, at $60 each, they were the best price point at the moment.

I can't comment on the Highpoint controller since I don't own one...
 
Well done, that's a cool and safe setup for sure.
Here I don't normally do networking; I play movies and music directly and output over HDMI to my 7.1 receiver, and that works very well. A single-disk (JBOD) setup is fast enough for that, but for downloading, extracting and copying, some more speed would be great.
The WD EARS drives do fine in JBOD. I set them all up as single-disk arrays and they work slightly faster than on the motherboard ports.
One thing I read is that there are different versions of the WD20EARS, made in Malaysia and Thailand, and there is evidence that the Malaysian-made ones are the better drives when comparing units from the same production date.
I didn't know that before, but I will try to choose my drives carefully if I'm going to build a RAID array.
The latest types have 667GB platters, the early types 500GB platters. I think it's very important not to mix them if you want the best performance.
The RE4 drives might be the best choice, but the price is much, much higher and I can't afford that for so many disks. I guess they are basically the same drives with different firmware.
 
Where are you finding the WD20EARS drives @ $60/ea? Or is that an occupational perk price?
 
First off - I've been following this thread for a long time and the work you've done (treadstone) is amazing!

I think I speak for a lot of us here: your project has inspired many of us to pursue our own extreme media servers =)


With that being said, I'm not sure if anyone here has seen this or is interested, but here is a very cool take on the extreme server.

Enjoy!
http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/
 
@FelixJongleur: I have seen and been intrigued by the Backblaze pods as well, but they aren't designed to be used as a single storage server without backup.

There are numerous problems with running the Backblaze pod (V1 or V2) in a production environment as a standalone server. The biggest one is that there is no inherent redundancy within the server itself: no backup boot drive, no RAID solution whatsoever, and no backup power supply. Granted it is quite impressive that they were able to get so much raw storage for the price, but I view their pods as huge disks: no failsafes if used alone, but if you cluster a bunch of them (as Backblaze does) then you have a massive redundant storage solution for much less than an OEM would charge for it.
 
@blackhawk777: I had looked at the Backblaze pods before I decided to go with the Chenbro RM-91250 chassis. The issue I found with the Backblaze pod design was that it relies on SATA port multipliers.

I guess I should post an update to this thread since I made some hardware changes to the server... If I find the time, I'll do a write up and post it on the first page...
 
@treadstone: Would you describe your FlexRAID setup in more detail? I just installed WHS 2011 onto an HP MediaSmart EX490 and want to set up something similar to what you described with FlexRAID. Any non-intuitive steps or gotchas?
 
@cycleback: Not sure what exactly you are looking for, but here is how I setup my server:

Out of the 50 x 2TB drives, I use 48 for storage and 2 for parity data.

I created the following folder structure on the OS drive (80GB SSD) with sub-folders for each individual drive:

C:\DRIVES\HDD#01, C:\DRIVES\HDD#02, C:\DRIVES\HDD#03, ... C:\DRIVES\HDD#49, C:\DRIVES\HDD#50

Each physical drive is then mounted into its corresponding empty sub-folder.
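(Not from my original write-up, just a sketch for anyone replicating the layout: the empty mount-point folders can be created in one go, and the actual volume-to-folder mounting is done in Disk Management, or scriptably with Windows' mountvol tool once you know each volume's \\?\Volume{GUID}\ name. The folder names below follow my setup; everything else is assumed.)

Code:
# Sketch only: pre-create the empty NTFS folders the drives get mounted into.
# The volumes themselves are mounted via Disk Management, or with
# "mountvol C:\DRIVES\HDD#01 \\?\Volume{GUID}\" (run "mountvol" alone to list GUIDs).
import os

ROOT = r"C:\DRIVES"      # lives on the 80GB SSD OS drive
NUM_DRIVES = 50          # 48 data (DRU) + 2 parity (PPU)

for i in range(1, NUM_DRIVES + 1):
    path = os.path.join(ROOT, f"HDD#{i:02d}")   # C:\DRIVES\HDD#01 ... HDD#50
    os.makedirs(path, exist_ok=True)
    print("created", path)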

In FlexRAID, I created a single RAID setup with the Tx engine (the only one that would allow me to assign 48 drives to a single RAID) and assigned HDD#01 to DRU 01, HDD#02 to DRU 02, HDD#03 to DRU 03, ... up to HDD#48 to DRU 48. HDD#49 is assigned to PPU 01 and HDD#50 is assigned to PPU 02 as the two parity drives.

HDD#01 through HDD#48 are connected via two HP SAS Expanders to a single AOC-SAS2LP-MV8 and HDD#49 and HDD#50 are connected directly to the SuperMicro X9SCM-F motherboard.

Finally, the 48 x 2TB drives are pooled together to form a single virtual 96TB drive via FlexRAID's Storage Pool feature. I share this virtual drive across my network with 'Read Only' permissions so that nobody can accidentally mess up (e.g. erase) my movies :)
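(The pooling itself is FlexRAID's Storage Pool feature, so I can't show its internals. Conceptually, though, it just presents the union of the per-drive folder trees as one namespace; a toy Python sketch of that idea, nothing more:)

Code:
# Toy illustration only, NOT FlexRAID's implementation: build a merged view of
# the top-level folders found on each mounted drive under C:\DRIVES.
import os
from collections import defaultdict

ROOT = r"C:\DRIVES"

def pooled_listing(root=ROOT):
    """Map each top-level entry (e.g. a movie folder) to the drive(s) holding it."""
    pool = defaultdict(list)
    for drive in sorted(os.listdir(root)):        # HDD#01, HDD#02, ...
        drive_path = os.path.join(root, drive)
        if os.path.isdir(drive_path):
            for entry in os.listdir(drive_path):
                pool[entry].append(drive)
    return pool

for name, drives in sorted(pooled_listing().items()):
    print(f"{name:40s} -> {', '.join(drives)}")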

I use a dedicated computer I built specifically for ripping Blu-ray movies to the server. I access the server via the C:\DRIVES folder, which I share with full access rights but only when logged in as the administrator. This allows me to store the movie on a specific drive. After I'm done ripping the movie, I run FlexRAID to update the parity data.

The advantage of this setup is that only the drive I am streaming a movie from or saving a new movie to has to be active; all other drives are usually in standby. This reduces the power consumption and heat generation considerably compared to the hardware RAID setup I previously had. The other advantage is that with my current setup of two parity drives, more than two drives would have to go bad before I start losing data. I can even unplug any drive, connect it directly to any other computer, and read the data from it since all drives use plain NTFS. Basically, if a single drive fails, I can recover the data via the FlexRAID parity data; if two drives fail at the same time, I can still recover ALL my data via the FlexRAID parity data; if three drives fail at the same time, I only lose the data on those three drives, but the remaining drives are still accessible. Compared to a hardware RAID 6 setup, if I lost three drives at the same time, I would have lost EVERYTHING! The other drawback of hardware RAID (or even a ZFS setup) is that all drives have to be active at the same time to stream a movie, and I can't unplug a single drive and read its data on another computer!
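(If it isn't obvious why a dead drive can be rebuilt from the survivors plus parity: it's the same XOR principle classic parity RAID uses. FlexRAID's Tx engine with two PPUs uses its own, more involved scheme; the toy sketch below only shows the single-parity idea.)

Code:
# Toy single-parity demo, purely illustrative -- not FlexRAID's Tx engine.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Pretend each data drive holds one fixed-size block.
data = [b"block-on-HDD#01".ljust(16), b"block-on-HDD#02".ljust(16), b"block-on-HDD#03".ljust(16)]
parity = xor_blocks(data)                  # what a single PPU drive would store

# Lose drive #02: XOR of the surviving data blocks and the parity restores it.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("recovered:", rebuilt)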

Hope this helps...
 
^^ WOW. :) I might have to look into this as a setup. But God knows it would suck to lose your movies. hahaha. My setup only contains 16 HDDs. hahaha, I am still in awe of 50 HDDs...
 
@midnkight: Granted it would suck if I lost the movies on the server, but that would only happen if more than two drives failed at the same time, and even then I would only lose what is on the bad drives, not the remaining drives, since I can still access all of those movies.

If that should happen, I just replace the bad drives and then have to re-rip the Blu-ray movies I lost on the bad drives. I have all the originals, so it's not too bad... It would just be time consuming.

I'm at just over 1000 Blu-ray titles right now (titles not discs as some titles have a LOT of discs)...
 

Are you running full rips or movies only? I'm at roughly 800 too, but movies only. >_<
 
treadstone,

FlexRAID looks pretty interesting (I need to go back & read some more)

How come more peeps aren't talking about this technology?
 
Wow.. This is nuts. I had no idea you could do this. That is actually the absolute ideal way to have a home media setup IMO.. Will have to look into this..
 
Treadstone - with the way you are using FlexRAID, can you add drives? Or do you have to start with the 48 (or however many) from the beginning and only replace failed drives?
 
You can add and remove drives anytime you like. In fact, you can take drives offline (e.g. use USB-attached drives) if you like.
 
Hey,

Have been reading the wiki and saw:
The Tx engine

The Tx engine is an optimized RAID∞™ engine.
As a RAID∞™ engine, it represents a breakthrough in data protection.
Its I/O sub-system is tuned specifically for each targeted OS.

It supports an infinite (1 to ∞) number of PPUs (parity) and an infinite number of DRUs.

The Tx engine can tolerate any number of UoR (DRU and/or PPU) failures depending on your chosen level of protection.
As a current limitation (hopefully a temporary one), adding a new DRU to the array (RAID expansion) is as expensive as re-creating the RAID even if the added DRU is empty.
One can work around this issue by setting up as many empty DRUs as one will need in the future during the initial RAID creation.

Is this no longer a factor? Or does it just mean that if you expand, you need to run the parity again, which means reading the entire pool?

Feel free to tell me to sod off and do some reading if you want - I just figure it's easier to ask someone who's got some real experience...
 
Why are we not "hearing" more about this technology?

What's the downside of FlexRAID compared to traditional RAID?
 
So now for the important question - have you tested recovering data by simulating a drive failure? I'm interested in how long recovery takes, how long it takes to rebuild parity (I'm guessing both should be almost the same) and whether the server is usable during this time.

Are you using any drive monitoring tools, e.g. S.M.A.R.T.? I think FlexRAID has some sort of plugin for this, but I'm not sure.
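(Side sketch, not FlexRAID's plugin: independent of whatever FlexRAID ships, smartmontools can be polled from a script. This assumes smartctl is installed and on the PATH; the device names are placeholders, "smartctl --scan" lists what it actually sees, and "-n standby" avoids spinning up sleeping drives.)

Code:
# Rough sketch, not FlexRAID's plugin: poll the SMART health self-assessment
# with smartmontools. "-n standby" skips drives that are spun down so the
# check doesn't wake the whole array.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]   # placeholders; use "smartctl --scan" to list devices

for dev in DRIVES:
    result = subprocess.run(["smartctl", "-n", "standby", "-H", dev],
                            capture_output=True, text=True)
    if "STANDBY" in result.stdout:
        print(f"{dev}: in standby, skipped")
    elif "PASSED" in result.stdout:
        print(f"{dev}: health PASSED")
    else:
        print(f"{dev}: check output:\n{result.stdout}")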
 
Impressive project, indeed..
How is the noise level on this thing? Were you able to bring it down to an acceptable level?
Honestly I haven't had the patience to read through all 20 pages, so I apologize if this was already discussed..
 

I can only imagine something like this would be in a rack in a basement/garage/closet somewhere out of sight/sound. I've got a 6 drive NAS in the top of a closet and I can hear it vrrrrrrrrrrrrrrr'ing in the bathroom a room away....I can't imagine 100TB worth of disk.
 
Sick, sick build. My wallet hurts just thinking about it, but I'd love to build something like this.
 

Well I only have a 4U case with 16 drives, but it is actually quieter than expected. The case is so heavy that you can't hear any vibrations from the drives directly. Of course I selected drives with low inherent vibrations (all are "green" Samsung drives with not more than 3 platters). Nearly every case with multiple stiffly installed drives will generate those annoying ultra-low (below 1 Hz) beat frequencies that you will hear two rooms away in buildings with improper impact sound insulation. I placed a ~5cm thick rubber foam mat below the case that absorbs the remaining vibrations.

The cooling is actually louder than the drives. Even though I use only 1500 rpm fans, they have to suck the air through the small gaps between the drives which is clearly audible.
 
Why on earth would someone NOT use ZFS to manage this much data? I hope you upgrade to ECC RAM also. I know this is a thread resurrection too.
 
Why on earth would you use ZFS for this build? It doesn't meet the design goals of this thread at all.

This is a media server, and he only wants the one drive that is streaming video to be powered on at a time, not ALL of the drives. Redundancy is provided just fine, and so is corruption detection, just not done in real time like ZFS does it.
 

Because striping is unnecessary for a media server with this many drives and only introduces an extra layer of risk. That's one reason among several.

FlexRAID also hashes the files, detecting bit rot and correcting it as needed.
 

Gotcha. I missed that part, sorry.
 
Does FlexRAID detect bit rot in real time? If so, I did not know that! What about SnapRAID?

Doubtful. Well, neither does ZFS really: ZFS won't detect a checksum error until it reads the corrupted block during a normal read op or during a scrub. I don't see why FlexRAID would be any different.
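(Whatever tool does it, a scrub-style check boils down to the same thing: hash every file once, re-hash later, and flag anything whose contents changed without being rewritten. A generic sketch of that idea follows; it is not ZFS's or FlexRAID's code, and the paths and manifest name are made up.)

Code:
# Generic scrub-style bit-rot check, not ZFS or FlexRAID code: keep a checksum
# manifest and flag files whose hash changed even though their mtime did not.
import hashlib, json, os

ROOT = r"C:\DRIVES\HDD#01"     # scan one drive folder at a time (hypothetical path)
MANIFEST = "checksums.json"    # hypothetical manifest file

def file_state(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return {"sha256": h.hexdigest(), "mtime": os.path.getmtime(path)}

current = {}
for dirpath, _, names in os.walk(ROOT):
    for name in names:
        full = os.path.join(dirpath, name)
        current[full] = file_state(full)

if os.path.exists(MANIFEST):
    with open(MANIFEST) as f:
        previous = json.load(f)
    for path, state in current.items():
        old = previous.get(path)
        if old and old["sha256"] != state["sha256"] and old["mtime"] == state["mtime"]:
            print("possible bit rot:", path)   # contents changed, file was never rewritten

with open(MANIFEST, "w") as f:
    json.dump(current, f, indent=2)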
 
No one ever claimed it did it in real time; correcting bit rot on a media server in real time is generally not needed (unless your business is being a media provider).
 