Recommendations for Media Server

XvMMvX

[H]ard|Gawd
Joined
Jan 13, 2005
Messages
1,668
Hello All,

I am in the midst of going all digital with my DVD collection (and a small Blu-ray collection). With the time it takes to rip about 500 movies, I would shit frisbees if I had to do it all again due to a hard drive failure.

What recommendations do you guys have for a small, compact, expandable (storage-wise) media server? I would like some redundancy, though I know you can never have 100%; I'd just like to get the chance of complete loss as low as possible.

I'd like it to be simple and to stream well to the media center PC/WDTV Live boxes.

Thanks in advance!
 
My recommendation is to use individual disks to limit the loss. Don't make one large RAID 0, LVM volume, or spanning array. If you want some redundancy, you can use something like FlexRAID.
 
A DVD is 8GB max. 500 movies * 8GB = 4TB of space.

Method 1: Put the titles on two separate drives. Result: if a drive fails, you lose half.
Method 2: Build a RAID 5 system. Result: if a drive fails, you lose nothing, but you may lose everything if a URE occurs during the rebuild.
Method 3: Buy double the drives you need and duplicate the data. Result: low chance of losing data, but unknown bit-rot potential.
Method 4: Buy double the drives and run ZFS. Result: low chance of losing data and nearly zero chance of bit rot.

RAID 6 is in there somewhere. However, having a true backup is the only way to not "shit frisbees".
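For what it's worth, the drive counts behind those four methods can be sketched out. The 8GB-per-DVD and 2TB-per-drive figures below are assumptions for illustration, and the helper name is mine:

```python
import math

# Rough capacity math for the four layouts above. Assumptions (mine):
# 500 DVDs at 8 GB each, stored on 2 TB drives.
DATA_TB = 500 * 8 / 1000   # = 4.0 TB of rips
DRIVE_TB = 2

def drives_needed(usable_tb, per_drive=DRIVE_TB):
    """Smallest whole number of drives giving at least usable_tb of space."""
    return math.ceil(usable_tb / per_drive)

independent = drives_needed(DATA_TB)      # Method 1: 2 drives, a failure loses half
raid5 = drives_needed(DATA_TB) + 1        # Method 2: one extra parity drive -> 3
mirrored = drives_needed(DATA_TB) * 2     # Methods 3/4: full duplicate -> 4

print(independent, raid5, mirrored)   # 2 3 4
```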

My $0.02 is to build a decent ZFS box and then just copy the data to external HDDs every once in a while. This should work, since movie data is very static. For data that is volatile, I have two SyncBack profiles. One profile only copies data left to right (no deletes) nightly. The other is a manual profile that deletes orphaned data (e.g. deleted TV shows) from the right side, which I run every couple of weeks. Thus, if I had a massive failure, I would still have all my data.
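The two-profile scheme described above can be sketched in a few lines (SyncBack itself is a GUI tool; this is just a minimal model of the same idea, with function names of my own invention):

```python
import os, shutil, tempfile

def echo_no_delete(src, dst):
    """Nightly profile: copy new/changed files left to right, never delete."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            s, d = os.path.join(root, name), os.path.join(target, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)

def prune_orphans(src, dst):
    """Manual profile: delete right-side files that no longer exist on the left."""
    for root, _dirs, files in os.walk(dst):
        rel = os.path.relpath(root, dst)
        for name in files:
            if not os.path.exists(os.path.join(src, rel, name)):
                os.remove(os.path.join(root, name))

# Tiny demo on throwaway directories:
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(src, "movie.mkv"), "w") as f:
    f.write("bits")
echo_no_delete(src, dst)                    # copied to the backup side
os.remove(os.path.join(src, "movie.mkv"))
echo_no_delete(src, dst)                    # nightly run never deletes
print(os.path.exists(os.path.join(dst, "movie.mkv")))   # True
prune_orphans(src, dst)                     # manual run removes the orphan
print(os.path.exists(os.path.join(dst, "movie.mkv")))   # False
```

The point of the split is that an accidental deletion on the source side survives on the backup until you deliberately run the prune.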
 
I currently run WHS 2011 with a third-party Drive Extender replacement (Drive Bender). It's affordable, easy to use, and reliable.

You might want to look into a WHS 2011 + FlexRAID setup as well. I am currently planning to upgrade to either WHS 2011 with FlexRAID or a napp-it/ZFS setup, as both offer better data-protection options. FlexRAID looks to be significantly easier to use, while ZFS is better at protecting data. Napp-it/ZFS also has higher hardware requirements and is outside my comfort zone (compared to most users on this board, I am a very casual user of technology).

Good luck.
 
I started off with the same requirements. My first setup used 2 Areca 1202 cards and 2 Rosewill/Sans Digital 8-bay eSATA enclosures: one combo for primary and one for backup. This worked great because I only needed a PC with 2 free x1 card slots, which are pretty easy to find at low cost. I have since outgrown it, but it gave me the ability to run two RAID 6 setups with 11TB usable each, safely, for over a year with good performance. It was a good entry-level setup that did the job.
 
My recommendation is to use individual disks to limit the loss. Don't make one large RAID 0, LVM volume, or spanning array. If you want some redundancy, you can use something like FlexRAID.

Why is that (just curious)? I was leaning towards a RAID5 solution (Linux Software RAID), or something like FlexRAID that keeps the individual drives in NTFS so that you can get data off even if the pool collapses.

At least that way, you have SOME protection if a drive fails. If you just have a bunch of individual drives, you have no protection against drive failure whatsoever.

I'm curious because my movies are currently on 2 external USB HDD's, and I am in the process of building a server box for media, and maybe a couple VM's.
 
At least that way, you have SOME protection if a drive fails. If you just have a bunch of individual drives, you have no protection against drive failure whatsoever.

A lot of users like to make a single array, and to do this a lot of them (who do not know Linux) cheap out and use low-quality hardware RAID 5, RAID 0, or even some type of spanning array (with no redundancy), then complain when they lose everything.

For individual disks, yes, drives can die and you lose whatever is on that disk, but your loss is limited to that one disk, and you can usually predict drive death by monitoring the individual SMART attributes. At least, I have been very successful at that here at work.


Even though it would be easy for me to add RAID 6 or ZFS, I actually do this (use individual drives, now 5 or 6 2TB green models) for my Linux-based HTPC (MythTV), and have done so from the start, back in 2004. A second advantage of individual disks is that I spin down my drives after ~2 hours idle. Not all RAID arrays will support that.
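As a rough sketch of that SMART-based monitoring idea (the attribute names match common `smartctl -A` output, but the zero-tolerance thresholds are my assumption, not a universal rule, and nothing here actually invokes smartctl):

```python
# Pre-failure SMART attributes worth watching on a media-server disk.
# Thresholds of 0 are an assumption: any growth in these counters is a
# reason to start planning the disk's replacement.
WARNING_SIGNS = {
    "Reallocated_Sector_Ct": 0,    # sectors already remapped
    "Current_Pending_Sector": 0,   # unreadable sectors awaiting remap
    "Offline_Uncorrectable": 0,
}

def drive_looks_sick(attributes):
    """Return the SMART attributes whose raw value exceeds its limit."""
    return [name for name, limit in WARNING_SIGNS.items()
            if attributes.get(name, 0) > limit]

# Illustrative parsed values (would come from smartctl output in practice):
healthy = {"Reallocated_Sector_Ct": 0, "Current_Pending_Sector": 0}
failing = {"Reallocated_Sector_Ct": 12, "Current_Pending_Sector": 3}
print(drive_looks_sick(healthy))   # []
print(drive_looks_sick(failing))   # ['Reallocated_Sector_Ct', 'Current_Pending_Sector']
```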
 
ESXi + OpenIndiana + napp-it in an "all-in-one" setup is my favorite. It can do RAID-Z2 to survive 2 disk failures, or RAID-Z3 for 3.
 
I'd suggest single disks, backed up to identical single disks in another machine (better if off-site, but these are just DVDs).

Machine 1 - 2TB - 2TB - 2TB - 2TB
Machine 2 - 2TB - 2TB - 2TB - 2TB

The machines do not have to be robust or powerful.

Use simple programs to automate backups.

Keep it simple and to an OS you know.
 
I'd suggest single disks, backed up to identical single disks in another machine (better if off-site, but these are just DVDs).

Machine 1 - 2TB - 2TB - 2TB - 2TB
Machine 2 - 2TB - 2TB - 2TB - 2TB

The machines do not have to be robust or powerful.

Use simple programs to automate backups.

Keep it simple and to an OS you know.

Really interesting topic, guys. I'm still deciding on the right storage and backup setup for my files.

Reading between the lines, it seems RAID 5 and 6 are not the best idea at the moment. I also mostly agree with what you said about single-to-single storage, but I'm curious why so many people on here use ZFS and RAID arrays if they're not the best idea. You specified 16TB of space with only 8TB usable in your example, with only one disk failure tolerated before losing 2TB.

A thought:

1.8TB usable (2TB drive) is about 100 uncompressed 1:1 Blu-ray rips with lossless HD audio. At ~25 minutes per rip-and-remux per title, that's roughly 45 hours to re-rip the collection. Given that most people work a 9-5 job, and allowing time for family, food, and rest, I'd expect it to take quite some commitment to get the whole collection back up for the family to use (3 weeks in reality?). In fact, the family, or perhaps you yourself, may just sigh and forget the idea of redoing it for quite some time, if ever (as cheap monthly movie services become higher quality).
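Making that back-of-the-envelope math explicit (the ~18GB-per-title figure is an assumption that makes 100 titles fit in 1.8TB; the result lands a bit under the ~45 hours quoted):

```python
# Re-rip time estimate, using the poster's rough figures.
titles_per_drive = int(1.8e12 / 18e9)   # ~1.8 TB usable / ~18 GB per 1:1 rip
minutes_per_title = 25                  # rip + remux
total_hours = titles_per_drive * minutes_per_title / 60
print(titles_per_drive, round(total_hours, 1))   # 100 41.7

# At ~2 hours of free time per evening, that's roughly three weeks:
evenings = total_hours / 2
print(round(evenings))   # 21
```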

That's 3 weeks of your free time, during which, at some point, you're in the same boat as RAID 5 with only one spare.

So I do agree on like-for-like disks, but I think a 4-disk RAID-Z primary and a 4-disk RAID-Z backup might be the best idea?
 
Wow a lot of overwhelming information in this thread.

Whatever ZFS is, it sounds like the best... that, or just using duplicate drives.

Thanks for all the responses and I have a lot of research to do.
 
My $0.02 is to build a decent ZFS box and then just copy the data to external HDDs every once in a while. This should work, since movie data is very static. For data that is volatile, I have two SyncBack profiles. One profile only copies data left to right (no deletes) nightly. The other is a manual profile that deletes orphaned data (e.g. deleted TV shows) from the right side, which I run every couple of weeks. Thus, if I had a massive failure, I would still have all my data.

Which setup would you use? Striped (obviously not), mirrored, or RAID-Z?
 
RAID 6 is the best, I think, since two disks can fail. But you should use RAID 6-style redundancy with ZFS (RAID-Z2). Here is a very interesting read about data corruption:
http://en.wikipedia.org/wiki/ZFS#Data_Integrity

One advantage of ZFS is that it does not need any hardware RAID, so ZFS is cheaper. And safer. And it integrates extremely well into a Windows network if you use OpenSolaris. Napp-it simplifies things and gives you an easy GUI.
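As a toy illustration of what that end-to-end integrity checking buys you (ZFS does this per block with fletcher4 or sha256 internally; this sketch just models the detect-on-read idea):

```python
import hashlib

# Store a digest alongside each block and verify it on every read, so
# silent bit rot is detected instead of being served back as good data.

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def read_verified(block: bytes, stored_digest: str) -> bytes:
    if checksum(block) != stored_digest:
        raise IOError("checksum mismatch: bit rot or corruption detected")
    return block

block = b"movie data"
digest = checksum(block)
assert read_verified(block, digest) == block   # clean read passes

rotted = b"movie dAta"                         # one flipped byte
try:
    read_verified(rotted, digest)
except IOError as e:
    print(e)   # checksum mismatch: bit rot or corruption detected
```

With a mirror or RAID-Z on top, ZFS can go one step further and repair the bad copy from a good one; a plain checksum can only detect.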
 
2 x HP Microservers
8 x 2TB Drives
4 x 4GB Ram
Free Copy of Solaris or FreeNas
2 x cheap 300VA UPSs

Install and setup a ZFS snapshots + ZFS send procedure

Place one Microserver and UPS at each end of the house
Cable them up and rip away on your DVD's etc

Sit back and Enjoy your new setup

.
 
I use WHS 2011 and Stablebit's Drivepool to make an 8tb volume to store all my rips. I use a scheduled SyncToy task to echo that to an external 6tb drive every morning at 6am. Works like a charm.

Look up SyncToy, it's free and makes duplicating data wicked easy.
 
2 x HP Microservers
8 x 2TB Drives
4 x 4GB Ram
Free Copy of Solaris or FreeNas
2 x Cheapy 300VA UPS's

Install and setup a ZFS snapshots + ZFS send procedure

Place one Microserver and UPS at each end of the house
Cable them up and rip away on your DVD's etc

Sit back and Enjoy your new setup

.

Sounds like a good setup. Is that 8 x 2TB drives in each machine, or 8TB total per server? (I suspect the latter.)

Also, are you using plain ZFS, or RAID-Z2 for parity across your disks?

thanks
 
I use WHS 2011 and Stablebit's Drivepool to make an 8tb volume to store all my rips. I use a scheduled SyncToy task to echo that to an external 6tb drive every morning at 6am. Works like a charm.

Look up SyncToy, it's free and makes duplicating data wicked easy.


Out of curiosity, what is the external 6TB drive you use for your backups??
 
2 x HP Microservers
8 x 2TB Drives
4 x 4GB Ram
Free Copy of Solaris or FreeNas
2 x Cheapy 300VA UPS's

Install and setup a ZFS snapshots + ZFS send procedure

Place one Microserver and UPS at each end of the house
Cable them up and rip away on your DVD's etc

Sit back and Enjoy your new setup

.

One question: is it easy to let the backup machine sleep until a file change is made or a backup is committed each night? Your setup sounds like a fairly simple but robust design, but I'd want my backup server using less than 2W in sleep/idle until the master server decides it wants to back up, at which point a magic packet is sent to wake the unit. Is this even possible, or do you just keep it simple again and set a sleep/wake timer on the backup for the same time the master server's backup is scheduled?

thanks
 
Depends how you set things up, I guess.

Microservers draw 6W in standby.

With 4 x 2TB drives they average 60W, depending on drive type.

You could script the second one to power off at a certain time if you liked.

Then yes, send a magic packet to wake it back up... wait xyz minutes, then fire off the ZFS snapshot send... wait xyz minutes again, and power it back off.

All depends on your usage requirements etc.

You can put 6 x 3.5" HDDs inside one.
And they will take 3TB+ HDDs; just the boot OS drive is limited to 2TB, from what I remember (I have no 3TB drives myself). But if you're using Solaris etc., you only need a 120GB OS drive at most anyway.

.
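The wake / wait / send / power-off cycle could be scripted; here is a minimal sketch of just the magic-packet part (the MAC address is a placeholder, and the actual ZFS send would be a separate ssh/zfs step, not shown):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """A WOL magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    raw = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + raw * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN (UDP port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# Placeholder MAC for illustration only:
pkt = magic_packet("00:11:22:33:44:55")
print(len(pkt))   # 102 bytes: 6 sync bytes + 16 * 6-byte MAC
```

After `wake()`, the script would sleep for the agreed number of minutes, run the snapshot send, then shut the backup box down again.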
 
Depends how you setup things I guess

Microservers drag 6w in standby

With 4x2tb drives they average 60w depending on drive type.

You could script the second one to power off at a certain time if you liked

Then yes, send a magic packet to wake it back up.... Wait xyz minutes then fire off the ZFS snapshot send.... Wait xyz minutes again and power back off again.

All depends on your usage requirements etc.

You can put 6x 3.5" hdd's inside one.
And they will take 3tb + hdd's, just the boot OS drive is limited to 2tb from what I remember (no 3tb drives myself) but if using Solaris etc, you only need 120gb OS drive max anyways.

.


thanks for the reply

I already have a bare Fractal Design Define Mini ready for the main server, in which I will run 9 x 3TB drives in a yet-to-be-decided software RAID setup via FreeNAS. After much deliberation I am going with the method you use, as it's what I already do at the moment with local backups/images to a USB 3.0 caddy and to spare internal drives, but this time it will be on a much bigger scale, with streaming services set up and (hopefully) a more stable, automated platform.

One thing I have no idea about is the best way to isolate my backup server from the main network (apart from turning it off) so that it is as safe from attack as possible. With this requirement, I also have to consider the location of the backup server, as I will need it secure from theft and fire.

Any ideas?

BTW, you say your server is 60W with all the drives? Do you not have green drives that turn off or spin down to less than 1W?
 
Building two machines isn't needed; it would only be necessary if you were in an enterprise setting with crazy storage needs, or if you were somehow case-limited. Unless that second box is off-site, there's really no benefit to a two-box solution connected over the network. You are going to want to save your network bandwidth if you're driving more than 2 TVs, and the bigger the collection, the longer those backups will take; since you are at home, blackout times will be compressed.

My second issue with Microservers is the chassis space. You ARE going to eat up tons of space when it comes to media, and no matter the solution, RAID or something like it will take 1 to 2 drives out of the Microserver's 4. That means you'll have no more than 6TB in the case of RAID 5, and 4TB in the case of RAID 6. It sounds like a lot... but it isn't.

Build one box. Since this is a new build get a 3U chassis (most fit anywhere from 12 to 16 drives). They make some really nice wooden slabbed racks which are absolutely beautiful.

You can do either ZFS (RAID-Z2) or RAID 6 (ext3 or 4). Get an LSI SAS 9201-16i host bus adapter card and put in as many drives as you can afford; that card supports 16 drives. Use 8 drives for file serving and 8 drives for backup. Since you won't be using dedup, you won't need tons of memory for either; I would go with 8GB just to be safe in the case of ZFS, and you can pair it with a low-power processor (an i3 in this case). For plain software RAID you can go as low as a Celeron or Sempron, and memory requirements would be nothing more than 2GB.

In terms of software the sky's the limit. Go open source. I swear you shouldn't pay MS to do file serving. Sorry but that's my personal view. You can do OpenIndiana, FreeNAS, or any Linux distro.
 
thanks for the reply

I already have a bare fractal design define mini`s ready for the main server within which i will run 9 x 3TB drives in a yet to be decided software raid setup via freenas . After much deliberation I am going with the method you use as its what I already do at the moment with local backups / images to a usb 3.0 caddy and to spare internal drives, but this time it will be on a much bigger scale with streaming services setup and more stable automated platform (hopefully)

one thing which i have no idea about the best way to proceed is in isolating my backup server from the main network (apart from turning it off) so that it is as free from attack as possible, with this requirement i also have to consider the location for the backup server as I will need it safe and secure from theft or fire

any ideas?

btw* you say your server is 60w with all the drives? do you not have green drives that turn off or spin down to less than 1w?

Grab a cup or two of coffee, and have a read...

Plenty of good examples in here

http://forums.overclockers.com.au/showthread.php?t=958208

.
 
@kak77

Backup times?

Once you have done a snapshot, then send it over to the second box. The next backup is only going to send the difference...

So your backup times are very small for a server that's just holding DVD rips or whatever the op is using as media.

Unless he is adding/changing things wildly every day, which I doubt he is.

Even if he adds two DVD rips a day, that's not much to send across to the second box....

.
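Rough numbers for that claim, assuming ~8GB per DVD rip and ~100 MB/s of effective gigabit throughput (both figures are assumptions, not measurements):

```python
# Nightly incremental send: two fresh DVD rips over gigabit Ethernet.
daily_bytes = 2 * 8e9          # two ~8 GB rips added per day
throughput = 100e6             # ~100 MB/s effective on gigabit
seconds = daily_bytes / throughput
print(round(seconds / 60, 1))  # 2.7 -- under three minutes per nightly send
```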
 
@kak77

Backup times?

Once you have done a snapshot, then send it over to the second box. The next backup is only going to send the difference...

So your backup times are very small for a server that's just holding DVD rips or whatever the op is using as media.

Unless he is adding changing things wildly every day, which I doubt he is.

Even if he adds two DVD rips a day, that's not much to send across to the second box....

.
I'm aware of how snapshots work, but if the OP does Blu-ray, that's easily 10GB per movie. If recording should ever happen, you're looking at triple to quadruple that per day, and that's precisely the exception you listed above.

However, let's say the above will never happen. Why tell the OP to send backups over the network? What's the benefit? Both the backup and file servers are in the same location, so if the house burns to the ground, the fact that there's one box over here and another over there is meaningless. You're not carrying a server out of the middle of a blaze.

Having the second box sleep and wake... why? You could have put all the drives in the same chassis to begin with. You're only sending the second box into a sleep state because it's there in the design.

The final problem is that you're reducing the available network bandwidth at certain times of day when you don't have to at all. In the enterprise you do, because of the off-site backup requirement (although there is tape and Iron Mountain). If you are in the same building, go with direct-attached first: DA to a SAS expander, or DA to tape. All I/O will be faster. In the case of DA to tape, the house could burn to the ground taking all of your hardware with it, and if you put the tapes in a safe it really wouldn't even matter. Plus, the latter is cheaper per TB (somewhere around $30 to $40 per TB vs. the current $120). This isn't to say DA to tape is preferred over DA to disk; I'm saying that you should preserve network bandwidth and use it as the third option, not the first.
 
@kaz77

For sure, I agree with all your points.

Apart from the network bandwidth....

Which can easily be avoided with a second network card and cable in each.
Which wouldn't affect your streaming watching of the content at all.

.
 
@kaz77

For sure, I agree with all your points.

Apart from the network bandwidth....

Which can easily be avoided with a second network card and cable in each.
Which wouldn't affect your streaming watching of the content at all.

.

Just take it as a recommendation. ;) There's more than one way to get to Oz and there's many ways to skin a cat.
 