Hardware RAID 5E vs 6 vs 10 for home media server

Hi all, so I recently lost a software RAID 6 due to the software itself failing. I'll spare the details because I could never prove what happened; all I know is that none of my drives failed. I believe it has something to do with an mdadm vs dmraid metadata conflict, but I could be wrong.

Anyway...
Trying to learn from this, I went ahead and purchased a dedicated hardware RAID controller, http://www.newegg.com/Product/Product.aspx?Item=N82E16816103215, for my 20x1TB disk drives.

My question is RAID 6 vs RAID 5(E) vs RAID 10? I am not looking for performance so much as resilience and storage capacity, as it is a media server and only accessed by my household.

Is there a benefit to using spare drives for a household RAID? Am I right that spare drives only help with up-time during drive failures? Also, the RAID 5E configuration uses the spare drive in a rotational manner...is this beneficial for such static data? It seems like it would make the drives fail faster by moving blocks around constantly.

What RAID configuration would be the easiest to recover from after a hardware RAID controller failure? Is it even possible to recover from a hardware RAID controller failure?

I believe I understand the sacrifice from a capacity perspective for each RAID configuration: with RAID 6 I would get 18TB usable, with RAID 5 I would get 19TB usable, and with RAID 10 I would get 10TB usable.

Also, I understand RAID is NOT a backup solution. Does anyone know of a cheap backup solution for this much data (18-20TB)? I've heard of tape libraries, but the cheapest I saw would be somewhere in the $5000 range for this much data...yikes!

Thanks all for your comments and helping me make the best decision for my setup
 
Recovering from a Raid controller failure isn't usually a big deal. Most Raid controllers will read the configuration off the disks and get you back up and running fairly easily.

Raid 5 vs 6, there's a little difference, and you don’t lose any capacity.
1) You will lose some write performance; read performance is about the same.
2) Raid 6 writes two parity blocks per stripe instead of one, which is why there is a write performance hit.
Raid 6 does have a slightly longer rebuild time when you have a disk failure, but a rebuild of a disk really doesn't impact operations much.
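
For a sense of where that write hit comes from, here's a minimal sketch of the textbook small-write penalty accounting (my own illustration in Python, not anything measured on this card; real controllers with write-back cache and full-stripe writes do considerably better):

Code:
# Rough model of the classic small random-write penalty (illustrative only).
PENALTY = {
    "raid5": 4,   # read old data + read old parity + write data + write parity
    "raid6": 6,   # same, but two parity blocks to read and rewrite
    "raid10": 2,  # every write just goes to the two mirrored disks
}

def effective_write_iops(per_disk_iops, disks, level):
    """Approximate random-write IOPS for the whole array."""
    return per_disk_iops * disks / PENALTY[level]

for level in ("raid5", "raid6", "raid10"):
    print(level, round(effective_write_iops(100, 20, level)), "IOPS (approx.)")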

As for a Backup Solution, I wouldn't worry too much about that. Your Raid setup protects your data in the event of a Disk Failure. However, it doesn't protect you if you have a catastrophic multi-disk failure. An offline backup solution for this amount of data is expensive no matter how you look at it; tape would be your best bet. You could also consider Raid 0+1 or Raid 10, but those modes get expensive due to the need for twice as many disks.

Here's a link to the different Raid levels: http://en.wikipedia.org/wiki/Standard_RAID_levels

One thing I would have looked at before purchasing that raid controller is support for SATA 3 (6Gb/s). I believe the card you selected only supports SATA 2 (3Gb/s).

I see you're running 1TB drives, so the card you chose will probably be fine.
My only concern is that you have limited yourself somewhat from a performance/future-proofing perspective.

SATA 3 consumer hard drives are already at 4TB and 7,200rpm; these are expensive at the moment as they are new. However, you can get 3TB Seagates for $119 (on sale, currently $137) these days, which is what I am using.
http://www.ncix.com/products/?sku=66009&vpn=ST3000DM001&manufacture=Seagate&promoid=1366

Do you really need a card that natively supports 24 ports? In my humble opinion, no.

Take my Raid setup for example.
I use the MegaRAID SAS 9260-4i http://www.lsi.com/products/storagecomponents/Pages/MegaRAIDSAS9260-4i.aspx

It's a solid raid card and supports all the RAID levels you could ever need. I've never had a problem with it.

It does limit you out of the box to 4 drives; however, with SAS expanders you can go all the way to 128 drives.
Currently I have 3 x 2TB Seagate Barracuda hard drives in Raid 5.
I also purchased a battery backup module for my raid card to improve performance and protect data in the event of a power failure.

For your situation, though, I would probably have gone with something like the Adaptec 7 Series Raid card 72405, to give you the option of starting to replace your slow disks with ones that utilize SATA 3.
This card, on Newegg, is $179 more than the one you have shown.
What I would also have considered is purchasing 8 new 3TB drives to start rebuilding with faster disks. This would give you a total usable space of about 19.55TB, slightly more than you currently have.
You could also start with 3 x 3TB drives and add additional drives as you run out of space, to save some money on the initial cost.

You would also be consuming a lot less power using 8 drives vs 20.

I don’t know your budget, so this would be an expensive rebuild, but in my opinion worth it.
The overall performance gains would be very noticeable.

You could sell off the 1TB drives to recoup some of the cost.

Also, from the sounds of it you lost all your data on this array so it may be the perfect time to rebuild with faster disks and an updated controller, if you haven’t already made the purchase of this card.

Since you posted today it looks like you bought that card today, so maybe it’s not too late.

Sorry for the long winded reply but hopefully these considerations will help you make the right choice.

Bottom line, I think you've made a very good choice in moving to a hardware Raid controller and away from a software solution. That is the first and most important step.
 
@sawk
Thanks sawk for this great information.

I was looking at cards that had 4+ SFF-8087 internal interfaces because I have 20x1TB drives with 5x SFF-8087-to-4x-SATA breakout cables. I guess I thought that if I had a RAID controller with only 1 or 2 SFF-8087 interfaces, I was stuck with only using 4 or 8 drives...unless I purchased another controller and used some sort of software raid to combine the two.

I started my project about a year ago with this card from LSI. http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112

**(So I have two LSI Internal SATA/SAS 9211-8i 6Gb/s cards for sale if anyone wants them :) )**

This allowed me to use 8 drives. I later purchased another controller (the same LSI from Newegg) and, luckily, not having too much data, I was able to back it up and then recreate a software array using the 16 drives. Guess I didn't understand this stuff as much as I thought :)

Also, I bought the Adaptec card I mentioned from my previous post, but got it for $450, so not as much as the $800 on Newegg.

I'll have to read more about my Adaptec card in regards to battery for cache, etc.

For data, I lost 7.5TB. So I wasn't at full capacity. And it will take me a long time to get it all back. I have over 100 CDs I'll need to re-rip to FLAC. :(
 
Raid 5 vs 6, there's a little difference, and you don’t lose any capacity.

There is a capacity difference between R5 and R6. R6 dedicates two disks for parity and R5 dedicates 1.
 
There are certain scenarios where RAID 10 should be used but for a media server go with RAID 5 as the read performance will be better and you'll get more storage from your drives.

As long as you buy RAID-ready hard drives, the chances of ever having to rebuild are small, and even if you do, it's not like you have to sit there watching it. Who cares if it takes a day or two.
 
Also, I bought the Adaptec card I mentioned from my previous post, but got it for $450, so not as much as the $800 on Newegg.

That's a pretty good price you got that card for, for sure. It still limits you when it comes to using the newer SATA 3 (6Gb/s) hard drives, which was the main point I was trying to make.

If you're looking at it as a stopgap until you need to go with larger drives, that'll do just fine.

The only downside to purchasing SAS expanders to add more drives to a single card is that they themselves run upwards of $400, but they have quite a few connectors. You can also daisy-chain them, which makes it easy to expand.

In my case, like I mentioned, out of the box I can only do a max of 4 drives. However, if I add a SAS expander I can go to 12, or even 24, drives on a single expander.

Another big benefit of going with a newer 7 series card, as I recommended, is that they work well with SSDs if you want to add them into the mix.
 
There is a capacity difference between R5 and R6. R6 dedicates two disks for parity and R5 dedicates 1.

I stand Corrected, my mistake...

Raid 5 is (N-1)
Raid 6 is (N-2)
where N = total number of disks.
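
Plugging the OP's 20 x 1TB drives into those formulas (a quick sketch, ignoring filesystem overhead and the TB-vs-TiB difference):

Code:
# Usable capacity for 20 x 1TB drives, using N-1 / N-2 / N/2.
N, size_tb = 20, 1

print("RAID 5 :", (N - 1) * size_tb, "TB")   # one drive's worth of parity
print("RAID 6 :", (N - 2) * size_tb, "TB")   # two drives' worth of parity
print("RAID 10:", (N // 2) * size_tb, "TB")  # half the drives hold mirror copies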
 
Also, those 9211 cards you have are not Raid cards; they are just SATA/SAS host bus adapters, which allow you to attach more disks to a system via the card.

I believe a single card can also be expanded to up to 256 individual drives if needed.
 
Seriously, people, please I BEG of you to stop pitching HW raids for HOME MEDIA server.:mad:

Striping HDDs, either through HW RAID or ZFS variants, is NEVER needed for a home user.
It doesn't matter if you have 500+MB/s transfer speed when most homes these days have no more than 1Gb LAN (~100MB/s tops).
You are only increasing the cost and the points of failure (the RAID card itself, plus more than 2 HDD failures kills your entire set), reducing drive life (because they are all spinning all the time, which easily kills a WD Green), generating more heat/power usage, and adding potential data corruption (the HW RAID write hole).

Instead, for a home media server please use FlexRAID ($, Windows), SnapRAID (free but not as convenient), or Unraid ($, Linux, real-time parity). These software solutions create parity drives to provide a comparable level of data availability to RAID/ZFS solutions, but without the potential of killing the entire pool.
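
To make the "parity drive" idea concrete, here's a toy sketch of single-parity recovery (my own illustration of the concept only, not FlexRAID/SnapRAID's actual on-disk format; their dual-parity setups layer a second, differently computed parity on top of the same idea):

Code:
# Toy example: parity drive = XOR of the data drives, so any ONE lost
# data drive can be rebuilt from the survivors plus parity.
from functools import reduce

drives = [b"movie-bits", b"music-bits", b"photo-bits"]   # pretend drive contents
parity = bytes(reduce(lambda a, b: a ^ b, blk) for blk in zip(*drives))

lost = 1                                                 # pretend drive 1 died
survivors = [d for i, d in enumerate(drives) if i != lost] + [parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, blk) for blk in zip(*survivors))

assert rebuilt == drives[lost]                           # recovered byte-for-byte
print("rebuilt drive:", rebuilt)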

The characteristics of a home media server are:

0. Cheap drives (WD Greens); you must assume they WILL FAIL, but you need to make sure even multiple failures do not mean total disaster.

1. Large movie files, music files, pictures, game installers, etc., plus occasional home office backups (the originals are still on the workstations/laptops). The critical point is that the data is generally immutable, or updated very infrequently once it gets on there (e.g. BD rips, FLAC music, game ISOs).
Furthermore, the IO is mostly sequential, so most new HDDs can easily sustain 100MB/s transfer speed.

2. Connected to no more than 4~5 streamers at the same time (2 TVs, 2 PCs, 1 tablet/laptop is pretty reasonable even for big families). Generally speaking, you will never saturate the Gb LAN when streaming from a media server (raw Blu-ray is 40Mbps max; multiply by 5 and you are still only using 20% of the bandwidth).

3. Contents are replaceable from the originals or downloadable again. Losing an HDD's worth of data is NOT critical; the only cost is the time needed to re-rip it all.

4. Needs to be cheap and long-lasting, with disk spin-down when not in use, quiet operation, etc.

5. Most of the time even pooling of HDDs is not required. Most media software (XBMC, MediaBrowser, Plex, etc.) has built-in pooling capability.

So basically, please always use the right tools for the job at hand.
For office and serious multiple-simultaneous-access workloads, HW RAID or ZFS is REQUIRED, but so is an IT manager.
For VM usage, HW RAID or ZFS is also recommended due to the *seek* speed.

But for a home MEDIA server, please only use FlexRAID, Unraid, or SnapRAID. These solutions provide the best experience for these kinds of data, especially in disaster situations. They range from cheap to free, and they are independent of the hardware and even the OS.
All you need is a cheap HBA card (an M1015 for $60 on eBay) and an expander as necessary (~$300), and you can run these off Windows 7 if you want.
 
@thejimmahknows,

I'm in the same boat as you, except I'm backing up 60+TB of content.:D

I'd say that for OFFLINE backup of 20+TB of content, tape is the only way to go.
For home users, you don't need a tape library. What you need is a stand-alone internal or external LTO tape drive. These typically need a SAS port, so make sure you have a SAS HBA card as well.

LTO5 tape drives can be found for ~$1500 new, cheaper used.
LTO6 tape drives just came out, so they carry a bit of a premium, but Quantum has them for ~$2200.
I'm still waiting on Dell Canada to get back to me with their pricing information.

If you run the cost figures for offline backup scenarios:

Disk solution:
A separate barebone file server with CPUs, Controllers, mem, power etc, runs about ~$1000 minimum.
Each disk is $130 / 3TB or ~$45/TB

Tape Solution:
Tape Drive (LTO5) $1500
Each Tape is $40 / 1.5TB or ~$30/TB

Although you spend more on the initial drive vs another stand-alone server (face it, you are not just going to rip the HDDs out and store them somewhere, are you?),
the operating cost is ~33% less.

The break even point is ~30 TB of data.
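
For anyone who wants to check that break-even figure, here's the arithmetic with the rough prices quoted above (a quick sketch, nothing more):

Code:
# Disk: ~$1000 base server + ~$45/TB.  Tape: ~$1500 LTO5 drive + ~$30/TB.
disk_base, disk_per_tb = 1000, 45
tape_base, tape_per_tb = 1500, 30

break_even_tb = (tape_base - disk_base) / (disk_per_tb - tape_per_tb)
print(f"break-even at ~{break_even_tb:.0f} TB")   # ~33 TB, i.e. roughly 30 TB

for tb in (20, 30, 60):
    print(tb, "TB -> disk: $", disk_base + disk_per_tb * tb,
          "  tape: $", tape_base + tape_per_tb * tb)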

But really the more important thing is if you drop a tape cartridge, big deal, pick it up and move on.
If you drop an HDD, god bless you.:p


Oh, and for those who say tape backup is hard, look into LTFS from IBM, HP or Quantum.
Basically you can use a tape as if it's a USB stick: you can drag and drop content onto it, and even watch movies directly from the tape (it's kind of like streaming from tape).
You should NOT use it for small (<10MB) files though, as performance really suffers.
 
Seriously, people, please I BEG of you to stop pitching HW raids for HOME MEDIA server.:mad:

hooray, i just setup flexraid with 8 2TB WD Green drives for my home media server. i feel so smart now. ;)
 
Seriously, people, please I BEG of you to stop pitching HW raids for HOME MEDIA server.:mad:

There's no ONE solution. All of them have upsides and downsides, but none of them are so egregious that they should be avoided. ZFS or mdadm is not going to cost you any more to run than FlexRAID, SnapRAID, or Unraid will. If you need lots of ports you'll need an HBA of some sort regardless. Actually, FlexRAID will cost you more than ZFS or mdadm, simply because of the license cost alone.

Contents are replaceable from the originals or downloadable again. Losing an HDD's worth of data is NOT critical; the only cost is the time needed to re-rip it all.
It's not about the data being critical, it's about time. Once your file server starts getting into the tens of TBs, your requirements are no less than any SMB's. No one wants to spend all day pulling from backups or, heaven forbid, re-ripping, especially in the case of Blu-rays.

At the end of the day it's going to come down to need and what you're comfortable with. None of the solutions out there are so horrible that they need to be avoided. The only one with enough negatives, due to cost, is the hardware raid card. Other than that, the differences between the other solutions are not great enough to discount them.
 
So basically, please always use the right tools for the job at hand.
For office and serious multiple-simultaneous-access workloads, HW RAID or ZFS is REQUIRED, but so is an IT manager.
For VM usage, HW RAID or ZFS is also recommended due to the *seek* speed.

But for a home MEDIA server, please only use FlexRAID, Unraid, or SnapRAID. These solutions provide the best experience for these kinds of data, especially in disaster situations. They range from cheap to free, and they are independent of the hardware and even the OS.
All you need is a cheap HBA card (an M1015 for $60 on eBay) and an expander as necessary (~$300), and you can run these off Windows 7 if you want.

most accurate post in the history of hardocp. please stop using raid
 
There's no ONE solution. All of them have up sides and down sides but none of them are so egregious that they should be avoided.

No. There is a very clear downside to ZFS, mdadm or HW RAID: if more than 2 drives (for RAID 6) die due to cheap hardware (WD Greens), you lose ALLLLLLL your data, because all of it is striped. That will take months to replace. Given how cheaply these drives are made, and heaven forbid you used drives from the same batch, the chance of a 3rd failure during the rebuilding process is very high.

That's why SMBs HAVE TO (or at least SHOULD) use enterprise-class SAS HDDs, which easily cost $300+, even in ZFS applications.

For FlexRAID etc., if more than 2 drives fail, you only lose what you have on those drives, nothing more. This applies to the 22x 3TB drives that I have: if one drive fails, I only replace (or recover) 3TB worth of data, not 60TB.
The only thing is to keep track of all the data by running a directory list and MD5 hash regularly, so you know what to recover when needed and/or whether any file is corrupted. It takes like a few minutes per day to run through 22 drives.
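
For anyone wanting to do the same, here's a minimal sketch of that kind of checksum manifest run (my own example; the /mnt/disk1 path is just a placeholder) -- run it once per drive, save the output, and diff it against the previous run:

Code:
# Walk a drive and print "md5  path" for every file.
import hashlib, os, sys

def md5sum(path, chunk=1 << 20):
    """MD5 of a file, read in 1 MiB chunks so huge media files don't eat RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

root = sys.argv[1] if len(sys.argv) > 1 else "/mnt/disk1"   # hypothetical mount point
for dirpath, _, files in os.walk(root):
    for name in files:
        full = os.path.join(dirpath, name)
        print(md5sum(full), full)    # redirect to a dated manifest file to compare later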

Again, please remember my argument is for HOME MEDIA server ONLY.
For this type of data and usage pattern, the average user's knowledge level and experience, and really the *suggested* hardware costs, there is only ONE type of solution: "non-striped data drives with dedicated dual parity drives", either in snapshot mode (SnapRAID, FlexRAID snapshot) or real-time mode (Unraid or FlexRAID real-time).

Finally, remember that ZFS and HW RAID have to pre-allocate enough space for the pool. Doing an online expansion takes years.
With FlexRAID etc., like the original WHS v1, all you need to do is pop in a new drive and off you go. It just works.
What's better is that, unlike WHS v1, you don't actually need a new drive; existing data can also be protected. Just pop the drive in, run a parity calc afterwards, and you are all set.

SBS 2011 Essentials / Server 2012 Essentials + FlexRAID + Plex Media Server = the WHS that should have been but never was.
 
One final point for you to consider:

With ZFS or HW RAID, the more drives you have, the higher the probability of losing ALL data, regardless of the type of drive.
e.g. P(failure of ENTIRE zfs pool) = P(single fail) ^ 3 * N * (N-1) * (N-2)
(this is the formula for calculating 3 simultaneous failures assuming non-correlated failure rates, correlations between drives make this worse).

With FlexRaid, the more drives you have (for the same amount of data), the LESS likely you are going to lose much data.
e.g. P (loss of a subset of data) = (P(single fail) ^ 3 * (N-2) * (N-3) * (N-4)) / N (assuming perfectly spread out).
(Note: the reason for N-2 here is that if a parity drive fails, you don't lose anything; you don't even need to run a recovery job.)

Do your own math assuming P(single failure) = ~0.01 per year for consumer drives (source: google)

When N is small it's not a big deal, but when it's 20+ 3TB drives, it makes a huge difference.
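
For reference, here's what those two expressions give when you plug numbers in (a quick sketch that simply evaluates the approximations exactly as written above -- a rough comparison, not a rigorous reliability model):

Code:
p = 0.01   # assumed annual failure probability of a single consumer drive

def p_pool_loss(n):
    """Striped pool (ZFS / HW RAID 6): ~chance of a 3rd failure -> lose everything."""
    return p**3 * n * (n - 1) * (n - 2)

def p_partial_loss(n):
    """Independent drives + parity: ~chance of losing one drive's worth of data."""
    return (p**3 * (n - 2) * (n - 3) * (n - 4)) / n

for n in (8, 12, 22):
    print(f"N={n}: whole pool ~{p_pool_loss(n):.1e}  vs  one drive's data ~{p_partial_loss(n):.1e}")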
 
I must stress the importance of backing up. RAID is FAULT TOLERANCE; it's designed to get you back up and running more efficiently, but if you don't have a separate backup of the data and your whole system crashes and burns, there's no guarantee you can retrieve all your stuff. Ask anyone who's been down that path.
 
As for a Backup Solution, I wouldn't worry too much about that. Your Raid setup protects your data in the event of a Disk Failure.

While you provide useful information, this single point is really bad advice. If you read this forum regularly, at least once per month someone with a failed RAID array passes by and asks how to recover their single copy of 20-year-old photos without spending a thousand dollars on a professional recovery service. Basically, a scheduled automatic backup is more important, and generally a better solution for private data storage, than any level of RAID. Even if it is just media, consider how long it would take to re-rip your entire DVD collection; if I calculate that for my collection in work hours, I could easily buy a second server.

Further, a controller can fail in different ways. While the chance that this happens is low, the worst case would be that it starts to write garbage to the disks. RAID cannot protect you against this.

For a high capacity media server a pooling solution with parity is probably the best solution.

EDIT: @OP: Regarding your mdadm issue... are you sure the data is lost? I've been using mdadm since 2003 and I've never managed to destroy a RAID, even though I'm constantly playing around. Anyway, what you should have learned from this is not to buy a dedicated controller but to make backups.
 
First, to stay within the forum guidelines: I understand the thread title says "home media server".

1. To address the on-topic discussion (only a discussion, not a recommendation): the usual eSATA/USB3.0 4-bay or 8-bay external enclosure is one option.
-- This is specific to the thread's "home media server" framing, where absolute precision is, perhaps, not a strict requirement.
1.1 The only uncertainty I have here is that I am not very familiar with running ZFS (if you choose that route) over a USB3.0 connection. (Most of the discussion seems to be centered on connecting via SATA/SAS ports.)

2. Since other posters also touch on hardware RAID issues and considerations, here is an admittedly over-demanding option, but perhaps worth a note...

2.1 I saw the HP D2700 25-bay disk enclosure at a great discount (from 3xxx down to 1750).
2.2 25 bays should be sufficient for most home users.
2.3 Being HP, it should be of sufficient quality, I suppose.
2.4 Some readers have valid price sensitivity for disks, and I agree that for home use affordability is a big concern. However, I did find relatively good discounts (relative to enterprise-grade storage) on HP official branded hot-plug disks. Not sure how long that will last, though...
2.5 Coupling it with an official HP Smart Array controller with battery-backed cache and the official HP array drivers, it should be perfect.

I searched on Amazon. Not very sure about specific discounts status now.
 
2.1 I saw the HP D2700 25-bay disk enclosure at a great discount (from 3xxx down to 1750).

Dude, these take 2.5-inch drives, not the 3.5-inch drives we normally use as consumers.
Also, for a drive enclosure I'll take SuperMicro-branded ones over any other name brand (and they're cheaper too; get one at Newegg).

If Kim DotCom uses SuperMicro exclusively for his new ExaByte Mega service, there has to be a reason.
 
@Hikarul
When did this become a discussion of Hardware versus Software Raid?

If you read the thread starter's original post, you can clearly see he's already tried software raid and was not happy with the experience.

He's already made the decision to switch to hardware Raid. Instead of berating him for his decision, or others for making recommendations, why not support him in his endeavor by providing constructive feedback and other recommendations if they haven't been mentioned?

He had a few specific questions in his original post.

1) Is it possible to recover from a Raid controller failure?
a. Short answer: yes, and this was covered.
2) What Raid configuration is easiest to recover from after a Raid controller failure?
a. Short answer: there is no real "easiest". If you replace the raid controller with the same one you had, it's a non-issue. Even a different controller may be able to read the configuration off the disks.
3) Is there a cheap way to backup large amounts of data? 18-20TB range.
a. See my expansion on this topic below.

Nowhere do I see him ask about the differences between Software Raid and Hardware Raid.

Now back to the original discussion…

Actually I missed a 4th question.

4) Is there a benefit to using a spare Drive in raid?

To answer this missed question: there can be benefits, yes. If your raid card supports it, you can have a hot spare, which means that in the event of a drive failure it should automatically rebuild the array onto the hot spare and mark the failed drive as bad and removable. Once the failed drive is replaced, the replacement can either be manually made the new hot spare, or you can switch it to being active and move the original hot spare back to being the hot spare.

If you have the room in your chassis and the money to buy the additional drive, AND your Raid controller supports it, using a Hot Spare in my opinion is a good idea if you are really concerned.

Otherwise have your Raid Controller e-mail you when a drive fails and replace it at your leisure. Remember in Raid 5 a single drive failure won’t take down the array.


I’d like to also expand on the Backup you were wondering about.

If you would like an offline backup you should look at tape, like you were thinking; however, this is expensive.

The tape drives alone are from approximately $1,900 to $2,300 for a drive that can do 3TB per tape (LTO5).
Tapes run about $50 per.
These drives typically do 1TB/Hour.
You have to manually rotate the tapes to do a backup.
You’d need 7 tapes to backup 20TB and it would take approximately 20 hours to complete.
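
If you want to sanity-check those numbers, here's the arithmetic (a quick sketch using the figures above; bear in mind that already-compressed media won't get anywhere near the 2:1 ratio, so the 1.5TB native capacity is the safer planning figure):

Code:
import math

data_tb = 20
tapes_compressed = math.ceil(data_tb / 3.0)   # ~7 tapes at the quoted 3TB/tape (2:1)
tapes_native = math.ceil(data_tb / 1.5)       # ~14 tapes if nothing compresses
hours = data_tb / 1.0                         # ~20 hours at roughly 1TB/hour

print(tapes_compressed, "tapes (2:1) /", tapes_native, "tapes (native),", hours, "hours")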

These prices are for:
1) Hewlett Packard - Dat 3c Hp Storageworks Eh958sb Lto Ultrium 5 Tape Drive (eh958sb) - $1,955.00 on Amazon, regular around $4,000.
2) Quantum TC-L52BN-EZ Black 3TB Tabletop 6Gb/s SAS Interface LTO Ultrium 5 Half Height Tape Drive w/ SAS HBA Card - $2,327 on NewEgg

These are just 2 examples and by no means the only ones out there.

There are online services, www.ibackup.com for example that offer 3TB for $299.95 a month… a Month…

Or maybe www.crashplan.com would be better. $59.99/year for unlimited storage.

I have zero experience with these services but thought I’d offer up a little information for you to investigate, perhaps something like these would work if you really wanted an offsite backup.

Hard drives just don't fail as much as they used to. I've probably just jinxed myself here, but I personally have not had a drive failure at home in 7+ years, and that's with about 20 drives running most of the time.

Again please keep on topic and do not hijack a user’s thread to start a discussion about what is best for a home media server. There are many options, please discuss them in the appropriate place.

There is a "Home Theater PCs & Equipment" section right here if you would like to start a discussion on what is better for home theater storage.
http://hardforum.com/forumdisplay.php?f=103
 
First off, thank you all for your comments whether they were directly for my original questions or a pointer for software vs hardware raid.

The reason I sought a hardware RAID solution was that for about 8 months I was running mdadm as a software RAID 6 solution...I think I had a max capacity of 12TB at the time. The issue I ran into (if you would like to read it: http://serverfault.com/questions/460129/mdadm-rebooted-array-missing-cant-assemble) was that I did a sudo apt-get update && upgrade, rebooted, and all my drives under /dev/ were showing up as /dev/mapper-923809283 or some other random numbers for each drive, instead of /dev/sdb1, /dev/sdc1, etc. mdadm was unable to reassemble the array, even though none of my drives had failed. I scanned them all with SMART, which showed green status.

I have a few more questions. Regarding a controller failure, how does the new controller reassemble the disk array? Do the disks contain some sort of "metadata" (correct my terminology please)?

I have had the card I posted in the first post for about 3 weeks now; I just haven't yet settled on the RAID type I think would be the most fault tolerant. Yes, I know RAID is not a backup solution, but it is the only thing I can/could afford to aggregate that much data without just having HDDs lying around.

FlexRAID? Is this software raid? What makes it different from mdadm? Because if it acts the same, I wouldn't trust it. Then again, my situation might have been just a one-off.

For the tape library stuff, I notice tapes show a 1.5TB/2.0TB-compressed advertisement. Does that mean I have to find a tape drive/library that supports compression? Or how does it work with backing up? Can you do a full and then incrementals? Maybe that wouldn't make sense with my setup. Curious though, I've never played with tape drives before, unless you count Iomega 100MB Zip disks :-p

Think that is it for now with questions, thanks again y'all

-Jim
 
Does that mean I have to find a tape drive/library that supports compression?

Yes. Modern tape drives (like all LTO units) have hardware compression built into the drive. However, remember that you will not get any compression for media files, since already-compressed files generally will not compress well, so you had better use the native size for your sizing. Remember, the quoted compressed size is just an estimate, and depending on your data it can be very inaccurate. For example, here at work I average about 1.5:1 compression on tapes that state a 2:1 compression ratio; however, on some backups containing mostly text I have had tapes that did 10:1.
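
To put rough numbers on that, here's a quick sketch using the ratios mentioned above and LTO5's 1.5TB native capacity (illustrative only; your data will vary):

Code:
NATIVE_TB = 1.5   # LTO5 native; the "3TB" on the box assumes 2:1 compression
for label, ratio in [("media (no compression)", 1.0),
                     ("typical mixed data", 1.5),
                     ("vendor-quoted", 2.0),
                     ("mostly text", 10.0)]:
    print(f"{label:<24} ~{NATIVE_TB * ratio:5.1f} TB per tape")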

Or how does it work with backing up?

The drive compresses the blocks that are sent to it on the fly. It will keep writing blocks until there is no more tape left; at that point it reports the condition to the tape driver and the backup software takes action.

Can you do a full and then incrementals?

Any reasonable backup software package should support that.
 
The disks do contain their configuration "metadata", as you put it, so that a controller can read the configuration in the event the controller is replaced.

Lol the infamous Iomega Zip Disks... I did have those, they were decent for the time... long time ago...

In fact I think I still have the hardware and disks kicking around.
 
The reason I sought a hardware RAID solution was that for about 8 months I was running mdadm as a software RAID 6 solution...I think I had a max capacity of 12TB at the time. The issue I ran into (if you would like to read it: http://serverfault.com/questions/460129/mdadm-rebooted-array-missing-cant-assemble) was that I did a sudo apt-get update && upgrade, rebooted, and all my drives under /dev/ were showing up as /dev/mapper-923809283 or some other random numbers for each drive, instead of /dev/sdb1, /dev/sdc1, etc. mdadm was unable to reassemble the array, even though none of my drives had failed. I scanned them all with SMART, which showed green status.
If device-mapper has grabbed all your devices, mdadm cannot create the array. The first step would be to stop all falsely assembled mdadm arrays (if there are any) and remove all device-mapper assignments with
Code:
dmsetup remove_all [--force]
partprobe
It seems that dmraid detected the superblocks, which keeps the kernel from creating the partition devices. The confusion between the bare device (sda) and the partition device (sda1) is why I create all my newer arrays on bare devices. Also, metadata version 1.2 and partitions do not mix too well; version 1.0 would have been better here.

You were by no means out of options at this point. The best thing to do when your RAID falls apart for whatever reason and the standard steps fail is to make block-level backups of all drives before messing around. That way you can always restore whatever you may delete later, and you have something to send to a recovery service in case everything else fails. The same applies to hardware RAID.

I have a few more questions. Regarding a controller failure, how does the new controller reassemble the disk array? Do the disks contain some sort of "metadata" (correct my terminology please)?
Just like software RAID, hardware RAID stores so-called superblocks on each drive that contain information about the array and the drive.

FlexRAID? Is this software raid? What makes it different from mdadm? Because if it acts the same, I wouldn't trust it. Then again, my situation might have been just a one-off.
FlexRAID is basically a type of software RAID that works at the filesystem level, not at the block level. It pools multiple drives, which by themselves have separate independent filesystems, into one large device containing all the files.
 
No. There is a very clear downside to ZFS, mdadm or HW RAID: if more than 2 drives (for RAID 6) die due to cheap hardware (WD Greens), you lose ALLLLLLL your data, because all of it is striped. That will take months to replace. Given how cheaply these drives are made, and heaven forbid you used drives from the same batch, the chance of a 3rd failure during the rebuilding process is very high.
A) You are supposed to have backups. That's why we have a problem now with people running 50TB solutions with no backup.
B) Bad hard drives affect all systems.
C) The chances of 3 drives going at the exact same time are slim, and for those times, see point A.

That's why SMBs HAVE TO (or at least SHOULD) use enterprise-class SAS HDDs, which easily cost $300+, even in ZFS applications.
People use SATA drives all the time in enterprise settings, and we aren't talking about enterprise here.

For FlexRAID etc., if more than 2 drives fail, you only lose what you have on those drives, nothing more. This applies to the 22x 3TB drives that I have: if one drive fails, I only replace (or recover) 3TB worth of data, not 60TB.
The only thing is to keep track of all the data by running a directory list and MD5 hash regularly, so you know what to recover when needed and/or whether any file is corrupted. It takes like a few minutes per day to run through 22 drives.
If you set up your system correctly you shouldn't lose any. We are working on uptime, not acceptable failure. Most backup systems take care of restores automatically.

Again, please remember my argument is for HOME MEDIA server ONLY.
For this type of data and usage pattern, the average user's knowledge level and experience, and really the *suggested* hardware costs, there is only ONE type of solution: "non-striped data drives with dedicated dual parity drives", either in snapshot mode (SnapRAID, FlexRAID snapshot) or real-time mode (Unraid or FlexRAID real-time).
Sorry, there is not. Setting up mdadm, and especially FreeNAS, is brain-dead simple. If you want to sell people on the positives of FlexRAID then do that, but don't sit there and tell people the only way to go is to spend money on Windows and FlexRAID. It comes off like your sole goal is to sell technology to people, not to help them.

Finally, remember that ZFS and HW RAID have to pre-allocate enough space for the pool. Doing an online expansion takes years.
With FlexRAID etc., like the original WHS v1, all you need to do is pop in a new drive and off you go. It just works.
What's better is that, unlike WHS v1, you don't actually need a new drive; existing data can also be protected. Just pop the drive in, run a parity calc afterwards, and you are all set.

SBS 2011 Essentials / Server 2012 Essentials + FlexRAID + Plex Media Server = the WHS that should have been but never was.
Takes years? LOL.
 
Alrighty, I decided to go with hardware RAID 6 again, at least until I have a backup solution I can afford in the tens-of-TB capacity range....I've heard a lot of folks mention the ZFS filesystem. I've also heard ZFS is good for large files, which I am considering seeing as I have my DVDs/BDs to re-rip. I don't do compression with my BDs; I pretty much straight-transfer them to MKV. This creates files anywhere from 18GB to 35GB (the Avengers BD). Keeping in mind these huge video files are one thing that will reside on the volume, I also have my FLAC files, which, whenever I finish ripping all my CDs again, will probably number 1,000+.

When I was using mdadm I used ext4 for the volume, so I'm not sure if I should stick with that or not. I have a buddy who runs FreeNAS, which I guess uses ZFS.

So my question, given the information about my files above: ext4 vs ZFS? And if it boils down to just preference, please let me know that too.

Thank you all for your help!

Additions:
Forgot to add one thing, I do have a home Hyper-V server I use to play with new software technologies. That being said I do create 20-100GB image files for LUNs.
 
Hot spares are not worth it for home. Definitely go RAID 6, not RAID 5, especially with more than 8 drives.

RAID 6 is the best raid level ever, unless you do small random writes (like mysql/db write patterns), which is almost never the case for home users.

RAID 6 is the only raid level that can recover from unreadable/bad sectors without data loss when a single drive has failed and it is rebuilding to a new one. It is also the only raid level able to use 3 data sets (parity 1, parity 2, data) for consistency checks when no drives have failed.

Having hot spares messes with the disk order when you do have a failure and makes recovering the array more of a PITA, so I would avoid them altogether. Having a cold spare that you can swap in when a drive does fail is not a bad idea, though.

I personally use DAS anyway, so it doesn't matter that my machines are only on gigabit, and I do enjoy the 500-2000 MB/sec reads when I am doing parity repairs and unraring my usenet downloads in parallel.

I am all for hardware raid in the home if you can afford it but some of the software alternatives are good as well depending on the users needs.

Hardware raid is overkill for some people, but I like fast I/O, and no software raid really compares, especially with the overhead of a network filesystem when I can't use it natively on Linux and thus have to use it as NAS instead of DAS.
 
This thread is great and has a lot of very useful info that I had not known, thanks everyone.
 
Lol the infamous Iomega Zip Disks... I did have those, they were decent for the time... long time ago...

Hell, they were better than decent. They were fast enough that you could run DOS games off of them, and 100MB back when people had 800MB drives was a healthy amount of space.

For my media storage I took a mixed route. I have 2 RAID 5 pools and then use symlinks to make it all look like 1 drive, though I have also considered Windows spanning to make it really look like 1 drive. Anyway, I did this so that I had some parity through RAID 5, but broken up so that if one pool goes down the other one is completely unaffected. I don't like the idea of having 8 drives with only 1 parity drive and putting all the eggs in one basket.

I suppose fundamentally it's not much different than RAID 6, but should something go wrong, only 1 pool takes a performance hit, and you only have to rebuild that pool of data.

I bought my PERC5i with battery backup for $35 shipped on eBay. I felt that to be a very good price. And while I can only get about 20-30MB/s on the motherboard's RAID 5, I have no problem hitting the NIC's cap of 75-85MB/s with the PERC. I think I benched it at about 135MB/s.
 
Pardon me for adding a small note here (Jimmah already said the thread's issue is mostly concluded, so I have to add this declaration).

1. Pertaining to your issue of:
"The issue I ran into (if you would like to read it: http://serverfault.com/questions/460...-cant-assemble) was that I did a sudo apt-get update && upgrade, rebooted, and all my drives under /dev/ were showing up as /dev/mapper-923809283 or some other random numbers for each drive"

1.1 There are many potential scenarios, so I am not going into that.
1.2 However, a "dedicated NAS OS/environment" usually ships more controlled and tested releases, especially the established options, so it is less likely to have this kind of issue. So a software raid configuration does still have very valid use cases.

2. If the end user does want to run software raid on locally attached large storage with performance expectations (the poster's original scenario), then the software update path does play a part. It is recommended to stay on the -stable update path; this applies to all distributions with a software raid facility. The simple basis for this is that usually a lot more testing/verification (and more formal issue handling) is done on the general release channel.
 
Alrighty, I decided to go with hardware RAID 6 again, at least until I have a backup solution I can afford in the tens-of-TB capacity range....I've heard a lot of folks mention the ZFS filesystem. I've also heard ZFS is good for large files, which I am considering seeing as I have my DVDs/BDs to re-rip.

ZFS and hardware RAID do not mesh well together. ZFS basically incorporates software RAID, a volume manager and a filesystem into one complete solution, and if you want to use hardware RAID you are better off with another filesystem.
 
Now go and buy a second one. This card is your single point of failure; if it goes pop, you are thoroughly stuck. If the CPU or motherboard blows you can move the array to a different box to get at your data, but if this card goes, you are SOL.

I was under the impression, @sawk, that a controller failure is an easy fix: simply swap out the controller with another one and the drive "metadata" is used to reassemble.

Now these two statements seem to contradict each other, so could either of you elaborate on this matter?

Thank you both,

Jim
 
I was under the impression, @sawk, that a controller failure is an easy fix: simply swap out the controller with another one and the drive "metadata" is used to reassemble.

Now these two statements seem to contradict each other, so could either of you elaborate on this matter?

Thank you both,

Jim

Since ZFS handles the raid functionality, you need an HBA but not RAID at the hardware level.
Either an IBM M1015 that you can flash to LSI firmware, or the LSI 9211.
 
I was under the impression, @sawk, that a controller failure is an easy fix: simply swap out the controller with another one and the drive "metadata" is used to reassemble.

Now these two statements seem to contradict each other, so could either of you elaborate on this matter?

Thank you both,

Jim

He is mentioning the easiest way to protect against that particular controller failing. Having one on standby is a great option, if you can afford to have a $400-$1000 card sitting around.

Most Hardware Raid controllers can read the metadata from the disks for the configuration and bring the array back online for the OS.

I've run across this scenario in enterprise environments many times. Dell would send a newer version PERC controller for a server and it would read the raid that was created by an older generation PERC controller.

The simplified way it works: if you have to exchange your controller for a different brand/revision, you go into the controller's BIOS, where you would normally set up your array, and it should have an option to read the configuration; in some cases it will do that automatically and just ask for confirmation that you want to continue with the current disk configuration.
 
For your situation, though, I would probably have gone with something like the Adaptec 7 Series Raid card 72405, to give you the option of starting to replace your slow disks with ones that utilize SATA 3.
This card, on Newegg, is $179 more than the one you have shown.

I have the same card as the OP and it's more than fast enough (600MB/s+), and the cables are much, much cheaper considering you can get generics, so the overall cost is much lower. Also, if you are using HDDs (not SSDs) then SATA 3 is pointless, as drives have only just broken the SATA 1 max speed. You can also get an 8088-to-8087 adapter and use the 4 external ports internally for a total of 20 drives.

Also, stay away from tapes if at all possible: the drives are very loud, the tapes are pretty delicate (if the leader pin becomes dislodged and you put the tape into the drive, it will kill the tape and the drive) and they cost a fortune. (I can get almost 20x 3TB drives for the cost of an LTO6 drive, let alone the tapes/controller.)

I use hot-swap HDDs in cheap caddies (you could use a USB3 dock). Cheap, easy, and it works with any PC with a SATA port (or USB port if using a dock), so any point of failure that exists is easily replaced. I also reuse drives from my main server as I upgrade, to further reduce costs.
 
The only thing is to keep track of all the data by running a directory list and MD5 hash regularly, so you know what to recover when needed and/or whether any file is corrupted. It takes like a few minutes per day to run through 22 drives.
Yes, this is a very good point. You should always make MD5 (or some other algorithm) checksums to make sure that your data has not been subject to random bit flips, a.k.a. bit rot. Read more about silent data corruption here:
http://en.wikipedia.org/wiki/ZFS#Data_Integrity

Thus, do a checksum of all files every day to detect data corruption. Either do it manually, as hikarul does, or use an automatic system such as ZFS to do it for you.
 
the tapes are pretty delicate

Tapes are a lot more durable than removable hard drives, and they are orders of magnitude more reliable (bit error rates).
 
I'm sure it's been mentioned in this forum before but for just a normal home media server, I've relied on Flexraid.

1. It doesn't change anything as far as files are concerned, so no matter how the system explodes, you can take a drive, plug it into any machine and copy your data off.
2. It relies on a parity system (for the most part) to restore lost files/disks. Example: I have 8 "data" drives and 2 "parity" drives set up in my FlexRAID. This allows me to lose 2 data drives or 1 parity drive.
3. Bonus: it has drive pooling similar to the original Windows Home Server.
 
I have my RAID 6 up and running. Just finished the build/verify today. Starting to load data back onto it.

I am running into an issue, though: I am getting a lot of SCSI hang errors in my syslog.

Code:
Jan  6 20:29:15 mediaserver kernel: [ 5493.523282] aacraid: Host adapter abort request (4,0,0,0)
Jan  6 20:29:15 mediaserver kernel: [ 5493.523309] aacraid: Host adapter abort request (4,0,0,0)
Jan  6 20:29:15 mediaserver kernel: [ 5493.523375] aacraid: Host adapter reset request. SCSI hang ?

Following advice found via Google, I've updated my motherboard's BIOS, which has reduced the messages in syslog; however, they still occur under I/O transfers.

Code:
Ubuntu 12.04.1 64-bit Server
Linux mediaserver 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

RAID Controller = http://www.newegg.com/Product/Product.aspx?Item=N82E16816103217


Has anyone stumbled across this before? Also, if this is deviating too far from my original post, I can start a separate thread.
 