Storage System for HTPC media

ReKK

I was hoping you guys could help me build a storage system for my HTPC media. What I am trying to do is create a storage system within a budget of $3k, hopefully less. After backing up hundreds (if not a few thousand) Blu-rays and DVDs, I want to ensure that my system never fails so I don't ever have to re-rip them again, because that would take hours on end. I don't know much about RAID and am hoping some of you experts out there can advise whether you think it's necessary. Is there any type of build you can suggest?
 
Considering that you're new to all of this, I highly recommend using Windows Home Server. We have a stickied FAQ thread about it at the top of this subforum.

But to help you further, please answer these questions:
1) Is this a fresh from the ground up build? As in you won't be reusing any parts?
2) How much storage space do you need initially? i.e 2TB?
3) how much storage space do you want eventually? i.e 24TB?
4) Is noise a factor at all?
5) Where do you live? As in, what state and/or country?
6) When do you plan on building/buying the server(s)?
7) Do you have any familiarity at all with Linux or FreeBSD?

Also note that to be ~99% sure you never have to re-rip your DVDs, you're looking at either a two-server setup (one main storage server and one backup server) or possibly one server and a large external hard drive. In other words, keep two copies of that data. At least that's the easiest way I can think of.
 
For maximum reliability, I highly recommend RAID-6, drives no larger than 1TB, and small span groups -- such as 2+2. Then you can combine as many span groups in RAID-60 as you need for the total amount of data. It also helps reliability quite a bit to use SAS drives instead of SATA, but they're also a fair bit more expensive.
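
To make the overhead concrete, here is a rough back-of-the-envelope sketch (my own illustration, not output from any RAID tool) of how usable capacity and fault tolerance work out when you combine 2+2 RAID-6 span groups into RAID-60:

```python
# Rough RAID-60 capacity / fault-tolerance calculator (illustrative only).
def raid60_summary(groups, data_per_group=2, parity_per_group=2, drive_tb=1.0):
    drives = groups * (data_per_group + parity_per_group)
    return {
        "total_drives": drives,
        "raw_tb": drives * drive_tb,
        "usable_tb": groups * data_per_group * drive_tb,
        # RAID-6 survives any two failed drives within each span group.
        "failures_tolerated_per_group": parity_per_group,
    }

# Example: five 2+2 groups of 1TB drives -> 20 drives, 10TB usable (50% overhead).
print(raid60_summary(groups=5))
```

The 2+2 layout spends half the raw capacity on parity; that is the price of tolerating any two drive failures within each group.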

The reason for RAID-6 is that with large volumes, it's not uncommon to have multiple errors, particularly when you're rebuilding an array. RAID-5 or RAID-10 can only correct for single errors; RAID-6 can correct two errors. Small span groups help to further minimize the chances of an uncorrectable error.

Keep in mind that RAID alone isn't a good substitute for backups. If your controller goes crazy, you can lose the array -- so offline backups are still a good idea, if you can afford them.

Although I prefer Windows myself, if you're familiar with Solaris, that's another option, since ZFS has some reliability benefits that aren't available in NTFS.
 
Thanks for the quick replies. I'm really excited to see if this project is feasible/do-able. I've answered the questions below.

Danny Bui said:
Considering that you're new to all of this, I highly recommend using Windows Home Server. We have a stickied FAQ thread about it at the top of this subforum.

But to help you further, please answer these questions:
1) Is this a fresh from the ground up build? As in you won't be reusing any parts?
Yes

2) How much storage space do you need initially? i.e 2TB?
I want it to be expandable if possible, since disk space only gets cheaper over time. If I was forced to choose, I would say probably 10TB (not including mirrors or backup space).

3) how much storage space do you want eventually? i.e 24TB?
I would like for it to be expandable.

4) Is noise a factor at all?
Yes

5) Where do you live? As in, what state and/or country?
US

6) When do you plan on building/buying the server(s)?
ASAP

7) Do you have any familiarity at all with Linux or FreeBSD?

Also note that to be ~99% sure you never have to re-rip your DVDs, you're looking at either a two-server setup (one main storage server and one backup server) or possibly one server and a large external hard drive. In other words, keep two copies of that data. At least that's the easiest way I can think of.
 
Please answer question 7.

Already I see one snag hardware-wise: for the amount of storage that you want and the need for expandability, a quiet server is somewhat out of the question.
 
I have a WHS system and would build one again, with a complete HDD-based backup.

I have around 600 DVD and BD rips on my WHS, with my older drives serving as the backup copy.
 
7) Do you have any familiarity at all with Linux or FreeBSD?

No, sorry I don't.

I figured external hard drives aren't reliable enough to be worth using as backups. Any ideas for a setup?
How loud would the setup you're thinking of be?
 
I figured external hard drives aren't reliable enough to be worth using as backups. Any ideas for a setup?
How loud would the setup you're thinking of be?

Fairly loud I guess?

Here's the setup I was thinking of:

$115 - Intel Core i3-530 CPU
$190 - Supermicro MBD-X8SIL-F-O Intel 3420mATX Motherboard
$113 - Kingston 2 x 2GB ECC Unbuffered DDR3 1333 RAM
$200 - 2 x SuperMicro AOC-SASLP-MV8 PCI-Ex4 8 Port SATA Controller Card
$64 - 4 x 3ware SFF-8087 to Multi-lane SATA Forward Break-out Cable
$1200 - 10 x Western Digital Caviar Green WD20EARS 2TB SATA HDD
$110 - Corsair 750TX 750W PSU
$300 - NORCO RPC-4020 4U Rackmount Server Case
---
Total: $2,292 plus tax and shipping.
 
Danny Bui has a pretty decent setup spec'd out there. Also, you could look at my WHS setup to get some ideas.

One note... do NOT get the WD EARS drives with WHS v1. Those drives use 4K sectors, which do work on WHS (I have a few running), but they add extra hassle, and I am fairly sure that on the XP code base WHS v1 uses (made for 512B sectors) their performance is a bit worse than the EADS drives.
 
I know it's not popular on this forum because it's not high end, but Unraid is perfect for what you need. It was designed for media storage. It's a type of fake RAID setup with one parity drive for up to 20 data drives. You can mix and match drive sizes as long as the parity drive is as large as the largest drive in the system. It's not actual RAID, since all the drives are readable individually outside the system. With real RAID, if you lose more drives than you have parity protection for, you lose everything. With Unraid, if you lose two drives you only lose what was on the dead drives.
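
For anyone wondering what a single parity drive buys you, here is a toy sketch (my own simplification in Python, not actual Unraid code) of how XOR parity lets you rebuild any one dead data drive, while the surviving drives stay readable on their own:

```python
# Toy model of single-parity protection (Unraid is conceptually similar; this is not its code).
from functools import reduce

def parity(blocks):
    """Parity block = XOR of the corresponding blocks on every data drive."""
    return bytes(reduce(lambda a, b: a ^ b, vals) for vals in zip(*blocks))

data_drives = [b"movie-A!", b"movie-B!", b"movie-C!"]   # hypothetical per-drive blocks
p = parity(data_drives)

# Drive 1 dies: rebuild its contents from the survivors plus parity.
survivors = [d for i, d in enumerate(data_drives) if i != 1]
rebuilt = parity(survivors + [p])
assert rebuilt == data_drives[1]

# If two drives die, only their contents are lost; the other drives
# still hold complete, independently readable filesystems.
```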

I use it and it works great for what it is. It's certainly not as fast as true hardware RAID, but it's a lot safer and cheaper. Unraid has some pretty relaxed hardware requirements. You can easily build a 10 terabyte system for under $1500. You are going to have to work to make a big server quiet no matter what the hardware is. It's easier with Unraid, though, since it can spin down drives when not in use, so your cooling requirements are not as high. Plus you can spend a little extra on quieter parts since you're under your budget.

I like AMD for Unraid since it's generally cheaper than Intel and has plenty of power for Unraid. I'm using a Gigabyte 785G board with a low-powered Athlon chip. That gives you 6 SATA ports, which is enough for 10 terabytes using 2 terabyte drives. When you run out of ports, add a SuperMicro AOC-SASLP-MV8 for eight more. Danny Bui made some good recommendations for a case and power supply (good suggestions overall, just overkill for Unraid), and you wouldn't need that many drives to start since you don't have to worry about folder duplication like with WHS.

Many different options out there. This is the one I use and I think it's well suited to your needs. Good luck.
 
I like AMD for Unraid since it's generally cheaper than Intel and has plenty of power for Unraid. I'm using a Gigabyte 785G board with a low-powered Athlon chip. That gives you 6 SATA ports, which is enough for 10 terabytes using 2 terabyte drives. When you run out of ports, add a SuperMicro AOC-SASLP-MV8 for eight more. Danny Bui made some good recommendations for a case and power supply (good suggestions overall, just overkill for Unraid), and you wouldn't need that many drives to start since you don't have to worry about folder duplication like with WHS.

Many different options out there. This is the one I use and I think it's well suited to your needs. Good luck.

I don't want to explain again why RAID 4 is not appropriate for a 4TB+ system since we have lots of threads on the subject that one can search. Frankly, FreeNAS is fairly easy if spending $100 on an OS (WHS) to run a $3000 system is not an issue.

What I will say is that AMD systems, while cheap, also mean you are buying a cheap consumer motherboard unless you start to spend $250-300 and up for a decent server board. There is a HUGE difference in stability, component quality, and features. If you have never used a quality Super Micro/Tyan/Intel server motherboard with IPMI 2.0, KVM-over-IP, dual (or more) onboard Intel NICs, etc., you are missing out. In a system like the one you envision, there is little to gain by cutting corners to save a few dollars. If your goal was a three-disk system, that would be a different story entirely, but build it right the first time using proven hardware/software and you will be glad you never had to experience the alternative.
 
I don't want to explain again why RAID 4 is not appropriate for a 4TB+ system since we have lots of threads on the subject that one can search. Frankly, FreeNAS is fairly easy if spending $100 on an OS (WHS) to run a $3000 system is not an issue.

What I will say is that AMD systems, while cheap, also mean you are buying a cheap consumer motherboard unless you start to spend $250-300 and up for a decent server board. There is a HUGE difference in stability, component quality, and features. If you have never used a quality Super Micro/Tyan/Intel server motherboard with IPMI 2.0, KVM-over-IP, dual (or more) onboard Intel NICs, etc., you are missing out. In a system like the one you envision, there is little to gain by cutting corners to save a few dollars. If your goal was a three-disk system, that would be a different story entirely, but build it right the first time using proven hardware/software and you will be glad you never had to experience the alternative.

I came to this forum from the HTPC forum on AVS. There, Unraid is a very popular option because it works well for serving media files. It's not true RAID 4. Yes, it is similar, but since all the files are intact on each individual disk, I think that's a significant safety net.

For every story about someone's successful RAID setup there is a horror story about someone's RAID crashing and losing everything. That's why we all recommend backups. With WHS we use disk duplication, so we use twice as many disks. With RAID 5 or RAID 6 we spend large amounts of money on a RAID card and have to use all matching disks. Then we have to find some way to back this up, either with another server or external disks. I think for media files, for which you have a backup hard copy, Unraid offers a decent compromise. If you manage to lose two disks in the 8 hours or so that it takes to rebuild parity (compared to days for some cards), you only lose a small part of your data. If the parity drive fails along with another drive, you only lose one drive's worth of data. Compared to WHS you have a degree of fault tolerance with much less expenditure and use of disk space. Also, for someone whose plan is to expand as needed, this allows them to buy disks that are at the best $/GB point. Now we're buying 2TB drives; maybe next year it's going to be 3 or 4. It gives you an extra degree of flexibility.

As for the hardware, I've got nothing against good server hardware when it's called for. I said that Danny Bui suggested a very nice system with some quality hardware; I just thought it was overkill. I ran Unraid just fine on a 2GHz P4, and it also runs just fine on Atom CPUs. It's just a simple file server. We're talking about serving up what, a couple of Blu-ray streams at most, nothing at all demanding. I think if you buy high-quality consumer hardware it will work very well for Unraid. If I was running a large array with 2008 R2 I would definitely go with Supermicro. I'm in fact saving up to buy a Supermicro board to run 2008 R2 as an experimental box. My media will still be on Unraid though, because it works.

You just don't need all the features that you talk about for Unraid. I set up my system and now I never need to access the BIOS; I very rarely, if ever, need to access anything other than the web GUI. It also can't take advantage of that second NIC, so what's the point? I think you're just spending extra for things that won't get used. I'd rather spend my money on more Blu-rays or drives than on things I can't use. I do understand that in the end the price of the drives is going to dwarf the price of everything else; I just don't like wasting money. To each their own.
 
I have this 8 x 1.5TB DAS enclosure on a RAID 6 Adaptec card, but it's already almost full with about 1,000 DVDs.
Pretty compact, but I have 1,000 more SD DVDs to rip, and expect many more BDs to come and take even more space, so I'll be looking at an upgrade next year, when WHS 2.0 and 3TB disks will be available. Note that I store full rips of the DVDs, with all the extras, audio tracks and subtitles, not compressed files of the movies only. So that's 40GB for District 9, instead of 24GB for just the movie, or 4GB for the BD rips you can usually find on torrent sites.

Expandability is also a top requirement for me. SAS expanders (for SATA drives) seem to be the way to go, judging from other threads, but I am not ready for full-length (25") racks when hot-swappable hard drive trays are only 7" long. Plus, industrial-looking racks don't really fit a home in either size or aesthetics, even if you hide them in a closet or garage. It's worse if you want to have your backups in a separate room, which would require two sets of these huge racks.

Ideally, I'd like to find a chassis in tower format, deep enough to fit hot-swappable disk drive trays on both front and rear, large enough to fit a miniATX mobo at the bottom and 2-3 rows of disk trays, with prefitted multi-lane SFF-8087 cables for all drives so you don't have to connect each drive individually. A 3.5" drive slot is about 1x4", so a full tower could probably fit 45-60 hot-swappable trays. Then it starts making sense to have space for a motherboard when you have so many drives in the box, compared to other chassis that host only 24 drives. So. Who's building it? ^-^

I'd forgo the previously mentioned SAS and 1TB drives; SATA is just as good, and 1TB is inappropriate for video storage.
I initially wanted to expand into RAID 60, but I'll play with WHS 2.0 Beta Preview, FreeNAS and sub.mesa's FreeBSD with ZFS NAS to see what's compatible with WHS.

ReKK, how much space are you using already, and for how many movies? Starting with 10TB of usable space seems a bit underestimated.
 
I'd forgo the previously mentioned SAS and 1TB drives; SATA is just as good, and 1TB is inappropriate for video storage.

That depends on your tolerance for data loss (as well as your budget).

With a single 2TB drive, at any point in time you have 1.5% chance of not being able to read all of the data on a single drive. Since the BERs are the same, with a 1TB drive, the chance of an error drops by half to 0.8%.

Let's look at the probability of not being able to read all sectors across a single sub-array, with 20TB total size, along with MTTDL (mean time to data loss) using four different approaches:

SATA = 10^15 BER, 1.2M hour MTBF (note that some low-cost consumer drives only have a 10^14 BER and a 0.75M hour MTBF)
SAS = 10^16 BER, 1.6M hour MTBF

1. 10 * 2TB SATA, in RAID-0 or JBOD: 15%, MTTDL = 11 yrs
2. 20 * 2TB SATA, in 5*(2+2) RAID-60: 6%, MTTDL = 600,000 yrs (86 yrs for RAID-10)
3. 40 * 1TB SATA, in 10*(2+2) RAID-60: 3%, MTTDL = 3,200,000 yrs
4. 40 * 1TB SAS, in 10*(2+2) RAID-60: 0.3%, MTTDL = 12,500,000 yrs

So, sure, for a 20TB array, if you're willing to accept five times the probability of a drive error per sub-array, or one fifth the achievable MTTDL with RAID-60, then SATA 2TB drives are fine compared to SATA 1TB.

For MTTDL, you might think that even 11 yrs sounds like a long time -- but keep in mind that's a statistical "mean" number. If you've ever lost a disk, you know what I mean: drives typically have a 1.2M hour mean time between failures (MTBF), which is about 137 yrs, yet some die within hours or days. That's why the huge MTTDL numbers are useful; they reduce the statistical chance of data loss in human-scale timeframes to close to zero.
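
For anyone who wants to sanity-check those percentages, here is the rough back-of-the-envelope math behind them (my own sketch, assuming one unrecoverable error per BER bits read and independent errors; the MTTDL figures additionally depend on MTBF and rebuild-time assumptions that aren't reproduced here):

```python
import math

def p_read_error(capacity_tb, ber):
    """Probability of at least one unrecoverable read error when reading
    capacity_tb terabytes, with a bit error rate of one error per `ber` bits."""
    bits = capacity_tb * 1e12 * 8            # decimal TB -> bits, as drive vendors spec it
    return 1 - math.exp(-bits / ber)         # ~= 1 - (1 - 1/ber)**bits

print(f"{p_read_error(2, 1e15):.1%}")        # one 2TB SATA drive            -> ~1.6% (the ~1.5% above)
print(f"{p_read_error(1, 1e15):.1%}")        # one 1TB SATA drive            -> ~0.8%
print(f"{p_read_error(20, 1e15):.1%}")       # 20TB of 2TB SATA, RAID-0/JBOD -> ~15%
print(f"{p_read_error(8, 1e15):.1%}")        # one 2+2 sub-array, 2TB SATA   -> ~6%
print(f"{p_read_error(4, 1e15):.1%}")        # one 2+2 sub-array, 1TB SATA   -> ~3%
print(f"{p_read_error(4, 1e16):.1%}")        # one 2+2 sub-array, 1TB SAS    -> ~0.3%
```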
 
Hey Duff - I did a quick read into unRAID. It sounds pretty interesting. I read that one of the huge disadvantages is the read/write speed. Can you comment on your experience with this?
 
Hey Duff - I did a quick read into unRAID. It sounds pretty interesting. I read that one of the huge disadvantages is the read/write speed. Can you comment on your experience with this?

I can read at 80MB/sec and write at 40 to 45MB/sec. Admittedly that's not as fast as a hardware card, or for writes as fast as WHS, but again, we're talking about media files. It's plenty fast to serve up as many video streams as you'll ever need. The way I look at it, it may take a little while to write the data initially, but then you're done. Media files are static; you're not moving things around all the time. You write them and then leave them alone, so the write speed isn't that important. I've got files that started out on 80GB disks four years ago that have been transferred to bigger disks, and through four different systems as I've upgraded. That's what's important to me. Every TV in my house has an HTPC hooked up to it and I've never had a problem with speed.
 
I'm really interested in unRAID now. Do you have a picture or specs of your setup?
 
My setup is pretty simple; it's also a little old at this point.
Gigabyte GA-MA780G-UD3H ATX motherboard
AMD Athlon 64 X2 5400 Brisbane 2.8GHz 65W
4GB Corsair DDR2 800 memory
Thermaltake W0116RU 750W Modular Power Supply
Norco RPC-4020 Case

I'm also using a couple of those cheap Silicon Image SATA cards for some extra ports, and a mix of Samsung, Toshiba, and WD 1 and 2TB drives. I'd send you a picture, but the way things are set up now all you'd see is a picture of my rack, which isn't very helpful.

Doing it now I'd obviously use some more modern hardware first off. I'd replace the power supply with a Corsair like Danny Bui recommended. I'd also go with the Supermicro SATA card; it wasn't supported when I built the system. The Si cards are the slowest part of my system, but they do work and they were cheap. The other consideration is between the RPC-4020 and the RPC-4220. The 4020 is cheaper, but the 4220 has SAS connectors to the backplane for easier cabling. For drives, as long as you stay away from the advanced format drives, almost anything will work. I'd go with the fastest drive you can for the parity drive, since that's what is going to determine your write speed; if you want to go with 5400 RPM green drives for the rest, that's fine.

If you want some more hardware recommendations, I'd read up on the Unraid forums; they've got lists of hardware that's proven to work. I've built a bunch of systems with Gigabyte boards without any problems, so I like them. You just have to be careful that the BIOS backup to the hard drive is disabled; that can cause some problems, which is why they're not popular on the forums. Mine came with that option disabled, so it didn't cause me any problems. The forums are very helpful, so if you've got any questions that I can't answer, that would be the place to go. Good luck.
 
Doing it now I'd obviously use some more modern hardware first off. I'd replace the power supply with a Corsair like Danny Bui recommended.
Actually, the Thermaltake PSU you have now is based on the same CWT platform as the Corsair 750TX. So their performance is very similar. Then again, for a new setup, yeah I'd recommend the Corsair over the Thermaltake.

The other consideration is between the RPC-4020 and the RPC-4220. The 4020 is cheaper, but the 4220 has SAS connectors to the backplane for easier cabling.

Also note that the 4220 is quite a bit louder than the 4020, since there's less ventilation, IIRC.
 
Fairly loud I guess?

Here's the setup I was thinking of:

$115 - Intel Core i3-530 CPU
$190 - Supermicro MBD-X8SIL-F-O Intel 3420mATX Motherboard
$113 - Kingston 2 x 2GB ECC Unbuffered DDR3 1333 RAM
$200 - 2 x SuperMicro AOC-SASLP-MV8 PCI-Ex4 8 Port SATA Controller Card
$64 - 4 x 3ware SFF-8087 to Multi-lane SATA Forward Break-out Cable
$1200 - 10 x Western Digital Caviar Green WD20EARS 2TB SATA HDD
$110 - Corsair 750TX 750W PSU
$300 - NORCO RPC-4020 4U Rackmount Server Case
---
Total: $2,292 plus tax and shipping.

I have a very similar setup to this using WHS, works well. One thing though, I wouldn't recommend the AOC-SASLP-MV8. I had trouble with drives mysteriously dropping out of the array (there is a whole thread discussing problems with this card). I replaced them with an Atto ExpressSAS H30F (16 SATA ports on one PCIe card). Also, I am using Samsung 2TB drives. Been solid for months.

--srengr
 
Actually, the Thermaltake PSU you have now is based on the same CWT platform as the Corsair 750TX. So their performance is very similar. Then again, for a new setup, yeah I'd recommend the Corsair over the Thermaltake.

Also note that the 4220 is quite a bit louder than the 4020, since there's less ventilation, IIRC.

Thanks for the info on the power supply. That was always one aspect of the build I wasn't real happy about. I got a good deal when I was a little newer at this stuff.

I hadn't heard anything about the difference in noise levels between the cases. I know mine isn't too quiet, but I can't tell with all the other fans I have going around it. Good to know.
 
That depends on your tolerance for data loss (as well as your budget).

With a single 2TB drive, at any point in time you have 1.5% chance of not being able to read all of the data on a single drive. Since the BERs are the same, with a 1TB drive, the chance of an error drops by half to 0.8%.

Let's look at the probability of not being able to read all sectors across a single sub-array, with 20TB total size, along with MTTDL (mean time to data loss) using four different approaches:

SATA = 10^15 BER, 1.2M hour MTBF (note that some low-cost consumer drives only have a 10^14 BER and a 0.75M hour MTBF)
SAS = 10^16 BER, 1.6M hour MTBF

1. 10 * 2TB SATA, in RAID-0 or JBOD: 15%, MTTDL = 11 yrs
2. 20 * 2TB SATA, in 5*(2+2) RAID-60: 6%, MTTDL = 600,000 yrs (86 yrs for RAID-10)
3. 40 * 1TB SATA, in 10*(2+2) RAID-60: 3%, MTTDL = 3,200,000 yrs
4. 40 * 1TB SAS, in 10*(2+2) RAID-60: 0.3%, MTTDL = 12,500,000 yrs

So, sure, for a 20TB array, if you're willing to accept five times the probability of a drive error per sub-array, or one fifth the achievable MTTDL with RAID-60, then SATA 2TB drives are fine compared to SATA 1TB.

For MTTDL, you might think that even 11 yrs sounds like a long time -- but keep in mind that's a statistical "mean" number. If you've ever lost a disk, you know what I mean: drives typically have a 1.2M hour mean time between failures (MTBF), which is about 137 yrs, yet some die within hours or days. That's why the huge MTTDL numbers are useful; they reduce the statistical chance of data loss in human-scale timeframes to close to zero.

Just as an FYI on this one, the BERs that you are quoting are quite high. Especially since SATA drives include the heroic error recovery to reach those numbers, which is an instant kill for a lot of RAID controllers.
 
I have a very similar setup to this using WHS, works well. One thing though, I wouldn't recommend the AOC-SASLP-MV8. I had trouble with drives mysteriously dropping out of the array (there is a whole thread discussing problems with this card). I replaced them with an Atto ExpressSAS H30F (16 SATA ports on one PCIe card). Also, I am using Samsung 2TB drives. Been solid for months.

--srengr

Are people having problems with Linux too, or just with Windows? Unraid runs on a stripped-down version of Slackware. I haven't heard any complaints on the Unraid forums; I had heard some complaints on Windows though.
 
@pjkenned The BERs mentioned seem reasonable; the Seagate XTs have an even worse 1/10^14 bit error rate.

@AceNZ BTW, not to be picky, but your notation should be either 10^-15 or 1/10^15, not 10^15.
What I mean when I say 1TB disks are not appropriate for video storage is that the need for space is so huge that your only choice is to go for the biggest drives.
That 600,000 years MTTDL seems good enough for me for video streaming. So the parity data is used only to rebuild disks, not as a checksum for bit errors?

Having twice as many smaller drives also means twice as many chassis, which adds to the reliability stats and to the cost.
 
Just as an FYI on this one, the BERs that you are quoting are quite high. Especially since SATA drives include the heroic error recovery to reach those numbers, which is an instant kill for a lot of RAID controllers.

What do you mean by "quite high"? As I said, consumer drives are typically 10x worse. Drives like the Caviar Black or the Raptor are 10^15; consumer drives include the Barracuda XT and 7200.12, which are 10^14.

And yes, the issue with "heroic error recovery" -- aka deep recovery cycle -- is something I tried to point out before. It's an issue that is addressed in Enterprise drives by limiting the time spent trying to recover data, which allows a RAID controller to do the work instead (while also preventing array drop-outs).

@AceNZ BTW, not to be picky, but your notation should be either 10^-15 or 1/10^15, not 10^15.

True. For the other pedants out there: I use 10^N as a shorthand for 1:10^N, 1 in 10^N, 10^-N or 1/10^N.

What I mean when I say 1TB disks are not appropriate for video storage is that the need for space is so huge that your only choice is to go for the biggest drives.

"Only choice"?

So the parity data is used only to rebuild disks, not as a checksum for bit errors?

With parity RAID, the parity data is only used on reads when a non-parity drive reports a detectable error or if a drive is completely missing. For normal, successful reads, the parity data is not used -- which means that if a drive returns bad data without reporting an error (very possible, as outlined in my previous post), it won't be corrected.

However, parity can optionally be used to scrub the data for single-drive errors (weekly scrubs are a good idea) by reading both the data and the parity and checking it for consistency -- unlike with mirrors (RAID-1 or RAID-10), which can't be used to check for errors (with two exact copies of the data, how do you know which one is right?).
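
As a rough sketch of what a scrub does (a simplified illustration of the idea, not any particular controller's implementation), assuming simple XOR parity: read each stripe, recompute parity from the data blocks, and flag stripes where the stored parity disagrees:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR the corresponding bytes of every block."""
    return bytes(reduce(lambda a, b: a ^ b, vals) for vals in zip(*blocks))

def scrub(stripes):
    """Each stripe is (data_blocks, stored_parity). Return the indices of
    stripes whose recomputed parity doesn't match what's stored on disk."""
    mismatched = []
    for i, (data_blocks, stored_parity) in enumerate(stripes):
        if xor_blocks(data_blocks) != stored_parity:
            # With single parity we know something in the stripe is wrong,
            # but not which block, unless a drive also reports an error.
            mismatched.append(i)
    return mismatched

clean_stripe   = ([b"\x01\x02", b"\x03\x04"], b"\x02\x06")
corrupt_stripe = ([b"\x01\x02", b"\xff\x04"], b"\x02\x06")   # silent corruption in one data block
print(scrub([clean_stripe, corrupt_stripe]))                 # -> [1]
```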

Having twice as many smaller drives also means twice as many chassis, which adds to the reliability stats and to the cost.

Or a single chassis that holds twice as many drives. Chassis are generally much more reliable than drives, since they have fewer moving parts. Although it's possible to lose data with a chassis failure (which is why backups are a good idea), it's relatively unlikely compared to a drive failure.

Yes, more drives (or more reliable drives) does add to cost; there's no free lunch.
 
@pjkenned The BERs mentioned seem reasonable; the Seagate XTs have an even worse 1/10^14 bit error rate.

Actually, looking at real populations, 10^14 (using standard shorthand) is probably more realistic in real-world use. For example, the CERN study is pretty well known in this area where they got something like 1/3 of the 10^14 spec after writing something like 2.4PB onto the disks. The question, of course, is whether this is due to RAID cards (3Ware in that case) or the disks, but either way a disk needs a controller making the effective BER much lower than 10^14.

And if we weren't there already... we have solidly hit the point where this conversation got WAY too nerdy for HTPC media storage since video files tend to do OK with small amounts of data corruption.
 
Actually, looking at real populations, 10^14 (using standard shorthand) is probably more realistic in real-world use. For example, the CERN study is pretty well known in this area where they got something like 1/3 of the 10^14 spec after writing something like 2.4PB onto the disks. The question, of course, is whether this is due to RAID cards (3Ware in that case) or the disks, but either way a disk needs a controller making the effective BER much lower than 10^14.

There is definitely a difference between drive BER and system BER. Controller cards or CPUs without ECC RAM are both sources of additional errors. A big study by Google found that errors from non-ECC RAM alone are at least an order of magnitude higher than generally assumed (something like 30,000 errors per year, IIRC). Higher net error rates such as 10^14 serve to further reinforce the need for addressing error sources and error correction at all tiers in a storage system: drives, controllers, RAM and software / firmware (RAID): a single 2TB drive with 10^14 BER has an astounding 15% chance of not being read error-free; ten 2TB drives brings the probability up to 80%.
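
Those two figures follow from the same approximation as earlier in the thread (probability of at least one unrecoverable error is roughly 1 - exp(-bits read / BER)); a quick check, assuming independent errors:

```python
import math

p_one = 1 - math.exp(-(2e12 * 8) / 1e14)   # one 2TB drive at a 10^14 BER
p_ten = 1 - (1 - p_one) ** 10              # at least one of ten such drives hits an error
print(f"{p_one:.0%}  {p_ten:.0%}")         # -> 15%  80%
```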

And if we weren't there already... we have solidly hit the point where this conversation got WAY too nerdy for HTPC media storage since video files tend to do OK with small amounts of data corruption.

From the OP (emphasis added):

After backing up 100s (if not a few thousand) blu-rays and DVDs, I want to ensure that my system never fails so I don't ever have to re-rip them again cause it will take hours on end.

Let me put it in less nerdy terms:

With RAID-0, if you lose one drive in the volume, you lose the whole volume.
With RAID-1 or RAID-10, if you lose one drive in a mirror and the other drive in the same mirror has a read error during recovery, you can lose the entire volume.
With RAID-5 or RAID-50, if you lose one drive and have an error on any other drive in the same span (sub-volume) during recovery, you can lose the entire volume.
With RAID-6 or RAID-60, if you lose one drive, you can also lose any other drive in the volume during recovery and your data will survive.

Keeping the ideas above in mind:

With current HDD technology, there is a good chance of having an error on a second drive while you are recovering from the first one.

With 2TB drives, you have twice the chance of a drive failure as with 1TB drives, because they contain twice the data, but with the same bit error rate.

In addition, small amounts of the right kind of corruption might be OK if they happen to be in the right place in a video stream. However, it's a much different story if the corruption is in the filesystem's or video file's data structures, or, even worse, if it results in a drive failure. If you poke around on the web, it's pretty easy to find people who have lost large chunks of their HTPC data due to storage system failures of one kind or another. Most people seem willing to accept that risk / cost tradeoff (probably because they don't really understand it); doesn't mean that everyone is or should be willing to do the same.
 
Come on, I am not a pedant, just confused, since Seagate, Hitachi, and Samsung all use the correct "1/" or "1 in". Leaving the "-" out is not exactly shorthand; it just makes it harder for readers.
I was not sure at first if my drive's "read error rate" was the same as your BER, or which of 10^15 or 10^16 is better. Well, at least I won't forget about that BER for long! ^-^

Thanks for the information on the regular data scrubbing for RAID arrays, that seems like the right thing to do, especially when disk drives keep getting denser. Would be even cooler if it could run during idle times automatically. Stupid manufacturers: BER would not happen if they painted the electrons in different colors! ^-^
 
Are people having problems with Linux too, or just with Windows? Unraid runs on a stripped-down version of Slackware. I haven't heard any complaints on the Unraid forums; I had heard some complaints on Windows though.

I had problems with the SASLP-MV8 cards under both Linux and WHS.

-srengr
 
Consumer hardware is reasonable for an HTPC. It tends to be reliable for 5+ years, by which time you might want a different system anyway. It also tends to be quiet in that usage. $600-1000 plus the cost of drives seems to be a reasonable price. Paying someone to assemble it might add a few hundred.

24TB is only 12 drives. Without RAID only a couple are running at a time, so the system should be quiet. (I have 12 drives in my HTPC - 8 SATA, 3 IDE, 1 IDE CDV player. There is almost no noise.) Since only a few are running at any time, failure is not a daily worry. Back up whole drives and you should be OK. There is no reason for RAID.

24TB is a lot of space. At 25GB per movie that is 1000 movies, but most people are happy with 4GB per movie so 6000 movies. I doubt if anyone has the interest to watch even 1000 movies.
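
A quick capacity check along those lines (illustrative arithmetic only; real rip sizes vary a lot by title and encode):

```python
def movies_that_fit(total_tb, gb_per_movie):
    return int(total_tb * 1000 / gb_per_movie)   # decimal TB and GB

print(movies_that_fit(24, 25))   # full BD-sized rips -> 960, i.e. roughly 1000
print(movies_that_fit(24, 4))    # ~4GB encodes       -> 6000
```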
 
I have very little storage physically in my HTPC. I keep most everything on my file server which has a drive mapped to the HTPC. I've found this to work better for me than anything else.
 
I had problems with the SASLP-MV8 cards under both Linux and WHS.

-srengr

Thanks for the Linux feedback. So what are the options then? The upgraded Supermicro card that everyone talks about for OpenSolaris, and its Intel clone? I don't know if it's even supported by Unraid. I can't see spending $400 on a RAID card just for ports.
 
Thanks for the Linux feedback. So what are the options then? The upgraded Supermicro card that everyone talks about for OpenSolaris, and its Intel clone? I don't know if it's even supported by Unraid. I can't see spending $400 on a RAID card just for ports.

I recommend the Atto Tech card. After spending weeks of wasted time trying to get the SASLP-MV8 cards to work correctly, all I can say about cost is you get what you pay for...

--srengr
 
Not to hijack this thread, but I am also interested in the same things as Adidas.

I've been watching these forums for a few months now as my BD collection has grown. I would like to make a HTPC & file server in one to be used by my roommate and I.

To answer the original questions:
1) Is this a fresh from the ground up build? As in you won't be reusing any parts?
I currently have an e6400 and a Corsair 750W that I would like to use in this build.
I plan to buy a COOLER MASTER Centurion 590 ($60) and a SUPERMICRO 5-bay hot-swappable cage ($100). I do believe I will have to Dremel out the little 5 1/4" dividers in the case to fit the SM 5-in-3.
The mobo Danny suggested won't work for me. Maybe I should just scrap my e6400? If I do go with an i3 530, I doubt that board would work for me, since I will need an x16 PCIe slot for a video card. Also, can I just use onboard SATA and upgrade to an expander at a later date? I currently need 5 ports, 1 for a 500GB OS drive.
Is 4GB of RAM enough?
Cheap 1080p video card?

2) How much storage space do you need initially? i.e 2TB?
I would like to start with 4 2TB hard drives. I would like each drive to have a mirror essentially leaving me with 4TB pre-formatted space.

3) how much storage space do you want eventually? i.e 24TB?
This case will allow for 3 SM 5in3's which can grow to 15 drives and is plenty for my needs.

4) Is noise a factor at all?
Somewhat but I think this build isn't very heavy duty.
I would prefer if the drives would spin down when not in use.

5) Where do you live? As in, what state and/or country?
US

6) When do you plan on building/buying the server(s)?
This month

7) Do you have any familiarity at all with Linux or FreeBSD?
I used to play with Debian & Gentoo years ago. I would prefer a Linux build. ZFS? The OS will have to support those 4K sectors in the WD EARS.

Thanks for your help.
 
I would like to make a HTPC & file server in one to be used by my roommate and I.
It's been a general consensus in the HTPC subforum that combining the HTPC and file server is not a smart idea.
http://hardforum.com/showthread.php?t=1419906
The mobo Danny suggested won't work for me. Maybe I should just scrap my e6400?
If I do go with an i3 530, I doubt that board would work for me, since I will need an x16 PCIe slot for a video card. Also, can I just use onboard SATA and upgrade to an expander at a later date? I currently need 5 ports, 1 for a 500GB OS drive.
4GB ram enough?
Cheap 1080p video card?

I say scrap your E6400. It's gonna limit you to older hardware that costs the same as newer hardware. The i3 530 has an onboard video chip and that motherboard clearly has a video output in the form of a VGA port. So you don't need a video card.

By expander, did you mean a SAS expander or did you really mean another storage controller card? 4GB of RAM is enough.
 
Well, I'll read up on that thread, but I don't understand why combining the two (HTPC and FS) is a bad idea, especially when I doubt more than one user will be connected at a time. It's just not economical to have single-function computers lying around.

The board didn't work for me because it only has VGA and no x16 PCIe slot. The board is fine if the computer is just a file server.

I do agree with spending the $100 and going with a more current processor.

By expander I meant the SATA controller cards you suggested; I'm not sure if by using the term "expander" I was saying something else. Essentially, I think I can get away with using onboard SATA initially, and then add in a controller card when I move to more drives.
 