8TB+ Storage

Jeffman

Hey guys! Looking for some ideas on what you would do to build 8TB+ of storage.

Background: I have a 2TB raid setup in my machine, and an 8TB WHS (Phenom II x4 and 4GB of DDR2 in an Antec case, I know it's overpowered) that I'm currently ripping my DVDs to. Eventually I would like to rip my blu-rays as well, which is why it's 8TB.

My biggest concern is backing up the 8TB on the WHS.

What I want to do is either build one box that will just do a RAID or something and add another 8TB to it, or build a completely separate second box that will serve as a backup to the WHS. I'm open to all options, but money is a factor. I'd like to stick to the $600 range for all of this (that price includes the new HDDs). I could sell my current WHS hardware and my four 2TB drives, and upgrade to something like six 3TB drives in a different box. That would give me more funds for the build. I also have one 2TB and one 3TB external hard drive that I won't need when this is complete. I could rip the cases apart and use those drives, or sell them to fund the project.

This does not need to be a NAS. I can just use eSATA or USB 3.0 on my main rig, and share it all. I would almost prefer this method, actually.

What are my best options? Another 8TB box? A bigger 16TB box with mirroring?
 
comcast 250gb cap disagrees with you

Agreed. I'm on Charter, I don't know what my cap is.

I appreciate the suggestion, but I want it to stay local.

I really want something that won't be a pain to fix/rebuild if a drive fails. For this, I'm thinking 2 different arrays and software to keep them both mirrored. If one fails, I can rebuild it while still running the other one, and then have it mirror again when the rebuild is complete.
 
comcast 250gb cap disagrees with you
your bandwidth cap is your problem to manage. you're going to spend more than $4 a month to run an active mirror of your video files.

you have a limited ability to back this stuff up or keep it available.

1) run triple parity via zfs or something else that can manage triple parity. if you do this with 5400rpm low power drives this is your least expensive option.

2) LTO5 tape. Problem here is if you're backing up already compressed video you're not going to get the advertised 3TB compressed capacity; count on the 1.5TB native capacity per tape. The up front costs here are likely going to be higher than a mirrored setup, but in the long run it will cost less since you aren't powering your tapes constantly. Tape is also extremely resilient, presuming you keep your tapes stored relatively safely.

3) a set of mirrored disks. this doubles your initial cost and requires you to grow the mirrored capacity along with your live data capacity. it also either doubles your $ spent on power (if the mirror stays spinning) or increases your $ spent on drives (if you spin it down, since parking and restarting the heads wears the drives out sooner).

4) lots of Blu-Ray disc backups. Cheap but a PITA to manage. Media stores well but you have to buy good blanks in the first place and you're looking at 20 discs per TB to run a backup.

5) cloud backup. super cheap, highly available. bandwidth caps suck balls, but most cable providers have the option of upgrading to business class, which removes the caps. now you're paying ~$100 a month for internet, though, so you have to account for that in your cost eval.

there is no inexpensive solution to backing up terabytes worth of data, there just isn't. this is why you see the large cloud guys making heavy use of redundant nodes and/or replication. they have too much data to back up. they would go broke backing it all up, so instead they mirror/replicate it around and make sure all their storage is making some profit at all times.
 
1) run triple parity via zfs or something else that can manage triple parity. if you do this with 5400rpm low power drives this is your least expensive option.

I have never heard of this. Is there more information you can give me on it? The 4 2TB drives in my current WHS are 5900RPM drives, and they haven't been a problem.

2) LTO5 tape. Problem here is if you're backing up already compressed video you're not going to get the advertised 3TB compressed capacity; count on the 1.5TB native capacity per tape. The up front costs here are likely going to be higher than a mirrored setup, but in the long run it will cost less since you aren't powering your tapes constantly. Tape is also extremely resilient, presuming you keep your tapes stored relatively safely.

I used to manage tape drives, and actually completely forgot about them. It's a great idea, but I don't want to drop that much up front.
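For scale, the raw cartridge count is easy to figure. A quick sketch, assuming LTO5's 1.5TB native capacity and no compression gain on already-compressed video:

    # LTO5 cartridge count for a full backup of 8TB of incompressible data.
    import math

    tapes = math.ceil(8 / 1.5)               # 1.5TB native per LTO5 tape
    print(f"{tapes} tapes per full backup")  # -> 6 tapes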

3) a set of mirrored disks. this doubles your initial cost and requires you to grow the mirrored capacity along with your live data capacity. it also either doubles your $ spent on power (if the mirror stays spinning) or increases your $ spent on drives (if you spin it down, since parking and restarting the heads wears the drives out sooner).

This is likely what I'll be looking at. I'm already paying to run the WHS, so adding a few drives to that wouldn't be a huge deal. The bad thing is I'll need a new board with more SATA ports or an add-in SATA card, as well as a new power supply and a bigger case. Add that to 4 new drives so I can mirror, and I'm looking at $800-$1000. I'd like to keep it lower if I can.

4) lots of Blu-Ray disc backups. Cheap but a PITA to manage. Media stores well but you have to buy good blanks in the first place and you're looking at 20 discs per TB to run a backup.

Also a great idea, since all I'd need to buy is a blu-ray burner and then some blank discs. Probably the lowest up front costs.
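For rough disc counts, a quick sketch (25GB and 50GB are the nominal single and dual layer blank capacities, and the full 8TB is assumed backed up):

    # Back-of-envelope disc count for a full 8TB Blu-ray backup.
    # Nominal capacities; real usable space per disc is slightly less.
    library_tb = 8
    gb_per_tb = 1000

    for name, disc_gb in [("BD-R single layer", 25), ("BD-R dual layer", 50)]:
        discs = library_tb * gb_per_tb / disc_gb
        print(f"{name} ({disc_gb}GB): {discs:.0f} discs")

    # BD-R single layer (25GB): 320 discs
    # BD-R dual layer (50GB): 160 discs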

5) cloud backup. super cheap, highly available. bandwidth caps suck balls, but most cable providers have the option of upgrading to business class, which removes the caps. now you're paying ~$100 a month for internet, though, so you have to account for that in your cost eval.

I really would love to upgrade to business class internet. I just don't want to drop that much on internet. Over a 2 year period, I'd be looking at around $1500 with the increase. If that's what I'm looking at I'm better off buying the equipment and running it.

Thanks for all the input, you really know your stuff!
 
triple parity means 3 of the drives out of the array are used for parity. meaning, you can sustain 3 drive failures before you lose your data. it is little more than raid 6 with one more distributed parity drive.

the tradeoff here is you're spending $ and effectively 'losing' 3 drives. however, this can be an advantage: instead of running, say, two 4 disk mirrors you can run an 8 disk triple parity array and actually gain usable space.

now, you can run raid 50 across those same 8 disks in two 3+1 sets and have 6 total data drives, but you can only sustain the loss of a single drive in each raid 5 set.
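To put rough numbers on those trade-offs for 8 x 2TB drives, a quick sketch (nominal capacities; "survives" means the guaranteed worst case, not the lucky case):

    # Usable space vs. guaranteed fault tolerance, 8 x 2TB drives.
    size_tb = 2
    layouts = [
        # (name, data drives, worst-case failures survivable)
        ("two 4-disk mirrors",    4, 1),  # a 2nd loss can hit the same pair
        ("raid 50, two 3+1 sets", 6, 1),  # a 2nd loss can hit the same set
        ("8-disk triple parity",  5, 3),  # any 3 drives can fail
    ]
    for name, data, failures in layouts:
        print(f"{name}: {data * size_tb}TB usable, "
              f"survives any {failures} failure(s)")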

IMO, since raw mind numbing performance should never be anyone's first concern with a home based media setup, triple parity is the best trade off between usable space and "shit, now i have to re-rent all those from netflix" to repopulate my library. the chance of a second or third disk failing while doing the 5+ day rebuild of an 8TB array is actually fairly high, so having the ability to sustain an event like that is important to me.

also, you're not paying $$ to power disks that you aren't in fact 'using', since everything is either actually serving data or protecting it. this is one of the major reasons so many folks are all crazy for ZFS right now. you only rebuild the actual data on disk, and ZFS' raidz3 offers more usable space than traditional hardware triple parity. also, cards that can do triple parity are really expensive, whereas an LSI 9211-8i + 2 SFF-8087 breakout cables + zfs can be done for a fraction of the cost.

do note, 8 drives is not recommended for raidz3. you want 7 or 11. 8 will work, probably perfectly fine for what you need but the performance will be inferior to 7 or 11 drive setups.
 
That sounds pretty cool.

So then I just need to buy the LSI 9211-8i and the breakout cables (there's a package on NewEgg), and 3 more 2TB drives, and I'm set? That seems like too easy an answer to what I was looking for.

I'm guessing that these work together as well? It looks like the LSI 9211-8i only runs up to 8 drives, so if I run 7 and want to go to 11 later, is it a pain? Can I just buy another card and cables and add the drives without any problems?
 
you'll need to use ZFS (openindiana, bsd, nexenta) to make that setup work. windows 8 storage spaces lacks double or triple parity.

as for expanding beyond 8 drives, just slap in another controller or run to an external jbod with a sas expander. the problem with expanders though is they hate sata drives, even sata drives with interposers can be/are problematic.

however, if you have 8 2TB drives you can upgrade those to 3, 4, 5, 6 TB drives or whatever down the road. you have to do this in a serial manner (one disk > rebuild, next disk > rebuild, etc) but once that is done ZFS will expand automatically, as long as the pool's autoexpand property is on.
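That replace-and-resilver loop looks roughly like this. A sketch only: the pool name and device names are hypothetical, and the resilver check just naively scrapes zpool status output:

    # Grow a raidz pool by swapping in bigger disks one at a time.
    import subprocess, time

    POOL = "tank"                                 # hypothetical pool name
    SWAPS = [("ada0", "ada8"), ("ada1", "ada9")]  # hypothetical old -> new devices

    def resilvering(pool):
        # naive: look for an in-progress resilver in `zpool status` output
        out = subprocess.run(["zpool", "status", pool],
                             capture_output=True, text=True).stdout
        return "resilver in progress" in out

    # let the pool grow on its own once the last disk is replaced
    subprocess.run(["zpool", "set", "autoexpand=on", POOL], check=True)

    for old, new in SWAPS:
        subprocess.run(["zpool", "replace", POOL, old, new], check=True)
        while resilvering(POOL):   # strictly one disk at a time
            time.sleep(60)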

i am looking around for a nice case with 10 x 3.5" bays, then I will either use 2 x 9211 or 1 x 9201-16i, then find a nice super low power mobo/cpu. 10 disks in raidz3 will give a usable space of 21TB using 3TB drives. just been slammed lately building out other stuff for work, haven't had time to build my own toys.
 
Second box and MS Synctoy or similar.

I agree. I don't even know if you need raid. If you know Microsoft and don't know OI or other distributions that support ZFS, I'd stick with what you know.

You can technically back up 8TB to just two measly 4TB drives.

You could back that up to a third box, even.
 
8TB of source data that compresses well, sure, 4TB or less is realistic (slow and really cpu intensive, but realistic). 8TB of already compressed video is NOT going to compress down to 4TB though. just isn't going to happen.
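Easy to demonstrate with a few lines, using random bytes as a stand-in for compressed video:

    # Already-compressed data is effectively incompressible.
    import gzip, os

    samples = [
        ("repetitive data", b"\x00" * 1000000),   # compresses extremely well
        ("random data (proxy for compressed video)", os.urandom(1000000)),
    ]
    for name, data in samples:
        ratio = len(gzip.compress(data)) / len(data)
        print(f"{name}: {ratio:.1%} of original size")

    # repetitive data: ~0.1% of original size
    # random data (proxy for compressed video): ~100% (it even grows a hair)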
 
Awesome. That is definitely something I'm going to look into. Thank you a ton for the info!
 
For those of you that think I should get a second box and 4 more drives, what box are you thinking of? I know RAID is probably not necessary; I could run it JBOD and just tell WHS to use it as a backup. I just don't want a crap box. Most of the ones in the $100-$200 range have awful reviews, and I can't afford to spend much more than that.
 
For those of you that think I should get a second box and 4 more drives, what box are you thinking of? I know RAID is probably not necessary; I could run it JBOD and just tell WHS to use it as a backup. I just don't want a crap box. Most of the ones in the $100-$200 range have awful reviews, and I can't afford to spend much more than that.

Just get something like this Mediasonic Probox: http://www.amazon.com/Mediasonic-HF...2?s=electronics&ie=UTF8&qid=1336521158&sr=1-2

I've owned the USB 2.0 version for a while and it's been excellent.
 
First, 5 days is awfully long for a rebuild on only 8TBs.

2nd, the performance difference of 8 disks in Z3 vs 7 or 11 is minimal at best.

You also probably don't need Z3 for only 8TBs. Z2 should be fine. You could consider two 4 drive Z1 VDEVs. This will also speed up your pool a bit.
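For concreteness, a sketch of what the two layouts look like at pool creation time (pool and device names are hypothetical, and you'd run one command or the other, not both):

    # One 8-disk raidz2 vdev vs. two 4-disk raidz1 vdevs. Both give 6 data
    # disks (12TB with 2TB drives); raidz2 survives any 2 failures, while
    # the two raidz1 vdevs survive 1 failure each but stripe writes across
    # vdevs for a bit more speed.
    import subprocess

    disks = [f"ada{i}" for i in range(8)]   # hypothetical device names

    z2_layout = ["zpool", "create", "tank", "raidz2", *disks]
    z1x2_layout = ["zpool", "create", "tank",
                   "raidz1", *disks[:4],
                   "raidz1", *disks[4:]]

    subprocess.run(z2_layout, check=True)   # pick one, e.g. the single Z2 vdev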

Things to keep in mind for building ZFS: you must use ECC memory and a 64-bit processor. If not, you might as well skip it. This isn't a difficult thing anymore, but something to be aware of. You also want to check the HCL (hardware compatibility list) to make sure the parts you get will work with your OS of choice. You will also benefit from Intel NICs.

I don't know your level of tech ability, but managing OpenIndiana and ZFS is a lot different than WHS. Nothing insurmountable, but again something to keep in mind.

I am a huge fan of ZFS, so please don't take any of this to mean I think it's a bad choice. I just want you to have as much info up front as possible.
 
Thanks guys! Keep the awesome info coming.

I am a fairly technical guy, but I like to keep things as easy and low-maintenance as possible. Just because I can figure it out doesn't mean I want to figure it out. I don't mind waiting for rebuilds; overall speed isn't a problem for me. I'd rather have things running well and not have problems with them down the road. If I had to choose between reliability and speed, I would almost always pick reliability.

Possibly needing to look at ECC memory to go along with the ZFS stuff puts me off of that idea a little. For now, anyway. It still sounds like an awesome way to go, but if I need to buy a ton of new parts then maybe I'll either wait on the project, or do it as my next project (when I need even more space).

The easiest thing I can think of is just getting two 4-bay enclosures, running them both with four 2TB drives in JBOD, and backing one up to the other. I could do that with fairly decent speeds through USB 3.0, right? Or am I completely wrong? I like this idea because I can manage the backups myself, and access either one locally. I can share one through my PC, and keep the other one powered off when it's not needed to back something new up.
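If you go that route, the backup pass can be a dead-simple one-way mirror script, run whenever the backup enclosure is powered up. A minimal sketch with hypothetical paths (SyncToy or robocopy /MIR do the same job with more safety checks):

    # One-way mirror: copy new or changed files from SOURCE to BACKUP.
    # Deletions are NOT propagated; paths are hypothetical.
    import shutil
    from pathlib import Path

    SOURCE = Path("E:/media")   # live enclosure
    BACKUP = Path("F:/media")   # backup enclosure

    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        dst = BACKUP / src.relative_to(SOURCE)
        # copy if missing, or if size/mtime say the file changed
        if (not dst.exists()
                or dst.stat().st_size != src.stat().st_size
                or dst.stat().st_mtime < src.stat().st_mtime):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)   # copy2 preserves timestamps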
 
he was talking about moving to 8 2TB drives, originally in a 4 == 4 mirrored setup. with 8 spindles at 2TB each, the chances of a second failure during a rebuild of an array that large with consumer grade drives are high enough that triple parity is very attractive.

if you aren't concerned, hate your data, or are indifferent, certainly there is no reason to burn the usable space on more parity. i've lost music and video stores before though and it sucks. not something i want to entertain again.
 
you could also do quarterly, semi-annual, or single yearly full backups to blu-ray. media libraries don't change 'that' much. sure, maybe you have 30 new movies and a new season of X number of your favorite shows, but running a full backup in January and then an incremental in June isn't too painful. still a boat load of BRDs though.
 
It will be a lot of BRDs, and I don't know if I want to deal with that. Considering I have 1TB of DVDs ripped already, I'm only half done with my DVD collection, and I haven't started on my blu-ray collection yet, it's going to be a lot of BRDs to deal with.
 
For backup:
I'd look at something like Crashplan or, as already mentioned, Backblaze. Don't try and upload the entire library in one month (data caps suck), but spread it out. It may take a few months, but it's doable.

ECC:
Your existing AMD Phenom probably supports ECC RAM. It's worth checking.

Array setup:
I agree that four 2-drive mirrors is a waste of space. It would be faster, but at a cost of half the space. I still don't think an 8 drive Z2 array of 2TB disks is a substantial risk. Z1 absolutely is a huge risk, but Z2 should be fine. Remember that ZFS only rebuilds the used portion of the disk, not a byte-for-byte rebuild like traditional RAID.
 
2) LTO5 tape. Problem here is if you're backing up already compressed video you're not going to get the advertised 3TB compressed capacity; count on the 1.5TB native capacity per tape. The up front costs here are likely going to be higher than a mirrored setup, but in the long run it will cost less since you aren't powering your tapes constantly. Tape is also extremely resilient, presuming you keep your tapes stored relatively safely.

LTO5 is 1.5TB native (uncompressed) capacity, LTO4 is 800GB and LTO3 is 400GB. Older generation drives can be had much cheaper, I personally use LTO3.
 
The reason ZFS is advocated is because it protects your data better than most other solutions. Normal filesystems do not protect your data. Read here for more information on this:
http://en.wikipedia.org/wiki/ZFS#Data_Integrity
It turns out that if you get random bit errors in RAM, then ZFS can in some cases not detect the bit flip that corrupted your data. Thus, ZFS might store corrupt data to disk if you don't use ECC. This vulnerability applies to all filesystems of course, so all PCs should use ECC RAM to be able to detect bit flips in RAM. If ZFS uses ECC RAM, then ZFS will not store corrupt data, because the bit flip will have been corrected by the ECC hardware.

The point is, without ECC RAM, ZFS is not 100% foolproof. However, bit errors in RAM are very rare, so you decide if you want to take the chance.



The only solution on the market today which has 3-disk redundancy is ZFS. Raid-5 allows one disk to fail, raid-6 allows 2 disks to fail, and only ZFS (with raidz3) allows 3 disks to fail.
 
Just get something like this Mediasonic Probox: http://www.amazon.com/Mediasonic-HF...2?s=electronics&ie=UTF8&qid=1336521158&sr=1-2

I've owned the USB 2.0 version for a while and it's been excellent.

My main array is ZFS on Linux (5x 2TB raidz). I have an external enclosure with 2x 3TB (raid 0) for backup. Seriously consider ZFS for both your main array and your backup array. If you don't want Solaris or BSD, feel free to use ZFS on Linux (in development, but very stable).

Get your ZFS array up and going, periodically do a snapshot, and send it to your backup. Then store your backup offsite.

Works like a charm.
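The snapshot-and-send cycle is only a few commands. A sketch of one incremental run, with hypothetical pool, dataset, and snapshot names (the very first run would be a full zfs send, without -i):

    # Incremental ZFS backup: snapshot, then send the delta to the backup pool.
    import subprocess
    from datetime import date

    LIVE = "tank/media"       # hypothetical live dataset
    BACKUP = "backup/media"   # hypothetical dataset on the backup pool
    prev = "2012-05-01"       # last snapshot already on the backup
    snap = str(date.today())  # today's snapshot name

    subprocess.run(["zfs", "snapshot", f"{LIVE}@{snap}"], check=True)

    # pipe `zfs send -i` (changes since prev) into `zfs receive`
    send = subprocess.Popen(["zfs", "send", "-i", f"@{prev}", f"{LIVE}@{snap}"],
                            stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", BACKUP],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()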
 
Man, you guys all know your stuff.

ZFS sounds like it's the best way to go. I'll do a little more research and see if it's something I want to tackle.

Keep the ideas/info coming, if you have anything else to share!

Thanks!
 
You may want to look at FlexRAID; it offers double parity and works very well with WHS.
 
Awesome. Any different recommendations on enclosures if I go external? Looking for $100-$200 range. Does not need to have ethernet, just eSATA or USB 3.0 preferred.
 
I can second the Probox that tisb0b linked. I'm using the USB 2.0/eSATA model for a client over the eSATA connection and really love it. Keep in mind though, if you use eSATA with an external box like that, the eSATA port needs to support port multiplication. Otherwise only one of your disks will show up.
 
I can second the Probox that tisb0b linked. I'm using the USB 2.0/eSATA model for a client over the eSATA connection and really love it. Keep in mind though, if you use eSATA with an external box like that, the eSATA port needs to support port multiplication. Otherwise only one of your disks will show up.

I'm running a Sabertooth X58 in my main rig. I'd need to double check, but I think I'll be okay there. If I plug it into the WHS, I'm pretty sure I'd need a new board or a card that supports it, as that was just a cheap Biostar AM3 board.
 
bla bla bla. this is silly. you're already running a storage server. if you want to run another, you know how to do it. if you dont want to do that, and you want offsite backup, you should subscribe to crashplan. it is great.

most or all of these suggestions are from people who aren't doing what they're suggesting. running a ZFS box as a backup to a WHS box, I mean. you'd need to either have a lot of spare time, or be doing it professionally. maintaining the proficiency necessary to operate such a datacenter at home well for the long term is not something most people here are going to be considering. I can tell because they're also suggesting running ZFS and ECC as a backup to something without ZFS and ECC. waste of time.
 
I don't want offsite backup though.

I'm looking for a suggestion to get 8TB of storage and have it backed up. It doesn't have to involve my WHS; I'll tear that down if there's a better idea. Either I need 2 8TB external arrays, one backing up the other, or just a 16TB mirrored array. I'll take whatever ideas someone has to do this easily and keep it low-maintenance.
 
With ZFS, you should avoid raid cards. Raid cards disturb ZFS, because the card takes control over the disks, while ZFS wants control over the disks itself to detect and correct all errors.

Thus, ZFS is the cheapest choice; no need for extra hardware.
 
Thus, ZFS is the cheapest choice; no need for extra hardware.

I like ZFS too, but this isn't true. The performance required to do z3 or z2 requires processing power and memory, and that's not free. Whether that's done on a hardware card or by the operating system itself, those resources need to be allocated somewhere. Add in a JBOD card, and for ZFS's basic features you'll run pretty close to price parity with any hardware raid solution. You could always skip the JBOD card, but you'll need it, especially in the case of z3. Turn on dedup and, sorry to say, the ZFS build will be thousands more.

Even though that's the case, that's OK. ZFS offers a good number of features that hardware cards lack. However, as I said before those features come at a cost. Nothing in life is free, and this applies to ZFS as well.
 
I like ZFS too, but this isn't true. The performance required to do z3 or z2 requires processing power and memory, and that's not free.
It is true that ZFS needs more cpu and RAM than other filesystems because of all the checksum calculations to detect data corruption (fletcher or SHA-256 on every block), but today's PCs have no problem with CPU or RAM. So the resources needed by ZFS are negligible and you don't have to think about them. For instance, I heard that ZFS takes like 5-10% or so of one core in a quad core cpu. That leaves plenty of power for other stuff. Regarding RAM, ZFS works fine on 1GB PCs; no need for huge amounts of RAM. If you have a Pentium II with 256MB RAM, then you can not run ZFS on that PC. But who has such old computers? Any modern PC has at least a dual core and 2GB RAM. Many modern PCs have a quad core and 4GB RAM.


Add in a JBOD card, and for ZFS's basic features you'll run pretty close to price parity with any hardware raid solution. You could always skip the JBOD card, but you'll need it, especially in the case of z3.
You never need a separate disk controller card (JBOD). Why do you believe this? You can just insert all disks into your ordinary SATA2/SATA3 slots. No need for an extra JBOD card, no matter which config you run: raidz1, raidz2 or raidz3.

In my next ZFS build I am going to use 11 disks in raidz3. Eight of the disks will go into an ordinary SATA disk controller, and the rest of the disks go into my mobo SATA3 slots. I mix freely. I can choose to insert all 11 disks into my mobo, if it has enough SATA3 slots.


Turn on dedup and, sorry to say, the ZFS build will be thousands more.
It is true that deduplication is very costly in terms of RAM. But dedup is not recommended yet, as it is immature on ZFS. No one should use dedup. It is easier to buy more disks.


My original claim is still valid, I think: "no need for extra hardware". You can insert all your disks into ordinary SATA slots. No need to get extra hardware. Thus, ZFS is cheapest in terms of additional investments. No need to invest in extra hardware to use ZFS.
 
It is true that ZFS needs more cpu and RAM than other filesystems because of all the checksum calculations to detect data corruption (fletcher or SHA-256 on every block), but today's PCs have no problem with CPU or RAM. So the resources needed by ZFS are negligible and you don't have to think about them.
OK, run ZFS with 1GB of unbuffered RAM. Sure, it may boot, but the performance impact would be pretty darn apparent with z2 and z3 implementations (if it didn't crash out first). If file servers could be built at the enterprise level with minuscule processors and levels of RAM, don't you think we would do it? You don't think there's a need to save money?

The truth of the matter is that you get what you pay for. You can believe until the cows come home that you can implement ZFS in z2 or z3 without thinking about the cpu/ram requirements, but the chances of a 1GB-of-memory box performing similarly to a server with a hardware raid controller are quite low. I can assure you it won't.

You never need a separate disk controller card (JBOD). Why do you believe this? You can just insert all disks into your ordinary SATA2/SATA3 slots. No need for an extra JBOD card, no matter which config you run: raidz1, raidz2 or raidz3.

Because you'll be port limited using just the MB. Maybe that's the reason.

In my next ZFS build I am going to use 11 disks in raidz3. Eight of the disks will go into an ordinary SATA disk controller, and the rest of the disks go into my mobo SATA3 slots. I mix freely. I can choose to insert all 11 disks into my mobo, if it has enough SATA3 slots.

Yes, I could get some cheap Rosewill SATA controller for 35 bucks. Aside from the performance impact (and yes, there is a performance impact), I prefer a high quality JBOD HBA for a lot of other reasons: reliability, warranty, compatibility, and expandability. You're paying the high cost of a good JBOD controller because most of them offer options far and away above those of some 35 buck SATA controller.

For me that matters. I'm a build it once type of guy. I don't like cracking open a case because i bought some cheap 4 port SATA controller. I would much rather buy a JBOD HBA and have 16 ports with room to expand to more if need be.

It is true that deduplication is very costly in terms of RAM. But dedup is not recommended yet, as it is immature on ZFS. No one should use dedup. It is easier to buy more disks.

Huh? Dedup, no matter what, is costly. It doesn't have much to do with its maturity. Yes, different versions of ZFS perform better than others, but the RAM requirement for dedup is still present even in Solaris. By the nature of its implementation this will always be the case. Mature or not, you'll always need more RAM to implement it.

My original claim is still valid, I think: "no need for extra hardware". You can insert all your disks into ordinary SATA slots. No need to get extra hardware. Thus, ZFS is cheapest in terms of additional investments. No need to invest in extra hardware to use ZFS.

You can always do things on the cheap. There's no denying that. But you'll get what you pay for. The threads are filled with people buying substandard parts because they were cheap and were told they didn't need this or that. Later, they realize that while they were told they could do something with only 2GB of RAM, it runs much worse than they expected.
 
Thanks for everything so far guys, you've been great. I love the debate on whether or not I should do ZFS.
 
crashplan is great. I have about a month to go to finish my 8TB of media and data; it took about a year or so. I do regularly go over my cap, at around 400GB a month or so, but I found unless you use a terabyte or near there they will not contact you about it. well, that depends on your area and the other data usage too, I guess.
 
ZFS doesn't really care how you hook up drives. You can use more expensive HBAs, typically SAS, and they can have some performance benefit. But with his requirements and a Z2/Z3 setup, ultimate performance is not the goal. You could hook up all the drives via USB if you want, it'll work. If you want a lot of ports on one card, great; if you want to spread it out, fine. If you want to change later, fine. As long as the system can see the drives you can import the pool.

And, no, I probably wouldn't recommend 1GB of memory. There are people running it on 2, and 4 these days is cheap. 4 would be more than sufficient for what he wants. Outside that, it's just more caching. That's what the RAM is really for. Unless he wants to turn on dedup, which wouldn't benefit him for the type of workload he's mentioned, 4GB will be fine. And for only 8TB of storage he could probably do it with 16-24GB of RAM, which is also not that expensive at this point, if he really wanted to.

CPU also isn't a concern. He's likely only going to be accessing it from one or two locations simultaneously. Even an Atom is sufficient at this point. Parity calculations aren't that hard to do. Modern CPUs, while general purpose compared to application specific ones on RAID cards, are as fast, and often much much faster. The only thing you really need is 64 bit.

It comes down to what your expected usage is. You can spend a little or a lot. If you need high throughput or tons of storage, you'll spend a lot. If you want low power and quiet, with adequate performance, you can spend a lot less. Heck, if you want to build a supercomputer, LLNL style, you can do that too. ZFS isn't the only way, but I feel, its one of the better ones.

BJ79 - I've had a similar experience with Comcast. I had a month where I transferred 931GB and never heard from them. Comcast is supposed to cap at 250GB.
 
ZFS doesn't really care how you hook up drives. You can use more expensive HBAs, typically SAS, and they can have some performance benefit. But with his requirements and a Z2/Z3 setup, ultimate performance is not the goal.
You could hook up all the drives via USB if you want, it'll work.

It's not about ZFS caring. It's about planning ahead in general. It's also not about ultimate performance, but about informing the OP of what the performance really is, and not saying "it requires no additional hardware" when in reality it does. Computers are forgiving little things, and quite flexible. Just because a USB stick is storage doesn't mean it makes much sense for storing tons of files.

And, no, I probably wouldn't recommend 1GB of memory. There are people running it on 2, and 4 these days is cheap. 4 would be more than sufficient for what he wants. Outside that, it's just more caching. That's what the RAM is really for. Unless he wants to turn on dedup, which wouldn't benefit him for the type of workload he's mentioned, 4GB will be fine. And for only 8TB of storage he could probably do it with 16-24GB of RAM, which is also not that expensive at this point, if he really wanted to.

And that's precisely the use case I'm using: 8TB, not less than that. Your RAM recommendation is correct. That RAM requirement is a lot more than what is required for a file server not running ZFS. That information is precisely the information the OP should walk away with, not putting in 2GB for 8TB of storage. BTW, 16GB-24GB of ECC will run anywhere from $159-$230+; that's not cheap, that's the cost of an i5 2500.

CPU also isn't a concern. He's likely only going to be accessing it from one or two locations simultaneously. Even an Atom is sufficient at this point. Parity calculations aren't that hard to do. Modern CPUs, while general purpose compared to application specific ones on RAID cards, are as fast, and often much much faster. The only thing you really need is 64 bit.

Really? Mind telling me how you're going to get 16-24GB on an Atom processor? And that's aside from the fact that most consumer Atoms don't support ECC either. That's probably the most expensive route bar none. But if you want to make the OP purchase a server from Supermicro so he can have decent performance, instead of buying a better processor that would support more RAM, be my guest. I think the processor matters here for this setup.

It comes down to what your expected usage is. You can spend a little or a lot. If you need high throughput or tons of storage, you'll spend a lot. If you want low power and quiet, with adequate performance, you can spend a lot less. Heck, if you want to build a supercomputer, LLNL style, you can do that too. ZFS isn't the only way, but I feel, its one of the better ones.

I think I stated this as well. You can go cheap, but again it comes at a price that people should be fully aware of going into it.
 
I don't want offsite backup though.

I'm looking for a suggestion to get 8TB of storage and have it backed up. It doesn't have to involve my WHS; I'll tear that down if there's a better idea. Either I need 2 8TB external arrays, one backing up the other, or just a 16TB mirrored array. I'll take whatever ideas someone has to do this easily and keep it low-maintenance.

you should want offsite backup, because without it you're not protected from your own errors, which are more common than disasters (which it also protects against). crashplan lets you encrypt the data and throw away the key so nobody can read it but you. crashplan uploads only changed data. crashplan tracks changes. you should use it.

for local storage with backup, just 16TB mirrored is not an option because it has no backup. http://www.smallnetbuilder.com/nas/nas-features/31745-data-recovery-tales-raid-is-not-backup

the cheapest and easiest thing for you to do is add space to your existing server as it pleases you, back it up to a raid box by whoever is in fashion, and back it up to crashplan. do a mirror on your primary storage if you want higher availability and less hassle when disks break.

NAS appliances seem expensive at first, but their lifecycle cost is where they pay off. time is money.
 