Inexpensive dedicated storage without ZFS?

damarious25

I want ZFS, but life is too hectic, so I need a more maintenance-free solution.

I need around 10TB with added RAID 5-like redundancy. Price is a factor and I'm looking at keeping the thing under $600 CDN without storage HDDs. The $600 would need to include everything else, new: case, motherboard, CPU (can be onboard), PSU, controller card, and RAM. No monitor needed. I'm considering two small SSDs mirrored for the OS and possibly a third small SSD to act as a cache, depending on what everyone here suggests.

I'm also considering buying a Dell server just to take advantage of monthly payments instead of building a frankenserver up front. Overall that'll be twice as expensive (even without any storage HDDs), but it's kinda needed now and low monthly payments I can handle. Also under consideration are the NAS units, but it seems like anything above 6TB without RAID and the price skyrockets. Are there any other companies that offer monthly payments on home storage solutions?

So I come to H to ask what now? I know for a fact there's gotta be a lot of [H] out there that are in the same boat. Any help, suggestions, advice, quick posts would be appreciated.
 
I don't see why you couldn't do ZFS here. There is no such thing as a problem-free solution. All of them have upsides and downsides. Looking at your budget, you could pull off a ZFS solution with ECC RAM + everything else and stay under that $600 mark. But if you want something different you could do Linux + mdadm or Windows + FlexRAID.

I also wouldn't buy something from Dell. That's just throwing away money.
 
Coming in under $600 without drives should be no problem. I built my ZFS rig two years ago for under $500, including three 2TB Samsungs (no case though).

As far as maintenance-free goes, ZFS shouldn't require any more than any other solution. Honestly, probably less. I've never messed with Linux software RAID, but I can bet there's a lot more to it than the one or two commands it takes to set up a zpool and datasets.
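
For reference, this is roughly all it takes to stand up a single-parity pool and a dataset for the media (the pool name and disk names here are just placeholders, swap in your own):

  # four-disk raidz1 pool called "tank", plus a dataset to hold the rips
  zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  zfs create tank/media
  zpool status tank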

Once the array is set up, it's going to be about the same amount of work to get shares going. Unless you go Windows, you're going to have to set up Samba or, in the case of a Solaris-based system, the native CIFS server.
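
On a Solaris-based box the native CIFS server is basically just a property on the dataset (assuming the SMB service is enabled); on Linux/BSD you'd point Samba at the mountpoint instead. Names and paths below are only examples:

  # Solaris/OmniOS/OpenIndiana: share the dataset over SMB natively
  zfs set sharesmb=on tank/media
  # Linux: install Samba, add a [media] share with path = /tank/media to
  # /etc/samba/smb.conf, then restart the smbd service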

After the initial setup there isn't a whole lot of maintenance to be done. Backups are about the only thing that really has to be done. Admittedly I don't really have a backup in place, but most of my stuff is just movie/music rips.

What you're planning on using this system for is probably most important. If you want something easy, perhaps someone could chime in about WHS. I used an old hard drive as a boot drive. If this is just a media server where uptime isn't critical, I would forgo the mirrored SSDs. If you have an old drive laying around, you could always clone your boot drive once you have it all set up, in case it ever did go south. ZIL or L2ARC also depends on your uses. For me I probably wouldn't see much benefit unless I regularly watch the same movies over and over.
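
If you did decide you wanted them later, a log (ZIL) or cache (L2ARC) device can be added to an existing pool at any time, roughly like this (device names are placeholders):

  zpool add tank log /dev/sdf     # dedicated ZIL device, mainly helps sync writes
  zpool add tank cache /dev/sdg   # L2ARC read cache, mainly helps repeated reads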
 
Just some additions:
You should understand what you're using, whether that's ZFS, mdraid, or a Windows solution,

and be prepared for the drive-failure scenario and for troubleshooting your software RAID :D...
Once you understand it, replacing failed drives, maintenance, and everything else on your system will be as easy as 1 2 3..
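
It is worth dry-running the replacement once before you actually need it; in both worlds it is only a handful of commands (device names below are placeholders):

  # ZFS: find the faulted disk, then resilver onto its replacement
  zpool status tank
  zpool replace tank /dev/sdc /dev/sdf
  # Linux mdadm: fail and remove the dead member, add the new one, watch the rebuild
  mdadm --manage /dev/md0 --fail /dev/sdc1
  mdadm --manage /dev/md0 --remove /dev/sdc1
  mdadm --manage /dev/md0 --add /dev/sdf1
  cat /proc/mdstat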

Get a good PSU, motherboard, and RAM... do not be a cheap guy hehehe.
You can use a cheap case, if you don't mind the rattle of flimsy metal vibrating :p.
 
I'm not a ZFS fan. Other than a fairly small added cost, I don't understand why folks don't just get a dedicated RAID card that does RAID 5. Then just present the LUN to the OS and let the OS use it as one big disk. The RAID card (a decent one, anyway) will do all the work and takes any load off the CPU. It's less headache, and you don't have to deal with the usual ZFS nonsense I see people posting all over this forum. If you really wanted to go on the cheap: I'm running an AMD motherboard that I paid $120 for with the AMD 890 chipset, which does RAID 5, and I've had zero issues with it.

best of luck.
 
Why not look into Storage Spaces with Windows 8?
It's easy to manage, easy to expand, and gives you redundancy.
 
what happens when your raid card breaks? vs what happens if a drive fails under zfs? big difference.
hardware raid - no block level checksums, exposure to silent data corruption and bit rot and RAID write holes etc.

if you don't know why zfs is held in high regard, you may want to read up on it rather than bash it and sound ignorant in the process
 
if you don't know why zfs is held in high regard, you may want to read up on it rather than bash it and sound ignorant in the process

I've been lurking on this thread but now I'm curious. In two sentences, why do people hold ZFS in high regard?
 
In two sentences

Wtf? I already pointed out some of its hallmarks like block level checksums warding off silent data corruption and bit rot. You don't get that with hardware RAID. Furthermore, if a hardware RAID controller fails you need to find another controller, which can suck. With ZFS you just swap out disks when they fail.

Do your own homework, man, don't demand that people spoon feed stuff to you "in two sentences." If you care about this topic that much, then read up on it yourself instead of taking at face value the words of anyone on an internet forum, my words included. There are any number of sources, including Oracle itself. https://blogs.oracle.com/orasysat/entry/so_what_makes_zfs_so and at popular Q&A sites: http://libraries.stackexchange.com/...antages-of-using-zfs-for-storage-in-a-digital
 
what happens when your raid card breaks?....

HW RAID rarely breaks, as far as I know.
I even still have a 1.5Gb/s SATA HW RAID card that still works :p.. pretty old card :D, it only supports 1TB drives max and 2TB logical drives max.

Everyone can pick HW RAID or SW RAID at their own convenience.

I have had some broken motherboards... :)...

If it breaks (which I would say rarely happens),
you buy the HW RAID from the same vendor; mostly they support migration, or you just plug in the drives and boom.. all the logical drives are online.
Been there with the Adaptec and LSI series :)
 
Why do you need 10TB?

Is this primarily a media server? In other words, large files that are read-only once you get them on the file server?

If so, then you are correct to avoid ZFS, which is a terrible choice for a media server.

Your best bet would be snapshot RAID, using either FlexRAID or SnapRAID. If that is the way you go, then the hardware can be just about anything you like, since both of those programs can run on Windows or Linux, and neither of them is picky about hardware.
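
To give a sense of how little there is to it, a SnapRAID setup is basically one small config file plus a sync run. The paths and disk names here are invented, and the exact config keywords can differ slightly between versions, so treat this as a sketch:

  # /etc/snapraid.conf - one parity disk protecting three independent data disks
  parity /mnt/parity/snapraid.parity
  content /mnt/disk1/snapraid.content
  content /mnt/disk2/snapraid.content
  disk d1 /mnt/disk1
  disk d2 /mnt/disk2
  disk d3 /mnt/disk3

  snapraid sync    # compute parity over whatever is on the data disks right now
  snapraid check   # optionally verify the data against the stored hashes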
 
Is this primarily a media server? In other words, large files that are read-only once you get them on the file server?

If so, then you are correct to avoid ZFS, which is a terrible choice for a media server.

I don't get your logic with this one. ZFS protects against bit rot/silent data corruption, which is more than what many other file systems can say. Yeah you should still scrub once in a while, and you can get protection out of fscking files as well, but I don't see how ZFS is a "terrible" choice, even for servers that are mostly read-only.
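
Scrubbing is also about as low-effort as maintenance gets; something along these lines (pool name is a placeholder), run by hand or from cron:

  zpool scrub tank        # re-read everything and verify checksums in the background
  zpool status -v tank    # shows scrub progress plus any checksum errors and affected files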
 
Wtf? I already pointed out some of its hallmarks like block level checksums warding off silent data corruption and bit rot. You don't get that with hardware RAID. Furthermore, if a hardware RAID controller fails you need to find another controller, which can suck. With ZFS you just swap out disks when they fail.

Do your own homework, man, don't demand that people spoon feed stuff to you "in two sentences." If you care about this topic that much, then read up on it yourself instead of taking at face value the words of anyone on an internet forum, my words included. There are any number of sources, including Oracle itself. https://blogs.oracle.com/orasysat/entry/so_what_makes_zfs_so and at popular Q&A sites: http://libraries.stackexchange.com/...antages-of-using-zfs-for-storage-in-a-digital

Thanks. That's fair, but everyone has to start somewhere. There are all kinds of technologies, not all of which are suitable or necessary for a home user, even a power user.

Trust me, no one spoon feeds me very much.
 
SnapRAID has a useful comparison table of various RAID solutions.

http://snapraid.sourceforge.net/compare.html

One thing it leaves off is whether you can expand an existing volume by one drive at a time. For SnapRAID and FlexRAID (and unRAID), the answer is yes. For ZFS you cannot. I do not know about the other two solutions.
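
To make the difference concrete, growing by a single disk looks roughly like this on each side (disk names and paths are placeholders):

  # SnapRAID/FlexRAID style: mount the new disk, reference it in the config,
  # e.g. add a "disk d4 /mnt/disk4" line to snapraid.conf, then re-sync parity:
  snapraid sync
  # ZFS: the smallest raidz growth is a whole extra vdev, e.g. three more disks at once
  zpool add tank raidz1 /dev/sdh /dev/sdi /dev/sdj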
 
I don't get your logic with this one. ZFS protects against bit rot/silent data corruption, which is more than what many other file systems can say. Yeah you should still scrub once in a while, and you can get protection out of fscking files as well, but I don't see how ZFS is a "terrible" choice, even for servers that are mostly read-only.

You'd be lucky to get anything useful out of JoeBob a.k.a. Jim Williams... the guy is a schmoe.

Every thread he posts in turns into some pointless discussion that's unrelated to the OP. He consistently makes room for these pointless discussions and antics even though he's been told multiple times to stay on topic.

He crapped up the review thread for TweakTown's SuperSSpeed S301 with these pointless discussions, comparing it to the non-Pro 840, i.e. comparing an SLC drive to a TLC drive... then he kept calling the guy he was arguing with "confused", acting as if he's the fucking teacher of SSD storage or something. Then Chris Ramsayer, the TT SSD reviewer, came in and pointed out that he was comparing an SLC drive to a TLC drive, and he had to stop posting BS.

Sad really. Knowledgeable guy otherwise; he could have used his life better instead of sinking his time into forum idiocy.
 
what happens when your raid card breaks? vs what happens if a drive fails under zfs? big difference.
hardware raid - no block level checksums, exposure to silent data corruption and bit rot and RAID write holes etc.

if you don't know why zfs is held in high regard, you may want to read up on it rather than bash it and sound ignorant in the process

I knew my post would ruffle some feathers.

Buy a decent, common RAID card. Hell.. when I upgraded my motherboard, the new AMD RAID controller saw my old AMD RAIDed hard drives from the previous motherboard setup. That's as cheap as you can get. Better cards are much better at preventing silent data corruption and bit rot. RAID write holes can be avoided with rebalancing.

So the broken RAID card argument is invalid.

I've seen plenty of threads on this site where people complain about ZFS rot, corruption, and performance issues.

I know why ZFS is held in high regard. I've read up on it; I personally choose not to go that route. Why? It can be complicated, you do have to babysit it, and recovering a failed drive can be complicated. I do storage work all day as my day job. The last thing I want to do is storage work when I get home on my own personal PC.

Hardware RAID: set it up once, ignore it till a drive dies, then swap in a new, like-for-like drive.
 
I don't get your logic with this one. ZFS protects against bit rot/silent data corruption, which is more than what many other file systems can say. Yeah you should still scrub once in a while, and you can get protection out of fscking files as well, but I don't see how ZFS is a "terrible" choice, even for servers that are mostly read-only.

Not necessarily "terrible" for media storage but also not the be-all end-all that its enthusiasts would have people believe.

1) ZFS stripes data across multiple disks. Not as ideal as say pooled JBOD with parity for media storage because it introduces extra risk. Not to mention, more power usage and net wear and tear on the disks since they all have to be spinning to read/write files, compared to a pooled JBOD solution which only has to spin up one.

2) ZFS doesn't support disk-at-a-time OCE. So with a raidz2, when you need more space and don't want to compromise redundancy level, you're adding 4 more disks and losing 2 of them to parity again, netting you 2 more disks' worth of usable space when all you may have needed was one.

3) Complexity. The idea of working at a unix command prompt when there are issues is more than some may be comfortable with, especially Windows users.

ZFS is great for what it was designed for - enterprise. Not so great for dumb end users that want to store their DVDs at home. In between are enthusiasts/prosumers, and it will come down to comfort level.
 
Not necessarily "terrible" for media storage but also not the be-all end-all that its enthusiasts would have people believe.

1) ZFS stripes data across multiple disks. Not as ideal as say pooled JBOD with parity for media storage because it introduces extra risk. Not to mention, more power usage and net wear and tear on the disks since they all have to be spinning to read/write files, compared to a pooled JBOD solution which only has to spin up one.

2) ZFS doesn't support disk-at-a-time OCE. So with a raidz2, when you need more space and don't want to compromise redundancy level, you're adding 4 more disks and losing 2 of them to parity again, netting you 2 more disks' worth of usable space when all you may have needed was one.

3) Complexity. The idea of working at a unix command prompt when there are issues is more than some may be comfortable with, especially Windows users.

ZFS is great for what it was designed for - enterprise. Not so great for dumb end users that want to store their DVDs at home. In between are enthusiasts/prosumers, and it will come down to comfort level.

Mostly this is not a question of SnapRAID vs ZFS.

1. Not correct.
ZFS itself does not stripe data; ZFS software RAID over multiple disks does.

Data is striped when you build a ZFS pool over a vdev of multiple disks.
If you use vdevs of single disks, you can use SnapRAID with the non-RAID features of ZFS,
but with realtime checksums, exactly the way you use it on Windows or Linux.

http://snapraid.sourceforge.net/compare.html
SnapRAID supports Solaris (although it is not the most used OS for SnapRAID).

2. Correct
for software RAID, but you may use single-disk vdevs (ZFS without RAID) or add disks in pairs.

3. Not correct.
There are ZFS NAS software appliances with a Web-UI around (FreeNAS, napp-it + OmniOS/OI/Solaris, or NexentaStor CE).
They are handled much like a WLAN router or an off-the-shelf NAS.
 
You can definitely stay under budget without disks. Craigslist! I picked up a mobo, an i7-920, and 12GB of RAM for less than $200. A Dell PERC 6iR is $30-50 on eBay, and then you're left with $350+ for case, PSU, etc.

The i7-920 and my mobo support VT-d, so I have ESXi running, and OI + napp-it is one of the VMs running off of it. I have two PERC 6iRs that pass through to the VM.
 
Not sure why people are debating ZFS in the OP's thread when he specifically said he didn't want ZFS.

Just sayin'.
 
You're all a bunch of a-holes... because you all have good advice. I honestly like both sides of the argument.

Here's some more info. It will be a read-only media server. I like high quality, so I do rip my Blu-rays the best I can. I "HAD" 7TB (full) on a Windows machine without RAID, but some hardware failed (thank god not the disks). I could easily replace the hardware, but I've been saying for years I need to build something with redundancy. So, now is the time.

That's why I'm asking for 10 or more TB. My information on RAID is also very old. I remember reading back in the day that you can't expand if you use hardware RAID. ZFS has always been an interest because I've watched and read how easy it is to expand RAID both in size and in number of disks.

The only thing that's putting me off ZFS is that I have been working with Windows forever and it's the OS I support/maintain at work. So, the time I spent on Li(u)nux years ago has almost been erased from memory. I just don't have the time to learn it again now. Hell, it took me 4 days just to catch up on this thread. I am ashamed to say it, but I AM a dumb Windows user with basic Li(u)nux knowledge. So yeah, there's a bit more info. What are people here using for large media servers?

I've read so many amazing things about ZFS and I still might try it for the fun and low cost, but as someone pointed out, there is almost an equal number of threads about people having issues with it. I'm not all about tweaking for the fastest read/write speeds either.

I really just wanna buy some hardware, buy a bunch of 2 or 3TB disks of the same type, add a med/high level of redundancy, xfer all my data to it, put it in my computer lab, and leave it :)
(edit: with the option to expand if I can't afford enough disks right now.)

Again though, thanks to everyone. The debate is helping me learn what's new in RAID and large storage solutions.
 
With ZFS RAID, I can take my drives out, place them in a completely different machine, boot up a thumb drive, and have my array active and usable.
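
In command terms that portability is just an export on the old box and an import on the new one (pool name is a placeholder):

  zpool export tank    # on the old machine, if it is still alive
  zpool import         # on the new machine: scan the attached disks for pools
  zpool import tank    # bring it online (add -f if it was never cleanly exported)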

It does not care about chipsets or CPUs.

I can expand my array in multiple ways (replace all the drives, add more vdevs).

I am not tied to one hardware RAID card. What happens if the card dies? Great, now I need the same card with the same firmware to access my data.

Don't give me any crap about HW RAID never dying; I pulled three SAS cards and five SCSI cards out of our aging servers last year.

You don't need to learn Linux or Unix for ZFS. The latest NAS4Free or FreeNAS releases are pretty hands-off simple.

unRAID is even easier, but you need a license if you want more than 6 disks (buy a license off someone on the forum).
 
ZFS has always been an interest because I've watched and read how easy it is to expand RAID both in size and in number of disks.

No. For the media server you describe, you are going to want at least dual-parity. ZFS is inefficient to expand with dual-parity, as odditory already explained. You have to add at least 4 drives at a time, and half of those are lost to parity. This is one of the reasons ZFS is a terrible choice for a media server.

Your best bet is FlexRAID or SnapRAID. FlexRAID can be purchased with pooling. SnapRAID is free but does not come with pooling. I think FlexRAID would probably be a little easier for you, given your description of your skills. But you have to pay for FlexRAID, so you need to decide if being a little bit easier is worth buying your software rather than using a free program.
 
No. For the media server you describe, you are going to want at least dual-parity. ZFS is inefficient to expand with dual-parity, as odditory already explained. You have to add at least 4 drives at a time, and half of those are lost to parity. This is one of the reasons ZFS is a terrible choice for a media server.

Your best bet is FlexRAID or SnapRAID. FlexRAID can be purchased with pooling. SnapRAID is free but does not come with pooling. I think FlexRAID would probably be a little easier for you, given your description of your skills. But you have to pay for FlexRAID, so you need to decide if being a little bit easier is worth buying your software rather than using a free program.

You do not have to use RAIDZ1 or 2, you know. You can use ZFS without any redundancy at all if necessary, or in mirrors so you expand two drives at a time. Even if you take OP's request to mean literally what he said (RAID5 style redundancy), that means RAIDZ1, not RAIDZ2, and that means increments of three drives at a time. I'm not saying that OP's needs are best met by ZFS, but your statement about needing to expand four drives at a time and how OP "needs" dual-parity is a bit weird when even OP said he only needed single-parity.

By the way there is another way to expand ZFS without adding vdevs: replace the drives making up the vdevs with larger drives. So if your media server were a mirrored pair of 2TB disks, for instance, you could replace them with a pair of 3TB disks (one at a time, to allow for resilvering).
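
A rough sketch of that grow-in-place path, assuming a two-disk mirror and placeholder device names:

  zpool set autoexpand=on tank
  zpool replace tank /dev/sdb /dev/sdd   # swap in the first larger disk, wait for the resilver
  zpool replace tank /dev/sdc /dev/sde   # then the second; the pool grows once both are done
  zpool list tank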
 
I want ZFS, but life is too hectic, so I need a more maintenance-free solution.

Well, no data storage solution is "maintenance-free", or at least not "admin-free" anyway!
Personally I don't think ZFS is any more burdensome on that front than the typical alternative options (and it's less burdensome than some - e.g. Linux with LVM and mdadm).
Many options (including some ZFS-based solutions) have GUIs for those who aren't comfortable with the command line.

With whatever solution you use, I'd always advise becoming familiar with how to set it up properly, as well as how to recover when (not if) you eventually have a disk failure - it's not a warm feeling to realise, once it's happened, that you don't really know how to recover.
Many of the problems people get into are due to fundamental misunderstandings of the recovery process. With ZFS, it's simple as long as you know how :) (but ain't that always the truth)


I need around 10TB with added RAID 5-like redundancy. Price is a factor and I'm looking at keeping the thing under $600 CDN without storage HDDs. The $600 would need to include everything else, new: case, motherboard, CPU (can be onboard), PSU, controller card, and RAM. No monitor needed. I'm considering two small SSDs mirrored for the OS and possibly a third small SSD to act as a cache, depending on what everyone here suggests.

Getting in under budget shouldn't be a problem, especially if you use a free OS.
Obviously a software-based solution for the RAID/volume management also frees up budget elsewhere.
Personally I wouldn't bother with mirroring the OS drive unless high availability is your aim - just back it up. You can use an SSD, but equally you can boot many OSes from a cheap USB pen drive - the performance benefits of an SSD are largely lost with this type of use.
Again, it depends how you want to use your budget.
As for a cache drive - personally I'd wait and see how the performance is without one - you can always add one later if you see it might help!

I'm also considering buying a Dell server just to take advantage of monthly payments instead of building a frankenserver up front. Overall that'll be twice as expensive (even without any storage HDDs), but it's kinda needed now and low monthly payments I can handle. Also under consideration are the NAS units, but it seems like anything above 6TB without RAID and the price skyrockets. Are there any other companies that offer monthly payments on home storage solutions?

I can't comment on your finances, but you could also pay monthly by putting the components on a credit card - even if the Dell offer is interest-free, you may still end up paying more than you would just buying and building your own, especially if you use a low-interest card.
You'd have to do the figures on that though!

You may have to compromise with a lower-end Dell server, in that it may not be exactly what you want... you can only buy what they have for sale - though at least you'd have a warranty on the whole server rather than just on each component (though it may be of limited use if it's an RTB warranty).
An off-the-shelf NAS is an option, but your budget may limit you to 4-bay units, which would mean using 4TB drives to get >10TB with RAID 5, and they are a bit pricey per GB at the moment compared to 2TB/3TB models (also, personally, I'm not too sure about using drives that big with single-parity RAID schemes) - they are as close to a turnkey solution as you'll get though.
 
You do not have to use RAIDZ1 or 2, you know. You can use ZFS without any redundancy at all if necessary, or in mirrors so you expand two drives at a time.

With a media server as the OP described, 10TB (presumably using 2- and 3TB drives), it is terrible advice to recommend anything less than dual-parity (and it is just ridiculously inefficient to use RAID 1 or one-to-one mirroring). With 3TB drives, it will take 8+ hours to replace a failed drive, and during that time if you have only single-parity, then another drive failure will lose data. With ZFS, it is even worse as you would lose everything on the vdev. That is another reason why ZFS is a terrible choice for a media server. At least with FlexRAID or SnapRAID, you would only lose the data on the failed drive(s). Even so, dual-parity is the least that should be chosen for a media server using 2+ TB drives.
 
10TB is still small enough that any common solution (FlexRAID, RAID 6, ZFS) makes sense, and you can even try several of them in a matter of days.

Personally I'm at more than 50TB and Windows struggles horribly, so FlexRAID is out; "offline RAID" doesn't work since even if most of the data is static the smallest change means a loss of parity until it is calculated again, and hashing all my data and all my backups is a nightmare; I've been at it for months. ZFS is a solution to most of my problems, even if I agree that expansion isn't the easiest; I'm planning on doing it 12 drives at a time (in RAIDZ2). Even without counting the drives (I have plenty already) it's still an investment: server board, server CPU, 16GB ECC, HBA, expander, a 24-bay case to start with, good PSU, good UPS...
 
"offline RAID" doesn't work since even if most of the data is static the smallest change means a loss of parity until it is calculated again,

That's not a big issue with SnapRAID or FlexRAID on a media server.

1) A media server does not contain files that are changed, only added; its files are read-only once written (changeable files are best kept segregated on other volumes).

2) If you need to delete files, then you first move the files to a staging area on the same drive(s) and then sync the parity. Only delete the files after the sync is complete. If drives die during the sync, you still have the files, so nothing is lost. FlexRAID automates this process. SnapRAID requires a little user intervention (or a simple script); see the commands sketched after this list.

3) Even if you do lose a drive after a change before parity is synced, it is still unlikely that you will lose data if it is just a change on part of one drive, and you have dual-parity. And even if you only have single parity when you lose a drive before parity sync, you will only lose a portion of the data on the failed drive, corresponding to -- at most -- the amount of data that was changed since the last sync. The rest of the data on the failed drive can be recovered.
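
Mapping point 2) to actual commands, the workflow is roughly this (paths are placeholders):

  mv /mnt/disk1/movies/old-rip /mnt/disk1/.trash/   # stage the delete on the same disk
  snapraid diff                                     # review what changed since the last sync
  snapraid sync                                     # update parity to cover the new layout
  rm -rf /mnt/disk1/.trash/old-rip                  # only now remove the files for real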
 
With a media server as the OP described, 10TB (presumably using 2- and 3TB drives), it is terrible advice to recommend anything less than dual-parity (and it is just ridiculously inefficient to use RAID 1 or one-to-one mirroring). With 3TB drives, it will take 8+ hours to replace a failed drive, and during that time if you have only single-parity, then another drive failure will lose data. With ZFS, it is even worse as you would lose everything on the vdev. That is another reason why ZFS is a terrible choice for a media server. At least with FlexRAID or SnapRAID, you would only lose the data on the failed drive(s). Even so, dual-parity is the least that should be chosen for a media server using 2+ TB drives.

Mirroring is not "ridiculously inefficient" when you consider that you don't lose that much extra capacity (compared to RAID5-style setups). And you are suggesting RAID6-style setups, which isn't exactly efficient either, though they do give a bit more protection than multiple mirrors, at least for the number of drives OP is probably considering, in that with mirrors, if two drives go down in succession and were on the same vdev, you are screwed, whereas RAID 6 can tolerate any two drives failing.

I do agree with you that RAID6 is fast becoming the "new" RAID5 because capacities have risen so much over the last several years. I also agree that ZFS is not necessarily the best choice. All I'm saying is that ZFS is not necessarily as horrible/terrible/etc. as you made it out to be, and you get some forms of data protection you wouldn't get under other schemes (hardware RAID or non-ZFS/btrfs/ReFS software RAID) due to ZFS checksums.
 
ZFS is a terrible choice for a media server. This has already been covered multiple times in these forums.

One-to-one mirroring is a terribly inefficient choice for a media server.

A media server set up reasonably with ZFS does NOT provide any data protection that cannot be better provided by SnapRAID or FlexRAID.

The best choice for a media server is SnapRAID or FlexRAID.
 
ZFS is a terrible choice for a media server. This has already been covered multiple times in these forums.

Stop making it sound like your opinion is the forum consensus. For instance, plenty of people stated things that contradict your "ZFS is a terrible choice for a media server" statement that you keep pounding, on this older thread: http://hardforum.com/showthread.php?t=1718587

Once again, I am not actually disagreeing that FlexRAID may be better for OP, but rather disagreeing with your insistence that ZFS is a "terrible" choice for a media server. ZFS can work okay, just isn't necessarily the best for OP's purposes, and OP said they didn't think they had time for ZFS anyway (even though ZFS is dead simple to set up with appropriate GUIs like FreeNAS), so maybe we should just stop talking about ZFS in this thread. :)
 
ZFS is a terrible choice for a media server. There are numerous reasons why ZFS for a media server is inefficient, wasteful, inconvenient, and much more likely to result in total data loss. This has been covered many times in these forums, and even in this very thread. If there were only one or two downsides, then maybe ZFS would not be a terrible choice. But there are numerous issues that make ZFS a terrible choice for a media server.

The best choices for a media server are SnapRAID and FlexRAID.
 
ZFS is a terrible choice for a media server. There are numerous reasons why ZFS for a media server is inefficient, wasteful, inconvenient, and much more likely to result in total data loss. This has been covered many times in these forums, and even in this very thread. If there were only one or two downsides, then maybe ZFS would not be a terrible choice. But there are numerous issues that make ZFS a terrible choice for a media server.

The best choices for a media server are SnapRAID and FlexRAID.

You may be an absolute fan of SnapRAID and co, but please stop with this nonsense.
You may compare a SnapRAID solution vs a striped RAID solution - ZFS or conventional RAID.

But please do not compare ZFS with SnapRAID; they are completely different things.
One is a filesystem with advanced features and optional soft-RAID options,
the other is a RAID concept on top of different filesystems without a realtime approach.

You can even use SnapRAID on Solaris with ZFS underneath if you like and combine some ZFS features with SnapRAID.
You may show worst-case scenarios on both systems, with advantages and disadvantages.
But an absolute sentence like "ZFS is a terrible choice" is ignorance.
 
Hello Sir,

Have you considered letting ZFS into your life?

Please, have a read of our book, the Solaris handbook.
 
ZFS is a terrible choice for a media server.
This guy is either one of the best spam bots ever, or a highly unusual troll. I can't quite figure out which just yet, but given the identical sentences he uses in many of his posts, I'm leaning more towards the AI than the troll bit.
 
If you want a maintenance-free solution, why not just a QNAP, Synology, or the like? Upgrade the firmware every once in a while when some new feature comes along or they have a fix for something, but other than that, it's set and forget. 5-bay models with 3TB WD Reds should be all you need.
 