home NAS suggestions?

merlin87

Gawd
Joined
Aug 18, 2001
Messages
964
I am looking for a NAS solution and have just started looking into it.

What I am looking for:
*accessible to Mac and PC
*use this as a central backup system for my documents/photos
*stream music
*small footprint
*low power consumption
*on 24/7

I would only access the NAS to backup and/or play music.

I know everyone will say to go with a Synology or Qnap, but would that be overkill for my needs? I don't need the highest transfer speeds since I will only use those speeds for the initial setup.

I was thinking of this guy but reviews are scarce. Any other suggestions?

http://www.amazon.com/gp/product/B0...pf_rd_t=101&pf_rd_p=1389517282&pf_rd_i=507846
 
I just faced this dilemma myself and it came down to either building my own a la FreeNAS or going with something pre-built instead...

I know everyone will say to go with a Synology or Qnap, but would that be overkill for my needs? I don't need the highest transfer speeds since I will only use those speeds for the initial setup.

You called it! Go for a Synology ;) lol all jokes aside, it's been great and I love the AD integration they have built in; it's even more robust in the DSM 4.2 release. If you're not running AD, there are still a multitude of features that make it a viable device. I'm running a DS1512+ with 5x 3TB Western Digital Red NAS drives. Cost me a lot of money, but worth it to have all my data in one place now vs. scattered about various drives. The Synology will do all the things you want to do, even Time Machine backups from a Mac; I replaced my Time Capsule with this and it's great.
 
Build your own and use ZFS with NAS4Free if you are technically competent. If not, then Syno is okay, but you give up ECC RAM, ZFS's end-to-end checksumming, and software RAID via ZFS. Not to mention that prebuilts usually cost way more than building your own.

If you want to play the middle ground, something like an HP N40L with NAS4Free/FreeNAS/etc. installed with ZFS will also do nicely. The N40L supports ECC. I have also used a Fujitsu MX130 S2, which is like a somewhat more power-hungry N40L with more RAM expandability but less drive bay expandability, and it costs $100 less... but those are hard to find these days, whereas N40Ls are plentiful and the new version of it came out recently, too.
 
Build your own and use ZFS with NAS4Free if you are technically competent. If not, then Syno is okay, but you give up ECC RAM, ZFS's end-to-end checksumming, and software RAID via ZFS. Not to mention that prebuilts usually cost way more than building your own.

If you want to play the middle ground, something like an HP N40L with NAS4Free/FreeNAS/etc. installed with ZFS will also do nicely. The N40L supports ECC. I have also used a Fujitsu MX130 S2, which is like a somewhat more power-hungry N40L with more RAM expandability but less drive bay expandability, and it costs $100 less... but those are hard to find these days, whereas N40Ls are plentiful and the new version of it came out recently, too.

I have the HP N40L. Only had to reboot it once in the past year, and FreeNAS has been working flawlessly with 9TB of storage (4x 3TB disks, one disk's worth as parity). Small, quiet (with a tweak to the case because of a vibration issue), low power consumption, etc.

I store all of my videos on there, and backups of videos. Though... I'm paranoid, and have a backup of a backup of a backup of all my family photos/videos (including Blu-ray discs in a safe offsite in a bank vault lol)
 
I am personally not a fan of DIY NAS solutions; I would rather buy one that I know is going to be reliable and requires very little work to get going, though you may prefer to build your own. As far as brand names go, my experience (and I have four brand-name NASes) is that the cheap, more obscure ones, like the one you linked, are not worth your money or your time. If you are on a budget, my advice is to wait and save until you can afford a Synology or Qnap. Don't buy something cheap; you'll only wind up disappointed with poor tech support, lost or corrupt files, and/or pouring a lot of your time into something you should have passed on buying in the first place. There is a reason people continue to recommend brands like Synology or Qnap: they work, and they work well.
 
Depends on expectations.
A Qnap or Synology appliance is ideal if you do not want to get into technical details and just want a fairly feature-complete home NAS solution.

If you compare this to an HP Microserver or a special DIY configuration with comparable web-based appliance software based on FreeBSD or Solaris,
like FreeNAS, napp-it, NAS4Free or ZFSGuru, you can get similar comfort but you get

- ZFS (with data security at a new level compared to the NTFS or ext filesystems used in these Linux NAS appliances)
- much more performance at a lower price
- better Windows integration when using Solaris CIFS
- no hardware vendor lock-in (set up a new box and import the disks/datapool there with most settings kept)

In the case of a Microserver or some Supermicro configs I even offer downloadable and preconfigured napp-it USB stick images,
where you also do not need to worry about setup. Download, copy to a stick with the included tool, and run your NAS.

Now compare a smaller Synology or Qnap to a HP Microserver appliance.
 
I am personally not a fan of DIY NAS solutions; I would rather buy one that I know is going to be reliable and requires very little work to get going, though you may prefer to build your own. As far as brand names go, my experience (and I have four brand-name NASes) is that the cheap, more obscure ones, like the one you linked, are not worth your money or your time. If you are on a budget, my advice is to wait and save until you can afford a Synology or Qnap. Don't buy something cheap; you'll only wind up disappointed with poor tech support, lost or corrupt files, and/or pouring a lot of your time into something you should have passed on buying in the first place. There is a reason people continue to recommend brands like Synology or Qnap: they work, and they work well.

Ditto... my thoughts exactly. I was running a small FreeNAS box that I built using spare parts I had lying around, and while it worked fine, I never took it seriously enough because I didn't feel comfortable trusting my critical data to it; plus, I was running out of space on the two 1TB drives I had in it. I looked at getting a new case/mobo/CPU/memory/drives etc. and rebuilding what I had, and while it was cheaper, I simply decided it wasn't worth my time.

My point being, if you want to save money and your time is valuable, then definitely go for a FreeNAS/NAS4Free (or other) solution. For me, my time is a precious commodity so a pre-built solution was the better way to go. I'd rather not spend time wading through forums or asking for technical assistance on an IRC channel or mailing list :)

In the end, I think the best NAS solution is the one you're satisfied with, the one that does the job you need with no or minimal headache.
 
Depends on expectations.
A Qnap or Synology appliance is ideal if you do not want to get into technical details and just want a fairly feature-complete home NAS solution.

If you compare this to an HP Microserver or a special DIY configuration with comparable web-based appliance software based on FreeBSD or Solaris,
like FreeNAS, napp-it, NAS4Free or ZFSGuru, you can get similar comfort but you get

- ZFS (with data security at a new level compared to the NTFS or ext filesystems used in these Linux NAS appliances)
- much more performance at a lower price
- better Windows integration when using Solaris CIFS
- no hardware vendor lock-in (set up a new box and import the disks/datapool there with most settings kept)

In the case of a Microserver or some Supermicro configs I even offer downloadable and preconfigured napp-it USB stick images,
where you also do not need to worry about setup. Download, copy to a stick with the included tool, and run your NAS.

Now compare a smaller Synology or Qnap to a HP Microserver appliance.

I just want to add to this. When I built my HP N40L, doing it DIY wasn't truly DIY.

All I had to do was plug in my 4 drives, make a bootable FreeNAS USB flash drive (very, very easy), boot the system, log in via IP address (like most other NAS systems) and configure my array. FreeNAS has a ton of features, most of which I'll never use, but it is also very easy to use. Plug in the drives, configure, and forget, really.

Like I said in my post above, the system is amazingly stable and quiet.

Comparing an N40L build to a from-scratch system is like comparing a caramel apple to an apple pie, the latter being MUCH more involved.
 
I hear what you're saying, but IMO HP is not in the NAS business per se. They have their micro servers, and those fit a specific niche in the SMB market. You can certainly use them as a NAS, and they will work, but I don't think they would be suitable for the average person. They would certainly be an alternative, though, and perhaps a better middle-of-the-road option for someone who might not want to build their own or buy a NAS-in-a-box like a Synology or Qnap.
 
I plugged a 3TB HD into my Asus RT-AC66U router and it's a pretty good low-rent NAS. Absolutely perfect for storing/streaming music/movies. I've had 3 devices streaming 3 HD movies simultaneously.
 
I just want to add to this. When I built my HP N40L, doing it DIY wasn't truly DIY.

All I had to do was plug in my 4 drives, make a bootable FreeNAS USB flash drive (very, very easy), boot the system, log in via IP address (like most other NAS systems) and configure my array. FreeNAS has a ton of features, most of which I'll never use, but it is also very easy to use. Plug in the drives, configure, and forget, really.

Like I said in my post above, the system is amazingly stable and quiet.

Comparing an N40L build to a from-scratch system is like comparing a caramel apple to an apple pie, the latter being MUCH more involved.

"One click" NAS distributions are great when everything is working perfectly, but what is someone to do when things go wrong and they have zero Unix experience? NAS boxes with RAID or ZFS are all about safety when things go wrong. You are up sh*t creek if you run into problems and you don't know your way around Unix.
 
I am personally not a fan of DIY NAS solutions; I would rather buy one that I know is going to be reliable and requires very little work to get going, though you may prefer to build your own. As far as brand names go, my experience (and I have four brand-name NASes) is that the cheap, more obscure ones, like the one you linked, are not worth your money or your time. If you are on a budget, my advice is to wait and save until you can afford a Synology or Qnap. Don't buy something cheap; you'll only wind up disappointed with poor tech support, lost or corrupt files, and/or pouring a lot of your time into something you should have passed on buying in the first place. There is a reason people continue to recommend brands like Synology or Qnap: they work, and they work well.

Reliable? Why are you entrusting data to non-ECC, non-ZFS/btrfs/ReFS prebuilts if reliability is your top priority?

Ditto... my thoughts exactly. I was running a small FreeNAS box that I built using spare parts I had lying around, and while it worked fine, I never took it seriously enough because I didn't feel comfortable trusting my critical data to it.

You're basing your opinion on a "feeling," whereas others in this thread looked at cold hard facts, like how ZFS is superior to the ext3/4-based prebuilts that also don't support ECC, sometimes demand certain HDDs to work properly (hardware instead of software RAID), and cost a lot more. Got it.

"One click" NAS distributions are great when everything is working perfectly, but what is someone to do when things go wrong and they have zero Unix experience? NAS boxes with RAID or ZFS are all about safety when things go wrong. You are up sh*t creek if you run into problems and you don't know your way around Unix.

That's what forums are for, and you don't need to know Unix for the common commands, which are in the GUI. And what if a prebuilt goes wrong due to lack of ECC, or data is corrupted due to lack of block-level checksums? Silent data corruption happens. Plus, if it's a hardware RAID controller failure, then you need to get more hardware of the same type, which makes expensive prebuilts even MORE expensive.

Really, the OP needs to figure out priorities, including reliability, cost, ease of use, etc. and make a decision for himself. But it's kind of funny to hear these posts talk as if FreeNAS/NAS4Free/etc. with ZFS is less reliable than prebuilts using ext3/4, hardware RAID, and non-ECC memory.
 
I'm not sure what you're basing your opinion on, but my cold hard facts are that I've had prebuilts running for years, my data has been safe, and the units have performed reliably. Sure, you can go the whole nine yards with ECC and ZFS if you want and if you have the budget, but there are other solutions that are reliable and cost-effective. Remember, the OP asked if a prebuilt was overkill, and you want to suggest ECC and ZFS?
 
I certainly appreciate all the replies! I know this is [H], but I feel that a DIY NAS is more than I need. As intriguing as the N40L is, I am looking for a simpler budget solution.

The discussion has been a great read, though! Don't let this post stop it, since it can be beneficial to others.
 
I'm not sure what you're basing your opinion on, but my cold hard facts are that I've had prebuilts running for years, my data has been safe, and the units have performed reliably.
(How do you know that your data has been safe? Did you checksum all your files every week to see if some bits had been flipped randomly?)

If you have little data, say a few TB, then data corruption is less likely to occur than if you have a lot of data. If you have a lot of data, there will always be data corruption somewhere, as this report explains:
http://perspectives.mvdirona.com/20...ErrorsCorrectionsTrustOfDependentSystems.aspx

And, if you are mainly storing media files, then a corrupt bit is not that important. It does not matter if a pixel is red instead of black.

So for home users with non-important data, such as media files, you can use whatever storage solution you want. FlexRAID/SnapRAID might suit this scenario well.

But if you have important data then you should go for ZFS.
 
You're basing your opinion on a "feeling," whereas others in this thread looked at cold hard facts, like how ZFS is superior to the ext3/4-based prebuilts that also don't support ECC, sometimes demand certain HDDs to work properly (hardware instead of software RAID), and cost a lot more. Got it.

Where did I say that one was more reliable than the other? My point was simply: go for what's best for your particular application. I wasn't debating which one is better than the other; there was no "feeling" about it for me. I don't have time for the care and feeding of a DIY NAS solution.

My goal was to set it up and forget it, and sure I probably spent more money vs. DIY'ing but honestly I don't care. In the end I'm happy with my purchase and that's all that matters to me, and that's all that will matter to the OP with his solution.
 
...and we're back to the good old ZFS debate :p

If the data is important enough that a single bit error matters, then by all means use ZFS, but the user should understand that they must become familiar with it, even if using a frontend such as FreeNAS, because "that's what forums are for" is a really shitty situation to be in when the stuff hits the fan. As a side note, it's important to mention that ZFS should not be your only backup if the data is that critical.

I've looked over the pros and cons of ZFS and personally speaking I'd rather stick with what I know. Synology's software backend is great and their support is very well documented as being exceptional. I know the limitations of the hardware/software and I plan accordingly, so the data corruption risk is mitigated for my most critical data.
 
...and we're back to the good old ZFS debate :p

It's sad that every thread in this forum ends up pushing zfs as the one and only best solution for every data storage problem and every user. It's as if no one ever stored data reliably before zfs existed. Every other solution corrupts your data, kills your cat, and punches your mom in the face. :eek:

I saw a post asking why someone wasn't using zfs instead of hadoop and openstack. That makes as much sense as asking why he isn't using his lawnmower to wash his clothes.
 
Before knowing about ZFS, I didn't think that much about data integrity. After thinking about it, I started to checksum all my files, both "originals" and backups, and started noticing a lot of corruption. Not all of it was linked to bit rot, for sure; some was probably due to software bugs, network troubles, etc. But the fact is, just because you don't look for problems, it doesn't mean they're not there.
 
Before knowing about ZFS, I didn't think that much about data integrity. After thinking about it, I started to checksum all my files, both "originals" and backups, and started noticing a lot of corruption. Not all of it was linked to bit rot, for sure; some was probably due to software bugs, network troubles, etc. But the fact is, just because you don't look for problems, it doesn't mean they're not there.

True enough, but I do the same checksumming on my data myself and haven't had any issues. Very strange you experienced a lot of corruption...
 
The trouble with the zfs checksumming mentality is that it ignores everything the data does other than sit inside zfs. When you write to a zfs nas from some random Windows box, the data is "unprotected" all the way from the disk on the Windows box, over the network, and into the nas box. Protection begins only when the nas box writes into zfs. There is a similar situation when it's read out of the zfs nas.

Some zfs advocates will tell you how great zfs is at detecting hardware problems like bad memory, bad cables, etc. My question is this: Are you looking for a hardware diagnostic or a filesystem?

There is a lot of mission critical enterprise data in the world. Very little of it is stored in zfs. The sky is not falling in due to corrupted data. Yet every guy who posts asking how to store his collection of bluray rips is told that zfs is the only way to avoid having a pile of corrupt data in a year's time.

It's so bad that people with zero unix experience actually believe that their data is safer in their diy zfs nas than in a commercial raid box. A zfs owner/admin with zero unix experience is the single biggest risk to his own data, not bitrot.
 
This is purely a problem of data amounts and statistics.
The more data you have, the more errors you must expect, especially in the multi-terabyte range.

This is the reason for the need for new filesystems like Btrfs, ReFS or ZFS,
with checksums on data and a copy-on-write design.

They allow not only snapshots without delay and without initial space consumption,
but also online file checks on access, or scrubbing of the whole pool with data repair.

Conventional file checks on filesystems like ext or NTFS can neither find these problems nor repair the data,
and on top of that you need to take a filesystem offline for hours or days on large volumes.

With large amounts of data you MUST look at these new filesystems, and you should use them.
The idea that there are no errors just because you cannot detect them is not helpful.
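As a toy illustration of what checksum-plus-redundancy buys you (this is a conceptual Python sketch of the idea, not how ZFS is actually implemented), consider a two-way mirror that verifies a block checksum on every read and repairs a bad copy from a good one:

```python
import hashlib

class MirroredBlock:
    """Toy model of checksummed, mirrored storage (conceptual, not ZFS code)."""

    def __init__(self, data: bytes):
        # The checksum lives apart from the copies, the way ZFS keeps
        # block checksums in parent metadata rather than in the block itself.
        self.checksum = hashlib.sha256(data).hexdigest()
        self.copies = [bytearray(data), bytearray(data)]

    def read(self) -> bytes:
        """Verify each copy on read; heal any corrupt copy from a good one."""
        good, bad = None, []
        for i, copy in enumerate(self.copies):
            if hashlib.sha256(copy).hexdigest() == self.checksum:
                good = bytes(copy)
            else:
                bad.append(i)          # silent corruption is caught here
        if good is None:
            raise IOError("both copies corrupt: unrecoverable")
        for i in bad:
            self.copies[i] = bytearray(good)   # self-heal the bad copy
        return good

# Flip one bit in one copy ("bit rot") and read: the block heals itself.
blk = MirroredBlock(b"family photos")
blk.copies[0][0] ^= 0x01
assert blk.read() == b"family photos"
```

A plain mirror without checksums would happily return whichever copy it read, corrupt or not; the checksum is what turns redundancy into repair.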
 
I saw a post asking why someone wasn't using zfs instead of hadoop and openstack. That makes as much sense as asking why he isn't using his lawnmower to wash his clothes.
There are solutions built on top of ZFS, for instance OpenAFS, Lustre, etc. Why not use both?


Some zfs advocates will tell you how great zfs is at detecting hardware problems like bad memory, bad cables, etc. My question is this: Are you looking for a hardware diagnostic or a filesystem?
How about a filesystem that also diagnoses and detects the slightest hardware problem? Wouldn't that be superior to a filesystem that cannot detect problems?


There is a lot of mission critical enterprise data in the world. Very little of it is stored in zfs. The sky is not falling in due to corrupted data. Yet every guy who posts asking how to store his collection of bluray rips is told that zfs is the only way to avoid having a pile of corrupt data in a year's time.
It is the exact same thing with ECC RAM. Lots of computers run without ECC and still the sky is not falling down. This must mean that ECC RAM is useless, right?

When software on Windows crashes, MS gets diagnostic information if you report the error. MS found out that 33% of all Windows crashes are due to non-ECC RAM, so MS tried to get vendors to sell only ECC RAM. MS's attempt did not pan out. If you run Unix on the same hardware, there will be fewer crashes, because Windows has notoriously bad uptime. Unix is more robust.

So in a similar vein, data integrity filesystems are useless, right?


It's so bad that people with zero unix experience actually believe that their data is safer in their diy zfs nas than in a commercial raid box.
I would bet my money on the DIY ZFS NAS box, every day. Hardware RAID is not even designed to detect data corruption. Or do you think that HW RAID could also do this:
http://hardforum.com/showthread.php?t=1754636
In this thread (active as we speak), ZFS notices a lot of data corruption on several disks at the same time, on his DIY home-made server. That must mean a hardware error, people suggest. So the guy checks his RAM, and sure enough, it is faulty. ZFS detected that. If he had been running hardware RAID, he would have continued without getting any warnings, and slowly his data would have been corrupted.

Here is another story. A guy installs ZFS on his old DIY PC, and within a few minutes he gets data corruption warnings. On several disks. After some research, it turned out that the power supply model was flaky. Do you think hardware RAID would have detected this?
https://blogs.oracle.com/elowe/entry/zfs_saves_the_day_ta
In fact, he recalled that earlier, using the old UFS filesystem, he sometimes had corrupted files and had to restore from backup. But he never thought about those problems. Until he got ZFS.

Data corruption is more common than you think. Maybe there is a reason everybody tries to add checksums like ZFS's today? Maybe there is a reason ZFS is gaining traction? We have more data today than before; more data means a higher risk of corruption. 10 years ago, we had small amounts of data and data corruption was not a concern. There is always a small chance of corruption for every bit read, so if you have a lot of data, you are sure to have corrupted data.

I am surprised you missed all this, after the umpteen ZFS threads here. But if you know of a hardware raid that also has the error detection that ZFS has, please tell us.
 
MS found out that 33% of all Windows crashes are due to non-ECC RAM, so MS tried to get vendors to sell only ECC RAM. MS's attempt did not pan out.

Do you have a source for this claim? I do find the "Windows has notoriously bad uptime" comment funny.
 
Do you have a source for this claim? I do find the "Windows has notoriously bad uptime" comment funny.
I've googled a bit (I am at work right now) but could not find the exact MS report I've read. If you google a bit, you will find the exact numbers.

However, I've found some relevant links (without the numbers):
http://news.cnet.com/8301-10784_3-9721344-7.html
"...According to a report in EE Times, the software maker MS has been privately circulating a white paper that claims errors from standard memory are now among the top 10 causes for system crashes..."


Here is the result of the MS report, "MS recommends ECC RAM":
http://www.tgdaily.com/technology/24190-microsoft-to-encourage-use-of-ecc-memory-for-vista


Old, well-known information about how ECC RAM mitigates crashes:
https://docs.google.com/viewer?a=v&...CFVxFb&sig=AHIEtbQzQmMWndF0MUejWs3YPlksxlcZXg



And maybe you missed it, but Windows has notoriously bad uptime. Sure, it works fine for desktops where you don't stress the hardware. But put Windows on a large server that gets loaded to 100% and it will crash. Not many OSes can handle 100% load for long periods. Mainframes can handle that workload. OpenVMS rivals mainframes with decades of uptime. Unix is not as stable as mainframes/OpenVMS, but almost. Linux is definitely not as stable as Unix. Windows is worst.

For instance, the London Stock Exchange deployed a Windows system (called Tradelect) that crashed sometimes, so the LSE bought MilleniumIT, a small company with stock exchange systems on Linux + Solaris. The Windows Tradelect system cost $70 million and got scrapped after a year. The MilleniumIT company cost $30 million to buy. If the Windows system had worked fine, the LSE would never have scrapped it after a year. There is lots of information on this if you google "LSE MilleniumIT Tradelect". Also, Tradelect suffered from bad latency, because the Windows network stack is slow compared to Linux's, and a stock exchange system needs to be fast.
 
Depends on expectations.
A Qnap or Synology appliance is ideal if you do not want to get into technical details and just want a fairly feature-complete home NAS solution.

If you compare this to an HP Microserver or a special DIY configuration with comparable web-based appliance software based on FreeBSD or Solaris,
like FreeNAS, napp-it, NAS4Free or ZFSGuru, you can get similar comfort but you get

- ZFS (with data security at a new level compared to the NTFS or ext filesystems used in these Linux NAS appliances)
- much more performance at a lower price
- better Windows integration when using Solaris CIFS
- no hardware vendor lock-in (set up a new box and import the disks/datapool there with most settings kept)

In the case of a Microserver or some Supermicro configs I even offer downloadable and preconfigured napp-it USB stick images,
where you also do not need to worry about setup. Download, copy to a stick with the included tool, and run your NAS.

Now compare a smaller Synology or Qnap to a HP Microserver appliance.

The Microserver does look interesting, and Newegg even has a promo for the N40L right now. Can you elaborate on the process with the USB image a bit more? Do I need a pre-existing OS on the Microserver before running the image from the USB?
 
The Microserver does look interesting, and Newegg even has a promo for the N40L right now. Can you elaborate on the process with the USB image a bit more? Do I need a pre-existing OS on the Microserver before running the image from the USB?

I'll chip in real quick, but hoping that others will link to guides.

You do not need any OS on the HP Microserver before installing FreeNAS. The N40L has a small internal USB port (not a header, an actual port) into which you can plug a small USB stick that'll hold your OS for booting/running. I used a Corsair USB stick.

To prep the drive, I think I used this guide (can't remember for sure, but it looks very familiar):

FreeNAS USB bootable stick

After you have your drives installed in the N40L, and you've plugged the FreeNAS USB stick into the internal USB port, just boot up the N40L. You can hook it up to a monitor/keyboard and do the configuration that way, or use the IP address to log in via a web control panel to set up the array.

I suggest using a static IP for ease of use/configuration.

There's also a thread here on Hardforum that is dedicated to the HP Microserver. LOTS OF HELP HERE!
 
The microsever does look interesting and newegg even has a promo for the N40L right now. Can you elaborate on the process with the usb image a bit more? Do I need a pre-existing OS on the microsever before running the image on the USB?

regarding napp-it to go (based on OmniOS, a free Solaris fork):
- download napp-it to Go, http://napp-it.org/manuals/to-go.html, unzip
- start the included USB cloner (Windows app)
- restore the included USB image to a 16 GB USB stick (prefer a fast one)

- plug the stick into the upper front USB slot
- boot (DHCP server needed)

at console:
log in as root, no pw
enter ifconfig to get the IP address

Start a browser with the address http://ip:81
That's it; everything is preconfigured and managed via the Web UI
 
I am surprised you missed all this, after the umpteen ZFS threads here.

Sadly, I've missed none of the zfs cheerleading. I see it week after week, month after month, year after year. It seems like every thread that isn't about SSDs eventually turns into a zfs thread.

But if you know of a hardware raid that also has the error detection that ZFS has, please tell us.

My point is that not every storage scenario warrants zfs-class error detection. In many cases, using zfs is like using a cannon to go duck hunting. ZFS is not the optimal choice for every person and every storage problem on the planet.

Putting important data on a zfs box when the owner has zero unix knowledge is absolute madness. I can't believe that every other thread here recommends doing so. A zfs nas is not an appliance like a router. You can't hold the reset button for 30 seconds and undo all of your mistakes.
 
It's sad that every thread in this forum ends up pushing zfs as the one and only best solution for every data storage problem and every user. It's as if no one ever stored data reliably before zfs existed.
That is true. ZFS is the first reliable filesystem. Others will follow, but ZFS is the first reliable and usable solution. Take that as a fact, and move on.

If you care about your data, you need to protect it properly. There is no solution which does this as well as ZFS.

In particular, all other solutions, including the ext4 in Synology units, offer no protection against bad sectors. A bad sector means a failed disk to those legacy solutions. Most people who use legacy filesystems like ext4 or NTFS blame their hard drive as being 'defective' whenever it generates bad sectors. The truth is that their filesystem is defective, not their hard drives. Their legacy filesystem basically demands a perfect storage device. But today, storage devices are anything but perfect; they come with a 10^-14 uBER specification. That means that about half of all high-density hard drives will generate bad sectors within a year or two.
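To put a rough number on that 10^-14 spec, here is the back-of-envelope arithmetic in Python (the 12 TB pool size is just an illustrative assumption, not from any spec sheet):

```python
import math

# uBER of 10^-14: roughly one unrecoverable read error per 10^14 bits read.
UBER = 1e-14

pool_bytes = 12e12                  # hypothetical 12 TB pool, read end to end
bits_read = pool_bytes * 8          # 9.6e13 bits

expected_errors = bits_read * UBER               # expected bad reads per full pass
p_at_least_one = 1 - math.exp(-expected_errors)  # Poisson approximation

print(f"expected unrecoverable errors: {expected_errors:.2f}")
print(f"chance of at least one error:  {p_at_least_one:.0%}")
```

In other words, at the spec rate a single full read of a pool that size (one scrub, or one rebuild) is more likely than not to hit an unreadable sector, and without block checksums that failure is silent.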

If you choose anything other than ZFS, Btrfs and perhaps ReFS, you will need full 1:1 backups to protect your data. Even then, you have problems like not knowing whether your data is corrupt or not. And limited protection against bad sectors.

ZFS simply has a reason to be popular. In a future where NTFS, ext4 and all that legacy stuff are not widely used anymore, you won't need ZFS. Today, however, ZFS is basically the only real choice for secure mass data storage, particularly for those who cannot afford a 1:1 backup of their data.

Either go ZFS, or live with the fact that your storage solution can fail on something as simple as bad sectors. So many people have lost their data already; you simply have to make a choice between convenience and protection.
 
If you care about your data, you need to protect it properly. There is no solution which does this as well as ZFS.

Either go ZFS, or live with the fact that your storage solution can fail on something as simple as bad sectors. So many people have lost their data already; you simply have to make a choice between convenience and protection.

You missed his point: "... pushing zfs as the one and only best solution for every data storage problem and every user. It's as if no one ever stored data reliably before zfs existed."

I don't put much value on the videos I have stored, but I do value our business data.

Our current business data is about 50GB. It has a value of $300K. Most likely worth more than most of the video collections of posters on this site. I use a defective Windows file system and have never lost any data.

Windows is good enough for low value data like video collections.

---

If your clients are losing (or fear losing) high value data, perhaps different IT people would be a better investment than ZFS.
 
You missed his point: "... pushing zfs as the one and only best solution for every data storage problem and every user. It's as if no one ever stored data reliably before zfs existed."

I don't put much value on the videos I have stored, but I do value our business data.

I don't know why you wouldn't put much value on video collections. If you have 1 or 2TB, sure, maybe one could stick with Windows. But really, why? For DLNA? At this point there are so many third-party solutions for that that it's not really needed. Furthermore, who the hell wants to re-rip TBs of videos? The amount of time it takes to back up Blu-rays is formidable. How about speed? Amazingly, the built-in Windows solutions, Storage Spaces included, are worse than FlexRAID on the same OS. And what does the FlexRAID license cost you? I think it was $30.

The issue isn't that no one has ever stored data reliably. It's that files can get corrupted, and unless you've got a cronjob or some script checking checksums, you have no idea whether you have what you think you have. This is slightly different if you have hardware RAID, which does an OK job of clearing up issues that software RAID would miss. But even that goes out the window if you are talking about the software solutions provided by Windows. Linux/Unix (not just ZFS) still do a better job, with more RAID types and better performance, than Windows will give you. That said, people at home who store massive amounts of data still have issues with bitrot.

Back in the day I wish I'd had something like ZFS. I've gone through the IBM Deathstar era, and Seagate 7200s that ran happily... while they messed up files left, right and center. I had no idea until I scurried to copy files off the dying drive.

There's nothing like listening to a mp3 that's effed up of Tori Amos' Professional Widow. "It's gotta be big".. bleep blop bloop. "It's gotta be" bleep blop boop.

For video, the format itself is pretty resilient against bitrot. However, why risk it if you don't have to? Why drop frames when you have the option of not having corrupted frames?

There is no additional cost to running ZFS at home; the hardware is the same as for Windows, so why not do it? The cost of extra RAM is easily covered by what you save on a Windows license and all the BS that Microsoft puts you through.
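The "script checking checksums" idea mentioned above can be sketched in a few lines of Python. This is a hypothetical standalone example, not FlexRAID's or ZFS's actual mechanism: it records a SHA-256 manifest of a directory tree, then flags any file whose contents later differ.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large media files don't exhaust RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict:
    """Record a checksum for every regular file under root."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify(root: Path, manifest: dict) -> list:
    """Return the files whose current hash no longer matches the manifest."""
    return [name for name, digest in manifest.items()
            if sha256_of(root / name) != digest]

# Demo on a throwaway directory (a stand-in for your real data folder):
root = Path(tempfile.mkdtemp())
(root / "song.mp3").write_bytes(b"fake audio data")
(root / "doc.txt").write_text("important document")

manifest = build_manifest(root)                        # first run: record known-good hashes
(root / "song.mp3").write_bytes(b"fake audio dat\x00") # simulate silent bitrot
print(verify(root, manifest))                          # -> ['song.mp3']
```

Run something like this from cron: the first run writes the manifest, later runs flag mismatches. Note that, unlike ZFS, this only detects corruption; it cannot repair anything by itself.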

Our current business data is about 50GB. It has a value of $300K. Most likely worth more than most of the video collections of posters on this site. I use a defective Windows file system and have never lost any data.

Windows is good enough for low value data like video collections.

If your clients are losing (or fear losing) high value data, perhaps different IT people would be a better investment than ZFS.

The only reason to stick with Windows as your end storage unit at this point is lack of experience with the alternatives. That is pretty much the only reason now, and if that's the only reason, then different IT people would indeed be well advised. Besides, 50GB valued at $300K? Pfft. Shitty workers and a slow Internet connection can produce a loss like that within a week.

It's really not about being a zealot. It's about picking the best option for minimal cost. Hell, we haven't even discussed the limitations of Windows Server 2003 R2 + SMB, which have nothing to do with bitrot.

Windows as an end storage unit blows. Great, they have Storage Spaces... too bad Linux, every flavor of Unix, and Solaris have been doing the same thing with good performance for over a decade. Hell, even FlexRAID on Windows is better than the stuff that's built in.
 
Where did I say that one was more reliable than the other? My point was simply go for what's best for your particular application. I wasn't debating which one is better than the other, there was no "feeling" about it for me. I don't have time for the care and feeding of a DIY NAS solution.

My goal was to set it up and forget it, and sure I probably spent more money vs. DIY'ing but honestly I don't care. In the end I'm happy with my purchase and that's all that matters to me, and that's all that will matter to the OP with his solution.

You wrote " I was running a small FreeNAS box that I built using spare parts I had laying around, and while it worked fine, I never took it serious enough because I didn't feel comfortable trusting my critical data to it."

You said yourself that you didn't "feel" comfortable even though it worked fine. Case closed.

...and we're back to the good old ZFS debate :p

If the data is important enough that a single bit error matters then by all means use ZFS, but the user should understand that they must become familiar with it even if using a frontend such as FreeNAS, because "That's what forums are for" is a really shitty situation to be in when the stuff hits the fan. As a side note it's important to mention that ZFS should not be your only backup if the data is that critical.

I've looked over the pros and cons of ZFS and personally speaking I'd rather stick with what I know. Synology's software backend is great and their support is very well documented as being exceptional. I know the limitations of the hardware/software and I plan accordingly, so the data corruption risk is mitigated for my most critical data.

I respectfully disagree, for the reasons others have pointed out, and I also disagree with your "stuff hits the fan" line. Guess what: the people on forums like [H] and the FreeNAS/NAS4Free boards are very knowledgeable. Look at the huge threads on the N40L and Napp-It, for instance. If you post a question, you are likely to get a response unless it's totally weird. With commercial prebuilts you might get faster or slower service, with more or less knowledgeable respondents, and only for so long, since their support will undoubtedly ebb for older products. Frankly, I am surprised a forumer like you would diss forums the way you did. If I have a Windows 7 problem, the first place I go is the forums, not Microsoft, and that almost always answers my question, probably as well as or better than a dedicated Microsoft employee would, and definitely faster.

As others have said, ZFS DIY boxes are more likely to warn you of imminent failure and for me personally it identified a flaky SATA cable very quickly.

And with DIY, as others have mentioned, you can actually shift from one DIY box to another cheaply, unlike with prebuilts.

No need for strawmen like claiming that one needs ZFS to protect blu-ray rips. (One notorious person on here keeps saying that ZFS is a "terrible" choice for a media fileserver, but "terrible" is an overstatement. And OP's situation is different anyway--he is storing things that sound like they would be more important than that.)

And as for the argument that ZFS only protects data within the NAS and not outside of it, on the client: isn't some additional protection better than none? That anti-ZFS argument is like criticizing an umbrella for failing when winds top 70mph while giving it no credit for the protection it gives below 70mph. (By that analogy, prebuilts might protect you up to 35mph.) It also ignores that, while your client may take snippets of data here and there, the bulk of the data at any time is sitting on drives decaying, sometimes without you knowing until you try to access it after it's already corrupted. http://www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191

This is not to say that prebuilts do not have some advantages, e.g., slightly lower power draw in many cases. Prebuilts sometimes also have large forums depending on the product and company. They are somewhat less finicky to set up, though IMHO, the various NAS GUIs aren't that hard to set up if you have any technical savvy. Etc. OP has already decided for himself, and it's cool if he wants to go prebuilt. I don't think anyone is faulting him for it. We're just giving our opinions, same as you. And imho, DIY NAS is a viable alternative to prebuilts, if you have any decent amount of technical savvy.

Sadly, I've missed none of the zfs cheerleading. I see it week after week, month after month, year after year. It seems like every thread that isn't about SSDs eventually turns into a zfs thread.

My point is that not every storage scenario warrants zfs-class error detection. In many cases, using zfs is like using a cannon to go duck hunting. ZFS is not the optimal choice for every person and every storage problem on the planet.

Putting important data on a zfs box when the owner has zero unix knowledge is absolute madness. I can't believe that every other thread here recommends doing so. A zfs nas is not an appliance like a router. You can't hold the reset button for 30 seconds and undo all of your mistakes.

You caricature ZFS proponents for promoting it even when it doesn't make sense (though there is some truth in this for sure!), but your own argument is a caricature. Most people only need to set up a ZFS NAS once and don't really need to touch it again for a long time afterwards. If a drive fails just replace it and let it resilver. You act as if ZFS NAS users reconfigure their NAS every day and play with settings they don't understand all the time without consulting anyone on forums beforehand if they are stuck. Let's give [H] forumers a little more credit than that, okay? Most people on here have triple-digit IQs.

You missed his point: "... pushing zfs as the one and only best solution for every data storage problem and every user. It's as if no one ever stored data reliably before zfs existed."

Our current business data is about 50GB. It has a value of $300K. Most likely worth more than most of the video collections of posters on this site. I use a defective Windows file system and have never lost any data.

Yes, people had storage prior to ZFS, but you seem to have missed the memo about how the huge increase in data sizes means that older methods might not work as well as they used to. The 10^-14 URE spec for consumer drives has stagnated even as user data has grown exponentially. (And in reality it's worse than 10^-14, which reflects laboratory conditions and doesn't account for multiple read/write rounds. See, e.g., http://www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191)

Are we at the point where most people "need" ZFS? Regardless of what your answer may be, this is [H]. We aren't "most people." Thank goodness. :)

I don't think anyone is arguing that ZFS is necessary for low-value data. As for people claiming that they never lose data, I have three words for you: silent data corruption.

I suppose you were trying to bolster your argument with an anecdote, but I know people who run businesses where if you knew what went on inside, you would stop being a client. Sort of like restaurants where if you looked into the cooking area, you would never go there again. So I will value your anecdotal "evidence" at what it is worth. Also, 50GB is not that big and you are less likely to have corrupt data than people with larger arrays. But I suspect that 50GB will grow. Furthermore, your smug valuation of user's videos is inappropriate when some users could not put a price tag on the value of the memories contained therein. Some things in life you can't put a price tag on, and I would not be so casually dismissive of [H] forumers' desires to protect those memories. (And yes everyone should use backups and not just NASs, whether prebuilt or DIY, for anything they value more than the price of backup.)
 
Our current business data is about 50GB. It has a value of $300K. Most likely worth more than most of the video collections of posters on this site. I use a defective Windows file system and have never lost any data.
Well, I'm happy for you. But that does not invalidate my statement.

In fact, it supports my statement. Your solution more than likely is not using regular consumer harddrives with a 10^-14 uBER specification. Instead, you are using something like 10^-15 or even 10^-16 drives, which develop bad sectors up to 100 times less often than regular harddrives. You are probably also using expensive RAID controllers with battery-protected buffercache, plus a UPS on top. All this money is spent because the software (like NTFS) demands a perfect storage device. ZFS takes a different route, one comparable to how RAID worked in the past....

Back in the old days, you had two choices: either buy a very expensive harddrive that was reliable, or buy cheap harddrives but use intelligent software to transform it into a reliable storage volume. So you see, it is a choice between:

1. Dumb software + expensive harddrives that are more reliable by themselves (proprietary stuff; $$$)
2. Smart software + cheap harddrives that are less reliable by themselves (the ZFS way)

Of course, the companies selling you all the expensive stuff want you to go for option 1. They wouldn't earn much if you bought cheap consumer-grade harddrives, would they? And because such companies rely on dumb proprietary software, their customers often have to spend a lot of money on the hardware.

ZFS is the smart route. You use only cheap harddrives, cheap controllers, no battery backup, no UPS. Just regular consumer-grade cheap hardware. And pair it with intelligent software. There you go; a reliable storage volume.

With redundancy, ZFS can detect and repair bad sectors on the fly, making the uBER specification of harddrives virtually irrelevant. That is the power of intelligent software. Cheers! :)
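The "intelligent software" idea above is easy to illustrate. Here is a toy sketch of the kind of self-healing ZFS does on a mirror: every block carries a checksum, so a bad read is caught and silently repaired from the good copy. This is a deliberate simplification of the real mechanism, and the `ToyMirror` class is entirely made up:

```python
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

class ToyMirror:
    """Two 'disks' holding the same block, plus a stored checksum."""
    def __init__(self, block: bytes):
        self.disks = [bytearray(block), bytearray(block)]
        self.digest = checksum(block)   # recorded alongside the data

    def read(self) -> bytes:
        for i, disk in enumerate(self.disks):
            data = bytes(disk)
            if checksum(data) == self.digest:
                # Self-heal: overwrite any stale copies with the good one
                for j in range(len(self.disks)):
                    if j != i:
                        self.disks[j][:] = data
                return data
        raise IOError("all copies corrupt")

vol = ToyMirror(b"precious family photo")
vol.disks[0][3] = 0x00    # simulate a bad sector on disk 0
print(vol.read())         # checksum catches it; disk 1 serves the data
print(bytes(vol.disks[0]))  # disk 0 has been repaired in place
```

A legacy filesystem in the same situation would happily hand back disk 0's silently corrupted bytes, which is exactly the failure mode being argued about in this thread.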
 
Ok, as someone said, this has evolved into a ZFS thread too.

Let us agree on this:
ZFS is best when you need to protect important data. If the data is not important, you can just as well use another filesystem or hardware RAID. If all you have is media files, you can perfectly well use FlexRAID or SnapRAID or other solutions.

If you have never seen data corruption on small amounts of data, that does not mean data corruption does not exist. "Absence of evidence is not evidence of absence." This is where I felt I needed to chime in with a correction.

Ok? Let us go back to the original question.

FlexRAID or SnapRAID or Synology or any other solution will be fine if you don't have important data.
 
I don't see why something like FlexRAID wouldn't be safe.

Every file is checksummed. If a file gets corrupted due to non-ECC memory or a bad sector, wouldn't the checksum always detect the difference? When that happens, FlexRAID emails me the names of the files that don't match, and then I restore them from parity, or from the cloud in the worst case.

Is this solution really no good, just the ticking time bomb you seem to say it is?
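The restore-from-parity step described above can be illustrated with plain XOR, the basis of single-parity schemes. This toy ignores FlexRAID's actual on-disk format (which isn't documented here), and all names are hypothetical:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three equal-sized data "drives" plus one computed parity block
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Drive 1 dies, or a file on it fails its checksum:
# rebuild the lost block from the survivors plus parity
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
print(rebuilt)   # b'BBBB'
```

This is why a checksum mismatch plus intact parity lets the software hand back the original file: XOR parity can regenerate any single lost block, but only if the corruption is detected first, which is what the per-file checksums are for.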
 
Somehow I went from wanting a cheap ~$100 NAS box to store misc docs and music to wanting something like an HP N40L. This is why I try to stay away from [H], but at the same time it's why I love it here.

After reading all the replies, I wouldn't mind having the N40L and storing my photos there as well. Now you have me second-guessing the RAID DAS that I use as my main photo backup.
 