Free NAS OS that doesn't need ECC

Ryland

I have been looking at replacing my aging Synology NAS with a home-built one, but I don't have $600 to spend on this project. Are there any good free NAS OSes that don't require ECC RAM and server-grade hardware? Can I use something like NAS4Free with an onboard RAID controller for two 2TB disks?
 
As far as I know, none of them require ECC. ECC is just highly recommended, as a lot of data passes through RAM on its way to disk, and it would be a shame for something to be written with errors.

There are many writeups on this around the net. I would research before making the choice.
 
You can actually find some pretty cheap servers on eBay to make a NAS out of. I picked up a 12-bay Dell PowerEdge that came with 16GB of ECC for less than 500 bucks. It came with a PERC H700, which supports drives larger than 2TB. I had two of my WD Reds crap out on me in my RAID 10 array, and I can honestly say that if I had cheaped out on the RAID controller, I probably would have lost all of my data. No problems loading FreeNAS and Openfiler on it. But even though FreeNAS doesn't need ECC RAM, I really wouldn't skip it, especially for a file server/NAS/SAN.
 
How about going with a hardware RAID card and not using ZFS? This is just for home use, and I would stick with my Synology, but it can't do transcoding. At this point I'm thinking of just letting my desktop be the Plex server and just upgrading the drive in my Synology.
 
You can do that, but by the time you pick up a hardware RAID card, a cache battery, and a backplane, it will be close to the price of a used server. That's only if you decide to go with an enterprise-level controller.
 
It is definitely sounding like doing this in any way that even closely resembles "correctly" is going to cost way more than I'm willing to throw at it. I had a hard enough time parting with the $120 to buy the original DS110j hardware.
 
Sure, you can use regular desktop parts, but your chances of corrupting ALL of your data go up substantially by not using ECC.
 
Sure, you can use regular desktop parts, but your chances of corrupting ALL of your data go up substantially by not using ECC.

How does a small NAS box get away with it then, or is it using a small amount of ECC memory?

Wouldn't a hardware RAID card alleviate the ECC requirement? I'm thinking this point is moot because decent hardware RAID cards are $200+.
 
Found a bunch of HP DL360 servers on eBay for fairly cheap. Just not sure they support standard SATA drives, but that's a different discussion.
 
Found a bunch of HP DL360 servers on eBay for fairly cheap. Just not sure they support standard SATA drives, but that's a different discussion.

Make sure the RAID controllers have large-drive support. For the most part, all SAS controllers are backward compatible with SATA drives.
 
I thought you could configure it to use either hardware RAID or ZFS.

The OS runs off the memory on the motherboard, not the memory on the RAID card. Sure, the RAID card may be fine, but the OS is working with the data as well, and if one block is corrupted, it then copies itself, and so on and so forth. At first it may be one JPEG you can't open; then the data is moved and another block goes bad. Now it's 10 images and a video you can no longer open...

and it can continue.
 
The OS will have far greater problems than a few JPEGs corrupting if it starts producing faults like that. The most cost-effective option (power-use-wise, etc.) is usually just to get a dedicated cheap NAS box that will do the job without consuming 200W+ in the process.
 
Having ECC is always better than not having ECC.
That said, though, you can reduce your risk of losing *all* your data on a non-ECC system by strategically picking a file system. I'm no expert, so I won't make a specific recommendation, but what I'm saying is "something other than ZFS". ZFS can trash the entire volume if one bit flips. On a Windows machine with NTFS, by comparison, over the decades I have occasionally had a JPEG in a folder get corrupted due to bit rot.

So a non-ECC machine running the right file system will keep most of your data safe and only corrupt the odd file.

Hardware RAID controllers have ECC, so if you're running a RAID on a non-ECC system, use one of those. It won't protect you from a main-RAM bit flip wrecking the odd file, but it will protect you from the entire array getting trashed (versus a software-based RAID using main memory). Go JBOD if no hardware RAID. (IMHO)

And of course, have backups no matter what you do.
 
At this point I am either not going to bother upgrading my current NAS box, OR I'll buy a server from eBay and use that. Hopefully a two-processor quad-core Xeon will be enough for transcoding.
 
The only situation I have ever had file corruption was when I had a faulty controller. Images would corrupt during copy.
 
At this point I am either not going to bother upgrading my current NAS box, OR I'll buy a server from eBay and use that. Hopefully a two-processor quad-core Xeon will be enough for transcoding.

Not necessarily: it depends on which quad-core Xeons you're talking about. Some of them are pretty crappy by today's standards (like the ones that use DDR2 FB-DIMMs, for example) and some can do fairly well (any that use DDR3, at a minimum).
 
Not necessarily: it depends on which quad-core Xeons you're talking about. Some of them are pretty crappy by today's standards (like the ones that use DDR2 FB-DIMMs, for example) and some can do fairly well (any that use DDR3, at a minimum).

I did end up finding that out when I went looking on eBay. One weird thing is that I can find plenty of microATX boards with ECC support but no small cases for them, yet I can find plenty of mini-ITX boards without ECC support and lots of small cases. I just can't find a good combination. Ah well, I will have the disks I need in case I find a combination that works for me.
 
I'm no expert, so I won't make a specific recommendation, but what I'm saying is "something other than ZFS". ZFS can trash the entire volume if one bit flips. On a Windows machine with NTFS, by comparison, over the decades I have occasionally had a JPEG in a folder get corrupted due to bit rot.
The only reason I could see ZFS trashing an entire volume is if the table got corrupted. In that instance, the same thing would happen with ext4, btrfs, NTFS, FAT32, etc.

ECC RAM is highly recommended, but not required for any file server. Not using it increases your chances for data loss - But it's not a huge number to begin with. If an ECC system has a 0.1% chance to lose data, a non-ECC system may have a 0.2% chance. "Double the chance of data loss!"
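To put rough numbers on how a small per-year risk compounds over a machine's lifetime (the 0.1%/0.2% figures above are illustrative, not measured rates), a quick sketch:

```python
# Toy numbers only -- the 0.1% / 0.2% figures in this thread are
# illustrative, not measured failure rates.
def p_loss_over_years(p_per_year: float, years: int) -> float:
    """Chance of at least one data-loss event over `years` years,
    assuming the same independent risk each year."""
    return 1 - (1 - p_per_year) ** years

ecc = p_loss_over_years(0.001, 10)      # roughly 1% over a decade
non_ecc = p_loss_over_years(0.002, 10)  # roughly 2% over a decade
print(f"ECC: {ecc:.2%}, non-ECC: {non_ecc:.2%}")
```

So "double the chance" stays double over any horizon, but both figures stay small, which is the point being made.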
 
One of the guys on the FreeNAS forum is extremely belligerent about using ECC memory. Why would someone use a filesystem that can be trashed by a single bit flipping when there are other, more robust ones out there that can handle a single bit flip, even if it results in a corrupt file? I have had Windows installs go bad due to bad memory, which is why I have backups of that machine. The NAS box would be a backup destination but also the only source of my video files, which is why I'm leaning towards going the recommended route with ECC memory.

I did find a new Intel server motherboard from 2012 that takes LGA 1155 and DDR3 memory for $99. I'm tempted to go in that direction because it lets me pick a decent processor, but there just aren't that many small cases for a microATX board.
 
It looks like Directron has a few Apex SFF cases which aren't necessarily "small" but could be small enough.
 
The only reason I could see ZFS trashing an entire volume is if the table got corrupted. In that instance, the same thing would happen with ext4, btrfs, NTFS, FAT32, etc.

ECC RAM is highly recommended, but not required for any file server. Not using it increases your chances for data loss - But it's not a huge number to begin with. If an ECC system has a 0.1% chance to lose data, a non-ECC system may have a 0.2% chance. "Double the chance of data loss!"

Are you factoring in bit errors from the hard drives themselves as well?
 
The only reason I could see ZFS trashing an entire volume is if the table got corrupted. In that instance, the same thing would happen with ext4, btrfs, NTFS, FAT32, etc.

Is that really true, though? If the journal gets damaged, isn't it usually correctable with fsck or chkdsk (albeit you might lose some files)?
 
I prefer Linux's software RAID. Have been running a RAID 5 array for over a decade. It has been simple to expand the size with larger hard drives and even migrate to RAID 6 for better data protection.

The whole ECC memory thing is a bit misleading. There's no real point in worrying about the 0.0000001% risk from using non-ECC memory when the risk of data loss due to other factors is several thousand times greater. Your data is much more likely to get corrupted due to disk failure, software failure, accidental deletion, a virus, etc. For this reason, most modern file systems are fairly fault tolerant and will not melt down if a random bit gets flipped. The bigger issue is data integrity, for which other layers of protection are in place (i.e., parity in RAID 5). Linux has an option that regularly rechecks on-disk data against the parity information.

My final point is that these storage systems are not designed to replace proper backup procedures. They're meant to minimize downtime and improve ease of recovering from certain types of disasters.
 
I prefer Linux's software RAID. Have been running a RAID 5 array for over a decade. It has been simple to expand the size with larger hard drives and even migrate to RAID 6 for better data protection.

The whole ECC memory thing is a bit misleading. There's no real point in worrying about the 0.0000001% risk from using non-ECC memory when the risk of data loss due to other factors is several thousand times greater. Your data is much more likely to get corrupted due to disk failure, software failure, accidental deletion, a virus, etc. For this reason, most modern file systems are fairly fault tolerant and will not melt down if a random bit gets flipped. The bigger issue is data integrity, for which other layers of protection are in place (i.e., parity in RAID 5). Linux has an option that regularly rechecks on-disk data against the parity information.

My final point is that these storage systems are not designed to replace proper backup procedures. They're meant to minimize downtime and improve ease of recovering from certain types of disasters.

But you pulled the 0.0000001% out of thin air. That number is very shaky, especially if you consider that RAM is hardware that can be good today but go bad tomorrow.

And the disk and the bus to the disk use checksumming, as do your CPU caches. Your RAM is the only thing that even has the option of just flying with plain bits.
 
But you pulled the 0.0000001% out of thin air. That number is very shaky, especially if you consider that RAM is hardware that can be good today but go bad tomorrow.

And the disk and the bus to the disk use checksumming, as do your CPU caches. Your RAM is the only thing that even has the option of just flying with plain bits.

If RAM-based failures were common, no servers/workstations would have long uptimes; they'd crash from RAM failures frequently. Luckily this is not the case.
 
If RAM-based failures were common, no servers/workstations would have long uptimes; they'd crash from RAM failures frequently. Luckily this is not the case.

Lots of servers and workstations have ECC RAM, which helps uptime.

The study of Google's servers showed that about 8% of DIMMs had a correctable ECC error at least once a year. The study also showed that correctable errors are highly correlated, in that a DIMM with a correctable error was much more likely to experience more correctable errors in the same month.
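Just to show what that ~8% per-DIMM figure implies for a whole box, here's a rough sketch. It assumes errors are independent across DIMMs, which the study itself says is not quite true, so treat it as an upper-level estimate, not a measured result:

```python
# Back-of-the-envelope from the ~8% per-DIMM-per-year figure cited
# above. Assumes DIMMs error independently (the study says they
# don't, so real numbers will differ).
P_DIMM = 0.08  # chance a single DIMM sees a correctable error in a year

def p_any_dimm_errors(n_dimms: int, p: float = P_DIMM) -> float:
    """Chance that at least one of n_dimms sees an error in a year."""
    return 1 - (1 - p) ** n_dimms

for n in (2, 4, 8):
    print(f"{n} DIMMs: {p_any_dimm_errors(n):.1%} chance per year")
```

With 8 sticks you're approaching a coin flip per year, which is why the 8% headline number matters more than it first sounds.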
 
In my opinion, for a home server, one or two large drives backed up to two or four offline large drives beats RAID or ZFS any day.
If you're worried about bit rot, there are a few options for doing checksums.
But in my opinion, having dealt with thousands of servers since the Windows NT days, bit rot is being blown out of proportion.
I have seen one case of bit rot that was not attributed to a bad controller or drives.
ZFS will not save you from those. Effective long-term backups will.

Spending more on backup will also do much more good than spending more on ECC RAM.

At work, servers are all about uptime, and when you have to have maximum uptime, you use ECC RAM.
Otherwise, in my opinion, it's optional if you're not running ZFS.

Also, only a small subset of RAM errors are ECC-recoverable.
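For the checksum option mentioned above, here's a minimal sketch of the idea: record hashes once, then re-verify against them later to spot silent corruption. The paths and manifest layout are just an illustration, not any particular tool:

```python
# Minimal bit-rot check: record SHA-256 sums once, re-verify later.
# Any file whose hash changes without you touching it is a candidate
# for corruption (or, more likely, a bad controller/drive).
import hashlib
import os

def sha256_of(path: str) -> str:
    """Hash a file in 1 MiB chunks so large files don't eat RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str) -> dict:
    """Map each top-level file name under root to its hash."""
    return {name: sha256_of(os.path.join(root, name))
            for name in sorted(os.listdir(root))
            if os.path.isfile(os.path.join(root, name))}

def verify(root: str, manifest: dict) -> list:
    """Return the files whose current hash no longer matches."""
    return [name for name, digest in manifest.items()
            if sha256_of(os.path.join(root, name)) != digest]
```

Run `build_manifest` after each backup, stash the result with the backup, and run `verify` before you ever need to restore.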
 
In my opinion, for a home server, one or two large drives backed up to two or four offline large drives beats RAID or ZFS any day.
If you're worried about bit rot, there are a few options for doing checksums.
But in my opinion, having dealt with thousands of servers since the Windows NT days, bit rot is being blown out of proportion.
I have seen one case of bit rot that was not attributed to a bad controller or drives.
ZFS will not save you from those. Effective long-term backups will.

Spending more on backup will also do much more good than spending more on ECC RAM.

At work, servers are all about uptime, and when you have to have maximum uptime, you use ECC RAM.
Otherwise, in my opinion, it's optional if you're not running ZFS.

Also, only a small subset of RAM errors are ECC-recoverable.

Yep, it has been my understanding also that the most probable cause of data corruption is a controller problem. Even if the server flipped one bit randomly, it wouldn't affect any files unless that file was being handled in memory at that moment. Files sitting on the storage media are unaffected. So if there's an 8% chance your hardware will suffer a memory error in 365 days, and a file operation would have to happen at the exact moment of the memory error, the odds of it striking you are pretty low.
 
A good UPS is also mandatory for both home and work.
Preferably a "smart" type.
 
Lots of servers and workstations have ECC RAM, which helps uptime.

The study of Google's servers showed that about 8% of DIMMs had a correctable ECC error at least once a year. The study also showed that correctable errors are highly correlated, in that a DIMM with a correctable error was much more likely to experience more correctable errors in the same month.

This is why I recommend ECC memory: 8% of DIMMs see a correctable error, and when memory has that issue, it is more likely to have more.

This plays into the other needs for a good system. I believe the OP said that this will be "the only location for his movies." If that's the case, plan on losing the movies. One copy means no copy, in my book.
 
The study of Google's servers showed that about 8% of DIMMs had a correctable ECC error at least once a year. The study also showed that correctable errors are highly correlated, in that a DIMM with a correctable error was much more likely to experience more correctable errors in the same month.

To me this means 8% of the ECC RAM that Google has in their servers is defective and needs replacing. In my small sample size of dozens of servers (over the last two to three decades), I have only seen ECC errors on systems with defective RAM, and replacing the defective RAM fixes the issue. I cannot believe their study did not investigate that, especially when their data showed systems with ECC corrections had a much higher chance of having more corrections.
 
To me this means 8% of the ECC RAM that Google has in their servers is defective and needs replacing. In my small sample size of dozens of servers (over the last two to three decades), I have only seen ECC errors on systems with defective RAM, and replacing the defective RAM fixes the issue. I cannot believe their study did not investigate that, especially when their data showed systems with ECC corrections had a much higher chance of having more corrections.

Agree completely. Personally, I see ECC as similar to RAID: it protects your system uptime from certain kinds of hardware failures. While not having ECC RAM can lead to data corruption in the event of a correctable fault, it is far more likely to simply crash the system. If I saw ECC-correctable errors being logged, I'd certainly RMA the offending modules.
 
Agree completely. Personally, I see ECC as similar to RAID: it protects your system uptime from certain kinds of hardware failures. While not having ECC RAM can lead to data corruption in the event of a correctable fault, it is far more likely to simply crash the system. If I saw ECC-correctable errors being logged, I'd certainly RMA the offending modules.

Exactly, but if you are not finding the errors, you could be propagating errors in your data and not know it.
 
Exactly, but if you are not finding the errors, you could be propagating errors in your data and not know it.

Not all RAM errors are a problem with Linux.
Linux can be told to mark RAM areas as bad and not use them.
I have used a system with known-bad RAM that would blue-screen Windows XP but worked fine on Linux with zero issues and a year-plus of uptime, no data corruption. At first I just used it to test, but when it proved stable I kept it up and used it.
It was a Dell out of warranty and not worth putting money into.
I would not take such a chance with any system running ZFS, however.
 