Which OS for home ZFS system?

Which one, and why? I know what ZFS is, but I don't have enough real experience to choose the best one for my home NAS.

I'm going to be running four 3TB drives on a Core 2 Duo system. I'll be upgrading to something that supports ECC later.
 
I'll second FreeNAS, as it's a good base system that's already set up for you. I also like that it is built on something that has a long history.
 
FreeNAS has the best support and more features than any other ZFS-based OS that I know of, so that's another vote for FreeNAS.
 
Even for ZFS home use, you must decide between:
- based on BSD
- based on Linux
- based on Solaris

Maybe OS X will be the fourth option some day (again). Each has its pros and cons.
 
Here is my vote for Linux. Unlike FreeNAS, you can install your Linux distro of choice directly to the ZFS pool. Specifically, I love Arch (although it may be a bit too complicated if you are not familiar with Linux). The biggest advantages over FreeNAS are that Linux is supported on far more hardware and has far more software available than FreeBSD. Also, because your OS and everything else is on the pool itself, you won't have to worry about corruption of the OS files due to cheap USB drives.

Here is a guide I used once, if you are interested: http://www.jasonrm.net/articles/2013/10/08/arch-linux-zfs-root/

Also, if you don't want to install Linux on the ZFS pool, then just adding ZFS support to an existing install is trivial. For example, on Ubuntu the commands are:

sudo apt-add-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs
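
If you want to sanity-check the install before touching real disks, a throwaway file-backed pool works fine. This is just a sketch; the file path and pool name are arbitrary:

sudo modprobe zfs                              # make sure the kernel module loads
truncate -s 1G /tmp/zfs-test.img               # sparse file standing in for a disk
sudo zpool create testpool /tmp/zfs-test.img
sudo zpool status testpool                     # should show the pool ONLINE
sudo zpool destroy testpool                    # clean up when done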
 
Take a look at PC-BSD. Latest and greatest ZFS, and it has the boot environment feature. No GUI, but I don't really mind...
 
I'd go with napp-it on either OpenIndiana or OmniOS.

FreeNAS won't work under ESXi and, unless you manually change it, forces 512-byte sectors instead of 4K. I also had many issues with transfer speeds; I barely got 20-30MB/s. I can max out a gigabit link with OI + napp-it.
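
For what it's worth, you can check what alignment a pool actually got; ashift=9 means 512-byte alignment and ashift=12 means 4K. Rough sketch only (depending on the platform you may need to point zdb at the pool's cache file with -U):

zdb | grep ashift          # ashift=9 = 512-byte alignment, ashift=12 = 4K alignment
# On ZFS on Linux, 4K alignment can be forced at pool creation time with:
#   zpool create -o ashift=12 ...
# FreeNAS has a force-4K option when creating a volume, if I remember right.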
 
ZFS (or any kind of RAID, really) is tricky on only 4 drives, especially big ones: you're basically choosing between RAIDZ1, which leaves you exposed during the long resilver of a 3TB disk, and mirrored pairs, which cost you half your capacity.
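
Roughly, the two common layouts for four disks look like this (device names are placeholders; pick one layout, not both):

sudo zpool create tank raidz1 disk1 disk2 disk3 disk4            # ~3 disks of usable space, survives any one failure
sudo zpool create tank mirror disk1 disk2 mirror disk3 disk4     # ~2 disks of usable space, faster resilvers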
 
Me personally, I would avoid installing the OS on the main storage pool. (You can install FreeBSD directly onto the storage pool too, not just Linux.) For me it just reduces portability: ZFS pools are readily migrated to different systems by simply exporting and importing them, and installing the OS directly on the pool would seem to get in the way of that. You want to be able to import the pool into new hardware down the road, into a virtual server, etc.

You should still be able to do this with the OS installed on the pool, but I think it would clutter it. I'm also unsure how the new system would handle it once you imported the pool and suddenly had two operating systems on it. Maybe this isn't an issue, though, since the boot loader will still boot whatever your main install is.

I would instead install it onto a second pool, either a single-disk pool or a basic mirror.

Just my personal preference.
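
For reference, the migration itself is about as simple as it gets ('tank' is a placeholder pool name):

sudo zpool export tank        # on the old box, before pulling the disks
sudo zpool import             # on the new box: lists pools found on the attached disks
sudo zpool import tank        # import it (add -f if it wasn't cleanly exported)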
 
You can just do this for the install mountpoint:

sudo zfs create -o mountpoint=none pool/OS
sudo zfs create -o mountpoint=/ pool/OS/Arch

This makes your pool 'clean' and stops a recursive mount into your OS directory. Also, if you ever want to install another OS, you change the Arch dataset's mountpoint to none and create another dataset that mounts at root, e.g.:

sudo zfs set mountpoint=none pool/OS/Arch
sudo zfs create -o mountpoint=/ pool/OS/FreeBSD

You can also make the partitions cleaner by keeping your hard disks pure ZFS and moving only the boot partition to a USB drive.
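
A very rough sketch of that layout, assuming a BIOS machine, GRUB, and a USB stick that shows up as /dev/sdX (placeholder) with a single small partition on it; details will vary by distro and bootloader:

sudo mkfs.ext4 /dev/sdX1        # small boot partition on the USB stick
sudo mount /dev/sdX1 /boot      # kernel and initramfs live here, not on ZFS
sudo grub-install /dev/sdX      # bootloader goes on the stick; the initramfs then mounts the ZFS root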
 
Still a FreeBSD fan. Much more straightforward than others. Will also give you useful Unix knowledge.
 
I'm using FreeNAS 9.1.0 on ESXi 5.1, but I passed an IBM ServeRAID BR10i (LSI) card through to it. That should completely eliminate the sector-size and transfer-speed issues mentioned above with FreeNAS under ESXi, correct? Is there anything else I should be wary of when running FreeNAS on ESXi 5.1? I'm hoping hardware pass-through will eliminate any issues associated with virtualized FreeNAS... I know the FreeNAS support forums have a stickied FAQ that specifically argues against running it in virtualized environments, but they didn't really have a response for the case where you use hardware pass-through.
 
I suggest FreeNAS as well: simple to use, with decent performance if you give it enough RAM.

FreeNAS as a VM does suffer a performance hit, but it's more than fast enough to saturate a gigabit link. It is highly recommended that you pass through an HBA if you virtualize it.

I'm running two of them myself at home, one virtualized with a Dell SAS 6/iR passed through and the other physical. Performance-wise, the virtualized one saturates a single gigabit link just fine; the physical one gives me around 8Gbps on a 10GbE link, and IOPS aren't bad either.

The VM is used to host my media; the other I use for Hyper-V and ESXi storage without issue so far.

Don't skimp on the RAM. ZFS loves RAM, and your Core 2 Duo's maximum RAM capacity is very likely 4 or 8GB. Going with less than 8GB for a FreeNAS ZFS setup can get slow.
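
If you do end up RAM-constrained, you can at least see and cap what the ARC is using. Hedged example for FreeNAS/FreeBSD; the sysctl names are the usual ones, but FreeNAS versions differ in whether you set the limit via the GUI's tunables or loader.conf:

sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max   # current ARC size and its ceiling
# To cap the ARC (e.g. at ~6GB on an 8GB box), set a loader tunable and reboot:
#   vfs.zfs.arc_max="6G"       (in /boot/loader.conf, or as a Tunable in the FreeNAS GUI)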
 
A Core 2 board probably won't support passthrough (VT-d). If he's just serving over gigabit, 8GB of RAM is probably enough.
 
But it's not impossible; it depends on the chipset and BIOS. I actually have two Core 2 desktops and a laptop that can do it.

But yes, 8GB should be plenty for a home NAS on a gigabit link.
 
If you're planning on making use primarily of NFS, I find FreeBSD, Linux, or an illumos derivative all usable. Not equally usable, but all reasonable.

If you're planning on making major use of CIFS/SMB, and don't need SMB2.1 or higher, illumos derivatives have the advantage.

If you're planning on making major use of iSCSI and/or other block protocols, illumos derivatives have the advantage.

If you're planning on using it primarily locally on the same box, FreeBSD or Linux have the advantage, usually, simply due to ease of administration for non-Solaris people and the greater availability of both common software packages and assistance. Linux has an advantage over FreeBSD today in that it supports more hypervisors, with FreeBSD only supporting VirtualBox (and in 10, 'bhyve'?), whilst Linux has kvm and others.

If you're wanting the most solid ZFS experience - illumos derivatives have the clear advantage. At present, they still have the 'most hours on', I'd bet. It won't be long before that crown passes, just because of the sheer size of the FreeBSD & Linux communities and interest levels being on the rise, but for the moment, there it is. ZFS On Linux still has a bit to go, and while FreeBSD is more solid, really solid, I still run into the occasional weirdness you just don't see on illumos-derivs. Random anecdotal example, I had to reboot a FreeBSD box yesterday because a 'zfs rename' of a snapshot caused some weird error implicating the disk labels that caused all zfs & zpool commands to hang up completely, necessitating the reboot. I've only ever seen similar behavior on up-to-date illumos derivatives from failing hardware, never from a simple 'zfs rename'.

Obviously in all cases the hardware is the final arbiter of quality. Stick the most up to date, solid illumos derivative on some $300 desktop cobbled together from the local flea market with non-ECC RAM and aging disks, and you'll end up disappointed eventually, if not right away. I also concur with prior posts - for moderate home use, I tend to recommend at least 4 GB of dedicated ARC space, which implies 6-8 or more GB of system RAM. For power-user home use (think multiple boxes & VM's talking to the ZFS box), 16-32 GB of RAM quickly becomes more attractive.

CPU core count matters, but it should be sacrificed on the altar of higher clock speed. It's nearly impossible to get a single-core proc these days, so I tend to just leave the advice at that. Rarely is ZFS actually pegging all the cores on a modern CPU, so what actually ends up determining latency is how fast the processor is, not how many cores it has. At work, on enterprise systems, we tend to push people toward 3.5+ GHz CPUs, even though they're often dual- or quad-core instead of hex/octo-core, and often more expensive to boot.
 
nex7, good post. What I have seen (anecdotal, of course): I have had odd driver issues and such with illumos setups. That might be due to running in a virtualized (ESXi) environment (possibly an issue with stale/bogus VMware Tools? dunno...). Then again, I've had issues more than once with the mptsas driver under OpenIndiana and OmniOS, which has left me gun-shy.

The single biggest deficiency, from my POV, for ZFS on Linux is that it still isn't integrated well with the udev system. Not ZoL's fault, per se. The problem is (as one of the devs explained) that with FreeBSD and illumos/OpenSolaris/etc., the devices are all visible by the time ZFS fires up, while udev can be wildly variable in when it presents disk devices; some LSI HBAs are particularly problematic that way. The net effect is that you boot up and your pool isn't there, or it is degraded due to missing devices. Other symptoms: NFS shares not being visible, iSCSI LUNs not being presented, and so on. My production NAS/SAN is Ubuntu 12.04 + ZoL, and I have a band-aid in /etc/rc.local that forcibly re-shares NFS datasets, restarts the iSCSI daemon, etc.

And like I said, I like how PC-BSD provides beadm for 'oh crap' scenarios... As soon as I get some time, I'm planning on moving the production NAS/SAN box to PC-BSD.
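
For anyone curious, the band-aid is nothing fancy; roughly something like this at the end of /etc/rc.local (the iSCSI daemon name is whatever target you actually run; tgt here is just an example):

zpool import -a 2>/dev/null     # pick up any pools udev presented too late for the normal boot-time import
zfs mount -a                    # make sure every dataset is mounted
zfs share -a                    # re-export the NFS shares that came up empty
service tgt restart             # kick the iSCSI target so the LUNs get presented again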
 
Yes, most of the illumos derivatives also have 'beadm' or an equivalent. The ability to upgrade into a new snapshot/clone and boot from it, and roll back out of it if there's a problem, is very, very handy in production environments (rough sketch at the end of this post).

I should also correct one comment in my post -- for NFSv3, they're all fairly good. NFSv4, the illumos derivs have a clear advantage in that they support being an NFSv4 server at all, and it isn't horribly broken. :)

I haven't tried PC-BSD in lieu of FreeBSD yet; I think it will be my next test-box install. I've heard good things.
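
For anyone who hasn't used boot environments, the rough workflow is (the BE name is arbitrary):

beadm create pre-upgrade       # checkpoint the current boot environment before touching anything
# ...do the upgrade...
beadm list                     # shows all environments and which one is active on the next boot
beadm activate pre-upgrade     # if it went badly, boot back into the checkpoint on the next reboot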
 
It also allows installing onto a mirrored ZFS root pool without having to jump through all kinds of hoops: reading HOWTOs, a bunch of CLI malarkey, etc.
 