Has anyone compared FreeBSD ZFS vs ZFS on Linux?

mrjayviper

Weaksauce
Joined
Jul 17, 2012
Messages
91
I will be rebuilding my file server soon and I'm thinking of using ZFS on Linux this time instead of FreeBSD. I am currently using FreeBSD 10 (not FreeNAS or similar distros) and it's generally working fine.

I've been using FreeBSD since version 5 or 6 (it's been more than a decade for sure :p) but trying out new things can be fun for me. :)

The big advantage I see of sticking with FreeBSD is the level of ZFS support available, since it's built into the base system. The advantage I see of using Linux is better package management (sure, FreeBSD has ports, but its package updates are painful).

I'm not really concerned with write/read speeds as much since it's only for a home network. But I don't want atrocious speeds either.

Thanks very much for any replies :)
 
I've used OpenIndiana+ZFS, FreeBSD+ZFS and Ubuntu/Debian+ZFS, and in all cases the performance of a 6-disk raidz2 could saturate gigabit 2-3 times over, which is more than enough for what we need. As such we use ZFS on Linux for reasons similar to yours (better package management and generally easier to use for my skill set).

I can't find my benchmarks spreadsheet but I stopped putting much effort into thoroughly benching it when I found that ZFSoL performed more than well enough for our needs, so the decision became easy.
 
I've moved all my storage over to ZoL. Even though I use 10Gb I found the performance differences to be small, and not always in favor of the same OS.
 
I also moved to ZoL; I found it to be the easiest to get good performance from (especially compared to OpenSolaris-based distros, which had performed well for me in the past but at some point were ruined by updates - this also happened to a friend who runs NexentaStor, after doing an update). In my opinion ZoL's downside is that they are sometimes slow to support new kernel versions, especially if you wait for an actual release of ZoL. Otherwise you'll be grabbing the source from Git and hoping things are stable enough. Then again, you could just never update your kernel, and you'd have a scenario more comparable to FreeBSD ;)

So to me it's either Solaris (real Solaris) or Linux. The OpenIndiana and Illumos et al. devs mean well, but I have no idea what they think they're improving when their updates cause near-wire-speed file transfers to slow down to ~30MB/s. And again, it happened to two different machines that I'm familiar with - machines with completely different hardware (one AMD and one Intel, a couple of generations apart). It happened with multiple OpenSolaris distros.

I tried FreeBSD briefly, but it has worse hardware support than Linux without the benefit of ZFS having evolved natively on that platform (i.e. as it did on Solaris). I didn't see any advantages over Linux, personally, but I've never been interested in BSD.
 
I also use ZFS on Linux. Mostly for the same reason as others mentioned, I wanted to use it with CentOS because I already use CentOS for managing other things.

Getting ZFS to work with SELinux was a pain to figure out, but it works if you mount the filesystems using /etc/fstab or a manual "mount" command. This is considered a "legacy" mount in ZFS parlance. Doing it this way allows one to set the SELinux context manually.
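
For reference, a minimal sketch of the legacy-mount approach (the pool/dataset name, mountpoint and SELinux context below are just placeholders):

zfs set mountpoint=legacy tank/data
# /etc/fstab entry; the context= option applies the SELinux label at mount time
tank/data /srv/data zfs defaults,context="system_u:object_r:samba_share_t:s0" 0 0
mount /srv/data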
 
Another happy ZoL user here. The Linux community for ZFS seems to be a lot more active than on BSD and you can find a lot more help and support.
 
ZFS on Linux also. My home server zpool was created July 2011. Has been great. I have even moved to ZFS root pools (rpools). All the ZFS benefits, especially the snapshotting, work on my boot device now.

My work computer is also a ZoL rpool mirror. The only problem with either is that you have to be sure, after every kernel update, that DKMS has rebuilt your modules before you reboot.
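
A quick sanity check before rebooting looks something like this (the kernel version string is just an example):

# confirm DKMS has built the zfs module(s) for every installed kernel
dkms status
# or check that the module exists for the new kernel specifically
modinfo -k 3.10.0-example zfs | head -1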
 
And another happy ZoL user. My company had been using Linux VM hosts with KVM as the hypervisor and LVM for storage, and moving the latter to ZoL made things much nicer to handle. That was a year ago and only minor issues have cropped up so far. At home I've been running ZoL for 2 years with no issues at all; it helped me simplify the setup coming from an ESXi/OpenIndiana AIO.

Needless to say Linux has a couple of things going for it: Hardware support, software availability, the giant community and therefore tons of resources.
 
Early this year at work I moved half of my 70TB over from btrfs or ext4+LVM on top of mdadm raid6 to ZFS raidz2 or raidz3. This is ZoL under Gentoo. At home, half of my HTPC storage is on ZoL; in this case each disk is its own zpool and I am using snapraid on top of this.
 
Not sure I understand why - if you have single-disk raid, zfs can't do any healing for you.
 
Not sure I understand why - if you have single-disk raid, zfs can't do any healing for you.

Power savings. The 2 snapraid parity disks are off most of the time. Also the data disks can individually power down when not in use. Snapraid provides the healing for these.
 
Not sure I understand why - if you have single-disk raid, zfs can't do any healing for you.

A zpool can auto-repair even with single-device vdevs, as long as the filesystems have the copies property set to a value greater than 1. This only protects against bit rot; it doesn't help if the disk goes offline.
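
Setting it is a one-liner, for what it's worth (the dataset name is a placeholder):

# store two copies of every block in this dataset; space use roughly doubles
zfs set copies=2 tank/data
zfs get copies tank/data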

That being said, I'm still not sure why he wouldn't simply run snapraid on top of ext4 instead of bother with ZFS for single disk pools.
 
How do ZFSoL and Samba handle Windows-style ACLs? The native CIFS server is one of the things that has kept me on Solaris derivatives (besides being a glutton for punishment). I actually use AFP mainly, but SMB is a necessity because we do have a Windows laptop and it works with everything.

I'm quite torn because I feel like Solaris was a solid platform and it's always good to have experience with many things, but I feel like it's a sinking ship and it's much easier to find support for Linux.
 
That being said, I'm still not sure why he wouldn't simply run snapraid on top of ext4 instead of bother with ZFS for single disk pools.

It's just another filesystem. ZFS isn't really any more of a bother than ext4 IMO.

zpool create poolname disk
vs
mkfs.ext4 disk
 
It's just another filesystem. ZFS isn't really any more of a bother than EXT4.

zpool create poolname disk
vs
mkfs.ext4 disk

Also I get compression, snapshots and datasets. Compression does not help much on the HTPC data but it does with the other datasets on the pool.
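
To give an idea of what that looks like in practice, a rough sketch with placeholder names:

# one disk, one pool, lz4 compression, separate datasets per use
zpool create media /dev/disk/by-id/ata-EXAMPLE-SERIAL
zfs create -o compression=lz4 media/recordings
zfs create -o compression=lz4 media/docs
# cheap point-in-time snapshot of a dataset
zfs snapshot media/docs@nightly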

On top of this I do some testing on the home system before I deploy at work (although I do have additional testing systems at work).
 
Yeah, I get that. If you are doing something like snapraid, then I see your point. As to the idea of running with copies=2 or whatever, that will double your space usage, no? It would also cut your write IOPs in half. So why not create a bunch of 2-drive mirrors and snapraid those pools together (speaking to devman here.)
 
Yeah, I get that. If you are doing something like snapraid, then I see your point. As to the idea of running with copies=2 or whatever, that will double your space usage, no? So why not create a bunch of 2-drive mirrors and snapraid those pools together (speaking to devman here.)

I was just mentioning that it exists as a capability, not commenting on the merits of doing it. You are correct about the space requirements.
 
Understood. I wasn't trying to get into a p*ssing contest, just pointing out that while it may be possible, I can't imagine any scenario where it would be better than a bunch of raid1 pools. There is at least one Linux-based HA ZFS solution that requires redundancy via hardware RAID and then creates a single 'disk' pool on top of the resulting LUN, which I thought was beyond silly, since you lose a lot of what makes ZFS good. drescherjm has a legitimate use case - that commercial product... not so much.
 
I've been using ZoL for probably two years now at home. At work it runs on some backup servers.

Early this year at work I moved half of my 70TB over from btrfs or ext4+LVM on top of mdadm raid6 to ZFS raidz2 or raidz3. This is ZoL under Gentoo. At home, half of my HTPC storage is on ZoL; in this case each disk is its own zpool and I am using snapraid on top of this.

I do the same. Well I have one zpool with a pair of mirrored SSDs for VMs, one pool with a pair of mirrored 4TB drives for non-media stuff, and then many single disk pools with snapraid on them for redundancy. Lets me spin down most disks most of the time.

You also can easily add one disk at a time and mix disk sizes - all the benefits of snapraid.
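
For anyone curious how the snapraid half fits together, the config is roughly this (paths and names are placeholders; check the snapraid docs for your version):

# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2

Each /mnt/diskN is just the mountpoint of one single-disk zpool.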
 
Also I get compression, snapshots and datasets. Compression does not help much on the HTPC data but it does with the other datasets on the pool.

On top of this I do some testing on the home system before I deploy at work (although I do have additional testing systems at work).

You also get ZFS checksums. But when ZFS detects corruption it can't repair, you have to hope your snapraid parity (or however that works, I don't know) can repair it. I would want to scrub this all the time, negating the power benefits.
 
I would want to scrub this all the time, negating the power benefits.

I scrub about once a week. Usually this is about 100 to 200GB of data that needs to be updated. Remember this is recorded TV programming - over 90% of what I record will show up in reruns. My wife's stuff doesn't rerun, but I record that twice, and MythTV balances disks so that simultaneous recordings will most likely end up on different storage devices.
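
The scheduling side is trivial; mine is basically a cron entry like this (pool name and timing are examples):

# /etc/cron.d/zpool-scrub - scrub the media pool early Sunday morning
0 3 * * 0 root /sbin/zpool scrub media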
 
I've been using ZoL for probably two years now at home. At work it runs on some backup servers.



I do the same. Well I have one zpool with a pair of mirrored SSDs for VMs, one pool with a pair of mirrored 4TB drives for non-media stuff, and then many single disk pools with snapraid on them for redundancy. Lets me spin down most disks most of the time.

You also can easily add one disk at a time and mix disk sizes - all the benefits of snapraid.

ZoL under CentOS on my backup server, not a single problem. :D
Still waiting for btrfs to get stable RAID6-like support...
 
I got tired of waiting; that was part of my initial reason for moving to ZoL.

Hahaha, yeah, slow for sure.
I guess Oracle was the showstopper for btrfs in the beginning :)
Remember what happened to Solaris after they bought the "bankrupt" Sun...

I am still using ZoL until the btrfs raid5/6 equivalent is stable.
 
I still use FreeBSD for all my ZFS, Linux only on test installs so far.

However, I want to warn against casually picking up random performance claims. The ZFS code is mostly the same, and ZFS is a RAM hog. So performance differences in ZFS will most likely be in the memory management in the kernel - and there are huge differences between FreeBSD and Linux in how they use RAM.

If you already have hardware you intend to use I recommend benchmarking on your own, with the amount of RAM you want to run, and with half of it.
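
One knob worth knowing about on the ZoL side if RAM is the concern: you can cap the ARC with the zfs_arc_max module parameter (the 4GiB value below is just an example; pick something sensible for your box):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296

The right value depends entirely on the workload, which is exactly why benchmarking with your own hardware and RAM amounts is good advice.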
 
I still use FreeBSD for all my ZFS, Linux only on test installs so far.

However, I want to warn against casually picking up random performance claims. The ZFS code is mostly the same, and ZFS is a RAM hog. So performance differences in ZFS will most likely be in the memory management in the kernel - and there are huge differences between FreeBSD and Linux in how they use RAM.

If you already have hardware you intend to use I recommend benchmarking on your own, with the amount of RAM you want to run, and with half of it.

I am more Linux-oriented, since I know how to do things on Linux.

Linux memory management is pretty good.

About the huge differences (as I understand them) - which OS are you more familiar with? :)
 
I've been using OpenIndiana v148 and v151 with raidz2 for a couple years now, alongside Ubuntu 10.04 and 12.04 boxes with mdadm/raid-6/xfs, for serving up NFS and iSCSI. The OpenIndiana boxes have uptimes measured in years. The Ubuntu boxes all give me kernel panics every couple months. Similar hardware specs and storage amounts on all the boxes (good old Q6600's, 8GB RAM all around, sasuc8i and lsi cards, mostly 2TB drives). These guys are getting hit with constant random reads and writes from about 40 other boxes as fast as they can read and write.

I get near wirespeed on the OpenIndy boxes. The Ubuntu boxes get less than half that. My theory on that is that ZFS is doing a much better job of caching writes, then writing out sequentially to disk, than whatever Linux is doing. The Linux boxes hit the disk all the time. The OpenIndy boxes write for a bit, then stop, then write for a bit, then stop. Does ZoL keep that better write cache behavior? Maybe ZoL would be worth a try, but I don't really trust Linux not to crash.

I've never patched any of the OpenIndy boxes. All they do is NFS and iSCSI on an isolated network. I have patched the Ubuntu boxes whenever they panic or oops, but it doesn't change much.

The only super stable Linux file server boxes I have are the Centos one for my Rocks cluster, and one debian squeeze NFS & NIS server. So maybe it's just a Ubuntu thing.
 
For those interested, ZFS on Linux has been updated to 0.6.3:
http://zfsonlinux.org/

I'll cross-post this in the Linux subforum.

The big thing (for me at least) is that 0.6.3 adds full support for SELinux. I was able to get SELinux working with 0.6.1 and 0.6.2 using legacy mounting, but now it should work correctly with standard ZFS mounting, as zfs mount will now pass the SELinux options through correctly.
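
If I understand the change correctly, that means the label can live on the dataset itself instead of an fstab line. A quick sketch with placeholder names (ZoL exposes context, fscontext, defcontext and rootcontext properties for this):

zfs set context="system_u:object_r:samba_share_t:s0" tank/share
zfs mount tank/share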
 
I've been using OpenIndiana v148 and v151 with raidz2 for a couple years now, alongside Ubuntu 10.04 and 12.04 boxes with mdadm/raid-6/xfs, for serving up NFS and iSCSI. The OpenIndiana boxes have uptimes measured in years. The Ubuntu boxes all give me kernel panics every couple months. Similar hardware specs and storage amounts on all the boxes (good old Q6600's, 8GB RAM all around, sasuc8i and lsi cards, mostly 2TB drives). These guys are getting hit with constant random reads and writes from about 40 other boxes as fast as they can read and write.

If you are getting kernel panics every couple of months then there's probably something seriously awry with your setup. Ubuntu 10.04 and 12.04 power some of our servers at work, and I've never seen a kernel panic from them yet. The uptime on those boxes also measures in years unless we have a kernel update. Specifically, those two versions powered our file server and our KVM server. Both of them are solid.

I get near wirespeed on the OpenIndy boxes. The Ubuntu boxes get less than half that.

Here as well, there's probably something up with your configuration. Unless you are talking 10GbE, there really isn't a reason why you can't max out a 1GbE connection. You can almost do that with a single disk, and you can definitely hit it in bursts.

I'm a ZFS fan too, but before ZFS there was mdadm, which is old as dust and quite stable and reliable. Many file servers used to rely on it. I doubt very seriously that your experience is indicative of what people can expect from normal use.

You can also tune mdadm for reads and writes. This may be of some help:

http://h3x.no/2011/07/09/tuning-ubuntu-mdadm-raid56
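
The usual knobs from that sort of guide look roughly like this (md0 and the numbers are examples; tune for your own array and RAM):

# enlarge the raid5/6 stripe cache (memory cost: entries x 4KiB x number of disks)
echo 8192 > /sys/block/md0/md/stripe_cache_size
# increase readahead on the md device
blockdev --setra 65536 /dev/md0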
 
I realize this is old, but has anyone come up with benchmarks? A quick search didn't find any. I would like to compare ZoL to EXT4 on the rootfs. I especially would like to see benchmarks with rootfs on a raidz vs raid2... I don't care if it is on an SSD or HDD.
Thanks! :)
 
It seems to be true that ZoL is as performant as ZFS on FreeBSD now, but I don't have anything easy to cite as evidence.
 
Ooo, perfect timing for me, as I'm looking to decommission my old ZFS server and was looking at ZoL stuff since it's caught up to ZFS on FreeBSD.
I'm curious what people use to manage their ZFS. All command line? Any web front ends? I'm looking at a new storage server for backups.
 
Ooo, perfect timing for me, as I'm looking to decommission my old ZFS server and was looking at ZoL stuff since it's caught up to ZFS on FreeBSD.
I'm curious what people use to manage their ZFS. All command line? Any web front ends? I'm looking at a new storage server for backups.

Command line is fine. Maybe you want some notifications for various things, like if errors are detected or if you go over a certain disk-space percentage... Otherwise, I find the command line ideal.
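
A bare-bones example of the notification idea, assuming cron mails you any output (the address, schedule and 80% threshold are arbitrary):

# /etc/cron.d/zfs-check
MAILTO=admin@example.com
# report anything unhealthy
0 8 * * * root zpool status -x | grep -v 'all pools are healthy'
# warn when any pool passes 80% capacity
0 8 * * * root zpool list -H -o name,capacity | awk '$2+0 > 80 {print $1 " is at " $2}'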
 
Trying to work with OpenMediaVault right now- not because I have anything against FreeNAS, but because there isn't a FreeBSD driver for the Aquantia NIC I'm using. OMV is up to Kernel 4.19 apparently, and the remote desktop plugin works spectacularly if you just want a GUI; otherwise, it's Debian, so it'll run whatever.
 