Has anyone compared FreeBSD ZFS vs ZFS on Linux?

Discussion in 'SSDs & Data Storage' started by mrjayviper, May 18, 2014.

  1. mrjayviper

    mrjayviper [H]Lite

    Messages:
    70
    Joined:
    Jul 17, 2012
    I will be rebuilding my file server soon and I'm thinking of using ZFS this time instead of FreeBSD. I am currently using FreeBSD 10 (not FreeNAS or similar distros) and it's generally working fine.

    I've been using FreeBSD since version 5 or 6 (it's been more than a decade for sure :p) but trying out new things can be fun for me. :)

    The big advantage I see of sticking with FreeBSD is the level of ZFS support available, since it's built into the base system. The advantage I see of using Linux is better package management (sure, FreeBSD has ports, but its binary package updates are poor).

    I'm not really concerned with read/write speeds as much since it's only for a home network, but I don't want atrocious speeds either.

    Thanks very much in advance for any replies :)
     
    Gabe likes this.
  2. JoeComp

    JoeComp [H]ard|Gawd

    Messages:
    1,037
    Joined:
    Jan 23, 2012
    I'm guessing you meant Linux (ZFS -> Linux).
     
  3. Jim G

    Jim G Limp Gawd

    Messages:
    221
    Joined:
    Jun 2, 2011
    I've used OpenIndiana+ZFS, FreeBSD+ZFS, and Ubuntu/Debian+ZFS, and in all cases the performance of a 6-disk raidz2 could saturate gigabit 2-3 times over, which is more than enough for what we need. As such we use ZFS on Linux for reasons similar to yours (better package management, and generally easier to use for my skill set).

    I can't find my benchmarks spreadsheet but I stopped putting much effort into thoroughly benching it when I found that ZFSoL performed more than well enough for our needs, so the decision became easy.
     
  4. Silhouette

    Silhouette Limp Gawd

    Messages:
    205
    Joined:
    Dec 14, 2006
    I've moved all my storage over to ZoL. Even though I use 10GbE, I found the performance differences to be small, and not always in favor of the same OS.
     
  5. dandragonrage

    dandragonrage [H]ardForum Junkie

    Messages:
    8,300
    Joined:
    Jun 5, 2004
    I also moved to ZoL; I found it the easiest to get good performance from (especially compared to OpenSolaris-based distros, which had performed well for me in the past but at some point were ruined by updates - the same thing happened to a friend who runs NexentaStor after an update). In my opinion ZoL's downside is that they are sometimes slow to support new kernel versions, especially if you wait for an actual ZoL release. Otherwise you'll be grabbing the source from Git and hoping things are stable enough. Then again, you could just never update your kernel, and you'd have a scenario more comparable to FreeBSD ;)

    So to me it's either Solaris (real Solaris) or Linux. The OpenIndiana and Illumos et al. devs mean well, but I have no idea what they think they're improving when their updates cause near-wire-speed file transfers to slow down to ~30MB/s. And again, it happened to two different machines that I'm familiar with - machines with completely different hardware (one AMD, one Intel, a couple of generations apart). It happened with multiple OpenSolaris distros.

    I tried FreeBSD briefly, but it has worse hardware support than Linux without the benefit of ZFS having evolved natively on the platform the way it did on Solaris. I didn't see any advantages over Linux, personally, but I've never been interested in BSD.
     
    Last edited: May 19, 2014
  6. devman

    devman 2[H]4U

    Messages:
    2,399
    Joined:
    Dec 3, 2005
    I also use ZFS on Linux, mostly for the same reasons others mentioned. I wanted to use it with CentOS because I already use CentOS for managing other things.

    Getting ZFS to work with SELinux was a pain to figure out, but it works if you mount the filesystems via /etc/fstab or a manual "mount" command - what ZFS calls a "legacy" mount. Doing it this way lets you set the SELinux context manually.
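    Roughly what I mean (pool/dataset name, mountpoint, and SELinux context here are just examples; adjust for your setup):

    zfs set mountpoint=legacy tank/data

    # /etc/fstab - the context= mount option applies the SELinux label at mount time
    tank/data   /srv/data   zfs   defaults,context="system_u:object_r:samba_share_t:s0"   0 0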
     
  7. SirMaster

    SirMaster 2[H]4U

    Messages:
    2,121
    Joined:
    Nov 8, 2010
    Another happy ZoL user here. The Linux community for ZFS seems to be a lot more active than on BSD and you can find a lot more help and support.
     
  8. fields_g

    fields_g [H]Lite

    Messages:
    102
    Joined:
    Apr 9, 2011
    ZFS on Linux also. My home server zpool was created July 2011. Has been great. I have even moved to ZFS root pools (rpools). All the ZFS benefits, especially the snapshotting, work on my boot device now.

    My work computer is also a ZoL rpool mirror. The only problem with either is that you have to be sure, after every kernel update, that DKMS has rebuilt your modules before you reboot.
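    A quick way to check is something like this (the exact version strings in the output will differ on your system):

    dkms status | grep -E 'spl|zfs'
    # both modules should show ": installed" against the new kernel before you reboot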
     
  9. zrav

    zrav Limp Gawd

    Messages:
    163
    Joined:
    Sep 22, 2011
    And another happy ZoL user. My company had been using Linux VM hosts with KVM as the hypervisor and LVM for storage, and moving the latter to ZoL made things much nicer to handle. That was a year ago and only minor issues have cropped up so far. At home I've been running ZoL for 2 years with no issues at all; it helped me simplify the setup coming from an ESXi/OpenIndiana AIO.

    Needless to say Linux has a couple of things going for it: Hardware support, software availability, the giant community and therefore tons of resources.
     
  10. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,199
    Joined:
    Nov 19, 2008
    Early this year at work I moved half of my 70TB over from btrfs or ext4+LVM on top of mdadm raid6 to ZFS raidz2 or raidz3. This is ZoL under Gentoo. At home, half of my HTPC storage is on ZoL; in that case each disk is its own zfs pool and I am using snapraid on top of this.
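    For anyone curious, the layout is roughly this (device paths, pool names, and file locations are just examples):

    zpool create d1 /dev/disk/by-id/ata-DISK1
    zpool create d2 /dev/disk/by-id/ata-DISK2

    # snapraid.conf excerpt - parity file on its own disk, data dirs point at the ZFS mountpoints
    parity /parity/snapraid.parity
    content /d1/snapraid.content
    data d1 /d1
    data d2 /d2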
     
  11. danswartz

    danswartz 2[H]4U

    Messages:
    3,609
    Joined:
    Feb 25, 2011
    Not sure I understand why - if you have single-disk pools, ZFS can't do any self-healing for you.
     
  12. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,199
    Joined:
    Nov 19, 2008
    Power savings. The 2 snapraid parity disks are off most of the time, and the data disks can individually power down when not in use. Snapraid provides the healing for these.
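    The spin-down side is just normal drive power management, e.g. something like this (device and timeout are examples):

    hdparm -S 240 /dev/sdX   # allow the drive to spin down after ~20 minutes idle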
     
  13. devman

    devman 2[H]4U

    Messages:
    2,399
    Joined:
    Dec 3, 2005
    A zpool can auto-repair even with single-device vdevs, as long as the filesystems have the copies property set to a value greater than 1. That only protects against bit rot; it doesn't help if the disk goes offline.
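    For example (dataset name is an example; this roughly halves usable space and only affects blocks written after the change):

    zfs set copies=2 tank/important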

    That being said, I'm still not sure why he wouldn't simply run snapraid on top of ext4 instead of bothering with ZFS for single-disk pools.
     
  14. PointandClick

    PointandClick Limp Gawd

    Messages:
    383
    Joined:
    Dec 6, 2008
    How do ZFSoL and Samba handle Windows-style ACLs? The native CIFS server is one of the things that has kept me on Solaris derivatives (besides being a glutton for punishment). I actually use AFP mainly, but SMB is a necessity because we have a Windows laptop and it works with everything.

    I'm quite torn because I feel like Solaris was a solid platform and it's always good to have experience with many things, but I feel like it's a sinking ship and it's much easier to find support for Linux.
     
  15. SirMaster

    SirMaster 2[H]4U

    Messages:
    2,121
    Joined:
    Nov 8, 2010
    It's just another filesystem. ZFS isn't really any more of a bother than ext4, IMO.

    zpool create poolname /dev/sdX
    vs.
    mkfs.ext4 /dev/sdX
     
    Last edited: May 22, 2014
  16. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,199
    Joined:
    Nov 19, 2008
    I also get compression, snapshots, and datasets. Compression does not help much on the HTPC data, but it does with the other datasets on the pool.
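    Those are all one-liners, e.g. (dataset names are just examples):

    zfs set compression=lz4 tank/docs       # per-dataset compression
    zfs snapshot tank/docs@2014-05-22       # cheap point-in-time snapshot
    zfs list -t snapshot                    # see what snapshots exist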

    On top of this I do some testing on the home system before I deploy at work (although I do have additional testing systems at work).
     
    Last edited: May 22, 2014
  17. diehard

    diehard Gawd

    Messages:
    719
    Joined:
    Aug 4, 2002
  18. danswartz

    danswartz 2[H]4U

    Messages:
    3,609
    Joined:
    Feb 25, 2011
    Yeah, I get that. If you are doing something like snapraid, then I see your point. As to the idea of running with copies=2 or whatever, that will double your space usage, no? It would also cut your write IOPS in half. So why not create a bunch of 2-drive mirrors and snapraid those pools together? (Speaking to devman here.)
     
    Last edited: May 22, 2014
  19. devman

    devman 2[H]4U

    Messages:
    2,399
    Joined:
    Dec 3, 2005
    I was just mentioning that the capability exists, not commenting on the merits of doing it. You are correct about the space requirements.
     
  20. danswartz

    danswartz 2[H]4U

    Messages:
    3,609
    Joined:
    Feb 25, 2011
    Understood. I wasn't trying to get into a p*ssing contest, just pointing out that while it may be possible, I can't imagine any scenario where it would be better than a bunch of raid1 pools. There is at least one Linux-based HA ZFS solution that requires redundancy via hardware RAID and then creates a single-'disk' pool on top of the resulting LUN, which I thought was beyond silly, since you lose a lot of what makes ZFS good. drescherjm has a legitimate use case - that commercial product, not so much.
     
  21. bexamous

    bexamous [H]ard|Gawd

    Messages:
    1,675
    Joined:
    Dec 12, 2005
    I've probably been using ZoL for two years now at home. At work it runs on some backup servers.

    I do the same. Well, I have one zpool with a pair of mirrored SSDs for VMs, one pool with a pair of mirrored 4TB drives for non-media stuff, and then many single-disk pools with snapraid on top for redundancy. That lets me spin down most disks most of the time.

    You can also easily add one disk at a time and mix disk sizes - all the benefits of snapraid.
     
  22. Aesma

    Aesma [H]ard|Gawd

    Messages:
    1,844
    Joined:
    Mar 24, 2010
    You also get ZFS checksums. But when ZFS detects corruption it can't repair on a single-disk pool, you have to hope the snapraid parity (or however that works, I don't know the details) can repair it. I would want to scrub all the time, which would negate the power benefits.
     
  23. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,199
    Joined:
    Nov 19, 2008
    I scrub about once a week. Usually that's about 100 to 200GB of data that needs to be updated. Remember this is recorded TV programming - over 90% of what I record will show up in reruns. The wife's stuff does not rerun, but I record it twice, and MythTV balances disks so that simultaneous recordings will most likely end up on different storage devices.
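    For what it's worth, scheduling a scrub is just a cron line, e.g. (pool name is an example; snapraid has its own scrub command for the parity side):

    0 3 * * 0   /sbin/zpool scrub d1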
     
  24. cantalup

    cantalup Gawd

    Messages:
    758
    Joined:
    Feb 8, 2012
    ZoL under CentOS on my backup server, not a single problem.. :D
    Still waiting for btrfs to support something RAID6-like....
     
  25. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,199
    Joined:
    Nov 19, 2008
    btrfs has had raid5/6 for some time, but I do not trust it. I am not sure which kernel added it last year.

    https://btrfs.wiki.kernel.org/index.php/RAID56
     
  26. cantalup

    cantalup Gawd

    Messages:
    758
    Joined:
    Feb 8, 2012
    Still waiting; I don't think it's stable yet.
    I guess a stable release will come around Linux kernel 4.x.
    I need to read further...
     
  27. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,199
    Joined:
    Nov 19, 2008
    I got tired of waiting; that was part of my initial reason for moving to ZoL.
     
  28. cantalup

    cantalup Gawd

    Messages:
    758
    Joined:
    Feb 8, 2012
    Hahaha, yeah, slow for sure.
    I guess Oracle was the show stopper for btrfs in the beginning :)
    Remember what happened to Solaris after they bought the "bankrupt" Sun...

    I am still using ZoL until something like btrfs raid5/6 is stable.
     
  29. uOpt

    uOpt Gawd

    Messages:
    750
    Joined:
    Mar 29, 2006
    I still use FreeBSD for all my ZFS, Linux only on test installs so far.

    However, I want to warn against casually picking up random performance claims. The ZFS code is mostly the same, and ZFS is a RAM hog, so performance differences will most likely come down to kernel memory management - and there are huge differences between FreeBSD and Linux in how they use RAM.

    If you already have the hardware you intend to use, I recommend benchmarking on your own, both with the amount of RAM you want to run and with half of it.
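    On ZoL you can approximate the smaller-RAM case by capping the ARC instead of pulling DIMMs; a rough sketch (the cap, pool path, and test are just examples):

    # /etc/modprobe.d/zfs.conf - cap the ARC at 4 GiB, then reboot or reload the module
    options zfs zfs_arc_max=4294967296

    # run the same simple test at both settings and compare
    dd if=/dev/zero of=/tank/testfile bs=1M count=16384 conv=fdatasync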
     
  30. cantalup

    cantalup Gawd

    Messages:
    758
    Joined:
    Feb 8, 2012
    I am more Linux oriented since I know how to do things on Linux.

    Linux memory management is pretty good.

    What are the huge differences, in your understanding? Which OS are you more familiar with? :)
     
  31. jedigeorge

    jedigeorge [H]Lite

    Messages:
    79
    Joined:
    Apr 4, 2010
    I've been using OpenIndiana v148 and v151 with raidz2 for a couple of years now, alongside Ubuntu 10.04 and 12.04 boxes with mdadm/RAID-6/XFS, for serving up NFS and iSCSI. The OpenIndiana boxes have uptimes measured in years; the Ubuntu boxes all give me kernel panics every couple of months. Similar hardware specs and storage amounts on all the boxes (good old Q6600s, 8GB RAM all around, SASUC8I and LSI cards, mostly 2TB drives). These guys are getting hit with constant random reads and writes from about 40 other boxes, as fast as they can read and write.

    I get near wire speed on the OpenIndy boxes; the Ubuntu boxes get less than half that. My theory is that ZFS does a much better job of caching writes and then writing them out sequentially than whatever Linux is doing. The Linux boxes hit the disk all the time, while the OpenIndy boxes write for a bit, then stop, then write for a bit, then stop. Does ZoL keep that better write-cache behavior? Maybe ZoL would be worth a try, but I don't really trust Linux not to crash.

    I've never patched any of the OpenIndy boxes; all they do is NFS and iSCSI on an isolated network. I have patched the Ubuntu boxes whenever they panic or oops, but it doesn't change much.

    The only super-stable Linux file server boxes I have are the CentOS one for my Rocks cluster and one Debian Squeeze NFS & NIS server. So maybe it's just an Ubuntu thing.
     
  32. octoberasian

    octoberasian 2[H]4U

    Messages:
    4,086
    Joined:
    Oct 13, 2007
    For those interested, ZFS on Linux has been updated to 0.6.3:
    http://zfsonlinux.org/

    I'll cross-post this in the Linux subforum.
     
  33. devman

    devman 2[H]4U

    Messages:
    2,399
    Joined:
    Dec 3, 2005
    The big thing (for me at least) is that 0.6.3 adds full support for SELinux. I was able to get SELinux working with 0.6.1 and 0.6.2 using legacy mounting, but now it should work correctly with standard ZFS mounting, since zfs mount now passes the SELinux options through correctly.
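    In other words, the context can live on the dataset itself instead of in fstab - something like this, assuming your release exposes the SELinux context as a dataset property (property name and label here are examples; check the zfs man page for your version):

    zfs set context="system_u:object_r:samba_share_t:s0" tank/share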
     
  34. kac77

    kac77 2[H]4U

    Messages:
    2,196
    Joined:
    Dec 13, 2008
    If you are getting kernel panics every couple of months then there's probably something seriously awry with your setup. Ubuntu 10.04 and 12.04 power some of our servers at work and I've never seen a kernel panic from them yet. The uptime on those boxes is also measured in years unless we have a kernel update. Specifically, those two versions powered our file server and our KVM server, and both of them are solid.

    Here as well, there's probably something up with your configuration. Unless you are talking 10GbE, there really isn't a reason why you can't max out a 1GbE connection. You can almost do that with a single disk, and you can definitely hit it on burst.

    I'm a ZFS fan too, but before ZFS there was mdadm, which is old as dust and quite stable and reliable. Many file servers used to rely on it. I doubt very seriously that your experience is indicative of what people can expect from normal use.

    You can also tune mdadm for reads and writes. This may be of some help:

    http://h3x.no/2011/07/09/tuning-ubuntu-mdadm-raid56
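    The knobs that kind of guide typically covers are along these lines (device name and values are just examples):

    echo 8192 > /sys/block/md0/md/stripe_cache_size   # larger raid5/6 write stripe cache
    blockdev --setra 65536 /dev/md0                   # more aggressive read-ahead on the array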
     
  35. P1x3L

    P1x3L n00b

    Messages:
    19
    Joined:
    Jul 31, 2012
    I realize this is old, but has anyone come up with benchmarks? A quick search didn't find any. I would like to compare ZoL to EXT4 on the rootfs. I especially would like to see benchmarks with rootfs on a raidz vs raid2... I don't care if it is on an SSD or HDD.
    Thanks! :)
     
  36. P1x3L

    P1x3L n00b

    Messages:
    19
    Joined:
    Jul 31, 2012
    It seems ZoL is as performant as ZFS on FreeBSD now. However, I don't have anything easy to cite as evidence.
     
  37. BlueLineSwinger

    BlueLineSwinger Gawd

    Messages:
    600
    Joined:
    Dec 1, 2011
  38. Bookmage

    Bookmage Gawd

    Messages:
    665
    Joined:
    Sep 2, 2004
    Ooo, perfect timing for me, as I'm looking to decommission my old ZFS server and was looking at ZoL stuff since it's caught up to ZFS on FreeBSD.
    I'm curious what people use to manage their ZFS. All command line? Any web front ends? I'm looking at a new storage server for backups.
     
  39. P1x3L

    P1x3L n00b

    Messages:
    19
    Joined:
    Jul 31, 2012
    Command line is fine. Maybe you want some notifications for various things, like if errors are detected or if you go over a certain disk space percentage... Otherwise, I find the command line ideal.
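    For the notification side, a dumb cron check goes a long way; a sketch (address and wording are examples, and the newer ZoL releases also ship the ZED daemon for event-driven alerts):

    zpool status -x | grep -q 'all pools are healthy' || \
        echo "check zpool status" | mail -s "ZFS pool problem" admin@example.com
    # for the space check: zpool list -H -o name,capacity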
     
  40. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,591
    Joined:
    Jun 13, 2003
    Trying to work with OpenMediaVault right now, not because I have anything against FreeNAS, but because there isn't a FreeBSD driver for the Aquantia NIC I'm using. OMV is up to kernel 4.19 apparently, and the remote desktop plugin works spectacularly if you just want a GUI; otherwise, it's Debian, so it'll run whatever.