Filesystem for Large Linux volume: XFS, JFS, or EXT4

Discussion in 'SSDs & Data Storage' started by BecauseScience, Jul 9, 2012.

Which filesystem?

  1. EXT4

    35.5%
  2. JFS

    19.4%
  3. XFS

    35.5%
  4. All of the options above will eat your data!

    19.4%
  1. BecauseScience

    BecauseScience [H]ard|Gawd

    Messages:
    1,049
    Joined:
    Oct 9, 2005
    I'd like to know what filesystems you guys use for large (16TB and up) volumes on Linux. I've been researching this for a while. As usual, opinions vary. I know about the old 16TB volume limitation for EXT4. I would be using recent userspace tools and kernel to get the >16TB support. The question is, how stable is the >16TB support? It's quite new.
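     For anyone wondering where the 16TB figure comes from: ext4 historically used 32-bit block numbers, and with the default 4 KiB block size that tops out at exactly 16 TiB. A quick shell sanity check:

```shell
# ext4's classic ceiling: 2^32 addressable blocks x 4 KiB per block
blocks=$((2 ** 32))
block_size=4096
echo "$((blocks * block_size / 2 ** 40)) TiB"   # prints "16 TiB"
```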
     
  2. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,152
    Joined:
    Nov 19, 2008
    At work, where I have 40TB+ of Linux software RAID, I have moved for the most part to ext4 (from reiserfs, jfs, and xfs), and I am testing btrfs on data that it would not hurt to lose. I know using btrfs for throwaway data and ext4 for the important stuff sounds backwards. However, btrfs still carries the "experimental" tag in the kernel, and that does not make me want to risk having to spend the time to recover from tape.

    That said, I do not have any single filesystem of 16TB, so I may not be much help with your question.
     
  3. asgards

    asgards Limp Gawd

    Messages:
    204
    Joined:
    May 8, 2008
    For me, ext4's lack of resizing was a dealbreaker.
    I went with JFS; it seemed fine, though it had a funky need to be rechecked upon remounts.
     
  4. houkouonchi

    houkouonchi RIP

    Messages:
    1,624
    Joined:
    Sep 14, 2008
    Beware of resizing a >32 TiB filesystem on JFS. It did not go well for me.


    I use JFS:

    Code:
    root@dekabutsu: 02:20 PM :~# df -H /data /data
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdd1        44T   30T   15T  67% /data
    /dev/sdd1        44T   30T   15T  67% /data
    root@dekabutsu: 02:20 PM :~# mount | grep data
    /dev/sdd1 on /data type jfs (rw,noatime,nodiratime)
    /dev/sde1 on /data2 type jfs (rw,noatime,nodiratime)
    root@dekabutsu: 02:20 PM :~# 
    
     
  5. Red Squirrel

    Red Squirrel [H]ardForum Junkie

    Messages:
    9,213
    Joined:
    Nov 29, 2009
    With my limited experience with filesystems, I'd say EXT4, because it is the most widely available and used, so you know it gets a lot of use and is known to be reliable. The others often require extra packages to be installed, so only the more advanced users tend to use them. There are reasons to go with the others as well, but that's just my personal feeling on it.

    Not that it means much, but my current main storage at home is 4.5TB md raid on EXT3. (old dinosaur server, using FC9).
     
  6. houkouonchi

    houkouonchi RIP

    Messages:
    1,624
    Joined:
    Sep 14, 2008
    Stable ext4 >16 TiB support is still pretty new, so I don't know how much I would trust it. It's only been in stable e2fsprogs since around November of last year.
     
  7. insaneirish

    insaneirish n00b

    Messages:
    19
    Joined:
    Jan 24, 2012
    Use ZFS on Linux (kernel module, not FUSE). If it's good enough for Lawrence Livermore National Laboratory, it's good enough for me.
     
  8. BecauseScience

    BecauseScience [H]ard|Gawd

    Messages:
    1,049
    Joined:
    Oct 9, 2005
    I saw kernel changelogs for >16TB support dated January 2012. :eek: I believe support came first in e2fsprogs and the kernel work was done later. Has anyone looked at the actual code changes? I'm wondering how extensive they are.

    My gut says to go with ext4. The 16TB code is extremely new but other ext4 enhancements have gone smoothly in the past.

    I keep reading report after report of XFS and JFS going bonkers for no reason, like houkouonchi's post. I personally have not had good luck with either JFS or XFS but it was many years ago so I take it with a grain of salt.

    I think you are correct in not trusting btrfs. People are still losing filesystems to it. It's a shame btrfs doesn't have raid6 or encryption yet. It would be an easy choice. Well...if it was stable too.
     
    Last edited: Jul 9, 2012
  9. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,152
    Joined:
    Nov 19, 2008
    I have looked at that. However, my big reluctance to use zfsonlinux is that it is an out-of-mainline kernel driver. I have been burned more than once by these: after some time, development either stops, or they decide to follow some other distribution's release schedule, like RHEL's.
     
  10. BecauseScience

    BecauseScience [H]ard|Gawd

    Messages:
    1,049
    Joined:
    Oct 9, 2005
    ZFS isn't suitable due to its limitations on adding storage. This filesystem will be on a RAID6, and storage will be added to the RAID6 one disk at a time.
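    The grow workflow I have in mind looks like this (just a sketch; device names are placeholders, and the last step depends on which filesystem I end up picking):

```shell
# add a disk to the md RAID6, then reshape across it (placeholder devices)
mdadm /dev/md0 --add /dev/sdX1
mdadm --grow /dev/md0 --raid-devices=9   # one more than before; reshape runs in background
# after the reshape completes, grow the filesystem to fill the array:
resize2fs /dev/md0        # ext4 (online growth)
# xfs_growfs /mnt/data    # XFS alternative; operates on the mounted fs
```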
     
  11. misterpat

    misterpat [H]Lite

    Messages:
    103
    Joined:
    Dec 24, 2011
    *delete*
     
    Last edited: Jul 10, 2012
  12. insaneirish

    insaneirish n00b

    Messages:
    19
    Joined:
    Jan 24, 2012
    I can understand why that limitation would prevent you from using ZFS. However, just because your filesystem and volume manager support that style of expansion doesn't make it a good idea.

    On another note, are you concerned about silent data corruption? All of the non-ZFS and non-btrfs (though I wouldn't use btrfs at this point) options are vulnerable to silent corruption.
     
  13. BecauseScience

    BecauseScience [H]ard|Gawd

    Messages:
    1,049
    Joined:
    Oct 9, 2005
    Nope, and I'm not even considering zfs or btrfs. They're totally out of the running.

    I'm open to using:
    • JFS
    • XFS
    • ext4 with >16tb support
    • ext4 without >16tb support (would use multiple <16tb filesystems)
     
  14. gjs278

    gjs278 Gawd

    Messages:
    712
    Joined:
    Feb 22, 2011
    uhh what the heck is going on here? you guys are voting ext4. it's limited to 16tb.

    ditch ext4, go with xfs, keep the barriers on, and you're set.
     
  15. ST3F

    ST3F Limp Gawd

    Messages:
    181
    Joined:
    Oct 19, 2011
    +1

    In my lab, we have been testing ZFSonLinux since October 2011:
    1. Supermicro X8DAL
    2. 2× X5502
    3. 6× 1 GB ECC
    4. Adaptec 6805
    5. 8× 2 TB Samsung F4 in RAIDZ2
    6. Ubuntu 11.10 Server x86_64
    7. ZFS/SPL: Ubuntu PPA spl-0.6.0-rc6 & zfs-0.6.0-rc6
    8. ZFS pool: v28

    No problems.

    Works in ESXi too.

    Now I'm building 48 TB raw (>32 TB usable) on 24× 2 TB drives, expandable
    ... with OpenIndiana + napp-it on RAIDZ2: 6× 2 TB in 4 vdevs, plus 8 spare drives.

    Cheers.

    St3F
     
    Last edited: Jul 10, 2012
  16. kac77

    kac77 2[H]4U

    Messages:
    2,197
    Joined:
    Dec 13, 2008
    The reason probably has to do with ext4's active development. EXT4 itself isn't limited to 16TB; the programs used to create and resize it were. This is no longer an issue when creating the partition, so sticking with EXT4 has some perks that should not be easily discounted.

    The other reason is likely that using multiple mount points and/or even symlinks solves the issue relatively easily. I'm not saying one is better than another, just that this is probably why most vote for ext4.
     
  17. BecauseScience

    BecauseScience [H]ard|Gawd

    Messages:
    1,049
    Joined:
    Oct 9, 2005
    As discussed earlier in the thread, ext4 is no longer limited to 16TB. The ext4 filesystem utilities and kernel have both been enhanced to handle >16TB volumes.
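    For reference, creating a >16TB ext4 filesystem means enabling the 64bit feature at mkfs time (a sketch; assumes e2fsprogs 1.42 or newer, and /dev/md0 is a placeholder device):

```shell
# the 64bit feature lifts the 32-bit block-number limit
mkfs.ext4 -O 64bit /dev/md0
# confirm the feature is set:
dumpe2fs -h /dev/md0 | grep -i 'features'
```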
     
  18. 1010

    1010 Limp Gawd

    Messages:
    187
    Joined:
    Jun 9, 2011
    XFS is extremely stable and works well with large volumes, but you must be aware of some finer points surrounding inode mount options. Around 16TiB, XFS has an issue if you mount an XFS filesystem without the -o inode64 option. Be aware that once you invoke -o inode64, 64-bit inode numbers will be allocated. From that point on, you must mount with -o inode64 to access data previously written while mounted with -o inode64.

    There's a long history about -o inode64 on the sgi XFS mailing list and probably on SGI's site as well.


    http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F

    XFS and > 16TiB volumes

    http://www.doc.ic.ac.uk/~dcw/xfs_16tb/
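    If it helps, a minimal way to make the option stick is in fstab (a sketch; the device and mountpoint are placeholders):

```shell
# /etc/fstab entry mounting XFS with 64-bit inode numbers allowed:
# /dev/sdd1  /data  xfs  inode64,noatime  0  0

# or as a one-off from the command line:
mount -o inode64,noatime /dev/sdd1 /data
```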
     
  19. 1010

    1010 Limp Gawd

    Messages:
    187
    Joined:
    Jun 9, 2011
    Keep barriers on and also research -o inode64
     
  20. 1010

    1010 Limp Gawd

    Messages:
    187
    Joined:
    Jun 9, 2011
    I passed on JFS because it is a dead project. JFS has bugs, and it is no longer actively developed or supported by IBM.
     
  21. BecauseScience

    BecauseScience [H]ard|Gawd

    Messages:
    1,049
    Joined:
    Oct 9, 2005
    Interesting. Thanks for the pointers. It does look like xfs development is very active. http://xfs.org/index.php/XFS_Status_Updates
     
  22. 1010

    1010 Limp Gawd

    Messages:
    187
    Joined:
    Jun 9, 2011
    No problem, hope the info helps. A few reasons I still run XFS: it's mature, well tested, and proven; it has customizable performance options (for RAID); and there are solid support tools available.

    There are more recent filesystems which offer more options (ZFS, Btrfs), but they either aren't natively supported by the linux kernel, lack features, or are still relatively young (compared to XFS) in their development cycle.
     
  23. tonyb

    tonyb Limp Gawd

    Messages:
    130
    Joined:
    Dec 30, 2010
    I'd use something that's widely used. I had an archive volume in ReiserFS... yeah. Spent a few hours compiling a modern kernel to get the data off the drive. Right now, unless there's a very specific use case, I use ext4.

    XFS and JFS I'd avoid, only because they're not as widely used as they have been. Interesting that ZFS on Linux is used by LLL, wasn't aware of that. I'd go with a FreeBSD system running ZFS or OpenSolaris/Solaris before using ZFS on Linux, but that's me personally.
     
  24. 1010

    1010 Limp Gawd

    Messages:
    187
    Joined:
    Jun 9, 2011
    XFS isn't going anywhere. The draw of XFS for sysadmins is its maturity. You have a solid, proven filesystem that isn't prone to the corruption or bugs that can occur on filesystems that are new or otherwise unproven (lacking years of testing and/or use in production environments). XFS tools are robust and proven as well.

    Theodore Ts'o has made inroads with ext4 in the past few years, and it may be considered relatively safe now, but it is still a relatively young fs.

    JFS probably should be avoided for reasons previously mentioned. IBM has mostly abandoned the project and a number of bugs are left unfixed. Someone is still committed to developing JFS, but you're risking your data on the off chance that you won't be a statistic.
     
  25. brutalizer

    brutalizer [H]ard|Gawd

    Messages:
    1,593
    Joined:
    Oct 23, 2010
    What is LLL?
     
  26. obrith

    obrith Limp Gawd

    Messages:
    267
    Joined:
    Jun 11, 2004
    Lawrence Livermore National Laboratory
     
  27. gjs278

    gjs278 Gawd

    Messages:
    712
    Joined:
    Feb 22, 2011
    Very. I sit in the #xfs channel on freenode. They work for Red Hat, and they'll help anyone who stops by with a question or issue. I see work and new features go by all the time.