Filesystem for Large Linux volume: XFS, JFS, or EXT4

Which filesystem?


  • Total voters
    31

BecauseScience

I'd like to know what filesystems you guys use for large (16TB and up) volumes on Linux. I've been researching this for a while. As usual, opinions vary. I know about the old 16TB volume limitation for EXT4. I would be using recent userspace tools and kernel to get the >16TB support. The question is, how stable is the >16TB support? It's quite new.
 
At work, where I have 40TB+ of Linux software RAID, I have mostly moved to ext4 (from ReiserFS, JFS, and XFS), and I am testing btrfs on data it would not hurt to lose. I know using btrfs for throwaway data and ext4 for the important stuff sounds backwards, but btrfs still carries the "experimental" tag in the kernel, and that does not make me want to risk having to spend the time recovering from tape.

I don't have any single filesystem of 16TB, though, so I guess I'm not much help with your question.
 
For me, ext's lack of resizing was a dealbreaker.
I went for JFS; it seemed fine, though it had some funky need to be rechecked upon remounts.
 
For me, ext's lack of resizing was a dealbreaker.
I went for JFS; it seemed fine, though it had some funky need to be rechecked upon remounts.

Beware of resizing a JFS filesystem beyond 32 TiB. It did not go well for me.


I use JFS:

Code:
root@dekabutsu: 02:20 PM :~# df -H /data /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1        44T   30T   15T  67% /data
/dev/sdd1        44T   30T   15T  67% /data
root@dekabutsu: 02:20 PM :~# mount | grep data
/dev/sdd1 on /data type jfs (rw,noatime,nodiratime)
/dev/sde1 on /data2 type jfs (rw,noatime,nodiratime)
root@dekabutsu: 02:20 PM :~#
 
With my limited experience with filesystems I'd say EXT4, because it is the most widely available and used, so you know it gets a lot of use and is known to be reliable. The others often require extra stuff to be installed, so only the more advanced people tend to use them. There are reasons to go with the others as well, but that's just my personal feeling on it.

Not that it means much, but my current main storage at home is a 4.5TB md RAID on EXT3 (old dinosaur server, running FC9).
 
With my limited experience with filesystems I'd say EXT4, because it is the most widely available and used, so you know it gets a lot of use and is known to be reliable. The others often require extra stuff to be installed, so only the more advanced people tend to use them. There are reasons to go with the others as well, but that's just my personal feeling on it.

Not that it means much, but my current main storage at home is a 4.5TB md RAID on EXT3 (old dinosaur server, running FC9).

Stable ext4 >16 TiB support is still pretty new, so I don't know how much I would trust it. It's only been in stable e2fsprogs since around November of last year.
 
Use ZFS on Linux (kernel module, not FUSE). If it's good enough for Lawrence Livermore National Laboratory, it's good enough for me.
 
Stable ext4 >16 TiB support is still pretty new, so I don't know how much I would trust it. It's only been in stable e2fsprogs since around November of last year.

I saw kernel changelogs for >16TB support dated January 2012. :eek: I believe support came first in e2fsprogs and the kernel work was done later. Has anyone looked at the actual code changes? I'm wondering how extensive they are.

My gut says to go with ext4. The 16TB code is extremely new, but other ext4 enhancements have gone smoothly in the past.

I keep reading report after report of XFS and JFS going bonkers for no reason, like houkouonchi's post. I personally have not had good luck with either JFS or XFS, but that was many years ago, so I take it with a grain of salt.

I know using btrfs for throwaway data and ext4 for the important stuff sounds backwards, but btrfs still carries the "experimental" tag in the kernel, and that does not make me want to risk having to spend the time recovering from tape.

I think you are correct in not trusting btrfs. People are still losing filesystems to it. It's a shame btrfs doesn't have raid6 or encryption yet; it would be an easy choice otherwise. Well... if it were stable too.
 
Use ZFS on Linux (kernel module, not FUSE). If it's good enough for Lawrence Livermore National Laboratory, it's good enough for me.

I have looked at that. However, my big reluctance to use zfsonlinux is that it is an out-of-mainline kernel driver. I have been burned more than once by these: after some time, development either stops or they decide to follow some other distribution's release schedule, like RHEL's.
 
Use ZFS on Linux (kernel module, not FUSE). If it's good enough for Lawrence Livermore National Laboratory, it's good enough for me.

ZFS isn't suitable due to its limitations on adding additional storage. This filesystem will be on a raid6. Storage will be added to the raid6 one disk at a time.
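
For reference, the expand-one-disk workflow looks roughly like this with md; the device names and member count below are just examples, not my actual layout:

Code:
# add a new member disk to the existing array
mdadm --add /dev/md0 /dev/sdf1
# reshape the raid6 onto the extra disk (count includes all active members)
mdadm --grow /dev/md0 --raid-devices=7
# watch reshape progress
cat /proc/mdstat
# once the reshape finishes, grow the filesystem on top, e.g.:
resize2fs /dev/md0        # ext3/ext4
xfs_growfs /data          # XFS (takes the mount point)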
 
ZFS isn't suitable due to its limitations on adding additional storage. This filesystem will be on a raid6. Storage will be added to the raid6 one disk at a time.

I can understand why that limitation would keep you from using ZFS. However, just because your filesystem and volume manager support that style of expansion doesn't mean it's a good idea.

On another note, are you concerned about silent data corruption? All of the non-ZFS and non-btrfs (though I wouldn't use btrfs at this point) options are vulnerable to silent corruption.
 
On another note, are you concerned about silent data corruption?

Nope, and I'm not even considering zfs or btrfs. They're totally out of the running.

I'm open to using:
  • JFS
  • XFS
  • ext4 with >16tb support
  • ext4 without >16tb support (would use multiple <16tb filesystems)
 
uhh what the heck is going on here? you guys are voting ext4. it's limited to 16tb.

I'd like to know what filesystems you guys use for large (16TB and up) volumes on Linux

ditch ext4, go with xfs, keep the barriers on, and you're set.
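
roughly, for anyone going that route (device name and RAID geometry below are just examples, adjust to your array):

Code:
# stripe unit/width should match the array's chunk size and data-disk count
mkfs.xfs -d su=512k,sw=8 /dev/md0
# barriers are on by default; just avoid nobarrier in the mount options
mount -o noatime /dev/md0 /data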
 
Use ZFS on Linux (kernel module, not FUSE). If it's good enough for Lawrence Livermore National Laboratory, it's good enough for me.
+1

I have looked at that. However, my big reluctance to use zfsonlinux is that it is an out-of-mainline kernel driver. I have been burned more than once by these: after some time, development either stops or they decide to follow some other distribution's release schedule, like RHEL's.
In my lab, we have been testing ZFS on Linux since October 2011:
  1. X8DAL
  2. 2x X5502
  3. 6x 1GB ECC
  4. Adaptec 6805
  5. 8x 2TB Samsung F4 in RAIDZ2
  6. Ubuntu 11.10 Server x86_64
  7. ZFS/SPL: Ubuntu PPA, spl-0.6.0-rc6 & zfs-0.6.0-rc6
  8. ZFS pool version: v28

No problem.

Works in ESXi too.

Now I'm building 48 TB raw (>32 TB usable) on 24x 2TB, expandable
... with OpenIndiana + Nappit on RAIDZ2: 6x 2TB in 4 vdevs, plus 8 spare drives.
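
For reference, creating a pool like the 8-disk RAIDZ2 above boils down to something like this (pool name and disk paths are placeholders, not the lab's actual layout):

Code:
# 8-disk raidz2 pool, disks addressed by-id so device reordering is harmless
zpool create tank raidz2 /dev/disk/by-id/disk{1,2,3,4,5,6,7,8}
# scrub periodically to catch silent corruption
zpool scrub tank
zpool status -v tank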

Cheers.

St3F
 
uhh what the heck is going on here? you guys are voting ext4. it's limited to 16tb.

The reason probably has to do with how actively ext4 is developed. EXT4 itself isn't limited to 16TB, but the program(s) used to create and resize it were. That is no longer an issue when it comes to creating the filesystem, so sticking with EXT4 has some perks that should not be easily discounted.

The other reason is likely that using multiple mount points and/or even symlinks works around the issue relatively easily. I'm not saying one is better than another, just that this is probably why most vote for ext4.
 
uhh what the heck is going on here? you guys are voting ext4. it's limited to 16tb.

The reason probably has to do with how actively ext4 is developed. EXT4 itself isn't limited to 16TB, but the program(s) used to create and resize it were.

As discussed earlier in the thread, ext4 is no longer limited to 16tb. The ext4 filesystem utilities and kernel have both been enhanced to handle >16tb.
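
For what it's worth, with a new enough e2fsprogs (1.42 or later) that looks roughly like this; the device name is just an example:

Code:
# check the e2fsprogs version first; 64-bit support landed in 1.42
mke2fs -V
# create an ext4 filesystem with the 64bit feature so it can grow past 16 TiB
mkfs.ext4 -O 64bit /dev/md0
# confirm the feature made it in
tune2fs -l /dev/md0 | grep -i features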
 
I saw kernel changelogs for >16TB support dated January 2012. :eek: I believe support came first in e2fsprogs and the kernel work was done later. Has anyone looked at the actual code changes? I'm wondering how extensive they are.

My gut says to go with ext4. The 16TB code is extremely new, but other ext4 enhancements have gone smoothly in the past.

I keep reading report after report of XFS and JFS going bonkers for no reason, like houkouonchi's post. I personally have not had good luck with either JFS or XFS, but that was many years ago, so I take it with a grain of salt.

I think you are correct in not trusting btrfs. People are still losing filesystems to it. It's a shame btrfs doesn't have raid6 or encryption yet; it would be an easy choice otherwise. Well... if it were stable too.

XFS is extremely stable and works well with large volumes, but you must be aware of some finer points surrounding the inode mount options. Around 16TiB, XFS does have an issue if you mount an XFS filesystem without the -o inode64 option. Be aware that once you invoke -o inode64, new inodes can be created with 64-bit inode numbers; from that point on you must mount with -o inode64 to access data written while mounted with -o inode64.

There's a long history about -o inode64 on the SGI XFS mailing list and probably on SGI's site as well.


http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F

XFS and > 16TiB volumes

http://www.doc.ic.ac.uk/~dcw/xfs_16tb/
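
In practice it looks something like this (device and mount point are just examples); the key point is that inode64 has to stay in the mount options once you start using it:

Code:
# one-off mount with 64-bit inode allocation enabled
mount -o inode64,noatime /dev/md0 /data
# or make it permanent in /etc/fstab
/dev/md0   /data   xfs   inode64,noatime   0  0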
 
uhh what the heck is going on here? you guys are voting ext4. it's limited to 16tb.



ditch ext4, go with xfs, keep the barriers on, and you're set.

Keep barriers on and also research -o inode64
 
Beware of resizing a JFS filesystem beyond 32 TiB. It did not go well for me.


I use JFS:

Code:
root@dekabutsu: 02:20 PM :~# df -H /data /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1        44T   30T   15T  67% /data
/dev/sdd1        44T   30T   15T  67% /data
root@dekabutsu: 02:20 PM :~# mount | grep data
/dev/sdd1 on /data type jfs (rw,noatime,nodiratime)
/dev/sde1 on /data2 type jfs (rw,noatime,nodiratime)
root@dekabutsu: 02:20 PM :~#

I passed on JFS due to it being a dead project. JFS does have bugs, and it is no longer being actively developed or supported by IBM.
 
XFS is extremely stable and works well with large volumes, but you must be aware of some finer points surrounding the inode mount options. Around 16TiB, XFS does have an issue if you mount an XFS filesystem without the -o inode64 option. Be aware that once you invoke -o inode64, new inodes can be created with 64-bit inode numbers; from that point on you must mount with -o inode64 to access data written while mounted with -o inode64.

There's a long history about -o inode64 on the SGI XFS mailing list and probably on SGI's site as well.


http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F

XFS and > 16TiB volumes

http://www.doc.ic.ac.uk/~dcw/xfs_16tb/

Interesting. Thanks for the pointers. It does look like xfs development is very active. http://xfs.org/index.php/XFS_Status_Updates
 
Interesting. Thanks for the pointers. It does look like xfs development is very active. http://xfs.org/index.php/XFS_Status_Updates

No problem, hope the info helps. A few reasons I still run XFS: it's mature, well tested and proven, and it has customizable performance options (in RAID) and solid support tools.

There are more recent filesystems that offer more options (ZFS, Btrfs), but they either aren't natively supported by the Linux kernel, lack features, or are still relatively young (compared to XFS) in their development cycle.
 
I'd use something that's widely used. I had an archive volume on ReiserFS... yeah. I spent a few hours compiling a modern kernel just to get the data off the drive. Right now, unless there's a very specific use case, I use ext4.

XFS and JFS I'd avoid, only because they're not as widely used as they once were. Interesting that ZFS on Linux is used by LLNL; I wasn't aware of that. I'd go with a FreeBSD system running ZFS or OpenSolaris/Solaris before using ZFS on Linux, but that's just me.
 
I'd use something that's widely used. I had an archive volume on ReiserFS... yeah. I spent a few hours compiling a modern kernel just to get the data off the drive. Right now, unless there's a very specific use case, I use ext4.

XFS and JFS I'd avoid, only because they're not as widely used as they once were. Interesting that ZFS on Linux is used by LLNL; I wasn't aware of that. I'd go with a FreeBSD system running ZFS or OpenSolaris/Solaris before using ZFS on Linux, but that's just me.

XFS isn't going anywhere. The draw of XFS for sysadmins is its maturity: you get a solid, proven filesystem that isn't prone to the corruption or bugs that can occur on filesystems that are new or otherwise unproven (lacking years of testing and/or use in production environments). The XFS tools are robust and proven as well.

Theodore Ts'o has made inroads with ext4 over the past few years, and it may be considered relatively safe now, but it is still a relatively young filesystem.

JFS should probably be avoided for the reasons previously mentioned. IBM has mostly abandoned the project and a number of bugs are left unfixed. Someone is still committed to developing JFS, but you're risking your data on the off chance that you won't become a statistic.
 