Linux Founder Linus Torvalds Draws Ire for Criticizing Oracle ZFS

erek

Don't know anything really about ZFS, typically use the ext4 filesystem.

""And if you're talking about OpenZFS, then yes, there's clearly maintenance there, but it has all the questions about what happens if Oracle ever decides - again - that 'copyright' means something different than anybody else thinks it means.""



https://www.tomshardware.com/news/linux-linus-torvalds-criticizes-oracle-zfs
 
Don't know anything really about ZFS, typically use the ext4 filesystem.

ZFS is useful if you're storing a lot of data and want to ensure that it's actually kept safe. It's an all-encompassing solution.

As to Linus -- his concerns are well founded, and I realize that such action from Oracle might be extremely disruptive, but I do wish he'd focus more on the technology being implemented than the legal aspect.
 
I am concerned at recent conflict between zfsonlinux / openzfs and the linux kernel developers.
 
Much ado about nothing for me. He kind of went off the deep end when talking about benchmarks and other stuff that are not why you would actually use ZFS, but his overall point is correct.
All he did was recommend kernel maintainers and developers not to use ZFS code in their distributions unless Oracle agrees to make ZFS GPL (with a signed letter). Given Oracle's past of suing for the use of their "technology", I would agree. Most Linux distributions are more a labor of love than a capitalist venture. Oracle going after Linux and anyone using ZFS could put a huge dent in a number of groups and potentially ruin some people financially.
 
ZFS was originally developed by Sun Microsystems for their Solaris OS as the file system to end all file systems.

It is a file system and software RAID solution all in one, and it is - provided you feed it with sufficient RAM and CPU cycles - more reliable and resistant to corruption than any hardware RAID implementation.

Linus does have a point though.

Sun Microsystems used to be a great and prolific contributor to the open source community, but when Oracle acquired them they revoked their contributions as much as they were legally able to.

This is why we no longer have OpenOffice, for instance. They withdrew a lot of Sun's code from the project, forcing it to fork into a Sun-less project and rename itself LibreOffice.

Oracle is evil in this regard.

Right now, OpenZFS exists forked from OracleZFS, but my understanding is there are still a number of legacy license issues with the project, meaning if Oracle wanted to be evil like they have in the past they can still do a lot of damage.

There is a similar Linux project that is completely open source called BTRFS, which is much better from the licensing perspective, but for all intents and purposes BTRFS is dead. Many of the major contributors - including Red Hat - have pulled out. I'm guessing a little bit of the reason for these comments is that Linus is bitter about the deprecation of BTRFS.

ZFS really is all that. It's difficult to imagine a better file system, but the licensing issues mean that it is problematic.

This came to a head when the Linux Kernel team recently removed some things ZFS needed to function in the 5.0 Kernel and the OpenZFS team had to scramble and patch things up.

I'm hoping that the OpenZFS project is busy at work rewriting old code to resolve the licensing issues, but who knows.
 
While Linus was over the top (as always), he's not wrong. No distro ships with ZFS outside of Ubuntu 19.10, and even there it's only experimental until 20.04. It'll be very interesting to see what Oracle does if/when Ubuntu starts winning more server space due to ZFS support. Right now using ZFS is at your own risk, so having Canonical back it could make Oracle go after them. The CDDL that ZFS is under sucks hard and is purposely incompatible with the GPL, which is where all the issues lie. Sadly, Oracle will never shift ZFS to the GPL and will most likely never give Linus that piece of paper saying "we're OK with the code being mainlined."

It's too bad btrfs has never truly panned out. It's not a bad file system, but it isn't great either. I have great hope for bcachefs, though, and with it possibly being mainlined this year it will only get stronger - because we do need a good, solid, mainlined file system that can do what ZFS can do.
 
"The benchmarks I've seen do not make ZFS look all that great."

I don't know what benchmarks he is looking at, but ZFS on my old X79 board with a Xeon and ECC has no problem saturating a 10G connection to my 3970X Threadripper. I switched to 40G Mellanox ConnectX-3 IB cards, and from the SATA-based SSD pool I can get 15 Gbit (~1.8 GB/s) sustained file copies from the array to the local PCIe 4.0 NVMe drive. When files are cached in its 64GB of RAM, it's even faster. I just tested the IB parts real quick last night, but pulled it apart since I have a pile of watercooling parts to set up on the 3970X after work tonight.

Edit: Add to that the fact that I can just pull the drives and export/import the pool into any new hardware with no issues, unlike dealing with RAID cards, and that setup is hard to beat. My older pool has lived through 3 different motherboard/CPU combos and has migrated from OpenIndiana to Ubuntu.
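For anyone who hasn't done the migration dance, it really is about this simple - a minimal sketch, with "tank" standing in for whatever your pool is called:

Code:
# On the old box: cleanly detach the pool
zpool export tank

# Move the drives to the new box (different HBA, USB enclosure, whatever),
# then scan for importable pools
zpool import

# Import it, pointing at stable by-id device names so future
# controller shuffles don't confuse things
zpool import -d /dev/disk/by-id tank

# Sanity check
zpool status tank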
 
ZFS was originally developed by Sun Microsystems for their Solaris OS as the file system to end all file systems.

It is a file system and software RAID solution all in one, and it is - provided you feed it with sufficient RAM and CPU cycles - more reliable and resistant to corruption than any hardware RAID implementation.

Linus does have a point though.

Sun Microsystems used to be a great and prolific contributor to the open source community, but when Oracle acquired them they revoked their contributions as much as they were legally able to.

This is why we no longer have OpenOffice, for instance. They withdrew a lot of Sun's code from the project, forcing it to fork into a Sun-less project and rename itself LibreOffice.

Oracle is evil in this regard.

Right now, OpenZFS exists forked from OracleZFS, but my understanding is there are still a number of legacy license issues with the project, meaning if Oracle wanted to be evil like they have in the past they can still do a lot of damage.

There is a similar Linux project that is completely open source called BTRFS, which is much better from the licensing perspective, but for all intents and purposes BTRFS is dead. Many of the major contributors - including Red Hat - have pulled out. I'm guessing a little bit of the reason for these comments is that Linus is bitter about the deprecation of BTRFS.

ZFS really is all that. It's difficult to imagine a better file system, but the licensing issues mean that it is problematic.

This came to a head when the Linux Kernel team recently removed some things ZFS needed to function in the 5.0 Kernel and the OpenZFS team had to scramble and patch things up.

I'm hoping that the OpenZFS project is busy at work rewriting old code to resolve the licensing issues, but who knows.

My understanding is that even though there are these forked projects, Oracle can claim it is a derivative work and still claim it as using their code (see Oracle vs Google over Java in Android)
 
There is a similar Linux project that is completely open source called BTRFS, which is much better from the licensing perspective, but for all intents and purposes BTRFS is dead. Many of the major contributors - including Red Hat - have pulled out. I'm guessing a little bit of the reason for these comments is that Linus is bitter about the deprecation of BTRFS.

I've only used ZFS at home (with small arrays), but where I was working they were using BTRFS on all the Linux systems, and it wasn't uncommon for it to have weird technical issues that made our systems go from mostly working to fairly broken without a lot of notice of impending doom. I haven't seen reports of ZFS acting that way; some people complain about its ram usage, but generally once you've got it set up, it seems to just continue to work as long as you notice and respond to disk failures; although, at least on FreeBSD it's pretty easy to upgrade the zfs features after an OS upgrade but not properly upgrade your bootloader and then be unable to boot --- I've managed to pull that one off enough times that I remember how to quickly fix it ;).
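For anyone on FreeBSD wondering what that failure mode is, here's a rough sketch of the upgrade sequence, assuming a GPT-partitioned BIOS-boot disk called ada0 with the freebsd-boot partition at index 1 (adjust for your layout; on UEFI you'd refresh loader.efi on the ESP instead):

Code:
# After the OS upgrade, this enables the new pool feature flags...
zpool upgrade -a

# ...but the gptzfsboot already written to the disk may not understand
# them, so refresh the bootcode on every boot disk as well
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0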
 
I was considering ZFS at home until I read it requires ~1GB of RAM per 1TB of storage. Not exactly mega performant.

In any case, Jim Salter's article stating that Torvalds does not understand ZFS was just bait.
 
My understanding is that even though there are these forked projects, Oracle can claim it is a derivative work and still claim it as using their code (see Oracle vs Google over Java in Android)

Correct. The CDDL that ZFS uses is incompatible with the GPL so the derivative work could potentially be claimed as Oracle's code.
 
Linus was 100% correct. However, he then went on, stating "the benchmarks I've seen do not make ZFS look all that great" and declaring, "Don't use ZFS. It's that simple." I admit, this made me frown...

Has Linus ever actually used ZFS? It's not really about benchmarks, it's about efficient redundancy - performance is a secondary concern.
 
I was considering ZFS at home until I read it requires ~1GB of RAM per 1TB of storage. Not exactly mega performant.

This widely quoted figure seems to be some random person's rule of thumb for ZFS with deduplication turned on, and may not even have been accurate in that use case. Deduplication doesn't seem to be a very useful feature outside of some very specific use cases (and in those cases, the reduction in storage would certainly justify the memory needed). As with other filesystems, any spare memory in your system will end up in service of the read cache; unlike other filesystems, this read cache (the ARC) is measured separately, which leads to some of the reporting that ZFS is a memory hog. Certainly some portion of that is also that ZFS wasn't always great at releasing its cache memory in times of need, though I think it's gotten better at that.
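If you'd rather look at what the cache (ARC) is actually doing than argue rules of thumb, here's a rough sketch for ZFS on Linux - the 8 GiB cap and the pool name "tank" are just example values:

Code:
# Current ARC size and ceiling, in bytes
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 8 GiB so it never competes with applications
# (takes effect after the zfs module is reloaded or on reboot)
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf

# And if deduplication is what you're worried about, simulate the
# dedup table first to see how much RAM it would actually need
zdb -S tank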
 
This widely quoted figure seems to be some random person's rule of thumb for ZFS with deduplication turned on, and may not even have been accurate in that use case. Deduplication doesn't seem to be a very useful feature outside of some very specific use cases (and in those cases, the reduction in storage would certainly justify the memory needed). As with other filesystems, any spare memory in your system will end up in service of the read cache; unlike other filesystems, this read cache (the ARC) is measured separately, which leads to some of the reporting that ZFS is a memory hog. Certainly some portion of that is also that ZFS wasn't always great at releasing its cache memory in times of need, though I think it's gotten better at that.

Yes, I read that as well. Someone give me a figure pls
 
I would say that there is no per TB memory requirement at all. I have over 100TB of data on ZOL.

With that said my 2 largest servers have 32GB of ram now. They had 8GB in the past.
 
Yes, I read that as well. Someone give me a figure pls

Sun used to say your system should have 1GB of ram to run ZFS, with no particular amount per increment of storage. Seems reasonable to me.
 
This widely quoted figure seems to be some random person's rule of thumb for ZFS with deduplication turned on, and may not even have been accurate in that use case. Deduplication doesn't seem to be a very useful feature outside of some very specific use cases (and in those cases, the reduction in storage would certainly justify the memory needed). As with other filesystems, any spare memory in your system will end up in service of the read cache; unlike other filesystems, this read cache (the ARC) is measured separately, which leads to some of the reporting that ZFS is a memory hog. Certainly some portion of that is also that ZFS wasn't always great at releasing its cache memory in times of need, though I think it's gotten better at that.

I think it came from a FreeNAS forum hardware sticky years ago.

When I asked around in that forum for what the source was for this figure as I couldn't find it in any official ZFS documentation, I got a rude response from one of the moderators calling himself Cyberjock essentially saying that they know what's best and not to question them, unless you want an unreliable system and data loss. :/

My take was that if you are passing something off as fact, you should be able to back it up, but his take was that it wasn't their job to teach their users :/

It was really quite an off-putting response and one of the many reasons I stopped using FreeNAS.
 
The RAM requirement, as I've read it, did start on the FreeNAS forums, and Cyberjock (last I saw, a few years ago, he still had his junkyard dog avatar) is well known on the rest of that forum as a rude, arrogant grognard. If you build a system for FreeNAS that they didn't generally recommend, it's all your fault and your mother dressed you funny today. Just search around and you'll find lots of arguments about it, I'm sure. That and the whole debate over ECC vs non-ECC.
 
The ZFS defenders get their panties in a bunch over this stuff.

Linus can never, ever include ZFS - Oracle or open or otherwise - mainline. He can't; the way it's licensed, it can be read such that the entire project becomes a derivative work from that point forward. Including it mainline would open the potential for Oracle to bring suit. Being Oracle, that isn't exactly a fat-chance statement... more a "ya, sounds like something they would do" statement.

ZFS might be the best file system ever made... and you know what, I might even argue it is. It still should not be part of any commercial Linux distribution, ever. Canonical officially supports it for Ubuntu server, and frankly they are playing with fire. They claim their legal team says they are in the clear... and IMO their legal team is then a bunch of UK-based monkeys who have only read about US law. I don't see Oracle bringing suit against Canonical... cause, well, frankly they are a 100-million-dollar-a-year grossing company that brings in profits of under 10 million. They are a very, very small fish. Red Hat no doubt will not touch official ZFS support... and Linus can never risk including it mainline in the actual kernel.

That leaves bolt-ons... and I really wish the ZFS boosters would stop thinking the Linux kernel guys are after them all the time for sane changes they make, like cleaning up deprecated kernel symbol exports, etc. Sure, the main kernel developers are on record as saying they dislike ZFS... and ZFS people need to realize that as long as ZFS is licensed CDDL it is never going to be really compatible with the kernel, and making it work means the project that supports it is going to have to stay current on what the kernel is doing so they can create their bolt-ons.

It's easy to see how Linus and people like Kroah-Hartman hate ZFS. The people that love it are zealots... the people that created it are clearly waiting to sue. If Oracle wanted ZFS mainlined they could GPL it tomorrow, which would allow the OpenZFS project to follow suit, and in a few months it would be mainline. As it is under CDDL, the Linux world should actually shun it so it dies. This BS bolt-on support and cult of ZFS... is exactly what the Oracle sleaze legal dept wants. Today it's Ubuntu with official support... but make no mistake: if Linus screws up, or gets old or dies and is replaced by someone that mainlines OpenZFS under CDDL, or if the main Linux server developers really start using ZFS officially, Oracle would be in court faster than Larry Ellison invests in new super PACs when the political winds shift. lol
 
The ZFS defenders get their panties in a bunch over this stuff.

Linus can never, ever include ZFS - Oracle or open or otherwise - mainline. He can't; the way it's licensed, it can be read such that the entire project becomes a derivative work from that point forward. Including it mainline would open the potential for Oracle to bring suit. Being Oracle, that isn't exactly a fat-chance statement... more a "ya, sounds like something they would do" statement.

ZFS might be the best file system ever made... and you know what, I might even argue it is. It still should not be part of any commercial Linux distribution, ever. Canonical officially supports it for Ubuntu server, and frankly they are playing with fire. They claim their legal team says they are in the clear... and IMO their legal team is then a bunch of UK-based monkeys who have only read about US law. I don't see Oracle bringing suit against Canonical... cause, well, frankly they are a 100-million-dollar-a-year grossing company that brings in profits of under 10 million. They are a very, very small fish. Red Hat no doubt will not touch official ZFS support... and Linus can never risk including it mainline in the actual kernel.

That leaves bolt-ons... and I really wish the ZFS boosters would stop thinking the Linux kernel guys are after them all the time for sane changes they make, like cleaning up deprecated kernel symbol exports, etc. Sure, the main kernel developers are on record as saying they dislike ZFS... and ZFS people need to realize that as long as ZFS is licensed CDDL it is never going to be really compatible with the kernel, and making it work means the project that supports it is going to have to stay current on what the kernel is doing so they can create their bolt-ons.

It's easy to see how Linus and people like Kroah-Hartman hate ZFS. The people that love it are zealots... the people that created it are clearly waiting to sue. If Oracle wanted ZFS mainlined they could GPL it tomorrow, which would allow the OpenZFS project to follow suit, and in a few months it would be mainline. As it is under CDDL, the Linux world should actually shun it so it dies. This BS bolt-on support and cult of ZFS... is exactly what the Oracle sleaze legal dept wants. Today it's Ubuntu with official support... but make no mistake: if Linus screws up, or gets old or dies and is replaced by someone that mainlines OpenZFS under CDDL, or if the main Linux server developers really start using ZFS officially, Oracle would be in court faster than Larry Ellison invests in new super PACs when the political winds shift. lol


I agree with many things you say. Oracle is a terrible company that does shitty things, and CDDL is a significant drawback of ZFS.

On the merits - however - it really is as good as the zealots claim. No other file system or RAID solution comes even close to touching it.

In a way your comments touch on my biggest gripe when it comes to Linux, and that is the "my way or the highway" approach and GNU/open source license zealotry. The Linux community really needs to get away from that. We need to get to the point where whatever is best for the user, regardless of what license or distribution model it uses, is made compatible. In the end, any software exists solely for the purpose of filling user needs, and Linux and the open source community as a whole need to get some proper software-development-model religion: stop fighting over perfect-world license ideals and start caring about how they can do whatever it takes to make the best possible product for the end user, regardless of license or distribution model.

And I don't really care if I get official support, or whether it is included in the official package manager or not. I'm happy to use a trusted third-party project PPA for this one. All I ask is that the kernel team show the same courtesy toward compatibility with ZFS as they do toward other major projects.
 
I agree that it shouldn't be mainlined, since the legal concerns are not exactly speculative given the history.

However, I don't see why it can't be a separate 3rd party download, same way I can install proprietary GPU drivers, or video codecs, etc.

I didn't bother testing ZFS because Ubuntu said it was experimental and I don't want to take chances with my data. But everything I read sounds good, I will want to test it when it's stable.
 
I agree that it shouldn't be mainlined, since the legal concerns are not exactly speculative given the history.

However, I don't see why it can't be a separate 3rd party download, same way I can install proprietary GPU drivers, or video codecs, etc.

I didn't bother testing ZFS because Ubuntu said it was experimental and I don't want to take chances with my data. But everything I read sounds good, I will want to test it when it's stable.

It's been rock solid for me for years
 
The ZFS defenders get their panties in a bunch over this stuff.

Linus can never, ever include ZFS - Oracle or open or otherwise - mainline. He can't; the way it's licensed, it can be read such that the entire project becomes a derivative work from that point forward. Including it mainline would open the potential for Oracle to bring suit. Being Oracle, that isn't exactly a fat-chance statement... more a "ya, sounds like something they would do" statement.

ZFS might be the best file system ever made... and you know what, I might even argue it is. It still should not be part of any commercial Linux distribution, ever. Canonical officially supports it for Ubuntu server, and frankly they are playing with fire. They claim their legal team says they are in the clear... and IMO their legal team is then a bunch of UK-based monkeys who have only read about US law. I don't see Oracle bringing suit against Canonical... cause, well, frankly they are a 100-million-dollar-a-year grossing company that brings in profits of under 10 million. They are a very, very small fish. Red Hat no doubt will not touch official ZFS support... and Linus can never risk including it mainline in the actual kernel.

That leaves bolt-ons... and I really wish the ZFS boosters would stop thinking the Linux kernel guys are after them all the time for sane changes they make, like cleaning up deprecated kernel symbol exports, etc. Sure, the main kernel developers are on record as saying they dislike ZFS... and ZFS people need to realize that as long as ZFS is licensed CDDL it is never going to be really compatible with the kernel, and making it work means the project that supports it is going to have to stay current on what the kernel is doing so they can create their bolt-ons.

It's easy to see how Linus and people like Kroah-Hartman hate ZFS. The people that love it are zealots... the people that created it are clearly waiting to sue. If Oracle wanted ZFS mainlined they could GPL it tomorrow, which would allow the OpenZFS project to follow suit, and in a few months it would be mainline. As it is under CDDL, the Linux world should actually shun it so it dies. This BS bolt-on support and cult of ZFS... is exactly what the Oracle sleaze legal dept wants. Today it's Ubuntu with official support... but make no mistake: if Linus screws up, or gets old or dies and is replaced by someone that mainlines OpenZFS under CDDL, or if the main Linux server developers really start using ZFS officially, Oracle would be in court faster than Larry Ellison invests in new super PACs when the political winds shift. lol

I think I agree with most of what you said, but I wouldn't have used those words ;). ZFS can't be mainlined without an act of god/Larry Ellison, so Linus is certainly right to not care about it. I think a lot of the changes that break ZoL are sane, but some of them do seem intended to spite.

Arguing about it is just tilting at windmills though. If you want ZFS on Linux, effort would be best spent building a fast yacht you can use as leverage to get Ellison to relicense or writing a similar filesystem from scratch (but, without falling into all the filesystem development traps), or trying one of the OSes with a compatible license.
 
I agree with many things you say. Oracle is a terrible company that does shitty things, and CDDL is a significant drawback of ZFS.

On the merits - however - it really is as good as the zealots claim. No other file system or RAID solution comes even close to touching it.

In a way your comments touch on my biggest gripe when it comes to Linux, and that is the "my way or the highway" approach and GNU/open source license zealotry. The Linux community really needs to get away from that. We need to get to the point where whatever is best for the user, regardless of what license or distribution model it uses, is made compatible. In the end, any software exists solely for the purpose of filling user needs, and Linux and the open source community as a whole need to get some proper software-development-model religion: stop fighting over perfect-world license ideals and start caring about how they can do whatever it takes to make the best possible product for the end user, regardless of license or distribution model.

And I don't really care if I get official support, or whether it is included in the official package manager or not. I'm happy to use a trusted third-party project PPA for this one. All I ask is that the kernel team show the same courtesy toward compatibility with ZFS as they do toward other major projects.

That is fine for userland software. Userland software can run under whatever license it wants. I don't care if userland is GPL, CDDL, BSD or 100% closed source. Nothing wrong with that... there are legit reasons for companies to develop closed source software.

When it comes to the kernel though... yes, for Linux to survive it needs to be militantly GPL-only. No non-GPL code should be allowed anywhere near the kernel code. Unfortunately for all of us, for a file system to operate at 100% peak it really should be mainlined. The only company to blame for it not being licensed properly to be included is Oracle. They bought it, they own it... and if they choose to stick with CDDL licensing, well then it's incompatible with the Linux kernel. That's just the way it is and the way it will always be as long as Linus is in charge... and his number 2 agrees with him. It's all on Oracle. Linus is on record as saying the day he gets a notarized letter from Larry Ellison saying Oracle gives up all future claim to ZFS and will never sue anyone using the code, he'll mainline it.

I agree ZFS is a great file system. Sun did well... and it's really too bad Oracle is the company that got their IP. But facts are facts... ZFS is not a file system Linux can or should be supporting. They don't go out of their way to make life difficult for closed source bolt-on crap like ZFS or Nvidia drivers... but at the same time they can't be expected to watch out for those projects either. If they have a list of kernel symbol exports that are no longer used by any of the kernel code... yes, clean it up. I'm sorry if that means the OpenZFS kids need to pay attention and ensure they are using current working kernel interfaces... but why should Linus or any of the other kernel developers pay attention to every line of their code for them and bloat the core project? If Linus and the kernel developers start bloating the kernel to accommodate one project, where does it end? Perhaps Nvidia says, hey, you keep deprecated things alive just for ZFS... how about you include our alternate EGL implementation so we can hook into the kernel stuff we need to support Nvidia's version of Wayland. Hey, perhaps AMD decides their proprietary driver needs X or Y to stick around, cause changing it is too much work, and why should they if Linus is accommodating everyone else.

I guess I'm saying no one should expect special treatment. If you are working on a project unwilling to adopt the same licensing as other kernel code... why should you expect to be included in the kernel code, or have the kernel developers go out of their way to accommodate your incompatible code? The kernel under the GPL is all about everyone coming to the table and sharing their work. When Samsung and Facebook submit their file systems to the kernel they don't ask for special treatment so they can retain a license that gives them the right to call backsies, as the CDDL does. They are required to GPL the code, share it, and be fine with another company using that code. Oracle doesn't get to be special just cause they created something good. Without the GPL at the kernel level, Linux fails. (As I said, userland doesn't matter; we can have 200 DEs with 200 different licenses and it's irrelevant... it's the kernel that must remain pure.)
 
I think I agree with most of what you said, but I wouldn't have used those words ;). ZFS can't be mainlined without an act of god/Larry Ellison, so Linus is certainly right to not care about it. I think a lot of the changes that break ZoL are sane, but some of them do seem intended to spite.

Arguing about it is just tilting at windmills though. If you want ZFS on Linux, effort would be best spent building a fast yacht you can use as leverage to get Ellison to relicense or writing a similar filesystem from scratch (but, without falling into all the filesystem development traps), or trying one of the OSes with a compatible license.

lol yes we just need some Linux lovers to get to work on a super boat. :)
 
The part I am curious about is if this CDDL nonsense predates Oracle or not.

People generally speak favorably of Sun, and dislike Oracle and blame all of this on Oracle, but I can't help but wonder if the bad licensing was in there the entire time and it only came to light after Oracle started suing.

I can't imagine you can legally bait and switch by putting software out there under a relaxed license and then switching it to a more restrictive one once projects start using the code...

But yes. Really disappointed Oracle bought Sun. It did lots of damage.

In my ideal world Oracle would die a horrible fiery death.
 
I understand the concerns about having it directly in a distro due to licensing problems.
In the present situation it's more likely that Debian will stop development than that Oracle's evil overlords will take our OpenZFS.

The other concerns are more interesting, as they are true: ZFS is slower than ext4/XFS, and doesn't have many of the options/optimizations those have.
ZFS is not an ideal fs for root. You should keep it separate, as it will cause issues - and it's only a question of time before forums are flooded with Ubuntu people complaining about ZFS bricking their root (even when it wasn't ZFS's fault).
What shtbuntu did is stupid - but hey, this move actually makes Debian look better - except it doesn't.
 
ZFS is useful if you're storing a lot of data and want to ensure that it's actually kept safe. It's an all-encompassing solution.

As to Linus -- his concerns are well founded, and I realize that such action from Oracle might be extremely disruptive, but I do wish he'd focus more on the technology being implemented than the legal aspect.
ZFS is an enterprise solution. It's built for uptime. There are solutions that are just as safe if "to the minute" uptime is not urgent for you and you are not swimming in on-hand replacement hardware.
 
ZFS is an enterprise solution. It's built for uptime. There are solutions that are just as safe if "to the minute" uptime is not urgent for you and you are not swimming in on-hand replacement hardware.

What other solutions do copy on write, checksum everything, have snapshots that don't take up additional space, don't lose a ton of storage space to parity, and are free?
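To illustrate the snapshot point - a minimal sketch with a made-up pool/dataset name; a fresh snapshot just references the existing blocks, so it consumes essentially nothing until the live data diverges from it:

Code:
# Take a snapshot before doing something risky
zfs snapshot tank/data@before-cleanup

# A brand new snapshot shows near-zero USED; space only accrues
# as the live dataset changes
zfs list -r -t snapshot -o name,used,referenced tank/data

# Roll back if the cleanup went badly
zfs rollback tank/data@before-cleanup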
 
I understand the concerns about having it directly in a distro due to licensing problems.
In the present situation it's more likely that Debian will stop development than that Oracle's evil overlords will take our OpenZFS.

The other concerns are more interesting, as they are true: ZFS is slower than ext4/XFS, and doesn't have many of the options/optimizations those have.
ZFS is not an ideal fs for root. You should keep it separate, as it will cause issues - and it's only a question of time before forums are flooded with Ubuntu people complaining about ZFS bricking their root (even when it wasn't ZFS's fault).
What shtbuntu did is stupid - but hey, this move actually makes Debian look better - except it doesn't.

I don't know what you are talking about here. Performance is absolutely stellar. You can't compare it to ext4 because ext4 is a single drive file system unless you have it backed with some sort of separate hardware RAID, and provided you don't bottleneck ZFS by giving it insufficient RAM or CPU (it is a software solution after all) it will outperform any hardware RAID. Another cool thing is that if you really want to run ext4 with ZFS you can. You can create a so called "zvol", a virtual block device built into the ZFS system, which you can then format with ext4. I use this method for swap partitions on my servers.
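For reference, the zvol trick looks roughly like this - a sketch, with "rpool" and the 8G size as placeholders (I'm leaving out the volblocksize and caching tweaks people often apply for swap):

Code:
# Create an 8G virtual block device inside the pool
zfs create -V 8G rpool/swap

# Treat it like any other block device
mkswap /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap

# Or put ext4 on a zvol instead, if that's what you need
# mkfs.ext4 /dev/zvol/rpool/vol1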



I have no xfs experience, so I can't speak to that.

As far as using ZFS for root, it is not an issue at all. I have been doing it on two servers for years. The only problem I ever had was when I moved drives and SAS controllers around and all the device names for the drives changed. My storage pool is created based on permanent device names, but Debian's installer set up the root pool using direct device names (/dev/sda and /dev/sdb), which was a problem when they changed.

When this happened though, and the root pool failed to import on boot, it was simple to address. I got an initramfs console, poked around, figured out which devices were the correct ones, imported the pool manually, and everything continued to boot as normal.

I'd argue it is a beautiful thing to be able to snapshot your root file system and revert if things go wrong, and to use ZFS send/recv to do remote differential block-based backups based on those snapshots.
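The send/recv flow is roughly this - a minimal sketch with placeholder dataset, snapshot, and host names; real backup scripts add error handling and snapshot rotation on top:

Code:
# Snapshot the root dataset on the source machine
zfs snapshot rpool/ROOT@2020-01-16

# First run: send the full snapshot to the backup box
zfs send rpool/ROOT@2020-01-16 | ssh backuphost zfs recv -u backup/ROOT

# Later runs: send only the blocks that changed between snapshots
zfs snapshot rpool/ROOT@2020-01-17
zfs send -i rpool/ROOT@2020-01-16 rpool/ROOT@2020-01-17 | \
    ssh backuphost zfs recv -u backup/ROOT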

You have to be a little bit careful: if you update the kernel and then restore an old snapshot with previous kernels, you may end up with an unbootable system due to GRUB pointing to a kernel that doesn't exist. I solve this by keeping one old spare emergency kernel installed that I never change, just as a constant rescue boot.

And even if this happens, it isn't THAT difficult to fix with a USB rescue boot and a chroot.

One of the great parts about ZFS is how easy it is to rescue things when things go wrong. Unlike with hardware RAID, if one system goes down you don't need the same compatible controller. You can import that pool on any machine that can see all the drives (even if they for example used to be connected to a SAS HBA, and now are plugged in via USB) as long as you can install ZFS on it.
 
I don't know what you are talking about here. Performance is absolutely stellar. You can't compare it to ext4 because ext4 is a single drive file system unless you have it backed with some sort of separate hardware RAID. I have no xfs experience, so I can't speak to that.

As far as using ZFS for root, it is not an issue at all. I have been doing it on two servers for years. The only problem I ever had was when I moved drives and SAS controllers around and all the device names for the drives changed. My storage pool is created based on permanent device names, but Debian's installer set up the root pool using direct device names (/dev/sda and /dev/sdb), which was a problem when they changed.

When this happened though, and the root pool failed to import on boot, it was simple to address. I got an initramfs console, poked around, figured out which devices were the correct ones, imported the pool manually, and everything continued to boot as normal.

I'd argue it is a beautiful thing to be able to snapshot your root file system and revert if things go wrong, and to use ZFS send/recv to do remote differential block-based backups based on those snapshots.

You have to be a little bit careful: if you update the kernel and then restore an old snapshot with previous kernels, you may end up with an unbootable system due to GRUB pointing to a kernel that doesn't exist. I solve this by keeping one old spare emergency kernel installed that I never change, just as a constant rescue boot.

And even if this happens, it isn't THAT difficult to fix with a USB rescue boot and a chroot.

One of the great parts about ZFS is how easy it is to rescue things when things go wrong. Unlike with hardware RAID, if one system goes down you don't need the same compatible controller. You can import that pool on any machine that can see all the drives (even if they for example used to be connected to a SAS HBA, and now are plugged in via USB) as long as you can install ZFS on it.


Well, you're writing as an experienced Linux user who can manage that kind of recovery; most people using Ubuntu cannot, and will install ZFS on a single drive. (Guess what happens when that drive is marked as degraded or faulted, whether from some strange bug or an actual hardware problem.)

In terms of performance here's a great comparison between ext4 and zfs (page 2 & 3)
https://www.phoronix.com/scan.php?page=article&item=ubuntu1910-ext4-zfs&num=2
https://www.phoronix.com/scan.php?page=article&item=ubuntu1910-ext4-zfs&num=3

ZFS is great for storage, VM storage, etc.; it's not that great in everyday use.
 
Well, you're writing as an experienced Linux user who can manage that kind of recovery; most people using Ubuntu cannot, and will install ZFS on a single drive. (Guess what happens when that drive is marked as degraded or faulted, whether from some strange bug or an actual hardware problem.)

That is fair. I wouldn't recommend this to someone who does not know what they are doing.

My path to becoming competent with ZFS was long and arduous and started with the relative ease of FreeNAS, and then I built knowledge of it over time.

I don't know what, other than ignorance, would cause a casual user to install on ZFS, especially with only a single drive.


In terms of performance here's a great comparison between ext4 and zfs (page 2 & 3)
https://www.phoronix.com/scan.php?page=article&item=ubuntu1910-ext4-zfs&num=2
https://www.phoronix.com/scan.php?page=article&item=ubuntu1910-ext4-zfs&num=3

ZFS is great for storage, VM storage, etc.; it's not that great in everyday use.

I will agree with you that ZFS is a stupid choice for a single drive.

That is not what it is intended for, and it will not perform well in single drive configurations.

I don't use it for single drive configurations either. Whenever I use ZFS I am at least setting up a mirror, if not a RAIDz2 (RAID6) pool.
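For anyone curious, setting up either layout is a one-liner - a sketch with obviously fake device names (use the /dev/disk/by-id paths on real hardware, for the reasons mentioned elsewhere in this thread):

Code:
# Two-disk mirror
zpool create tank mirror \
    /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

# Six-disk RAIDz2: any two drives can fail, roughly like RAID6
zpool create bigtank raidz2 \
    /dev/disk/by-id/ata-DISK_1 /dev/disk/by-id/ata-DISK_2 \
    /dev/disk/by-id/ata-DISK_3 /dev/disk/by-id/ata-DISK_4 \
    /dev/disk/by-id/ata-DISK_5 /dev/disk/by-id/ata-DISK_6

# Periodic scrub to verify checksums and repair silent corruption
zpool scrub tank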


Comparing ZFS to ext4 on a single drive is comparing apples to corkscrews.

The real comparison is to a hardware or another software RAID solution on multiple drives, and in this setup I'd take ZFS every time.

I do run Linux on my desktop as well, and there I used ext4 for everything.

I have considered setting up a small ZFS mirror for my root drive though, just to add some redundancy. I've never gotten around to it though.


All of that said, copy-on-write designs have many advantages on slower drives and can significantly speed things up, by essentially turning random file access into sequential file access, etc.

As the drives get faster, that advantage goes away and at a certain point it is probably faster to just use the drives natively, as long as you don't need redundancy.

I have never used ZFS or any RAID on NVMe drives, and probably wouldn't. Without the performance increase you get from slower drives (spinning hard drives and SATA SSDs), the overhead is likely just going to slow you down.

Traditional hardware RAID will suffer from this as well, but it is less jarring, because hardware RAID doesn't have as big of a performance advantage on slower drives.
 
The part I am curious about is if this CDDL nonsense predates Oracle or not.

People generally speak favorably of Sun, and dislike Oracle and blame all of this on Oracle, but I can't help but wonder if the bad licensing was in there the entire time and it only came to light after Oracle started suing.

I can't imagine you can legally bait and switch by putting software out there under a relaxed license and then switching it to a more restrictive one once projects start using the code...

But yes. Really disappointed Oracle bought Sun. It did lots of damage.

In my ideal world Oracle would die a horrible fiery death.

Yes, CDDL was used by Sun. I imagine they figured it would make sense to try and protect some of their patents while still being mostly true to open source. CDDL is open source; the idea is that it protects patents used in that source... by saying that anything which incorporates the patented open source code going forward is a derivative work. It is very close to the language used by Oracle when suing Google over Java, for instance, where they claim... sure, Google isn't directly using Java, but they started there, and by starting there the following work also needs to license their Java patents.

Yes, Sun did use CDDL for ZFS... and Oracle has continued to use it. The main issue with it at this point is that Oracle has already filed cases against others based on the derivative work argument. Oracle uses Linux and ZFS themselves... it would be in their interest to switch the license so it could be mainlined. As it is, they bolt their own ZFS onto Oracle Linux. I guess they are OK with the current state of affairs, as they are the only ones really offering real ZFS with Linux. They also have the advantage of not caring too much about what the latest version of the Linux kernel is up to... I have little doubt 99% of all Oracle Linux installs are using their UEK (Unbreakable Enterprise Kernel), which is like two LTS versions back at all times... that gives their rather large team of developers plenty of time to make sure their ZFS bolt-ons are perfect. In the end it's all about the server money... Oracle also provides a Red Hat-compatible kernel, and I have no doubt they don't really want Red Hat having ZFS as its main selling feature.

The best thing that could happen to the Linux small(er) server market would be for a proper ZFS replacement to really step up. Problem is, cloud seems to be where most of the big development money is, and clustered file systems are not at all the same thing. Facebook or Amazon isn't going to develop a ZFS replacement.

To be honest... all this ZFS talk is going to die soon, IMO.

XFS is the future. Yes... I said XFS, that Silicon Graphics file system. lol

Red Hat decided btrfs was not the future a while back... and has started work on modernizing XFS. They added reverse mapping only a year and a half ago or so... which is going to let them introduce copy-on-write, deduplication, online metadata scrubbing, snapshots... and the same type of accurate error detection and damage reconstruction found in ZFS.

Anyway, that is where I see it going... for desktop users, ya, ext4 makes 1000% more sense than ZFS. For Linux server users I would actually be suggesting XFS... it has always been a good, fast, reliable large-scale storage solution. Over the next few years Red Hat is going to flesh it out into a complete ZFS replacement.
 
I don't know what, other than ignorance, would cause a casual user to install on ZFS, especially with only a single drive.

I will agree with you that ZFS is a stupid choice for a single drive.

They see a new shiny button on the installation menu, and hear great things, like it being the safest filesystem, etc.
I do not expect the user to actually research anything - I expect the user to be as dumb as possible.

I started with ZFS only recently, like 3-4 years ago, as a log storage project at work.
(At this moment I'm running a 192TB system (24 x 8TB) for logs and backups with gzip-8 compression (raidz2), and fast storage with 48TB of SSD (24 x 2TB) for VM storage with lz4 (also raidz2), split into 2 pools: one for the SQL-related stuff without any sort of caching (direct write), and one with a lot of cache memory and dedup for systems.)
I also want to move my home KVM storage onto ZFS (and use multipath functionality to access it from my host/s), but at this moment it would take too much work - so I'm putting that off until I take a vacation or something.

As a more enterprise/pro user who has also worked with HDFS and other technologies: if ZFS doesn't provide distributed functionality in the base code soon, its bright future may end quite soon - within the next year or 2 (at least in the enterprise world).
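In practice most of that setup is just per-dataset properties - a sketch with made-up pool/dataset names; the gzip level and which datasets get dedup are examples, not anyone's exact config:

Code:
# Heavy compression for cold data like logs and backups
zfs create -o compression=gzip-8 logpool/backups

# Cheap, fast compression for VM storage
zfs create -o compression=lz4 ssdpool/vms

# Dedup only on datasets where blocks actually repeat - it costs RAM
zfs set dedup=on ssdpool/vms/systems

# Keep file data out of the ARC for the database dataset
zfs set primarycache=metadata ssdpool/vms/sql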
 
They see a new shiny button on the installation menu, and hear great things, like it being the safest filesystem, etc.
I do not expect the user to actually research anything - I expect the user to be as dumb as possible.

I started with ZFS only recently, like 3-4 years ago, as a log storage project at work.
(At this moment I'm running a 192TB system (24 x 8TB) for logs and backups with gzip-8 compression (raidz2), and fast storage with 48TB of SSD (24 x 2TB) for VM storage with lz4 (also raidz2), split into 2 pools: one for the SQL-related stuff without any sort of caching (direct write), and one with a lot of cache memory and dedup for systems.)
I also want to move my home KVM storage onto ZFS (and use multipath functionality to access it from my host/s), but at this moment it would take too much work - so I'm putting that off until I take a vacation or something.

As a more enterprise/pro user who has also worked with HDFS and other technologies: if ZFS doesn't provide distributed functionality in the base code soon, its bright future may end quite soon - within the next year or 2 (at least in the enterprise world).

FreeNAS implemented some distributed functionality into recent versions, but it is not built into ZFS directly.

Personally I want none of that.

I prefer my bits all in one place under my manual control. I do off-site backups, but I do them in a static Cron-based way.

I actively avoid technologies, software and projects that distribute or move things to the cloud. I think that stuff is insanity.
 
Yes, CDDL was used by Sun. I imagine they figured it would make sense to try and protect some of their patents while still being mostly true to open source. CDDL is open source; the idea is that it protects patents used in that source... by saying that anything which incorporates the patented open source code going forward is a derivative work. It is very close to the language used by Oracle when suing Google over Java, for instance, where they claim... sure, Google isn't directly using Java, but they started there, and by starting there the following work also needs to license their Java patents.

Yes, Sun did use CDDL for ZFS... and Oracle has continued to use it. The main issue with it at this point is that Oracle has already filed cases against others based on the derivative work argument. Oracle uses Linux and ZFS themselves... it would be in their interest to switch the license so it could be mainlined. As it is, they bolt their own ZFS onto Oracle Linux. I guess they are OK with the current state of affairs, as they are the only ones really offering real ZFS with Linux. They also have the advantage of not caring too much about what the latest version of the Linux kernel is up to... I have little doubt 99% of all Oracle Linux installs are using their UEK (Unbreakable Enterprise Kernel), which is like two LTS versions back at all times... that gives their rather large team of developers plenty of time to make sure their ZFS bolt-ons are perfect. In the end it's all about the server money... Oracle also provides a Red Hat-compatible kernel, and I have no doubt they don't really want Red Hat having ZFS as its main selling feature.

The best thing that could happen to the Linux small(er) server market would be for a proper ZFS replacement to really step up. Problem is, cloud seems to be where most of the big development money is, and clustered file systems are not at all the same thing. Facebook or Amazon isn't going to develop a ZFS replacement.

To be honest... all this ZFS talk is going to die soon, IMO.

XFS is the future. Yes... I said XFS, that Silicon Graphics file system. lol

Red Hat decided btrfs was not the future a while back... and has started work on modernizing XFS. They added reverse mapping only a year and a half ago or so... which is going to let them introduce copy-on-write, deduplication, online metadata scrubbing, snapshots... and the same type of accurate error detection and damage reconstruction found in ZFS.

Anyway, that is where I see it going... for desktop users, ya, ext4 makes 1000% more sense than ZFS. For Linux server users I would actually be suggesting XFS... it has always been a good, fast, reliable large-scale storage solution. Over the next few years Red Hat is going to flesh it out into a complete ZFS replacement.
(It's likely going to be FAT32-64 ;) )
I assume they are going to make their own fs or use one of the distributed filesystems (unless IBM buys ZFS from Oracle - or buys Oracle).
 
(It's likely going to be FAT32-64 ;) )
I assume they are going to make their own fs or use one of the distributed filesystems (unless IBM buys ZFS from Oracle - or buys Oracle).

Never know; ya, IBM is the monkey wrench. They could decide paying for ZFS is less hassle. But Red Hat has been defaulting to XFS for RHEL server for 3 years now... and they continue mainlining XFS improvements. As far as I know, they are still following their XFS roadmap.

https://www.phoronix.com/scan.php?page=news_item&px=XFS-2019-Copy-On-Write-Better
This is from almost a year ago now... but ya, so far the work on XFS continues. Hopefully IBM doesn't abandon it and jump in with Oracle.
 
FreeNAS implemented some distributed functionality into recent versions, but it is not built into ZFS directly.

Personally I want none of that.

I prefer my bits all in one place under my manual control. I do off-site backups, but I do them in a static Cron-based way.

I actively avoid technologies, software and projects that distribute or move things to the cloud. I think that stuff is insanity.
Main problem in enterprise is capacity.

How many HBA controllers can you add before you degrade performance? How many disks can you have, and how much capacity?
That's the sole reason we end up with distributed technologies.

Downtime
Price
Capacity

For capacity
Currently most DCs run Dell R720xd's with either 12x 3.5" bays or 24x 2.5"; some will run Supermicro 24-36x 3.5" boxes for capacity and price. (No one really wants to use Lenovo or HP anymore - whatever the reason :) )

For downtime
Distributed filesystems, where you won't cry over spilled milk because half the DC went out due to a stupid mistake or hardware failure. Time is $.

Price
It's much cheaper to buy a couple of older servers with 12-36 slots than to buy one server with 1-2 HBA controllers plus external storage boxes. Not to mention, if one goes down for whatever reason you are still up.
 