Any ZFS encryption coming soon? I.e. ZFS 29/30?

tangoseal

[H]F Junkie
Joined
Dec 18, 2010
Messages
9,741
Right now we have ZFS v28, which is amazing, but I was wondering if any of you ZFS pundits out there know whether we are ever going to see ZFS v29/v30 etc. in products like FreeNAS, NAS4Free, OpenIndiana, etc.

I am really looking forward to being able to encrypt at the pool or dataset level, avoiding whole-disk encryption. Probably a pipe dream, I know.
 
Not likely - at least not in the form of Solaris-compatible versions.

ZFS v28 was the last version that was part of an "open" release of Solaris. Oracle dropped that and moved to fully closed development going forward.

That is why ZFS has branched. There is now the Solaris path forward and the "open" branch. The open branch is based on v28 and has added the concept of "feature flags" in place of version numbers. I don't know when - or if - the open branch will have encryption built in.
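For what it's worth, on an open-branch (Illumos/OpenZFS-style) system you can see those flags instead of a version bump; a rough sketch (the pool name `tank` is an assumption):

```shell
# On a feature-flag build, "zpool upgrade -v" lists legacy pool
# versions up to 28 and then the supported feature flags.
zpool upgrade -v

# Flags show up as per-pool properties rather than a version number:
zpool get all tank | grep 'feature@'

# Enabling a single flag (here lz4 compression support) instead of
# upgrading the whole pool version:
zpool set feature@lz4_compress=enabled tank
```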
 
Well, I am running eight 2 TB Reds in my ZFS NAS. How would you recommend I encrypt that data? As of now I know I can encrypt each of the 8 drives before committing them to a pool, but every time I boot the NAS, which isn't that often, will I have to enter my password for each drive, or just one time for all? I haven't done this yet.
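For reference, the per-drive approach I mean would look roughly like this on FreeBSD with GELI (a sketch only; the device names `da0`-`da7` and the keyfile path are assumptions). With one shared keyfile and the same passphrase on every provider, there is only one secret to remember:

```shell
# Generate one keyfile shared by all eight providers.
dd if=/dev/random of=/root/pool.key bs=64 count=1

# Initialize and attach each disk; geli prompts for the passphrase
# (use the same one everywhere so a single secret unlocks all drives).
for d in da0 da1 da2 da3 da4 da5 da6 da7; do
    geli init -K /root/pool.key -s 4096 "/dev/$d"
    geli attach -k /root/pool.key "/dev/$d"
done

# Build the pool on the encrypted .eli providers.
zpool create tank raidz2 \
    da0.eli da1.eli da2.eli da3.eli \
    da4.eli da5.eli da6.eli da7.eli
```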
 
I'm not sure I would trust Solaris encryption after the Oracle prez got up before the world and blathered on about how NSA spying is God's gift to the universe.
 
I'm not sure I would trust Solaris encryption after the Oracle prez got up before the world and blathered on about how NSA spying is God's gift to the universe.

I would not trust ANY closed source encryption. There is a high chance that some kind of master key is included.
 
I would not trust ANY closed source encryption. There is a high chance that some kind of master key is included.

I remember probably 15-20 years ago the buzz that a lot of encryption devices and stuff had NSA back doors. That was all for the "tin foil hatters", but it turns out they were right all along. :D
 
I remember probably 15-20 years ago the buzz that a lot of encryption devices and stuff had NSA back doors. That was all for the "tin foil hatters", but it turns out they were right all along. :D

Or the NSA wants you to think that because they want you to stop using easily available crypto.

Security (in a crypto sense of the word) and closed source just don't mix if you want any confidence in the solution. That has always been true though.
 
Code:
root@backup:~# zpool upgrade -v
This system is currently running ZFS pool version 34.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements
 29  RAID-Z/mirror hybrid allocator
 30  Encryption
 31  Improved 'zfs list' performance
 32  One MB blocksize
 33  Improved share support
 34  Sharing with inheritance

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

As mentioned before, switching over to Solaris 11.1 would get you the encryption features.
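If you do go that route, dataset-level encryption on Solaris 11.1 looks roughly like this (a sketch; the pool and dataset names are assumptions):

```shell
# Create an encrypted dataset; Solaris prompts for a passphrase.
zfs create -o encryption=on -o keysource=passphrase,prompt tank/secure

# Verify the properties took effect.
zfs get encryption,keysource tank/secure

# After a reboot, load the key before the dataset can be mounted.
zfs key -l tank/secure
```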
 
As a former user of Solaris 11/11, Solaris 11.1, OpenSolaris, OpenIndiana, Illumian, (and if you count machines I've set up for others, also a little of NexentaStor Community Edition and FreeBSD/FreeNAS) and most recently OmniOS, and as a current user of ZFSonLinux... lol at my stupid self for having used ANY of the Solaris crap before. Overall network and system performance are just so much better for me with Linux/ZoL. Though I started using those other things a fairly long while before ZoL existed. And while people will probably tell me I'm dumb for thinking of ZoL as stable enough to use, I've had zero issues, even though I was actually expecting issues. Of course, I wouldn't recommend it for production enterprise usage, but who knows, maybe someday we'll be able to say it's that stable.

OmniOS was by far the worst for me. I was getting about 15MB/s writes over either SMB or NFS to a 6-disk RAID-10 (well, the ZFS equivalent, a stripe of 3 mirrors)... I had a friend tell me it was a bad idea for me to use the "bloody" branch (what a childish name for it) but that's the only one they offered as a VMWare appliance, and their text installer wouldn't work in vSphere as it recognized neither the key to continue in the installer nor the "alternate" key it said to press if the main keys did nothing. Seriously? You can't even make an installer that works with vSphere's console keyboard mapping? What a POS. (There could be a way to fix that, but I've never had that issue with any other OS installer in vSphere.) And then there's the fact that they still aren't bundling a LSI driver that works with the M1015. Why is that? Because, if you go in their IRC channel, you are likely to be subject to some expletives if you tell them you're looking for support for home usage (even when you haven't mentioned your overall opinion of the OS). A few of them are helpful, but overall, you'll find they don't give a crap about us home users or making an OS that works well for us. Which is fine, but we need to realize that these OpenSolaris-based distros are just as far away from being perfect as Linux is, if not further.

So, in conclusion, only consider ZoL when you're not in the enterprise. And if you don't like ZoL, upgrade path be damned, go Solaris 11.1. It works a hell of a lot better and more consistently than any OpenSolaris-based distro. Just keep in mind you can NOT move pools between Solaris 11.1 and any relatively recent OpenSolaris-based distro unless you create the pools with an older version and don't upgrade (which means no encryption). My "version 5000" OmniOS pool did work with ZoL - not sure if a Solaris 11.1 pool would - fair chance they'd also be incompatible.
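The "create with an older version" trick above can be sketched like this (device names are assumptions); the key is never running `zpool upgrade` on that pool afterwards:

```shell
# Pin the pool to v28 so it can be imported by Solaris 11.1,
# Illumos distros, FreeBSD, and ZoL alike.
zpool create -o version=28 tank mirror c0t0d0 c0t1d0

# Confirm the version stuck (should report 28, not the default).
zpool get version tank
```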
 
I've been using pc-bsd. It actually has boot environments built in, full zfs features, etc...
 
OmniOS was by far the worst for me. I was getting about 15MB/s writes over either SMB or NFS to a 6-disk RAID-10 (well, the ZFS equivalent, a stripe of 3 mirrors)... I had a friend tell me it was a bad idea for me to use the "bloody" branch (what a childish name for it) but that's the only one they offered as a VMWare appliance, and their text installer wouldn't work in vSphere as it recognized neither the key to continue in the installer nor the "alternate" key it said to press if the main keys did nothing. Seriously? You can't even make an installer that works with vSphere's console keyboard mapping? What a POS. (There could be a way to fix that, but I've never had that issue with any other OS installer in vSphere.) And then there's the fact that they still aren't bundling a LSI driver that works with the M1015. Why is that? Because, if you go in their IRC channel, you are likely to be subject to some expletives if you tell them you're looking for support for home usage (even when you haven't mentioned your overall opinion of the OS). A few of them are helpful, but overall, you'll find they don't give a crap about us home users or making an OS that works well for us. Which is fine, but we need to realize that these OpenSolaris-based distros are just as far away from being perfect as Linux is, if not further.

I can only speak from my own experience.

So far, OmniOS is the most up-to-date free and stable ZFS implementation.
It is based on Illumos, the source of every free ZFS (BSD and ZoL use this as well).

The IBM 1015 (the original one, not the one reflashed to LSI 9211) is not supported by default.
Flash it to RAID-less 9211 IT mode (recommended on BSD and ZoL as well).
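The usual crossflash procedure, roughly (a sketch from memory; run from a DOS or UEFI shell boot environment, and the firmware/BIOS filenames and SAS address are assumptions):

```shell
# Wipe the IBM SBR and the existing firmware first.
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

# (Reboot, then) flash the LSI 9211-8i IT firmware and boot BIOS.
sas2flsh -o -f 2118it.bin -b mptsas2.rom

# Restore the card's original SAS address (printed on the sticker).
sas2flsh -o -sasadd 500605bxxxxxxxxx
```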

Your problems with OmniOS sound like a problem with your config, not a general Solaris or OmniOS problem.
Usually read/write values on OmniOS are more like these (even sync writes on ESXi): http://www.napp-it.org/doc/manuals/benchmarks.pdf
Mostly, Solaris gives you the fastest ZFS storage system.

btw.
You may try my new OmniOS VM as well. It is not a bare-bones OmniOS but a complete ZFS appliance preconfigured for ESXi 5.5.
(You will need some time to download, unzip, and upload the 30 GB folder from www.napp-it.org/downloads)
 
You may try my new OmniOS VM as well. It is not a bare-bones OmniOS but a complete ZFS appliance preconfigured for ESXi 5.5.
(You will need some time to download, unzip, and upload the 30 GB folder from www.napp-it.org/downloads)

Thanks for the link, and I would check it out if I hadn't folded my storage server back into my desktop. Linux has the additional advantage for me of being a usable desktop OS as well, unlike ANY Unix.

I did tweak some things on OmniOS, particularly for NFS, with zero benefits. There may have been tweaks that I missed, but I don't feel I should have to fiddle with settings to make extremely common protocols work correctly. Sorry, but that's the job of the OmniOS team.

And lol at the 9211 firmware flash... why would I downgrade my firmware just to use a junky driver? I grabbed the actual LSI driver, installed it, and it works perfectly. And my card will do JBOD passthrough, RAID-0/1/10, without having to flash different firmwares. What a stupid idea that was, having to flash separate firmwares for different functionality.

I'm not saying ZFSonLinux is right for everyone here, but I do think some of you guys like Solaris a bit too much, and same with FreeBSD.

Oh and BTW, the M1015 works perfectly in my Asus P8P67 Pro. I saw a ton of people who said theirs did not. No idea what's going on with them. What it did do, however, is increase my boot time about 4-5x, and that's overall, not just the added time of the extra Option ROM to load. It displays a black screen now for at least 30 seconds before anything else happens. (This part isn't a response to anyone here in particular, just including it as a tangent about the M1015, as everyone else seems to have spread incorrect information about it.)
 
And lol at the 9211 firmware flash... why would I downgrade my firmware just to use a junky driver? I grabbed the actual LSI driver, installed it, and it works perfectly. And my card will do JBOD passthrough, RAID-0/1/10, without having to flash different firmwares. What a stupid idea that was, having to flash separate firmwares for different functionality.

One of the best cards for ZFS software raid is the LSI 9211 with IT mode firmware.
The reason why the IBM is so popular is the fact that you can reflash it to a LSI 9211
- for less than half the price.

Without that, nobody in the ZFS world would care about the IBM 1015.
 
One of the best cards for ZFS software raid is the LSI 9211 with IT mode firmware.
The reason why the IBM is so popular is the fact that you can reflash it to a LSI 9211
- for less than half the price.

Without that, nobody in the ZFS world would care about the IBM 1015.

The only point of the flash is that the default drivers lack support for the M1015, and on the OpenSolaris-derivatives it's a pain to get the right driver working, as compared to real Solaris or Linux or Windows, where you can just use actual LSI drivers. (Actually, OpenSolaris can too, but not the latest ones; you have to go several releases back. I ran Illumian with official LSI drivers that worked perfectly with my M1015 back when Illumian itself was new, but upgrading them broke everything, as support had been dropped for the OS.)

Using the proper drivers for the M1015 is better than flashing it to the wrong firmware. Just that in some cases, people find the flash to be less of a pain. It's still the wrong way to do things. Of course, the proper firmware is the 9240 firmware, not the actual IBM firmware, because the LSI firmware is always newer than the modified versions for the third parties. Just flashed the latest firmware yesterday actually. But they're the same (well, it's a 9220 - the real 9240 has the RAID-5 key, but you don't want to use RAID-5 on SAS2008 chips anyway as it's way too slow).

Lack of OOB support for the M1015 is the fault of the OmniOS devs (and those of the other OpenSolaris-derivatives that probably also still don't support it OOB) and no one else. LSI has working drivers, and so does Nexenta. You can get a driver from Dan McDonald from Nexenta (one of the SUPER helpful guys - I had to get help from a Nexenta guy for OmniOS), but they seriously need to BUNDLE his driver (or - better - get a real LSI driver working).

For what it's worth, local disk benchmarks on the machine with danmcd's driver were fine. As were netio tests both across my physical network and to another VM. So while I can't necessarily say his driver performs the same as the real one, I'm confident in saying, at the very least, it performs okay. Too many variables between that VM and the Illumian VM I had with the real LSI drivers to compare, and I've long since trashed the Illumian VM after realizing they were never going to update it again. Personally, I think it was a pretty important distro, and letting it die was the wrong move. Anything that makes Unix more usable is not just nice, but downright necessary.

Of course this is all my personal opinion, and I don't want to talk up the M1015 like it's "all that." In fact the long bootup time is pissing me off a bit so I may be dumping mine soon.
 