ZFS Questions

Szyrs

I'm having difficulty finding recent information, and this forum looks like the best place to ask, so if anyone has any information that would be great. I'll try to keep it simple. My intention is to create a ZFS storage node that I can share with other physical machines and VM workstations. Cheaper is always better; the available hardware is:

Xeon W3690
24 GB ECC
HP motherboard from a Z400 (don't have the number handy)
3ware 9650SE-24M8

1. Is FreeBSD 10 the only ZFS OS option that supports full drive encryption?

2. Is OpenSolaris the best performing ZFS OS option?

3. Is OmniOS the best ZFS OS option for ongoing support?

4. With deduplication and all the other treats running, how much hard drive space will I be able to run with 24 GB of RAM? Does it vary at all from FreeBSD to OmniOS or other illumos distributions?

5. Does FreeNAS still lack features when compared with the other options? I've seen lots of old posts about moving on to better systems but they still seem to be one of the better supported options, even if the scope is more limited. The only comparisons I can find are years out of date. I'm intending to mess about with virtualisation on some other machines and so I'm not entirely sure what I will want from this storage machine just yet.


Sorry if I've missed a thread that explains just this, but I'm finding it hard to find high-level information that is newer than 2013. I'm happy to go with OpenSolaris or FreeBSD 10 or OpenIndiana, but if the newer options are more suited to me then I'd like to take advantage of that.
 
1. Drive encryption is available on BSD, Linux and Solarish (Oracle Solaris and its forks).
BSD and Linux encrypt the disks below ZFS.
The Solarish options are not based on encrypted disks but on encrypted files on disks,
which means they are the slowest but allow a backup of the encrypted files/pools to insecure places.

Oracle Solaris is the only one with real ZFS encryption.

2.
In many performance tests, Solarish NFS wins over BSD, while iSCSI seems similar. The Solaris CIFS server quite often wins over SAMBA. Solarish is not always the best, but it is one of the best.

3.
There are commercial support options for OmniOS and Oracle Solaris (for Solaris-based systems).
You can find similar options for BSD and Linux.

4. All OpenZFS variants have the same restrictions, as they all share the illumos upstream.
Think twice about using dedup.

5.
see 4.

Some reasons why I prefer Solarish:
- only a single way of disk identification (WWN on newer HBAs)
- working hot-spare management
- Comstar (integrated FC/iSCSI stack), Crossbow (virtual NICs, etherstubs) and SMF (services)
- Solaris CIFS server (I prefer it over SAMBA) with trouble-free Windows SID, ACL and "previous versions" support
- all basic storage features are provided by Solaris/OmniOS itself,
not the use-whatever-toolset-you-want approach of Linux or BSD
 
Thanks very much for that.

Just to clarify what you mean by "Solaris" when you compare it to the others: do you mean Solaris including derivatives, or do you literally mean Solaris 11? My understanding is that FreeBSD and Linux both have ZFS ported to them, whereas the Solaris derivatives have ZFS in the kernel. Is that the reason for the performance advantage?


Also, could you please expand a little on the warning against dedup? Aside from the resources required for it, are there problems with it? I do a lot of work with digital media, and one nightmare that I'm constantly fighting is having multiple files, or many copies of one file, with small differences between them. I was really hoping to kill two birds with one stone by moving to ZFS. It's possible to pick up an older server with 128 GB of DDR2 RAM, if that would help? I think they are about $300 currently...
 
Has anyone had any success with this sort of setup?

"The Alternative: L2ARC

So far, we have assumed that you want to keep all of your dedup table in RAM at all times, for maximum ZFS performance. Given the potentially large amount of RAM that this can mean, it is worth exploring some alternatives.

Fortunately, ZFS allows the use of SSDs as a second level cache for its RAM-based ARC cache. Such SSDs are then called "L2ARC". If the RAM capacity of the system is not big enough to hold all of the data that ZFS would like to keep cached (including metadata and hence the dedup table), then it will spill over some of this data to the L2ARC device. This is a good alternative: When writing new data, it's still much faster to consult the SSD based L2ARC for determining if a block is a duplicate, than having to go to slow, rotating disks.

So, for deduplicated installations that are not performance-critical, using an SSD as an L2ARC instead of pumping up the RAM can be a good choice. And you can mix both approaches, too."

From: http://constantin.glez.de/blog/2011/07/zfs-dedupe-or-not-dedupe
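
To put rough numbers on that spill-over argument, here is a toy Python model of the average dedup-table lookup time for a write, depending on where the DDT entries live. All latencies and hit ratios below are invented for illustration, not measurements:

    # Toy model: average DDT-lookup latency for a write (all numbers invented).
    RAM_NS = 100          # ~0.1 us, assumed RAM/ARC hit
    SSD_NS = 100_000      # ~0.1 ms, assumed L2ARC (SSD) hit
    HDD_NS = 8_000_000    # ~8 ms, assumed random read from rotating disk

    def avg_lookup_ns(ram_hit: float, l2arc_hit: float) -> float:
        """Average lookup time; whatever misses RAM and L2ARC goes to disk."""
        disk = 1.0 - ram_hit - l2arc_hit
        return ram_hit * RAM_NS + l2arc_hit * SSD_NS + disk * HDD_NS

    print(avg_lookup_ns(1.0, 0.0))  # DDT fully in RAM:       100 ns
    print(avg_lookup_ns(0.5, 0.5))  # half spilled to L2ARC:  50_050 ns
    print(avg_lookup_ns(0.5, 0.0))  # half spilled to disk:   4_000_050 ns

Even the spilled-to-SSD case is orders of magnitude better than going to rotating disk, which seems to be the whole argument for an L2ARC on a dedup pool.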
 
_Gea, I believe you are responsible for napp-it? Would you recommend the Solaris 11.3 beta, or the 11.2 build with the napp-it workaround?
 

When talking about both Oracle Solaris and the free forks around illumos, I prefer to talk about Solarish. They differ in features that were introduced after the pool v28 release, like feature flags in illumos, or ZFS encryption and SMB 2.1 in Solaris 11.3.

Mostly the kernel aspect is important when talking about the Solaris CIFS server. This multithreaded service is usually faster than its SAMBA counterpart, as it is Solaris- and ZFS-optimized (it can hold Windows SIDs and ACLs as ZFS attributes), whereas SAMBA needs to run on everything that is able to add binary numbers.

About dedup:
ZFS dedup is realtime dedup. While this is a sophisticated approach, it needs to hold the complete dedup table in RAM. This can mean up to 5 GB of RAM per TB of deduped data. If you do not have enough RAM, the table must be loaded from disk. There are reports of people who tried to delete a snap in such a case and it took a week.
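
As a rough back-of-the-envelope check of that rule of thumb, here is a small Python sketch. The entry size (~320 bytes per in-core DDT entry) and the 64 KB average block size are assumptions for illustration; real pools vary:

    # Back-of-the-envelope DDT RAM estimate (illustrative assumptions only).
    DDT_ENTRY_BYTES = 320          # assumed size of one in-core dedup-table entry
    AVG_BLOCK_BYTES = 64 * 1024    # assumed average block size; real pools vary

    def ddt_ram_gib(deduped_tib: float) -> float:
        """Estimate RAM (GiB) needed to keep the whole dedup table in core."""
        unique_blocks = deduped_tib * 2**40 / AVG_BLOCK_BYTES
        return unique_blocks * DDT_ENTRY_BYTES / 2**30

    for tib in (1, 4, 24):
        print(f"{tib:>2} TiB deduped -> ~{ddt_ram_gib(tib):.1f} GiB RAM for the DDT")
    # 1 TiB -> ~5.0 GiB, matching the '5 GB per TB' rule of thumb above.

Note that at smaller average block sizes the table grows proportionally, which is why the same pool can need far more RAM than the rule of thumb suggests.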

With enough RAM and special workloads, you can enable dedup. The main problem is that dedup works pool-wide. Once enabled, you cannot disable it without destroying the pool. The next problem is that you usually want RAM as a fast read cache. If you use the RAM for dedup, your storage may become quite slow.

Especially in cases where your dedup ratio is not higher than, say, 3, I would not use dedup at all, and if you do, then only with a smaller dedicated pool. Otherwise it's better to enable LZ4 compression and add some disks; see the quick comparison below.
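
To make that trade-off concrete, a quick hypothetical comparison in Python (all ratios and figures invented for illustration):

    # Hypothetical comparison: 10 TiB of logical data (all ratios invented).
    LOGICAL_TIB = 10.0

    def stored_tib(ratio: float) -> float:
        """Physical space needed at a given space-saving ratio."""
        return LOGICAL_TIB / ratio

    print(f"dedup, ratio 3.0 : {stored_tib(3.0):.1f} TiB + ~17 GiB RAM for the DDT")
    print(f"LZ4,   ratio 1.5 : {stored_tib(1.5):.1f} TiB, no extra RAM needed")
    # The dedup savings (~3.3 TiB vs ~6.7 TiB stored) buy you disks, but cost
    # RAM that would otherwise serve as read cache.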
 

Solaris 11.3 is beta; I would expect the final release within a few months.
There are some improvements in 11.3 around zones, LZ4 compression and SMB 2.1.

These improvements may be essential for a storage server, especially LZ4 compression and SMB 2.1.
If you are using Macs, the SMB 2.1 performance improvement may be impressive
(100 MB/s instead of 50 MB/s).

If your setup is not mission critical, I would try Solaris 11.3 beta.
 
Avoid dedup, it does not work well. Oracle has bought GreenBytes, and they have a superior dedup technology that works very well, best in class. Wait until Oracle ZFS incorporates the GreenBytes dedup.

Compression should always be enabled on all pools, as it increases performance: it is faster to read 1000 bytes from disk and decompress them in RAM to 2000 bytes than to read 2000 bytes from disk.
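
As a sanity check of that claim, here is a toy Python pipeline model; the disk and decompression speeds are assumptions picked for illustration:

    # Toy model: effective read throughput with transparent compression.
    DISK_MBPS = 150      # assumed sequential read speed of a rotating disk
    DECOMP_MBPS = 1500   # assumed LZ4 decompression speed (invented figure)

    def effective_read_mbps(ratio: float) -> float:
        """Logical MB/s delivered when data is stored at 1/ratio of its size."""
        if ratio <= 1.0:  # stored uncompressed: plain disk speed
            return DISK_MBPS
        # Pipeline: time per logical MB = disk time for 1/ratio MB + CPU time.
        return 1 / ((1 / ratio) / DISK_MBPS + 1 / DECOMP_MBPS)

    print(f"uncompressed  : {effective_read_mbps(1.0):5.0f} MB/s")
    print(f"2x compressed : {effective_read_mbps(2.0):5.0f} MB/s")

With these assumed numbers, a 2x ratio delivers ~250 MB/s of logical data from a ~150 MB/s disk; the trade-off only reverses when the CPU decompresses more slowly than the disk reads.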
 
Personally, I store 99% incompressible media, so I don't compress. Dedup could save me some GBs, but not much (for example, the same trailers on several Blu-ray discs). I use software to look for identical files and remove or link the duplicates as needed.

Where deduplication should shine is with many close-to-identical, non-Solaris VMs. Maybe also in a storage infrastructure for movie production. Both scenarios require the best performance, though, so deduplication might be counterproductive.
 