OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

A couple quick things _Gea;

-- For whatever reason the latest OmniOS would not work for us. We kept getting an APD issue with our NFS shares. We had to go with OmniOS v11 r151010, and everything worked perfectly.

-- On the home > System > Network IB > ipib section the IP addresses do not display. The command line works and the interfaces are set up correctly. I can create new interfaces through it just fine, but I can't see them. This is with 0.9f6.

-- SRP didn't work out of the box even though the menu choice is there. I had to run "pkg install storage-server" to get it to work. I don't know if it was left out intentionally or accidentally.
 
Hey Gea, I've seen you say that SMB on Solaris tops out around 300 MB/s without jumbo frames. Is that per client, or the total across all clients?

It was quite a while ago that I made some tests with a single user on OI/OmniOS with SMB1.
 
A couple quick things _Gea;

-- For whatever reason the latest OmniOS would not work for us. We kept getting an APD issue with our NFS shares. We had to go with OmniOS v11 r151010, and everything worked perfectly.

-- On the home > System > Network IB > ipib section the IP addresses do not display. The command line works and the interfaces are set up correctly. I can create new interfaces through it just fine, but I can't see them. This is with 0.9f6.

-- SRP didn't work out of the box even though the menu choice is there. I had to run "pkg install storage-server" to get it to work. I don't know if it was left out intentionally or accidentally.

OmniTI made some improvements to NFS in the current 151014. While there is no known general problem with ESXi, I would also stay with an earlier release for NFS if that is definitely more stable.

As I do not use IB myself, the IB integration was done together with user Frank as a free and open community effort. You may check the menu actions at "/var/web-gui/data/napp-it/zfsos/03_system and network=-lin/033_Network_IB/" with common functions in /var/web-gui/data/napp-it/zfsos/_lib/illumos/iblib.pl . I can add the missing points but need some help due to missing hardware.

"pkg install storage-server" was only needed on Solaris in the past.
I have added it to the wget installer for OmniOS as well, just to be sure everything is there.
 
Get a 750; honestly, could you do enough writes for it to get that messed up before you rebuild your array? I guess I shouldn't talk, I use an X25-E SLC slog. But still, get the 750 and save your money. An over-provisioned 750 should last forever.
 
Get a 750; honestly, could you do enough writes for it to get that messed up before you rebuild your array? I guess I shouldn't talk, I use an X25-E SLC slog. But still, get the 750 and save your money. An over-provisioned 750 should last forever.

Good point; over-provision it like crazy and it should last :)
 
I have only tried a single user without performance settings like jumbo frames that may be needed for higher values.

Ok, understood.

I know this has been asked a million times but I'm having trouble grasping SMB share permissions on ZFS.

Is the basic premise that on the Solaris system you just create the same usernames that already exist on the Windows boxes? And then assign permissions in Napp-it based on those usernames?
 
Edit: Rebooted both machines and my issues still seem to be happening.


Removed my initial post as my issues are still not resolved. Will continue to work on it and post back.
 
@CopyRunStart
This is similar on any server OS.
If you need access restrictions, you must create users locally on your server. On access you must enter a known login name and password. If your local Windows username and password are the same, this login step is skipped.

The only other option is a centralized user database with Windows Active Directory. To use AD you must join your server to the AD.

You can assign permissions remotely from Windows (optionally log in as root to gain full permissions) or locally on OmniOS/Solaris.
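As a rough sketch of the local-user route on OmniOS/Solaris (username "alice" and share path /tank/share are placeholders; this assumes the kernel SMB server with the pam_smb_passwd module configured so that passwd also generates the SMB password hash):

useradd alice                                                  # create the local account
passwd alice                                                   # set the same password as on the Windows client
/usr/bin/chmod -R A+user:alice:full_set:fd:allow /tank/share   # add an inheriting full-access NFSv4 ACL entry for alice

Permissions can then be fine-tuned remotely from a Windows box via the Security tab, as described above.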
 
TL;DR
Create a DNS A Record for the server before joining it to the domain.

Thanks for the info.
I cannot say when this is needed.
In my own setup (OmniOS, Windows 2012 AD, public domain) I do not need it.
A simple join is enough.
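For reference, the join itself is a single command on OmniOS/Solaris (domain name and admin account are placeholders):

svcadm enable -r smb/server                     # make sure the kernel SMB service is running
smbadm join -u Administrator yourdomain.local   # prompts for the AD account password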
 
Looks like my issues are still present. Rebooted both the OmniOS server and my workstation. I'm back to only being able to get onto the shares using the IP address. I'm playing around with Samba4 at the moment, so this isn't a standard Windows domain controller. I'll get one of those running quickly and see if my issues persist.
 
Zarathustra[H];1041882575 said:
FreeNAS seems conspicuously absent from this discussion.

You're not serious, right? This is a thread about a solution that would make FreeNAS and other pre-packaged NAS solutions obsolete.
 
You're not serious, right? This is a thread about a solution that would make FreeNAS and other pre-packaged NAS solutions obsolete.

That is not clear from the first post. It explains what ZFS is and touts ZFS as the next best thing in storage :p FreeNAS is a ZFS implementation.

Can you explain in which way this would be superior/preferable?
 
That's a topic for another thread, but I think it's clear from the post that ZFS is rad, and one way to leverage ZFS in an easier-to-use way is to use _Gea's front end.
 
@CopyRunStart
This is similar on any server OS.
If you need access restrictions, you must create users locally on your server. On access you must enter a known login name and password. If your local Windows username and password are the same, this login step is skipped.

The only other option is a centralized user database with Windows Active Directory. To use AD you must join your server to the AD.

You can assign permissions remotely from Windows (optionally log in as root to gain full permissions) or locally on OmniOS/Solaris.


I should have been more specific. Basically my confusion is about how *nix/Solaris treats permissions vs. how Windows treats permissions.

I didn't realize I could just join the Solaris Server to the AD. Does anyone know of any good guides on integrating AD with Solaris? I did some googling but the guides seem a bit incomplete.


EDIT: Forgot to mention I did some testing regarding my earlier question about maxing out 10Gb with SMB.

Test was: Solaris x86 server connected to a 10Gb switch with a twinaxial cable, and 10 Windows hosts connected to the same switch with 1Gb connections. I had all 10 machines copy down the same 5GB file at once over SMB. According to the Solaris System Monitor, it was sending at about 8.8Gb/s, or 1.1GB/s.
 
Zarathustra[H];1041882671 said:
That is not clear from the first post. It explains what ZFS is and touts ZFS as the next best thing in storage :p FreeNAS is a ZFS implementation.

Can you explain in which way this would be superior/preferable?

Which ZFS platform among BSD, OSX, Linux or Solaris is superior is always a very subjective question. Many ZFS problems are common between them, so there is no problem discussing them here or talking about differences between platforms. But basically this is a Solarish (Oracle Solaris and its forks) thread, not a general ZFS thread. There are other forums with a focus on BSD, FreeNAS, NAS4Free, ZFSguru or ZoL.
 
I should have been more specific. Basically my confusion is about how *nix/Solaris treats permissions vs. how Windows treats permissions.

I didn't realize I could just join the Solaris Server to the AD. Does anyone know of any good guides on integrating AD with Solaris? I did some googling but the guides seem a bit incomplete.

Maybe it's because, from the outside, Solarish behaves almost identically to a Windows 2003 server after joining a domain. This covers management via the Windows server console as well as permissions regarding users, user identification with SIDs, snaps as Previous Versions, and NTFS-like ACLs.

You may read/google docs from Oracle, but you can also use docs from Microsoft, as the behaviour is quite identical for a remote user. The same goes for handling a ZFS filesystem: you can move it without preparing any settings to keep permissions intact, just like an NTFS disk.
 
Cool!

A question regarding L2ARC. I have a Samsung 850 Pro lying around. If I want to add this but don't want my L2ARC to be the size of the entire disk, how can I control its size? Do I need to format and partition it, or do I add the entire disk and control the L2ARC size from somewhere else?
 
Do I need to worry about 4k alignment or anything? Is it easy to partition a disk the wrong way?
 
I think, someone correct me if I'm wrong, but you can just partition it like normal and then just add it without the need to specify alignment. I'm not sure if you can specify alignment on a slog/l2arc device.
 
Partitioning is one option.
The other option is a fixed reservation = HPA (host protected area), which you can create with tools like hdat2: http://www.hdat2.com/files/cookbook_v11.pdf

Main advantage: you can use the full disk, no need to care about partitions.
This is similar to the reservations of enterprise SSDs.

In any case, you should use a new SSD or do a secure erase to help the SSD firmware to optimize data in the background.
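If you go the partition route instead, adding just that partition as cache is a one-liner (pool name "tank" and the slice name are placeholders; on Solarish a partition typically shows up as a slice like c2t1d0s0):

zpool add tank cache c2t1d0s0   # add only this slice/partition as L2ARC
zpool iostat -v tank            # verify the cache device is listed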
 
You could also dd an image, like:

** assumes you've formatted and mounted the drive to /slog **
sudo dd if=/dev/zero of=/slog/slog0.img bs=1M count=1024   # makes a 1 GB image
then use losetup (google it) to create the block device and add it to the zpool.

Granted, this is convoluted, so don't do it.
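Spelled out, the remaining steps would look roughly like this on Linux (pool name "tank" is a placeholder; again, not recommended):

LOOP=$(sudo losetup --find --show /slog/slog0.img)   # attach the image file as a loop block device
sudo zpool add tank log "$LOOP"                      # add the loop device to the pool as a slog
sudo zpool status tank                               # check that the log vdev shows up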
 
Is this under Linux? The syntax looks like it. If so, it's not recommended, as there is a possible deadlock due to the block code and ZFS recursing, or some such...
 
I use it on my pool to cache torrent writes. I use it in conjunction with forcing all writes to be synchronous to the slog drive so that my pool doesn't get too fragmented, or so the theory goes.
 
I use it on my pool to cache torrent writes. I use it in conjunction with forcing all writes to be synchronous to the slog drive so that my pool doesn't get too fragmented, or so the theory goes.

The ZIL/slog does not affect regular writes to your pool.
Without sync enabled, all small writes are cached in RAM and written after a few seconds as one large sequential write. On a power loss these few seconds are lost. As ZFS is a copy-on-write filesystem, your filestructure is always intact, but not all committed writes are on disk.

With sync enabled, the same happens. But additionally every single small write command is logged to the ZIL device (on-pool or as a separate slog) and committed from the ZIL. This is needed to be transactionally safe. Example: a single financial transaction affects two accounts with two writes, one where you take the money from an account and one where it goes onto another. Both or none must be done, or some money can go to the digital nirvana when only the first write is on disk.

The same applies when you use it to store VMs with older filesystems, where an inline data update is followed by an update of the metadata. A crash in between can corrupt such filesystems. Your application or OS must be able to trust a commit on critical writes.

The ZIL is only read after a crash on the next reboot to finish all committed writes. Think of it like the BBU on a hardware RAID.
In your case, just disable sync and use an SSD-only pool for torrents. With a regular SMB filer, sync is not used by default and is quite useless, as ZFS is always valid and a file that is being written during a power loss is corrupt anyway.
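To make that concrete, sync is a per-filesystem property (dataset names here are placeholders):

zfs set sync=disabled tank/torrents   # skip the ZIL entirely; only the last few seconds are at risk on power loss
zfs set sync=always tank/vmstore      # force every write through the ZIL/slog, e.g. for VM or database storage
zfs get sync tank/torrents            # show the current setting (default is "standard")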
 
Hmm. Thanks gea. How do I limit or prevent fragmentation of my larger pool without periodically recreating the pool?
 
You cannot limit or prevent fragmentation.
But this is only a problem if your pool is unbalanced and/or nearly full.

So if the question is how to prevent performance degradation,
the answer is: keep the fill rate below, say, 70-80%.

If you use SSDs, fragmentation is no longer a problem.
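Both values can be read straight from the pool properties (the fragmentation column needs a reasonably current OmniOS/illumos with the spacemap_histogram feature enabled):

zpool list -o name,size,allocated,free,capacity,fragmentation   # fill rate (CAP) and fragmentation per pool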
 
With SSDs becoming cheaper I'll move to them more. Thanks Gea. I think what I'll do is create a zvol that is 75% of the size of my zpool and only use that volume. The pool I speak of is a single 3 terabyte spindle disk.
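Something like this, I guess (pool/dataset names and sizes are placeholders; a quota on the pool might be even simpler than a fixed-size zvol):

zfs create -V 2.2T tank/vol75   # fixed-size zvol, as described above
zfs set quota=2.2T tank         # or simply cap how much the pool's datasets may use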
 
Partitioning is one option.
The other option is a fixed reservation = HPA (host protected area), which you can create with tools like hdat2: http://www.hdat2.com/files/cookbook_v11.pdf

Main advantage: you can use the full disk, no need to care about partitions.
This is similar to the reservations of enterprise SSDs.

In any case, you should use a new SSD or do a secure erase to help the SSD firmware to optimize data in the background.

Didn't think of that one! I did this for my SLOG.

My thought was that with an ultra-high-speed NVMe device, I might be able to get away with partitioning it and using the other partition as well, although I'm not sure if that is recommended.
 
Question regarding ARC hit rate: is there any way to increase this number, or is it just the nature of the algorithm and the data?
 
Didn't think of that one! I did this for my SLOG.

My thought was that with an ultra-high-speed NVMe device, I might be able to get away with partitioning it and using the other partition as well, although I'm not sure if that is recommended.

Partitioning and using the other parts, e.g. as L2ARC, is an option.
But the performance of an SSD depends on the interface, the controller and the flash quality.
A faster interface alone is not enough.
 
Question regarding ARC hit rate: is there any way to increase this number, or is it just the nature of the algorithm and the data?

The algorithms around the ZFS ARC are among the best there are. If you want to increase the hit rate, you can add more RAM or add an L2ARC SSD (not as fast as RAM).
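If you want to watch it, the raw counters are exposed via kstat on OmniOS/Solaris:

kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses zfs:0:arcstats:size   # hit/miss counters and current ARC size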
 