OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Nice to hear..thanks..!

How do I get the monitor extensions to do their thing? Do I have to enable a certain service..?


You must enable "Mon" in the top level menu (upper right corner)
 
still hella cool you can get support from the project's creator in one of the most accessible ways possible: these forums. Kudos _Gea.
 

Right on! Never saw those :eek: ..maybe create a big button in the next release! :D

I also enabled the Pro Acceleration Agents. Is that what is represented here?
http://www.napp-it.org/extensions/index_en.html

The "Edit" button enables logs and edit... edit what, and which logs? Should these always be enabled, or only when troubleshooting?
 
The acceleration agents read disks, filesystems, snaps, groups etc. in the background so they can be displayed without delay. Without them it may happen that you have to wait quite a long time on every menu reload.

Edit can show the source code of menus. You can also display a long version of a log of the last actions and commands, as well as the current state of some Perl hashes where all the information of a menu is collected.
 
I want to start testing replication from my prod napp-it appliance to my microserver napp-it appliance.

The idea now is to run a replication once each week, and I want to keep 4 weeks.. do I need to set the
Keep target snapnumber to 4
or
Hold unkept snaps for days to 30? (or both?)
 
This is an AND relation, read it like
keep at least 4 snapshots, and delete replication snaps only when they are older than 30 days
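In pseudocode, the rule above could look like this. This is only a sketch of the AND relation, not napp-it's actual code; the function and variable names are made up:

```python
from datetime import datetime, timedelta

def expired_snaps(snap_dates, keep=4, hold_days=30, today=None):
    """Sketch of the AND relation: a replication snap is deleted only
    if it is outside the newest `keep` snaps AND older than `hold_days`."""
    today = today or datetime.now()
    newest_first = sorted(snap_dates, reverse=True)
    protected = set(newest_first[:keep])          # always keep the newest N
    cutoff = today - timedelta(days=hold_days)
    return [d for d in newest_first if d not in protected and d < cutoff]

# Weekly replication over 8 weeks: the newest 4 snaps are protected,
# and of the rest only those older than 30 days are deleted.
today = datetime(2016, 1, 1)
weekly = [today - timedelta(weeks=w) for w in range(8)]
deleted = expired_snaps(weekly, keep=4, hold_days=30, today=today)
# weeks 5..7 (35, 42, 49 days old) go; week 4 (28 days old) survives
```

So with a weekly job, "keep 4" alone already covers roughly a month; the hold-days setting only delays deletion further.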
 
Okay, so if I understand correctly, and I just fill in "keep at least 4 replication snaps" and schedule it once a week, I should be good to go.. :D

EDIT
I just realized that it doesn't really make sense to keep 4 replication snaps.. because it's a "replication" job and not versioning.. so if I throw something away on A and replicate it to B, it will be gone on site B as well..
 
Replication snaps are regular snaps and can be used for versioning on the target side.

You only need to care that they are
- in sync on source and target, based on the number in the snap name
- on a new replication run, the target filesystem is reset to the last common snap,
so regular snaps that are done on the target via autosnap afterwards are lost.

If you need versioning on the target, you must rely on the replication snaps for it.
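The "reset to the last common snap" step can be sketched like this. This is illustrative only; the snap naming pattern here is made up, though real replication snaps do carry a job/run number in their names:

```python
def last_common_snap(source_snaps, target_snaps):
    """Find the newest replication snap present on both sides, by the
    number embedded in the snap name. A new incremental run rolls the
    target back to this snap, so any target-side snaps created after
    it (e.g. via autosnap) are lost."""
    def nr(snap):
        return int(snap.rsplit('_', 1)[-1])
    common = set(source_snaps) & set(target_snaps)
    return max(common, key=nr) if common else None

src = ['repli_nr_15', 'repli_nr_16', 'repli_nr_17']   # source already ran again
dst = ['repli_nr_14', 'repli_nr_15', 'repli_nr_16']   # target lags one run
base = last_common_snap(src, dst)   # -> 'repli_nr_16'
```

If there is no common snap at all, an incremental run is impossible and the replication must start over with a full send.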
 
Isn't the appliance some different software (perhaps another build of Solaris)?

Does their appliance OS run on non-Oracle hardware?

I do not believe that the Oracle ZFS Storage Appliance is available as a hardware-independent software product for installation. I believe it runs a modified Solaris installation and is available only with the hardware.
 
Cool. Well if someone can confirm if 11.3 supports VAAI it would be appreciated :)

One thing about Solaris, I've read there are driver issues with LSI 2308 on ESXi. Some people have taken OmniOS drivers to replace Solaris drivers but I'm not sure how stable that is. Sounds like a hack.
 
I think I've asked this before but I don't remember seeing any answers...

What's the best strategy for creating UIDs and GIDs for sharing between both NFS and SMB?

For example, Solaris seems to use unique UIDs but then all users share the "users" group. Most Linux systems use the "User Private Group" method where each user has a unique group associated with it.

If I understand correctly, User Private Groups would be the most similar to Windows, right?

That means that each user (i.e. "jmk396") would have its own group (i.e. "jmk396"). Again, if I understand correctly, each file would be owned by the same user and group (simple permissions), but the ACLs could grant access to groups such as "Everyone".
 
The best strategy is not to try.

Use NFS3 shares when you can allow anonymous access, as any restriction is only based on good will:
you simply need to set an IP and UID to get access to NFS3 shares.

Use SMB shares when you need authentication and authorisation.

The problem is not the question of user groups (GID).
That is only a setting during user creation.

Problems with Solaris CIFS:
- Unix groups are not used by the CIFS server.
Solaris adds a Windows-compatible group management, as Unix groups behave differently than groups on Windows.
If you want to keep SMB groups and Unix groups in sync, you must add mappings.

- Solaris CIFS can use real Windows SIDs as an extended ZFS attribute. This is mainly important with Active Directory,
where you can add user mappings if you need to keep local users in sync with AD users.
CIFS uses local Unix users, so NFS and SMB can be in sync then.

- Solaris CIFS uses Windows-like NFSv4 ACLs only. If you need to combine this with access based on Unix permissions,
you will find nothing but problems, as they behave differently, especially regarding permission inheritance.


Other options
- Use SAMBA instead of CIFS. Not as Windows-compatible, but more compatible with NFS, as SAMBA uses UID and GID only
- Use NFS4
 

I have to correct myself, thank you Gea!!
It's certainly possible to have versions using the replication target.

Just share the replication target and with Previous Versions you'll see the snapshots from the source.
In my testing the share was removed every time I ran the replication (so I had to re-share the target, but that's not a big issue)
 
Gea,

Thanks as always for the reply.

I'm not really concerned about security at all. I was just thinking about Windows interoperability.

For example, Windows doesn't have a "Primary Group" concept like Unix does. Therefore, if I have a Unix user with a username of "jmk396" and a group of "users", how does this translate to Windows?
 
Windows is far more flexible.
You create a user; no need for a group.
You can create groups with groups and/or users as members.

This is why Sun added a Windows-compatible SMB group management
to add this flexibility to Solaris.

If you want to translate Windows groups to Unix groups, you can use
idmapping like wingroup:abc = unixgroup:users

But if you do not need security, just keep the folder fully open,
like everyone@=modify recursively, and set aclmode to restricted (Illumos only)
to avoid permission modification from Unix via chmod.
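The group-idmapping idea can be reduced to a table lookup. This is only a toy model; the real Solaris idmap service is far richer (SIDs, wildcard and directional rules), and the names here are made up:

```python
def resolve_unix_group(win_group, group_mappings, fallback=None):
    """Toy model of an idmap rule like 'wingroup:abc = unixgroup:users':
    an SMB group name is translated to a Unix group via a mapping table.
    Unmapped groups fall through to `fallback` (an ephemeral id in the
    real implementation)."""
    return group_mappings.get(win_group, fallback)

mappings = {'abc': 'users'}   # wingroup:abc = unixgroup:users
group = resolve_unix_group('abc', mappings)   # -> 'users'
```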
 
Is there an alternative aclmode for non-Illumos?

I'm trying to understand the different modes and it sounds like aclmode=discard might work?

EDIT: To make things even simpler, can I specify on ZFS, that any NFS requests from 192.168.0.20 should map to user 'jmk396' and group 'users' and then ignore all chmod requests?
 

aclmode=discard should work to avoid permission modifications, but this will affect SMB as well.
And NFS clients (from different OSs) behave differently: some use nobody, some the UID of the current user.
This is a client behaviour, not a ZFS behaviour.

In the end, you must accept that NFS3 and SMB are incompatible beyond a fully open setting.
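The difference between the aclmode settings discussed here can be sketched as a toy model. This is a simplification, not the exact ZFS semantics (real ZFS has further modes like passthrough and groupmask, with subtler rules):

```python
def chmod_under_aclmode(aclmode, acl, mode_bits):
    """Simplified model of Solarish ZFS aclmode on a chmod():
      'restricted' -> the chmod is rejected, the ACL survives
      'discard'    -> the ACL is thrown away, replaced by a trivial
                      ACL derived from the mode bits"""
    if aclmode == 'restricted':
        raise PermissionError('chmod denied; ACL preserved')
    if aclmode == 'discard':
        return {'trivial': oct(mode_bits)}
    raise ValueError('mode not modelled in this sketch')

acl = {'everyone@': 'modify'}
```

So with restricted, a stray chmod cannot destroy a carefully set everyone@=modify ACL, while with discard it silently replaces it.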
 
Hmmm, aclmode=discard still allows a chmod to modify permissions.

Does ZFS on Linux have an equivalent to aclmode=restricted?

It looks like ZFS on Linux uses "acltype" instead of "aclmode" and has the options of off, disabled, noacl, and posixacl.
 
On Linux, you do not have the option to use Solaris CIFS; you are restricted to SAMBA
- without the special Windows and NTFS-like features, but with better compatibility with the Unix/Linux UID/GID
authentication and NFS.
 
Both have the same roots, but since Oracle closed ZFS development there are two development lines,
OpenZFS and closed-source ZFS. Both are now incompatible and pools are not interchangeable.

Both have different highlights. Currently Solaris 11.3 is the most feature-rich ZFS OS, with unique
features like ZFS encryption, SMB 2.1 on Solaris CIFS and ultrafast sequential resilvering.

Basically it's a question of whether you want to follow closed-source Oracle or free OpenZFS.
Or, outside noncommercial demo or development use, it's a question of money. You can use
OmniOS for free with a commercial support option, while Solaris is $1000 per year (with support).
 
Quick question for you _Gea...

I currently have an AIO ZFS San (Napp-it on OmniOS r151014), and I also have an Oracle license, so I can freely use Oracle products. In your opinion, is it worth it to switch to the official Solaris 11.3 to take advantage the features you mentioned above?

Thanks!
 

Not easy to answer.

In my main job, I am responsible for the IT of a small university with more Macs than PCs. We own about a dozen ZFS storage systems, used as filers, as backup systems and many as all-in-ones.

As Apple switched to SMB as the default protocol, with lousy performance on SMB1, we are waiting for SMB2 support in OmniOS. Currently we stay with AFP or NFS for performance-sensitive tasks. SAMBA is not an option for me.

I have no problem with Oracle as a company, unlike many. But I do not like Oracle's focus on database or cloud use while storage is their strength. And I simply do not want to spend the money to switch completely to Solaris. A mixed configuration is also not possible, as Oracle does not allow a zfs send to OpenZFS.

Beside that, Solaris 11.3 is the current ZFS OS champion.
If you have a license with support, this is a hard-to-beat option right now.

This may change (hopefully soon) with Illumos + SMB2 + all the other SMB improvements that were announced.
 
So had a couple of minutes to test tonight.

I have an SM LSI 2308 / LSI 9207 which works perfectly well with OmniOS (running the stable P20.04 BIOS). I tried to pass it through to Solaris and it does not work: the CD will just hang at probing devices.

I've read here that you can copy the drivers from OmniOS but am not sure how safe/stable this is to do.

If one wants to go to Solaris on ESXi what do you think is the best path?

1. Use LSI 9207 and copy drivers from OmniOS
2. Replace 9207 with older 9211.
3. Replace 9207 with newer 9300 card (I've read that the drivers for the 9300 series card aren't as stable and don't perform as well at least on FreeBSD variants).

Thoughts?
 
Will add: it also hangs if using the VMXNET3 network type, probably because I did not bother installing VMware Tools.
 

Don't think it would make a difference. Apparently P20.04 (4th revision) is all good. P20 rev1 got a bad rap.

I've confirmed that once I copy the files from OmniOS, I can see the drives on the controller.

/kernel/drv/mpt.conf
/kernel/drv/mpt_sas.conf
/kernel/drv/amd64/mpt_sas
/kernel/drv/amd64/mpt
/kernel/kmdb/amd64/mpt
/kernel/kmdb/amd64/mpt_sas

In the link above they copy some additional files; those are 32-bit. I don't believe they are necessary, as Solaris 11 is 64-bit.
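The copy step above can be mechanized roughly like this. This is an unsupported hack and only a sketch; the file list comes from the post above, while the function and root paths are made up:

```python
import pathlib
import shutil

# 64-bit mpt/mpt_sas driver files to take from an OmniOS tree (list from above)
DRIVER_FILES = [
    'kernel/drv/mpt.conf',
    'kernel/drv/mpt_sas.conf',
    'kernel/drv/amd64/mpt_sas',
    'kernel/drv/amd64/mpt',
    'kernel/kmdb/amd64/mpt',
    'kernel/kmdb/amd64/mpt_sas',
]

def copy_drivers(omnios_root, solaris_root):
    """Copy the driver files from an OmniOS tree into a Solaris tree,
    preserving the relative paths. Returns the list of files copied."""
    copied = []
    for rel in DRIVER_FILES:
        src = pathlib.Path(omnios_root) / rel
        if src.exists():
            dst = pathlib.Path(solaris_root) / rel
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            copied.append(rel)
    return copied
```

Copying kernel modules between distributions like this may break on any Solaris update, so keep the original files backed up.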
 
Thanks for the info _Gea. I've already upgraded my pools, so I guess there's no clean/easy way to migrate to Oracle ZFS even if I wanted to, lol.
 
Given Solaris' lack of support for ESXi with LSI cards, and the inability to zfs send from Oracle ZFS to OpenZFS, this seems like a nightmare waiting to happen. If their drivers worked I'd consider it. What happens if you ever update Solaris down the road and the drivers no longer work?

I read another report of a user on this forum having issues with a LSI 3008 on ESXi with Solaris as well. Maybe the powers that be at Oracle don't want you using this setup.

I'll stick with OmniOS for now. Really nothing I can complain about.
 
He did it, long awaited (Gordon Ross from Nexenta, congratulations, among others):
SMB 2.1 - my Mac users will be happy

https://www.illumos.org/issues/6399
https://www.illumos.org/issues/6398
https://www.illumos.org/issues/6352
https://www.illumos.org/issues/6400
https://www.illumos.org/issues/1087

soon
https://www.illumos.org/issues/3525

Pretty good news for Illumos (OI, OmniOS and SmartOS, among others).
But still no RDMA, no SMB Multichannel as in SMB 3.1 :/
(which works with Ubuntu 14.04 / Debian + ZoL)

You talk about Macs...
-> Mavericks (10.9): it is SMB2
-> Yosemite (10.10) & El Capitan (10.11): it is SMB3
 
This is not related to the question of whether you use Linux or a Unix like BSD or Solaris;
this is a feature of SAMBA 4 that is available on any *nix system.

If you need some of the new features for a client, you can currently try SAMBA 4, or you'd better use
Windows 2012 R2 as your SMB server. While Nexenta may offer SMB3 first with CIFS,
the free Solarish CIFS is now at the 2.1 level, as is Oracle Solaris 11.3.

The Solarish CIFS server has many advantages of its own: Windows-like NFSv4 ACLs, real Windows SIDs
for domain users, zero config, trouble-free Previous Versions support, multithreaded, kernel/ZFS based etc.
Even with SMB1 it was often faster than SAMBA 4 with SMB3 - with Windows clients, but not with Mac clients.
If you connect a newer Mac to an SMB1 server, it is half as fast as a Windows machine.
With SMB 2.1 this is no longer an issue, so this is a huge step, together with long path support and other features.
 
Hmm, it seems that r151016 is now the new stable release for OmniOS, anybody tested this yet?
 
I have not tried yet.
For an average user, this update is a minor update.
Maybe for WAN use of replications the resumable zfs send can be important.

Let's wait for the big steps (SMB2, improved AD support, TRIM, persistent L2ARC, ...).
 