OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

You are doing advanced share or permission settings without knowing what they mean.
Yes, that's exactly the point!

I only want to have full access to everything using the root account and share some folders for guests (read only, no password, no create, only read/execute), so I guess I will need only folders b & c.

I destroyed the dataset and recreated a new one with default settings (the same you advised), but I still can't access it from the Windows machine (access denied error message when I enter my root password).
Is there something to do on the Windows side?

It seems Windows doesn't know my root account:
[screenshot]

I created a "Mastaba" user with a full permission set and I can access the share with it, but not with the root account.
 
First, if you need guest access without login, you can do this on a filesystem level, ex
- create a filesystem pool1/data and share with guest=ok

Then create a second filesystem like pool1/personal and share without guest and set the needed permissions on the folder.
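
On the CLI this maps to something like the following — a sketch, assuming a pool named pool1 (napp-it's share dialog sets the same properties via the guest=ok toggle):

Code:
zfs create pool1/data
# share via SMB and allow guest access (no login)
zfs set sharesmb=name=data,guestok=true pool1/data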

SMB access as root should not be a problem as long as root has an SMB password (see menu User).
Only possible problem: have you set any id-mappings for root? (They are only needed for Active Directory, to map AD users to local Solaris users.)
 
Yeah, I also thought of creating multiple datasets, but I wanted more flexibility (occasionally share/unshare specific folders without copying from one dataset to another).

How do you set id-mappings?
 
1. You must decide if you want to share with or without login
2. With the idmap command (or the napp-it menu Users) — see the sketch below

Just to be sure:
Redo a passwd root as well to re-set an SMB password for root
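
On the CLI that would be something like this (a sketch; check the output of idmap list before removing anything):

Code:
# show id-mapping rules; remove them all if root is mapped
idmap list
idmap remove -a
# re-set the password for root, which on a napp-it appliance
# also re-sets the SMB password
passwd root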
 
1. No login, only some selected folders with read only.
2. What do you input in the fields?
 
1. Enable guest access and set the folder ACL either to everyone=readonly or to everyone=modify (see the sketch below)
2. Nothing, or delete the mappings if there are any
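
From the CLI, point 1 looks roughly like this — a sketch, assuming the shared folder is /pool1/data (read_set is one of the Solaris ACL permission sets):

Code:
# everyone = read only, inherited to new files and folders
/usr/bin/chmod -R A=everyone@:read_set:file_inherit/dir_inherit:allow /pool1/data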
 
What is the appropriate way to upgrade napp-it from one version to the next?

What I have now is the napp-it-13b (v0.9e1_preview Jan 12 2014) AIO VM appliance, with 2 iSCSI targets defined and 2 storage pools:

Code:
root@Napp-it-13b:/var/web-gui/_log# zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
Storage1  3.62T   100G  3.53T         -     2%  1.00x  ONLINE  -
Storage2  3.62T   576K  3.62T         -     0%  1.00x  ONLINE  -
rpool     29.8G  5.03G  24.7G         -    16%  1.00x  ONLINE  -
root@Napp-it-13b:/var/web-gui/_log# stmfadm list-lu
LU Name: 600144F0C374B87300005351F9F80001
LU Name: 600144F0C374B873000053BFB4590001
root@Napp-it-13b:/var/web-gui/_log#

and I want to upgrade to 14b (http://www.napp-it.org/downloads/index_en.html)

Can someone comment on my steps?
  1. login to the napp-it web UI and save the COMSTAR config
  2. copy the saved config
    Code:
    cp /var/web-gui/_log/COMSTAR-*.bkp /Storage1
  3. export my pools
    Code:
    zpool export Storage1
    zpool export Storage2
  4. shutdown napp-it
  5. change ESXi VM config to the new napp-it VM image (passthru my storage controller)
  6. startup napp-it
  7. import my pools
    Code:
    zpool import Storage1
    zpool import Storage2
  8. restore the COMSTAR config
    Code:
    svccfg import /Storage1/COMSTAR-*.bkp
  9. profit and enjoy new napp-it
 
If you just want to upgrade napp-it and not OmniOS, you can simply go to napp-it -> Upgrade and do an upgrade...

I'm 99% sure that works, but better wait for Gea....
 
Which options do I have in terms of migrating a napp-it install from an OmniOS VM to a new Solaris 11 VM?

I am going to attach the HBAs to the Solaris 11 VM and import the pools. I am also going to recreate the Unix groups manually (used with the ACLs). Setting up snapshots and scrubs again should be painless.

Can I just backup, move and restore the Comstar configuration, and then that's it?
 
Your main problem may be the ZFS version.
OmniOS uses v5000 by default (with feature flags), which cannot be imported into Solaris.
In this case you need to backup/recreate the pool.

If you are on pool v28, you can just import the pool and restore Comstar (or recreate the targets manually). For permissions, you should recreate users with the same UIDs. Regarding SMB groups: they are different from Unix groups (re-create them in the napp-it menu Users).
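
To check which case applies (the pool name is just an example): zpool get version shows "-" on a feature-flag (v5000) pool and the plain number on a legacy-version pool.

Code:
zpool get version pool1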

Attention: In Solaris 11.1, Comstar is broken. You need a service agreement with Oracle for the fix (or use Solaris 11.2)

Restore the content of /var/web-gui/_log if you want to keep jobs, keys and groups intact.
 
What's the best way to share between VMs on ESXi 5.1? The box is a Xeon 1230 with 16GB ECC RAM on a Supermicro board.

Currently running 24 HDDs (2TB each) on ZFS, set up as two 12-drive RAIDZ2 vdevs.

Currently running 3 VMs: OpenIndiana, Windows 7, and a battery backup VM.

I want to install and test out Linux Mint (flawless media server), but I am trying to see how I can import/share my ZFS storage with it without being limited to gigabit network speeds between VMs.

Currently the Win7 and OpenIndiana VMs only transfer at ~100MB/s (gigabit). Wondering how I go about getting them to transfer at full HDD speed. I'm getting benchmark speeds of 900MB/s read and ~650MB/s write internally on my OI VM. Is there a different way to share between the VMs so they get real access instead of network share limitations?

Thanks for any advice
 
If you use vmxnet3 vnics between your VMs, you can expect several Gbit/s between them.
You can add some extra performance if you activate jumbo frames.
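
On OmniOS the MTU is raised with dladm, and the link usually has to be unplumbed first, otherwise dladm reports "link busy". A sketch, assuming a vnic named vmxnet3s0 and an example address (note: some drivers, vmxnet3s among them, may instead want the MTU set in their driver .conf file):

Code:
ipadm delete-if vmxnet3s0
dladm set-linkprop -p mtu=9000 vmxnet3s0
ipadm create-if vmxnet3s0
ipadm create-addr -T static -a 192.168.1.10/24 vmxnet3s0/v4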
 
How do you set jumbo frames on OmniOS?

I only get ~200MB/s between an M500 960GB SSD and a 10 x 5K4000 RAIDZ2, using two STGN-I2S NICs + 3m SFP+ cables plugged into a GS728TXS switch.
The NIC on the Windows machine has jumbo packets set to 9014, and the switch has jumbo frames set to 9216 max.
I tried the MTU commands without success; I get the same "link busy" message.
 
Another question:
I swapped the X540-T2 of my backup server for an STGN-I2S, but now I can't access it.
I installed an STGN-I2S on the other ZFS server without problems: I re-created the interface like here and all worked like before, but not with the second server.
I also tried one of the 1Gb ports and it worked, but not with the 10Gb NIC.

[screenshot]
 
If you are using the new VM, importing the pools and the Comstar config is ok. You may also save/copy keys, jobs, replications and groups (the content of /var/web-gui/_log/).

Other option:
update napp-it: menu about-update
update OmniOS: http://omnios.omniti.com/wiki.php/Upgrade_r151008_r151010

This is to update that I successfully upgraded my napp-it to the latest version:
  • saved COMSTAR config
  • exported pools
  • imported the new napp-it VM image as a new VM
  • fixed VM settings (passthrough controller, RAM, etc)
  • started the new VM
  • imported pools
  • restored COMSTAR config

With my simple setup, this works wonders.
As I put more setup/config into the server, I have a feeling this won't work as smoothly.

EDIT: for reference, my original thread: http://hardforum.com/showpost.php?p=1040956624&postcount=6167
 
I have been trying to install OmniOS_Text_r151010j.iso in a VM on ESXi 5.1, and when I press F2 (Continue) it brings me back to the install menu. Any ideas? I have searched and cannot find this issue.
 
Fixed...figured out that it required a floppy interface, which I typically delete.
 
What about Esc-2? Solaris-based operating systems tend to have issues with emulators and function keys.
 
I've been using napp-it for a few weeks now (and love it) on my all-in-one.

I've noticed it barely uses any of the RAM I give it. It's currently on 12GB, running 3 VMs on the SSD pool atm.

The system has:

6 x 4TB drives in RAIDZ-2 = 21.8TB raw, 14.3TB usable, 11TB available
2 x 500GB SSDs in Mirror
30GB mirrored boot disk (rpool)

I've tried increasing the workload by doing scrubs while copying files, etc. I've even tried running it on only 3GB of RAM and it still performs exceptionally (116MB/s transfers), just as it does with 12GB.

Surely something is off; all I ever read is that ZFS eats 90% of the RAM whenever it can, sparing 1GB.

Currently on 0.9f1 running on OmniOS 151010 (pre-J)

zpool status
[screenshot]


napp-it
[screenshot]


esxi (brown is 12GB, red is what it's using)
[screenshot]
 
Keep in mind: if the VMs' disk working set fits in N GB of ARC, ZFS won't use much, if any, more than that...
 
Post your arc report
System>Statistics>Arc

Should look something like this:
[screenshot]

I'm fairly sure the red line isn't "used" as in "in use by the operating system" but what ESXi considers active.

In use does not equal active.

I had to roll back to 0.9e1 from 0.9f1 to get these statistics (or I just couldn't find them).

[screenshot]


It looks like ZFS is doing its job after all. I may scale down its memory in the future if I urgently need it, but for now I think the 12GB will do for 25TB raw.
 
0.9f1 uses a new arcstat script in menu System - Basic Statistic - ARC
that shows hit and miss values for the ARC and L2ARC
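
(The same data is available from a shell; arcstat ships with current illumos:)

Code:
# print ARC hits, misses and size every 5 seconds
arcstat 5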
 
I added a static IP to one of my Ethernet interfaces on Solaris 11.1. I can't connect to that IP via CIFS (access denied), but I can connect to the DHCP address on the same interface (it has 2 IPs right now). How can I allow CIFS access on the static IP?

[screenshot]
 
Just trying to set up an all-in-one box with the latest napp-it.
I would like to learn to do that without using the ACL extension. I have figured out CIFS, but I am struggling with the NFS share to be used from ESXi.
I just want to make the NFS share available to ESXi, and I guess I have to do this using the CLI...
Could anyone please advise me how to set this up without using the ACL extension?
Thanks in advance.
 
1.) via the CLI and /usr/bin/chmod (sketch below)
http://docs.oracle.com/cd/E18752_01/html/819-5461/gbabw.html

2.) SMB-share the filesystem, SMB-connect as root,
and set the ACL everyone=modify recursively from Windows
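
For option 1, a sketch — assuming a filesystem pool1/NFS (modify_set is one of the Solaris ACL permission sets):

Code:
# everyone = modify, inherited to new files and folders
/usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /pool1/NFS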
 
Can you just get a small, cheap SSD to use?

The problem, I discovered, was that the OmniOS installer does not like it when you have a lot of RAM installed in the system(?). I removed all sticks but one and it installed perfectly fine, then I reinstalled the RAM afterwards.

:confused::confused::confused::confused::confused:

Now I have a new problem... I was able to get LUs set up with views and all that jazz, then I deleted a few and it appears that the Omni kernel panicked. I restarted the box and since then I have not been able to get ESXi to see any new LUs I've added. I followed the instructions on creating an LU, setting up a host group, setting up a target and adding a view for the LU, but no dice...

Edit: Just created some new LUs and added them to the view, still not showing up in ESXi.
 
Thx _Gea!
I am still having some problems understanding the syntax.
Say I want to use this for ESXi:
ZFS pool = pool1 and ZFS folder = NFS
What would be the correct CLI syntax?
Sorry for taking your time with this....
Cheers,
 
If you want to create a dataset to share out via NFS for ESXi, do 'zfs create pool1/NFS' and then 'zfs set sharenfs=on pool1/NFS' and you are all set. Well, I think you also want to say 'chown nobody:nobody /pool1/NFS' afterward.
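
As a copy-paste block (assuming the pool really is named pool1):

Code:
zfs create pool1/NFS
zfs set sharenfs=on pool1/NFS
# NFS maps root to nobody by default, hence the ownership change
chown nobody:nobody /pool1/NFS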
 
Hello, I just purchased the complete license for non-commercial home use recently and have been having a problem with replication between my two appliances.

During the initial replication sync from my backup appliance, I keep getting this error about 12% into the replication job:

info 1328: job-repli 1011, source-dest snap-pair 0 not found, check repli_snaps on source and target. .
Check filesystems and group membership.

info 1328: initial remote replication finished (time: 1800 s, size: MB, performance: 0.0 MB/s) error

It appears that the snapshot on the source SAN gets deleted after this error as well. Not sure if that has something to do with it.

Any help would be greatly appreciated!
 
The initial transfer works like this:

- create a source snap
- transfer the source snap via zfs send -> netcat -> zfs receive (to a newly created filesystem)
- when zfs receive comes to an end, the script checks if the target filesystem is valid and was created together with an initial target snap

- if the target filesystem was not created, the transfer was not initiated: check network or target space

- if the target filesystem was created but you had an error during the transfer, the target filesystem is not valid (no valid target snap).
You must then destroy or rename the target filesystem prior to a new initial run (see the sketch below). The old initial source snap will be deleted and recreated.

- if you had a valid initial run, an interrupted incremental transfer does not need a new initial run, as you can retry at any time.

- you should use 0.9f1 (current is from July 17th)
- check actions in menu Jobs - Monitor - clear Monitor
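
A sketch of those checks from a shell (pool/filesystem names are only examples):

Code:
# on source and target: list the replication snaps the job compares
zfs list -t snapshot -o name,creation -r pool1/data
# if the target filesystem has no valid snap, destroy (or rename) it
# before starting a new initial run
zfs destroy -r pool2/data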
 