OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Disclaimer: newbie.

I built an all-in-one napp-it server a little over a year ago, and it has been running like a top.

In the interest of keeping things up to date, I'd like to update ESXi, currently running 5.5.

What's the latest supported release of ESXi and is there any reason not to update to it?

Thank you.
 
I have switched completely to ESXi 6.5 U1.
The main difference to 5.5 is the new HTML web management console. The old Windows vSphere management tool no longer works.

The new web console still has some small bugs, e.g. when you change settings it can report wrong RAM settings; an F5 (reload) fixes this.
 
Hello. After upgrading to 18.01 Pro (Feb.22.2018) on two servers, both now have a non-working realtime zilstat monitor; it just shows "n.a.".

I have both Mon and Acc enabled. The other realtime monitors are OK.

Anyone else having a napp-it zilstat problem?
 
Yes, seems to be a bug.
In the meantime you can use System > Basic Statistics.
 
Yes, seems to be a bug.
In the meantime you can use System > Basic Statistics.

Thank you, but alas, that does not work in OmniOS r151024n. The probe fbt::zil_lwb_write_start:entry was renamed to fbt::zil_lwb_write_issue:entry in illumos 8585, "improve batching done in zil_commit()," 1271e4b10df.
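For reference, you can check which of the two probe names a given kernel provides (run as root):

Code:
# new probe name (illumos 8585 and later)
dtrace -ln 'fbt::zil_lwb_write_issue:entry'
# old probe name (earlier kernels)
dtrace -ln 'fbt::zil_lwb_write_start:entry'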
 
Thank you for this info.
I have updated 17.03dev, where zilstat is working again on OmniOS 151024 (not on Solaris 11.4).
 
Hi, I am having a problem where macOS High Sierra cannot connect to an NFS share on napp-it.

zfs get sharenfs lists:
filepool01/backup/backup-tm sharenfs rw=macbook,root=macbook

But when connecting I get an error (translated from German): "An error occurred while connecting to the server 'fdb02.local'. You do not have the required access rights to access this server."

Another NFS share is open for everyone:
filepool02/share/tempo sharenfs on local
On this share I cannot access the contents; Finder says (translated from German): "The folder 'tempo' can't be opened because you don't have the required access rights to view its contents."
So although the mount seems to succeed, I am only looking at an empty shared folder.

But following an old note I found, I had added permissions like this:
Code:
/usr/bin/chmod -R A+user:nobody:full_set:file_inherit/dir_inherit:allow /filepool01/backup/backup-tm

It seems I am missing something there! The 'tempo' folder is also shared via SMB, the other folder is not.

What can I do to connect from macOS to the NFS share?
 
The problem with NFS (v3) is that there is no real authentication or authorisation, only some good-will restrictions on the share based on IP
(if you set share restrictions, use the IP, not the hostname). For a first test, just set NFS=on.

When accessing NFS, some clients do this as nobody, others with their uid. macOS uses the uid of the current user.
If permissions are granted only to nobody, your uid has no access.

What I would do (commands sketched below):
reset all permissions recursively to everyone@=modify (free napp-it ACL extension option, or via Windows as root)
enable the share with NFS=on
connect from Finder via Go > Connect to Server and nfs://serverip/pool/filesystem
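On the console, using your filesystem filepool01/backup/backup-tm as an example:

Code:
# reset all ACLs recursively to everyone@ = modify (same result as the napp-it ACL extension option)
/usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /filepool01/backup/backup-tm
# enable the NFS share without restrictions for a first test
zfs set sharenfs=on filepool01/backup/backup-tm
# later you can restrict by IP (network) instead of hostname, e.g.
# zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 filepool01/backup/backup-tm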
 
Aha, many thanks! Now I see it's a bit less compatible on the macOS side (I can connect to those shares with muCommander, which is a Java app).

I was used to setting sharenfs=rw=hostnames because that works fine with ZFS on Linux. But since IP addresses are quite dynamic here, I'd better use SMB, I guess.
 
Hi,

I currently run an AIO host with ESXi, OmniOS and napp-it 17.06 Pro.
The storage is two RAID-Z1 arrays, 4 x 2TB and 4 x 3TB; one holds one pool, the other two pools.
ESXi runs on a separate SSD.

There is one Win10 OS running on iSCSI from one of the pools.

Is there a way to easily migrate the pools as-is to a new array without losing the NFS and iSCSI paths in ESXi?
I am considering swapping out all the disks for 3 or 4 8TB drives.

Thanks,
 
For NFS you need the same path pool/filesystem.
So replicate your filesystem, e.g. tank/data, to a new pool, e.g. tank2 (results in tank2/data).
Then remove the old pool, export tank2 and import it as tank.

For ESXi, you may need to reboot ESXi to connect the NFS datastore on the new pool with the same path again.

For iSCSI you need to keep the LU GUID.
If you have enabled iSCSI in menu Filesystems, you only need to replicate the iSCSI zvol,
where the GUID is part of the zvol name, and you can then simply enable it on the new pool.

If you have configured iSCSI manually in menu Comstar, you must either write down the GUID and set it on a LU import, or save and restore the Comstar settings.
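A rough command-line sketch of these steps, using the example names tank, tank/data and tank2 from above (the zvol path and <old GUID> are placeholders):

Code:
# replicate the filesystem to the new pool
zfs snapshot -r tank/data@move
zfs send -R tank/data@move | zfs receive tank2/data

# remove the old pool, then import the new pool under the old name
zpool export tank
zpool export tank2
zpool import tank2 tank

# manually configured Comstar LU: note the GUID before the move ...
stmfadm list-lu -v
# ... and recreate the LU on the new zvol with the same GUID afterwards
# stmfadm create-lu -p guid=<old GUID> /dev/zvol/rdsk/tank/vol1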
 
So I learned about an interesting error: if you manually create a COMSTAR LU with the blocksize set to 4096, using a ZFS volume (zvol) block device as the storage target, the backing storage takes about 1.5-2x as much space as you'd expect, which will run the pool to 0 bytes free and break all kinds of fun stuff.
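For anyone who wants to check this on their own pool, a minimal sketch of the setup described above (pool/zvol names and the size are examples):

Code:
# create a zvol as the backing block device
zfs create -V 100G -b 4K tank/lu4k
# manually create the COMSTAR LU with a 4096-byte blocksize
stmfadm create-lu -p blk=4096 /dev/zvol/rdsk/tank/lu4k
# afterwards, compare the logical size with what the pool actually allocates
zfs get volsize,volblocksize,used,referenced tank/lu4k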
 
Hello, I've installed the latest OmniOS CE stable.

I launch the napp-it install, and nothing much happens:

Code:
aesma@omniosce:~$ wget -O - www.napp-it.org/nappit | perl
--2018-03-08 08:08:15--  http://www.napp-it.org/nappit
Resolving www.napp-it.org... 188.93.13.227, 2a00:1158:1000:300::368
Connecting to www.napp-it.org|188.93.13.227|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 44359 (43K)
Saving to: 'STDOUT'
-                   100%[===================>]  43.32K  --.-KB/s    in 0.03s
2018-03-08 08:08:15 (1.21 MB/s) - written to stdout [44359/44359]
aesma@omniosce:~$

What is going wrong?
 
For NFS you need the same path pool/filesystem.
So replicate your filesystem, e.g. tank/data, to a new pool, e.g. tank2 (results in tank2/data).
Then remove the old pool, export tank2 and import it as tank.

For ESXi, you may need to reboot ESXi to connect the NFS datastore on the new pool with the same path again.

For iSCSI you need to keep the LU GUID.
If you have enabled iSCSI in menu Filesystems, you only need to replicate the iSCSI zvol,
where the GUID is part of the zvol name, and you can then simply enable it on the new pool.

If you have configured iSCSI manually in menu Comstar, you must either write down the GUID and set it on a LU import, or save and restore the Comstar settings.

Thanks Gea.

I'm out of free ports on the HBA, so I thought of doing the following:

1. Export one of the ZFS pools/arrays.
2. Disconnect those disks.
3. Connect new disks in their place and replicate the remaining pool to the new disks.
4. Disconnect the replicated array and re-connect the disks of the array I exported earlier.
5. Import and replicate that array.
6. Export both pools and re-import them with the old names.
7. Configure the iSCSI via COMSTAR.

I suppose this should be workable?

Thanks,
 
Your naming is a little confusing, but as I assume you have two ZFS pools, you can export and remove the disks of one pool
and insert the new disks to create a new pool. Then replicate one old pool to the new pool (you cannot replicate to disks or arrays=vdevs, only to a pool/filesystem).

Another option is to keep the old pool(s) as they are and replace disk by disk with Disks > Replace.
When autoexpand is set to on, the pool gets the new, bigger capacity once the last disk is replaced.
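On the console the disk-by-disk option looks roughly like this (pool name and disk ids are examples; menu Disks > Replace does the same):

Code:
# let the pool grow once all disks of the vdev are replaced
zpool set autoexpand=on tank
# replace one disk with a bigger one; wait for the resilver to finish before the next disk
zpool replace tank c1t2d0 c1t6d0
zpool status tank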
 
OK, I figured it out. I needed to do this step first (use DNS name resolution by copying over the DNS template):

Code:
cp /etc/nsswitch.dns /etc/nsswitch.conf
 
gea

I'm trying to figure out where the enclosure/chassis devices are located in OmniOS (in Linux they're in /dev/sg*, /dev/ses*, or /dev/bsg/*).

The reason is that I have a couple of SA120s attached and am planning on controlling the fans via sg_ses.
 
Err, actually, never mind, found it under /dev/es/* :)

Also, in case anyone else is wondering, sg_ses does work for setting fan speeds on enclosures in OmniOS as well (at least on SA120s).
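In case it helps someone else, finding and querying the enclosures from the shell looks roughly like this (ses0 is just an example, your device names may differ):

Code:
# SES enclosure devices on OmniOS
ls /dev/es/
# dump the enclosure status page (includes the cooling/fan elements)
sg_ses --page=2 /dev/es/ses0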
 
On a side note, I just realized that napp-it fully supports Oracle Solaris. So if you had an Oracle license, would you choose Oracle Solaris or a Solarish/Open-ZFS platform (OmniOS, OI, etc.)?
 
Oracle Solaris is still somewhat superior to Illumos and Open-ZFS.
In all my tests it is the fastest, with many unique features like ZFS encryption, faster sequential resilvering, improved dedup, NFS 4.1, SMB3 and security features like auditing in Solaris 11.4. Solaris ZFS v43 is not compatible with Open-ZFS v5000.

Oracle guarantees support at least until 2034 but has reduced its engagement in Solaris. By now I would expect more developers at Open-ZFS than at Oracle for Solaris. Features like encryption are nearly ready in Open-ZFS; other unique features like vdev removal or expandability for RAID-Z are on the way.

Open-ZFS is free and open source with many options on Unix-like systems such as BSD and Illumos (e.g. OmniOS/OI/SmartOS), OSX or Linux, and maybe Windows in the future. So there are arguments for both.
 
Your naming is a little confusing, but as I assume you have two ZFS pools, you can export and remove the disks of one pool
and insert the new disks to create a new pool. Then replicate one old pool to the new pool (you cannot replicate to disks or arrays=vdevs, only to a pool/filesystem).

Another option is to keep the old pool(s) as they are and replace disk by disk with Disks > Replace.
When autoexpand is set to on, the pool gets the new, bigger capacity once the last disk is replaced.

Thanks. I ordered 4 x 8TB drives then, to replace the 4 x 2TB and 4 x 3TB RAID-Z1 arrays.
Important data was stored on both arrays.

Is the current recommendation to go with 2 x mirror vdevs of 8TB instead of RAID-Z2?

Also, I currently have 2 VMs: one for OmniOS (napp-it), and one for a Win10 OS which runs all the software I need.
When creating the server ~4 years ago with ESXi 5.5 I got horrible performance for the Win10 OS when I was using a VMDK exposed through NFS. Disabling sync writes sped things up, but that was risky since the metadata was exposed to power failure as well.
So I passed through an iSCSI device which is presented to the VM as an RDM. The VM is backed up using Acronis.

Is this still good practice, or should I maybe move everything back to NFS and use a small NVMe SSD or a couple of 32GB Optanes?
Is there some improvement with NFS writes on ESXi 6.5 (which I'm currently running)?


Thanks,
 
Gea,

In your tests, were you able to install vmware-tools/open-vm-tools on Solaris 11.4? I've been struggling to get vmxnet3 working.
 
That's a pity, I would've assumed that they prioritized this, as ESXi is one of the biggest testing/POC platforms. Oh well, back to OmniOS :)
 
As Oracle engineers are around in the beta forum, it may be a good idea to add a comment about this there.
 
Has anyone else migrated off of OmniOS? It's becoming difficult to justify staying on it when there are other compelling options e.g. Linux or FreeBSD. I am leaning more towards BSD.

Gea, how well does Napp-IT integrate with FreeBSD or Linux?
 
Currently downloading gcc on OmniOS via pkg install at 70k/sec (I'm on a gigabit connection). Feels like I'm downloading from some European basement over dial-up in 1999.
 
Well, I was planning on moving to Solaris 11.4, but they still don't have any viable drivers for VMXNET3. Until then, I'm sticking with OmniOS.
 
Has anyone else migrated off of OmniOS? It's becoming difficult to justify staying on it when there are other compelling options e.g. Linux or FreeBSD. I am leaning more towards BSD.

Gea, how well does Napp-IT integrate with FreeBSD or Linux?

There is no napp-it on FreeBSD (there is, for example, FreeNAS).
Napp-it on Linux has about 30% of the functionality it has on OmniOS, OI or Solaris.

OmniOS CE is hosted in Switzerland at ETH Zürich.
No problem downloading from Germany at wirespeed (1G).

The problem seems to be the interconnectivity between Europe and the US.
 
Hey Gea, just reporting on a bug I've mentioned a few times. Pushover has never sent alerts for me for degraded pools on Solaris 11.3 and napp-it 16.x/17.x. Today I had a degraded pool and did not receive an alert. While investigating, I ran:

Code:
iostat -e c3t4d0
As soon as I did that, napp-it sent me an alert through Pushover.

Hope this helps you identify the issue.

Also, upgrading to 18.x now, so maybe it's already fixed. Thanks Gea!
 
Hi all, long-time napp-it user here. I'm pretty sure I commissioned it around 5-6 years ago now and have only upgraded hard drives over the years. No major issues in that time period, which speaks volumes about the stability.

Given it's been a while and I have been very lazy with updates (updated napp-it a couple of times early on), the system is now starting to show its age, specifically with regard to SMB1 shares via OpenIndiana and share compatibility with Windows 10, which I have recently upgraded to.

I am thinking the best approach is to start afresh and import my existing pool, but I am terrified that I am going to mess this up. If someone could kindly provide the correct sequence to achieve this safely, that would be greatly appreciated.

Some details about the current system:
- ESXi 5.0.0 - I tried updating to 5.5 in order to support Windows 8 VMs but had an issue with vmxnet3 adapter throughput in OpenIndiana as soon as I updated. Not sure if this is hardware compatibility or not, but rolling back solved the issue.

- napp-it v0.9d2 - OpenIndiana 151a

At a guess I would imagine it's going to be something like this:
- export pool via napp-it
- upgrade ESXi? I want to stick with 5.0 but move to a later build (Update 3), just in case there is a hardware compatibility issue between my Intel quad-port network adapter (82571EB) and ESXi 5.5, which seemed to be the case when I previously tried updating.
- install OmniOS and the latest napp-it
- import pool

Does this sound about right? Is there anything specific I should watch for? Thanks

edit: does the pool version matter? I think I'm currently on v28
 
Hey Gea, just reporting on a bug I've mentioned a few times. Pushover has never sent alerts for me for degraded pools on Solaris 11.3 and napp-it 16.x/17.x. Today I had a degraded pool and did not receive an alert. While investigating, I ran:

Code:
iostat -e c3t4d0
As soon as I did that, napp-it sent me an alert through Pushover.

Hope this helps you identify the issue.

Also, upgrading to 18.x now, so maybe it's already fixed. Thanks Gea!

When you run the alert job, it executes a zpool status and checks for OFFLINE|DEGRADED|UNAVAIL|FAULTED|REMOVED.

On a problem it sends an alert and blocks this error for 24h. If a different error is detected, a new alert is triggered. Iostat errors can only trigger an alert if a ZFS pool problem is the outcome (only a report job checks other problems like smart or iostat errors).

See
/var/web-gui/data/napp-it/zfsos/_lib/scripts/job-push.pl line 169
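In essence the pool check is equivalent to:

Code:
zpool status | egrep 'OFFLINE|DEGRADED|UNAVAIL|FAULTED|REMOVED'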
 
Hi all, long-time napp-it user here. I'm pretty sure I commissioned it around 5-6 years ago now and have only upgraded hard drives over the years. No major issues in that time period, which speaks volumes about the stability.

Given it's been a while and I have been very lazy with updates (updated napp-it a couple of times early on), the system is now starting to show its age, specifically with regard to SMB1 shares via OpenIndiana and share compatibility with Windows 10, which I have recently upgraded to.

I am thinking the best approach is to start afresh and import my existing pool, but I am terrified that I am going to mess this up. If someone could kindly provide the correct sequence to achieve this safely, that would be greatly appreciated.

Some details about the current system:
- ESXi 5.0.0 - I tried updating to 5.5 in order to support Windows 8 VMs but had an issue with vmxnet3 adapter throughput in OpenIndiana as soon as I updated. Not sure if this is hardware compatibility or not, but rolling back solved the issue.

- napp-it v0.9d2 - OpenIndiana 151a

At a guess I would imagine it's going to be something like this:
- export pool via napp-it
- upgrade ESXi? I want to stick with 5.0 but move to a later build (Update 3), just in case there is a hardware compatibility issue between my Intel quad-port network adapter (82571EB) and ESXi 5.5, which seemed to be the case when I previously tried updating.
- install OmniOS and the latest napp-it
- import pool

Does this sound about right? Is there anything specific I should watch for? Thanks

edit: does the pool version matter? I think I'm currently on v28

I would do a whole update like this:
- export the ZFS pool (the import would work even without a prior export)
- update ESXi to 6.5 U1 (boot the ESXi ISO/USB installer and select update)
The old Windows vSphere client no longer works, management is done via browser now
- download and deploy the current napp-it ova (OmniOS 151024 and napp-it 18.01),
or install OpenIndiana + napp-it manually
- boot the storage VM and import the pool
- update the pool to v5000 (menu Pools, click on the version)

Check/set jobs, users and permissions afterwards (the pool steps on the console are sketched below).
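A minimal sketch of the pool steps, assuming the pool is named tank:

Code:
# on the old system
zpool export tank
# on the new storage VM
zpool import tank
# raise the pool from v28 to v5000 (feature flags); older systems cannot import it afterwards
zpool upgrade tank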
 
When you run the alert job, it executes a zpool status and checks for OFFLINE|DEGRADED|UNAVAIL|FAULTED|REMOVED.

On a problem it sends an alert and blocks this error for 24h. If a different error is detected, a new alert is triggered. Iostat errors can only trigger an alert if a ZFS pool problem is the outcome (only a report job checks other problems like smart or iostat errors).

See
/var/web-gui/data/napp-it/zfsos/_lib/scripts/job-push.pl line 169


The pool was already marked as degraded for at least 24 hours, but napp-it did not send an alert. For some reason running iostat on the drive triggered the alert.
 
Hello,

I'm curious whether the following HBA would work for a ZFS pool:

https://www.ebay.com/itm/IBM-SAS920...061704&hash=item58e8909c27:g:Uj8AAOSwkXdaotmW

I've been using ZFS for many years now, probably close to when this project started way back when. I'm stuck using the X8SIL-F motherboard for my all-in-one. I'd like a new motherboard with at least 9 SATA ports and an M.2 slot in a micro-ATX format. Is there anything out there comparable to my old motherboard that has these features? I have 8 x 4TB drives on the way and my current setup will not support them.

TIA
 