A typical AiO setup with ESXi and a storage VM requires two independent disk controllers (SATA, SAS HBA or M.2).
On one you install ESXi; the other is given to a storage VM in passthrough mode. With a SAS HBA, physical raw disk mapping of single disks to VMs is also a supported option. So with...
NVMe pools can cause problems on OS downgrades
"If you're not using ZFS on NVMe devices, you can ignore this message.
With the integration of #14686, the nvme...
In the meantime we are at 151042o
- Fix for a rare kernel panic due to a race condition in poll()
- AMD CPU microcode updated to latest versions as of 20220408
- OpenSSL updated to version...
OmniOS 151042 stable is out,
Release 151030 LTS is now end-of-life.
You should upgrade to r151038 to stay on a supported LTS track.
OmniOS is fully Open Source and free.
Nevertheless, it takes a lot of time and...
The Tty.so is part of Expect. This error results from a newer, unsupported Perl.
Part of napp-it is Tty.so from OmniOS for Perl up to 5.34. It worked for OI as well.
Does the problem remain after a logout/login in napp-it?
What is the output of perl -v?
OI is more critical than OmniOS as...
22.dev with Apache is a beta; grouping, clustering and remote replication are not working!
To downgrade, download 21.06, 22.01, 22.02 or 22.03 (these use mini_httpd),
optionally stop Apache manually via pkill -f bin/httpd
and restart mini_httpd via /etc/init.d/napp-it restart
A ZFS filesystem can only exist below another ZFS filesystem, but it can be mounted at any point (the mountpoint must be an empty folder; the default is /pool/filesystem). A pool itself is also a ZFS filesystem. This is not a limitation, this is the way ZFS works. Usually you create a ZFS filesystem ex...
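As a sketch (pool and filesystem names are examples, not from the original post):

```shell
# create a filesystem below the pool 'tank'; the pool is itself a filesystem
zfs create tank/data

# mount it at another point; the target must be an empty folder
zfs set mountpoint=/export/data tank/data

# verify the mountpoint
zfs get mountpoint tank/data
```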
napp-it is switching from mini_httpd to Apache 2.4
Up to now the webserver below napp-it has been mini_httpd. This is an ultra-tiny 50kB single-binary webserver. With current operating systems, https is no longer working due to newer OpenSSL demands. As there is only little work on mini_httpd, we decided...
If a write to a basic vdev stalls (and this is what you have as slog), ZFS waits forever for the io to finish, as otherwise a loss of the last sync writes would happen.
Action: reboot and "replace" the slog with the same device, or remove and re-add it. Maybe a zpool clear is enough then. The last sync writes are lost (up to 4GB of...
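The remove + re-add could look like this (pool and device names are examples):

```shell
# remove the failed slog device from the pool
zpool remove tank c2t1d0

# clearing pool errors may already be enough
zpool clear tank

# re-add the same or a new device as log vdev
zpool add tank log c2t1d0

# check the result
zpool status tank
```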
If you have enabled acceleration (acc) or monitoring (mon) in napp-it (top menu, right of logout), there are background tasks running. Acc tasks read system information in the background to improve responsiveness for some time after the last menu action.
In the end you can only increase efficiency, e.g. with Jumbo frames or faster disks; reduce raid calculations, e.g. with mirrors, which also improves multistream reads; reduce disk access with RAM; or avoid extra load, e.g. due to encryption. If the CPU or disk load is at 100%, the system is as fast as possible with...
Yes, so am I. For a pure v28/5 pool this should work.
In a "production" environment you would use Windows Active Directory for user management. In such a case the Windows SID always remains the same, as the Solarish SMB server uses the real AD SID as the reference for permissions. If you import a pool...
A pool move is possible between Oracle Solaris and Open-ZFS with pool v28 / zfs v5.
Pool versions > v28 or Solaris zfs v6 are incompatible. If the pool was not exported properly prior to a move, you need zpool import -f poolname
Sun did its best to be as Windows ntfs compatible as...
For the ACL settings you can check the aclinherit setting of the filesystem or its parent.
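For example (filesystem names are placeholders):

```shell
# show aclinherit on a filesystem and its parent
zfs get aclinherit tank/share
zfs get aclinherit tank
```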
If you remove -R, only the filesystem itself is replicated, not the ones below it (-I keeps the snaps between the source base snap and the next incremental source snap).
Job-IDs must be unique
You can use any snap for an initial full replication. For ongoing incremental replications you need common snap pairs for a target rollback.
You cannot switch from -i/-I to -R on incremental replications, as you lack the common snap pairs for child filesystems/zvols.
As the target filesystem...
Replication with -I transfers all intermediate snaps on the next replication run. Avoid deleting older replication snaps from a former run, as you need at least one common snap pair to continue a ZFS replication.
Prior to an incremental replication, the target filesystem does a rollback to the common snap...
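The snap-pair mechanism can be sketched like this (pool, filesystem and snap names are examples):

```shell
# initial full replication: any snap can be used
zfs snapshot tank/data@repl_1
zfs send tank/data@repl_1 | zfs receive backup/data

# later incremental run; -I also transfers the intermediate snaps
zfs snapshot tank/data@repl_2
zfs send -I tank/data@repl_1 tank/data@repl_2 | zfs receive -F backup/data
# -F first rolls the target back to the common snap @repl_1
```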
I am not sure whether secondary AD support still works on current Solaris, as all my machines are now OmniOS.
On OmniOS you can only join one AD. If you lose AD connectivity, you must rejoin or restart SMB.
A/B: multiple DNS, set at System > Network Eth > Dns
When you join an AD, you must use the AD for DNS and the AD must be online all the time.
You can access SMB with a local user (even if the AD is off). To use the AD again after it was offline, you must restart SMB or rejoin.
To access data if...
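Restart or rejoin on OmniOS/illumos (domain and user names are examples):

```shell
# restart the kernel SMB server service
svcadm restart network/smb/server

# or rejoin the AD domain
smbadm join -u administrator mydomain.local
```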
Intel Optane is the fastest NVMe with around 500k write iops; only RAM is faster, and the RMS-300 is built on RAM. Latency is nothing to be concerned about with RAM. Random write iops are an indicator of latency. As the RMS has more than twice the iops, I would expect latency to be less than half of the...
Older napp-it installers compiled smartmontools from source on OmniOS and Solaris. A newer napp-it installs smartmontools on OmniOS from the OmniOS repository (into /opt). The current napp-it can use smartmontools installed in /sbin or under /opt.
Can you reinstall Solaris 11.2 to check whether the...
Extras from OmniOS or pkgsrc are installed under /opt to be OS independent, see
server: Mac/Windows client -> OmniOS server share
client: OmniOS as client -> Windows server share
You cannot create SMB-only users. Every user must be a regular Unix user. This is the case for Solaris and its forks. The only difference is the password: for Unix, the password hash is stored in /etc/shadow, while the SMB password is in /var/smb/smbpasswd (with a different structure).
If you create a user in...
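A sketch of creating such a user (username is an example); on Solarish the SMB hash is only written when passwd runs with pam_smb_passwd enabled in /etc/pam.conf:

```shell
# create a regular Unix user
useradd -m paul

# set the password; with pam_smb_passwd enabled this also
# writes the SMB hash to /var/smb/smbpasswd
passwd paul
```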
The iops of a Raid-Z [1-3] vdev are like those of a single disk, while the sequential performance of Raid-0/Z scales with the number of data disks.
So iops are quite the same, while a Raid-0 may be faster sequentially (not relevant with NVMe)
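A worked example with assumed per-disk numbers (250 iops, 200 MB/s are placeholders, not measurements):

```shell
# assumed single-disk values
disk_iops=250
disk_mbs=200

disks=6
parity=2                    # Raid-Z2
data=$((disks - parity))    # 4 data disks

# Raid-Z: iops like one disk, sequential scales with data disks
echo "Raid-Z2 iops:       $disk_iops"
echo "Raid-Z2 sequential: $((data * disk_mbs)) MB/s"

# Raid-0: both scale with the number of disks
echo "Raid-0 iops:        $((disks * disk_iops))"
echo "Raid-0 sequential:  $((disks * disk_mbs)) MB/s"
```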
NVMe passthrough is a very critical part of ESXi. Some configs work, others do not.
In the latter case I would use the NVMe under ESXi and give vmfs vdisks to the VMs.
The main disadvantage is that all data goes VM -> vdisk driver -> ESXi driver instead of
VM -> native driver. Mostly this is acceptable and ESXi is...
I would not suggest using an old template with an old OmniOS, due to the many bug and security fixes and newer features of a current OS.
Instead install a current OmniOS 151038 LTS or 151040 stable:
- upload iso to your local datastore, https://omnios.org/download.html
- create a new vm (Solaris...
Update for minIO users:
napp-it 20.dev from Nov 09 supports the new minIO settings (required for minIO newer than May 2021):
- ROOT_USER and ROOT_PASSWORD instead of the former KEY and SECRET
- a new webconsole at port 800x (1000 lower than the service port) with support for users, groups and...
New in napp-it 21.dev:
Push alerts via Pushover, Telegram, SendinBlue or your own API.
Per default a push uses the following webapi.
If there is a my file
/var/web-gui/_my/scripts/webapi/webapi.pl, it is used instead (update-safe)
After a new installation of OmniOS 151040, you need the following links for napp-it (or a rerun of the wget installer):
ln -s /lib/libssl.so /usr/lib/libssl.so.1.0.0
ln -s /lib/libcrypto.so /usr/lib/libcrypto.so.1.0.0