OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

As ZFS stores all RAID information on the disks themselves, this is safe. You only need to (see the command sketch below):

- export pool, power down
- move HBA
- power up, import pool
 
The rpool (operating system) is normally not on a disk behind the HBA but on a local ESXi datastore, e.g. a SATA boot disk used for ESXi.
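For reference, a minimal sketch of the export/import steps from the list above (the pool name "tank" is just a placeholder for your data pool):

Code:
# on the old configuration, before powering down
zpool export tank

# after moving the HBA and powering up again
zpool import          # lists pools that are available for import
zpool import tank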
 
I tried that, but it seems it did something to the passthrough settings. I can't turn on the NAS VM; it tells me "Failed to start the virtual machine. Device 101:0.0 is not a passthrough device." Any idea what went wrong or how to solve this?
Thanks!

Edit: I checked the config of the VM. There's a PCI device, but the dropdown behind it is empty and cannot be selected; I could only remove it. Maybe once it's removed, I can add the device again fresh?

Another edit: so obviously, moving the card into another slot gave it another path. I had to go into the host configuration and enable the controller for passthrough again, then reboot the host. Then I had to remove the PCI passthrough device from the NAS VM and boot it. Once it was up, I shut it down again and added the "new" passthrough device. Only then could it boot again. I then imported the pool again and everything works like a charm.
Cheers!
 

That is correct.
 
So I'm in the process of setting up the HA configuration. I've elected to go with mgmt == LAN. One question: for security, quite some time ago I put all of the IPMI interfaces on my servers on a special VLAN that is only accessible from my Win10 workstation. I note that the two HA servers (bare metal) and the control VM (ESXi) need access to the IPMI addresses. Is the way to fix this to add VLAN access on the two physical hosts as well as the control VM? Re-reading your PDF, it's not clear whether the HA members need IPMI access for STONITH, as the PDF says "VM reset command for head 1 via ipmi or ESXi cli (for tests enter echo 1)".
 
A failover from an active head1 to a standby head2 happens under control of the cluster control server/VM. This means the control server initiates a fast remote shutdown of head1, followed by a pool import on head2, a failover of the HA IP and optionally a restore of services like iSCSI or www, or a user/group restore from the formerly active head.

For this you normally do not need an additional kill command (STONITH, "shoot the other node in the head") for head1. But if head1 hangs for whatever reason while head2 imports the pool, the pool can become corrupted. This is why a second, independent kill mechanism for a formerly active head is implemented. If the whole cluster is virtualised, this can be a VM reset via SSH to ESXi. With a bare-metal server you can initiate a hard reset via IPMI.

In both cases only the control server needs access to the ESXi management or IPMI interface, e.g. via an additional NIC or vNIC there. If you do not need this additional security, you can skip/fake this step ("echo 1" simulates a successful STONITH). The heads themselves do not need ESXi or IPMI access.
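As an illustration, a hedged sketch of what such a kill command from the control server could look like (the BMC address, credentials and VM id are placeholders; the exact command napp-it issues may differ):

Code:
# bare-metal head: hard reset via IPMI
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret chassis power reset

# virtualised head: VM reset via SSH to the ESXi host (VM id 12 is an example)
ssh root@esxi-host "vim-cmd vmsvc/power.reset 12"

# test/fake STONITH as described above: always reports success
echo 1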


btw
The multihost ZFS property is already in Illumos. This may be an additional option, besides STONITH, to protect a pool.

but read
https://illumos.topicbox.com/groups...ture-multiple-import-protection-for-ha-setups

more
https://openzfs.org/w/images/d/d9/05-MMP-openzfs-2017.4.pdf
 

I can confirm multihost is present.
 
Thanks, makes my job much simpler :)
 
Hopefully one last question: what does 'Net Link head 2 name of the nic link ex vmxnet3s and add :3 ex vmxnet3s0:3 (HA ip will use this link)' mean? Is this intended to be a dedicated heartbeat link between the two hosts?
 
Normally you add an IP to a network link, e.g. vmxnet3s0, e.g. the LAN address. Under this IP you can always access the server. Additionally, you can temporarily add or remove one or more IP addresses on the same network link and name the logical link with an additional :n (the numbers do not need to be consecutive; you can use :8 without any lower numbers set).

You also see these numbers when you call ifconfig -a.

In a cluster environment you access services like NFS or SMB not over the "normal" link but under a movable HA IP that is provided either by head1 or head2. The control server must know in advance under which link number it can set or access the IP, e.g. vmxnet3s0:3, on a per-link basis (the number is hardcoded in the management software; you should not use it for other purposes).
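For illustration only, a hedged sketch of how such a logical interface looks with classic ifconfig on OmniOS/OpenIndiana (the link name and the 192.168.3.40 address are examples; the cluster software manages this for you):

Code:
# plumb logical interface :3 on the link and assign the HA IP
ifconfig vmxnet3s0:3 plumb
ifconfig vmxnet3s0:3 192.168.3.40 netmask 255.255.255.0 up

# the :n entries show up here
ifconfig -a

# remove the HA IP again, e.g. when the head gives up the pool
ifconfig vmxnet3s0:3 unplumb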
 
I'm trying to compare this with the pacemaker/corosync HA ZFS setup I was using before. So in my case, 192.168.3.0/24 is the 10Gb storage network. Host #1 is 192.168.3.44 and host #2 is 192.168.3.45. I intend to use 192.168.3.40 as the HA IP that vSphere will connect to. If I understand you correctly, in the CC configuration: the HA IP is 192.168.3.40, Net Link Head 1 (primary storage) would be 'aggr0:3' (I have two 10Gb links in a bond), whereas for Net Link Head 2 (old Sandy Bridge standby storage server) it would be i40e0:3, since the standby server does not have a bonded 10Gb link. Does this sound correct?
 
OmniOS Security Update r36m, r34am, r30cm - Security / Features / Bugfixes

Support for secure RPC for the Netlogon protocol between OmniOS systems and Microsoft Active Directory servers has been added to all OmniOS versions under support. This is required to fully mitigate CVE-2020-1472, and will shortly be enforced by Windows domain controllers.

If you use Windows Active Directory you should at least evaluate this update.

https://omniosce.org/releasenotes.html
 
Well, I'm not sure what I'm doing wrong. I'm trying to follow the instructions in the Z-RAID PDF, but no matter what I do, after I enter all the settings in the CC appliance it fails completely. The screenshot is enclosed. BTW, I think I wasn't clear earlier: I have 10.0.0.0/24 as LAN and 192.168.3.0/24 as the 10Gb storage network. I've tried setting the appliance LAN in either subnet, and neither works. It was also not clear whether I should set the two appliances to cluster-failover mode before setting up the CC appliance, but I've tried both ways with no difference.

p.s. I appreciate your patience here...
 

Attachment: image0003.jpg (screenshot of the failed CC appliance configuration)
Very strange. I tried to do 'pkg update'. It works on one of the three installs (omnios1) but not on omnios2 or omnios-cc. I get this:

WARNING: Errors were encountered when attempting to retrieve package
catalog information. Packages added to the affected publisher repositories since
the last retrieval may not be available.

Errors were encountered when attempting to contact repository for publisher 'extra.omnios'.

Unable to contact valid package repository: https://pkg.omniosce.org/r151036/extra/
Encountered the following error(s):
Framework error: code: E_SSL_CACERT (60) reason: SSL certificate problem: certificate has expired
URL: 'https://pkg.omniosce.org/r151036/extra'

Errors were encountered when attempting to contact 2 of 2 repositories for publisher 'omnios'.

Unable to contact valid package repository: https://pkg.omniosce.org/r151036/core/
Encountered the following error(s):
Framework error: code: E_SSL_CACERT (60) reason: SSL certificate problem: certificate has expired
URL: 'https://pkg.omniosce.org/r151036/core'

Unable to contact valid package repository: https://pkg.omniosce.org/r151036/core/
Encountered the following error(s):
Framework error: code: E_SSL_CACERT (60) reason: SSL certificate problem: certificate has expired
URL: 'https://pkg.omniosce.org/r151036/core'

on both failing hosts. I verified that /etc/ssl/pkg/OmniOSce_CA.pem is present. I'm certainly no SSL expert and didn't do anything that I'm aware of. I can also confirm their cert is NOT expired (if I visit those URLs from a browser, all is well). I can only conclude something is broken in the OmniOS installs? I have no idea why this happened (I certainly wasn't messing with pkg information or deleting things, and two of my three installs are borked?). I suppose I can reinstall, but without some idea of what the heck happened, it doesn't give me a warm feeling...
 
A "certificate expired" happens when the OmniOS cerificate is indeed expired (unlikely) or when the system date is wrong (too old or in future). On ESXi VM date is in sync with ESXi server date.
 
Well, their cert is certainly not expired. I never thought of the system date. I will check, thanks!
 
Okay, looking good. I've got IPMI working using ipmitool. I tested the failover time by running a Linux guest with a dd command using 'status=progress' from /dev/sda to /dev/null, starting a manual failover, starting a timer when disk I/O stopped and stopping it when I/O resumed. It seems to be on the order of 45 seconds or so, which sounds OK.
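For reference, a hedged sketch of that kind of in-guest test (the device name /dev/sda is an example; status=progress needs GNU coreutils dd):

Code:
# read continuously from the virtual disk; I/O stalls during the failover window
dd if=/dev/sda of=/dev/null bs=1M status=progress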
 
Hopefully the last step: I enabled and turned on the failover service (auto mode) and tested by using IPMI manually to power off the master. It all seems to work. The one thing I'm wondering about: on the Appliance Cluster page, IPMI is showing a red indicator?
 

Attachment: image0005.jpg (screenshot of the Appliance Cluster page)
The check executes a console command and tests the result, so you need, for example, an IPMI status info.
For tests, or without STONITH, this can be a simple "echo 1" with a test for a return value of 1.
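A hedged sketch of what such a check command could look like (BMC address and credentials are placeholders; the exact string napp-it tests for may differ):

Code:
# real check: query the power state of the other head via IPMI
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret chassis power status
# prints e.g. "Chassis Power is on"

# test/fake check: always "succeeds" with a 1
echo 1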
 
Hey, I'm just about to buy some hardware for my next machine and wanted some advice on where to spend the cash. I'm building an all-in-one and wondering whether it's worth just getting more RAM or spending on an NVMe array for VM storage. I have spinning rust as my main backup/storage and want to run a number of VMs. I've also got a couple of Optane 900P drives to use.

Initially I was thinking of getting some PCIe x16 M.2 caddies and populating those, but with 128GB of RAM now costing about the same as eight 1TB M.2 drives, what's the best plan? The motherboard will be the ASRock ROMED8-2T.
 
You need RAM for the storage VM and the other VMs. With ultra-fast storage, RAM is less important for ZFS performance than with slower pools, beyond a certain level. You should have at least 8 GB RAM for the storage VM for decent performance. You may see improvements up to, say, 32 GB. Beyond that you need special workloads to gain a relevant advantage (e.g. a multiuser mailserver with millions of files), as the RAM read cache helps with small random files and access patterns, not with sequential workloads, and the write cache is 10% of RAM, max 4 GB, by default.
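As a hedged illustration of that 10%/4GB default: on current illumos the write cache limit is, I believe, the kernel variable zfs_dirty_data_max, which can be inspected like this (run as root; variable name assumed):

Code:
# current write cache (dirty data) limit in bytes
echo "zfs_dirty_data_max/E" | mdb -k

# overall memory usage, including the ZFS ARC read cache
echo "::memstat" | mdb -k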

Regarding NVMe and storage:
Especially the cheaper desktop NVMe drives have three disadvantages in a server:
- No powerloss protection (PLP). A crash during a write can corrupt files; bad for a pool, even worse for an Slog. The Optane 90x (unlike the datacenter Optane 4801) do not have guaranteed powerloss protection but are expected to behave well. Most cheap NVMe drives are bad at PLP.
- Performance is not good under steady load.
- Endurance is not good.

A good compromise are 12G SAS SSDs like the WD SS530: nearly as fast as NVMe, with PLP, and much easier to handle than NVMe with PCIe passthrough (no problems with many disks, hotplug, etc.).

btw
Nice board. I have tested a similar one from SuperMicro as a candidate for my next server replacement; see
https://napp-it.org/doc/downloads/epyc_performance.pdf
 
Thanks. On the PLP point, if you have a UPS that can shut down the server, is it as important?

The cost of those SAS SSDs is far too high though. They seem to be more than 3x the cost of a consumer drive, and I'm not sure I'm going to get anywhere near the endurance limits.
 
A UPS is always helpful, not only in case of a power outage but also with residual-current circuit breakers, but it does not help in case of a system crash. Even for a SoHo server, and especially with such a high-end machine, I would care about this.

The main problem with data corruption and VMs is that you mostly detect problems late, so it is better to use any method to avoid them. SSDs are more critical regarding powerloss than mechanical disks, where you can simply disable the disk cache.

A cheaper alternative are the Samsung PM.. SSDs/NVMe. Not as superior as the WD but much cheaper and with PLP.
 
I'm getting this error every 15 minutes; when playing movies or music, the connection to the OI server drops (SMB share called 'storage'), so I'm guessing it may be related. Otherwise connectivity is fine most of the time, without any authentication issues.

Apr 9 09:46:50 openindiana smbsrv: [ID 138215 kern.notice] NOTICE: smbd[NT Authority\Anonymous]: storage access denied: IPC only
Apr 9 09:46:50 openindiana last message repeated 7 times

9 years later I have the exact same issue.

I have an OmniOS 5.11 (omnios-r151032-19f7bd2ae5, November 2019) install with napp-it and a few Windows clients in question. I've actually had this issue since November 2019 but haven't got around to looking at it until now.
Windows client A: Windows 10 LTSC 2019 (version 1809)
Windows client B: Windows 10 Pro 20H2 (version 2009) doing an anonymous guest login
Windows client C: Windows Server 2019 (version 1809) doing a user login to the share

Clients B and C have zero issues talking to my OmniOS server. The OmniOS console shows the same "smbsrv notice smbd nt authority\anonymous media access denied ipc only" errors whenever Client A does a prolonged WRITE to the OmniOS server. Client A can read from the OmniOS/napp-it server all day, but whenever it tries to write more than a few megabytes, the OmniOS server seizes up and any client on the network talking to it has its connection hang for 20 seconds. The server recovers after those 20 seconds and resumes connections. I tried both a drive mapping logged in as an OmniOS share user and plain \\server browsing as anonymous, but both fail. The OmniOS server really doesn't like this client!
 
There are three options:
- Client A, or a service on it, tries unwanted anonymous access.
- There are some recent critical bugfixes in Windows 10 and OmniOS to follow Windows' demands. I would update Windows and OmniOS to current versions (Windows 20H2 and OmniOS 151036 stable).
- The delay can happen due to locking problems. To check, switch nbmand (menu ZFS filesystems) or oplock (menu Service > SMB > Properties) and switch back if not helpful. You can also try to limit the server SMB version, e.g. to 2.0 (OmniOS 151036 supports up to SMB 3.1.1 with the kernel/ZFS-based SMB server). Example commands are sketched below.

I would start with an OmniOS update to 151036 stable, see
http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf
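For reference, a hedged CLI sketch of the locking/SMB-version switches from the third option above (the filesystem name tank/media is just an example; napp-it's menus do the equivalent, and the accepted property values can vary by release):

Code:
# toggle mandatory locking on the affected filesystem (takes effect on remount/reshare)
zfs set nbmand=off tank/media

# disable opportunistic locking in the kernel SMB server
sharectl set -p oplock_enable=false smb

# limit the maximum SMB dialect, e.g. to SMB 2.1
sharectl set -p max_protocol=2.1 smb

# show the current SMB server settings
sharectl get smb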
 
Thanks for the suggestions, Gea.
Windows is patched to the newest March 2021 security update. This LTSC version is the extended-maintenance support release with continued updates.
I disabled the guest account on OmniOS and cleared out all saved credentials on Client A, then reconnected to OmniOS with a new non-root OmniOS account. It took the new login and I could read from the server as usual. Checking Get-SmbConnection, it's definitely using the OmniOS account for the IPC connections and not an anonymous connection.

Code:
ServerName ShareName UserName      Credential      Dialect NumOpens
---------- --------- --------      ----------      ------- --------
SAN1       IPC$      LPT1\winadmin SAN1\omniadmin  3.0.2   0
SAN1       media     LPT1\winadmin SAN1\omniadmin  3.0.2   2

But the locking still happens as NT Authority\Anonymous (according to the OmniOS console) when I try to write to OmniOS. Client B reports the exact same username and dialect, with a secondary IPC$ connection too, and has zero issues. So bewildered by this.
 
Edit: Resolved the issue by disabling jumbo frames on Client A. All the other clients that talk to the OmniOS server over SMB have jumbo frames enabled with no issues, but there seem to be issues with jumbos on this particular host. Shrug! So glad it's fixed.
 
OmniOS 151038, Stable and Long-Term-Supported (LTS) release, due May 2021 (TBC)
https://github.com/omniosorg/omnios-build/blob/master/doc/ReleaseNotes.md

There are some important new features available, like persistent L2ARC, SMB improvements, improvements around Bhyve/LX, and improved support for newer hardware, e.g. AMD Zen, Intel X710 or newer chipsets. If you intend a fast switch, you can evaluate the new features in OmniOS 151037 bloody, which can be updated in May to 151038.
 
_Gea, I noticed an issue: the SMTP password for email notifications has a limit of about 28 characters. Some email services use API keys for mailing now. I'm using SendGrid, and they make your SMTP password your API key, which is 70 characters long.
 
Edit /var/web-gui/data/wwwroot/cgi-bin/admin.pl line 2171 (current Pro) and remove the length limit:
&ask('value_smtp-pw',$txt{'set_smtp_pw'},$cfg{'value_smtp-pw'},'m28');

change to
&ask('value_smtp-pw',$txt{'set_smtp_pw'},$cfg{'value_smtp-pw'});

Please report if it works; I will then modify napp-it.
 
Hey Gea, I'm getting the
Code:
Tty.c: loadable library and perl binaries are mismatched (got handshake key 10c80080, needed 10f80080)
error. I don't think I changed anything in the Perl environment because I only use this machine to run ZFS + Napp-it. Is there anything I can try to fix it?

Oracle Solaris 11.4.30.88.3.
Napp-it running version : 18.12s

EDIT: Fixed by downloading 18.12w7 (free). Keeping the post up in case someone else runs into this issue.
 
Ever since I went to the latest stable OmniOS I've been getting weird file-modified behavior. If I make a VBS file in Notepad++ and keep it open while I execute the script, I get a prompt saying the file has been modified. It didn't use to modify the file after I executed it. Any idea what's going on here?
 