Search results

  1. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    OmniOS 151042 stable is out, https://github.com/omniosorg/omnios-build/blob/r151042/doc/ReleaseNotes.md Release 151030 LTS is now end-of-life. You should upgrade to r151038 to stay on a supported LTS track. btw OmniOS is fully Open Source and free. Nevertheless, it takes a lot of time and...
  2. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Update: the Tty.so problem on OpenIndiana with Perl 5.22 is fixed in current napp-it 21.06, 22.01 and 22.dev
  3. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    The Tty.so is part of Expect. This error results from a newer, unsupported Perl. napp-it includes a Tty.so from OmniOS for Perl versions up to 5.34. It worked for OI as well. Does the problem remain after a logout/login in napp-it? What is the output of perl -v? btw OI is more critical than OmniOS as...
  4. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    current 20.dev supports http (port 80), https (port 443) and grouping of appliances
  5. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    22.dev with Apache is a beta; grouping, clustering and remote replication are not working! To downgrade, download 21.06, 22.01, 22.02 or 22.03 (these use mini_httpd). Optionally stop Apache manually via pkill -f bin/httpd and restart mini_httpd via /etc/init.d/napp-it restart
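    The manual fallback steps from this post, collected as a small sketch (paths as given above; this assumes a napp-it appliance where the 22.dev beta started Apache):

    ```shell
    # Stop the Apache instance started by the 22.dev beta
    pkill -f bin/httpd
    # Restart the mini_httpd based napp-it service
    /etc/init.d/napp-it restart
    ```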
  6. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    1. A ZFS filesystem can only exist below another ZFS filesystem but can be mounted at any point (must be an empty folder, default mountpoint is /pool/filesystem). A pool itself is also a ZFS filesystem. This is no limitation, this is the way ZFS works. Usually you create a ZFS filesystem ex...
  7. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    napp-it is switching from mini_httpd to Apache 2.4. Up to now the webserver below napp-it has been mini_httpd, an ultra tiny 50kB single-binary webserver. With current operating systems https is no longer working due to newer OpenSSL demands. As there is only little development on mini_httpd, we decided...
  8. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I do not use it, but probably you need link aggregation, https://docs.oracle.com/cd/E36784_01/html/E37516/gmsab.html#scrolltoc
  9. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    If a write stalls to a basic vdev (this is what you have as slog), ZFS waits forever for the io to finish, as otherwise the last sync writes would be lost. Action: reboot and "replace" the slog with the same device, or remove + re-add it. Maybe a clear is enough then. Last sync writes lost (up to 4GB of...
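    The recovery options described above can be sketched with standard zpool subcommands; the pool name (tank) and slog device (c1t0d0) are placeholders:

    ```shell
    # Maybe a clear is already enough after the reboot
    zpool clear tank
    # "Replace" the slog with the same device ...
    zpool replace tank c1t0d0 c1t0d0
    # ... or remove the slog and re-add it as a log vdev
    zpool remove tank c1t0d0
    zpool add tank log c1t0d0
    ```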
  10. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    If you have enabled acceleration (acc) or monitoring (mon) in napp-it (topmenu right of logout), there are background tasks running. Acc tasks read system information in the background to improve responsiveness for some time after the last menu action.
  11. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I have never tried one. You may use one in a Linux LX container.
  12. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    A newer napp-it is faster than an older one due to optimized reading of ZFS properties. Another reason may be enabled acceleration (read properties in the background), see toplevel menu near logout.
  13. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Napp-it default tuning increases nfs/tcp/vmxnet3 buffers/servers. Regarding ESXi you may try advanced settings of the napp-it VM and set latency to low.
  14. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    In the end you can only increase efficiency e.g. with Jumbo frames or faster disks, reduce raid calculations e.g. with mirrors (which also improves multistream read), reduce disk access with RAM, or avoid extra load e.g. due to encryption. If the CPU or disk load is at 100%, the system is as fast as possible with...
  15. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Yes I am too. For a pure 28/5 pool this should work. In a "production" environment you would use Windows Active Directory for user management. In such a case the Windows SID always remains the same, as the Solarish SMB server uses the real AD SID as reference for permissions. If you import a pool...
  16. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Pool move: a pool move is possible between Oracle Solaris and Open-ZFS with pool v28/5. Pool versions > v28 or ZFS Solaris v6 are incompatible. If a pool is not exported properly prior to a move, you need zpool import -f poolname. Permissions: Sun did its best to be as Windows ntfs compatible as...
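    A minimal sketch of the move described above, with "poolname" as a placeholder; both are standard zpool subcommands:

    ```shell
    # Clean export on the source system before the move (if still possible)
    zpool export poolname
    # On the target system; -f forces the import if the pool
    # was not exported properly
    zpool import -f poolname
    ```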
  17. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    For the ACL settings you can check the aclinherit setting of the filesystem or its parent. If you remove -R, only the filesystem itself is replicated, not the ones below it (-I keeps the snaps between the source base snap and the next incremental source snap). Job-IDs must be unique
  18. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    You can use any snap for an initial full replication. For ongoing incremental replications you need common snap pairs for a target rollback. You cannot switch from -i/-I to -R on incremental replications, as you lack the common snap pairs for daughter filesystems/zvols. As the target filesystem...
  19. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Replication -I transfers all intermediate snaps on the next replication run. Avoid deleting older replication snaps from a former run, as you need at least one common snap pair to continue a ZFS replication. Prior to an incremental replication, the target filesystem does a rollback to the common snap...
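    A hypothetical sketch of such an incremental run with -I; dataset and snap names (tank/data, backup/data, @common, @new) are placeholders:

    ```shell
    # On the target: roll back to the common snap before receiving
    zfs rollback -r backup/data@common
    # On the source: -I sends the common snap, all intermediate snaps,
    # and the new snap to the target
    zfs send -I tank/data@common tank/data@new | zfs receive backup/data
    ```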
  20. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I am not sure if secondary AD support still works on current Solaris, as all my machines are now OmniOS. On OmniOS you can only join one AD. If you lose AD connectivity you must rejoin or restart SMB
  21. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    A/B: multiple DNS servers can be set at System > Network Eth > Dns. When you join an AD, you must use the AD for DNS and the AD must be online all the time. You can access SMB with a local user (even if the AD is off). To use the AD again after it was offline, you must restart SMB or rejoin. To access data if...
  22. G

    Radian RMS-200/RMS-300 vs Intel OPTANE 905p for ZFS SLOG

    Intel Optane is the fastest NVMe with around 500k write iops; only RAM is faster, and the RMS 300 is built on RAM. Latency is no concern with RAM. Random write iops is an indicator of latency. As the RMS has more than twice the iops, I would expect latency to be less than half of the...
  23. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Older napp-it installers compiled smartmontools from sources on OmniOS and Solaris. A newer napp-it installs smartmontools on OmniOS from the OmniOS repository (in /opt). The current napp-it can use smartmontools installed in /sbin or under /opt. Can you reinstall Solaris 11.2 to check if the...
  24. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Extras from OmniOS or pkgsrc are installed under /opt to be OS independent, see /opt/ooce/smartmontools/sbin/smartctl. SMB services: server = Mac/Windows client -> OmniOS server share; client = OmniOS as client -> Windows server share
  25. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    1. You cannot create SMB-only users. Every user must be a regular Unix user. This is the case for Solaris and its forks. The only difference is the password: for Unix the pw hash is stored in /etc/shadow, while the SMB password is in /var/smb/smbpasswd (different structure). If you create a user in...
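    A sketch of user creation on Solarish as described above; "bob" is a placeholder, and this assumes the SMB PAM module is configured (the usual case on a napp-it appliance) so that passwd sets both password stores:

    ```shell
    # Create a regular Unix user - there are no SMB-only users on Solarish
    useradd -m bob
    # Sets the Unix hash in /etc/shadow and, via the SMB PAM module,
    # the SMB hash in /var/smb/smbpasswd
    passwd bob
    ```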
  26. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Raid-0 and Raid-Z stripe data over disks. On access you must wait until every disk is ready. Only sequential performance can scale, as each disk holds only a part of the overall data.
  27. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Iops of a Raid-Z[1-3] is like that of a single disk, while sequential performance of Raid-0/Z scales with the number of data disks. So iops are quite the same, while a Raid-0 may be faster sequentially (not relevant with NVMe)
  28. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    NVMe passthrough is a very critical part of ESXi. Some configs work, others do not. In the latter case I would use the NVMe under ESXi and give vmfs vdisks to VMs. The main disadvantage is that all data goes VM -> vdisk driver -> ESXi driver instead of VM -> native driver. Mostly this is acceptable and ESXi is...
  29. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I would not suggest using an old template with an old OmniOS, due to the many bug and security fixes and newer features of a current OS. Instead install a current OmniOS 151038 lts or 040 stable: - upload the iso to your local datastore, https://omnios.org/download.html - create a new vm (Solaris...
  30. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Update for minIO users: napp-it 20.dev from nov 09 supports the new minIO settings (required for minIO newer than may 2021) - ROOT_USER and ROOT_PASSWORD instead of the former KEY and SECRET - a new webconsole at port 800x (1000 lower than the service port) with support for users, groups and...
  31. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    new in napp-it 21.dev: push alerts via Pushover, Telegram, SendinBlue or your own api. Per default a push uses the following webapi: /var/web-gui/data/napp-it/zfsos/_lib/scripts/webapi.pl. If there is a my-file /var/web-gui/_my/scripts/webapi/webapi.pl, it is used instead (update-safe)
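    A possible way to create that update-safe override, using exactly the two paths given in the post:

    ```shell
    # Create the _my override folder and seed it with a copy of the
    # default webapi script; napp-it uses this copy instead of the
    # default and updates do not overwrite it
    mkdir -p /var/web-gui/_my/scripts/webapi
    cp /var/web-gui/data/napp-it/zfsos/_lib/scripts/webapi.pl \
       /var/web-gui/_my/scripts/webapi/webapi.pl
    ```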
  32. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    After a new installation of OmniOS 151040, you need the following links for napp-it (or a rerun of the wget installer):
    ln -s /lib/libssl.so /usr/lib/libssl.so.1.0.0
    ln -s /lib/libcrypto.so /usr/lib/libcrypto.so.1.0.0
  33. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    OmniOS 151040 stable is out! For new features see https://github.com/omniosorg/omnios-build/blob/r151040/doc/ReleaseNotes.md
  34. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    For those who are using mpio SAS (due to better performance or in a HA cluster): there is a bug in mpathadm (the multipath admin tool) in OmniOS 151038 lts. A fix is available, see Topicbox
  35. G

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    More about zone options in OmniOS: https://omnios.org/setup/zones Support for virtio-9p is now included (makes filesystems directly connectable to VMs in Bhyve, currently as a hot fix)...
  36. G

    Store 10 GB data for 7-8 years, SSD?

    I would look at solutions that - survive silent data errors without data loss - are read only (protection against undetected modifications) - report and auto-repair errors on regular checks - are optionally encrypted. A solution can be a ZFS pool from 3 drives in a 3-way mirror. For an additional...
  37. G

    How to clean up orphaned SATA disk on omniosce?

    Sata AHCI hotplug is not as trouble-free as LSI/Broadcom HBA hotplug. I have also seen situations where it does not work as expected.
  38. G

    How to clean up orphaned SATA disk on omniosce?

    Earlier versions of napp-it default tuning enabled sata hotplug in /etc/system. As OmniOS added this as a default in 151030, this tuning (and a lower timeout for disk reads) is no longer needed in napp-it. If you additionally add this as a napp-it tuning option you get a message during boot that...
  39. G

    How to clean up orphaned SATA disk on omniosce?

    Format lists all connected disks (unlike iostat, which is an inventory since reboot). Hotplug means that you can hot remove/add a disk. If you then call format after a few seconds, it shows only the connected disks - see also the comment from OmniOS in /etc/system.d/_omnios:system:defaults * The...
  40. G

    How to clean up orphaned SATA disk on omniosce?

    Sata hotplug is enabled on OmniOS by default since 151030 https://github.com/omniosorg/omnios-build/blob/r151030/doc/ReleaseNotes.md in /etc/system.d/_omnios:system:defaults * enable sata hotplug set sata:sata_auto_online=1