OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

If you want Time Machine support via SMB on OmniOS, you must (see the CLI sketch below)

- update to OmniOS 151034 stable, due to SMB3 and some Apple extras
- share a filesystem and enable oplock (napp-it menu Services > SMB > Properties)
- enable Bonjour + Time Machine + Multicast in napp-it menu Services > Bonjour

https://forums.servethehome.com/ind...on-napp-it-with-smb-shares.16309/#post-260246
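A rough CLI equivalent of the share/oplock steps above, as a hedged sketch only (hypothetical filesystem tank/tm; the napp-it menus remain the supported way, and the Bonjour/Time Machine announcement itself is handled by napp-it's Bonjour service):

Code:
zfs set sharesmb=name=tm tank/tm          # share the filesystem via the kernel SMB server
sharectl set -p oplock_enable=true smb    # enable oplocks for the SMB service
sharectl get smb                          # verify the SMB server properties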
Thanks....I had read that thread yesterday, not realizing the OmniOS 151034 requirements.

From a logging perspective, making changes via Napp-It shows the commands/processing at the bottom of the screen. Is this logged elsewhere?
 
The last stable, 151032, would be OK too, but not 151030.

About the logging:
Actions are not logged to disk, but you can enable "edit" in the top-level menu (napp-it Pro). You then see a new top-level menu item "log" that shows the actions of the last menu call.
 
Hi _Gea!

I think I found an issue with the new(?) SMART attribute view in the listing, a very nice feature btw.
It works for attr 5 (Reallocated sectors) but not for 197 (Pending sector reallocations).

See screenshots from two separate installs (I gotta get some new disks :cautious:):

[screenshots attached]


Working example for ATTR 5

[screenshots attached]


Regards,
Wish
 
Current_Pending_Sector = 24 means the drive has 24 sectors that held data but can no longer be read.


Reallocated sectors are a problem if the number grows. I have had a few drives at work with this number stable for 5+ years, but if it grows daily or weekly it's usually a very bad sign.
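If you want to keep an eye on these attributes from the CLI as well, a hedged one-liner, assuming smartmontools is installed (device path is hypothetical; some controllers need an extra -d sat option):

Code:
smartctl -A /dev/rdsk/c1t0d0 | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'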
 
update

The problem is in /var/web-gui/data/napp-it/zfsos/_lib/illumos/get-disk-smart-bg.pl, line 251:
$k="smart_197"; should be $k="smart_198";

I have updated 19.12 and 20.06/dev
 
Hey _Gea, hope you're doing well.

I'm wondering if you've ever run any tests in napp-it with several hundred ZFS Datasets. I like to have my data very modular/packable/moveable so I have automated the creation of new datasets for every new task we work on. I've accumulated several hundred datasets and have noticed that I am now unable to load the "ZFS Filesystems" page in napp-it. At the bottom of the page I can see it's loading but the page never generates.

I'm using napp-it 18.12s on Solaris 11.4 with plenty of CPU and RAM to play with. The pool is 30x 7200RPM drives.

Code:
zfs list -o all
loads instantly via command line.
 
I suppose the problem is not the listing of the filesystems itself but the listing of all properties in the menu ZFS Filesystems; some are ZFS properties, some are filesystem properties like ACL permissions.

What you can try is comparing with a current napp-it Pro 20.06 with acceleration enabled. This collects those properties in the background. You may need an evalkey from https://www.napp-it.org/extensions/evaluate_en.html to update.
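To see where the time goes, you can compare a plain listing against a per-filesystem property/ACL query on the CLI; this is only a rough illustration with a hypothetical pool "tank" and default mountpoints, not what napp-it runs internally:

Code:
time zfs list -H -o name -r tank > /dev/null
time sh -c 'for fs in $(zfs list -H -o name -r tank); do
  zfs get -H all "$fs" > /dev/null        # ZFS properties per filesystem
  /bin/ls -Vd "/$fs" > /dev/null 2>&1     # ACL/permission info per mountpoint
done'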

If it gives the same result, you can go back to 18.12 in menu About > Update.
 
I have put some work into a ZFS Cloud-Filer concept.
Filer + in-house S3 cloud for the same files = Cloud-Filer

This is a local ZFS filer where you save, edit and store your primary office, enterprise or school/university/student data in a multiuser environment, with authorisation, authentication and file locking based on SMB and snap protection of previous versions.

This is a different approach to internet/cloud sharing, where from a storage view only a sync and share of documents is possible. Apart from some web-based tools, you cannot work directly with the files in a cloud.

While it is possible to simply enable SMB and Amazon S3 compatible cloud sharing (minIO) for the same data and ZFS filesystem, outside a single-user or few-user scenario this is the best way to create corrupted files, as there is no protection via file locking or against unwanted overwriting.

My concept is based on two filesystems for data you want to share to the internet, one for SMB and one for S3 sharing, to avoid any dependencies. Based on snaps and a one-way or two-way sync, on demand or on a schedule, newer documents are updated. To avoid storing all data twice, dedup is enabled for the two filesystems. To avoid the RAM problems of realtime ZFS dedup, a special vdev is used for the dedup table (e.g. an Optane DC4801-100 mirror).
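A minimal sketch of that layout, with hypothetical pool/filesystem/disk names and assuming a ZFS version with special allocation classes; the real setup uses the napp-it menus:

Code:
zpool add tank special mirror c2t0d0 c2t1d0         # special vdev mirror (e.g. Optane) that also holds metadata/dedup table
zfs create -o dedup=on -o sharesmb=on tank/smbdata  # deduped filesystem shared via SMB
zfs create -o dedup=on tank/s3data                  # deduped filesystem served by minIO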

For a seamless integration, you need a user/group/policy management where you can keep SMB and S3 access in sync, at least at a read/write/read-write/none access-policy level. For SMB you can use Windows for user and policy management. For S3 you need a lot of CLI commands, see https://docs.min.io/docs/minio-multi-user-quickstart-guide.html
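On the minIO side those CLI steps look roughly like this with the mc client (alias, user, policy and file names are hypothetical; check the linked guide for the exact syntax of your minIO version):

Code:
mc admin user add myminio alice alice-secret-key          # create an S3 user (access key / secret key)
mc admin policy add myminio rw-projects rw-projects.json  # register a bucket policy from a JSON file
mc admin policy set myminio rw-projects user=alice        # attach the policy to the user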

To make this more usable, I am currently integrating user, group and policy management into napp-it, e.g. policy management based on S3 buckets.

[screenshot: create_policy]


For more details, see (work in progress in current napp-it 20.dev):
http://www.napp-it.org/doc/downloads/cloudsync.pdf
 
I am trying to replace the six 3TB drives in my RAIDZ1 pool with 8TB drives. I was able to replace two drives and am working on the third. The third is taking substantially longer. I've tuned all the resilver properties the same way as for the other drives. I've looked at the iostat output and it appears the new drive is only writing at a constant 25MB/s rather than the 100+ MB/s that the first two drives did. iostat also says the drive is 97% busy. All the 8TB drives are identical. Any idea why there would be such a huge discrepancy between the resilvers?

For reference, I'm using OpenIndiana + napp-it in an all-in-one setup.
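For reference, a quick way to watch both the resilver and the individual disks while it runs, assuming the standard illumos tools and a hypothetical pool name:

Code:
zpool status -v tank    # resilver progress, scan rate and estimated time
iostat -xn 5            # per-device throughput and %busy, refreshed every 5 seconds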
 
I can see three options:

- a bad/worse disk, e.g. due to bad sectors or other damage.
Check the disk with a low-level disk tool, e.g. WD Data Lifeguard (Windows), and an intensive test.

- bad cabling or a bad backplane (if you use another bay).
Try another cable/bay.

- the disks are not really identical, e.g. the faster ones are CMR and the slower one is an SMR model:
https://www.servethehome.com/wd-red-smr-vs-cmr-tested-avoid-red-smr/2/

You can also create a basic pool on this disk and on another one and compare the
benchmark results from Pool > Benchmark. If they differ, the disks are different or something is bad.
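A rough CLI version of that comparison, with hypothetical disk ids (the napp-it Pool > Benchmark menu does something more thorough):

Code:
zpool create -f -O compression=off testA c0t5000C500AAAAAAAAd0   # throwaway single-disk pool on the suspect drive
dd if=/dev/zero of=/testA/t.bin bs=1M count=4096                 # sequential write test
zpool destroy testA
# repeat as testB on one of the fast drives and compare the dd throughput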
 
The third disk finished after about 3 days, whereas the first two took about 12 hours. I'm on to the fourth disk and it is going at almost the same slow speed as the third. The drives all came in identical packaging and have the same model number. One thing I notice is that the slower drives have serial numbers starting with 50, while the two fast drives have serial numbers starting with 80. The two faster disks were purchased from Microcenter and the remaining four from Amazon. Below is a screenshot of the current resilver. Note that I didn't go in order, so when I say 1st, 2nd, 3rd, 4th disk I'm talking about the order I resilvered in, not the order in the zpool.

[screenshot: resilver status]
 
I cannot say if there is a real difference between them or if the slower ones are some sort of fake/bad/older/different-firmware disks. If all the disks from Amazon with SN 50.. are slower, I would send them back and try to get more of the others with SN 80...
 
ZFS encryption performance on OmniOS: an Intel Xeon Silver 4110 from 2019 vs. a new AMD Epyc 7302

After weeks of waiting I got a new SuperMicro BTO system with the H12SSL-C mainboard, a 16-core/32-thread Epyc 7302 and 128GB RAM (H12SSL-C | Motherboards | Super Micro Computer, Inc.).

I made some tests with a disk pool, an NVMe pool and an Optane pool, with and without encryption, as this is what requires performance. While sync write seems to be a performance problem with encryption, the other results are promising. As you can load such a system with 24 NVMe drives, system performance may need such a jump.

I have also tried a virtualized OmniOS on ESXi 7.0U1 but found NVMe pass-through problems that need some more work.
12G SAS in pass-through mode, e.g. with WD SS530 SAS SSDs, is trouble-free and nearly as fast as NVMe.

First impression: the same pool is up to twice as fast as on the Xeon system, at nearly the same price. https://napp-it.org/doc/downloads/epyc_performance.pdf
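If you want to reproduce a rough encrypted-vs-plain comparison yourself, a minimal sketch with a hypothetical pool "tank" (the PDF numbers come from more complete benchmarks; the encryption feature must be enabled on the pool):

Code:
zfs create -o compression=off tank/plain
zfs create -o compression=off -o encryption=aes-256-gcm -o keyformat=passphrase tank/enc
dd if=/dev/zero of=/tank/plain/t.bin bs=1M count=8192   # unencrypted sequential write
dd if=/dev/zero of=/tank/enc/t.bin bs=1M count=8192     # encrypted sequential write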
 
I just changed OSes on my fileserver from an old OpenIndiana (oi_151a8) box to a new OmniOS v11 r151032e box running napp-it 20.06a3 Pro. I have a 12TB ZFS filesystem with tons of snaps and folders, as you would expect for a fileserver. Everything appeared to work OK, but once we started running any applications that hit the network drives it blew up. MS Access databases said "too many users" and would not load, UPS WorldShip died and would not load, QuickBooks would not load, etc. Individual files seemed to work fine; it was just applications that had a database of any sort on the network drives that would fail. I thought it was a server-level issue at first, but I created a new ZFS filesystem, copied the troublesome MS Access database over, and it ran fine. So I have narrowed it down to a ZFS-level issue but am completely stumped. I tried resetting ACLs and matched everything perfectly on a test folder and can't get it to work for the life of me. Any ideas here?
 
Wow, what a nightmare. Finally figured it out: I had to set NBMAND to ON, which I have always set to OFF in the past. Either the behavior of OmniOS changed or I have some old/conflicting ZFS properties in my filesystem that it was unhappy about. The only other thing I can think of is that my NFS service was disabled on this new server since I hadn't created any NFS shares yet, though I'm not sure why that would mess things up. Hopefully this helps somebody else.
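For reference, nbmand is a per-filesystem ZFS property; a hedged sketch with a hypothetical filesystem name (the new locking mode only fully applies after the filesystem is remounted or the box is rebooted):

Code:
zfs set nbmand=on tank/data
zfs get nbmand tank/data       # verify; then remount the filesystem so mandatory locking takes effect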
 
While I'm at it, does OmniOS r151032 support SMBv3? Will it still function with SMBv1 disabled on Windows 10, like they keep trying to do?
 
The OS file-locking setting nbmand should be set to on unless you have special applications (netatalk AFP was one of them); otherwise some Windows applications may get locking problems.

OmniOS 151032 (no longer under support, you should update to the current stable) is SMB 3.0. The current OmniOS stable 151036 is SMB 3.1.1.

Normally the client decides the SMB version, but in menu Services > SMB > Properties you can set the min/max server SMB protocol.
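On the CLI the same server-side limits are SMB service properties set via sharectl; a hedged sketch only, as the exact property value strings can depend on the OmniOS release:

Code:
sharectl get smb                        # list the current SMB server properties
sharectl set -p min_protocol=2.1 smb    # e.g. refuse SMB1 clients
sharectl set -p max_protocol=3.0 smb    # cap the server at SMB 3.0 (newer releases also accept 3.1.1)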
 
Anyone else seeing a bug with SMB1 in the latest OmniOS build? It had been a year or more since I had updated. I use SMB1 from Android, Solid Explorer specifically. Solid Explorer is a great overall file browser and my browser of choice, but it has issues with SMB2. I have used SMB1 from Solid Explorer to my OmniOS share for years. After the update I can open the share and see my root folder structure. When opening a root directory, like "Documents", I am presented with a single subfolder, also called "Documents". Opening the subfolder presents another "Documents", and so on, ad infinitum. It doesn't matter how many levels deep you go; it's Documents all the way down. If you open the Music folder in the root, then it is Music folders all the way down. It's weird and I wanted to see if anyone can duplicate this or knows of a fix. I did try some other Android apps with SMB1 support and can duplicate the issue, so it is not a Solid Explorer problem.
 
Have you been experimenting with basing everything on top of SmartOS instead of involving ESXi? SmartOS is, just like ESXi, a type 1 hypervisor, so they have the same use cases and target group. One difference is that for managing ESXi you need a separate PC with an admin GUI. SmartOS doesn't need the separate PC; you can administer SmartOS directly on the same server. Also, SmartOS does not have the RAM limitations that the free trial of ESXi has. SmartOS seems really innovative.
 
(replying to the SMB1 / Solid Explorer question above)

There was a lot of work done on the Illumos SMB server recently. It now supports SMB 3.1.1. SMB1 is more or less end of life and should be avoided. As I cannot say which Android file browser works better or whether there is a workaround, you may ask at https://illumos.topicbox.com/groups/discuss
 
(replying to the SmartOS question above)

SmartOS is a killer OS based on the Solaris fork Illumos, with the best VM support of all of them. It supports Solaris zones, LX (Linux) zones, Bhyve, KVM and Docker. With these features it is on par with or even superior to ESXi or Proxmox. In its minimalistic approach it is similar to ESXi: you boot it from a USB stick and it runs completely from RAM, with some system folders redirected to the ZFS data pool. Nothing of relevance is on the boot stick.

As it is an Illumos distribution like NexentaStor, OmniOS or OpenIndiana, it has superior ZFS capabilities and offers the ZFS/kernel-based, multithreaded SMB server, so additional filer use, e.g. via NFS, SMB or S3, would be a dream.

I have tried to add SmartOS to the range of my supported operating systems. While not completely impossible, it is quite hard to add filer options, as SmartOS wants everything in VMs (zones), with a massive restriction of options in the global zone where filer options are usually located.

It would be possible to add a mechanism to save/restore global-zone options for filer or general server use, but this would require some work and maintenance. This is why I do not support SmartOS. If there were a ready-to-use mechanism, I would add napp-it support immediately. If you use SmartOS as a platform for virtualisation only, SmartOS is perfect, with enterprise-grade stability.

Some filer services work in a zone, so some services are possible, but not an overall user experience comparable to a filer based on OmniOS or OpenIndiana (support for LX containers and Bhyve was ported over from SmartOS to Illumos).
 
Hopefully someone can throw me a clue here. I've been running ZoL 0.8.x hosting ESXi 6.7 storage and decided to give OmniOS/Hyper-V a go. I got the latest OmniOS CE installed with all available updates. But when I try importing my 8-disk raid-10 jbod pool or 2-disk raid-1 backup pool:

Code:
root@omnios:~# zpool import
   pool: jbod
     id: 18186492277098622343
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        jbod                                            UNAVAIL  insufficient replicas
          mirror-0                                      UNAVAIL  insufficient replicas
            c0t5000C500412EE41Fd0                       UNAVAIL  corrupted data
            c0t5000C50041BD3E87d0                       UNAVAIL  corrupted data
          mirror-1                                      UNAVAIL  insufficient replicas
            c0t5000C50055E99CDFd0                       UNAVAIL  corrupted data
            c0t5000C500426C6F73d0                       UNAVAIL  corrupted data
          mirror-2                                      UNAVAIL  insufficient replicas
            c0t5000C50055E9A7A3d0                       UNAVAIL  corrupted data
            c0t5000C5005621857Bd0                       UNAVAIL  corrupted data
          mirror-3                                      UNAVAIL  insufficient replicas
            c0t5000C50056ED546Fd0                       UNAVAIL  corrupted data
            c0t5000C50057575FE3d0                       UNAVAIL  corrupted data
        logs
          /dev/disk/by-id/wwn-0x5000a7203009b720-part1  UNAVAIL  corrupted data

   pool: backup
     id: 17638970758987483910
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        backup                                          UNAVAIL  insufficient replicas
          mirror-0                                      UNAVAIL  insufficient replicas
            c0t5000C5002E3AA680d0                       UNAVAIL  corrupted data
            c0t5000C5002E38E0EBd0                       UNAVAIL  corrupted data
root@omnios:~#

I had an issue with the CentOS storage appliance, so I was not able to export either pool before pulling the plug, but I doubt that is the issue. More likely, ZoL is doing something OmniOS doesn't like :( Is there any way to get around this? If not, I'm going to have to boot from an Ubuntu/ZFS rescue CD image, plug in a spare disk, and copy stuff over. Very annoying, if so, since I have several TB of backup data on both pools. The backup pool is a bit of a misnomer; it's actually duplicates of backup files on the jbod pool, so if I *have* to, I can trash the backup pool, recreate it under OmniOS, then boot to a ZoL rescue image and copy everything from jbod => backup, but I'd really rather not :)
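For that particular status message ("last accessed by another system") the usual first attempts look like this; a hedged aside only, since as the following posts show it did not solve the ZoL/OmniOS incompatibility here:

Code:
zpool import -f jbod                    # force the import despite the foreign hostid
zpool import -f -o readonly=on jbod     # or at least try a read-only import to rescue the data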
 
Well, that was exasperating! I made sure to recreate both pools with versions 28 & 5 and still no luck. I did try 'zpool import -d /dev/dsk/' with no luck for either pool :( I will look at that link...
 
ZoL has some kind of incompatibility issue. As I said, even when creating the pool with NO features, I was unable to import it on OmniOS. So I recreated the 1TB SATA pool with 28/5 on OmniOS, and WAS able to import it on the Ubuntu rescue CD, and am now (again) copying data over. What a PITA.
 
Well, I'm getting close to giving up on OmniOS. I have 10 drives in my JBOD, and sas2ircu doesn't support the 12Gb HBA I have. Newer sas3ircu versions don't work, and P4 from the Avago site core-dumps. This is a bit of a PITA, as Hyper-V doesn't support NFS storage, so it's iSCSI or (maybe) SMB, but if I have a drive die and can't tell what damn slot it's in, it's a no-go here. The last reference I found, someone claimed P5 worked, but for me it didn't. Ugh...
 
Ugly hack of the year: I had a spare HBA on the shelf, and the JBOD is dual-head (you already know where I'm going here, lol). I put the spare in the Hyper-V server, plugged it into the 2nd port on the JBOD, downloaded the latest and greatest sas3ircu for Windows, and ta-da :)
 
sas2ircu is for 6G HBAs, sas3ircu is for 12G HBAs.
Sadly there is no longer any support for sas2ircu/sas3ircu from Broadcom. sas3ircu P4 is the last version that works on Illumos (and Solaris 11.3). Newer ones are for Oracle Solaris 11.4 only, as they are built for the Solaris lsc driver and not the Illumos mpt one. There is a method to modify newer sas3ircu releases for Illumos, see https://www.illumos.org/issues/6784, which I use in current napp-it (where I have included the method to modify sas3ircu automatically).

In general you mainly need sas3ircu to switch the red alert LED on/off in case of problems and for disk bay detection (WWN number -> slot number). In napp-it I either support sas3ircu or use driver-based detection for this, especially as sas3ircu only works up to LSI 3008-based HBAs.
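For reference, the typical sas3ircu calls behind that (controller number and enclosure:slot are hypothetical):

Code:
sas3ircu LIST               # enumerate controllers
sas3ircu 0 DISPLAY          # map WWNs to enclosure:slot on controller 0
sas3ircu 0 LOCATE 2:5 ON    # blink the locate LED on enclosure 2, slot 5
sas3ircu 0 LOCATE 2:5 OFF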

When you create pools as v28, you should also select filesystem v5, even outside Solaris, where there is no ZFS v6 like on Solaris.
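A minimal sketch of that with hypothetical pool/disk names: creating the pool at pool version 28 with filesystem version 5 avoids feature flags entirely, so both ZoL and Illumos can import it:

Code:
zpool create -o version=28 -O version=5 transfer c0t5000C500AAAAAAAAd0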
 
Interesting. Well, I don't understand why I was getting core dumps :( This is all only for the case where a disk is having issues and I have to locate it in the JBOD, so maybe I just live with it. What an adventure. Oracle seems to have broken the download link for Solaris, I looked at NexentaStor and the CE is 5 years old (no thanks!), and I have played with QuantaStor (Linux-based) in the past, but every Linux appliance I have seen insists on doing iSCSI to zvols, which sucks performance-wise, so I'm basically stuck with OmniOS for now. On the other hand, the patch looks easy enough, so I'll give that a try...
 
The napp-it menu Disks > Disks Location > select sas3ircu can modify sas3ircu automatically.
 
Hey all,
I run an AIO napp-it (OmniOS) box on ESXi 6.7 U3. I have two HBAs built in to connect my hard disks, and I now want to add an Nvidia GPU to assign to a single VM. I noticed that the only PCIe slot that supports x16 is occupied by one of the HBAs. Is it safe to move the HBA to an x8 slot so I can use the x16 slot for the GPU? If not, how can I achieve this?
Thanks for all ideas and your suggestions!
 