OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

ZFS is crash-resistant.
This means that ZFS, a copy-on-write filesystem, is not corrupted by a crash during writes (unlike older filesystems). It does not mean that a VM or a database is consistent after a crash, as ZFS uses up to 4 GB of RAM as write cache for better performance. Think of accounting software where you deduct an amount from one account and the system crashes before you can add it to the other account (the money is lost in data nirvana because the RAM cache is lost on a crash).

ZFS offers sync write, a mechanism where every committed write is logged. After a crash the committed writes are replayed on the next reboot, allowing a database or VM to stay consistent. Sadly, sync write requires a log device with powerloss protection and ultra-low latency for good performance. In the past these log devices (Slog) were expensive and despite that still slow compared to fast writes without sync.

The new Intel Optane is a game-changing technology. When you use it as an Slog, even sequential sync writes are nearly as fast as writes without sync. If you use Optane not as an Slog but for the pool itself, it opens a new performance level for small random writes and sequential writes. Even a filer with sync enabled is possible now. I am impressed!

See http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf
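To make this concrete: forcing sync write on a filesystem and adding an Slog device are one-liners (a minimal sketch; the pool, filesystem and device names here are placeholders, not from this thread):

Code:
# force sync write for a filesystem (values: standard | always | disabled)
zfs set sync=always tank/vmstore

# add a low-latency, powerloss-protected device (e.g. an Optane) as Slog
zpool add tank log c2t1d0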
 
Hey Gea,

Just got an alert that one of my rpool disks died. Laaaaaame. However, when I log in, the web UI is unresponsive. If I recall correctly, you previously said it depends on certain commands completing, one of them being format.

When I log in to the console and execute format, it just says "Searching for disks.." and hangs.

I'm running OmniOS latest and greatest. What next? Do I need to find the disk first and remove it? Could that be hanging it up? If so, can I run the SAS tools for the LSI HBA that I downloaded manually from the command line to find the disk?

Any help appreciated. thanks!
 

Figured it out. Since the disk had just started to die, it was still in the pool, faulted but trying to run. This hung up the format command. I let it keep running and eventually the disk was removed and the UI was responsive again.

I assume we just need to pull the disk, wait a bit for things to refresh, then pop in a new one in the same slot (all slots are full), wait for it to see the disk, then just replace the disk in the UI or on the command line?
 
You may check System > Logs to see if napp-it is basically running, or /var/adm/messages at the console for the reason.

Another option is to check whether a disk activity LED is constantly on when the pool hangs (remove that disk). If the system hangs completely, remove all data disks, reboot and insert them disk by disk until the system hangs again (bad disk found, remove it).

This is uncritical for a ZFS raid: the pool becomes available when enough disks come back.
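If the GUI is stuck, the same information is available from the console with standard illumos tools (a sketch, nothing napp-it specific):

Code:
# recent kernel/driver messages, often show retries or resets of a dying disk
tail -50 /var/adm/messages

# per-device error counters (soft/hard/transport errors)
iostat -En

# faults already diagnosed by the fault manager
fmadm faulty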
 
I was able to locate the disk, thanks!

Another question though. I got a failure alert for this disk last night, but just got the same alert tonight. Does it repeat every 24 hours? I don't see anything else faulted, so I assume that is it.
 
An alert check is done, for example, every 5 minutes, but you probably don't want a mail every 5 minutes, so further alerts with the same reason (bad pool) are blocked for 24 hours.
 
OK, perfect. Next problem (ha! sorry), my Google-fu is failing me.

In the web UI, I try to replace the disk (which I removed and replaced in the same slot with a new disk):

Here is the rpool:

  pool: rpool
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 51.0G in 0h7m with 0 errors on Thu Apr 6 14:10:58 2017
config:

        NAME                         STATE     READ WRITE CKSUM
        rpool                        DEGRADED     0     0     0
          mirror-0                   DEGRADED     0     0     0
            c1t5000C5002346EEF7d0s0  ONLINE       0     0     0
            c1t5000C500234C8147d0s0  REMOVED      0     0     0

errors: No known data errors

When I try to replace the removed disk in the napp-it UI, it errors out:

"cannot replace c1t5000C500234C8147d0 with c5t50000394182AA306d0: no such device in pool"

I tried from the command line too, same result:

# zpool replace -f "rpool" c1t5000C500234C8147d0 c5t50000394182AA306d0
cannot replace c1t5000C500234C8147d0 with c5t50000394182AA306d0: no such device in pool

I'm guessing this has to do with the disk being removed? Can you point me in the right direction?

Cheers!
 

Strange, this is what we had to do instead:

zpool replace rpool c1t5000C500234C8147d0s0 c5t50000394182AA306d0

which worked.

We then ran (I assume this is needed since the disk needs to be bootable?):

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t50000394182AA306d0s0

Let me know if my logic is sound. I'm no ZFS expert. haha.
 
The zpool replace command wants olddisk and newdisk as parameters.
On Solaris data pools this is always the whole disk or a partition, so the name ends with d0.

napp-it assumes this as well.
When, as in your case on a mirrored rpool, you use a slice of a disk (the name ends with s0),
you may indeed need to start the replace command manually from the console.

Probably, as you need extra modifications on a boot disk, you may need to remove the disk
from the boot mirror and then add a new one to rebuild the boot system.
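A minimal sketch of that manual route on a grub-based rpool mirror (device names taken from the zpool status above; the new disk may first need a matching SMI label/slice, e.g. copied with prtvtoc/fmthard, so verify against your own setup before running anything):

Code:
# detach the failed half of the boot mirror, then attach the new disk's slice
zpool detach rpool c1t5000C500234C8147d0s0
zpool attach rpool c1t5000C5002346EEF7d0s0 c5t50000394182AA306d0s0

# make the new mirror half bootable
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t50000394182AA306d0s0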
 
I'm stuck upgrading napp-it:
I finally upgraded a box from OI 151a8 to Hipster 2017.10 without any major hiccups.
The napp-it upgrade (from 0.99?) via wget, however, is stuck at:
Code:
3. setup/ update napp-it
3.1. update folders
---------------------------------
cp: cycle detected: /var/web-gui/data/./wwwroot/cgi-bin/napp-it

Any idea how to get past this without leaving napp-it in a permanently bad state?

Thanks!
 
Either

- delete/rename /var/web-gui/data/wwwroot/cgi-bin/napp-it
(this is the one that caused the trouble)

or

- rename /var/web-gui/data/ to /var/web-gui/data.old and restart the wget installer for a clean setup
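The second option might look like this on the console (a sketch; the installer command is the one documented on napp-it.org):

Code:
mv /var/web-gui/data /var/web-gui/data.old
wget -O - www.napp-it.org/nappit | perl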
 
Another two days later in the lab:
http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf

Can someone with Solaris verify these results vs OmniOS from Windows (SMB and iSCSI)?
They are too good!

TL;DR ("too long; didn't read")

This benchmark sequence was intended to answer some basic questions about disks, SSDs, NVMe/Optane, the effect of RAM and the difference between native ZFS v.37 in Oracle Solaris vs OpenZFS in the free Solaris fork OmniOS. If you want to build a ZFS system, this may help you optimize.

1. The most important factor is RAM

Whenever your workload can mainly be processed within your RAM, even a slow HD pool is nearly as fast as an ultimate Optane pool. Calculate around 2 GB for your OS. Then add the wanted RAM-based write cache (OmniOS default: 10% of RAM, max 4 GB) and add the RAM that you want as read cache. If your workload exceeds your RAM capabilities or cannot use the RAM, as with sync write, performance can drop dramatically. In a home server/media server/SoHo filer environment with a few users and 1G networks, 4-8 GB RAM is OK. In a multiuser environment or with large amounts of random data (VMs, larger databases), use 16-32 GB RAM. If you have a faster network (10/40G), add more RAM and use 32-64 GB or more.
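A quick worked example of that sizing rule (my own numbers, assuming the OmniOS defaults mentioned above):

Code:
32 GB RAM total
 -  2 GB     OS
 -  3.2 GB   RAM-based write cache (10% of 32 GB, below the 4 GB cap)
 = ~27 GB    remaining for the read cache (ARC)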

2. Even a pure HD pool can be nearly as fast as an NVMe pool

In my tests I used a pool of 4 x HGST HE8 disks with a combined sequential read/write performance of more than 1000 MB/s. As long as you can process your workload mainly from RAM, it is tremendously fast. The huge fallback when using sync write can be nearly eliminated by a fast Optane Slog like the 900P. Such a combination can be nearly as fast as a pure SSD pool at a fraction of the cost and with higher capacity. Even an SMB filer with secure write behaviour (sync-write=always) is now possible, as 4 x HGST HE8 disks in a Raid-0 plus an Optane 900P offered 1.6 GB/s sync write performance on Solaris and 380 MB/s on OmniOS.

3. Critical workloads (many users, much random data)

In such a situation, use SSD-only pools.
A dedicated Slog is not needed, but prefer SSDs with powerloss protection when you want sync write.

4. Ultra-critical or performance-sensitive workloads

Intel Optane is unbeaten!
Compared to a fast NVMe it reduces latency from 30 us down to 10 us and increases IOPS from 80k to 500k. On most workloads you will not see much difference, as most workloads are more sequential or the RAM takes the load, but some are different. If you really need small random read/write performance, there is no alternative. Additionally, Optane is organized more like RAM: no trim, garbage collection or erase-before-write is needed, as on flash. Even a concurrent read/write workload does not affect performance the way it does on flash.

5. Oracle Solaris with native ZFS v.37 beats OpenZFS

OmniOS, a free Solaris fork, is known to be one of the fastest OpenZFS systems, but native ZFS v.37 on Solaris plays in a different league when you check pool performance as well as services like SMB. What I have found is that Solaris starts writes very fast and then stalls for a short time. OmniOS with its write throttling seems not as fast regarding overall write performance but can guarantee a lower latency.
RAM efficiency regarding caching seems to be the major advantage of Solaris, and even with low RAM for caching, sync write performance, even on hard disks, is top.
 
My OpenIndiana has been running well for a few months and all of a sudden it's very sluggish. After restarting the VM, this screen shows up:
(screenshot attached)


Gea, do you know what's going on?
 
Any further information in system logs (napp-it menu System > Logs)?
 

I can't even get into napp-it, it is very slow, so I decided to reinstall OI & napp-it, and the problem persisted. So I pulled all of my M1015 cards out and checked them one by one, and apparently one of the cards is bad. The M1015 cards have been very unreliable these days; this is my second bad one in just a couple of months.
 
Sorry if this was covered and I missed it. I'm running OmniOS 151022, and after months and months of 'pkg update' always reporting nothing to do, it seemed strange to borderline suspicious, and in digging around it looks like OmniOS is basically EOL? Is that true? If so, that would probably explain why I never see any updates. At this point, is it recommended to move to OI going forward?

EDIT: Never mind, I stumbled on the community edition and I need to set the new publisher to get updates now. I'll be sure to toss them some donations for keeping it going too!
 
Hi all... new to this forum, but looking for assistance diagnosing a problem I'm having with my napp-it setup. I'm running an HP Microserver Gen 10 with OmniOS & napp-it, with the goal of file sharing and eventual NFS storage for my other ESX setup. With the datastore up, whether I use SMB or NFS, I get slow transfers downloading files from my NAS. Transfers burst at first but end up falling back to around 30 MB/s. Uploads, on the other hand, get close to line speed at approx. 110 MB/s. I've tried different memory, different storage drives (SSD, SATA), even tried an LSI SAS card to bypass the HP default storage connection, but still can't figure out why downloads from the storage shares are so slow.

I've run benchmarking tests and I get acceptable disk speeds, well in line with the SSD or SATA disks being used. The Microserver has 2 network ports and I've tried both.

(And backstory, I've even tried running FreeNAS with this setup and had the exact same results).

Stumped. Any thoughts or advice?


An example of my symptoms:
(screenshot attached)
 
Probably a Windows driver/setting problem or a cabling problem.

I would try:

- go to the Windows NIC settings and check for an option to disable interrupt throttling.
This will increase NIC performance at a slightly higher CPU load

- check for a newer NIC driver for Windows, e.g. from the NIC manufacturer, or optionally try another NIC (prefer Intel);
Realtek NICs in particular are known to give performance problems, they are not good, only cheap

- replace the cabling, optionally use a crossover cable to connect NAS and Windows directly.
 
I actually have the same results if I use a physical host running Windows, a virtual one, a MacBook, and even an Ubuntu virtual host. All the clients have the same transfer experience grabbing a file off the NAS. I also tried swapping cables to the NAS.

 
If you can rule out a client like Windows or a cable, then you must look at the RAM or NIC in the server.

How much RAM for OmniOS?
Can you add a local performance view from the napp-it menu Pools > Benchmark (current napp-it) to give an impression of random/sequential performance?
Can you try another NIC, preferably Intel?

Another option is comparing Oracle Solaris 11, the fastest ZFS server. Oracle also uses a different NIC driver.
See http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf
where I compared SMB performance of OmniOS vs Solaris
 
Here are the specs of all the options I've tried. I feel like I've exhausted most of my testing options, hence why I'm grasping at straws here. Will post benchmark info shortly...

NAS Server
  • RAM - tried 8GB, 16, and 32
  • OS - tried Omni (napp-it), FreeNAS
  • NICs - tried onboard 1GB NICs, and 10GB NIC (when FreeNAS was installed)
  • HDs - tried 4 SSD's in RAIDZ pool, 2 SSD's in mirror pool, SSD standalone, SATA HD standalone
Test Hosts
  • OS - tried Win10 VM, Ubuntu VM, Win10 laptop, Macbook Pro laptop

 
Benchmark:

Benchmark filesystem: /new/_Pool_Benchmark
begin test 3 ..randomwrite.f ..
begin test 3sync ..randomwrite.f ..
begin test 4 ..singlestreamwrite.f ..
begin test 4sync ..singlestreamwrite.f ..


set sync=disabled
begin test 7 randomread.f ..
begin test 8 randomrw.f ..
begin test 9 singlestreamread.f ..
pool: new


        NAME      STATE     READ WRITE CKSUM
        new       ONLINE       0     0     0
          c1t2d0  ONLINE       0     0     0


hostname omniosce Memory size: 32216 Megabytes
pool new (recsize=128k, compr=off, readcache=all)
slog -
remark


Fb3 randomwrite.f          sync=always        sync=disabled
                           383 ops            16201 ops
                           76.591 ops/s       3239.142 ops/s
                           18966us cpu/op     576us cpu/op
                           9.7ms latency      0.2ms latency
                           0.4 MB/s           25.2 MB/s

Fb4 singlestreamwrite.f    sync=always        sync=disabled
                           25 ops             2680 ops
                           2.401 ops/s        501.998 ops/s
                           829972us cpu/op    3871us cpu/op
                           414.7ms latency    2.0ms latency
                           2.3 MB/s           501.8 MB/s
________________________________________________________________________________________

read fb 7-9 + dd (opt)     randomread.f       randomrw.f         singlestreamr
pri/sec cache=all          28.0 MB/s          44.7 MB/s          464.7 MB/s
________________________________________________________________________________________



And a DD benchmark as well:

Memory size: 32216 Megabytes

write 12.8 GB via dd, please wait...
time dd if=/dev/zero of=/new/dd.tst bs=2048000 count=6250

6250+0 records in
6250+0 records out
12800000000 bytes transferred in 80.700336 secs (158611484 bytes/sec)

real 1:26.9
user 0.0
sys 18.0

12.8 GB in 86.9s = 147.30 MB/s Write

wait 40 s
read 12.8 GB via dd, please wait...
time dd if=/new/dd.tst of=/dev/null bs=2048000

6250+0 records in
6250+0 records out
12800000000 bytes transferred in 15.042613 secs (850915992 bytes/sec)

real 15.0
user 0.0
sys 11.4

12.8 GB in 15s = 853.33 MB/s Read


 
Hi, I run napp-it all-in-one on 18.01free, ESXi 5.5, and the main server VM is SBS2011. I can no longer see my SMB folders on my domain-connected Windows 10 PCs.
(screenshot attached)

I used to have these set up with ACLs just fine, and I only noticed recently that they've disappeared.
Do I need to reset something, perhaps since upgrading napp-it to the current version?
 

Your read values (and random write values) are poor, especially the low random read value of 28 MB/s.

Compare http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf
at page 7 (single HGST HE8 disk), with a random read value of 267 MB/s:

read fb 7-9 + dd (opt)     randomread.f       randomrw.f         singlestreamr
pri/sec cache=all          267.8 MB/s         252.0 MB/s         2.7 GB/s

The bad performance mostly affects random reads, as random writes
go to the RAM-based write cache first and then sequentially to disk.

The singlestreamread value is mostly RAM/cache related (much higher than a single disk can deliver).
As you have tried several disks/SSDs and an HBA as well, with the same bad results, I would assume a hardware problem, e.g. with the RAM or BIOS settings (set them to defaults).
 


Have you updated from a quite old OmniOS?
In that case, you must set netbios_enable to true in the menu Services > SMB > Properties.

The default in current OmniOS is false (do not publish shares), which means you can only connect when you enter the share name directly, like
\\serverip\sharename
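The same setting can also be checked and applied from the console with the illumos sharectl tool (a sketch):

Code:
# show current SMB server properties
sharectl get smb

# publish shares via NetBIOS again
sharectl set -p netbios_enable=true smb

# restart the kernel SMB server to pick up the change
svcadm restart network/smb/server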
 

I checked and netbios_enable = true. The version was previously from 2016;
it is now running: SunOS xxx 5.11 omnios-r151018-ae3141d i86pc i386 i86pc / OmniOS v11 r151018

I remember having a 2-day Pro evaluation in 2016 to set it up, and that worked for a long while.
 
I also have to mention that in order to get OmniOS to boot properly, I have to set ACPI to "disabled" or the server just hangs on boot.

 
IDE mode is bad.
AHCI is the correct mode, and AHCI should work perfectly.


Is there a BIOS update available?
Recheck the BIOS settings.
Compare with an HBA for the boot disk/data disks.
 
ACPI, not AHCI. It's one of the boot options from the Omni boot menu.

I did update the BIOS today, but I still have issues booting with ACPI enabled. The only way I can get OmniOS to boot is to turn that off. I noticed similar issues with FreeBSD (FreeNAS). I can't be sure, but it's almost like these OSes aren't playing nice with the PCIe stuff on my Microserver Gen10.

 
Ah, OK.
Unix (BSD, Solaris) is built for 24/7 usage. Energy-saving mechanisms, which are mostly a home concern, are not as much of a priority.
Just disable it. That leaves the question of the very bad random read/write values that limit performance.

As you have tried different systems, disks and HBAs, it must be some sort of hardware problem.
 
Compatibility of Solaris 11.4 Public Beta with napp-it

Currently napp-it is not working due to problems with the Perl module Expect (IO::Tty not compiling).
Compiling Expect should create a Tty.so file in /root/.cpan that is needed for napp-it. If someone has success, please report.

Problem: Expect is compiled via
perl -MCPAN -e shell
install Expect
exit

-> this gives an error with IO::Tty

Info about the beta repository:
Solaris 11.4 Beta is pre-configured for the default stable repository (which does not work with the Beta) and comes without a compiler installed.
You must switch the repository to

PUBLISHER TYPE STATUS P LOCATION
solaris origin online F https://pkg.oracle.com/solaris/beta/

For access to this repository, you must register at https://pkg-register.oracle.com/register/repos/
where you can then download a certificate and a key, and where you MUST accept the license to get access.
Copy the cert and key, e.g. to /root, then wait some time until you get access. This is what I did:

1. remove old repository
pkg unset-publisher solaris

2. add beta repository
pkg set-publisher -c /root/pkg.oracle.com.certificate.pem -k /root/pkg.oracle.com.key.pem -g https://pkg.oracle.com/solaris/beta solaris

3. add a compiler, e.g. pkg install gcc-5,
and try to compile Expect

4. see https://community.oracle.com/thread/4117614
 
Update

napp-it is running on Solaris 11.4b as of the current release (Feb 02); not all functions are tested.
If you want the napp-it wget installer to compile e.g. smartmontools 6.6, you should set the beta repository prior to installing napp-it and install gcc (pkg install gcc-5).

You need to set up the beta repository. If you defined it only after a napp-it setup,
install the storage services manually:
pkg install --accept --deny-new-be storage/storage-server

Solaris 11.4 manuals
https://docs.oracle.com/cd/E37838_01/
 
ZFS encryption as a ZFS property
with a key per filesystem is a feature of Oracle Solaris and an upcoming feature of Open-ZFS.

In light of the upcoming EU ruleset GDPR (DSGVO), which even demands state-of-the-art data security at a technical level, I am concentrating on making ZFS encryption (lock/unlock) accessible to end users without admin access to the storage management GUI (User-Lock/Unlock), and on allowing locking/unlocking based on a timetable, e.g. Auto-Unlock on working days in the morning and Auto-Lock in the evening.
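For context, creating such a filesystem on Oracle Solaris with a file-based key might look like this (a sketch; the pool/filesystem names and the key path are hypothetical):

Code:
# generate a raw 256-bit key file and create an encrypted filesystem that uses it
pktool genkey keystore=file outkey=/root/keys/secure.key keytype=aes keylen=256
zfs create -o encryption=aes-256-ccm -o keysource=raw,file:///root/keys/secure.key tank/secure

# lock (unload key) and unlock (load key) later
zfs key -u tank/secure
zfs key -l tank/secure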

User-Lock/Unlock via SMB and watched folders (working in current napp-it dev)
User-Lock is a new napp-it Pro feature that allows a user to lock/unlock a filesystem without access
to the storage administration software. For User-Lock/Unlock, you must

- create an encrypted filesystem
- use a file or https based key
- enable User-Lock in ZFS Filesystems >> Encryption

- start the autolock service in menu Services

The service creates a ZFS filesystem "pool"/UserEncryption with a subfolder per encrypted filesystem.
Enable SMB sharing for this filesystem with the wanted ACL settings for the share and its subfolders
per encrypted filesystem (for userlock-enabled filesystems).

Content of these subfolders:
Folders: =lock and =unlock
Control file: =switcher.move
Status file: service-state.xx_yy (xx = service state, yy = lock state), e.g. service-state.online_locked

To unlock a filesystem: move the file =switcher.move to the folder =unlock
To lock a filesystem: move the file =switcher.move to the folder =lock
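From a client's point of view, the switch is just a file move inside the SMB share, for example (paths are illustrative, following the folder layout described above; the "secure" subfolder name is hypothetical):

Code:
# unlock: move the control file into the =unlock folder
mv /pool/UserEncryption/secure/=switcher.move /pool/UserEncryption/secure/=unlock/

# lock: move the control file into the =lock folder
mv /pool/UserEncryption/secure/=switcher.move /pool/UserEncryption/secure/=lock/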

Auto-Lock (todo)
is a Pro feature to automatically lock/unlock a filesystem based on a timetable
 