OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

That's funny; when I built my server I bought what everybody was recommending, as it was supposed to work great.
 
EDIT2: What the fuck ever. I'm done with this ZFS bullshit. There isn't a damn thing wrong with the drives; it's a garbage RAID system. Screw the 30TB of movies I had stored on it.

Hardware and software can fail, and ZFS is not the holy grail. Over the last 20 years I have lost data with every sort of RAID - software RAID as well as very expensive hardware RAID. In some cases even ZFS would not have helped, such as an overvoltage event caused by a damaged PSU. For such rare cases, you do backups.

Believe me, there is no other freely available data storage technology that is as robust as ZFS. If you have problems, they are mostly caused by hardware, not by ZFS.

Your pool remains offline until enough disks come back. At least this is a positive aspect: with a hardware RAID, sporadically missing disks are a real problem. So if your disks are OK, your pool can be imported on any working hardware. Only if more disks fail than the selected redundancy allows (1 to 3 disks on Z1 to Z3) is your pool lost, like with any striped RAID.
 

Ugh, OK, managed to get the one pool back online that I was having issues with; finally got it to import the pool. However, the other pool that had been functioning is now showing as suspended XD. Time to play around with the IBM more and see if I can't find a combo that will get that pool back online again.
 
It is good practice to export a pool before importing it elsewhere, but if you forgot, you can import without a prior export. All pool properties are kept. You may need to recreate users and reassign permissions.
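As a minimal sketch of the commands involved (the pool name 'tank' is just an example):

Code:
zpool export tank   # cleanly export before moving the disks (recommended)
zpool import        # scan attached disks and list importable pools
zpool import tank   # import by name (or by the numeric pool ID shown)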

Thanks again.

I got it up and running with OmniOS and napp-it 13b.

So far it's running as before, but I get some strange speed drops when reading (copying files) from napp-it. I can copy the same file to my desktop and my laptop, both connected at 1 Gbit, and the laptop gets 600 KB/s while the desktop gets 60 MB/s.

Before, I was using the E1000 network adapter; now both are enabled and I think it uses VMXNET 3. Is this the culprit, maybe?
 
E1000 is quite buggy with ESXi 5.5. With my preconfigured appliance I modified TCP segmentation offload and LSO settings to improve stability, but I suggest using vmxnet3 (which is usually faster, but should be tested under heavy load before final use).

As of a few days ago we have ESXi 5.5U1 with modified vnic drivers and VMware tools. I have not yet found the time to check, but I hope for improved stability and performance.
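If you are unsure which vnic the guest actually uses, a quick check from the OmniOS side with the standard illumos tools (a sketch, nothing napp-it specific):

Code:
dladm show-link    # lists links, e.g. e1000g0 (E1000) vs. vmxnet3s0 (vmxnet3)
ipadm show-addr    # shows which interface currently carries the IP address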
 

I tried to disconnect the E1000 card, but now only one PC can connect to the share; the other just won't, although it can see it in network discovery. I tried to flush DNS and reboot the client. In the statistics on the napp-it web GUI it still shows the E1000 card, and I can't ping the IP or DNS name from the problem client.
 
I started over with a new clean VM without the E1000 connected from the beginning; now it's running on all the clients.

I can't seem to remember how to set up users other than root to access the shares, and how to edit the rights. Is there a how-to somewhere?
 
Use the napp-it menu user (click on ++ add local user) to add another user.

Then assign an allow ACL permission to the shared filesystem, like modify for everyone@ or for this user. You can modify permissions from Windows as root, via CLI and /usr/bin/chmod, or within napp-it with the ACL extension (menu zfs folder, click on the ACL entry under folder-ACL).
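As a CLI sketch (the user 'paul' and the path /tank/data are made-up examples), Solaris chmod can add an inheritable allow ACE using one of the predefined permission sets:

Code:
# grant 'paul' modify rights, inherited by new files (f) and directories (d)
/usr/bin/chmod A+user:paul:modify_set:fd:allow /tank/data
/usr/bin/ls -V /tank/data    # verify the resulting ACL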
 
Hello people.
I recently installed my first true NAS box at home, an HP MicroServer N40L
with 16GB of RAM, 1x 250GB disk for the OS and 4x 2TB enterprise SATA disks provided by HP in a RAIDZ.

I'm using the newest/updated OmniOS v11 r151008 along with napp-it and other services.
What I would like to know is: have there been any issues/problems, and does anyone
have performance tuning tips for networking on the BCM5723 controller provided
by the HP MicroServer? It uses the bge module/driver.

Sometimes I find the speeds to the box rock up and down. I haven't configured
a gigabit network yet; that's the plan this weekend. I have full duplex and flow control enabled.
For example, I noticed that after building my small ipf firewall ruleset and enabling the firewall,
the speed went down, especially with CIFS and NFS (I didn't test AFP).
I'm usually streaming content via XBMC on a TV PC running Windows 7.

So, any performance tips out there?

Thanks in advance.

Best regards,

Svavar O - Reykjavik - Iceland
 
Running the OmniOS ESXi appliance, currently on 9e1_nightly. When I ran an About/Update a couple of weeks ago, it failed and broke napp-it. OmniOS, ZFS and the shares all still worked. I ran a wget install, which restored napp-it, so everything now works again.

Are there any settings that the wget install would have overwritten? What settings should I check?

Thank you. Excellent GUI, etc.
 

About-Update updates only the napp-it GUI and napp-it functions like autojob or replication, while the wget updater configures the whole OS for napp-it and installs/updates the needed NAS tools.
In both cases, settings like pool parameters, passwords, jobs, keys or other programs are not affected.
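For reference, the wget updater mentioned here is the napp-it online installer that also appears later in this thread; run as root it is:

Code:
wget -O - www.napp-it.org/nappit | perl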
 
I'm moving my server to another case - same motherboard, same HBA controller.
I did a pool export before moving. Because of the new case I changed the hard disk order, so they are not all on the same port of the HBA as before. When I power on and try to import the pool again, I have a degraded pool: it cannot find all the disks! I thought the order didn't matter and ZFS would find the disks. Luckily I wrote down which port each disk was on before, so once I restored the order it worked again. The order of the disks had changed because of a disk failure in the past, so I'm using ports 0,1,2 and then 5,6,7. I want to change it back to 0,1,2,3,4,5.

How can I change the disk order and have a working system again? And what does the -f import option do? I didn't try it because I was a bit scared.
 
Hey,

Weird issue here: I run a scrub every 2 weeks. Recently, the scrub always causes a low number of hardware errors (fewer than 10) and a slightly larger number of transport errors. These numbers have been the same every time.

The last few scrubs have reported repairing 320K; this week's repaired 128K.

I'm writing probably a couple of hundred gigs a week, and these writes DO NOT cause any of these errors.

Code:
user@server01:~$ iostat -exMn
                  extended device statistics                 ---- errors ---
  r/s  w/s Mr/s Mw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  0.1  0.2  0.0  0.0  0.0  0.0    0.0   18.1   0   0   0   0   0   0 c4t0d0
  7.6  0.5  0.5  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c4t1d0
  7.6  0.5  0.5  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c4t2d0
  7.6  0.5  0.5  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c4t3d0
  7.3  0.5  0.5  0.0  0.0  0.0    0.0    1.8   0   0   0   0   0   0 c4t4d0
  6.7  0.5  0.5  0.0  0.0  0.0    0.0    4.7   0   1   0   7  29  36 c4t5d0
  7.5  0.5  0.5  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c4t6d0
  0.0  0.0  0.0  0.0  0.0  0.0    0.0    2.2   0   0   0   0   0   0 c4t7d0
  0.1  0.2  0.0  0.0  0.0  0.0    0.0   18.3   0   0   0   0   0   0 c3t0d0
  7.7  0.5  0.5  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c3t1d0
  7.6  0.5  0.5  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c3t2d0
  7.6  0.5  0.5  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c3t3d0
  7.5  0.5  0.5  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c3t4d0
  7.5  0.5  0.5  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c3t5d0
  7.5  0.5  0.5  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c3t6d0
  0.0  0.0  0.0  0.0  0.0  0.0    0.0    2.1   0   0   0   0   0   0 c3t7d0

Code:
user@server01:~$ zpool status dpool
  pool: dpool
 state: ONLINE
  scan: scrub repaired 128K in 3h42m with 0 errors on Thu Apr 3 02:43:02 2014
config:

        NAME          STATE     READ WRITE CKSUM
        dpool         ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            c3t1d0    ONLINE       0     0     0
            c4t1d0    ONLINE       0     0     0
            c3t2d0    ONLINE       0     0     0
            c4t2d0    ONLINE       0     0     0
            c3t3d0    ONLINE       0     0     0
            c4t3d0    ONLINE       0     0     0
          raidz2-1    ONLINE       0     0     0
            c3t4d0    ONLINE       0     0     0
            c4t4d0    ONLINE       0     0     0
            c3t5d0    ONLINE       0     0     0
            c4t5d0    ONLINE       0     0     0
            c3t6d0    ONLINE       0     0     0
            c4t6d0    ONLINE       0     0     0
        spares
          c3t7d0      AVAIL
          c4t7d0      AVAIL
 
I'm moving my server to another case - same motherboard, same HBA controller. ... How can I change the disk order and have a working system again? And what does the -f import option do? I didn't try it because I was a bit scared.

Export + power-off + assign disks to other ports + power-on + pool import should work. If you have problems, you can try renaming /etc/zfs/zpool.cache and retry.

With modern LSI SAS2 controllers in IT mode, the WWN is used as disk-id. These numbers are independent of the controller port and unique per disk. Whenever possible they should be preferred.

The import -f means forced. It is needed if you want to import a pool that was not properly exported.
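A sketch of the whole sequence (the pool name 'tank' is an example; renaming the cache file just forces a fresh device scan on the next import):

Code:
zpool export tank
# ...power off, re-cable the disks, power on...
zpool import        # rescan the disks and list pools found
zpool import tank
# only if the import fails:
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak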
 
Weird issue here: I run a scrub every 2 weeks. Recently, the scrub always causes a low number of hardware errors and a slightly larger number of transport errors. ... I'm writing probably a couple of hundred gigs a week, and these writes DO NOT cause any of these errors.

Checksum errors are real problems, as they indicate wrong data on disk detected by checksum tests during reads. If the cause was a silent error, the data got repaired and that's it.

If this happens more often, you must check your hardware.
Iostat errors and messages are helpful. They indicate problems at the driver level - below the real errors detected by ZFS. If you have disks with significantly higher error, wait or busy values, you should remove/replace them and do a low-level check with a manufacturer's tool (mostly on Windows).
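The per-device counters behind iostat's s/w, h/w and trn columns can also be read with the standard error report, limited to the suspect disk:

Code:
iostat -En c4t5d0   # soft/hard/transport error counts plus vendor, serial, size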
 
Thanks _Gea.

That's the thing... there are no checksum errors, and I don't have any problems with wait times during reads or writes.
 

You had checksum errors - at least in one file.
A scrub verifies all file checksums. If an error is detected, you have a corrupted file that is repaired or reported as unrecoverable (when you have no redundancy).

If this is a single incident - no problem, that is what you use ZFS for.
If this happens regularly, you have a problem.

You can do a low-level check of disk c4t5d0 and its cabling, as this is the only disk with significantly different values in iostat. You can also wait to see if the problem gets worse, until ZFS throws the disk out due to too many errors.
 
Ah right - when you said checksum errors I thought you meant zpool status was reporting checksum errors in the cksum column.

Will keep an eye on it.
 
Tonight I ran an update from r151006 to the latest stable (r151008). After the update ran, I got a message that napp-it was creating a boot environment for 0.9b; I had been running 0.9e-1 before the update. After the machine rebooted, napp-it won't come up. I tried to reinstall but got a Perl error. Here is the relevant text:

Code:
joltman@OmniOS-NAS:/export/home/joltman$ cat /etc/release
  OmniOS v11 r151008
  Copyright 2013 OmniTI Computer Consulting, Inc. All rights reserved.
  Use is subject to license terms.
joltman@OmniOS-NAS:/export/home/joltman$ uname -r
5.11
joltman@OmniOS-NAS:/export/home/joltman$ sudo wget -O - www.napp-it.org/nappit | perl
ld.so.1: perl: fatal: libc.so.1: version 'ILLUMOS_0.6' not found (required by file /usr/perl5/5.16.1/lib/i86pc-solaris-thread-multi-64int/CORE/libperl.so)
ld.so.1: perl: fatal: libc.so.1: open failed: No such file or directory
-sh: 6853: Killed
sudo: error in /etc/sudo.conf, line 0 while loading plugin `sudoers_policy'
sudo: unable to dlopen /usr/lib/sudo/sudoers.so: (null)
sudo: fatal error, unable to load plugins

joltman@OmniOS-NAS:/export/home/joltman$

Any help would be appreciated.
 
The OmniOS update initiates a new BE and activates this new BE;
you must then reboot and log in as root.

If you had installed napp-it, it should work with r151008 as well.
If you want to do a wget now, do it without sudo (you are already root).

If it's not working, go back to the last working BE and retry (a beadm sketch follows below).
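A beadm sketch for that rollback (the BE name is whatever 'beadm list' shows for your last working environment):

Code:
beadm list                # show all boot environments and which one is active
beadm activate omnios-1   # 'omnios-1' is a placeholder for your working BE
init 6                    # reboot into the activated BE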

In general, a complete reinstall of OmniOS + napp-it takes less than 30 min.
This is mostly faster than searching for problems. If you need to, save/restore
the current napp-it settings like jobs etc. from /var/web-gui/_log manually
or via menu System - backup napp-it (saves this folder to your data pool).

If you want to search for the problem, look at /etc/sudo.conf.
Sudo is used by napp-it as well, and this may be the reason for the problem.
 
It's probably been explained before, so I apologize for that :eek:

I'm having problems with ACLs. I have OmniOS with napp-it e1 (Pro), and the system was previously part of a domain where one domain user had full rights. I left the domain and it is now on a normal workgroup LAN. I went to "acl on folder" and added one local user with full permission on the filesystem (top folder) and removed the domain user. But I found out that some folders were inaccessible with my new user and still had that domain user listed.

How can I change all the subfolders so that only the new user is listed? I only need one user with full access for Windows shares and nothing else (do I need root, owner, everybody, etc., which are created by default?).
 
You must reset the ACLs recursively on all files/folders.

You can use:
- the CLI command /usr/bin/chmod

- Windows (most Pro editions; do not set deny permissions from Windows);
connect as root

- the napp-it ACL extension:
menu zfs filesystems -> folder-acl -> reset acl (below the listing)

Then set the ACLs according to your needs.
If you need only one user, only this one needs permissions (root always has access).
 
OK, I have data/bridge/tv (pool, filesystem, one of the top folders on that pool). Where should I be located when performing this, bridge or tv? Which option in reset acl do I use (I have a Pro licence)? After the reset, do I just add my local user (and again, where should I be located)?
There is also an "acl on SMB" extension option in the napp-it menu. Should I also do something there? :)
Thanks.
 
If the problematic folder is the 'tv' folder, then the following commands should do it:

Code:
/usr/bin/chmod -R  A- /data0/bridge/tv
/usr/bin/chmod -R  A0=user:root:rwxpdDaARWcCos:fd-----:allow /data0/bridge/tv
/usr/bin/chmod -R  A1=user:dami:rwxpdDaARWcCos:fd-----:allow /data0/bridge/tv
/usr/bin/chmod -R  A2=owner@:rwxp--aARWcCos:fd-----:allow /data0/bridge/tv
/usr/bin/chmod -R  A3+group@:r-x---a-R-c--s:fd-----:allow /data0/bridge/tv
/usr/bin/chmod -R  A4+everyone@:--------------:fd-----:allow /data0/bridge/tv

If you are doing it in napp-it, go to the bridge/tv folder, select Reset ACL's, select 'root', and select files, folders and recursive.
After that is done, select 'add local user' and add the user 'dami' with full rights, again selecting folder, files and recursive.

You could also first select Modify, afterwards delete the 'everyone' entry and add the 'dami' user with full rights.

That should do it.

"acl on SMB" should always be set to everyone with all rights. This only sets who can see the share. In the case of everyone, everyone can see the share, but when someone tries to browse it, the ACLs on the folders come in and deny access if they don't have it.

Matej
 

When I go to "add local user", I don't have the option to select "folder, files, and recursive". I'm having a similar problem where the ACL is only being applied to the top folder, and not recursively. I can manually add the user to each folder, but I have hundreds, which would be a pain.
 

Solaris ACLs are very similar to Windows ACLs, with one main difference:
on Windows, all deny ACLs are processed before all allow rules.

On Solaris, the order of the rules is relevant: the first matching rule is used.
Therefore you cannot simply add a rule recursively. You must do the following:

Set the desired ruleset on a parent folder and use reset-acl (current folder as source)
to inherit these settings (same as on Windows, where you can set ACLs on a folder and select inherit/reset to folders below under advanced settings in the security options).
 
You had checksum errors - at least in one file. ... You can do a low-level check of disk c4t5d0 and its cabling, as this is the only disk with significantly different values in iostat. You can also wait to see if the problem gets worse, until ZFS throws the disk out due to too many errors.

OK, it's done another (scheduled) scrub. No errors this time (the numbers in iostat are the same), and no repairs listed in zpool status.

Weird!

Code:
user@server:~$ iostat -exMn
                  extended device statistics                 ---- errors ---
  r/s  w/s Mr/s Mw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  0.1  0.2  0.0  0.0  0.0  0.0    0.0   17.4   0   0   0   0   0   0 c4t0d0
  9.0  0.3  0.6  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c4t1d0
  9.0  0.4  0.6  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c4t2d0
  9.0  0.4  0.6  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c4t3d0
  8.7  0.4  0.6  0.0  0.0  0.0    0.0    1.6   0   0   0   0   0   0 c4t4d0
  7.9  0.3  0.6  0.0  0.0  0.0    0.0    4.5   0   0   0   7  29  36 c4t5d0
  8.9  0.4  0.6  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c4t6d0
  0.0  0.0  0.0  0.0  0.0  0.0    0.0    2.2   0   0   0   0   0   0 c4t7d0
  0.1  0.2  0.0  0.0  0.0  0.0    0.0   17.8   0   0   0   0   0   0 c3t0d0
  9.0  0.3  0.6  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c3t1d0
  9.0  0.3  0.6  0.0  0.0  0.0    0.0    1.3   0   0   0   0   0   0 c3t2d0
  8.9  0.4  0.6  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c3t3d0
  8.9  0.3  0.6  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c3t4d0
  8.9  0.4  0.6  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c3t5d0
  8.9  0.4  0.6  0.0  0.0  0.0    0.0    1.4   0   0   0   0   0   0 c3t6d0
  0.0  0.0  0.0  0.0  0.0  0.0    0.0    2.1   0   0   0   0   0   0 c3t7d0


Code:
user@server:~$ zpool status
  pool: dpool
 state: ONLINE
  scan: scrub repaired 0 in 3h24m with 0 errors on Thu Apr 17 02:24:21 2014
config:

        NAME          STATE     READ WRITE CKSUM
        dpool         ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            c3t1d0    ONLINE       0     0     0
            c4t1d0    ONLINE       0     0     0
            c3t2d0    ONLINE       0     0     0
            c4t2d0    ONLINE       0     0     0
            c3t3d0    ONLINE       0     0     0
            c4t3d0    ONLINE       0     0     0
          raidz2-1    ONLINE       0     0     0
            c3t4d0    ONLINE       0     0     0
            c4t4d0    ONLINE       0     0     0
            c3t5d0    ONLINE       0     0     0
            c4t5d0    ONLINE       0     0     0
            c3t6d0    ONLINE       0     0     0
            c4t6d0    ONLINE       0     0     0
        spares
          c3t7d0      AVAIL
          c4t7d0      AVAIL

errors: No known data errors
 
On Solaris, the order of the rules is relevant: the first matching rule is used. Therefore you cannot simply add a rule recursively. ... Set the desired ruleset on a parent folder and use reset-acl (current folder as source) to inherit these settings.

Ahh, I had a fundamental misunderstanding about what I was doing. This worked perfectly. Thank you!
 
On an old NAS I had a setup where family members could read some folders and had full access to others. This was to stop accidental deletions. The structure was something like this:
Code:
ParentShare$
    ChildShare01
    	Folders ...
    ChildShare02
    	Folders ...
Where
  • ParentShare$ (hidden): User01 has full read/write access, all other users have no access
  • ChildShare01: all users have read/write access
  • ChildShare02: User01 has read/write access, all other users have read-only access
In other words, the rights were set at the share level, not at the folder/file level. Is there a way I can do this with ZFS / OI / Omni etc.?

pcd
 
I suppose your old NAS is Windows.
If you move to OmniOS, you have comparable ACLs on shares and ACLs on files/folders, as long as you use the kernel-based Solaris CIFS server and not SAMBA. You can also hide shares by adding a $ to the sharename.

What you cannot do with the kernel-based CIFS server is nested shares where a share is not a ZFS filesystem, since a share is a filesystem property with CIFS. You can use SAMBA, which offers this feature, but SAMBA is not as fast and easy to use as Solaris CIFS and does not support Windows-comparable ACL settings and full Windows SID support.

So yes, if ChildShare01 and 02 are ZFS filesystems below the ParentShare filesystem, you may create a hidden parent share and two filesystems/shares below, where you use share-level ACLs to restrict access as desired (with file/folder ACLs open for everyone).

Your other options:
create a single non-hidden share from the filesystem ParentShare with regular folders ChildShare01 and 02 below. Keep the share-level ACL at the defaults (full access) and restrict access based on file/folder ACLs.

or
create two filesystems/shares with the desired ACL settings on files/folders or at the share level. There is no real need for the hidden parent share.
 
I suppose your old NAS is Windows.
It's a QNAP, which runs some version of Linux.

So yes, if ChildShare01 and 02 are ZFS filesystems below the ParentShare filesystem, you may create a hidden parent share and two filesystems/shares below, using share-level ACLs to restrict access as desired.
That's what I'm trying to do, but I am struggling with setting up the nested share.
I'm happy to have each child as a separate ZFS filesystem, but how do I go about setting up the parent share? As you said, the share is a filesystem property; do I need to set up some other non-ZFS share at the parent level (i.e. mount the child at a lower level), and if so, how? At the moment I have <pool>/<parent>, with the children as plain folders under the parent.

Your other options: create a single non-hidden share ... or create two filesystems/shares ...
These options apply security at the file/folder level, though.

pcd
 
It "should" work when you

- create a filesystem on your pool ex /pool/data
and share it hidden as data$

- create two filesystems below ex pool/data/a and pool/data/b
and share them as a and b

now you can SMB connect to data$ and a and b and when you connect data$
you see the "folders" a and b below and you can open them. You can set share-level ACL on all three shares
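A sketch of those steps on the CLI (sharesmb is the standard share property of the kernel-based CIFS server; names as in the example above):

Code:
zfs create pool/data
zfs set sharesmb=name=data$ pool/data    # hidden parent share
zfs create pool/data/a
zfs set sharesmb=name=a pool/data/a
zfs create pool/data/b
zfs set sharesmb=name=b pool/data/b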

BUT!
Support for nested shares (remember, these are different filesystems, not just shares - they can have completely
different settings like case sensitivity etc.) can cause a lot of problems.
Indeed, nested shares are only supported in the newest OmniOS releases. On older releases
you can only use independent, not nested, shares and cannot traverse to filesystems below via SMB.

With the newest OmniOS I saw problems with share-level ACLs (like working only after
an SMB restart, applying only temporarily per session, or sometimes not working at all), so I would currently
avoid them if possible and restrict access at the file-permission level.
 
I just need advice. I have a ZFS all-in-one with OmniOS running four SSDs in essentially a RAID10. I use NFS to connect to ESXi and store VMs on this share.

Would I be better off buying a RAID card and presenting those disks locally (RAID10) to ESXi for better performance? Or is the difference going to be negligible compared to the performance I have already?
 

If you assign very little RAM to your ZFS vSAN (say 1 GB), then I would expect a local ESXi RAID to be slightly faster, as long as you use a very good RAID card that allows parallel reads from both sides of a mirror, because you have no overhead and less data to read/write (no checksums) - even if you disable sync on ZFS for faster operation (see the sketch after the list below).

If you assign more RAM to your ZFS vSAN to allow a larger read cache, I would expect ZFS to be faster or much faster.
With enough RAM, most reads are then served from RAM instead of disk.

Not to forget the other ZFS features not available on a simple ESXi RAID, like:
- Copy-On-Write filesystem (always consistent)
- checksums
- compression (LZ4)
- NFS or SMB access for fast VM move/clone/backup
- ZFS snaps
- a fast log device (ZIL) for secure sync writes (not needed with fast SSD-only pools,
but it can help with consumer SSDs to reduce small writes on them)
- high-performance replication (keeping two filers in sync)
- general file services
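Regarding the sync remark above, the relevant dataset property can be checked and changed like this (the dataset name pool/vms is a made-up example; disabling sync trades the safety of the last few seconds of writes for speed):

Code:
zfs get sync pool/vms            # 'standard' honors the sync requests from NFS
zfs set sync=disabled pool/vms   # faster NFS writes, less safe on power loss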
 
Hello,

I'm using the arcstat.pl script to monitor ZFS ARC statistics (https://github.com/mharsch/arcstat); it uses data from kstat. It is very interesting that on all of my storage servers, every 5 seconds there is a really large number of "total ARC accesses per second" (read column).

Is there any way to find out what causes these high numbers, or why exactly every 5 seconds?

An example from one of the servers:
Code:
    time    read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
11:56:59       0     0      0     0    0     0    0     0    0    52G   52G
11:57:00     552    60     10    47    8    13  100     1   25    52G   52G
11:57:01     487    26      5    26    5     0    0     0    0    52G   52G
11:57:02     474    41      8    35    7     6   42     0    0    52G   52G
11:57:03     203    21     10    21   10     0    0     0    0    52G   52G
11:57:04   47507   209      0   207    0     2    0   181    0    52G   52G
11:57:05     341    41     12    39   11     2   40     3   14    52G   52G
11:57:06     278    36     12    36   13     0    0     3   37    52G   52G
11:57:07     581   100     17    90   15    10   83     4   33    52G   52G
11:57:08    1107   294     26    22    2   272   97     4   12    52G   52G
11:57:09   49283   407      0   400    1     7    0   197    0    52G   52G
11:57:10     762    32      4    24    3     8    5     0    0    52G   52G
11:57:11     370   149     40     8    3   141  100     1    0    52G   52G
11:57:12     899    15      1    15    1     0    0     1   50    52G   52G
11:57:13    3683   437     11    41    1   396   98     2   66    52G   52G
11:57:14   36644   153      0   151    0     2    0   142    0    52G   52G
11:57:15   11382    62      0    62    0     0    0    40    0    52G   52G
 
Hi, I have a cabinet a little like this old one: http://www.xcase.co.uk/rackmount-se...wap-caddy-single-or-redundant-psu-849-00.html

My HDDs are in one ZFS pool with two raidz2 vdevs built from 2TB drives (12 drives in total).
What I'm wondering about is the backplane (is that the right name? the thing the HDDs sit in, with the SAS cable going from it to the SAS controller).
I didn't think about it when I set it up, but now I'd like to keep my raidz safe: if one of the backplanes dies, can I take the disks out and move them to other slots, or will that destroy the pool?
 
Gea:

How do you get per-disk busy status in monitoring? I would like to monitor how and when my disks are busy when I'm performing tests, or when my system doesn't respond as it should.

Matej
 
I'm using the arcstat.pl script to monitor ZFS ARC statistics (https://github.com/mharsch/arcstat); it uses data from kstat. It is very interesting that on all of my storage servers, every 5 seconds there is a really large number of "total ARC accesses per second".

ZFS collects all small random writes in RAM and flushes them to disk every 5 seconds as one large sequential write to improve write performance. This requires an ARC cache update as well.
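That 5-second rhythm matches the default transaction-group timeout on illumos; as a sketch, the current value can be inspected as root with mdb:

Code:
echo zfs_txg_timeout/D | mdb -k   # prints the txg timeout in seconds (default 5)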
 
My HDDs are in one ZFS pool with two raidz2 vdevs built from 2TB drives. ... if one of the backplanes dies, can I move the disks to other slots, or will that destroy the pool?

ZFS is controller-independent software RAID.
You can move disks between any controllers (SATA, SAS). It does not matter whether you use a backplane or an expander.
 
How do you get per-disk busy status in monitoring? I would like to monitor how and when my disks are busy when I'm performing tests, or when my system doesn't respond as it should.

Try iostat -xtcn
in a second PuTTY session.
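With an interval argument, iostat prints a fresh sample continuously, which is handy for watching the %b (busy) column per disk while a test runs:

Code:
iostat -xtcn 5    # new statistics every 5 seconds; %b = percent busy per device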
 