Napp-it issues with NFS

luckylinux

Hello,

I hope I can get some help with napp-it since I really like it.

For SMB setup it works flawlessly.
However, for NFS I'm starting to lose my hair. Normally on GNU/Linux (or FreeBSD) you set up a new group (groupadd with its GID) and a new user (useradd with its UID) on the server. Then on the clients you do basically the same, taking care to set up the user/group with the same UID/GID.
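
For example, with GNU/Linux on both sides this matching would look roughly like the following sketch (the group/user names and IDs are only placeholders):

# on the server: create a group with a fixed GID and a user with a fixed UID
groupadd -g 1001 media
useradd -u 1001 -g media alice
# on each client: repeat with the same names and the same GID/UID
groupadd -g 1001 media
useradd -u 1001 -g media alice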

The problem with napp-it is, AFAIK:
1) I cannot create any Unix (user) group, let alone force its GID
2) I cannot even edit an existing user's UID

I get the impression the user management part of napp-it mainly (or only) focuses on SMB. For NFS it doesn't really provide the tools needed. Am I supposed to do this through the shell on OmniOS? I am running napp-it 18.01 Pro by the way, not sure if this is a regression.

Thank you for your help :)
 
Yes, napp-it is focused on SMB and the needs of the Solarish kernel-based SMB server. Unlike SAMBA it uses Windows SIDs and Windows SMB groups, not Unix groups. For users you can assign a UID at creation time. If you need Unix groups for other services, see https://docs.oracle.com/cd/E23824_01/html/821-1451/gkhqx.html
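
For reference, the Unix group part can be done from the OmniOS shell roughly like this (group/user names, GID and UID are placeholders; see the Oracle link above for the full procedure):

# create a Unix group with a fixed GID, then a user with a fixed UID in that group
groupadd -g 1001 media
useradd -u 1001 -g media -m -d /export/home/alice alice
passwd alice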

But be careful with NFSv3. As there is no authorisation or authentication, all access restrictions rely on goodwill alone. Additionally, different servers and clients behave differently: some use nobody when creating a file, others the UID of the client. Restrictions are more or less only possible based on client IP or firewall settings.

As the kernel-based SMB server always uses Windows/NTFS-compatible NFSv4 ACLs, an NFS + SMB share is basically incompatible regarding permissions. The best compatibility comes from an everyone@=modify permission set (every NFS client will work) combined with SMB restrictions based on share-level permissions.
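
As a rough sketch, such an everyone@=modify ACL can be set on the shared filesystem with the Solaris chmod ACL syntax (the path /tank/data is only a placeholder):

# replace the ACL with an inheritable everyone@:modify_set entry
chmod A=everyone@:modify_set:file_inherit/dir_inherit:allow /tank/data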
 
Thank you for your reply _Gea.

But be careful with NFSv3. As there is no authorisation or authentication, all access restrictions rely on goodwill alone. Additionally, different servers and clients behave differently: some use nobody when creating a file, others the UID of the client. Restrictions are more or less only possible based on client IP or firewall settings.
Would you mind explaining this in more detail, please? In your napp-it all-in-one tutorials you deliberately suggest using OmniOS + napp-it to host ESXi virtual machines over NFS. I agree that there is no SMB support in ESXi, but is this unsafe then?

As the kernel-based SMB server always uses Windows/NTFS-compatible NFSv4 ACLs, an NFS + SMB share is basically incompatible regarding permissions. The best compatibility comes from an everyone@=modify permission set (every NFS client will work) combined with SMB restrictions based on share-level permissions.
What would be the problem with using NFSv4 instead of NFSv3? At the moment my NAS (running Gentoo Linux on ESXi with PCIe passthrough) serves both NFSv3 and NFSv4. When clients mount using NFSv3 they get issues like stale file handles and so on. With NFSv4 there are no problems, plus ACLs are supported.

I mainly use GNU/Linux machines (Gentoo / Debian), and automounting at boot over SMB doesn't really work since it asks for a password. I really like that NFS automount just works. Another option might be FUSE + sshfs using private keys. Or what would you suggest instead? I quickly read about NFS security yesterday and it seems you could set up NFSv4 + IPsec, but basically nobody uses it ...
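
For reference, the NFS automount I mean is just a plain fstab entry like this (server name and paths are placeholders):

# /etc/fstab on a GNU/Linux client: mount the export at boot via NFSv4
nas.example.local:/tank/data  /mnt/data  nfs4  _netdev,rw  0 0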

Isn't there an option like NFSv4 + private key (like SSH authentication)? That would provide both automatic mounting of shares and security.

EDIT: At the moment my setup is quite (very) unsafe. I basically allow all 192.168.0.0/20 clients to mount, and the permissions are not that good. This means that if one PC gets infected, it can easily spread the infection and spell disaster across the whole network ...
 
When Sun invented NFS they assumed a secure network, with performance and simplicity as the goal. So the best use case is a secure network for your NFS server and ESXi with its vmkernel interface, e.g. via a separate internal vSwitch or a separate physical NIC/VLAN.

If you need access to the storage server from an insecure LAN, the next step is to use a firewall to restrict NFS access to a NIC. With lower security needs you can limit access based on client IP in the share settings. As anyone can try different IPs and UIDs, this is not really safe.
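
A sketch of such an IP-restricted share on OmniOS (the filesystem tank/vmstore and the subnet are placeholders):

# allow read/write and root access only from the storage subnet
zfs set sharenfs='rw=@192.168.10.0/24,root=@192.168.10.0/24' tank/vmstore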

You can then use NFSv4 with authentication, but this complicates the setup. With current ESXi you need NFS 4.1 (OmniOS has 4.0; only a genuine current Solaris has 4.1).
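
On the client side an authenticated NFSv4 mount is typically Kerberos-based, roughly like this (server and paths are placeholders, and a working Kerberos/KDC setup is assumed):

# GNU/Linux client: NFSv4 mount with Kerberos authentication
mount -t nfs4 -o sec=krb5 nas.example.local:/tank/data /mnt/data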
 
If you need access to the storage server from an insecure LAN, the next step is to use a firewall to restrict NFS access to a NIC. With lower security needs you can limit access based on client IP in the share settings. As anyone can try different IPs and UIDs, this is not really safe.
How would a firewall be any different? You would still apply rules based on IPs and ports, right?

I could also implement 802.1X authentication, but that poses a problem with the few devices that don't support it. Then I'd need a separate (V)LAN, firewall rules, ... I guess it gets complicated really quickly.

You can then use NFSv4 with authentication, but this complicates the setup. With current ESXi you need NFS 4.1 (OmniOS has 4.0; only a genuine current Solaris has 4.1).
Maybe I'll give Proxmox VE a try ;). Is NFSv4 + authentication very difficult to set up?

I'm not really interested in ESXi authentication for NFS. I'm only concerned about other clients on the (W)LAN being able to access shares they shouldn't ...
 
The OmniOS firewall can be configured per link, so it can block a NIC, not only an IP range.
Another secure option would be iSCSI, which you can bind to a NIC/IP.

https://docs.oracle.com/cd/E23824_01/html/821-1459/fmvcd.html
When you say "can block a NIC" I assume you mean by its MAC address, right? That's not considered very safe on a (W)LAN, as anybody can easily spoof a MAC address. It provides an additional layer of protection, sure, but not a very tough one.

Isn't NFS via SSH private key possible? That would be super safe.
 
Firewall rules that block a link are based on the OmniOS network links, not on a server or client MAC address.
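
With IP Filter on OmniOS such a per-link rule set looks roughly like this in /etc/ipf/ipf.conf (the link names e1000g0/e1000g1 and the subnet are placeholders):

# block NFS (port 2049) on the untrusted link, allow it from the storage subnet on the storage link
block in quick on e1000g1 proto tcp from any to any port = 2049
block in quick on e1000g1 proto udp from any to any port = 2049
pass in quick on e1000g0 proto tcp from 192.168.10.0/24 to any port = 2049
# activate with: svcadm enable ipfilter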

ESXi requires NFS 4.1. If you want to remain in the Solaris world you would need Oracle Solaris;
Illumos-based systems are currently on NFS 4.0.

Besides that, I have never tried passwordless NFS via SSH keys.


Besides that:
If security is a real concern, you must also care about ESXi and its vmkernel adapter, which would be an even bigger security concern. In such a case I would place ESXi with its management interface, NFS and the NFS storage in a secure network (e.g. a SAN) and connect them with a dedicated vSwitch. For the NFS storage, use a small OmniOS instance to get all the ZFS datastore benefits.

For general filer use, you can add a second OmniOS instance with SMB only. Give this VM two NICs, one for the LAN and one for the secure network to allow backups/replications.

This will separate NFS and ESXi management completely from your LAN. A management host must be in the SAN network, or it needs two interfaces.

BTW:
I am just playing with the new ESXi 6.7 (first impression: very good and very fast). With an LSI HBA you can add a physical disk directly to a VM (VM > add disk > add raw disk), which makes dividing disks between VMs very easy, unlike the former raw disk mappings.
 
BTW:
I am just playing with the new ESXi 6.7 (first impression: very good and very fast). With an LSI HBA you can add a physical disk directly to a VM (VM > add disk > add raw disk), which makes dividing disks between VMs very easy, unlike the former raw disk mappings.
I had the impression until now that in all your tutorials you recommended PCIe passthrough as the only reliable solution for ZFS. Raw disk mappings were highly discouraged since they could lead to all sorts of problems (data corruption, access timing issues, ...). What changed?
 
PCIe pass-through is a supported ESXi option.
While it does not work with every card, the LSI HBAs are rock solid.

Physical RDM of onboard SATA disks is an unsupported ESXi option, set up via the console.
It may work or not, with no guarantee, and it is not easy to set up. Not a solution that VMware cares about.

Pass-through of single disks (prefer SAS, but SATA works) over a SAS HBA via ESXi is now easy to set up,
is a supported option, and has a good chance of being fast and reliable. SMART works, and the disks are portable to a barebone setup.

You should only avoid SAS controllers with cache, as this limits ZFS's control over the disks.
 