OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Are there any reasons, apart from the TB limit and the lack of encryption support, against using NexentaStor Community Edition?
Are there any special considerations regarding ESXi?

I find their GUI very resource-heavy and unstable (and it's not just me, either).
 
Which VMXNET3 issue? I'm using vmxnet3 on OI without any issues at all!

And as a side note, OI w/ napp-it can't be beaten. I found it to be a fantastic combo with minimal maintenance.

OK, I will reconsider OI. I am a bit worried about it because development seems to have slowed down lately, and I was concerned about its stability and long-term support compared to an enterprise-supported product such as Solaris 11 or NexentaStor.
 
Well, keep in mind that 'supported' is only true for Solaris 11 if you are willing to shell out for a (pricey) support contract. And the new NexentaStor will be based on illumian, same as OI.
 
Kingston 8GB (2 x 4GB) ECC Unbuffered DDR3 1333
You have 4 DIMM slots that each accept up to 8 GB of unbuffered memory, so how do you want to reach 32 GB with 4 GB DIMMs?

That case is very nice, and the motherboard too (one NIC does not work under ESXi without patching).
General consensus regarding controller cards at the moment: IBM M1015 flashed to IT mode.
Recommended RAID-Z2 vdev size: 6 drives (not 8).
 
Can't speak to the RAM (I use Crucial), but the mobo and CPU are fine. I'd go with an M1015 flashed to IT firmware for the drives. If you are going to be serving storage to ESXi for VMs, I would go with RAID10 over RAID-Z2 (much better read performance...).
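For anyone weighing those two layouts, here is a minimal sketch with six hypothetical disks (c1t0d0 through c1t5d0 are placeholder device names, not taken from this build):
Code:
# "RAID10" style: three striped 2-way mirrors - best random-read IOPS for VM datastores
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

# RAID-Z2: one 6-disk vdev - more usable space, survives any two disk failures
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
On six disks the mirror layout yields three disks of usable space versus four for RAID-Z2, but each mirror vdev can serve reads independently, which is usually what matters for VM workloads.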
 
You have 4 DIMM slots that each accept up to 8 GB of unbuffered memory, so how do you want to reach 32 GB with 4 GB DIMMs?

That case is very nice, and the motherboard too (one NIC does not work under ESXi without patching).
General consensus regarding controller cards at the moment: IBM M1015 flashed to IT mode.
Recommended RAID-Z2 vdev size: 6 drives (not 8).
Fixed the memory link.
Guess I'll have to do some more planning on the vdev size.
 
I am just playing around with it under VMware and it runs like a champ; I really like it.

It does. Go hang out on the nexentastor.org forums. It's fine when it works, but the chance of having something wedge or start eating CPU time is non-trivial.
 
newbie here:
I don't get it, I am doing something wrong, but cannot figure out what...

I finally managed to get all my shares working via SMB, but one doesn't. It's the filesystem called audio, shared via SMB.
I already did the following steps more than once, but... well :confused:
Here is what I tried several times:
Code:
chown -R fileserv:staff audio
chmod -R 775 audio
and then set the permissions to 775 via napp-it in order to set the ACL accordingly.
After applying, it nevertheless keeps displaying the permissions in the PERM row in napp-it as "755+181", and I cannot even delete a file or folder that I just created over SMB.
But I got it working that way on other filesystems/shares.

When listing the folder containing 'file-from-fileserv.jpg', which I just created in a subfolder of audio, the output is:
Code:
drwxrwxr-x+  13 fileserv staff     14 2012-04-11 22:24 .
drwxr-xr-x+ 181 fileserv staff    181 2012-04-11 21:58 ..
(...)
drwxrwxr-x+   2 fileserv staff      2 2007-10-25 17:26 
----------+   1 fileserv staff 247417 2012-04-03 20:38 file-from-fileserv.jpg
drwxrwxr-x+   2 fileserv staff      2 2012-04-11 22:07 Neuer Ordner
And just to be complete: logged in via SMB as user fileserv, I cannot even delete the folder 'dfg dfg dfgd fgdf'. Is that normal?

greetings
mo

##edit
Another strange thing: now I cannot even log in via AFP, which shares just two filesystems - one for Time Machine, one for other stuff.
 
I find their GUI very resource-heavy and unstable (and it's not just me, either).

It does. Go hang out on the nexentastor.org forums. It's fine when it works, but the chance of having something wedge or start eating CPU time is non-trivial.

I've had no trouble with it so far, but I am running 100% hardware from the HSL on bare metal. The GUI has also worked fine for me, with no issues with hogging CPU. Their GUI is basically just Python with an ExtJS-based Ajax front end. It's certainly not perfect though, so I suggest anyone considering it trawl the forums and know what they are getting into. My experience so far has been: run it on the right hardware and problems are limited. It's been a breeze for me so far. YMMV. Really looking forward to the illumian-based version.
 
Actually, I'll backpedal a little bit there. The only NexentaStor problems I've had personally were trivial ones, like getting the slot map picture to show correctly for my SC847 JBOD chassis.
 
I'm wondering: are ZFS pools compatible across multiple OSes to some degree? I would be curious to set up a pool and then benchmark it with OI/Solaris/NexentaStor/FreeBSD.
 
UNSUCCESSFUL happens mostly when (see the command sketch below):
- there is already a computer account for this host in the domain -> DELETE it and retry
- the SMB service is not working -> SMB-share a folder, restart the service or reboot
- wrong user/password
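For reference, a minimal sketch of the commands involved (the domain name and user are examples, not taken from this thread):
Code:
# restart the kernel SMB server if it is wedged
svcadm restart network/smb/server

# join the AD domain with an account that is allowed to create computer objects
smbadm join -u Administrator example.local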


Ugh, for some reason Domain\Administrator doesn't work. I finally noticed the user "NAS" that I had put in AD a long time ago, used it to join, and of course it worked on the first try.
 
newbie here:
I don't get it, I am doing something wrong, but cannot figure out what...

I finally managed to get all my shares working via SMB, but one doesn't. It's the filesystem called audio, shared via SMB.
I already did the following steps more than once, but... well :confused:
Here is what I tried several times:
Code:
chown -R fileserv:staff audio
chmod -R 775 audio
and then set the permissions to 775 via napp-it in order to set the ACL accordingly.
After applying, it nevertheless keeps displaying the permissions in the PERM row in napp-it as "755+181", and I cannot even delete a file or folder that I just created over SMB.
But I got it working that way on other filesystems/shares.

When listing the folder containing 'file-from-fileserv.jpg', which I just created in a subfolder of audio, the output is:
Code:
drwxrwxr-x+  13 fileserv staff     14 2012-04-11 22:24 .
drwxr-xr-x+ 181 fileserv staff    181 2012-04-11 21:58 ..
(...)
drwxrwxr-x+   2 fileserv staff      2 2007-10-25 17:26 
----------+   1 fileserv staff 247417 2012-04-03 20:38 file-from-fileserv.jpg
drwxrwxr-x+   2 fileserv staff      2 2012-04-11 22:07 Neuer Ordner
And just to be complete: logged in via SMB as user fileserv, I cannot even delete the folder 'dfg dfg dfgd fgdf'. Is that normal?

greetings
mo

##edit
Another strange thing: now I cannot even log in via AFP, which shares just two filesystems - one for Time Machine, one for other stuff.

Why would you want "fileserv:staff" to own your folders? It is much easier to have root own all of it, and then use ACLs to control SMB permissions.

Generally you won't need to mess with permissions via SSH; just use the "ZFS Folder" link in napp-it.

Start by making sure the folder is set to 777+ via napp-it. Then use the default ACL permissions (owner = full, everyone = modify), AFAIK.

Use root to connect from Windows and test again. If it still acts weird, try re-entering the password for your users via the napp-it "user" menu.
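If you prefer the command line, a sketch of roughly what those defaults amount to (the folder path is an example) is a single ACL reset along these lines:
Code:
# replace the ACL: owner gets full control, everyone else modify, inherited by new files and folders
/usr/bin/chmod -R A=owner@:full_set:fd:allow,everyone@:modify_set:fd:allow /tank/audio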
 
Are you using OpenIndiana to make permission changes via the CLI?

There's a difference between the chmod in the PATH and the chmod that will do the job. Probably others like 'ls' too.

"/usr/bin/chmod -R A- /tank" (note: tank is the name of my pool) will remove all non-trivial ACLs and basically reset everything.

"/usr/bin/chmod -R A+user:pitne:full_set:allow /tank" will give user pitne full non-trivial rights to the entire pool. Then I just mapped that user for CIFS and everything seems to work well.
 
Are you using OpenIndiana to make permission changes via the CLI?

There's a difference between the chmod in the PATH and the chmod that will do the job. Probably others like 'ls' too.

(Note: tank is the name of my pool.) This will remove all non-trivial ACLs and basically reset everything:
Code:
/usr/bin/chmod -R A- /tank

This will give user pitne full non-trivial rights to the entire pool. Then I just mapped that user for CIFS and everything seems to work well:
Code:
/usr/bin/chmod -R A+user:pitne:full_set:allow /tank
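To check the result afterwards, the ACL-aware ls shipped with the OS can print the entries (path is an example):
Code:
# -V prints the full ACL entries, -d lists the directory itself rather than its contents
/usr/bin/ls -Vd /tank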
 
Firstly, Gea, you are a legend! Thank you.

Now, this may have been covered already, but I failed to find it. I'm running napp-it 0.6r currently and want to update to the latest version.

It appears that the best approach is to:
wget -O - www.napp-it.org/nappit07 | perl
reboot
wget -O - www.napp-it.org/nappit | perl


Is this the best approach? And will I need to reconfigure anything in particular once updated? (I remember having to redo 'jobs' after a previous update.)

Thanks again!!
 
To update, or to install initially or from a former version, open a console and enter:

su
wget -O - www.napp-it.org/nappit | perl
reboot

Old jobs should run, but new ones may add functionality.
 
OpenIndiana 151a prestable 2

"oi_151a3 is the third update since OpenIndiana 151a was released in September and the first since
then to be available as freshly pressed ISOs."

http://wiki.openindiana.org/oi/oi_151a_prestable2+Release+Notes

This release fixes the bug where, if you wanted to install several packages, you had to reboot in between.
I have already fixed this problem in the current napp-it wget installer for older OI releases, where the new installer is installed automatically.
 
_Gea

Did you remove NDMP from your Napp-it recently?

I installed the package with "pkg install ndmp", but I'm not sure how to configure it. I'm trying to create an initiator so another box can connect to the tape drive I have installed.
 
Many thanks for the answers! Here is what I ended up doing…

* add user fileserv to group root - since root can have a different password on each machine and I will mix this up, it doesn't make sense for me to use the root account directly when using shares…
* usermod -u 104 -G root,staff fileserv, in order to maybe circumvent some future issues
* chown -R root:root /myzsfpool in order to reset the owners…
* /usr/bin/chmod -R A- /myzsfpool
* /usr/bin/chmod -R A+user:fileserv:full_set:allow /myzsfpool
* /usr/bin/chmod -R A+user:share:read_set:allow /myzsfpool
* idmap add winuser:Administrator unixuser:fileserv, just to be complete. But does this even make sense?

The really strange thing about not being able to log in via AFP: well, of course I missed something - after re-setting the passwords for the users, it was working again :) Time Machine, here we go....

Some notes:
* napp-it seems really strange to me when I set permissions on the 'ZFS folder' page: when I set them to 775 for a filesystem, it displays 755 when the refreshed page comes up. When setting 770, it displays 700.
* I tried to set those 'trivial ACLs'; it seemed to do something, but eventually came back with an error message about not being able to do it because of having no license for that - I thought those 'trivial ACLs' would get set, or more likely I did it the wrong way ;-)
* nevertheless it's a really great thing!!
* it's incredible for a newbie what one can find in this stuff - I didn't know that /usr/bin/chmod exists and that it has different capabilities!
Many thanks!!

greetings
mo
 
For SMB you should only look at ACLs, not Unix permissions.
For netatalk it is essential that you allow modify for everyone on the shared folder.
You can only restrict access to files and folders within it.

about ACLs
For working defaults, you can use the menu ZFS-Folder-ACL (no need to use the extension).

about the ACL extension
It's not fully finished, but adding trivial ACLs and user ACLs, as well as resetting ACLs to modify for everyone, is free.
Update to the newest 0.8e; there was a bug with the free ACL settings.

and
do NOT idmap a local Solaris user to a local Solaris user.
idmap is only used for mapping Windows AD users or SMB groups to Unix users or groups.
You do not need to map the SMB administrator.
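For the AD case, a mapping would look something like this (the domain and account names are examples only):
Code:
# map a Windows domain account to a local Unix user
idmap add winuser:john@example.local unixuser:john

# show the configured mappings
idmap list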
 
Strange... Yesterday evening I added a new pool of 8 HDDs beside the existing pool, and I can't access it from my Win7 PC... it keeps asking for a login and password which I do not know! Has something changed in the latest napp-it stable? Permissions are set to 777+ and SMB and ZFS are active!

thanks

EDIT: I deleted the pool and recreated it and that solved the problem!
 
_Gea

Did you remove NDMP from your Napp-it recently?

I installed the package with "pkg install ndmp", but I'm not sure how to configure it. I'm trying to create an initiator so another box can connect to the tape drive I have installed.

NDMP is only part of the NexentaCore config (no longer available; Nexenta deleted all info on nexenta.org).
On Nexenta it is done like this (I have not tried it for years):

Enable NDMP daemon:
# svcadm enable ndmpd

Enable authorization. NDMP can work in three modes: no auth, clear text, and MD5 digest. Most backup software will require an MD5-hashed password; "ndmpcopy" works with clear-text passwords. We will use "tmpuser" and "tmppass" on both machines as an example:

# ndmpadm enable -a cleartext -u tmpuser

Enter new password:
Re-enter password:

Now it's time to make the NDMP copy. Assume we have /opt/a on one machine and /opt/b on another; /opt/b is empty right now. This command will start the copy:

# ndmpcopy -v hostone:/opt/a hosttwo:/opt/b -sa tmpuser:tmppass -da tmpuser:tmppass
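Before starting the copy it may also be worth confirming that the daemon is actually online on both hosts; since ndmpd runs under SMF, a quick check would be:
Code:
# should report the ndmpd service as "online" on each machine
svcs ndmpd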
 
Encryption on OpenIndiana

Not integrated into ZFS like with Solaris 11 and ZFS v31+,
but on an underlying disk level, and therefore working with ZFS v28.

I will add it to the next versions of napp-it.

The idea is:
- create files on a ZFS dataset (e.g. 1 GB each, with the option to back them up to any filesystem/cloud provider)
- build block devices from the files with lofiadm (lofiadm supports encryption; you must enter a password here)
- build a regular ZFS pool from these devices (use e.g. RAID-Z2 to be able to recover if backup files have errors)

Example:
1. Create a 1 GB file in /tank/secrets (a ZFS dataset):
cd /tank/secrets
mkfile 1g file1


2. Create encrypted block devices from these file(s) -> this creates a device /dev/lofi/1:
lofiadm -c aes-256-cbc -a /tank/secrets/file1
Enter passphrase: ..

- repeat for all files if you want to build a pool from more devices to have redundancy
(important if you want to back up these files on a non-ZFS filesystem)


3. Create a regular (e.g. basic) ZFS pool from this or these (encrypted) device(s):
zpool create secretpool /dev/lofi/1


The newly created pool works like any ZFS pool.
To take it offline you must export the pool and remove the devices:

zpool export secretpool
lofiadm -d /tank/secrets/file1

To take it online again you must rebuild the devices from the files using the same password and import the pool:
lofiadm -c aes-256-cbc -a /tank/secrets/file1
Enter passphrase: ..

If you use the wrong password, everything seems OK but there are no files...


Now you can import your pool from these devices.
zpool import -d /dev/lofi shows all available pools.

To import the pool, you must use:
zpool import -d /dev/lofi/ secretpool

The only disadvantage may be somewhat lower performance (it goes through ZFS twice, plus the encryption).
But it is very elegant, easy to implement, and it is based simply on one or more encrypted files.
If you want to back them up, you can just copy them. With small files this is not a problem, even on FAT disks
with a max file limit of 2 GB. If you have built redundant ZFS pools from several files (e.g. RAID-Z1/2/3), it is not even
a problem if (1/2/3) files get damaged for whatever reason.
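As a sketch of the redundant variant (file names, sizes and the pool name are only examples), three encrypted file-backed devices combined into a RAID-Z1 pool would look like this:
Code:
cd /tank/secrets
mkfile 1g file1 file2 file3

# one encrypted lofi device per file, same passphrase each time
# (assuming no other lofi devices exist, they become /dev/lofi/1 ... /dev/lofi/3)
lofiadm -c aes-256-cbc -a /tank/secrets/file1
lofiadm -c aes-256-cbc -a /tank/secrets/file2
lofiadm -c aes-256-cbc -a /tank/secrets/file3

# RAID-Z1 on top: one damaged or lost backing file can now be tolerated
zpool create secretpool raidz /dev/lofi/1 /dev/lofi/2 /dev/lofi/3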


http://constantin.glez.de/blog/2012/02/introducing-sparse-encrypted-zfs-pools
http://www.cuddletech.com/blog/pivot/entry.php?id=1029
https://blogs.oracle.com/yakshaving/entry/encrypted_fs_on_solaris_10
http://www.idevelopment.info/data/Oracle/DBA_tips/Automatic_Storage_Management/ASM_21.shtml
 
It looks really easy. What OS do you run your own ZFS on? I am on Solaris 11 now, so if I pull the trigger and move to OpenIndiana I need to move a lot of TB - I have 12 TB encrypted now. So how does it work: can I just create 2 TB files, so a whole HDD each, and then add 6 x 2 TB of them for a new pool? How does that work when you do it that way, if you understand me :)
 
I doubt that it will perform well with large pools.
It is more a solution for a smaller pool with sensitive data
(not for a large media pool - but I have not tried it).
 
To update, or to install initially or from a former version, open a console and enter:

su
wget -O - www.napp-it.org/nappit | perl
reboot

Old jobs should run, but new ones may add functionality.

Tried updating, but it kept hanging at 'update images'. It may have been downloading, but I wasn't sure, so I cancelled. Instead I updated via the Solaris update manager, rebooted, and tried again.

This time napp-it updated successfully. I have to agree with others, the new GUI is very nice!
 
It could probably work well for larger pools too; it would just need a lot of tuning.

I wonder at what level you would have to tune things, too.

On the outer layer, you would probably want to disable checksums and data caching, and maybe metadata caching as well?

But with mostly default settings, ZFS inside ZFS could be a huge memory hog, and might start chewing through more CPU too.
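As a sketch of that kind of outer-layer tuning (the dataset name is an example, and whether each knob is worth turning is exactly the open question here), the properties would be set on the dataset that holds the backing files:
Code:
# skip checksumming on the outer layer; the inner pool already checksums the data
zfs set checksum=off tank/secrets

# cache only metadata (or nothing) in ARC/L2ARC so the data is not cached twice
zfs set primarycache=metadata tank/secrets
zfs set secondarycache=none tank/secrets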
 
I have just built an all-in-one on an ESXi 5.0 HP server with an LSI 9211-8i HBA passed through to an OpenIndiana virtual SAN. The HP server has four physical NICs. In ESXi I created three virtual switches:
vSwitch0 with one NIC for external connectivity for ESXi management (vSphere access)
vSwitch1 with two NICs to hold a virtual Linux server (which will be dual-homed -- one external and one virtual VMXNET3 NIC) to be used as a DNS server, firewall and router for the other virtual hosts.
vSwitch2 with no external NICs for all other virtual hosts (including the OI SAN host), intended to use a private subnet (10.10.40.0/24). The router host connected to vSwitch1 would have a virtual VMXNET3 NIC assigned an address in the 10.10.40.0/24 subnet. Its other NIC would have a routable IP address and be used for external network access for the entire all-in-one.

When I first got the OI system going, I attached an external NIC to vSwitch2 and assigned it a non-private (routable) address so I could update the OS and download napp-it. But after setting it up, I removed the external NIC from the virtual switch.

Here's the problem: I want to use a ZFS folder shared via NFS as an ESXi datastore for virtual machine images, but the ESXi server on vSwitch0 cannot route to any host on vSwitch2. Once I create and configure the CentOS server, which will be dual-homed with one NIC on the 10.10.40.0/24 network, I could use it to route to the OI SAN on vSwitch2. But without the OI host being dual-homed too, there is no way to get to the datastore on OI until the dual-homed CentOS router is up. I really don't want the OI virtual machine itself to be dual-homed with direct external network connectivity, because that would be a security risk.

I cannot figure out any way to give ESXi a virtual NIC (making it dual-homed) so it would have connectivity to vSwitch2.

Perhaps I misunderstand something. Any help/suggestions greatly appreciated!

UPDATE: I discovered that by configuring a vmkernel NIC on vSwitch2 with an IP address in the 10.10.40.0/24 subnet, I was able to vmkping the OI virtual SAN from ESXi. I then successfully set up an NFS datastore on the OI SAN. Gea -- your napp-it works wonderfully well! Kudos for all the hard work you have put into the project. Before, when I was trying to learn OpenIndiana, I was doing most of what napp-it does via the command line. Even though I have tons of experience in Unix/Linux, that turned out to be a daunting task because a) the latest Solaris 11 versions of most networking tools are different than they used to be, and b) having never used ZFS, I had to spend tons of time reading documentation and experimenting.
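For anyone wanting to do the same from the ESXi 5.x shell rather than the vSphere client, a rough sketch (the portgroup name and IP address are just examples) would be:
Code:
# add a portgroup for storage traffic on the internal vSwitch
esxcli network vswitch standard portgroup add -p StorageNet -v vSwitch2

# create a vmkernel NIC on that portgroup and give it an address in the private subnet
esxcli network ip interface add -i vmk1 -p StorageNet
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.10.40.2 -N 255.255.255.0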
--peter
 
When I log in to my OI server via IPMI, I see in the main window that there are several updates to be downloaded from the OI server. Is it wise to update from within the OI desktop?

ty
 
I just built an all-in-one and am running OpenIndiana with napp-it. I have everything running great. The only problem is that I get better network performance if I run "ifconfig vmxnet3s0 mtu 1500" to set the MTU to 1500. My network does not support jumbo frames, and the vmxnet3 driver comes up at 9000.

How can I get "ifconfig vmxnet3s0 mtu 1500" to run automatically at startup?

The nvadm command will not set the MTU to 1500 permanently, so I have so far been unable to make the change persistent.

Thanks for any help.

CWagz
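One approach that might work (a sketch only, not tested here) is a legacy rc script, which OpenIndiana still runs at boot; the path and file name are an example:
Code:
#!/bin/sh
# /etc/rc3.d/S99vmxnet3mtu - legacy rc script, runs late in boot
# force the vmxnet3 interface back to an MTU of 1500
/usr/sbin/ifconfig vmxnet3s0 mtu 1500
The script has to be executable (chmod +x). A proper SMF service would be the cleaner long-term solution, but this is the smallest change.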
 
When I log in to my OI server via IPMI, I see in the main window that there are several updates to be downloaded from the OI server. Is it wise to update from within the OI desktop?

ty

I ran all the updates without a problem. Actually, I believe the 0.8e version of napp-it runs the updates before installing as well.
 
Well, I updated napp-it to the latest version, but it didn't update OI!
I had to do that from within OI!
All works fine after the update!
Thanks!
 
NDMP is only part of the NexentaCore config (no longer available; Nexenta deleted all info on nexenta.org).
On Nexenta it is done like this (I have not tried it for years):

Enable NDMP daemon:
# svcadm enable ndmpd

Enable authorization. NDMP can work in three modes: no auth, clear text, and MD5 digest. Most backup software will require an MD5-hashed password; "ndmpcopy" works with clear-text passwords. We will use "tmpuser" and "tmppass" on both machines as an example:

# ndmpadm enable -a cleartext -u tmpuser

Enter new password:
Re-enter password:

Now it's time to make the NDMP copy. Assume we have /opt/a on one machine and /opt/b on another; /opt/b is empty right now. This command will start the copy:

# ndmpcopy -v hostone:/opt/a hosttwo:/opt/b -sa tmpuser:tmppass -da tmpuser:tmppass

Awesome, I will give this a shot!
 