OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Thank you for your detailed response to my inquiry. I do use napp-it on two servers running Solaris 11 Express (home use: video streaming and storage).
I was just curious, as this ZFS port seems like a very serious project, being developed by some of the engineers who were responsible for Apple's ZFS efforts.


I first noticed ZFS when Apple announced support for it in 10.4. After a detailed look,
I decided that I had to have it, to replace our Windows file server in a mixed Mac/Windows environment.
Mac servers were quite useless at that moment (for my needs), although there was hope with Xsan.
Until then I had refused to use Unix/Linux because of its poor usability.

But then nothing happened with Apple. I moved to NexentaStor and then decided to set up my own free ZFS server
with my own management interface to fit my needs.

Sun/Oracle Solaris is the technology leader in (closed and commercial) ZFS development. Luckily we have
Illumos, the free fork of Solaris, and we have FreeBSD (which I suppose the OS X variant is closely based on),
both with quite modern ZFS file systems, so we can be sure that a free ZFS will survive.

It is good to have alternatives. I would like to use OS X as my filer, but I doubt Apple will focus on servers again.
 
If I may, I would like to ask which flavor/distro I should look at seriously when building a ZFS storage server?

I played with NexentaStor, but the Community Edition is limited, and the 18TB limit may come back to bite me in the future.

I then switched to give FreeNAS a try; while everything appears to work and I am comfortable with the *BSD command line, I feel as though it is not REAL ZFS.

So I am now looking for the most stable and supported FREE solution to build a ZFS server.

I came across this thread and see that post 1 has been recently updated. I gave OpenIndiana a try but could not figure out how to a) configure my network interface, and b) set up LAGG interfaces for use with LACP.

What is currently the favorite distro or package?
 
What is currently the favorite distro or package?

You may have many reasons for a favorite package.
If you look at Sun's idea of ZFS as the base of a package of new, integrated server technologies,
then you get the best from Sun/Oracle. But that's not free.

Nearest is the free fork OpenIndiana. Currently it is a developer release only. But if you are looking
for the 'best' free ZFS, use OpenIndiana.

Regarding your problem: use OpenIndiana live 151a. You can configure your interface via the GUI.
For link aggregation, search for Crossbow. (I would also think about 10 GbE for much better performance.)
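
For reference, a minimal CLI sketch of an LACP aggregation on OpenIndiana/Solaris (the link names, aggregation name and address are examples; adjust to your hardware):

# aggregate two physical links with LACP in active mode
dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0
# plumb the aggregation and assign a static address
ipadm create-if aggr0      # 'ipadm create-ip' on newer Solaris 11 builds
ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4

The switch ports must be configured for LACP as well, or the aggregation will stay down.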
 
When you say "OpenIndiana live", are you referring to their "desktop" edition? And if so, is the desktop edition simply the server version with a GNOME GUI on top?

What drawbacks are there with ZFS on FreeBSD, such as FreeNAS?

After fiddling with the OpenIndiana server install and trying to read man page after man page, I could not get the network adapter (e1000g0) to come online... The fact that I am not familiar with the command line almost warrants sticking with ZFS on FreeBSD, even if it may be lacking the "latest" features, simply because if something with the core OS were to go wrong, I would be able to navigate the CLI to remedy the issue.

With regard to the fork of Solaris, would I be able to install the OS on two drives, as I did with NexentaStor?
 
When you say "OpenIndiana live", are you referring to their "desktop" edition? And if so, is the desktop edition simply the server version with a GNOME GUI on top?

In short, yes.
Server install = minimal install = user-unfriendly = bad usability = mostly not to my liking, but best security.

What drawbacks are there with ZFS on FreeBSD, such as FreeNAS?

Pro Solaris:
more modern ZFS; integrated CIFS and NFS servers as ZFS properties;
faster and better Windows ACL compatibility compared to Samba; integrated iSCSI via COMSTAR;
integrated DTrace; integrated Crossbow (virtual network technology); best performance without tuning the OS.
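
To illustrate the 'share as a ZFS property' point, a minimal sketch (the pool/filesystem name is an example):

# on Solaris, sharing is a property of the filesystem itself - no smb.conf needed
zfs set sharesmb=on tank/data
zfs set sharenfs=on tank/data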

After fiddling with the OpenIndiana server install and trying to read man page after man page, I could not get the network adapter (e1000g0) to come online... The fact that I am not familiar with the command line almost warrants sticking with ZFS on FreeBSD, even if it may be lacking the "latest" features, simply because if something with the core OS were to go wrong, I would be able to navigate the CLI to remedy the issue.

Read the manuals from Oracle. If you use the live/desktop version, you can configure basic settings without the CLI.
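
If you do want the CLI route for e1000g0, a hedged sketch (interface name and address are examples; the ipadm subcommand name varies between builds):

ipadm create-if e1000g0    # 'ipadm create-ip e1000g0' on newer builds
# either DHCP:
ipadm create-addr -T dhcp e1000g0/v4
# or a static address:
ipadm create-addr -T static -a 192.168.1.20/24 e1000g0/v4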

With regard to the fork of Solaris, would I be able to install the OS on two drives, as I did with NexentaStor?

If you want to mirror your system pool:
http://constantin.glez.de/blog/2011/03/how-set-zfs-root-pool-mirror-oracle-solaris-11-express

or just use a driverless hardware RAID-1 enclosure.
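
The linked guide boils down to roughly this sketch (disk names are examples; the second disk first needs a matching slice layout):

# copy the partition table of the first disk to the second (slice 2 = whole disk)
prtvtoc /dev/rdsk/c2t0d0s2 | fmthard -s - /dev/rdsk/c2t1d0s2
# attach the second disk to the root pool; ZFS resilvers automatically
zpool attach rpool c2t0d0s0 c2t1d0s0
# make the second disk bootable (x86; SPARC uses installboot instead)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0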
 
Can anyone offer some assistance? I am trying to integrate SMART values (mainly temperatures) into napp-it.

From the CLI I can run smartctl on the disks, and it detects temperatures and that SMART is OK (smartctl -a /dev/rdsk/c2t0d0s0 -T permissive -d scsi -H).

I can also run smartd -q onecheck to parse the smartd.conf, and it is fine detecting my two rpool disks.

I've also set up the SMF service, which is online/running, and also created the file /etc/default/smartmontools.

Everything on the CLI appears fine, but I can't make it talk back to napp-it.

Any help would be greatly appreciated.
Regards,
Paul
 
If you want to play with it, you may add a private menu item
with your additions.

napp-it itself loads SMART values like state and temperature in
/var/web-gui/data/napp-it/zfsos/_lib/get-disk.pl into the hash %disk.
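
A private menu item could wrap a call like this sketch (device and options follow the smartctl example above; the exact output line varies by drive):

# print only the temperature line for one disk
smartctl -a -T permissive -d scsi /dev/rdsk/c2t0d0s0 | grep -i temperature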
 
Hey, I want to configure the FTP server on OpenSolaris 151. Is there any guide out there that shows the command lines to add users and the allowed directories? Or should I set up the FTP server on the Windows 7 VM instead of OpenSolaris? I don't mind learning the command line, as it's probably faster than Windows 7 over SMB.
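
No full guide at hand, but here is a hedged sketch of the basic steps (user name and home directory are examples; which FTP daemon is bundled, and where directory restrictions are configured, varies by release):

# create a user and enable the bundled FTP service
useradd -m -d /export/home/ftpuser -s /usr/bin/bash ftpuser
passwd ftpuser
svcadm enable network/ftp
svcs ftp    # verify the service is online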
 
I am experiencing what I believe to be bugs when using napp-it (0.6) with both OpenIndiana AND Solaris Express 11.

I set up both OSes on two separate VMware ESXi machines, and added a 16GB OS drive plus 5 more virtual disks to play with ZFS storage.

Both distributions install fine, and the script seems to execute and install on both platforms. Upon completion, I "passwd root" and then reboot the system.

When the system comes back up, I immediately go to storage pools and create the first pool, a 2-drive mirror. I selected the first two disks, chose "mirror" and left everything else at the defaults.

Here's where I experience difficulty.

I then wanted to add a vdev, so I went to "create vdev", but with BOTH OpenIndiana AND Solaris Express, napp-it is telling me "there are no storage pools".

On OpenIndiana I went back to "create pool" and 2 of my drives had disappeared.

On Solaris Express, I still see all 5 drives, so I again select the first two drives, create the "mirror" and click "save", but get an "OOPS error" stating that the drive is already a member of a storage pool.

Am I doing something wrong? Shouldn't they both be showing up under "add vdev" after creating pools?
 
I am experiencing what I believe to be bugs when using napp-it (0.6) with both OpenIndiana AND Solaris Express 11.

I set up both OSes on two separate VMware ESXi machines, and added a 16GB OS drive plus 5 more virtual disks to play with ZFS storage.

Both distributions install fine, and the script seems to execute and install on both platforms. Upon completion, I "passwd root" and then reboot the system.

When the system comes back up, I immediately go to storage pools and create the first pool, a 2-drive mirror. I selected the first two disks, chose "mirror" and left everything else at the defaults.

Here's where I experience difficulty.

I then wanted to add a vdev, so I went to "create vdev", but with BOTH OpenIndiana AND Solaris Express, napp-it is telling me "there are no storage pools".

On OpenIndiana I went back to "create pool" and 2 of my drives had disappeared.

On Solaris Express, I still see all 5 drives, so I again select the first two drives, create the "mirror" and click "save", but get an "OOPS error" stating that the drive is already a member of a storage pool.

Am I doing something wrong? Shouldn't they both be showing up under "add vdev" after creating pools?

napp-it 0.6 buffers disk and ZFS state in order to handle a really large number of disks and thousands of ZFS filesystems
without waiting minutes to read them on every screen. This state is reread daily, manually via menu Disk - Reload or ZFS Folder - Reload,
or after actions that change the config.

At this point it seems it is not reloaded after pool creation. Reload the new config manually
via menu ZFS Folder - Reload / Disk - Reload.

This bug is fixed in 0.6g.
 
Hi, first let me say thanks for developing and sharing this great program!!

I decided to give Solaris 11 a try, but I am having problems installing VMware Tools (same steps as in your all-in-one guide, which worked on Express 11).

At the end, I get the error "Guest operating system daemon: failed", then

"Unable to start services for VMware Tools"

"execution aborted".

Any ideas? TIA
 
Hi, first let me say thanks for developing and sharing this great program!!

I decided to give Solaris 11 a try, but I am having problems installing VMware Tools (same steps as in your all-in-one guide, which worked on Express 11).

At the end, I get the error "Guest operating system daemon: failed", then

"Unable to start services for VMware Tools"

"execution aborted".

Any ideas? TIA

VMware Tools are currently not working on the new Solaris 11.
We have to wait until someone finds the problem.

(Solaris 11 has a lot of new functions and libraries;
the same was true for napp-it, which took me two days to get working.)
 
Bummer! I guess the only thing I'm concerned about is the network drivers. It comes up as net0 instead of the regular e1000 (my tests showed the e1000 to be faster than vmxnet3 on SE11 and OpenIndiana, but maybe I was doing something wrong there).
 
Maybe I missed something the other day when setting up a new ZFS pool, but can you define the size of a ZFS NFS folder? I have 4 500GB drives in a RAID-Z that I wanted to split up, with 750GB going to one NFS share and the other 1250GB to another NFS share.
 
napp-it 0.6 buffers disk and ZFS state in order to handle a really large number of disks and thousands of ZFS filesystems
without waiting minutes to read them on every screen. This state is reread daily, manually via menu Disk - Reload or ZFS Folder - Reload,
or after actions that change the config.

At this point it seems it is not reloaded after pool creation. Reload the new config manually
via menu ZFS Folder - Reload / Disk - Reload.

This bug is fixed in 0.6g.


I must be doing something wrong - or maybe what I am doing is NOT supported?

When I create my first pool (a 2-disk mirrored set) -- I want to immediately add a vdev as a hot spare (a 1-drive hot spare). But when I click on the "add vdev" tab in napp-it, it says no pools exist.

So I then follow your instructions above and click on disks, reload. I then go back to "add vdev" and it STILL says no pools exist.

I can do all of this from the command line, but I am looking for a GUI that will make this easy for my clients.

ANY HELP OR FURTHER INSTRUCTIONS???
 
I must be doing something wrong - or maybe what I am doing is NOT supported?

When I create my first pool (a 2-disk mirrored set) -- I want to immediately add a vdev as a hot spare (a 1-drive hot spare). But when I click on the "add vdev" tab in napp-it, it says no pools exist.

So I then follow your instructions above and click on disks, reload. I then go back to "add vdev" and it STILL says no pools exist.

I can do all of this from the command line, but I am looking for a GUI that will make this easy for my clients.

ANY HELP OR FURTHER INSTRUCTIONS???

You are right, such basic problems are annoying.
I have just tried but cannot reproduce this with the newest nightly from today.
This has been a problem (all actions involved in sharing needed modifications).
Try updating to the newest version.

Be aware that some problems remain. There are a lot of new things in Solaris 11,
but it should basically work now, and get better every day. Do not forget, Solaris 11
has been out for 6 days.

And knowing some CLI basics is always helpful.
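
For the hot spare itself, the CLI equivalent is a one-liner (pool and disk names are examples):

zpool add tank spare c3t5d0
zpool status tank    # the disk now appears under a 'spares' section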
 
Another question to go with the one I asked a bit ago: when updating, how should it be done? I just ran the wget again, and now my pools don't show, but they still exist, as all my data is still online and there.

EDIT: Looking under pools, they don't show, but if I go to import pool they don't exist either, yet at the bottom of the page it shows the pools. What's going on?!
 
You are right, such basic problems are annoying.
I have just tried but cannot reproduce this with the newest nightly from today.
This has been a problem (all actions involved in sharing needed modifications).
Try updating to the newest version.

Be aware that some problems remain. There are a lot of new things in Solaris 11,
but it should basically work now, and get better every day. Do not forget, Solaris 11
has been out for 6 days.

And knowing some CLI basics is always helpful.

Thanks! The latest version did solve that problem, although not immediately (it took a few minutes), but at least it is now showing up. Thanks for your hard work.
 
Maybe I missed something the other day when setting up a new ZFS pool, but can you define the size of a ZFS NFS folder? I have 4 500GB drives in a RAID-Z that I wanted to split up, with 750GB going to one NFS share and the other 1250GB to another NFS share.


You can set a "quota" on your ZFS folder. Once you have created the folder/filesystem, go back to the menu item and read across the options... There are several settings, and one of them is quota. You could set that quota to 750GB.
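
From the CLI, the same thing is a single property per filesystem (pool/filesystem names are examples):

zfs set quota=750G tank/share1
zfs set quota=1250G tank/share2
zfs get quota tank/share1 tank/share2    # verify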

Regarding your question about upgrading, you can click on the napp-it menu item; upgrade is a submenu underneath. I believe that is the "proper" way to do it; at least it worked for me.

cheers.
 
Thanks, I set the quotas, but yeah, I screwed up the upgrading. How can I get my pools to show up, since I used wget instead of the built-in upgrade menu?
 
Bummer! I guess the only thing I'm concerned about is the network drivers. It comes up as net0 instead of the regular e1000 (my tests showed the e1000 to be faster than vmxnet3 on SE11 and OpenIndiana, but maybe I was doing something wrong there).

It's still e1000; net0 is just the virtual link name. You can delete the device and recreate it, and call it anything you want, using dladm/ipadm.
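
A hedged sketch of that renaming (assumes net0 carries no permanent IP configuration yet; names are examples):

ipadm delete-if net0              # remove the IP interface first, if one exists
dladm rename-link net0 e1000g0    # rename the underlying datalink
ipadm create-if e1000g0
ipadm create-addr -T dhcp e1000g0/v4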
 
Bummer! I guess the only thing I'm concerned about is the network drivers. It comes up as net0 instead of the regular e1000 (my tests showed the e1000 to be faster than vmxnet3 on SE11 and OpenIndiana, but maybe I was doing something wrong there).

I'm not sure if you're using ESX/ESXi, but if you are, when you created the VM you should have chosen "Other" for the type and selected "Oracle Solaris 11 (64-bit)". It tweaks the settings for you and adds the E1000 as the network controller type...

But if you don't want to reinstall, then shut down the virtual machine. Go to "edit virtual machine settings" and click on the network adapter... You can then click the dropdown menu and change the "type" to E1000. If it's not there, create another machine using the instructions above and see if you see what I'm talking about - if you do, and it's worth it to you, you can create that machine and attach the VM disk image as the new machine's hard drive :)

Not sure what changing the virtual machine network adapter settings would do when you turn the machine back on? Hopefully it's smart enough to see the old one missing and the new one added, and then auto-configure for you.
 
Thanks, I set the quotas, but yeah, I screwed up the upgrading. How can I get my pools to show up, since I used wget instead of the built-in upgrade menu?

Mostly it does not matter how you update. The wget installer also takes care of other OS settings,
whereas the napp-it menu "napp-it update" only cares about the GUI itself.
I recommend the wget method when going from 0.5 to 0.6.

After updating you must reread the config via menu ZFS Folder - Reload and Disk - Reload.
Do not interrupt; wait until each command is finished. Depending on the number
of disks/ZFS filesystems it can take from a few seconds up to a minute (the reason I introduced buffering).
 
Is it possible to install ESXi and the Solaris VM onto software RAID-1? I.e., I'd love to have two smallish SSDs mirrored to install onto, in case one goes kaput.
 
@Gea

I have a strange issue I think you might be able to assist with:

I use OI and napp-it to store my movies. Everything works perfectly from my workstation and Dune HD media player. My issue is that when I browse for network shares on my WDTV Live media streamer (which uses SMB, AFAIK), the only item I see in the list is the name "WORKGROUP". Of course, trying to connect to that does not work.

A temporary solution is to restart network services from the napp-it interface. After a couple of minutes the correct hostname shows up and I can browse the contents.

All connections are wired 100 or 1000Mbit, running DD-WRT on my router, no local DNS server. In all cases the napp-it box has been online before turning on the WDTV Live.

I have tried everything to make it work; here's what I've tried so far:

* Changed the workgroup of the WDTV to something alternate, didn't work
* Reset factory defaults on the WDTV
* Tried restarting the SMB service on napp-it, didn't work
* Messing around with DNS settings etc. in my router, no effect
* Setting all IPs to static addresses, no effect
* Added the IP address of my router in the napp-it -> System -> Network -> Router menu
* Disabled NFS
* Disabled/enabled SMB shares
* Rebooted my napp-it appliance

So all that works is to bring the napp-it box online, bring the WDTV online, and restart the napp-it network service AFTER the WDTV is online.

Any pointers would be much appreciated, thanks.

/Jim
 
@Gea

I have a strange issue I think you might be able to assist with:

I use OI and napp-it to store my movies. Everything works perfectly from my workstation and Dune HD media player. My issue is that when I browse for network shares on my WDTV Live media streamer (which uses SMB, AFAIK), the only item I see in the list is the name "WORKGROUP". Of course, trying to connect to that does not work.

A temporary solution is to restart network services from the napp-it interface. After a couple of minutes the correct hostname shows up and I can browse the contents.

All connections are wired 100 or 1000Mbit, running DD-WRT on my router, no local DNS server. In all cases the napp-it box has been online before turning on the WDTV Live.

I have tried everything to make it work; here's what I've tried so far:

* Changed the workgroup of the WDTV to something alternate, didn't work
* Reset factory defaults on the WDTV
* Tried restarting the SMB service on napp-it, didn't work
* Messing around with DNS settings etc. in my router, no effect
* Setting all IPs to static addresses, no effect
* Added the IP address of my router in the napp-it -> System -> Network -> Router menu
* Disabled NFS
* Disabled/enabled SMB shares
* Rebooted my napp-it appliance

So all that works is to bring the napp-it box online, bring the WDTV online, and restart the napp-it network service AFTER the WDTV is online.

Any pointers would be much appreciated, thanks.

/Jim

Basically (without a centralized DNS system), all systems can only see hosts that are members of the same workgroup. The workgroup named 'workgroup' is the default workgroup in Windows.
Think of a computer shouting into the cable: "Hey, I am a member of workgroup, are there others?"

Try: go to napp-it menu Service - SMB and join the workgroup 'workgroup'.
You should then be able to see other computers that are also members of 'workgroup'.
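
Under the hood this maps to the Solaris CIFS command (the workgroup name here is just the Windows default):

smbadm join -w workgroup
smbadm list    # shows the current workgroup/domain membership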
 
I've never heard of driverless RAID; that is very interesting. I wonder how it affects the performance of SSDs.

They are not as fast as a mirror on a high-end RAID controller,
but they are nearly as fast as a single drive.
 
Hi Gea,

Long-time reader, first-time poster.

I have found a small issue in the latest current build of napp-it with the latest build of OI.

I had an issue with the OS build, so I decided to start again from scratch. Once the build was complete and napp-it was installed, I attempted to import the ZFS pool from the napp-it GUI. I get an error when I attempt this, with a big orange box and "oops"...

It also shows the correct command underneath; in my case:

zpool import -f "16503017321535464208" "critdata"

When I run this from the CLI as root, it works as expected, but for some reason it's not working from the napp-it GUI. Wondering if you might have a look at this?

Thanks again for all your work.

David
 
Hi Gea,

Long-time reader, first-time poster.

I have found a small issue in the latest current build of napp-it with the latest build of OI.

I had an issue with the OS build, so I decided to start again from scratch. Once the build was complete and napp-it was installed, I attempted to import the ZFS pool from the napp-it GUI. I get an error when I attempt this, with a big orange box and "oops"...

It also shows the correct command underneath; in my case:

zpool import -f "16503017321535464208" "critdata"

When I run this from the CLI as root, it works as expected, but for some reason it's not working from the napp-it GUI. Wondering if you might have a look at this?

Thanks again for all your work.

David

I can add that this bug is also present in Solaris 11. I had to manually import both of my pools.

Another bug is that no pools are listed in the pools table. They show up fine in the text underneath the table, but the table itself is empty.

This is on a fresh install of napp-it on Solaris 11.
 
Another bug is that no pools are listed in the pools table. They show up fine in the text underneath the table, but the table itself is empty.

You can fix this by hitting Reload under ZFS Folder; you should then see all pools correctly. Not sure if this is a feature or a bug...
 
Basically (without a centralized DNS system), all systems can only see hosts that are members of the same workgroup. The workgroup named 'workgroup' is the default workgroup in Windows.
Think of a computer shouting into the cable: "Hey, I am a member of workgroup, are there others?"

Try: go to napp-it menu Service - SMB and join the workgroup 'workgroup'.
You should then be able to see other computers that are also members of 'workgroup'.

Thanks a lot, but it seems like napp-it is already a member of "workgroup". I'll try to figure out a way to have a local DNS server.

By the way, is anyone else finding that the napp-it web interface has become slow after the update to 0.6? When typing the napp-it IP address in my browser, I see "initialize napp-it" for around 3-4 seconds before the login box pops up. Furthermore, I get the same delays when navigating the napp-it menus.

Sometimes it takes 7+ seconds for a page to load. I've tried running napp-it from a local web browser (localhost:81), with the same results.

/Jim
 
By the way, is anyone else finding that the napp-it web interface has become slow after the update to 0.6? When typing the napp-it IP address in my browser, I see "initialize napp-it" for around 3-4 seconds before the login box pops up. Furthermore, I get the same delays when navigating the napp-it menus.

Sometimes it takes 7+ seconds for a page to load. I've tried running napp-it from a local web browser (localhost:81), with the same results.

/Jim

Most actions are similar between 0.5 and 0.6, like:
at login: setting permissions on napp-it files and checking the config;
on every page: loading the basic OS, ZFS and disk config.

But the more complex napp-it becomes, the slower it gets, due to the additional commands that are called.
Menus below Disk and ZFS can take longer than those under System and User; especially SMART checks
for disks, and ZFS lists with a lot of filesystems, can take a while.

I try to improve this from time to time. But currently my main concern is removing bugs introduced by the
changes needed for Solaris 11 compatibility.

There are some more checks in 0.6, but they should not increase load times dramatically.
Try the newest version and a reboot; optionally check menu System - Statistics for problems.
 
Most actions are similar between 0.5 and 0.6, like:
at login: setting permissions on napp-it files and checking the config;
on every page: loading the basic OS, ZFS and disk config.

But the more complex napp-it becomes, the slower it gets, due to the additional commands that are called.
Menus below Disk and ZFS can take longer than those under System and User; especially SMART checks
for disks, and ZFS lists with a lot of filesystems, can take a while.

I try to improve this from time to time. But currently my main concern is removing bugs introduced by the
changes needed for Solaris 11 compatibility.

There are some more checks in 0.6, but they should not increase load times dramatically.
Try the newest version and a reboot; optionally check menu System - Statistics for problems.

Thanks a lot. It still seemed VERY slow, though, so I decided to do a fresh install of napp-it. That reduced the load times significantly :) I guess it must have been a local issue on my end.
 
Napp-it.

I just re-installed napp-it on a new boot (rpool) disk I installed in my server. When I try to access the web interface using http://<IP-Address>:81, I get the following message:

(lib hash2file ) Datei /var/web-gui/data/napp-it/_log/tmp/zfs.cfg konnte nicht geschrieben werden.)
Fragen Sie bei Bedarf Ihren Systembetreuer Status: 500 Content-type: text/html
Software error:

(lib hash2file ) Datei /var/web-gui/data/napp-it/_log/tmp/zfs.cfg konnte nicht geschrieben werden.) at admin-lib.pl line 422.

(The German lines mean: "The file /var/web-gui/data/napp-it/_log/tmp/zfs.cfg could not be written" and "If necessary, ask your system administrator".)

For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.
[Fri Nov 11 10:50:06 2011] admin.pl: (lib hash2file ) Datei /var/web-gui/data/napp-it/_log/tmp/zfs.cfg konnte nicht geschrieben werden.) at admin-lib.pl line 422.

The box is running Solaris 11 Express Edition.

Can someone please let me know what's wrong and how to fix this?
 
Napp-it.

I just re-installed napp-it on a new boot (rpool) disk I installed in my server. When I try to access the web interface using http://<IP-Address>:81, I get the following message:



The box is running Solaris 11 Express Edition.

Can someone please let me know what's wrong and how to fix this?

That is a write error due to a permission problem or a missing folder.
Have you installed napp-it via wget?

Try: delete/rename the folder /var/web-gui and reinstall.
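
As a sketch of that (the second line is the standard napp-it wget installer; run both as root):

mv /var/web-gui /var/web-gui.old            # keep the old folder as a backup
wget -O - www.napp-it.org/nappit | perl     # reinstall napp-it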
 
Re-installing fixed it. Thanks Gea.

I did use wget to install it the first time, so I'm not sure what happened. But all set now.

Thanks
 