OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

dmo, which kind of controller do you have?

Usually /etc/power.conf has to be edited to enable power management and have device-threshold lines for each hard drive with a timeout.
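For reference, a minimal /etc/power.conf sketch for drive spindown might look like the following (device paths and the 10-minute timeout are only examples, not taken from a real box; the edited file is activated with pmconfig):

# /etc/power.conf (excerpt) - adjust device paths to your own disks
autopm                enable
cpupm                 enable
# spin each data disk down after 10 minutes of idle time
device-thresholds     /dev/dsk/c2t0d0    10m
device-thresholds     /dev/dsk/c2t1d0    10m

# apply the new settings
pmconfig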
 
I am having trouble reliably connecting to SMB and NFS shares on OI from OS X 10.7 Lion. I am aware of the SMB Finder bug, but when I try to connect as Guest it is not allowed, while Windows can connect fine. Maybe I'm not up to speed with how this works.

NFS shares are also messed up. I can mount any share and read any file, but when copying a file the transfer locks up and randomly reports a disconnect. I tried "sudo mount_nfs -o sync -o vers=3 storageserver:/yourpool/yourshare localmount/" as recommended on the OI site, but I still can't write reliably to any NFS share. Any advice?
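A couple of things that might be worth trying (these are generic OS X NFS client options and a server-side check, not something confirmed in this thread; the hostname and paths are the same placeholders as above):

# from OS X: force a reserved source port and TCP, which some NFS servers require
sudo mount_nfs -o resvport,tcp,vers=3 storageserver:/yourpool/yourshare localmount/

# on the OI box: check how the filesystem is actually exported
zfs get sharenfs yourpool/yourshare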
 
@_Gea

I'm confused about the power management options. Here's what I'm trying to achieve:

1) Spin down (or lower the power usage of) the hard drives after 10 minutes.
2) Lower CPU power usage as much as possible when not in use.
3) Lower the entire system's power usage to an absolute minimum when resources are not needed.
4) Hibernate or put the system on standby when it has not been used for 60 minutes.

Can you please clarify how to achieve these goals (if possible)?

Thanks again for an excellent product. I've tried a lot of combinations, but napp-it is by FAR the best out there!!

Best regards
Jim
 
Well, I went ahead and upgraded; it was fairly painless. For some reason the pkg script broke after the upgrade because it needed libssl 0.9.8 but only 1.0.0 was available. I grabbed it from a Solaris VM and now it's working. CrashPlan appears to have broken as it has a dependency on svc:/milestone/sysconfig, which is absent. I e-mailed CrashPlan.

Additionally, the new Solaris 11 has a new zfs/zpool version - it is saying my root and data pools are using an older format. What is the new zpool version? I can't seem to find out - is it 32? My zpools are at version 31. Should I upgrade them?
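A quick way to check (the pool name is just an example; note that a zpool upgrade is one-way, older releases can no longer import the pool afterwards):

# list the pool versions the installed release supports, with their features
zpool upgrade -v

# show the version a specific pool is currently at
zpool get version rpool

# upgrade a pool in place
zpool upgrade rpool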
 
thanks danswartz

Ah:

32  One MB blocksize
33  Improved share support

I guess there are some new features. I wonder if you can share subdirectories via SMB.
 
Finally, Solaris 11 has been released today, after more than 5 years of development. You can download the final release here:
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html


Or upgrade your Solaris 11 Express installation:
http://download.oracle.com/docs/cd/E23824_01/html/E23811/glpgv.html#glpdr

Just to inform you:
napp-it 0.6d is basically running on Solaris 11 final.
I will upload a preview tomorrow.

Only some minor corrections were needed, together with a symlink from
/usr/lib/libssl.so.0.9.8 to libssl.so.1.0.0 and from
/usr/lib/libcrypto.so.0.9.8 to libcrypto.so in the same folder.
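For reference, the symlinks described above would be created roughly like this (a sketch based on the description, run as root):

# make binaries that look for the 0.9.8 libraries use the 1.0.0 versions instead
ln -s /usr/lib/libssl.so.1.0.0   /usr/lib/libssl.so.0.9.8
ln -s /usr/lib/libcrypto.so      /usr/lib/libcrypto.so.0.9.8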
 
@_Gea

I'm confused about the power management options. Here's what I'm trying to achieve:

1) Spin down (or lower the power usage of) the hard drives after 10 minutes.
2) Lower CPU power usage as much as possible when not in use.
3) Lower the entire system's power usage to an absolute minimum when resources are not needed.
4) Hibernate or put the system on standby when it has not been used for 60 minutes.

Can you please clarify how to achieve these goals (if possible)?

Thanks again for an excellent product. I've tried a lot of combinations, but napp-it is by FAR the best out there!!

Best regards
Jim

You can achieve 1 and 2, depending on your CPU
(set the settings in power.conf from default to enable);
Google or search this thread for power.conf or Solaris power management.

3 and 4 are not really achievable.
Sun developed Solaris for datacenter and enterprise needs.
Performance is important, energy saving only a bit.

If you want to save power at home, shut down the server in the evening and power it on when needed -
maybe with the help of a simple power timer. (There is a power-down job in napp-it.)
 
*solved*

I'm not sure if anyone else is running Solaris 11 yet, but my L2ARC (OCZ Vertex 2 64GB) seems to be filling much more slowly than normal under my usual workload. After a number of hours it is still at 0 (read via zpool iostat -v).

Does anyone know if there are options for L2ARC filling or new changes? I upgraded to the latest zpool version (33).

edit:

cache settings for the zpool:
data primarycache all default
data secondarycache all default

edit2: Maybe my ARC is not full yet. We shall see. Yup, that was it - never mind!
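For anyone else watching L2ARC warm-up, these should show what is going on (the pool name 'data' is taken from the post above; the kstat fields are the standard arcstats counters):

# current ARC size, ARC target and maximum
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max

# bytes already written to the cache device
kstat -p zfs:0:arcstats:l2_size

# per-device view of the cache vdev filling up
zpool iostat -v data 5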
 
I've got an urgent share/permissions problem. I had been running a Nexenta Community Edition box for quite some time with a RAIDZ2 pool on it. I decided to switch over to OI + napp-it to match a new SAN I just built that we are running. I imported my existing pool with no problem, and everything is healthy. I somewhat hastily upgraded the pool version from 26 to 28, which I probably shouldn't have, because now I can't go back. The problem is that when I started turning on SMB and NFS sharing for all my imported folders, all the ACL and permission info stayed 'zeroed' out. Every folder looks like this:
Folder-ACL= root-only, SMB-SHARE-All= none, PERM= "-"

Even the permissions on the pool itself are set to "-" instead of 755+ like usual when creating a pool.

If I try to change anything via the napp-it GUI (ACL, SMB share, or Perm), either nothing happens/changes OR I get an error message which says:
"chmod: WARNING: can't access /tank/ISOchmod:" (the example folder here is tank/ISO)

I have to get this fixed somehow, and quickly. I have done multiple napp-it installs and done the passwd root change and reboot afterwards just like other times. It's just that this imported pool is borked!
 
Adding Oracle Solaris 11 to my benchmark pool and will post results tomorrow.

The pool includes: Solaris 11 Express TXT, Solaris 11 Express GUI, Solaris 11 GUI, Solaris 11 TXT, OpenIndiana 151a TXT, OpenIndiana 151a GUI, NexentaStor, Nexenta Core, and FreeNAS.

Edit: Both Bonnie and IOMeter are being used for testing.
 
I've got an urgent share/permissions problem. I had been running a Nexenta Community Edition box for quite some time with a RAIDZ2 pool on it. I decided to switch over to OI + napp-it to match a new SAN I just built that we are running. I imported my existing pool with no problem, and everything is healthy. I somewhat hastily upgraded the pool version from 26 to 28, which I probably shouldn't have, because now I can't go back. The problem is that when I started turning on SMB and NFS sharing for all my imported folders, all the ACL and permission info stayed 'zeroed' out. Every folder looks like this:
Folder-ACL= root-only, SMB-SHARE-All= none, PERM= "-"

Even the permissions on the pool itself are set to "-" instead of 755+ like usual when creating a pool.

If I try to change anything via the napp-it GUI (ACL, SMB share, or Perm), either nothing happens/changes OR I get an error message which says:
"chmod: WARNING: can't access /tank/ISOchmod:" (the example folder here is tank/ISO)

I have to get this fixed somehow, and quickly. I have done multiple napp-it installs and done the passwd root change and reboot afterwards just like other times. It's just that this imported pool is borked!

Possible problems and solutions:
NexentaStor mounts all pools under /volumes,
whereas all other OSes mount them under /.

If you import, you must set a new mountpoint
(napp-it 0.6d should take care of that).

napp-it 0.6 buffers the ZFS and disk info for much better performance
with a lot of disks, ZFS filesystems and snaps:
try menu ZFS folder - reload if you have napp-it 0.6.

In any case, I would suggest updating to 0.6d because the ACL extension
is much better there. Try an export/import of the pool with 0.6d.
 
Possible problems and solutions:
NexentaStor mounts all pools under /volumes,
whereas all other OSes mount them under /.

If you import, you must set a new mountpoint
(napp-it 0.6d should take care of that).

napp-it 0.6 buffers the ZFS and disk info for much better performance
with a lot of disks, ZFS filesystems and snaps:
try menu ZFS folder - reload if you have napp-it 0.6.

In any case, I would suggest updating to 0.6d because the ACL extension
is much better there. Try an export/import of the pool with 0.6d.

This is what happened. I have tried exporting and importing into 0.6d. My pool is called tank and got imported with a mountpoint of /volumes/tank. There is some progress in that under ZFS folders I now see the pool tank as having perm 755+. The subfolders still show the same as before and can't be changed.

I am trying to change the mountpoint of the entire pool by doing the following:
# mkdir /tank
# zfs set mountpoint=/tank volumes/tank

I get an error saying cannot open 'volumes/tank': dataset does not exist
 
Hurm, I am getting a permission denied error on Solaris 11 TXT after a fresh install when running passwd root.
 
This is what happened. I have tried exporting and importing into 0.6d. My pool is called tank and got imported with a mountpoint of /volumes/tank. There is some progress in that under ZFS folders I now see the pool tank as having perm 755+. The subfolders still show the same as before and can't be changed.

I am trying to change the mountpoint of the entire pool by doing the following:
# mkdir /tank
# zfs set mountpoint=/tank volumes/tank

I get an error saying cannot open 'volumes/tank': dataset does not exist

The name of your pool is tank, so after the import try:
# zfs set mountpoint=/tank tank

And you should not have a pre-existing folder /tank when setting the mountpoint to /tank.
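To double-check afterwards, the following can verify the new mountpoint and reset any child filesystem that still carries an explicit /volumes path (dataset names are taken from this thread):

# confirm where the pool now mounts
zfs get mountpoint tank

# children normally inherit; if one was set explicitly under /volumes, clear it
zfs inherit mountpoint tank/ISO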
 
Still getting it; going to try making sure the permissions are correct on the config files under /etc.

You may try to activate root as a valid account via the napp-it menu Services - SSH.
In that case, you can connect remotely as root, e.g. via WinSCP.
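A rough manual equivalent of what that menu does (an assumption, not napp-it's exact steps):

# 1) in /etc/ssh/sshd_config set:  PermitRootLogin yes
# 2) turn root from a role into a normal login account
pfexec rolemod -K type=normal root
# 3) restart the ssh service
pfexec svcadm restart svc:/network/ssh:default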
 
You may try to activate root as a valid account via the napp-it menu Services - SSH.
In that case, you can connect remotely as root, e.g. via WinSCP.

I think there might be more stringent enforcement of password aging now that's causing the problem. I ran into the same problem playing with Solaris 11 last night; no matter what I tried, it would throw a permission denied error when I tried to passwd root.

So I shut the thing down, and this afternoon I gave it another try... presto, it changed without giving me any grief.
 
I have been running the 0.500 build from July 3. I just upgraded to 0.600d from November 10. During the install, it spewed this:

agent-request -> request: /var/web-gui/data/napp-it/_log/tmp/read_zfs.request
Can't load '/var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so' for module IO::Tty: ld.so.1: perl: fatal: relocation error: file /var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so: symbol PL_dowarn: referenced symbol not found at /usr/lib/perl5.8/DynaLoader.pm line 225.
at /var/web-gui/data/napp-it/CGI/IO/Tty.pm line 30
Compilation failed in require at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
Compilation failed in require at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
Compilation failed in require at /var/web-gui/data/napp-it/zfsos/_lib/zfslib.pl line 1491.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/_lib/zfslib.pl line 1491.
Compilation failed in require at /var/web-gui/data/napp-it/zfsos/_lib/scripts/agent-request.pl line 103.

But everything *seems* to be working?
 
I have been running the 0.500 build from July 3. I just upgraded to 0.600d from November 10. During the install, it spewed this:

agent-request -> request: /var/web-gui/data/napp-it/_log/tmp/read_zfs.request
Can't load '/var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so' for module IO::Tty: ld.so.1: perl: fatal: relocation error: file /var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so: symbol PL_dowarn: referenced symbol not found at /usr/lib/perl5.8/DynaLoader.pm line 225.
at /var/web-gui/data/napp-it/CGI/IO/Tty.pm line 30
Compilation failed in require at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
Compilation failed in require at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
Compilation failed in require at /var/web-gui/data/napp-it/zfsos/_lib/zfslib.pl line 1491.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/_lib/zfslib.pl line 1491.
Compilation failed in require at /var/web-gui/data/napp-it/zfsos/_lib/scripts/agent-request.pl line 103.

But everything *seems* to be working?

This is a message from the background agents used for job management, appliance grouping
and replication. Have you used the update function within napp-it? If so, update via the online updater
wget -O - www.napp-it.org/nappit | perl and reboot afterwards.
 
I am confused. I did do 'wget -O - www.napp-it.org/nappit | perl'. That's how I did the update. Or do you mean I should have rebooted? I didn't think I needed to except for the initial install of napp-it?

After the update 0.5 -> 0.6 a reboot is needed.

Agent management is in the init script, so it can be started and stopped together with napp-it
via /etc/init.d/napp-it [start/stop/restart]
 
Ah, okay, thanks. I want to reboot the all in one anyway so I can look at some bios settings, so this is a good excuse to do that.
 
Ah, okay, thanks. I want to reboot the all in one anyway so I can look at some bios settings, so this is a good excuse to do that.

You could also restart napp-it at console via
sudo /etc/init.d/napp-it restart
 
Okay, cool. Any concerns about this message that came out then:

/var/web-gui/tools/httpd/napp-it-mhttpd: started as root without requesting chroot(), warning only
 
I must be having a long day; neither Solaris 11 TXT nor GUI wanted to install cleanly, and both constantly gave errors. napp-it worked, but I eventually gave up after reinstalling everything, only to be greeted by a frozen Oracle logo on startup. I reinstalled OI 151a and installed napp-it 0.600d, but every time I try to add a vdev to the pool I created, I get an error that no pools exist. On the Pools page it does not list a pool under zpool list, but under zpool status the initial pool I created is there. Attempting to add the disks via the command line gives me errors about the disks being members of another zpool, even with the -f option.

Is there an easy way to strip away any previous config info from the ~20 drives I have installed? I attribute this to the fact that I have installed nearly a dozen OSes on this box and destroyed/recreated the same ZFS pool for each OS.
 
dmo, which kind of controller do you have?

Usually /etc/power.conf has to be edited to enable power management and have device-threshold lines for each hard drive with a timeout.

I have an IBM M1015 flashed with LSI 9240-8i firmware. Does this not allow spindown? :(

There definitely are device-threshold lines for each of my hard drives... Spindown is pretty obvious though, right? When the drives spin down they need to spin back up again. It's quite audible, but I can feel that all my drives are still spinning...
 
Okay, cool. Any concerns about this message that came out then:

/var/web-gui/tools/httpd/napp-it-mhttpd: started as root without requesting chroot(), warning only

That's the usual warning when starting the minihttpd webserver from the root account.
The webserver is then running under the napp-it account without any special permissions, but with sudo to root.
(Usually it's not recommended to start a webserver with extended permissions, but it is needed in this case to manage the box.)
 
I must be having a long day; neither Solaris 11 TXT nor GUI wanted to install cleanly, and both constantly gave errors. napp-it worked, but I eventually gave up after reinstalling everything, only to be greeted by a frozen Oracle logo on startup. I reinstalled OI 151a and installed napp-it 0.600d, but every time I try to add a vdev to the pool I created, I get an error that no pools exist. On the Pools page it does not list a pool under zpool list, but under zpool status the initial pool I created is there. Attempting to add the disks via the command line gives me errors about the disks being members of another zpool, even with the -f option.

Is there an easy way to strip away any previous config info from the ~20 drives I have installed? I attribute this to the fact that I have installed nearly a dozen OSes on this box and destroyed/recreated the same ZFS pool for each OS.

If there is an error, you may need to reload the config in napp-it via menu ZFS folder - reload or disk - reload
(the config is buffered to keep performance good with up to thousands of ZFS filesystems and snaps, or lots of disks).

If you want to delete the disk config info you must reformat the disks. I do it with an external SATA-to-USB adapter on my PC/Mac.
(It's a safety feature, so that you cannot accidentally overwrite ZFS pool disks from pools that were not destroyed.)
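One alternative to moving the disks to another machine (a common approach, not from the reply above): ZFS stores four copies of its label, two at the start and two at the end of the device, so zeroing both ends of a disk that is definitely no longer needed clears the old pool membership. The device name and size below are hypothetical; verify them with format before writing anything.

# DANGEROUS: wipes the ZFS label areas on the named disk
DISK=/dev/rdsk/c2t5d0p0        # hypothetical whole-disk device
SIZE_MB=1907729                # hypothetical disk size in MB, taken from format/prtvtoc

# clear ~4 MB at the start and at the end of the disk
dd if=/dev/zero of=$DISK bs=1024k count=4
dd if=/dev/zero of=$DISK bs=1024k count=4 seek=$((SIZE_MB - 4))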
 
I figured out reloading the config and I had the suspicion that's what I would need to do to reformat the drives. Thankfully everything is going well now. Thanks _Gea!
 
I have an IBM M1015 flashed with LSI 9240-8i firmware. Does this not allow spindown? :(

There definitely are device-threshold lines for each of my hard drives... Spindown is pretty obvious though, right? When the drives spin down they need to spin back up again. It's quite audible, but I can feel that all my drives are still spinning...

It should work with all controllers. Set power management from default to enable.
Also check whether fault management is disabled;
see http://www.nexenta.org/boards/1/topics/1414
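To see whether the fault management daemon is what keeps waking the disks, the module can be checked and unloaded like this (the module name is the one usually cited in that context, so treat it as an assumption and confirm it in the output first):

# list the loaded fault-management modules
fmadm config

# unload the module that periodically polls the disks
pfexec fmadm unload disk-transport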
 
Hi, I'm getting this error under the autojob log:

Can't open perl script "/var/web-gui/data/napp-it/zfsos/15_jobs and data services/auto.pl": No such file or directory

It seems that the scheduled scrubs don't run. Will a reinstall restore the missing file? (And if I reinstall, will the settings be preserved?)

Thank you
 
Hi, I'm getting this error under the autojob log:

Can't open perl script "/var/web-gui/data/napp-it/zfsos/15_jobs and data services/auto.pl": No such file or directory

It seems that the scheduled scrubs don't run. Will a reinstall restore the missing file? (And if I reinstall, will the settings be preserved?)

Thank you

If you get such an error after an update with old scrub or snap jobs,
delete and recreate these jobs.
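Until the jobs are recreated, scrubs can of course be started and checked by hand (the pool name is just an example):

# start a scrub manually
zpool scrub tank

# check progress and results
zpool status -v tank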
 
Gea,
With all the hard work you put into napp-it I'm sure you're very busy, but do you have any plans or desire to support Mac OS X ZFS? I'm currently beta testing the Z-410 storage system for the Mac from http://tenscomplement.com/ . I have a 5-drive raidz set up, but I miss your napp-it GUI.
 
Gea,
With all the hard work you put into napp-it I'm sure you're very busy, but do you have any plans or desire to support Mac OS X ZFS? I'm currently beta testing the Z-410 storage system for the Mac from http://tenscomplement.com/ . I have a 5-drive raidz set up, but I miss your napp-it GUI.

Hello wingfat,

I love Macs and I have a lot of them at work. napp-it as a web GUI should work
on Macs without problems. Only a webserver and a Perl interpreter are needed.

The main problem is that ZFS is not just a filesystem that you can easily adopt
on another platform as a whole. Over the last 10 years Sun developed OpenSolaris
into a complete datacenter-ready server OS around ZFS with a lot of key features
like the kernel-based SMB and NFS servers, virtual switches with Crossbow, iSCSI
targets with COMSTAR, DTrace, Windows-compatible ACLs and a lot of other things.
That whole package is the reason why there is such a hype around ZFS.

ZFS on FreeBSD, and I suppose on Z-410, is so different in possibilities and
in so many settings and procedures that napp-it would have to become a completely
different piece of software to support one of them. So do not expect a napp-it
release for *BSD or OS X.

And to be honest, Apple no longer has a focus on pro users. I do not expect a future
for ZFS on OS X. ZFS is a technology for large servers, not for single-disk desktops.
Apple had the chance to buy Sun, but Steve Jobs decided against it.

Even the remaining pro users that may need such a server for video editing do not
justify the needed effort. If you need or like ZFS, it is not a problem to use it from
OS X via iSCSI, NFS or SMB. The only missing piece is a 10Gb Ethernet to Thunderbolt
link for iMacs. On a Mac Pro, you can already connect your OS X machine to a Solaris
SAN via 10Gb Ethernet.

Maybe this will change some day. But currently that's my opinion.
 
Hello wingfat,

I love Macs and I have a lot of them at work. napp-it as a web GUI should work
on Macs without problems. Only a webserver and a Perl interpreter are needed.

The main problem is that ZFS is not just a filesystem that you can easily adopt
on another platform as a whole. Over the last 10 years Sun developed OpenSolaris
into a complete datacenter-ready server OS around ZFS with a lot of key features
like the kernel-based SMB and NFS servers, virtual switches with Crossbow, iSCSI
targets with COMSTAR, DTrace, Windows-compatible ACLs and a lot of other things.
That whole package is the reason why there is such a hype around ZFS.

ZFS on FreeBSD, and I suppose on Z-410, is so different in possibilities and
in so many settings and procedures that napp-it would have to become a completely
different piece of software to support one of them. So do not expect a napp-it
release for *BSD or OS X.

And to be honest, Apple no longer has a focus on pro users. I do not expect a future
for ZFS on OS X. ZFS is a technology for large servers, not for single-disk desktops.
Apple had the chance to buy Sun, but Steve Jobs decided against it.

Even the remaining pro users that may need such a server for video editing do not
justify the needed effort. If you need or like ZFS, it is not a problem to use it from
OS X via iSCSI, NFS or SMB. The only missing piece is a 10Gb Ethernet to Thunderbolt
link for iMacs. On a Mac Pro, you can already connect your OS X machine to a Solaris
SAN via 10Gb Ethernet.

Maybe this will change some day. But currently that's my opinion.

Thank you for your detailed response to my inquiry. I do use napp-it on 2 servers running Solaris 11 Express (home use: video streaming and storage).
I was just curious, as this ZFS port seems like a very serious project, developed by some of the engineers who were responsible for Apple's ZFS efforts.

I have pasted part of the release notes for your review.

Release Notes
Z410 Storage Beta

build 2011.11.07
✔Fixed a recent regression that was causing performance problems when Spotlight was indexing.
✔Fixed a panic that could occur when the system was doing ID based lookups (non paths).
✔Fixed a panic that could occur when setting attributes on special files (block and char devices).
✔Adjusted memory thresholds for 32 bit kernels to help avoid memory map exhaustion panics.
✔Added additional checking for memory mapped files to help avoid rare panics under heavy load.
✔Fixed Finder renames for top-level file system.
✔Some minor changes to the system preferences panel.

–––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
build 2011.11.02
✔Fixed zfs send/receive to work with pipe output/input.
✔Added support for using older pool versions (macZFS version 8 pools) and for upgrading them to version 28.
✔Fixed the error return value from exchangedata(2) to be ENOTSUP.
✔Fixed the installer to better handle installing over existing versions of Z410.
✔Fixed some cases of a reentrancy panic sometimes seen during heavy vnode pressure.
✔Fixed a vnode ref underflow panic seen during unmounts and reboots.
✔Fixed a rare panic that could occur when VFS recycled a mapped file with dirty data.
✔Fixed a vnode reference panic that could occur when updating a spa configuration.
✔Fixed a rare panic: assertion failed: ZTOV(zp) == vp (happened only under heavy load).
✔Added additional restrictions for enabling deduplication property.
✔Removed some unnecessary kernel diagnostic logging messages.
✔Some minor edits to zfs documentation (man pages).

–––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
build 2011.10.17
✔Added support for AFP sharing of ZFS file systems. Note that there are still some edge cases that do not work (like when a ZFS file system is owned by the root account).
✔Added man page documentation for the CLI zfs command to section 8 of the man pages— use "man zfs" from the terminal.
✔Added man page documentation for the CLI zpool command to section 8 of the man pages— use "man zpool" from the terminal.
✔Added support for using Advanced Format 512e (emulated 512 byte sectors) drives with ZFS. The "ashift" pool property can now be set at pool creation as a hint to ZFS to ignore the sector size reported by the device(s) used in the pool. See zpool(8) man page "Properties" section and Example #10 for additional information.
✔Added proxy devices (created on demand at mount time) so that each ZFS file system has its own associated disk. This is necessary since some system services and applications assume that all local file systems each have their own unique device.
✔Added several IOKit layer improvements so that disk probing for multiple disk pools is more robust.
✔Added dataset names to all our proxy devices.
✔Fixed some cases where Disk Utility was showing disabled devices for active ZFS file systems.
✔Non top-level file systems are now closer to being first class citizens. Spotlight indexing still isn't working due to their "automounted" status.
✔There were several filename changes for our zfs implementation files (e.g. "/dev/zfs").
✔Added a unique icon for each ZFS file system (now possible since each file system now has an associated device).
✔Added a unique icon for the top-level filesystem.
✔For pools with redundancy, we now wait a few seconds for stragglers before bringing the pool online.
✔For pools with redundancy, late arriving devices will be automatically placed online (auto online).
✔Fixed the displaying of non top-level file systems in "zfs list" and "zfs mount" commands
✔Disabled the default system behavior of taking external file systems offline after a user logs out. ZFS filesystems will now remain mounted so make sure to unmount them if you don't want them hanging around after you logout.
✔Fixed a case where "zpool destroy" and "zpool export" would not release references on the underlying vdevs.
✔Fixed the resolved device paths for zpool create command so that the persistent path is always used.
✔Fixed some edge cases where zfs mounts were failing.
✔Fixed iostat output to accommodate longer device names.
✔We now check for a 64-bit kernel running (k64) before setting the dedup property.
✔Changed a few file system stat values for better compatibility with system services.
✔Reduced our payload size for some internal event messages (for improved Lion compatibility).
✔We now force all Spotlight Index files to have a block size no greater than 4K (the HFS+ default block size). This was done for better compatibility with Spotlight, which essentially assumes HFS+ semantics and implementation details (like a fixed 4K logical block size).


Thanks
 