OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

napp-it ver.: 0.9c1 nightly Jun.06.2013.
O/S ver.: SOLARIS 11.1

1) I can receive emails for job "status to -> Info", but do not receive any email for job "alert to -> Disk,Low,Job".

2a) Does napp-it install hdparm or other drive power management software?
-OR-
2b) How do I disable drive power management features on OS boot (AAM, APM & drive spin down)?
-OR-
2c) Do the other OSol clones (SmartOS, OmniOS & others) come with hdparm or other (advanced) drive power management software?

3) How do I set up a job in napp-it (or SOLARIS) to send an email alert immediately when a drive fails or a pool is degraded (with drive- & pool-specific information)?

4) How do I set up a job in napp-it (or SOLARIS) to send me emails weekly/bi-weekly with the S.M.A.R.T. data of the drives? (Hopefully this can be done as one email for each drive.)

5) How do I set up email notifications sent immediately after (APC) UPS events (like brownouts, blackouts, over/under voltage etc.) occur?

6) Lastly, I still cannot set CIFS/SMB permissions for USER1 to 'Full Access' & USER2 'Read-only Access' to the same shares! I have already tried this from within windows, as per _Gea's suggestions, but this method has not worked so far.

I have solved the following two endemic problems, after almost a year of struggling:
a) SSL/TLS email for email notifications/alert.
b) APC UPS driver compilation & installation. Now I can finally monitor (currently, only locally) & (have the system respond to) power events properly.

1,3
update to the newest napp-it 0.9b3 or 0.9c2 (there was a problem with suppressing repeated alerts)

2. look at power.conf (OI and OmniOS as well; napp-it menu System > Power Management for Illumos-based systems)
http://docs.oracle.com/cd/E23824_01/html/821-1451/gjwsz.html

4,5. You need to create a script and start it as "other job"
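A minimal sketch of such an "other job" script for the S.M.A.R.T. question (the disk ids, the smartctl path and the mail command below are placeholders, not napp-it defaults; adjust them for your box):

```shell
#!/bin/sh
# Sketch: collect SMART data per disk and mail one report each.
# DISKS, the smartctl path and the mail command are placeholders.
DISKS="c1t0d0 c1t1d0"

# Print a report header plus the output of the given data command.
report_for() {   # $1 = disk name, $2 = command that prints the SMART data
    printf '=== SMART report for %s ===\n' "$1"
    $2 2>&1
}

# Real use would look roughly like (illustrative only):
#   for d in $DISKS; do
#     report_for "$d" "/usr/sbin/smartctl -a /dev/rdsk/${d}s0" \
#       | mailx -s "SMART report: $d" you@example.com
#   done
```

Scheduled every week or two as an "other job", something along these lines would send one email per disk, as question 4 asks.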

6. Some Windows editions (non-Pro) do not work; use a Pro edition, the CLI (/usr/bin/chmod) or the napp-it ACL extension

ps
If you have fixed some problems, please publish them here (as a post or a PDF).
 
Gea,

Do you know if OmniOS has the high-performance SSH patches applied? Also, does it support no_cipher by default?

Just asking from a replication standpoint. Being able to kick off first-time large transfers with no cipher and the HP patch set would seem to be ideal for first-time large syncs.
 
Gea,

Do you know if OmniOS has the high-performance SSH patches applied? Also, does it support no_cipher by default?

Just asking from a replication standpoint. Being able to kick off first-time large transfers with no cipher and the HP patch set would seem to be ideal for first-time large syncs.

I don't know.
You can ask on the OmniOS mailing list:
http://lists.omniti.com/mailman/listinfo/omnios-discuss

You can also check the release notes from time to time
http://omnios.omniti.com/wiki.php/ReleaseNotes
 
Hi,

I want to create a job that deletes any files/folders in a given location that "are more than 31 days old". For this I created a small script:
Code:
#!/bin/bash
find /testpool/testfilesystem \( -atime +31 -a -ctime +31 \) -print0 -delete
The script seems fine when I run it manually (besides the annoying "find: /testpool/testfilesystem/.$EXTEND: Permission denied" - does anyone know a simple way to get rid of this error?).

However, when I run this script as a napp-it job, I get the following error:
Code:
info: sh /home/testuser/Desktop/cleanup.sh: find: bad option -delete

Any idea how I can fix this?
-TLB
 
Hi,

I want to create a job that deletes any files/folders in a given location that "are more than 31 days old". For this I created a small script:
Code:
#!/bin/bash
find /testpool/testfilesystem \( -atime +31 -a -ctime +31 \) -print0 -delete
The script seems fine when I run it manually (besides the annoying "find: /testpool/testfilesystem/.$EXTEND: Permission denied" - does anyone know a simple way to get rid of this error?).

However, when I run this script as a napp-it job, I get the following error:
Code:
info: sh /home/testuser/Desktop/cleanup.sh: find: bad option -delete

Any idea how I can fix this?
-TLB
Compare which version of "find" you're running. Run "which find" in your interactive shell, and then again as a napp-it job. Then use the full path to find in your command.
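In practice that check can look like this (the log path is just an example):

```shell
#!/bin/sh
# Print which `find` the current environment resolves to.
echo "this shell uses: $(command -v find)"

# Put the same line into the napp-it job script to compare, e.g.:
#   echo "job uses: $(command -v find)" >> /tmp/find-path.log
```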
 
Nice, three different versions of find in OI151a8:
  • Through the napp-it cmd (run as root): /bin/find
  • In the shell as root: /usr/bin/find
  • In the shell as user: /usr/gnu/bin/find
Apparently the find in /usr/gnu/bin supports the -delete option (whereas the other two don't). The following code executes without an error in my VM:
Code:
#!/bin/bash
/usr/gnu/bin/find /testpool/testfilesystem \( -atime +31 -a -ctime +31 \) -print0 -delete
Thanks unhappy_mage!!!

However it still fails on the actual hardware (even though it's the same OI version) - more debugging to do... :)
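For the ".$EXTEND" permission-denied noise mentioned earlier, one option is to prune that directory so find never descends into it (that descent is what triggers the message). A sketch, with the deletions handed to rm because GNU find's -delete implies -depth and so cannot be combined with -prune; FIND and TARGET default to the paths from the posts above:

```shell
#!/bin/sh
# Sketch: same 31-day cleanup, but skip the CIFS metadata directory
# ".$EXTEND" entirely. GNU find and xargs -r are assumed.
FIND=${FIND:-/usr/gnu/bin/find}
TARGET=${TARGET:-/testpool/testfilesystem}

cleanup_old() {
    "$FIND" "$TARGET" -name '.$EXTEND' -prune -o \
        \( -atime +31 -a -ctime +31 \) -print0 | xargs -0 -r rm -rf
}
```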
 
Are you really, really, really sure you want to delete based on atime?
No, I'm not. :) Here's how I think it works (please correct me, if I'm mistaken):
  • Find any files or folders that have not been accessed for 31 days AND
  • Find any files or folders whose inode hasn't changed in 31 days (so files or folders that have not been copied to this share or been modified)
Do I misinterpret the behavior I'm getting from the (-atime +31 -a -ctime +31) condition?

-TLB
 
mtime has nothing to do with inodes.
Code:
echo >file
would update mtime while the file keeps its inode. Has no impact on the command you run, though.

Make sure your fs is not mounted with the noatime option. Otherwise, frequently accessed but never modified files get wiped.

May I ask what kind of data is important enough to store and keep if accessed once in 30 days but unimportant enough to throw away if not?
 
mtime has nothing to do with inodes.
Correct, but ctime does and "Whenever mtime changes, so does ctime" (http://www.unix.com/tips-tutorials/20526-mtime-ctime-atime.html), hence mtime is really not that important to me. If you
Code:
echo 1 > file
on the file you created earlier, you'll notice that both times changed.

In this case, I don't really care if the file was modified years ago, what counts is that someone uploaded that file into this space and I want to keep it there for a limited time for consumption.
The share is a temp space to share files (crash dumps, test binaries, etc.) for a limited time. One day is not enough, maybe a week would be sufficient, but I have the space right now.

Does that make sense?
-TLB
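That mtime/ctime coupling is easy to check on any box (a quick sketch; GNU coreutils `stat` assumed, so the flags may differ on Solaris):

```shell
#!/bin/sh
# Create a file, modify its content, and confirm that mtime and
# ctime both advance.
f=$(mktemp)
m1=$(stat -c %Y "$f")   # mtime, seconds since epoch
c1=$(stat -c %Z "$f")   # ctime
sleep 1
echo data > "$f"        # content change: touches mtime AND ctime
m2=$(stat -c %Y "$f")
c2=$(stat -c %Z "$f")
rm -f "$f"
```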
 
_Gea,

1, 3: After updating napp-it to v0.9b3, I am finally receiving 'alert to' emails. However, it keeps emailing me about a job error that occurred more than 7-8 months ago, even though that pool is no longer available. Is there any way to clear up these old job-related errors?

Also, does the 'alert to' email notify you if a pool is degraded, has errors or if a hard drive stops working?

2: I have already read through most of the power management documentation for SOLARIS (& most OSol clones); none even mentions anything about advanced power management for hard drives.

On Windows (& LINUX) I can use 'hdparm' to perform these tasks, but this utility has not been ported for SOLARIS or any of the OSol clones.

What program do you (_Gea) use/how do you disable advanced power management features of drives (like AAM, APM & drive spin down)?

4: Thanks for the suggestion of using "other job", however I don't see any way of emailing the smart data. For now though, I have set it up so that the SMART data is written to a log on the local pool.

5: Same as above, I don't see any way to setup a job in napp-it, so that it can email alerts when UPS logs power events.

6: I have been using Pro versions of Windows (7 before & currently 8). I had asked this same question last year too, in this very same thread even. You suggested using the napp-it ACL extension to set permissions for the new user on the smb share; it (napp-it ACL extension) didn't work the last time, & it still doesn't now! When I click on the pool name in the ACL extension page, all it does is reload the page & display the pool names again.

Anyway, I have used 'chmod' utility to do the job; took almost a day to thoroughly read the documentation & then test it in a VM first (but the extra effort was well worth it, as I didn't mess anything up when using 'chmod'). The only problem was that the child objects (files & folders) didn't have their ACL updated with the new permissions by 'chmod'. I did manage to manually re-apply (reset) the permissions on all child objects from Windows Explorer (Properties -> Security tab). However with this approach, ACL on each top level (root directory of the pool) folder (& files) has to be reset individually; if you have a lot of folders (or files) at the root directory of the pool, it's going to be a major pain in the neck.

_Gea & Rectal Prolapse,

Most of the solutions were already posted in this thread, although you do have to piece them together. So I'll go ahead & write out the steps below.

APC UPS installation on SOLARIS, OSol clones:
NOTE: for more info, please refer to APC UPS Daemon.

Open a terminal window & log in as root (by issuing su at the prompt)

Install math header
Code:
pkg install header-math

Download the driver package
Code:
wget "http://sourceforge.net/projects/apcupsd/files/apcupsd - Stable/3.14.10/apcupsd-3.14.10.tar.gz"

untar it
Code:
tar xzvf apcupsd-3.14.10.tar.gz

Code:
cd apcupsd-3.14.10

Configure so that it works with USB type devices
Code:
./configure --enable-usb

Compile & install the packages
Code:
gmake
gmake install

Reboot the system
Code:
reboot -- -r

Edit the configuration file by uncommenting (remove #) & setting the appropriate values
Code:
nano /etc/opt/apcupsd/apcupsd.conf

You absolutely must change UPSCABLE & UPSTYPE, otherwise it won't work (my APC UPS uses a USB cable to connect to the system)
Code:
UPSNAME (provide_a_unique_name_for_your_ups)
UPSCABLE usb
UPSTYPE usb
#DEVICE /dev/ttya (you can leave this commented out)
BATTERYLEVEL (change to whatever you like)
MINUTES (change to whatever you like)
ANNOY (change to whatever you like)
BATTDATE (change to whatever you like)
BEEPSTATE (change to whatever you like)
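For concreteness, a filled-in version of those directives might look like this (the values are illustrative only; pick your own name and thresholds):

```
UPSNAME myups
UPSCABLE usb
UPSTYPE usb
BATTERYLEVEL 20
MINUTES 5
```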

Start the daemon
Code:
/etc/init.d/apcupsd start

Restart the daemon
Code:
/etc/init.d/apcupsd restart

View current status
Code:
/etc/opt/apcupsd/sbin/apcaccess

View power events log
Code:
tail /etc/opt/apcupsd/apcupsd.events
-OR-
Code:
more /etc/opt/apcupsd/apcupsd.events
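apcupsd runs its apccontrol script on power events, and apccontrol in turn calls per-event hook scripts from its sysconf directory if they exist; so one way to get emails on UPS events (question 5) is a small hook like this sketch (the hook path matches the install above, but the recipient and mail command are placeholders):

```shell
#!/bin/sh
# Sketch of an apcupsd event hook, e.g. saved as
# /etc/opt/apcupsd/onbattery and made executable. apccontrol passes
# the event name as $1.
ups_event_message() {   # $1 = event name
    printf "UPS event '%s' on %s at %s\n" "$1" "$(uname -n)" "$(date)"
}

# Real use (illustrative):
#   ups_event_message "${1:-onbattery}" | mailx -s "UPS alert" you@example.com
```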



TLS/SSL installation on SOLARIS, (possibly) OSol clones:

Open a terminal window & log in as root (by issuing su at the prompt)

Install math header (if you don't already have it)
Code:
pkg install math/header-math

Code:
pkg install net-ssleay

Enter the CPAN shell
NOTE: answer no for manual configuration
Code:
perl -MCPAN -e shell

NOTE: The following has to be issued from within the CPAN shell.

If, like me, you also have an old version downloaded/installed, clean it first
Code:
clean Net::SMTP::TLS

NOTE: make sure to accept all dependencies
Code:
install Net::SMTP::TLS

Update CPAN itself & reload it (these are also issued from within the CPAN shell)
Code:
install CPAN

Code:
reload cpan

You have to install an older version of the SSL package so that TLS works
Code:
install S/SU/SULLR/IO-Socket-SSL-1.68.tar.gz

Exit the CPAN shell
Code:
exit

That last line, containing the SSL package name and (correct) path, came from nezach's post #5073. Before his post I was having a lot of trouble, as I had the incorrect path; a big thank you to nezach.
 
Does anybody know how to add more space to a VM? I'm getting an error in my OI interface while trying to update it, saying that I don't have sufficient space on the HDD! I tried increasing the disk space from 16GB to 20GB in the VM settings, but it still doesn't work!

thx
 
_Gea,

1, 3: After updating napp-it to v0.9b3, I am finally receiving 'alert to' emails. However, it keeps emailing me about a job error that occurred more than 7-8 months ago, even though that pool is no longer available. Is there any way to clear up these old job-related errors?

Also, does the 'alert to' email notify you if a pool is degraded, has errors or if a hard drive stops working?

2: I have already read through most of the power management documentation for SOLARIS (& most OSol clones); none even mentions anything about advanced power management for hard drives.

On Windows (& LINUX) I can use 'hdparm' to perform these tasks, but this utility has not been ported for SOLARIS or any of the OSol clones.

What program do you (_Gea) use/how do you disable advanced power management features of drives (like AAM, APM & drive spin down)?

4: Thanks for the suggestion of using "other job", however I don't see any way of emailing the smart data. For now though, I have set it up so that the SMART data is written to a log on the local pool.

5: Same as above, I don't see any way to setup a job in napp-it, so that it can email alerts when UPS logs power events.

6: I have been using Pro versions of Windows (7 before & currently 8). I had asked this same question last year too, in this very same thread even. You suggested using the napp-it ACL extension to set permission for the new user on the smb share, it (napp-it ACL extension) didn't work the last time, & it still doesn't now! When I click on the pool name, in ACL extension page, all it does is just reloads the page & displays the pool names again.

Anyway, I have used 'chmod' utility to do the job; took almost a day to thoroughly read the documentation & then test it in a VM first (but the extra effort was well worth it, as I didn't mess anything up when using 'chmod'). The only problem was that the child objects (files & folders) didn't have their ACL updated with the new permissions by 'chmod'. I did manage to manually re-apply (reset) the permissions on all child objects from Windows Explorer (Properties -> Security tab). However with this approach, ACL on each top level (root directory of the pool) folder (& files) has to be reset individually; if you have a lot of folders (or files) at the root directory of the pool, it's going to be a major pain in the neck.

1.
Jobs are in /var/web-gui/data/napp-it/_log/jobs/
Errors are logged in .err files. Delete them

2.
I do not need power management

3-5. You can use Perl to send email, similar to my alert script
/var/web-gui/data/napp-it/zfsos/_lib/scripts/job-email.pl

6.
you cannot set ACLs with the ACL management on pools, only on filesystems and folders
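For point 1, deleting the stale error files can be scripted (a sketch; the directory is the one named above, made overridable here so it can be tried safely, and GNU find's -maxdepth is assumed):

```shell
#!/bin/sh
# Remove stale napp-it job error files (*.err) from the job log dir.
JOBDIR=${JOBDIR:-/var/web-gui/data/napp-it/_log/jobs}

clear_job_errors() {
    find "$JOBDIR" -maxdepth 1 -type f -name '*.err' -exec rm -f {} +
}
```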
 
1.
Jobs are in /var/web-gui/data/napp-it/_log/jobs/
Errors are logged in .err files. Delete them

2.
I do not need power management

3-5. You can use Perl to send email, similar to my alert script
/var/web-gui/data/napp-it/zfsos/_lib/scripts/job-email.pl

6.
you cannot set ACLs with the ACL management on pools, only on filesystems and folders

1: There are no error (.err) files in that directory (/var/web-gui/data/napp-it/_log/jobs/), only .log & .par.

I also have that alert job set to email me every 12 hours if there are any errors, but it is emailing me at most once a day, and even then it only sends emails once every two days (skipping a day in between).

2: Exactly; but there doesn't seem to be any way of disabling advanced power management (which is built into hard drives) from SOLARIS and clones. Constant drive spin-down can reduce the longevity of a hard drive, as drive motors are rated for a fixed number of start-stop cycles. Spinning the drives up and down also causes temperatures to fluctuate, causing temperature-related stress and reducing their lifetime.

3, 4, 5: Wow, that's some complex code (not to mention a lot of it)! I haven't done any programming for a while now, and I don't know Perl either. That said, I'll look into it if/when I can and use your script as a template. Thanks.

6: Well, it wasn't the pool, rather the root folder in the pool; but anyway, thanks to you I was still able to use chmod to solve that little problem. Unfortunately, chmod doesn't seem to automatically propagate the new/updated ACL permissions to child objects.

Seeing how SOLARIS & the OSol clones are geared towards servers and target enterprises, they sure as hell don't have much in the way of built-in monitoring and reporting.

Well, thanks for all the help _Gea.
 
1: There are no error (.err) files in that directory (/var/web-gui/data/napp-it/_log/jobs/), only .log & .par.

I also have that alert job set to email me every 12 hours if there are any errors, but it is emailing me at most once a day, and even then it only sends emails once every two days (skipping a day in between).

Alert jobs are intended to run more often, for example every 15 minutes, to report errors when they happen. Once an alert has been sent, repeated sends are blocked for 24h.

If you fix the error in between and the error then re-happens, you get a new alert immediately.
 
Before you use a disk for ZFS it's good practice to wipe it with zeroes. Just in case.
I've used ZFS for many years and followed the ZFS mailing lists at Sun Microsystems from the very beginning - and I have never heard anyone recommend wiping a disk before using it.

Where have you heard this, and do you have more information or a link?
 
I've used ZFS for many years and followed the ZFS mailing lists at Sun Microsystems from the very beginning - and I have never heard anyone recommend wiping a disk before using it.

Where have you heard this, and do you have more information or a link?

Personal experience, not with ZFS but with filesystems in general. If you have ever tried to format an FFS disk with ext2/3 and later have it bomb in funny ways at a fsck, you develop habits.

Data on disk can be interpreted - in the wrong way by the wrong tools. If some automounter/checker/whatever tries to mount your ZFS disk as some other filesystem because remains of it are left in ZFS's unused areas, all sorts of funny side effects could happen.

It's best to start from a deterministic point. As I said, just a personal habit but not an unreasonable one I think.
 
I've used ZFS for many years and followed the ZFS mailing lists at Sun Microsystems from the very beginning - and I have never heard anyone recommend wiping a disk before using it.

Where have you heard this, and do you have more information or a link?

This is IMHO not ZFS related, but before employing a disk and deploying it into an array, I'd like to know if it is up to specs and standards.

I am using a "burn-in", just like memtest for RAM, on new and used/migrated disks.
For new disks I run the tool at least 3x, and 1x for a used one... if there are pending sectors afterwards
(and I've seen this happen quite a few times on brand-new disks), the disk goes RMA or down the drain.

...I use this script: http://lime-technology.com/forum/index.php?topic=2817.0
It is particularly useful for unRAID, but there is no harm in using it for any array type.
 
In my experience, new disks are dead on arrival or ok for first use.

With ZFS I have never done such pre-tests. I know them only from using disks without ZFS filesystems (without checksums) or with hardware RAID.
ZFS reports any errors, when they happen, much more accurately than any pre-test - always.
 
Yes, technically I do agree.
Maybe this is a psych/mind-at-rest thing...like you *did* switch off the coffee-maker, you *know* it, you well *remember* that you did, but when you leave home, you double check (or even pull the plug;)
 
Alert jobs are intended to run more often example every 15 minutes to get errors when they happen. Once an alert was send it blocks repeated sends for 24h.

If you fix the error between and the error re happens, you get a new error immediatly.

No, the error happened more than 6 months ago; since then it has never happened again, as I deleted that old pool. napp-it however still keeps sending me alert emails about that old error.

Anyway, after trying a lot of different things it looks like it's finally been fixed. Although I'm not exactly sure what I did to fix it.
 
No, the error happened more than 6 months ago; since then it has never happened again, as I deleted that old pool. napp-it however still keeps sending me alert emails about that old error.

Anyway, after trying a lot of different things it looks like it's finally been fixed. Although I'm not exactly sure what I did to fix it.

Disk and capacity errors are triggered by zpool status/list.
Job errors are triggered by .err files in the job/log folder. They are created when a job fails.

ps
if you delete a pool, jobs for that pool are not deleted automatically but trigger errors.
 
Hi guys.
I was wondering if someone could help me out regarding those Tty.so errors on OmniOS and napp-it 0.9b3 nightly Aug.17.2013.

I recently updated my OmniOS machine with the standard "pkg update" command. I also updated napp-it to the newest version,
but I can't remember which update I did first... :s

My current OmniOS is omnios-b281e50 , and Napp-IT 0.9b3 nightly Aug.17.2013.

Every time I go into Napp-IT->Users Section I get these following messages :

Code:
Software error:
Can't load '/var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so' for module IO::Tty: ld.so.1: perl: fatal: /var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so: wrong ELF class: ELFCLASS32 at /usr/perl5/5.16.1/lib/i86pc-solaris-thread-multi-64/DynaLoader.pm line 190.
 at /var/web-gui/data/napp-it/CGI/IO/Tty.pm line 30.
Compilation failed in require at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
Compilation failed in require at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
Compilation failed in require at /var/web-gui/data/napp-it/zfsos/_lib/userlib.pl line 318.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/_lib/userlib.pl line 318.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.
Software error:
[Tue Aug 27 17:47:39 2013] admin.pl: Can't load '/var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so' for module IO::Tty: ld.so.1: perl: fatal: /var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so: wrong ELF class: ELFCLASS32 at /usr/perl5/5.16.1/lib/i86pc-solaris-thread-multi-64/DynaLoader.pm line 190.
[Tue Aug 27 17:47:39 2013] admin.pl:  at /var/web-gui/data/napp-it/CGI/IO/Tty.pm line 30.
[Tue Aug 27 17:47:39 2013] admin.pl: Compilation failed in require at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
[Tue Aug 27 17:47:39 2013] admin.pl: BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
[Tue Aug 27 17:47:39 2013] admin.pl: Compilation failed in require at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
[Tue Aug 27 17:47:39 2013] admin.pl: BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
[Tue Aug 27 17:47:39 2013] admin.pl: Compilation failed in require at /var/web-gui/data/napp-it/zfsos/_lib/userlib.pl line 318.
[Tue Aug 27 17:47:39 2013] admin.pl: BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/_lib/userlib.pl line 318.
Compilation failed in require at admin.pl line 442.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.


I haven't found any solutions regarding this matter.
Any help would be much appreciated.


p.s. Is there any good way of limiting the console error messages?

Thanks in advance
Best regards,

Svavar - Reykjavik - Iceland
 
I haven't found any solutions regarding this matter.
Any help would be much appreciated.


p.s. Is there any good way of limiting the console error messages?

Thanks in advance
Best regards,

Svavar - Reykjavik - Iceland

Perl is different on OmniOS and needs different IO modules,
see Problems with tty::io

e.g. in menu Pools or Users: OmniOS stable and OmniOS bloody need different Perl modules.
Napp-it 0.9 should detect the Omni build and use one of the available modules.

see http://www.napp-it.org/downloads/omnios.html

Code:
cp -r /var/web-gui/data/tools/omni_bloody/. /var/web-gui/data/napp-it/
or
cp -r /var/web-gui/data/tools/omni_stable/. /var/web-gui/data/napp-it/


about console messages
- you can limit dns messages by adding a hostname for 127.0.0.1
- most other messages are sudo messages. They are suppressed on current napp-it after a reboot

all other messages are messages from programs that are started as root
 
_Gea, any plans to support custom mount points soon? I love napp-it but sadly can't use it for half of my filesystems, so just curious.
 
Hi,

Did anyone else notice an issue with the napp-it menu items after upgrading to 0.9b3?
I have two systems that I upgraded a few days ago where the menu now looks like this (server name edited out) on OI 151a8:
29l1dnl.png

When I hover the mouse over the menu items, the correct values "drop down". Both systems had 0.9a3 installed before the upgrade (through the napp-it UI). I've already rebooted the system to no avail.

I have a different OI system that I upgraded to napp-it 0.9b3 a little while back, and I don't see this issue there. I also noticed that the placement of the napp-it logo is different from the "fine" system: the logo is located left of the system name rather than on top of it (see screenshot above).

Any ideas what is wrong/how to fix this?
TLB
 
Hi,

Did anyone else notice an issue with the napp-it menu items after upgrading to 0.9b3?
I have two systems that I upgraded a few days ago where the menu now looks like this (server name edited out) on OI 151a8:
29l1dnl.png

When I hover the mouse over the menu items, the correct values "drop down". Both systems had 0.9a3 installed before the upgrade (through the napp-it UI). I've already rebooted the system to no avail.

I have a different OI system that I upgraded to napp-it 0.9b3 a little while back, and I don't see this issue there. I also noticed that the placement of the napp-it logo is different from the "fine" system: the logo is located left of the system name rather than on top of it (see screenshot above).

Any ideas what is wrong/how to fix this?
TLB

Do a browser reload to load the new css
 
you can do that from the command line. zfs set mountpoint=/path/to pool/dataset

Yes, we can do everything napp-it does or doesn't from the command line. My point was that when a custom mount point is used, a lot of the zfs commands from napp-it stop working.
 
ld.so.1: netatalk: fatal: libldap-2.4.so.2: open failed: No such file or directory

I'm having a hard time figuring this one out. I updated and was trying to troubleshoot AFP performance issues; I uninstalled and reinstalled and got this error. I then made a completely new VM with a new install, and got the same issue. Help??

thanks!
 
Hi,

I started out to use my OI 151a8 NAS box as a TimeMachine target for a Mac OS X 10.6.8 notebook. I've installed netatalk AFP in napp-it 0.9b3 using the following configuration:
Code:
;
; Netatalk 3.x configuration file
;

[Global]
; Global server settings
afpstats = yes
mimic model = TimeCapsule6,106

[TimeMachine]
path = /tank/TimeMachine
vol size limit =  300000
time machine = yes
valid users = user1

The /tank/TimeMachine folder has the following permissions:
Code:
drwxrwx---+  3 user1 root           4 Sep  2 19:27 TimeMachine
     0:user:user1:list_directory/read_data/add_file/write_data
         /add_subdirectory/append_data/read_xattr/write_xattr/execute
         /delete_child/read_attributes/write_attributes/delete/read_acl
         /write_acl/write_owner/synchronize:file_inherit/dir_inherit:allow
     1:user:root:list_directory/read_data/add_file/write_data
         /add_subdirectory/append_data/read_xattr/write_xattr/execute
         /delete_child/read_attributes/write_attributes/delete/read_acl
         /write_acl/write_owner/synchronize:file_inherit/dir_inherit:allow
     2:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
         /append_data/read_xattr/write_xattr/execute/read_attributes
         /write_attributes/read_acl/write_acl/write_owner/synchronize:allow
     3:group@:list_directory/read_data/add_file/write_data/add_subdirectory
         /append_data/read_xattr/execute/read_attributes/read_acl
         /synchronize:allow
     4:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow
(note that IDs 2…4 were created by trying to run Time Machine, I did not create them)

However, trying to use Time Machine from the Mac, I get the following error after a few seconds:
Code:
The backup disk image "/Volumes/TimeMachine-1/MyMac.sparsebundle" could not be created (error 35).

When I open the TimeMachine share directly on the Mac, I see a MyMac.tmp.sparsebundle "file". When I click on "Show Package Contents", I see some files in there, but I can't delete any of those from the Mac:

Code:
The operation can't be completed because you don't have permission to access some of the items.

Looking at the file permissions on Solaris, I see:
Code:
user1@NAS:/tank/TimeMachine# ls -vl
total 4
drwx--S---+  3 user1 staff          6 Sep  2 19:27 MyMac.tmp.sparsebundle
     0:user:user1:list_directory/read_data/add_file/write_data
         /add_subdirectory/append_data/read_xattr/write_xattr/execute
         /delete_child/read_attributes/write_attributes/delete/read_acl
         /write_acl/write_owner/synchronize:file_inherit/dir_inherit
         /inherited:allow
     1:user:root:list_directory/read_data/add_file/write_data
         /add_subdirectory/append_data/read_xattr/write_xattr/execute
         /delete_child/read_attributes/write_attributes/delete/read_acl
         /write_acl/write_owner/synchronize:file_inherit/dir_inherit
         /inherited:allow
     2:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
         /append_data/read_xattr/write_xattr/execute/read_attributes
         /write_attributes/read_acl/write_acl/write_owner/synchronize:allow
     3:group@:read_xattr/read_attributes/read_acl/synchronize:allow
     4:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow


Are the permissions on the /tank/TimeMachine the problem?
Here the contents of the log file:
Code:
9/2/13 7:26:52 PM	com.apple.backupd[4310]	Starting standard backup
9/2/13 7:27:07 PM	com.apple.backupd[4310]	Attempting to mount network destination using URL: afp://user1@NAS._afpovertcp._tcp.local/TimeMachine
9/2/13 7:27:08 PM	com.apple.backupd[4310]	Mounted network destination using URL: afp://user1@NAS._afpovertcp._tcp.local/TimeMachine
9/2/13 7:27:09 PM	com.apple.backupd[4310]	Creating disk image /Volumes/TimeMachine-1/MyMac.sparsebundle
9/2/13 7:27:10 PM	com.apple.backupd[4310]	Error 35 creating backup disk image
9/2/13 7:27:10 PM	com.apple.backupd[4310]	Failed to create disk image /Volumes/TimeMachine-1/MyMac.sparsebundle, status: 35
9/2/13 7:27:15 PM	com.apple.backupd[4310]	Backup failed with error: 20
9/2/13 7:27:16 PM	com.apple.backupd[4310]	Ejected Time Machine network volume.

Any ideas?
TLB
 
Some very odd behavior and I'm stumped.

Last night, scrub was slowing I/O to a crawl so I decided to cancel it. I'd also had some files that were mistakenly renamed so I ran a rollback on my General filesystem. All seemed good.

A couple of hours later, no network protocols work - all the other filesystems seem to be ok (VMware datastore, AFP for Time Machine).. just not the one I ran a rollback on. Can't figure it out. Disabled sharing, re-enabled, ran another rollback, no dice. The files seem to be intact when I navigate the mount on the napp-it box (running Solaris 11, AIO).

Another issue - umount -f / zfs unmount -f hang.. as would rollback, until I booted in single-user mode. That said, nothing was using the filesystem, so I have no idea wtf is going on. I started an rsync to a newly created filesystem.
 
Hi,

I started out to use my OI 151a8 NAS box as a TimeMachine target for a Mac OS X 10.6.8 notebook. I've installed netatalk AFP in napp-it 0.9b3 using the following ..

Any ideas?
TLB


Try ACL of shared ZFS:
- everyone@=modify or full

and ZFS properties
- nbmand =off
- aclmode and aclinherit = passthrough
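Spelled out as commands, those property suggestions would be roughly the following (a sketch; tank/TimeMachine is the filesystem from the post, and aclinherit is the canonical name of the inheritance property):

```shell
#!/bin/sh
# Echo the zfs commands rather than running them; drop the `echo`
# to actually apply (requires the zfs CLI and root).
FS=tank/TimeMachine
for prop in nbmand=off aclmode=passthrough aclinherit=passthrough; do
    echo zfs set "$prop" "$FS"
done
```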
 
If anyone is interested in helping effemmess to

- create a setup script for Owncloud on OmniOS (or OI/Solaris) using the binaries from SmartOS (they are up to date and they should mostly work on any Solarish)
- create some napp-it menus for this add-on (mostly my part)
I intend to include this as a default add-on - replacing Xampp.

effemmess published a first setup script at
http://forums.servethehome.com/solaris-nexenta-openindiana-napp/2357-owncloud-omnios.html
 
upgrading to bloody didn't help... definitely at a loss here.

installed from old bloody, no updates; seems to be an issue with the afp part? i just had this working a few months ago and now i need to restore some files, figures i can't get at them :(

any thoughts, or should i try to build netatalk 3.1, or what?
 
I have a new build on an XDTH-6f.
I have set the XDTH-6f SAS2008 to be in IT Mode using the LSI Firmware
I have an additional m1015 Crossflashed to 9211-IT Mode using http://www.servethehome.com/ibm-serveraid-m1015-part-4/
I did the sasadd without the "-" that was in the numbers.
Theoretically I have two of the same controllers


I can see all of the disks when doing the initialization, but I can't do anything to the disks.
These are fresh, brand-new disks.


The error I get from command line when I "fdisk c50000...." is "cannot stat disk"


Is there some BIOS setting I am missing? During installation, it took forever to "seek disk"; I had to install with all drives pulled except for one.

Should I be using IDE and not AHCI, or does that only affect the drive connected to the motherboard (the OS drive)?
The "Setting options" is for drive SATA #1.

Any clues on what to do?

(attached screenshots: Capture.png, Capture1.png, Capture2.png, Capture3.png)
 