OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Hello, I have a problem with napp-it.
As long as I only have a couple of ZFS folders, everything is OK.
But I have 500 folders, and the page takes more than 1 minute to build.
Does anyone have the same problem?
Version: openindiana appliance v. 0.500p
 
Yes, that can be a problem on slow machines or with a lot of ZFS folders.
napp-it calls a zfs list command each time and must wait until it is finished.

I am thinking about a cache function for one of the next versions.
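
If you want to see how long the underlying listing takes on your box, you can time it at the CLI (just a quick check, nothing napp-it specific):

Code:
# count the ZFS filesystems and time the listing napp-it has to wait for
time zfs list -H -t filesystem -o name | wc -l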

Gea
 
Each IP port can only be used by one application.
Call www.mydomain.com:80/xyz (a non-existent page) to see whether you get an error
from Apache (should run on port 80) or mini-httpd (should run on port 81).

If you get an Apache error, recheck httpd.conf.
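
If you prefer the shell over a browser, the same check can be done with curl (assuming curl is installed; www.mydomain.com stands in for your own hostname):

Code:
# the 404 answer on port 80 should identify Apache,
# the one on port 81 the napp-it web server
curl -sI http://www.mydomain.com:80/xyz | head -5
curl -sI http://www.mydomain.com:81/xyz | head -5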


Gea

Hi Gea, thank you for your reply.

I found the problem: my router's HTTP port forwarding had a private port set to 81. I guess that kicks in and forwards on errors?

The napp-it menu says that phpMyAdmin is installed with the AMP package, but I cannot find it. If it is not installed, how do I install it?
 
It is currently installed by the napp-it installer on Nexenta only.
I have not tried it with OpenIndiana/SE11 (on my to-do list for a future version).

Gea
 
I'm having a bit of a problem with ACLs.

When one of my download applications (SABnzbd) creates files on an NFS share (ZFS), they end up with the following ACL:

Code:
drwxrwxrwx   2 1001     1001           7 Jun 27 08:21 c
     0:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
         /append_data/read_xattr/write_xattr/execute/read_attributes
         /write_attributes/read_acl/write_acl/write_owner/synchronize:allow
     1:group@:list_directory/read_data/add_file/write_data/add_subdirectory
         /append_data/read_xattr/execute/read_attributes/read_acl
         /synchronize:allow
     2:everyone@:list_directory/read_data/add_file/write_data
         /add_subdirectory/append_data/read_xattr/execute/read_attributes
         /read_acl/synchronize:allow

In Windows, I'm mapping the share as root, but this directory says "The requested security information is either unavailable or can't be displayed."

Furthermore, only root can delete these directories.

Can anybody help me fix this?

Also, how can I reset ACLs for an entire share including all files/folders?
 
Your problem:
the user with UID 1001 is the owner,
everyone@ has no permission to write the ACL or change ownership, and
root can always overwrite any ACL but has no ACL entry that explicitly grants it access to the files.


You have several options:

The owner of the directory is the Unix user with UID 1001, so you may idmap
your Windows user to this user to become owner-equivalent.

You may change everyone@ to full permissions.

You may change ownership to root from Solaris.

You may add another ACL entry such as root=full access at the first place in the ACL list.
(While Windows processes all deny entries first and then all allow entries, Solaris checks ACLs like
a firewall ACL, top down, and the first matching entry does the job.
I am currently developing an ACL management extension for napp-it where you can control the position of ACL entries.)

You may set new ACLs recursively from Solaris (a CLI sketch follows below).

If you are root or idmapped to root, you may replace ACLs or ownership
recursively from Windows, from the shared folder down.


To avoid such problems, first set ACLs on the parent folder, e.g.
everyone@=full or everyone@=modify and root=full, with inheritance to files and folders, and then create new files and folders.
(You can do that from some Windows versions, from the napp-it ACL extension or from the Solaris CLI.)
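
A minimal sketch of the idmap and Solaris CLI variants, assuming a hypothetical ZFS folder /tank/data and a hypothetical Windows account paul@mydomain (substitute your own names, and try it on a spare folder first):

Code:
# map the Windows account to root so ACLs/ownership can be replaced from Windows
idmap add winname:paul@mydomain unixuser:root
svcadm restart idmap

# or reset the ACL recursively from Solaris:
# root=full and everyone@=modify, inherited to new files and folders
chmod -R A=user:root:full_set:file_inherit/dir_inherit:allow,\
everyone@:modify_set:file_inherit/dir_inherit:allow /tank/data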

Gea
 
Thanks for the information!

I'm using Windows 7 and mapped the share as root. I used the Owner tab and "Replace owner on subcontainers and objects", and it appeared to make root the owner of the entire share and all objects.

After that I went to Security (on the share), gave root full control and everyone full control, and used "Replace all child object permissions with inheritable permissions from this object", but I got the following error on a couple of files/folders:

Code:
An error occurred while applying security information to:
[...]
No mapping between account names and security IDs was done.

What am I doing wrong?
 
Reporting in.

Bought two identical systems a few weeks ago and tried a few different OSes to see where I want to move my primary storage to.

Been using this for a few weeks now, and I'm feeling more and more ready to move my primary storage systems to OpenIndiana-based ZFS shares. CIFS speed is pretty good, not as fast as a single drive in Windows-to-Windows transfers, but good enough [averaging 50 MB/s] on the following hardware:

AMD E-350 1.6GHz all-in-one board
2x 4GB DDR3
500GB Seagate 2.5" Momentus XT [boot drive]
4x 1TB Western Digital RE3 7200RPM drives [RAIDZ]
Antec 300 case / Antec 380W Green power supply

Later this week I will be putting together another similar system with 5400RPM WD EARS 1.5TB drives with 4K sectors [and again a 500GB boot drive] and nearly identical specs.

Now I want to build a couple with powerful hardware [Xeons/i7s, ECC memory, and WD VelociRaptors?] to see if there is any performance difference.
 
What is your I/O mix? Reads? Writes? Both? If read-heavy, you'd be better off getting two more drives, and going with a 3x2 mirror...
 
OK - I think I found my problem.

Some Linux applications are doing a chmod on my files (over NFS), and this is removing the ACLs, causing me a lot of headaches.

Can I have ZFS keep the ACLs even after the chmod commands?

For example, once a chmod is done on a file or directory, I can no longer edit or delete it using Windows over CIFS/SMB...
 
ACLs and Unix permissions are not an either/or;
one depends on the other.

If you reduce Unix permissions below the level that is compatible
with your ACL settings, the ACLs must be changed as well.

As SMB is ACL-only and most Unix applications are Unix-permission-only,
you have a real compatibility problem. If your Linux application does a chmod
and you want to access these files via SMB, you may need another chmod to reset permissions
and to set the ACL in a way that lets you access these files,

or

you may do an idmapping for SMB and map your Windows user to root or to the current
owner of these files.
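
As a rough sketch of the first route, assuming the files live under a hypothetical tank/media filesystem and should stay reachable via SMB after the Linux application's chmod:

Code:
# re-apply an SMB-friendly ACL after the chmod has stripped the old one
chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /tank/media

# pass inherited ACEs through unchanged to newly created files and folders
zfs set aclinherit=passthrough tank/media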

Gea
 
Thanks!

Is there any performance difference between NFS and CIFS/SMB?

On my Unix systems I could just use CIFS/SMB instead of NFS and avoid the whole ACL problem, but I thought NFS would be better since it's native to Unix/Linux systems...
 
Depends on the implementation. With OI boxes, I've seen some folks get better performance with NFS and others with CIFS. FWIW, any modern Linux can mount a CIFS share (almost) as easily as NFS, so if that is an issue...
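
For reference, a Linux client can mount the same ZFS folder either way; a quick sketch with made-up names (a server 'nas' sharing/exporting tank/media as 'media', user 'paul'; package names and options vary by distro):

Code:
# NFS: mount the exported filesystem path
mount -t nfs nas:/tank/media /mnt/media

# CIFS/SMB: needs mount.cifs from the cifs-utils package
mount -t cifs //nas/media /mnt/media -o username=paul,uid=1000,iocharset=utf8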
 
Every time I restart my system, the IP address changes between *.*.*.11 and *.*.*.12.

I am running OpenIndiana and am new to it. Does anyone have a step-by-step guide on how to set up a static IP address for a complete newbie?
 
That sounds like an issue with your DHCP server :( Can you set it up to give the OI box a "static" DHCP lease (a reservation)? There are ways to give OI a real static address, but it's easier not to have to.
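
If you do want a true static address on the OI side, here is a minimal sketch using ipadm (assuming the NIC is e1000g0 and you want 192.168.1.11/24 with gateway/DNS at 192.168.1.1; substitute your own interface and addresses):

Code:
# switch from automatic (nwam) to manual network configuration
svcadm disable svc:/network/physical:nwam
svcadm enable  svc:/network/physical:default

# plumb the interface and assign a static address
ipadm create-if e1000g0
ipadm create-addr -T static -a 192.168.1.11/24 e1000g0/v4

# persistent default route and DNS
route -p add default 192.168.1.1
echo "nameserver 192.168.1.1" > /etc/resolv.conf
# make sure /etc/nsswitch.conf has "hosts: files dns"

The older /etc/hostname.e1000g0 and /etc/defaultrouter files still work too, if you prefer that route.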
 
Every time I restart my system, the IP address changes between *.*.*.11 and *.*.*.12.

I am running OpenIndiana and am new to it. Does anyone have a step-by-step guide on how to set up a static IP address for a complete newbie?

I just map the desired IP address to the MAC address in the LAN setup table of the router, which works great.
 
I was wondering if there is a way to have napp-it email me if a drive fails. I tried setting up the email alert, but it seemed to just email me at specific times and not when something actually goes wrong.
 
I have a very noobish question to ask. I created a pool with four 2TB drives and then added a vdev with four more 2TB drives, all in raidz1. That should total 14 TB free (unless I'm wrong, which could be the case). When I look at the pool in napp-it, it tells me that I have 14.5 TB of free space. When I map the drive in Windows 7, it tells me that I only have 11.1 TB of free space. What's going on? Is Windows just wrong?

Thanks!
 
zpool info shows all space, including the unusable space due to the "parity" drives. BTW, 8 drives in raidz1 is looking for trouble.
 
How is it looking for trouble? If one dies, I can have a new one here within 48 hours to rebuild. I don't have 14 TB of data right now, so it won't take long to rebuild.
 
I didn't mean the command 'zpool info' :( I meant 'the info given to you by zpool'. I double-checked on my system and I am not seeing the discrepancy. I checked some info and realized why: 'zpool list' shows you the total space in all top-level vdevs. Since I have a 3x2 mirror (raid10), there is no parity drive discrepancy. Please post 'zpool list xxx' and 'zfs list xxx' for your pool named 'xxx'.
 
zfs list XXX shows the right amount. I read your first post wrong, which led me to try to run that command, LOL. Should I just not worry about whether Windows reports the right size?
 
Let me clarify: if you have a raidz1 pool with one vdev, there is one "parity" drive, so you should have (approx) a 1-drive discrepancy between zfs list and zpool list. The former is what to go by...
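
For what it's worth, a rough back-of-the-envelope check, assuming the pool described above ended up as two 4-drive raidz1 vdevs of 2 TB disks (so two parity drives in total); exact numbers will differ a bit with actual drive sizes and metadata overhead:

Code:
# zpool list reports raw space:    8 x 2 TB = 16 TB  ~ 14.5 TiB
# zfs list reports usable space:   (8 - 2) x 2 TB = 12 TB ~ 10.9 TiB
# Windows counts in binary units, so the ~11.1 "TB" it shows is the usable figure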
 
I'm no expert, but everything I've read agrees with danswartz. Personally I have a 10-drive RAID-Z2 and I've even thought about adding a hot spare.

I guess it depends on how critical your data is, but the chance of having a second drive fail during a rebuild is fairly significant; is it really worth the risk just to have that little bit of extra space?
 
I would also never build a RAID-5 or RAID-Z1 from 8 drives.
It is too risky to have a second disk failure during a rebuild.

I would use a RAID-Z2 with 8 disks. If you are thinking about an extra hot spare,
I would build a RAID-Z3 instead. In case of a failure, your array is in the same
state as a RAID-Z2 + hot spare AFTER a rebuild. Also, the extra drive is under ZFS control,
so there is no suddenly dead hot spare when you need it.

A hot spare is best if you have mirrors.


Gea
 
I was wondering if there is a way to have napp-it email me if a drive fails. I tried setting up the email alert, but it seemed to just email me at specific times and not when something actually goes wrong.

Set an alert job with:
every month, every day, every hour and every minute (the times to check for failures).
In case of a failure you will get an email at once and then once a day until you fix the problem.

Gea
 
Thanks!

edit: spoke too soon

So I just got this email:

Alert/ Error on prometheus from 30.06.2011 15:45

-disk errors: none

Shouldn't the alert just send emails with real errors?
 
I want to run Solaris as a ZFS host on my server, with VirtualBox running VMs.

I have 12 GB of RAM and I am planning to upgrade to 24 GB.

I was wondering what's up with Solaris not having an x64 edition? WTF?! Can ZFS possibly run well on x86?!

What is going on, SUN? Is it MOON now?!
 
When Solaris says x86, it means x86 as opposed to SPARC.

Code:
chris@cactus-solaris:~$ isainfo -v
64-bit amd64 applications
        sse4.2 sse4.1 ssse3 popcnt tscp cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc
        cx8 tsc fpu
32-bit i386 applications
        sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov
        sep cx8 tsc fpu

So, full 64-bit support.
 
When building a pool you have an option to set aside 10% for overflow protection. I'm using napp-it. I can't find any details on what this is and how it is of benefit. Any input? Thanks!

overflow protection (use max 90% of current space)

Also, to clarify: if I've got six 2TB disks, I just create a pool with those disks in RAIDZ and that's it? I don't need a vdev or anything else, do I?
 
Set an alert job with:
every month, every day, every hour and every minute (the times to check for failures).
In case of a failure you will get an email at once and then once a day until you fix the problem.
Gea

So I set it as you said. I pulled a SATA cable, and the pool is now marked as degraded. I had clicked on 1 min but it was still set to 15 min; I ran the job manually and got the email. I concluded that to change the interval I need to disable the job and then re-enable it with 1 min, and cron is now set correctly. But when I tried to re-test by plugging the cable back in, waiting for the resilver and then pulling the cable again, I never got a message. It's almost like that "once a day until you fix it" isn't right, e.g. I did fix it, but it doesn't seem to be warning me of subsequent failures?
 
I want to run Solaris as a ZFS host on my server, with VirtualBox running VMs.

I have 12 GB of RAM and I am planning to upgrade to 24 GB.

I was wondering what's up with Solaris not having an x64 edition? WTF?! Can ZFS possibly run well on x86?!

What is going on, SUN? Is it MOON now?!

You really need to engage your brain before you slag off a company... it is 64-bit... oh, and it's owned by Oracle now, by the way.
 
So I set it as you said. I pulled a SATA cable, and the pool is now marked as degraded. I had clicked on 1 min but it was still set to 15 min; I ran the job manually and got the email. I concluded that to change the interval I need to disable the job and then re-enable it with 1 min, and cron is now set correctly. But when I tried to re-test by plugging the cable back in, waiting for the resilver and then pulling the cable again, I never got a message. It's almost like that "once a day until you fix it" isn't right, e.g. I did fix it, but it doesn't seem to be warning me of subsequent failures?

Yes, if you get another error on the same day after you have fixed a former error, you will not get a new alert the same day but only the next day. I have to reset the "do not send multiple errors on the same day" function in one of the next versions.

PS:
Alert now also sends mails when available space is below 15%.
I forgot to reset a debug variable (fixed in version 0.500r), with the result
that daily alerts were sent without reason.

Gea
 
So I wasn't crazy. Thanks!
 