OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

" [ID 589989 daemon.error] Could not find DNS entry for tcp " seems to be a bug in Solaris 134, fixed in 141. But I am using OI 151 wtf?

NFS is probably the single most important part of my server, so I would really like to make it work :D

Also, why are the permissions for /tank/videos/films "777+241"?

EDIT :

I removed "mdns" in /etc/nsswitch.conf. It is better. But... A reboot / restart of nwam recreate a "bad" file.
 
about NFS

Your NFS server may do a reverse lookup for your clients' IPs.

Check that you have working DNS for your NFS server and clients, or manually add entries for your clients to your server's /etc/hosts file, like:

150.102.23.21 silly.domain.name.com
150.102.23.52 funny.domain.name.com


Read:
http://publib.boulder.ibm.com/infoc...mn/doc/commadmndita/nfs_problems_hostname.htm
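As a sketch (the IP and hostname here are placeholders, not values from this thread): first check whether the server can already reverse-resolve the client's IP, and add a static entry only if it cannot:

```shell
# Does the server already resolve the client's IP to a name?
getent hosts 192.168.0.10        # placeholder client IP

# If not, add a static entry so the NFS reverse lookup succeeds
# (same idea as the silly.domain.name.com examples above)
echo "192.168.0.10 client1" >> /etc/hosts
```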


about Solaris fault management

As far as I know:
you will always be informed of a disk failure when you try to write to it.
Fault management will inform you of disk failures independently of write actions.

Gea
 
Is silly.domain.name.com a name I can choose myself on the server, or does it have to be a specific name?
Can I just, for example, add at the end of the file: 192.168.0.1 imac?
And thanks !
 
I suppose you do not need a full domain name like

192.168.0.1 imac.mydomain.xx

A hostname only, like

192.168.0.1 imac

should be enough. Try it and see if the error messages are gone.

Gea
 
I have an ... interesting... bug :

NFS reports 5.72 terabytes of free space;
ZFS reports 5.2 terabytes of free space.

Free space and percentages don't match:
http://imageshack.us/photo/my-images/62/bugqm.png/

EDIT: Using "format" it seems all of my disks have TWO slices (s2 and s8) on them, all part of the same pool. I am a *nix newbie and I think I am confused.

Okay, I think I will disable SSH into my NAS for myself. I am just going to break something by trying to "improve" things.
 
Hey, I'm having the same problem again :-(
If I download the napp-it script, I see "use strict" again.

Thanks
 
Fixed. I hope Perl on OI 151 will accept use strict soon;
use strict is very helpful for bugfixing.
It's not funny to activate/deactivate it due to problems with OI 151
(the problem only occurs when calling the script via wget, not via CGI).

Gea
 
Has anyone here managed to get the napp-it extension AMP - Apache to work?

I am on OpenIndiana
I have just run the "wget -O - www.napp-it.org/amp | perl" script and everything looked to have installed OK, but when I try to access the files in the napp-it CP I get:
Source file not found
/etc/apache2/apache2.conf
Apache looks to be in /etc/apache2/2.2/, but there is no sign of the apache2.conf file?

I have a domain pointed at my IP address, and when I browse to www.mydomain.com it forwards to www.mydomain.com/cgi-bin/napp-it/, the napp-it admin login page!

Gea, if you are reading this, are there any instructions anywhere on how to set this up, or details of the default file locations and settings that are set up by the installation?
 
I got my LSI 9201-i16 today.

Can anyone tell me how I check and update the firmware from napp-it?
 
Has anyone here managed to get the napp-it extension AMP - Apache to work?

I am on OpenIndiana.
I have just run the "wget -O - www.napp-it.org/amp | perl" script and everything looked to have installed OK, but when I try to access the files in the napp-it CP I get:
Source file not found
/etc/apache2/apache2.conf
Apache looks to be in /etc/apache2/2.2/, but there is no sign of the apache2.conf file?

I have a domain pointed at my IP address, and when I browse to www.mydomain.com it forwards to www.mydomain.com/cgi-bin/napp-it/, the napp-it admin login page!

Gea, if you are reading this, are there any instructions anywhere on how to set this up, or details of the default file locations and settings that are set up by the installation?

Yes, this part of napp-it works on Nexenta only (where Apache is installed in
/etc/apache2, while OpenIndiana uses /etc/apache2/2.2).

Currently you have to do all Apache config settings via the CLI or Midnight Commander.
This will be fixed in a future version.

Gea
 
Gea,

Is mini-httpd part of the napp-it installation, and does it conflict with Apache when that is started?
 
I'm having problems with my jobs.
What could cause this?

Code:
## last execution: --run 1306786437 at fri 24.jun 2011 03:00
Use of uninitialized value in string ne at /var/web-gui/data/napp-it/zfsos/_lib/scripts/auto.pl line 323.
Use of uninitialized value in string ne at /var/web-gui/data/napp-it/zfsos/_lib/scripts/auto.pl line 323.
Use of uninitialized value in string ne at /var/web-gui/data/napp-it/zfsos/_lib/scripts/auto.pl line 323.
 
That error message is about a minor bug
(I wrote my $ok; instead of my $ok = "";).

Fixed in the current nightly.
Gea
 
It seems it's impossible to monitor CPU temperature in Solaris. That's quite a problem. When I have some time I will unmount my HDDs, boot into Linux and do a stress test there, but... don't Solaris pros care about temperature?
 
Mmh, thanks, I'll try.
Also, is there a reason why my ZFS pool's "alloc" value is A LOT higher than my actual on-disk data size?
 
zpool list
tank 10,9T 2,91T 7,96T 26% 1.00x ONLINE -

while "zfs list" shows:




Google about zpool list vs zfs list:

zpool list shows the total bytes of storage available in the pool,
e.g. 5 x 1 TB in raidz3 -> 5 TB in zpool list and 2 TB in zfs list.

zfs list shows the total bytes of storage available to the filesystem, after
redundancy is taken into account.

du shows the total bytes of storage used by a directory, after compression
and dedupe is taken into account.

"ls -l" shows the total bytes of storage currently used to store a file,
after compression, dedupe, thin-provisioning, sparseness, etc.

read also
http://www.cuddletech.com/blog/pivot/entry.php?id=1013
http://hub.opensolaris.org/bin/view...ythezpoollistcommandandthezfslistcommandmatch
http://dlc.sun.com/osol/docs/content/ZFSADMIN/gbchp.html

Gea
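The raidz arithmetic behind the 5 x 1 TB example above can be sketched as follows (a simplification that ignores metadata and allocation overhead):

```shell
# zpool list reports raw pool capacity; zfs list reports space usable
# by filesystems, i.e. after redundancy. For a raidz3 vdev, parity
# consumes three disks' worth of space.
ndisks=5; disk_tb=1; parity=3
raw=$(( ndisks * disk_tb ))                  # roughly what zpool list shows
usable=$(( (ndisks - parity) * disk_tb ))    # roughly what zfs list shows
echo "raw=${raw}TB usable=${usable}TB"       # raw=5TB usable=2TB
```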
 
Is anyone else having trouble with KVM over IP on their Supermicro board? For some reason I can use my keyboard during boot and during login, but after that I lose this functionality (the mouse never works).

I have a Supermicro MB-X8Si6-F-O

Changing the two input options doesn't help either. IPMI firmware 2.50.

Any ideas?
 
I have been avoiding this problem until now. I would like to avoid getting my HTPC's wireless keyboard, and also to take advantage of this feature since I paid for it. I wish I had a wired mouse to try on the computer I am connecting from, but I doubt this is the problem. This seems more like an OS problem, since I can use the keyboard just fine before logging in. I guess I need to see if I can use a live CD of Ubuntu or something to confirm that. Also, it seems I am the only one on the internet besides this guy:

http://www.webhostingtalk.com/archive/index.php/t-930982.html

danswartz, are you using IPMI view on linux and/or windows?
 
Actually, the Windows app doesn't work for me - I can never seem to log in :( I have been connecting to the IPMI port with a plain Firefox browser under Windows 7.
 

Is this a plugin? I am having difficulty finding it... I would like to try this route. Thank you for your replies.
 
The problem with the IPMIView app is that the "discovery" function never finds the host. No idea why, and a straight browser connection works fine. Refreshing my memory: not only does the discover function not find the host, but if I give the explicit IP, I get an error like "the device is offline, connect anyway?". If I say yes, I get a progress bar for a bit, then a failure.
 
I am running OpenIndiana with napp-it and am trying to set up a web server. I have installed the napp-it extension AMP and modified the Apache httpd.conf:

DocumentRoot "/tank/web"
<Directory "/tank/web"> - web is a ZFS folder

When browsing to the domain www.mydomain.com, it forwards to the napp-it login page http://www.mydomain.com/cgi-bin/napp-it/admin.pl; it appears that mini_http takes over?

The napp-it admin page says that the Apache server is running, and I have restarted it every time I have changed any config files.

Gea, any ideas why this is happening? Do I need to disable mini_http, or are there any other settings in Apache that I need to change? My router redirects port 80 for HTTP.
 

Each IP port can only be used by one application.
Call www.mydomain.com:80/xyz (a non-existent page) to see if you get an error
from Apache (should run on port 80) or mini-httpd (should run on port 81).

If you get an Apache error, recheck httpd.conf.


Gea
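That port test can be run from any client with curl (the hostname and ports are the ones assumed above; adjust to your setup, and run this against your own server, not from here):

```shell
# Request a page that does not exist; the Server: header and the
# error page reveal which daemon actually owns each port.
curl -sI http://www.mydomain.com:80/xyz | head -n 3   # Apache expected here
curl -sI http://www.mydomain.com:81/    | head -n 3   # mini-httpd expected here
```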
 
I'm trying to set up a few replication jobs that will run over a 5Mb uplink. The first folder I'm replicating is 32GB, but the initial replication never properly completes. It appears to progress fine for 10 hours or so; then I check it and it shows no error, but the replicated folder is empty, as if it rolls back the progress just before ending. Is this a timeout issue, and if so, can I tweak a switch in the job script to make it work properly? I actually have another folder to do that is ~150GB, and if it can't handle 32GB I don't have high hopes.

On another note - a few feature requests:

1. Allow a throttling setting on replication jobs
2. Allow replication jobs that are interrupted by a disconnection to be resumed
3. Allow replication jobs to be paused/resumed
4. Initial replication shows the total data being transferred. Incremental replication jobs should as well.

Thanks!
 
What exact driver do I use for Solaris 11 Express and the M1015?

From
http://www.lsi.com/products/storagecomponents/Pages/MegaRAIDSAS9240-8i.aspx
the only driver listed for Solaris is an old 32-bit driver?

I'd like to flash it as well, but the Solaris MegaRAID package doesn't seem to work... I do a pkgadd -d and it says there are no packages found?

The controller is supported out of the box.
If you want to flash it, you must do it with good old MS-DOS or a free alternative.
Boot from floppy, disk, CD or USB stick (a bootable USB stick is often a good idea).

read more:
http://forums.servethehome.com/showthread.php?97-IBM-M1015-Firmware-What-to-flash-with

Gea
 
I'm trying to set up a few replication jobs that will run over a 5Mb uplink. The first folder I'm replicating is 32GB, but the initial replication never properly completes. It appears to progress fine for 10 hours or so; then I check it and it shows no error, but the replicated folder is empty, as if it rolls back the progress just before ending. Is this a timeout issue, and if so, can I tweak a switch in the job script to make it work properly? I actually have another folder to do that is ~150GB, and if it can't handle 32GB I don't have high hopes.

On another note - a few feature requests:

1. Allow a throttling setting on replication jobs
2. Allow replication jobs that are interrupted by a disconnection to be resumed
3. Allow replication jobs to be paused/resumed
4. Initial replication shows the total data being transferred. Incremental replication jobs should as well.

Thanks!

Hello Astronot,
your wishes are hard to fulfill, because neither ZFS send/receive nor netcat as the transport mechanism
offers such features. Transfer is always as fast as your network allows (you can only reduce job priority).
There are no settings like a maximum transfer rate or interrupt/resume. But that should not be a problem: after the initial
replication is done, all further syncs are usually minimal, as they contain only the data changed since the last replication.

The same goes for the transferred-data display. An initial sync transfers a complete ZFS filesystem, so it is possible to calculate
the transfer from the target size. That is not possible with incremental replications, as you do not know whether you have
added or removed data at the source. Replication therefore displays the size of the delta snapshot to give a hint about
transfer time.

A slow or very slow network connection is never a problem apart from transfer time, but it must be stable the whole time.
If it is interrupted, ZFS receive does a rollback. In Germany, a DSL connection is interrupted after 24 h and you get a new IP;
maybe you have a similar problem.

The only solution is an initial local or intranet transfer. Incremental replications after that should not be a problem.
The major problem that still needs to be fixed is that the receiver waits endlessly if there was a sender problem
(the receiver's netcat timeout does not work), so currently you must stop the receiver manually.

Another possibility is file-based replication via rsync or robocopy instead
(good if you have small files, not possible if you have VM images).

Gea
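The initial local transfer recommended above could look like this (pool, dataset and host names are placeholders; this is a sketch of plain zfs send over ssh, not napp-it's actual replication script):

```shell
# Seed the target once over the fast local network ...
zfs snapshot tank/data@initial
zfs send tank/data@initial | ssh backuphost zfs receive backup/data

# ... then later incremental runs over the slow uplink send only deltas
zfs snapshot tank/data@monday
zfs send -i tank/data@initial tank/data@monday \
  | ssh backuphost zfs receive backup/data
```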
 
The controller is supported out of the box.
If you want to flash it, you must do it with good old MS-DOS or a free alternative.
Boot from floppy, disk, CD or USB stick (a bootable USB stick is often a good idea).

read more:
http://forums.servethehome.com/showthread.php?97-IBM-M1015-Firmware-What-to-flash-with

Gea

Gea, thanks for your reply. I thought the native driver was mp2tsas, which applied once it was in IT mode, and that you still need a driver for when it's in IR mode.

The controller doesn't appear in napp-it. The firmware currently on it is an IBM flash from Nov 2010. I do see the controller on boot, though, and I can access its BIOS.
 
As far as I know, there is no real IT mode with this controller, but it works
after flashing the LSI 9240-8i firmware.

So you need to reflash first.

(The most wanted SAS2 card is the LSI 9211, an HBA with IT mode,
but it has a different board layout from the IBM card, although both use the LSI 2008 chipset.)


Gea
 
Hello Astronot,
your wishes are hard to fulfill, because neither ZFS send/receive nor netcat as the transport mechanism
offers such features. Transfer is always as fast as your network allows (you can only reduce job priority).
There are no settings like a maximum transfer rate or interrupt/resume. But that should not be a problem: after the initial
replication is done, all further syncs are usually minimal, as they contain only the data changed since the last replication.

The same goes for the transferred-data display. An initial sync transfers a complete ZFS filesystem, so it is possible to calculate
the transfer from the target size. That is not possible with incremental replications, as you do not know whether you have
added or removed data at the source. Replication therefore displays the size of the delta snapshot to give a hint about
transfer time.

A slow or very slow network connection is never a problem apart from transfer time, but it must be stable the whole time.
If it is interrupted, ZFS receive does a rollback. In Germany, a DSL connection is interrupted after 24 h and you get a new IP;
maybe you have a similar problem.

The only solution is an initial local or intranet transfer. Incremental replications after that should not be a problem.
The major problem that still needs to be fixed is that the receiver waits endlessly if there was a sender problem
(the receiver's netcat timeout does not work), so currently you must stop the receiver manually.

Another possibility is file-based replication via rsync or robocopy instead
(good if you have small files, not possible if you have VM images).

Gea


Thanks Gea, you've clarified a few things for me. I was messing with the timeouts in the job script, and after setting them as follows, the replication completed properly:

Code:
    my $src_interface     = "sudo /usr/bin/nc -w 60 10.0.0.139 57463";
    my $dest_interface    = "sudo /usr/bin/nc -w 600 -d -l -p 57463";


I get what you're saying about the other things - the rollback was what I was seeing; I just could never tell what was causing it.
 