OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

I was able to remove snapshots via beadm, which freed up enough space to boot into the OS. But I'm still at 98% used capacity. Is there a way to expand rpool/ROOT/napp-it-0.9a5? I've allocated 16 GB to the OI VM but it's only using about 10 GB (3 GB for rpool, 7 GB for swap).

You may delete older napp-it versions /var/web-gui/data_nnn (data is the current one)
and the installers in /root.
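
At the CLI this is just removing the old folders, for example (the version folder name is a made-up example):
Code:
ls /var/web-gui                  # 'data' is the active version
rm -rf /var/web-gui/data_0.8k    # an old, no longer needed version folder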
 
I have an LSI controller, but when I start the extension I get:

SAS2 monitoring needs Smart-Serials for correct disk detection. Smartvalues are missing!
I have the same problem with 3 IBM M1015s, OmniOS stable with napp-it a6.
If smartmontools are not working, you cannot use my disk detection.
So how do I get them to work?

You may delete older napp-it versions /var/web-gui/data_nnn (data is the current one)
and the installers in /root.
I'm probably not the only one who's got no experience with Unix commands, so would it be possible to include options to delete old installs (and boot environments, or whatever the options at the boot screen are called) in the application menu? :(
 
I have the same problem with 3 IBM M1015s, OmniOS stable with napp-it a6.

So how do I get them to work?


I'm probably not the only one who's got no experience with Unix commands, so would it be possible to include options to delete old installs (and boot environments, or whatever the options at the boot screen are called) in the application menu? :(

We have three separate problems.
- one is the old, in the meantime removed, outdated menu extensions - sas2 extension:
- use menu disk - sas2 extension instead

But if you have a controller that is not supported by smartmontools, I have no solution.

Next are boot environments.
You can delete them and select the active one in the current menu snaps - boot environment (see the beadm sketch below).

Third is deleting unneeded files: this should only be necessary if your system disk is too small.
Size depends on OS and RAM; a safe minimum is currently 25 GB.
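
For the boot environments, the same can be done at the CLI with beadm (the BE names below are examples):
Code:
beadm list                    # show all boot environments and the space they hold
beadm destroy oldBE           # delete an unneeded BE (asks for confirmation)
beadm activate napp-it-0.9a6  # select the BE to boot from next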
 
- one is the old, in the meantime removed, outdated menu extensions - sas2 extension:
- use menu disk - sas2 extension instead
Works now, ty.

Next are boot environments.
You can delete them and select the active one in the current menu snaps - boot environment.
I only have one option: create boot environment. Maybe that's because I only have one napp-it version installed so far (fresh 0.9a6)?

Another question. I (will) have 3 pools with 3 TB and 1.5 TB disks. I wanted to assign a 3 TB spare (pool -> extend pool) but I could only do it for one pool.
Code:
zpool status	

  pool: data
 state: ONLINE
  scan: none requested
config:

	NAME                       STATE     READ WRITE CKSUM     CAP            Product
	data                       ONLINE       0     0     0
	  mirror-0                 ONLINE       0     0     0
	    c8t50014EE0AE222FC7d0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE602E71662d0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM     CAP            Product
	rpool       ONLINE       0     0     0
	  c2t0d0s0  ONLINE       0     0     0     21.5 GB        Virtual disk

errors: No known data errors

  pool: storage1
 state: ONLINE
  scan: none requested
config:

	NAME                       STATE     READ WRITE CKSUM     CAP            Product
	storage1                   ONLINE       0     0     0
	  raidz2-0                 ONLINE       0     0     0
	    c8t50014EE0035A43CDd0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE0035A4488d0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE0035A45CCd0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE058AF9266d0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE058AF9355d0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE058AF9365d0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE058AF937Dd0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE0AE05215Ad0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE0AE0521D5d0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	    c8t50014EE0AE0919C0d0  ONLINE       0     0     0     3 TB           WDC WD30EFRX-68A
	spares
	  c3t2d0                   AVAIL        3 TB           WDC WD30EFRX-68A

errors: No known data errors

  pool: vmdata
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM     CAP            Product
	vmdata      ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    c3t0d0  ONLINE       0     0     0     256.1 GB       Samsung SSD 840
	    c3t1d0  ONLINE       0     0     0     256.1 GB       Samsung SSD 840

errors: No known data errors
Is this right, or is there another way of assigning a single spare to multiple pools?

Also, can you give me a quick guide (menu-clicking sequence) for when a disk fails and I have to use the spare while replacing the failed disk with a new one, and then back to normal (adding the new disk back to the pool and returning the spare to spare duty)?
 
Ohh... In that case, you can probably add the same spare to multiple pools with something like
zpool add <poolname> spare <spare-device>
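
For example, with the pools above, offering the existing spare c3t2d0 to the other pools as well would be (an untested sketch; note the caveats below):
Code:
zpool add data spare c3t2d0
zpool add vmdata spare c3t2d0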

Matej
 
As far as I know, spares can only be assigned to one pool, not many...

If you add a disk as a spare, it is in the pool-info but ZFS does not write anything to this disk so it is possible to add a spare to different pools.

Napp-it does not support this via Web-UI so it must be done via CLI -
but it is really "bad use" and not suggested at all.
 
The issue with it is that if you assign a spare to many pools and a disk fails in more than one pool, that spare will get used twice; now you have caused data corruption and might lose data from both pools that have already lost a disk or two.
 
Apologies if this was posted already; I didn't see it in my searches...

Tried updating napp-it to version 0.9 from 0.8k and now get the following error...

Software error:

Can't locate UUID/Tiny.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/site_perl/5.10.0/i86pc-solaris-64int /usr/perl5/site_perl/5.10.0 /usr/perl5/vendor_perl/5.10.0/i86pc-solaris-64int /usr/perl5/vendor_perl/5.10.0 /usr/perl5/vendor_perl /usr/perl5/5.10.0/lib/i86pc-solaris-64int /usr/perl5/5.10.0/lib .) at admin.pl line 719.
BEGIN failed--compilation aborted at admin.pl line 719.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.
 
Apologies if this was posted already; I didn't see it in my searches...

Tried updating napp-it to version 0.9 from 0.8k and now get the following error...

Software error:

Can't locate UUID/Tiny.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/site_perl/5.10.0/i86pc-solaris-64int /usr/perl5/site_perl/5.10.0 /usr/perl5/vendor_perl/5.10.0/i86pc-solaris-64int /usr/perl5/vendor_perl/5.10.0 /usr/perl5/vendor_perl /usr/perl5/5.10.0/lib/i86pc-solaris-64int /usr/perl5/5.10.0/lib .) at admin.pl line 719.
BEGIN failed--compilation aborted at admin.pl line 719.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.

You MUST reboot after the upgrade
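
If UUID::Tiny is still missing after the reboot, it can usually be installed by hand from CPAN (my assumption, not an official napp-it step):
Code:
perl -MCPAN -e 'install UUID::Tiny'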
 
I'm a new (<1 month) user of OI + napp-it, and I'm hitting a brick wall when trying to set up an FTP server on my NAS.
I tried using this "guide":
http://docs.oracle.com/cd/E19963-01/html/821-1454/wuftp-231.html
but I can't seem to edit any of the files in the ftpd folder; they seem to be read-only.
Before I mess anything up, is there a simpler guide, or even better a web UI, for setting up an FTP server on OpenIndiana?
 
I'm a new (<1 month) user of OI + napp-it, and I'm hitting a brick wall when trying to set up an FTP server on my NAS.
I tried using this "guide":
http://docs.oracle.com/cd/E19963-01/html/821-1454/wuftp-231.html
but I can't seem to edit any of the files in the ftpd folder; they seem to be read-only.
Before I mess anything up, is there a simpler guide, or even better a web UI, for setting up an FTP server on OpenIndiana?

I suppose you need root permission to edit files in /etc.
Easiest way:

- activate remote root access (napp-it menu services - ssh - allow root)
- connect from Windows via WinSCP as user root and edit the files via WinSCP

Settings depend on the FTP server used.
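
For reference, the 'allow root' switch boils down to roughly this at the CLI (assuming the stock OpenSSH on OpenIndiana):
Code:
# in /etc/ssh/sshd_config
PermitRootLogin yes
# then restart the ssh service
svcadm restart svc:/network/ssh:default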
 
I'm building a 2nd remote box with periodic syncing. This is getting expensive enough that I would like to avoid a RAID setup, which reduces usable space even more.
What I would like is temporary redundancy (between sync intervals) on box1, until the offsite ZFS replication is completed and confirmed.
For a drive failure I would need to restore from the remote box. I think that's feasible for me.
So, is there any way to do temporary redundancy?
Thanks and thanks.

Also, Gea, you are awesome and I am most grateful for all you've done.
 
Hello!

I'm having weird performance with the following hardware:
Supermicro x9SCM-F
Xeon E3-1230 V2
8GB memory
Bios: 2.0b

If I SSH to the ESXi box and try a dd command (time dd if=/dev/zero of=./dd1.tst bs=2048000 count=2048), I get awfully slow writes, between 10 MB/s and 20 MB/s.
Hard drives are attached directly to the motherboard without RAID or anything.
Read speeds are around 90-100 MB/s, which is "ok" for WD Blue 2.5" disks...

Anyone else experiencing these problems?

Matej?
 
I tried booting a live Linux CD and did a read/write test on one of the ESX drives, and transfers were as they should be, between 80 and 90 MB/s.

I guess ESXi doesn't like those cheap server motherboards :)

Matej
 
One WD Green 2 TB drive after another is failing on me ATM :(

What are the general recommendations for 2 TB / 3 TB drives these days for ZFS?

I would like to avoid the special RAID edition drives if possible; they're super expensive here.

WD Red series perhaps?

Input appreciated
/Jim
 
So far I've had success with the new Toshiba 3 TB 7200 RPM drives (I've had 10 of them in a pool for a couple of weeks now). They can be cheaper than the WD Reds sometimes. Just beware of Newegg destroying them!
 
In need of some help: as below, my disks in the disk menu of napp-it are showing as removed. I have rebooted to no avail, yet the data on the drives is completely accessible. Is it possible to reset the state so they read as normal instead of removed?

(screenshot of the napp-it disk menu)



Thanks in advance
 
In need of some help: as below, my disks in the disk menu of napp-it are showing as removed. I have rebooted to no avail, yet the data on the drives is completely accessible. Is it possible to reset the state so they read as normal instead of removed?

Thanks in advance

Which napp-it version?
If you are not using the newest, you may first try an update.
 
Today I joined my AIO to an Active Directory. Since then, after a reboot, it takes ages for the NFS datastore to get mounted in VMware, and none of the servers on that datastore started...

I found out that NFS only started to work again after I got the message

mountd[494]: [ID 664212 daemon.error] No default domain set

I couldn't get the problem fixed, so now I'm reinstalling and will try again, but will make a BE snapshot before trying :)

I don't know what changed when I added the server to AD domain.

Matej
 
So far I've had success with the new Toshiba 3 TB 7200 RPM drives (I've had 10 of them in a pool for a couple of weeks now). They can be cheaper than the WD Reds sometimes. Just beware of Newegg destroying them!

I'm pretty sure the Toshiba 3 TB drives are rebranded Hitachi Ultrastar 3 TB drives; they look identical apart from the label, though the firmware is probably different, like WD Red vs. WD Green.
 
Version 0.9a7 nightly, Feb. 15, 2013. I have been accessing the drives without issue, but am nervous about continuing to use them. Is there anything I can do to reset the status?
 
It may be a problem with the PERC controller.
Disks are not displayed as disks but as PERC ???
Is the PERC an HBA, or did you build a Raid-0 from single disks?
In the latter case, ZFS cannot access the disks (the hardware RAID controller hides them from the OS).
- Can you get smartmontools values?

Basically napp-it discovers most disk infos via iostat -En.
But iostat keeps disks listed even when they are removed, so infos from parted, format and zpool status are used to detect the correct status. Mostly such problems are not critical and are due to unexpected values from different disks or controllers. These infos are processed in /var/web-gui/data/napp-it/zfsos/_lib/get-disk.pl. Without having the same problem, it is hard to discover where the problem is.
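
The same sources can be checked by hand to see which one reports the stale state:
Code:
iostat -En          # disk inventory incl. errors (keeps removed disks listed)
format </dev/null   # disks currently visible to the OS
zpool status        # pool membership and state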

These infos are mostly collected and displayed in real time. Some infos are buffered to improve performance. I do not expect a buffering problem, but you may try menu disk - sas2 extension - delete buffered SAS info, just to be sure.

For bugfixing, you may activate Edit in the top menu (if you need an eval key, go to napp-it.org/extensions) and call menu disk. Most needed infos are monitored in the top menu items %disk (all disk infos) and Log (a log of what happens while processing a menu).

You may send these two listings as attachments to [email protected]
Maybe I can detect the reason.
 
Thanks, logs sent; hopefully I have done it correctly.

Thanks,
basically the problem is an incompatibility of parted with the PERC 6i controller, resulting in errors and bug reports when reading disk and partition infos from the PERC.

I see no options other than ignoring the messages (since parted is not essential) or replacing the PERC with something
more compatible like an IBM M1015 (needs reflashing to 9211/IT mode).

I do not know if there is another firmware option for the PERC 6i, but I would replace the controller.
You will also get better performance and support for disks > 2 TB.
 
Thanks, trying to secure an M1015. Bizarrely, I built an all-in-one last night and passed the PERC through to check the disks were OK, and the PERC had no such issues. Thanks again for looking.
 
Quick question. How do I configure NFS so only certain IPs on my local subnet can access NFS shares?
 
Quick question. How do I configure NFS so only certain IPs on my local subnet can access NFS shares?

Use the firewall with some rules like:
- allow management (port, NIC or IP-based, quick)
- allow desired machines or services (port, NIC or IP-based, quick)
- deny the rest
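
With the IP Filter that ships with OpenIndiana/OmniOS, that could look roughly like this (interface name and addresses are examples; NFS also needs rpcbind on port 111):
Code:
# /etc/ipf/ipf.conf
pass in quick on e1000g0 proto tcp from 192.168.1.10/32 to any port = 2049 keep state
pass in quick on e1000g0 proto tcp from 192.168.1.10/32 to any port = 111 keep state
block in on e1000g0 proto tcp from any to any port = 2049
# enable the service and reload the rules
svcadm enable network/ipfilter
ipf -Fa -f /etc/ipf/ipf.conf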
 
I need guidance on drives to purchase, can anyone please assist?

Criteria for drives:
1) Energy efficient
2) Performance is not an issue (1Gbit network only)
3) May not run too hot; the room with the server in it is approx. 28 degrees Celsius.
4) CAN'T be WD Greens (have 16 already and I'm running back and forth to the dealer with RMAs)
5) 3TB capacity

Options within my budget and availability:

1) Seagate ST3000DM001
2) Western Digital RED WD30EFRX
3) Toshiba DT01ACA300
4) Seagate Constellation CS ST3000NC000 3 TB
5) Seagate Constellation CS ST3000NC002 3 TB

What's the difference between 4 & 5?

4 & 5 are a bit pricier, but I'm putting them here for comparison; optimal would be 1, 2 or 3.

Any inputs much appreciated
/Jim
 
I just bought 6x WD Red 3 TB and am very happy with them.

Before that I had 8x Seagate Constellation ES.2 drives on my hardware RAID; they were also very good but very expensive in comparison. 3 of them were DOA and I got new ones for them. 4 years later one of the drives died, and after 4 weeks another. So it was time for me to get new hardware, and I decided not to buy enterprise drives anymore, because the WD Reds were only half the price and I don't really see any advantages for my use case.

I also have only a Gbit network and the performance is OK; running 6x in raidz2 I am getting about 110-120 MB/s over the network via NFS/AFP/SMB.
The drives are not getting very hot, about 28 to 34 degrees Celsius depending on slot and HD activity.
A scrub on a filesystem holding 3.53 TB of data took about 3 h 40 min.
 
Those of you using all-in-one and USB flash drives for ESXi hosts, which USB drives do you use?
How big?

Matej
 
I need guidance on drives to purchase, can anyone please assist?

1) Seagate ST3000DM001
2) Western Digital RED WD30EFRX

I would go with either one of those...

Although I'm a bit disappointed in WD Red. Bought 2 about a month ago and 1 was dead on arrival. Got a replacement and now it's working...

On the other hand, I have 10 WD Greens running for about 4 years without a problem...

Matej
 
I bought 7x WD Reds (6x raidz2 and 1x for migration and after that as a spare) and all were OK. I guess it's always the same with all drives and manufacturers: you can be lucky or not.

You can get faulty drives, enterprise or not, from all manufacturers...
 
Quick question. How do I configure NFS so only certain IPs on my local subnet can access NFS shares?

Hi,

You could set NFS share options with:
Code:
zfs set sharenfs=rw="IP" "zfs dataset"
Only "IP" is then allowed to write to the NFS share.
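
For example, to limit a dataset to two hosts (addresses and dataset name are made up; entries in the access list are colon-separated):
Code:
zfs set sharenfs=rw=@192.168.1.10/32:@192.168.1.20/32 tank/data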
 