OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Hi,
Hopefully someone can help and point me in the right direction. My boot drive, which was all I had spare when I built this, is a 320 GB drive, and it is now failing: if I have to restart my NAS box it can take 40 attempts to boot because the drive struggles to spin up. I need to clone it off, and herein lies the problem. I was hoping to put it on a smaller drive; when I log in to the NAS I can see that only 20 GB is in use, but I can't shrink the partition in GParted (unrecognized filesystem). I can browse the drive perfectly well on the local machine and it shows 280 GB free, but if I try to clone it the cloning software sees the whole drive as used and won't clone to a smaller drive. Please help before this all goes t1ts up.


Thanks
 
I've had omnios/napp-it running off an older, slow USB stick for a while now. I just purchased two USB 3.0 sticks since scrubs on my rpool showed my current USB stick is about to die. I have one of the USB 3.0 sticks plugged in and tried to do a 'Mirror bootdisk' but cannot get it to work. See below:

fdisk -B c13t0d0p0
ok


prtvtoc /dev/rdsk/c3t0d0s0 | fmthard -s - /dev/rdsk/c13t0d0s0
fmthard: Partition 0 specified as 31198230 sectors starting at 16065
does not fit. The full disk contains 30812670 sectors.
fmthard: Partition 2 specifies the full disk and is not equal
full size of disk. The full disk capacity is 30812670 sectors.
fmthard: Partition 2 specified as 31246425 sectors starting at 0
does not fit. The full disk contains 30812670 sectors.
fmthard: New volume table of contents now in place.


zpool attach -f rpool c3t0d0s0 c13t0d0s0
cannot attach c13t0d0s0 to c3t0d0s0: I/O error


installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13t0d0s0
stage2 written to partition 0, 283 sectors starting at 50 (abs 16115)
stage1 written to partition 0 sector 0 (abs 16065)


Please check poolstate (resilvering).

If I check the pool state it is not resilvering, nor is the disk there. I am assuming it's crapping out on the I/O error / not being able to attach. The old and the new stick are obviously not identical. Is this where it is going wrong? If so, what's the best way to RESTORE a napp-it backup (I found how to back it up) when doing napp-it to go?
 
I've had omnios/napp-it running off an older, slow USB stick for a while now. I just purchased two USB 3.0 sticks since scrubs on my rpool showed my current USB stick is about to die. I have one of the USB 3.0 sticks plugged in and tried to do a 'Mirror bootdisk' but cannot get it to work. See below:



If I check the pool state it is not resilvering, nor is the disk there. I am assuming it's crapping out on the I/O error / not being able to attach. The old and the new stick are obviously not identical. Is this where it is going wrong? If so, what's the best way to RESTORE a napp-it backup (I found how to back it up) when doing napp-it to go?

You may do a regular setup:
- Backup napp-it (menu System) to your datapool
- install OmniOS to the new stick
- configure network, install napp-it per wget
- start napp-it and enable ssh root access (menu Services > SSH)
- connect from Windows via WinSCP as root
- restore /var/web-gui/_log and /var/web-gui/_my (private napp-it menus and settings) from the backup on the datapool (rough commands sketched below)
- mirror the stick via napp-it
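
Roughly, the wget install and the restore-by-copy look like this (the backup path on the datapool is just an example; adjust it to wherever your napp-it backup landed):

Code:
# on the fresh OmniOS, as root: install napp-it via the online installer
wget -O - www.napp-it.org/nappit | perl

# restore the backed-up napp-it settings and jobs
cp -r /datapool/backup/_log /var/web-gui/
cp -r /datapool/backup/_my /var/web-gui/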
 
Can you check:
- napp-it shows wrong ACL info
(you may update to a newer release; the newest is the 0.9e preview, which you can evaluate with a pro key from
http://forums.servethehome.com/solaris-nexenta-openindiana-napp/2652-napp-0-9e-preview.html )
and check if the problem persists

- acl is different
(you can use acl extension to display whole ACL info)

I tried updating to 0.9e, but the problem persists, even though I have cleared the ZFS buffer under the menu option ZFS Filesystems.

What exactly do you mean by

Can you check:
- napp-it shows wrong ACL info
(...)
- acl is different
(you can use acl extension to display whole ACL info)

By the way, I have set the ACL manually from command line, so I am sure it is the same.

Also, why is Share ACL displayed even though SMB sharing is off?
 
I tried updating to 0.9e, but the problem persists, even though I have cleared the ZFS buffer under the menu option ZFS Filesystems.

What exactly do you mean by



By the way, I have set the ACL manually from command line, so I am sure it is the same.

Also, why is Share ACL displayed even though SMB sharing is off?

This is why I suspected a buffer problem.

- can you display the ACLs? (click on menu Folder-ACL on the shares with smb=off
and share-ACL=full)

- can you check if you have a file /pool/filesystem/.zfs/shares/filesystem?
The ACLs on this file are what is displayed under Share-ACL.

Do you have any non-default mountpoint settings?
That is not supported.
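
From the CLI, that check would look roughly like this (pool/filesystem are placeholders):

Code:
# does the SMB share control file exist?
ls /pool/filesystem/.zfs/shares/

# show its ACL in full; this is what napp-it displays under Share-ACL
/usr/bin/ls -V /pool/filesystem/.zfs/shares/filesystem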
 
Please, please help. As above, I asked how to move my boot drive to another drive because the drive was failing. As I had no answers or pointers I scoured the net and found some instructions. I have done all of the below in the belief that I was only copying and not altering my existing system; however my existing system will no longer boot correctly from its boot drive, and nor will the copied drive. Running a zpool listing from the command prompt after booting my existing boot drive shows me all my ZFS filesystems are there; the other drive boots to a grub prompt.

Running svcs -xv

shows me: state: maintenance since Nov 7 18:15
Reason: Start method exited with $SMF_EXIT_ERR_FATAL


# zpool create -f mpool c5t0d0s0
# zfs set compression=on tpool


Snapshot and copy to my new mpool

# zfs snapshot -r rpool@shrink
# zfs send -vR rpool@shrink | zfs receive -vFd mpool


Prepare for booting
# rm /tpool/boot/grub/bootsign/pool_rpool
# touch /tpool/boot/grub/bootsign/pool_tpool
# zpool set bootfs=tpool/ROOT/Solaris_11_SRU4 tpool <<<< I have not done this
# cd /boot/grub/
# /sbin/installgrub stage1 stage2 /dev/rdsk/c5t0d0s0
stage2 written to partition 0, 277 sectors starting at 50 (abs 12594)
stage1 written to partition 0 sector 0 (abs 12544)
# vi /tpool/boot/grub/menu.lst



Further checking of /var/svc/log/system-filesystem-local:default.log shows

/lib/svc/method/fs-local /usr/sbin/zfs mount -a failed: exit status 1

Any help or pointers greatly appreciated.


Many thanks
 
This is why I suspected a buffer problem.

- can you display the ACLs? (click on menu Folder-ACL on the shares with smb=off
and share-ACL=full)

- can you check if you have a file /pool/filesystem/.zfs/shares/filesystem?
The ACLs on this file are what is displayed under Share-ACL.

Do you have any non-default mountpoint settings?
That is not supported.

Here is the folder-ACL of one of the problematic filesystems:

[screenshot: folder-ACL listing of the filesystem]


I do have those SMB share control files for the two problematic filesystems, yes, but I cannot delete them.

[screenshot: the .zfs/shares share control files]


I don't have any non-default mountpoint settings. To be honest, I am not even sure what it is :)
 
Quick question on licensing. I have napp-it at home that I use for home pictures and videos storage. My family likes to create photos and it's starting to be quite a bit and I got nervous about what happens if my HDD fails.

I've saved up and added an extra internal HDD to do backups, and was thinking the best way would be to schedule automatic snapshots, and then send them to the new internal HDD. I found that I can do this manually with the replication feature, but that I cannot schedule it to occur automatically.

When I read the licensing for replication however it seems to be geared toward a different use-case, like off-site replication. The description says that only the "receiver" needs the license, which would lead me to believe I should be able to schedule sends without any extra license? I don't have a 2nd server.

Do I need a license to send the snaps from one pool to another on the same host or am I just being blind and not seeing it? 50 euro is a lot for me.

I looked at maybe how to script activities myself but it's over my head.

Thanks for any help.
 
Here is the folder-ACL of one of the problematic filesystems:

[screenshot: folder-ACL listing of the filesystem]


I do have those SMB share control files for the two problematic filesystems, yes, but I cannot delete them.

[screenshot: the .zfs/shares share control files]


I don't have any non-default mountpoint settings. To be honest, I am not even sure what it is :)

You cannot remove the share control files manually.
I would try to unshare/share the filesystem, combined with an SMB service disable/enable if needed.
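
A rough sequence for that (the filesystem name is a placeholder):

Code:
# unshare, bounce the SMB service, then share again
zfs set sharesmb=off pool/filesystem
svcadm disable smb/server
svcadm enable smb/server
zfs set sharesmb=on pool/filesystem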
 
Quick question on licensing. I have napp-it at home that I use for home pictures and videos storage. My family likes to create photos and it's starting to be quite a bit and I got nervous about what happens if my HDD fails.

I've saved up and added an extra internal HDD to do backups, and was thinking the best way would be to schedule automatic snapshots, and then send them to the new internal HDD. I found that I can do this manually with the replication feature, but that I cannot schedule it to occur automatically.

When I read the licensing for replication however it seems to be geared toward a different use-case, like off-site replication. The description says that only the "receiver" needs the license, which would lead me to believe I should be able to schedule sends without any extra license? I don't have a 2nd server.

Do I need a license to send the snaps from one pool to another on the same host or am I just being blind and not seeing it? 50 euro is a lot for me.

I looked at maybe how to script activities myself but it's over my head.

Thanks for any help.

With the free version, you can:
- replicate locally (manually only)
- use any free replication script
- sync folders (file-based) with rsync as an 'other job' (this is what I would do; see the sketch below)
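
For the rsync route, the 'other job' only needs to run something like this on a schedule (source and target paths are examples):

Code:
# file-based mirror of the picture filesystem onto the backup disk's pool
rsync -a --delete /tank/pictures/ /backup/pictures/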
 
A minor question but in napp-it, is there a way to get that little window that slides up from the bottom to stay on screen? It looks like it's the console output and commands of whatever feature you have clicked on. It comes up for about 2 seconds then goes away.
 
A minor question but in napp-it, is there a way to get that little window that slides up from the bottom to stay on screen? It looks like it's the console output and commands of whatever feature you have clicked on. It comes up for about 2 seconds then goes away.

The little window is a mini-log of the console output of current commands. It is useful for keeping an eye on running processes; if a program hangs, you can identify it easily.
If you use a napp-it Pro version (with a monitor key or an eval key) you can click on "Edit" (top-level menu, right of Logout) and then on "Log" to pop up the whole log.
 
Custom fields for filesystems don't seem to work. As an example:

- Click on "ZFS Filesystems"
- Click a link (off or on) under COMPR
- When the page reloads, try to set compression=gzip.

You can't type in that field, nor in some others like it (at least dedup=verify). You can edit others (QUOTA, RES, etc.). Is there a specific reason for this?

I can ssh into the box and manually do what I need to from there, but I'm a little surprised that the UI doesn't allow it.

edit:
correction: nfs can be edited when enabling sharing
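
For reference, the manual commands over ssh are simply (the filesystem name is a placeholder):

Code:
zfs set compression=gzip tank/data
zfs set dedup=verify tank/data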
 
I already have Napp-It installed on a server and am in the process of setting it up on some other servers. I don't need the Eval period on the new machines and don't want to accidentally use / enable any option that gets disabled at the end of the Eval period. How can I disable / remove / uninstall the eval license or trick it into believing that the eval is over?

pcd
 
From a recovery perspective the Napp-It config can be backed up, but what do people recommend for the underlying OS, other than Users and Groups what else needs to be backed up?
What's the best way of doing this?

I ask as a number of people comment on SSD's and USB's running out of write cycles, and I'm concerned that I could soon be in a similar position.

pcd
 
I already have Napp-It installed on a server and am in the process of setting it up on some other servers. I don't need the Eval period on the new machines and don't want to accidentally use / enable any option that gets disabled at the end of the Eval period. How can I disable / remove / uninstall the eval license or trick it into believing that the eval is over?

pcd

Quite simple:
Delete the evalkey at extension >> register >> edit.
 
From a recovery perspective the Napp-It config can be backed up, but what do people recommend for the underlying OS, other than Users and Groups what else needs to be backed up?
What's the best way of doing this?

I ask as a number of people comment on SSD's and USB's running out of write cycles, and I'm concerned that I could soon be in a similar position.

pcd

If you have important settings on rpool, you may use a ZFS mirror, a SATA mirror, or you may clone the boot disk/stick from time to time.

Mostly the settings on rpool are not too important on a typical NAS or SAN.
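
The rpool mirror route is basically the sequence shown earlier in this thread (device names are examples; the second device needs a Solaris partition at least as large as the first):

Code:
# copy the partition table, attach as a mirror, then make the new device bootable
prtvtoc /dev/rdsk/c3t0d0s0 | fmthard -s - /dev/rdsk/c4t0d0s0
zpool attach -f rpool c3t0d0s0 c4t0d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t0d0s0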
 
If I have security set up don't I need the ID's of the users and groups?
What's the easiest way of cloning the boot disks? I use a combination of USB and partitioned SSD's - with the other partitions being used for ZIL and L2ARC.

pcd
 
If I have security set up don't I need the ID's of the users and groups?
What's the easiest way of cloning the boot disks? I use a combination of USB and partitioned SSD's - with the other partitions being used for ZIL and L2ARC.

pcd

If you use CIFS, you must look at ACLs, not Unix permissions; these are based on local Solaris users and SMB groups (not Unix groups).

I use Clonezilla.

You may use an SSD for boot and L2ARC, but never for the ZIL as well (very bad use).
If the ZIL is not really fast regarding latency, it may be slower than the on-pool ZIL. Even a dedicated but slow SSD is not really helpful.

Disable sync write, accept slow secure sync writes, or buy something really good like an expensive ZeusRAM or a quite affordable Intel S3700.
 
If you use CIFS, you must look at ACLs, not Unix permissions; these are based on local Solaris users and SMB groups (not Unix groups).
Looks like I have a little more reading / learnng to do.

I use Clonezilla
Thank you that's the kind of info I was looking for.

You may use an SSD for boot and L2ARC, but never for the ZIL as well (very bad use).
If the ZIL is not really fast regarding latency, it may be slower than the on-pool ZIL. Even a dedicated but slow SSD is not really helpful.

Disable sync write, accept slow secure sync writes, or buy something really good like an expensive ZeusRAM or a quite affordable Intel S3700.
I'm running on the N40 Microservers so performance is not going to be fast at the best of times and with Sata2, I figured that any SSD was going to be better than rotating rust. Looks like I'll be removing the ZIL and disabling sync writes.

Code:
# disable synchronous writes for the pool (faster, but the last few seconds of sync writes can be lost on a crash)
zfs set sync=disabled <pool>
# remove the dedicated log device (ZIL/slog) from the pool
zpool remove <pool> <device>
Done!

pcd
 
Hey guys,

how can I install an older version of napp-it?

I'm getting some weird NFS issues with the newest napp-it version and stable OmniOS.

More specifically, with PHD Virtual Backup the initial setup fails because of NFS ACLs blocking inherited permissions.
 
Hey guys,

how can I install an older version of napp-it?

I'm getting some weird NFS issues with the newest napp-it version and stable OmniOS.

More specifically, with PHD Virtual Backup the initial setup fails because of NFS ACLs blocking inherited permissions.

You can install former versions down to 0.8, see
http://www.napp-it.org/downloads/changelog.html

But NFS behaviour depends on the underlying OS, not on napp-it.
 
How do I report what I think is a bug in napp-it (0.9d2)? When listing our hourly snapshots the minutes are missing from the Creation date when the hour < 10:
[screenshot: snapshot list with the minutes missing from the creation date]

Or is there a setting somewhere that I haven't found?
 
How do I report what I think is a bug in napp-it (0.9d2)? When listing our hourly snapshots the minutes are missing from the Creation date when the hour < 10:

Yes, this is a bug. Snap names are correct but there is a display problem
that I will fix in the next release.
 
My pool is in SUSPENDED status most probably because snapshots overrun the 90% capacity limit.

"zpool status -v Silo"

returns:
Code:
  pool: Silo
 state: SUSPENDED
status: One or more devices are unavailable in response to IO failures.
        The pool is suspended.
action: Make sure the affected devices are connected, then run 'zpool clear' or
        'fmadm repaired'.
   see: http://support.oracle.com/msg/ZFS-8000-HC
  scan: scrub repaired 0 in 8h26m with 0 errors on Fri Nov 1 11:26:35 2013
config:

        NAME         STATE     READ WRITE CKSUM
        Silo         UNAVAIL      0     0     0
          raidz1-0   UNAVAIL      0     0     0
            c9t10d0  UNAVAIL      0     0     0
            c9t11d0  UNAVAIL      0     0     0
            c9t5d0   UNAVAIL      0     0     0
            c9t6d0   UNAVAIL      0     0     0
            c9t9d0   UNAVAIL      0     0     0
        logs
          c9t12d0    UNAVAIL      0     0     0
        cache
          c9t20d0    ONLINE       0     0     0

device details:

c9t10d0 UNAVAIL experienced I/O failures
status: FMA has faulted this device.
action: Run 'fmadm faulty' for more information. Clear the errors
using 'fmadm repaired'.
see: http://support.oracle.com/msg/ZFS-8000-FD for recovery

c9t11d0 UNAVAIL experienced I/O failures
status: ZFS detected errors on this device.
The pool experienced I/O failures.

c9t5d0 UNAVAIL experienced I/O failures
status: FMA has faulted this device.
action: Run 'fmadm faulty' for more information. Clear the errors
using 'fmadm repaired'.

c9t6d0 UNAVAIL experienced I/O failures
status: ZFS detected errors on this device.
The pool experienced I/O failures.

c9t9d0 UNAVAIL experienced I/O failures
status: ZFS detected errors on this device.
The pool experienced I/O failures.

c9t12d0 UNAVAIL experienced I/O failures
status: ZFS detected errors on this device.
The pool experienced I/O failures.


"zpool clear Silo"
returns:
"cannot clear errors for Silo: I/O error"

"fmadm faulted" after I have run "fmadm repaired zfs://pool............................."
reports (part of the message that is repeated for every one of the 5 HDs, in this case is the c9t10d0 disk)

Suspect 1 of 1 :
Fault class : fault.fs.zfs.vdev.io
Certainty : 100%
Affects : zfs://pool=1b442dce7e095fc5/vdev=5529f9b5db9c59c8/pool_name=Silo/vdev_name=id1,sd@n50000f000b016887/a
Status : out of service, but associated components no longer faulty

FRU
Name : "zfs://pool=1b442dce7e095fc5/vdev=5529f9b5db9c59c8/pool_name=Silo/vdev_name=id1,sd@n50000f000b016887/a"
Status : repaired

Description : The number of I/O errors associated with ZFS device
'id1,sd@n50000f000b016887/a' in pool 'Silo' exceeded acceptable

Your help in regaining pool availability will be appreciated. TIA
 
I do not know if the following are relevant, but I mention them in any case:

I found the following link:
https://forums.oracle.com/thread/2546634 which mentions a similar case.
I also run Solaris 11.1, as in the original thread.
Following the instructions found there, I renamed the file /etc/zfs/zpool.cache instead of deleting it.

I also use napp-it 0.9b3 (thanks Gea), but started booting from the original Solaris installation onwards.
The "Silo" pool was not visible at all.
One step before the napp-it installation, I tried importing the pool:
zpool import Silo (without -f or anything)
It was imported in a degraded state and as I write it is resilvering.
When resilvering finishes I will try the napp-it install again and see if it's OK as well.
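
In command form, what I did was roughly this (the new filename is illustrative; I just renamed the file):

Code:
# rename the stale pool cache as suggested in the Oracle thread, then reboot
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old
# after the reboot, import the pool normally
zpool import Silo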
 
Final part of the trilogy:
Resilvering finished, and I booted from the 0.9b3 napp-it install.
The pool was not visible, so I tried to import it via the napp-it GUI.
That didn't do the trick, with no error shown.
I tried the same via the Solaris terminal and it succeeded.
Now napp-it reports:

Code:
  pool: Silo
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: resilvered 26.2G in 0h30m with 0 errors on Sat Nov 23 18:31:04 2013
config:

        NAME         STATE     READ WRITE CKSUM   CAP      Product
        Silo         ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            c9t10d0  ONLINE       0     0     0   1 TB     SAMSUNG HD103UJ
            c9t11d0  ONLINE       0     0     0   1 TB     SAMSUNG HD103UJ
            c9t5d0   ONLINE       0     0     0   1 TB     SAMSUNG HD103SJ
            c9t6d0   ONLINE       0     0     0   1 TB     SAMSUNG HD103SJ
            c9t9d0   ONLINE       0     0     0   1 TB     SAMSUNG HD103UJ
        logs
          c9t12d0    ONLINE       0     0     0   40 GB    KINGSTON SSDNow
        cache
          c9t20d0    ONLINE       0     0     0   80 GB    INTEL SSDSA2M080

Happy ending, everything is well.
 
Hi guys. I have a question related to NFS and UNIX permissions (ACL).

I have three filesystems: A, B and C.

Filesystem A and B have "group@" equal to "modify_set". Their ownership is "root:groupnameA" and "root:groupnameB" respectively.

Filesystem C has "everyone@" equal to "modify_set". Its ownership is "root:root".

Client machine A can access filesystems A and C, client machine B can access filesystems B and C. This is handled via synchronization of UIDs and GIDs between the two clients and the server.

Problem:
Now, I have a virtual machine on the server (ESXi all-in-one), which needs read-write access to all three filesystems to facilitate cloud backup.

How can I handle this? The virtual machine can clearly not have the same UID/GID of client machine A and B at the same time.
 
I'm trying to replace degraded drives for the first time.
I added a spare to the pool and chose to replace the degraded disk with the spare, which kicked off the resilvering.

During the resilvering, another degraded disk was found, for which I repeated the spare/replace process.

I checked under the remove menu in napp-it but it only allows me to remove caches/spares.

To remove the degraded disks at this point, do I simply turn off the machine and pull the disks? Will the spares automatically be part of the pool as normal disks instead of spares?

Thanks.

Code:
  pool: xpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Nov 24 08:00:41 2013
    1.21T scanned out of 7.85T at 323M/s, 5h59m to go
    124G resilvered, 15.48% done
config:

	NAME                         STATE     READ WRITE CKSUM     CAP            MODELL
	xpool                        DEGRADED     0     0     0
	  raidz2-0                   DEGRADED     0     0     0
	    spare-0                  DEGRADED     0     0     0
	      c3t50014EE0ABD529D5d0  DEGRADED     0     0     0  too many errors     1000.20 GB     WDC WD1001FALS-0
	      c3t5000CCA228C46994d0  ONLINE       0     0     0     3.00 TB        Hitachi HDS5C303
	    c3t50014EE201DB6937d0    ONLINE       0     0     0     1000.20 GB     WDC WD10EACS-65D
	    c3t50014EE202777A59d0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE20277824Ad0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE2027783F2d0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE20277854Ed0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE257CCFC48d0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE257D7A49Cd0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE257D7A4DBd0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    spare-9                  DEGRADED     0     0     0
	      c3t50014EE2AD22668Dd0  DEGRADED     0     0     0  too many errors     1000.20 GB     WDC WD10EADS-00L
	      c3t5000CCA228C46124d0  ONLINE       0     0     0  (resilvering)     3.00 TB        Hitachi HDS5C303
	spares
	  c3t5000CCA228C46994d0      INUSE     currently in use     3.00 TB        Hitachi HDS5C303
	  c3t5000CCA228C46124d0      INUSE     currently in use     3.00 TB        Hitachi HDS5C303

errors: No known data errors
 
I also have an issue. Somebody mishandled the server or something like that and 3 drives were kicked out of my RAIDZ3. Unaware of this, I wrote about 15 GB of data to the 27 TB pool, until I suspected something since it was pretty slow. I cleared the errors and the disks all appear to be OK, however the 3 drives are being resilvered. This should take something like 36 hours at the current speed, with no margin if another drive decides to quit. Most of the data is backed up, but that would still annoy me.

Is there any way to stop the resilvering, declare all drives good, and then do a scrub to fix the little that should need fixing? The resilvering drives should be consistent for 99.99% of their content!
 
What do you know? Just as I was writing the previous message (and searching the web for ideas, including performance tuning of the resilver), the operation actually ended: 79 GB resilvered in 53 minutes.
 
Hi,

For some unknown reason my OpenIndiana did not want to start anymore (in ESXi).
So I reinstalled OpenIndiana + napp-it on the ESXi server. That works nicely.

My disks are passed through via the IBM M1015 and visible in napp-it.

In napp-it I used the import command (under Pools). The pools are available with the correct used and unused amounts of data.

But I have the following error: /zfs-pool: No such file or directory /zfs-pool/.zfs/shares/: No such file or directory /zfs-pool/zfs_z2: No such file or directory /zfs-pool/zfs_z2/.zfs/shares/: No such file or directory

What can I do to have access to my data?

Thanks

Nick
 
I'm trying to replace degraded drives for the first time.
I added a spare to the pool and chose to replace the degraded disk with the spare, which kicked off the resilvering.

During the resilvering, another degraded disk was found, for which I repeated the spare/replace process.

I checked under the remove menu in napp-it but it only allows me to remove caches/spares.

To remove the degraded disks at this point, do I simply turn off the machine and pull the disks? Will the spares automatically be part of the pool as normal disks instead of spares?

Thanks.

Code:
  pool: xpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Nov 24 08:00:41 2013
    1.21T scanned out of 7.85T at 323M/s, 5h59m to go
    124G resilvered, 15.48% done
config:

	NAME                         STATE     READ WRITE CKSUM     CAP            MODELL
	xpool                        DEGRADED     0     0     0
	  raidz2-0                   DEGRADED     0     0     0
	    spare-0                  DEGRADED     0     0     0
	      c3t50014EE0ABD529D5d0  DEGRADED     0     0     0  too many errors     1000.20 GB     WDC WD1001FALS-0
	      c3t5000CCA228C46994d0  ONLINE       0     0     0     3.00 TB        Hitachi HDS5C303
	    c3t50014EE201DB6937d0    ONLINE       0     0     0     1000.20 GB     WDC WD10EACS-65D
	    c3t50014EE202777A59d0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE20277824Ad0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE2027783F2d0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE20277854Ed0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE257CCFC48d0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE257D7A49Cd0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    c3t50014EE257D7A4DBd0    ONLINE       0     0     0     1000.20 GB     WDC WD10EADS-00L
	    spare-9                  DEGRADED     0     0     0
	      c3t50014EE2AD22668Dd0  DEGRADED     0     0     0  too many errors     1000.20 GB     WDC WD10EADS-00L
	      c3t5000CCA228C46124d0  ONLINE       0     0     0  (resilvering)     3.00 TB        Hitachi HDS5C303
	spares
	  c3t5000CCA228C46994d0      INUSE     currently in use     3.00 TB        Hitachi HDS5C303
	  c3t5000CCA228C46124d0      INUSE     currently in use     3.00 TB        Hitachi HDS5C303

errors: No known data errors

Code:
zpool detach xpool c3t50014EE0ABD529D5d0
zpool detach xpool c3t50014EE2AD22668Dd0

Should do the trick. Detaching the failed originals should leave the in-use spares as regular members of the raidz2 vdev.
 
For some strange reason, I have access to my data after a reboot.

Hmm

Your errors are about the SMB share control files.
This indicates that you have enabled SMB on a filesystem while
the SMB service was not running (correctly).

A reboot or an SMB service disable/enable solves the problem.
 
Is it not recommended to use OI anymore? I have an OI based napp-it server that I've been running for over a year now. I haven't followed this thread for a while, but it seems like there's now a standalone napp-it on Omni? Should I switch?

I'm really just a newbie with Linux, and command line scares me. It makes me feel good (and probably for very little good reason) to have a UI with OI, but at the same time I'm worried about future support???
 
Is it not recommended to use OI anymore? I have an OI based napp-it server that I've been running for over a year now. I haven't followed this thread for a while, but it seems like there's now a standalone napp-it on Omni? Should I switch?

I'm really just a newbie with Linux, and command line scares me. It makes me feel good (and probably for very little good reason) to have a UI with OI, but at the same time I'm worried about future support???

The main reasons for me to switch to OmniOS as the default platform are:
- stable releases every 6 months
- bugfixes
- long-term support for some releases
- optional: paid commercial support
- bloody releases with the newest features

while OI is a dev release only.

For professional use I would think about a move to OmniOS,
but OI remains a supported platform for napp-it, so you can stay on OI.

Some remarks:
- You do not really need the OI GUI; you can browse/edit your storage locally with
Midnight Commander or remotely via WinSCP and do all regular settings in napp-it.

- You may install a GUI on OmniOS as well. Maybe someone is interested in providing a well-tested
config and installer, see http://www.perkin.org.uk/posts/whats-new-in-pkgsrc-2013Q2.html
 
The main reasons for me to switch to OmniOS as the default platform are:
- stable releases every 6 months
- bugfixes
- long-term support for some releases
- optional: paid commercial support
- bloody releases with the newest features

while OI is a dev release only.

For professional use I would think about a move to OmniOS,
but OI remains a supported platform for napp-it, so you can stay on OI.

Some remarks:
- You do not really need the OI GUI; you can browse/edit your storage locally with
Midnight Commander or remotely via WinSCP and do all regular settings in napp-it.

- You may install a GUI on OmniOS as well. Maybe someone is interested in providing a well-tested
config and installer, see http://www.perkin.org.uk/posts/whats-new-in-pkgsrc-2013Q2.html

How does OmniOS get updated if I use the bundled ESXi image with napp-it off your website? Is there some updater utility? The update functionality within napp-it is for napp-it only, right?
 