How to upgrade from Nexenta to OmniOS?

Discussion in 'SSDs & Data Storage' started by N Bates, Jul 15, 2017.

  1. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
    Hi all,

Really strange, I couldn't log in to the HardForum site; my user name was not recognised and I had to re-register. Is there a way to change my current name back to my old user name?
Anyway, that's not why I am here. I am currently still running Nexenta on my home NAS from an old internal HDD, and I have decided to upgrade to OmniOS with napp-it.

    My current NAS is as per below:

    nexenta appliance v. 0.500r nightly Jun.27.2011

    pool: NAS
    state: ONLINE
    scan: resilvered 0 in 0h0m with 0 errors on Fri Jul 7 03:53:54 2017
    config:

    NAME STATE READ WRITE CKSUM
    NAS ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
    c2t3d0 ONLINE 0 0 0
    c2t2d0 ONLINE 0 0 0
    c2t1d0 ONLINE 0 0 0
    c2t0d0 ONLINE 0 0 0
    raidz1-1 ONLINE 0 0 0
    c2t6d0 ONLINE 0 0 0
    c2t5d0 ONLINE 0 0 0
    c2t4d0 ONLINE 0 0 0
    c2t7d0 ONLINE 0 0 0
    raidz1-2 ONLINE 0 0 0
    c2t11d0 ONLINE 0 0 0
    c2t10d0 ONLINE 0 0 0
    c2t9d0 ONLINE 0 0 0
    c2t8d0 ONLINE 0 0 0
    raidz1-3 ONLINE 0 0 0
    c2t13d0 ONLINE 0 0 0
    c2t14d0 ONLINE 0 0 0
    c2t15d0 ONLINE 0 0 0
    c2t16d0 ONLINE 0 0 0
    raidz1-4 ONLINE 0 0 0
    c2t17d0 ONLINE 0 0 0
    c2t18d0 ONLINE 0 0 0
    c2t19d0 ONLINE 0 0 0
    c2t20d0 ONLINE 0 0 0
    raidz1-5 ONLINE 0 0 0
    c2t21d0 ONLINE 0 0 0
    c2t22d0 ONLINE 0 0 0
    c2t23d0 ONLINE 0 0 0
    c2t24d0 ONLINE 0 0 0

    errors: No known data errors

    pool: syspool
    state: ONLINE
    status: The pool is formatted using an older on-disk format. The pool can
    still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
    pool will no longer be accessible on older software versions.
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM
    syspool ONLINE 0 0 0
    c1t0d0s0 ONLINE 0 0 0

    errors: No known data errors

The system pool is formatted with an older on-disk version than the data pool: version 26 vs. version 28.

Can I just go ahead and upgrade from my current 60 GB internal HDD to a 120 GB SSD? Should I use OmniOS or OI?

    My upgrade steps are as per below, is this correct and safe?

1) Log in to napp-it
2) Export the pool via "export pool NAS" in napp-it
3) Remove the current internal HDD
4) Install the OmniOS or OI image (can this be done on a Windows machine first, or do I have to install on the NAS server itself?)
5) Install the 120 GB SSD in the NAS
6) Boot up into napp-it
7) Import the pool via "import pool NAS"
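If the napp-it menus are unavailable, the export and import steps can also be done at the console with plain zpool commands; a sketch using the pool name NAS from this thread:

```shell
# On the old install, before swapping the system disk:
zpool export NAS

# On the freshly installed OmniOS/OI system:
zpool import          # scans all disks and lists importable pools
zpool import NAS      # imports the pool by name
zpool status NAS      # verify all vdevs came back ONLINE
```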

    Thanks for all your help
     
  2. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
The pool export should be done in NexentaStor, but it is not essential: you can import a pool without a prior export, or even after a pool destroy, as long as you have all the disks.

Then install Solaris, OI or OmniOS from a DVD/CD or USB installer stick onto the NAS, see
http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf

Then log in to napp-it and import the pool.
Check that the mountpoint is /pool; under Nexenta it was /volumes/pool.
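Checking and, if needed, fixing the mountpoint can be done with zfs at the console (a sketch; the dataset name NAS is the pool from this thread):

```shell
# Show where the pool's root dataset is mounted:
zfs get mountpoint NAS

# If it still points at the Nexenta-style /volumes/NAS, move it:
zfs set mountpoint=/NAS NAS
```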
     
  3. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017

Thank you Gea. Are Solaris and OI minimal server OSs suited to a NAS, or do they come with a desktop and are intended more as general-purpose PC operating systems than as minimal NAS/server OSs?
     
  4. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
Oracle Solaris is a general-purpose enterprise Unix targeting mainly top-500 enterprises with cloud or very large data usage. You can install Solaris as a minimal server-only version or as a GUI version with a desktop for easier local management.

OpenIndiana is based on Illumos, the free Solaris fork.
Like Solaris, it is available in a minimal server edition and a GUI edition with the Mate desktop.

OmniOS is a very minimalistic server-only distribution, also based on Illumos, similar to the OpenIndiana minimal/text edition.

None of these are NAS distributions; they are general-purpose Unix server distributions.
As ZFS originated in Solaris, its integration into the OS and into services like iSCSI, NFS or SMB is mostly superior. This is why Solaris and its forks are well suited to a NAS or SAN. A pure NAS distribution of Illumos would be the commercial NexentaStor.

napp-it is a web-based add-on application to manage the system and storage-related features of Solaris, OpenIndiana and OmniOS (or Linux, with a reduced feature set), similar to a pure NAS distribution. From a user's point of view, Solaris/Illumos + napp-it behaves like a dedicated NAS distribution.
     
    N Bates likes this.
  5. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
The only question now is which OS is best for a media NAS server storing movie files from DVDs and Blu-rays, Solaris or OmniOS? I will be sharing over SMB.

I forgot to also ask: which of the two has the broadest hardware driver support for old and new systems?
     
  6. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
Which is the right image to download for the latest OmniOS, and which napp-it ToGo barebone file?
     
  7. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
The ESXi template can be downloaded from
http://napp-it.org/downloads/napp-in-one_en.html or
http://openzfs.hfg-gmuend.de/ as a mirror.

The SATA template for an Intel S3510-80 can be found at the same location,
but this is more of a sample. Cloning a disk image is more for distributors,
as you always need the identical disk.

Usually you do a regular barebone OS setup of OI, OmniOS or Solaris
and add napp-it via the online wget installer.
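The online wget installer is a one-liner run as root after the OS setup. This is the command documented on napp-it.org, but verify it against the current setup PDF before running:

```shell
# Download the napp-it setup script and pipe it to perl:
wget -O - www.napp-it.org/nappit | perl
```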
     
  8. AveryFreeman

    AveryFreeman n00bie

    Messages:
    3
    Joined:
    Aug 6, 2016
The OmniOS developers just recently abandoned the project. I was looking into it myself because I like its features, and it sounds like a solid OS, being descended from OpenSolaris, but it's dead.
     
  9. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
There is a continuation of OmniOS as a community distribution, currently already on its second update. Behind the community project are several firms who use OmniOS internally, plus one from ETH Zürich; see http://www.omniosce.org/ or https://gitter.im/omniosorg/Lobby

btw
OmniOS is not a direct descendant of OpenSolaris.
OpenSolaris was forked into the Illumos project, where firms like Delphix, Joyent (a Samsung company), Nexenta and others combined their efforts to continue a free Solaris, either as a community distribution or with a commercial background. Some commercial distributions like OmniOS or SmartOS are free; others like Nexenta are not, or only with restrictions. Until now, OpenIndiana has been the main community project.

It is the commercial support option for OmniOS at OmniTI that is no longer available. Besides that, if you want an alternative, OpenIndiana is a nearly identical sister project with a different focus, including general use with a desktop and a repository with many services. The focus of OmniOS is a very stable and minimalistic "just enough" ZFS storage server for iSCSI, NFS and SMB.
     
    Last edited: Jul 22, 2017
  10. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
Thank you for the great info Gea. I have installed the community distribution of OmniOS and imported the pool, but now I cannot see all of the vdevs. I had 5 vdevs of 4 disks each; only the below is showing, and it looks like I have a failed disk. How do I determine which disk has failed?

    Pool VER RAW SIZE/ USABLE ALLOC RES FRES AVAIL zfs [df -h/df -H] DEDUP FAILM EXP REPL ALT GUID HEALTH SYNC ENCRYPT ACTION ATIME Pri-Cache Sec-Cache
    NAS 6 12.7T/ 9.2TB 252K - - 9.19T [9.2T /11T] 1.00x wait off off - 12436642361445972779 DEGRADED standard n.a. clear errors - all all
    rpool - 111G/ 107.2GB 2.32G - - 104G [105G /112G] 1.00x wait off off - 6104832667924772593 ONLINE standard n.a. clear errors off all all
Info: RAW pool size does not count redundancy; usable/available size is from zfs list; df -h displays size as a power of 1024 whereas df -H displays it as a power of 1000



    zpool status
    pool: NAS
    state: DEGRADED
    status: One or more devices could not be used because the label is missing or
    invalid. Sufficient replicas exist for the pool to continue
    functioning in a degraded state.
    action: Replace the device using 'zpool replace'.
    see: http://illumos.org/msg/ZFS-8000-4J
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    NAS DEGRADED 0 0 0
    raidz1-0 DEGRADED 0 0 0
    c3t3d0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00467
    5503366839276267646 UNAVAIL 0 0 0 was /dev/ad6
    c3t1d0p0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00468
    c3t0d0p0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00466
    raidz1-1 ONLINE 0 0 0
    c3t6d0p0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309403
    c3t5d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309410
    c3t4d0p0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309168
    c3t7d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309407

    errors: No known data errors

    pool: rpool
    state: ONLINE
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    rpool ONLINE 0 0 0
    c2t0d0 ONLINE 0 0 0 120 GB DREVO X1 SSD S:0 H:0 T:0 TA1762600550

    errors: No known data errors


    id part identify stat diskcap partcap error vendor product sn
    c2t0d0 (!parted) via dd ok 120 GB S:0 H:0 T:0 ATA DREVO X1 SSD TA1762600550
    c3t0d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00466
    c3t10d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502115
    c3t11d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502120
    c3t13d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111892
    c3t14d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502119
    c3t15d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111859
    c3t16d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111880
    c3t17d0 (!parted) via dd ok 2.2 TB S:0 H:0 T:0 ATA ST3000DM001-1CH1 Z1F25D2A
    c3t18d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA ST2000DM001-1CH1 W1E3YZCH
    c3t19d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA ST2000DM001-1CH1 W1E419W7
    c3t1d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00468
    c3t20d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111886
    c3t21d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJK0ZGS
    c3t22d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJ9X3GS
    c3t23d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJ5AXGS
    c3t24d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJK0DGS
    c3t2d0 (!parted) via dd ok 2.2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA3 Z3GH157GS
    c3t3d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00467
    c3t4d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309168
    c3t5d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309410
    c3t6d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309403
    c3t7d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309407
    c3t8d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9AB500588
    c3t9d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9AB500578

    This page is updated in realtime when using the monitor extension - otherwise you must reload manually.
    On errors, check menu disks - details - diskinfo for details.

    If new disks are missing, you need to initialize the disks, use menu disks - initialize

Thanks for all your help.
     
  11. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
As the disk is completely missing, you lack its serial number and controller port information. If you have some sort of disk map of all disks, you can check for the one that is missing.

A simple method is to read from or write to the pool and look for the one disk whose activity LED is not flashing.
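The missing disk can often also be spotted from the console. A sketch using standard Illumos tools:

```shell
# Watch per-disk I/O while reading from the pool; the missing
# disk is the one that never shows activity:
zpool iostat -v NAS 5

# List all detected devices with vendor/product/serial, then compare
# against the serials napp-it shows for pool members to find the absent one:
iostat -En
```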

btw
If you have any chance of a backup, re-create the pool with Z2 vdevs. You have too many disks for Z1, where a second disk failure within one Z1 vdev means the whole pool is lost, especially as your disks seem quite old.

Perhaps a pool built from mirrors of modern 8-12 TB disks could replace the whole thing.
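As an illustration only, creating a Z2-based pool would look like this (hypothetical pool and device names, not from this system; zpool create destroys whatever is on those disks, so only after a full backup):

```shell
# HYPOTHETICAL sketch: a new pool from two 6-disk raidz2 vdevs.
# c4t0d0 ... c4t11d0 are placeholder device names.
zpool create NAS2 \
  raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 \
  raidz2 c4t6d0 c4t7d0 c4t8d0 c4t9d0 c4t10d0 c4t11d0
```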
     
    Last edited: Jul 23, 2017
  12. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
That is strange; the disk is not completely missing, all disks are physically there and attached. I agree, my drives are old and I need to upgrade to higher-capacity newer drives. I know Z1 is a bad idea; I set this up some time ago and I know better now. When funds allow I will get newer drives and back up the server.

Is it right, though, that 3 of the raidz1 vdevs are not showing at all? Last time I had a drive failure, all drives showed online apart from the one that failed, though that was in napp-it 5.0.

Thanks for all your help.
     
  13. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
Something strange I have noticed: should the pool version be showing as 6? When I was on Nexenta I was on version 28. Is this what is causing the problem? How can I upgrade to version 28?
     
  14. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
ZFS version 6 with pool version 37 is Oracle Solaris 11.3.
All Open-ZFS platforms are currently on ZFS v5 with pool version 5000 and feature flags.
A pool update can be done via zpool, or in the napp-it menu Pools by clicking on the old pool version 26 or 28.
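At the console the same pool update is a one-liner; note the earlier warning that an upgraded pool is no longer readable by older software:

```shell
zpool get version NAS   # show the current on-disk pool version
zpool upgrade NAS       # upgrade the pool to the running system's version
```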

If all disks are shown under menu Disks but missing under Pools, you have an enumeration problem. This happens with port-based detection like c1t1d0 when the controller number has changed and ZFS expects the disk on a different controller (the newer WWN-based detection does not have this problem).

To solve this, do a pool export + pool import, as ZFS then re-reads all disks.
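The console equivalent of that export/import cycle is:

```shell
zpool export NAS     # release the pool so device paths can be re-read
zpool import NAS     # re-scan all disks for ZFS labels and import
```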
     
  15. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
Thank you Gea. I have tried to export the pool and then import it. I imported one pool and tried to import the second, and it says that pool "NAS" has already been imported, choose another name. Do I change the name? Originally I only had one pool, not two.
Many thanks.
     
  16. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
During the import command, ZFS reads all disks for ZFS labels. It seems that you have disks with more than one label. This can happen if you re-use disks without a proper prior pool destroy or a complete re-initialisation.

This is not critical, as you cannot import such ghost pools when you do not have all of their disks. With more than one pool available on import, only the "last and correct" one can be imported.

As you have imported your pool, check menu Pools for validity.
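When `zpool import` lists several pools with the same name, a specific one can be selected by its numeric GUID instead of the name (the GUID below is the one napp-it reported for NAS earlier in the thread):

```shell
zpool import                        # lists every importable pool with its GUID
zpool import 12436642361445972779   # import a specific pool by GUID
```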
     
  17. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
    Really weird, I can't even connect with Napp-it now, the below is what I have done from start to finish:

    Initially I was on Nexenta and Napp-it 05.00r

    nexenta appliance v. 0.500r nightly Jun.27.2011

    pool: NAS
    state: ONLINE
    scan: resilvered 0 in 0h0m with 0 errors on Fri Jul 7 03:53:54 2017
    config:

    NAME STATE READ WRITE CKSUM
    NAS ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
    c2t3d0 ONLINE 0 0 0
    c2t2d0 ONLINE 0 0 0
    c2t1d0 ONLINE 0 0 0
    c2t0d0 ONLINE 0 0 0
    raidz1-1 ONLINE 0 0 0
    c2t6d0 ONLINE 0 0 0
    c2t5d0 ONLINE 0 0 0
    c2t4d0 ONLINE 0 0 0
    c2t7d0 ONLINE 0 0 0
    raidz1-2 ONLINE 0 0 0
    c2t11d0 ONLINE 0 0 0
    c2t10d0 ONLINE 0 0 0
    c2t9d0 ONLINE 0 0 0
    c2t8d0 ONLINE 0 0 0
    raidz1-3 ONLINE 0 0 0
    c2t13d0 ONLINE 0 0 0
    c2t14d0 ONLINE 0 0 0
    c2t15d0 ONLINE 0 0 0
    c2t16d0 ONLINE 0 0 0
    raidz1-4 ONLINE 0 0 0
    c2t17d0 ONLINE 0 0 0
    c2t18d0 ONLINE 0 0 0
    c2t19d0 ONLINE 0 0 0
    c2t20d0 ONLINE 0 0 0
    raidz1-5 ONLINE 0 0 0
    c2t21d0 ONLINE 0 0 0
    c2t22d0 ONLINE 0 0 0
    c2t23d0 ONLINE 0 0 0
    c2t24d0 ONLINE 0 0 0

    errors: No known data errors

    pool: syspool
    state: ONLINE
    status: The pool is formatted using an older on-disk format. The pool can
    still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
    pool will no longer be accessible on older software versions.
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM
    syspool ONLINE 0 0 0
    c1t0d0s0 ONLINE 0 0 0

    errors: No known data errors

The system pool is formatted with an older on-disk version than the data pool: version 26 vs. version 28.


All was OK on Nexenta. I upgraded the system pool from version 26 to version 28 using napp-it; the pool upgraded fine without issues.

I then took out the 60 GB 2.5" internal HDD and connected the 120 GB SSD.

Installed OmniOS CE and napp-it.

Imported the pool with napp-it; however, at this stage, once I imported it I could only see 2 raidz vdevs, and originally I had 5.

    I could see an error on one of the disks as per below:

    zpool status
    pool: NAS
    state: DEGRADED
    status: One or more devices could not be used because the label is missing or
    invalid. Sufficient replicas exist for the pool to continue
    functioning in a degraded state.
    action: Replace the device using 'zpool replace'.
    see: http://illumos.org/msg/ZFS-8000-4J
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    NAS DEGRADED 0 0 0
    raidz1-0 DEGRADED 0 0 0
    c3t3d0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00467
    5503366839276267646 UNAVAIL 0 0 0 was /dev/ad6
    c3t1d0p0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00468
    c3t0d0p0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00466
    raidz1-1 ONLINE 0 0 0
    c3t6d0p0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309403
    c3t5d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309410
    c3t4d0p0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309168
    c3t7d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309407

    errors: No known data errors

    pool: rpool
    state: ONLINE
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    rpool ONLINE 0 0 0
    c2t0d0 ONLINE 0 0 0 120 GB DREVO X1 SSD S:0 H:0 T:0 TA1762600550

    errors: No known data errors


So I identified the disk which was degraded and swapped it for a new disk with napp-it "replace".

The system resilvered; however, I could still see only 2 raidz vdevs instead of the 5.

I exported again in napp-it and imported again, but the system only imported 2 raidz vdevs again; the other 3 were still not imported, although I could see all disks in napp-it:

    id part identify stat diskcap partcap error vendor product sn
    c2t0d0 (!parted) via dd ok 120 GB S:0 H:0 T:0 ATA DREVO X1 SSD TA1762600550
    c3t0d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00466
    c3t10d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502115
    c3t11d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502120
    c3t13d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111892
    c3t14d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502119
    c3t15d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111859
    c3t16d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111880
    c3t17d0 (!parted) via dd ok 2.2 TB S:0 H:0 T:0 ATA ST3000DM001-1CH1 Z1F25D2A
    c3t18d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA ST2000DM001-1CH1 W1E3YZCH
    c3t19d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA ST2000DM001-1CH1 W1E419W7
    c3t1d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00468
    c3t20d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111886
    c3t21d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJK0ZGS
    c3t22d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJ9X3GS
    c3t23d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJ5AXGS
    c3t24d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJK0DGS
    c3t2d0 (!parted) via dd ok 2.2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA3 Z3GH157GS
    c3t3d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00467
    c3t4d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309168
    c3t5d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309410
    c3t6d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309403
    c3t7d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309407
    c3t8d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9AB500588
    c3t9d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9AB500578

When trying to import the three missing raidz vdevs, the system reported that what I was trying to import had the same name as the already imported pool, and that I should change the name.

Now I can't reach the server with napp-it.
     
  18. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
Let's recapitulate your current state.
You have successfully imported a pool NAS with 3 x Z1 vdevs, possibly due to duplicated ZFS labels on old re-used disks that still carry former pool information. The pool is now reporting degraded (one disk missing) but working, as this is within the redundancy of Z1. All disks are discovered and shown under Disks. Can you read valid data from the pool?

Your former proper pool NAS was built from 5 x Z1 vdevs, and you saw several pools named NAS on your first import attempt. This leads to the assumption that you have not imported the proper pool but perhaps a prior state with 3 vdevs. If only 2 of 5 vdevs were missing, your pool state would have been unavailable due to the missing vdevs, becoming online again only when the missing vdevs come back.

It is hard to tell if you can fix that, as you have already done some replacements. Is there more than one option available on pool import? If so, import read-only to avoid changing anything and check whether it is the right one. Did you export the pool in NexentaCore, or only try an import on OmniOS? Can you retry the old NexentaCore and import the pool again there? If this works and you have not exported the pool in NexentaCore, you can try to export and then import again in OmniOS; otherwise back up the data first.
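A read-only import, as suggested, is done with an import-time property so that nothing on disk gets modified:

```shell
# Import without allowing any writes; safe for inspecting a suspect pool:
zpool import -o readonly=on NAS
```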

If you cannot import the proper state in OmniOS and cannot import again in NexentaCore, I suppose the pool is lost. The last option I would try then is a Solaris 11.3 live system (boot from CD) and an import there, as Solaris can import up to v28. If this works, do not update anything, only export, as newer ZFS is incompatible between Solaris and Open-ZFS.

btw
You cannot import a vdev, only a pool with all the vdevs it is built from. On current ZFS versions I have not seen problems with different but identically named labels, for example when re-using disks, but ZFS v26 is a very old state.

If napp-it is hanging, this is mostly due to a hanging zpool or format command, as these commands are called from within napp-it. Try these commands at the console and/or restart napp-it at the console via /etc/init.d/napp-it restart
     
    Last edited: Jul 26, 2017
  19. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
    Thank you for all your help Gea, I will try your above suggestions when I have the time this week end and let you know how I get on.
     
  20. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
Hmmm, napp-it will not restart from the console, and NexentaCore is giving "error 16: inconsistent file system structure" at the console, and napp-it will not start. Should I put back the HDD that I took out and see whether this makes a difference? I have a feeling that the disk is not bad, seeing as the pool originally showed healthy.
     
  21. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
It was a no-go in Solaris 11.3; the system did not find a pool. However, on OmniOS at the console I can see the below:

cannot mount '/NAS': directory is not empty
cannot mount 'NAS/NAS_Media': I/O error
svc:/system/filesystem/local:default: warning: /usr/sbin/zfs mount -a failed: exit status 1
Jul 29 20:22:18 svc.startd[10]: svc:/system/filesystem/local:default: method "/lib/svc/method/fs-local" failed with exit status 95.
Jul 29 20:22:19 svc.startd[10]: system/filesystem/local:default failed fatally, transitioned to maintenance (see 'svcs -xv' for details)

    NAS console login:

When I installed OmniOS I remember changing the name from unknown to NAS; probably this is why I have /NAS and the old NAS/NAS_Media. Can I change the name NAS that I created when installing OmniOS back to unknown? Maybe then I could access my files?

When I type zpool list at the console on OmniOS, I see the below:

    name size alloc free expandsz frag cap dedup health altroot
    NAS 41.7T 39.2T 2.48T - - 94% 1.00x degraded -
    rpool 111G 2.33G 109G - 0% 2% 1.00x online -
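The svc.startd messages in the console output mean the filesystem/local service went into maintenance because `zfs mount -a` failed. Once the mount problem is fixed, the service can be inspected and cleared with the standard SMF tools:

```shell
svcs -xv                                            # show why the service is in maintenance
svcadm clear svc:/system/filesystem/local:default   # retry after the cause is fixed
```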
     
    Last edited: Jul 29, 2017
  22. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
Now hopefully I am getting somewhere. I have the below, so hopefully after swapping the drive and resilvering all will be well; I will report back:

    pool: NAS
    state: DEGRADED
    status: One or more devices has experienced an unrecoverable error. An
    attempt was made to correct the error. Applications are unaffected.
    action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
    see: http://illumos.org/msg/ZFS-8000-9P
    scan: resilvered 512 in 0h0m with 0 errors on Sun Jul 30 12:42:46 2017
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    NAS DEGRADED 0 0 1
    raidz1-0 DEGRADED 0 0 4
    c3t3d0 DEGRADED 0 0 0 too many errors 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00467
    c3t2d0 ONLINE 0 0 0 2.2 TB TOSHIBA DT01ACA3 S:0 H:0 T:0 Z3GH157GS
    c3t1d0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00468
    c3t0d0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00466
    raidz1-1 ONLINE 0 0 0
    c3t6d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309403
    c3t5d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309410
    c3t4d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309168
    c3t7d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309407
    raidz1-2 ONLINE 0 0 0
    c3t11d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502120
    c3t10d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502115
    c3t9d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9AB500578
    c3t8d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9AB500588
    raidz1-3 ONLINE 0 0 0
    c3t13d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111892
    c3t14d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502119
    c3t15d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111859
    c3t16d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111880
    raidz1-4 ONLINE 0 0 0
    c3t17d0 ONLINE 0 0 0 2.2 TB ST3000DM001-1CH1 S:0 H:0 T:0 Z1F25D2A
    c3t18d0 ONLINE 0 0 0 2 TB ST2000DM001-1CH1 S:0 H:0 T:0 W1E3YZCH
    c3t19d0 ONLINE 0 0 0 2 TB ST2000DM001-1CH1 S:0 H:0 T:0 W1E419W7
    c3t20d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111886
    raidz1-5 ONLINE 0 0 0
    c3t21d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJK0ZGS
    c3t22d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJ9X3GS
    c3t23d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJ5AXGS
    c3t24d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJK0DGS

    errors: No known data errors

    pool: rpool
    state: ONLINE
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    rpool ONLINE 0 0 0
    c2t0d0 ONLINE 0 0 0 120 GB DREVO X1 SSD S:0 H:0 T:0 TA1762600550

    errors: No known data errors
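The two actions named in the NAS status output above can be run at the console (c3t3d0 is the disk the output marks as degraded; both are standard zpool subcommands):

```shell
# If the disk is healthy and the errors were transient, reset the error counters:
zpool clear NAS c3t3d0

# If the disk really is failing, replace it in place (new disk in the same slot):
zpool replace NAS c3t3d0
```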
     
  23. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
Hopefully it will all work out well. Now I have the below. The only thing I have found is that I had to keep starting napp-it in OmniOS via /etc/init.d/napp-it start for it to connect:

home > Disks > Replace


    replace disk c2t0d0 rpool basic ONLINE ATA DREVO X1 SSD 120 GB
    c3t0d0 NAS raidz1-0 ONLINE ATA SAMSUNG HD154UI 1.5 TB
    c3t10d0 NAS raidz1-2 ONLINE ATA SAMSUNG HD204UI 2 TB
    c3t11d0 NAS raidz1-2 ONLINE ATA SAMSUNG HD204UI 2 TB
    c3t13d0 NAS raidz1-3 ONLINE ATA SAMSUNG HD204UI 2 TB
    c3t14d0 NAS raidz1-3 ONLINE ATA SAMSUNG HD204UI 2 TB
    c3t15d0 NAS raidz1-3 ONLINE ATA SAMSUNG HD204UI 2 TB
    c3t16d0 NAS raidz1-3 ONLINE ATA SAMSUNG HD204UI 2 TB
    c3t17d0 NAS raidz1-4 ONLINE ATA ST3000DM001-1CH1 2.2 TB
    c3t18d0 NAS raidz1-4 ONLINE ATA ST2000DM001-1CH1 2 TB
    c3t19d0 NAS raidz1-4 ONLINE ATA ST2000DM001-1CH1 2 TB
    c3t1d0 NAS raidz1-0 ONLINE ATA SAMSUNG HD154UI 1.5 TB
    c3t20d0 NAS raidz1-4 ONLINE ATA SAMSUNG HD204UI 2 TB
    c3t21d0 NAS raidz1-5 ONLINE ATA TOSHIBA DT01ACA2 2 TB
    c3t22d0 NAS raidz1-5 ONLINE ATA TOSHIBA DT01ACA2 2 TB
    c3t23d0 NAS raidz1-5 ONLINE ATA TOSHIBA DT01ACA2 2 TB
    c3t24d0 NAS raidz1-5 ONLINE ATA TOSHIBA DT01ACA2 2 TB
    c3t2d0 NAS raidz1-0 ONLINE ATA TOSHIBA DT01ACA3 2.2 TB
    c3t3d0/old NAS raidz1-0 FAULTED -
    c3t4d0 NAS raidz1-1 ONLINE ATA SAMSUNG HD203WI 2 TB
    c3t5d0 NAS raidz1-1 ONLINE ATA SAMSUNG HD203WI 2 TB
    c3t6d0 NAS raidz1-1 ONLINE ATA SAMSUNG HD203WI 2 TB
    c3t7d0 NAS raidz1-1 ONLINE ATA SAMSUNG HD203WI 2 TB
    c3t8d0 NAS raidz1-2 ONLINE ATA SAMSUNG HD204UI 2 TB
    c3t9d0 NAS raidz1-2 ONLINE ATA SAMSUNG HD204UI 2 TB


    zpool status
    pool: NAS
    state: DEGRADED
    status: One or more devices is currently being resilvered. The pool will
    continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
    scan: resilver in progress since Sun Jul 30 13:21:06 2017
    115G scanned out of 39.2T at 223M/s, 50h57m to go
    13.1G resilvered, 0.29% done
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess
    NAS DEGRADED 0 0 27
    raidz1-0 DEGRADED 0 0 116
    replacing-0 UNAVAIL 0 0 0
    c3t3d0/old FAULTED 0 0 0 corrupted data
    c3t3d0 ONLINE 0 0 0 (resilvering) 2 TB Hitachi HUA72302 S:0 H:0 T:0
    c3t2d0 ONLINE 0 0 0 (resilvering) 2.2 TB TOSHIBA DT01ACA3 S:0 H:0 T:0
    c3t1d0 ONLINE 0 0 0 (resilvering) 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0
    c3t0d0 ONLINE 0 0 0 (resilvering) 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0
    raidz1-1 ONLINE 0 0 65
    c3t6d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0
    c3t5d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0
    c3t4d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0
    c3t7d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0
    raidz1-2 ONLINE 0 0 0
    c3t11d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0
    c3t10d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0
    c3t9d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0
    c3t8d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0
    raidz1-3 ONLINE 0 0 0
    c3t13d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0
    c3t14d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0
    c3t15d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0
    c3t16d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0
    raidz1-4 ONLINE 0 0 0
    c3t17d0 ONLINE 0 0 0 2.2 TB ST3000DM001-1CH1 S:0 H:0 T:0
    c3t18d0 ONLINE 0 0 0 2 TB ST2000DM001-1CH1 S:0 H:0 T:0
    c3t19d0 ONLINE 0 0 0 2 TB ST2000DM001-1CH1 S:0 H:0 T:0
    c3t20d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0
    raidz1-5 ONLINE 0 0 0
    c3t21d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0
    c3t22d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0
    c3t23d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0
    c3t24d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0

    errors: 7 data errors, use '-v' for a list

    pool: rpool
    state: ONLINE
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess
    rpool ONLINE 0 0 0
    c2t0d0 ONLINE 0 0 0 120 GB DREVO X1 SSD S:0 H:0 T:0

    errors: No known data errors
     
  24. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
This can happen when you have a regular folder /NAS that hinders the mount.
If there is an empty folder /NAS, delete it.
You can use Midnight Commander as a console file browser to check.


Start it at the console via
mc
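A minimal console sketch of that check (assuming the default mountpoint /NAS; the helper name is made up, and the demo uses a scratch directory so it is safe to try):

```shell
#!/bin/sh
# An ordinary, empty folder at the mountpoint blocks `zfs mount -a`;
# remove it when (and only when) it is empty.
clear_mountpoint() {
  d=$1
  if [ -d "$d" ] && [ -z "$(ls -A "$d")" ]; then
    rmdir "$d" && echo "removed empty folder $d"   # rmdir refuses non-empty dirs
  else
    echo "$d is missing or not empty - inspect it (e.g. with mc) first"
  fi
}

# demo on a scratch directory instead of the real /NAS
tmp=$(mktemp -d)/NAS
mkdir -p "$tmp"
clear_mountpoint "$tmp"
# on the real system you would then run:  clear_mountpoint /NAS && zfs mount -a
```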
     
  25. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
I will let it finish resilvering first and then take a look. The funny thing is, it looks like only raidz1-0 and raidz1-1 are resilvering; the rest are not doing anything. Is this OK?
Many thanks.
     
  26. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
Do I need to do anything to the disks where it says removed?

    id part identify stat diskcap partcap error vendor product sn
    c2t0d0 (!parted) via dd ok 120 GB S:0 H:0 T:0 ATA DREVO X1 SSD TA1762600550
    c3t0d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00466
    c3t10d0 - - removed 2 TB - ATA SAMSUNG HD204UI S2H7J9BB502115
    c3t11d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502120
    c3t13d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111892
    c3t14d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502119
    c3t15d0 - - removed 2 TB - ATA SAMSUNG HD204UI S2H7J90B111859
    c3t16d0 - - removed 2 TB - ATA SAMSUNG HD204UI S2H7J90B111880
    c3t17d0 - - removed 2.2 TB - ATA ST3000DM001-1CH1 Z1F25D2A
    c3t18d0 (!parted) via dd ok 2 TB S:0 H:1 T:1 ATA ST2000DM001-1CH1 W1E3YZCH
    c3t19d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA ST2000DM001-1CH1 W1E419W7
    c3t1d0 - - removed 1.5 TB - ATA SAMSUNG HD154UI S1XWJ1LSC00468
    c3t20d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111886
    c3t21d0 - - removed 2 TB - ATA TOSHIBA DT01ACA2 X3UJK0ZGS
    c3t22d0 - - removed 2 TB - ATA TOSHIBA DT01ACA2 X3UJ9X3GS
    c3t23d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJ5AXGS
    c3t24d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJK0DGS
    c3t2d0 (!parted) via dd ok 2.2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA3 Z3GH157GS
    c3t3d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA Hitachi HUA72302 YGGYKA5D
    c3t4d0 (!parted) via dd ok 2 TB S:0 H:1 T:1 ATA SAMSUNG HD203WI S1UYJ1KZ309168
    c3t5d0 - - removed 2 TB - ATA SAMSUNG HD203WI S1UYJ1KZ309410
    c3t6d0 - - removed 2 TB - ATA SAMSUNG HD203WI S1UYJ1KZ309403
    c3t7d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309407
    c3t8d0 - - removed 2 TB - ATA SAMSUNG HD204UI S2H7J9AB500588
    c3t9d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9AB500578

    This page is updated in realtime when using the monitor extension - otherwise you must reload manually.
    On errors, check menu disks - details - diskinfo for details.

    If new disks are missing, you need to initialize the disks, use menu disks - initialize




If I am not mistaken it's been at 7.48% for the last three hours; has it hung and is it no longer resilvering?

    cannot open 'NAS': pool I/O is currently suspended
    Pool VER RAW SIZE/ USABLE ALLOC RES FRES AVAIL zfs [df -h/df -H] DEDUP FAILM EXP REPL ALT GUID HEALTH SYNC ENCRYPT ACTION ATIME Pri-Cache Sec-Cache
    NAS 28 41.7T/ 0 39.2T - - 0 [ /] 1.00x wait off off - 7798317525941449710 DEGRADED n.a. clear errors - 'NAS': 'NAS':
    rpool - 111G/ 107.2GB 2.32G - - 104G [105G /112G] 1.00x wait off off - 6104832667924772593 ONLINE standard n.a. clear errors off all all
    Info: RAW poolsize does not count redundancy, usable/available size is from zfs list, df -h displays size as a power of 1024 whereas df -H displays as a power of 1000
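As an aside, that power-of-1024 versus power-of-1000 difference is also why the 120 GB rpool SSD appears as 111G: 120 billion vendor bytes expressed in GiB is about 111. A one-line check:

```shell
#!/bin/sh
# 120 GB (vendor, powers of 1000) expressed in GiB (powers of 1024):
echo $(( 120 * 1000 * 1000 * 1000 / (1024 * 1024 * 1024) ))   # prints 111
```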



    zpool status
    pool: NAS
    state: DEGRADED
    status: One or more devices is currently being resilvered. The pool will
    continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
    scan: resilver in progress since Sun Jul 30 13:21:06 2017
    2.93T scanned out of 39.2T at 116M/s, 91h4m to go
    260G resilvered, 7.48% done
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    NAS DEGRADED 1 1 33
    raidz1-0 DEGRADED 2 0 184
    replacing-0 UNAVAIL 0 0 0
    c3t3d0/old FAULTED 0 0 0 corrupted data
    c3t3d0 ONLINE 3 223 0 (resilvering) 2 TB Hitachi HUA72302 S:0 H:0 T:0 YGGYKA5D
    c3t2d0 ONLINE 2 0 0 (resilvering) 2.2 TB TOSHIBA DT01ACA3 S:0 H:0 T:0 Z3GH157GS
    c3t1d0 ONLINE 2 0 0 (resilvering) 1.5 TB SAMSUNG HD154UI - S1XWJ1LSC00468
    c3t0d0 ONLINE 3 0 0 (resilvering) 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00466
    raidz1-1 ONLINE 0 0 65
    c3t6d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI - S1UYJ1KZ309403
    c3t5d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI - S1UYJ1KZ309410
    c3t4d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:1 T:1 S1UYJ1KZ309168
    c3t7d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309407
    raidz1-2 ONLINE 0 0 0
    c3t11d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502120
    c3t10d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI - S2H7J9BB502115
    c3t9d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9AB500578
    c3t8d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI - S2H7J9AB500588
    raidz1-3 ONLINE 0 0 0
    c3t13d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111892
    c3t14d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502119
    c3t15d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI - S2H7J90B111859
    c3t16d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI - S2H7J90B111880
    raidz1-4 ONLINE 2 0 0
    c3t17d0 ONLINE 2 0 0 2.2 TB ST3000DM001-1CH1 - Z1F25D2A
    c3t18d0 ONLINE 2 0 0 2 TB ST2000DM001-1CH1 S:0 H:1 T:1 W1E3YZCH
    c3t19d0 ONLINE 2 0 0 2 TB ST2000DM001-1CH1 S:0 H:0 T:0 W1E419W7
    c3t20d0 ONLINE 2 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111886
    raidz1-5 ONLINE 2 6 0
    c3t21d0 ONLINE 2 6 0 2 TB TOSHIBA DT01ACA2 - X3UJK0ZGS
    c3t22d0 ONLINE 2 6 0 2 TB TOSHIBA DT01ACA2 - X3UJ9X3GS
    c3t23d0 ONLINE 2 6 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJ5AXGS
    c3t24d0 ONLINE 2 6 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJK0DGS

    errors: 19 data errors, use '-v' for a list

    pool: rpool
    state: ONLINE
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    rpool ONLINE 0 0 0
    c2t0d0 ONLINE 0 0 0 120 GB DREVO X1 SSD S:0 H:0 T:0 TA1762600550
     
  27. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
It's been at 7.48% for the last twenty hours; has it hung and is it no longer resilvering? Is there a way to reattach the ten drives that are showing "removed"?
     
  28. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
After the reboot it looks like all the drives are showing online. The only worry currently is that only raidz1-0 and raidz1-1 are showing as resilvering; the rest, from raidz1-2 to raidz1-5, are attached but not doing anything, as per below:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    NAS DEGRADED 0 0 1
    raidz1-0 DEGRADED 0 0 4
    replacing-0 DEGRADED 0 0 0
    c3t3d0/old FAULTED 0 0 0 corrupted data
    c3t3d0 ONLINE 0 0 0 (resilvering) 2 TB Hitachi HUA72302 S:0 H:0 T:0 YGGYKA5D
    c3t2d0 ONLINE 0 0 0 2.2 TB TOSHIBA DT01ACA3 S:0 H:0 T:0 Z3GH157GS
    c3t1d0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00468
    c3t0d0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00466
    raidz1-1 ONLINE 0 0 84
    c3t6d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309403
    c3t5d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309410
    c3t4d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309168
    c3t7d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309407
    raidz1-2 ONLINE 0 0 0
    c3t11d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502120
    c3t10d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502115
    c3t9d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9AB500578
    c3t8d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9AB500588
    raidz1-3 ONLINE 0 0 0
    c3t13d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111892
    c3t14d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502119
    c3t15d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111859
    c3t16d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111880
    raidz1-4 ONLINE 0 0 0
    c3t17d0 ONLINE 0 0 0 2.2 TB ST3000DM001-1CH1 S:0 H:0 T:0 Z1F25D2A
    c3t18d0 ONLINE 0 0 0 2 TB ST2000DM001-1CH1 S:0 H:0 T:0 W1E3YZCH
    c3t19d0 ONLINE 0 0 0 2 TB ST2000DM001-1CH1 S:0 H:0 T:0 W1E419W7
    c3t20d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111886
    raidz1-5 ONLINE 0 0 0
    c3t21d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJK0ZGS
    c3t22d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJ9X3GS
    c3t23d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJ5AXGS
    c3t24d0 ONLINE 0 0 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJK0DGS
     
  29. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
    Resilvering affects one or more disks.
    The other disks of the pool are only read to gather redundancy information for a check/repair.
     
    N Bates likes this.
  30. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
    Does the below mean it's still resilvering or has it hung for whatever reason?

    cannot open 'NAS': pool I/O is currently suspended
    Pool VER RAW SIZE/ USABLE ALLOC RES FRES AVAIL zfs [df -h/df -H] DEDUP FAILM EXP REPL ALT GUID HEALTH SYNC ENCRYPT ACTION ATIME Pri-Cache Sec-Cache
    NAS 28 41.7T/ 0 39.2T - - 0 [ /] 1.00x wait off off - 7798317525941449710 DEGRADED n.a. clear errors - 'NAS': 'NAS':
    rpool - 111G/ 107.2GB 2.32G - - 104G [105G /112G] 1.00x wait off off - 6104832667924772593 ONLINE standard n.a. clear errors off all all
    Info: RAW poolsize does not count redundancy, usable/available size is from zfs list, df -h displays size as a power of 1024 whereas df -H displays as a power of 1000


    All files are stored within a storage pool. A storage pool is composed of vdevs (Virtual Devices). A vdev can either be a single drive,
    or 2 or more drives that are mirrored, or a group of drives that are organized using RAID-Z(1-3).
    The 1,2, or 3 is the number of parity drives needed for that Raid-Z.

    For example, a Raid-Z2 composed of 5 drives would have 2 parity drives and 3 drives for storage.

    A note about Raid-Z:
If you choose to create a Raid-Z vdev, be aware: once the vdev is created you cannot increase its capacity by adding new drives.
If you must increase the capacity of a pool built from a Raid-Z vdev, you have 3 choices.

1. You can add any other vdev type to the existing pool to stripe them (identical vdev types are recommended).
2. You can replace each existing drive within the vdev with a larger drive, set 'autoexpand=on' and perform a resilver.
3. You can back up all your data to a separate location, destroy the pool and recreate it.

    The optimal number of disks in a vdev is a fraction of 128 plus the needed disks for redundancy
    ex. if you want to optimize a vdev and use 4, 8 or 16 data-disks you need:
    for Raid-Z1: 4+1, 8+1 or 16+1 disks
    for Raid-Z2: 4+2, 8+2 or 16+2 disks
    for Raid Z3: 4+3, 8+3 or 16+3 disks

These values are best for an optimally balanced pool. But don't forget: ZFS is designed to do its best
with any pool layout and with growing pools. Usually just add as many disks as you need, whether the
number is optimal or the disks are 4k or not. The only basic thing you should be aware of:

If you need performance: use a pool built from mirrored vdevs; the more, the faster.
If you need capacity: use a pool built from Raid-Z(1-3).

Click on a pool for details about its state and its vdevs.
    see also en.wikipedia.org/wiki/ZFS#Storage_pools
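Expansion option 2 from the list above can be sketched as a dry run that only prints the commands for review before anything is executed; the pool name and disk ids below are hypothetical:

```shell
#!/bin/sh
# Dry-run sketch of expansion option 2: replace every disk of a vdev with
# a larger one, with autoexpand on. This prints the commands instead of
# running them; disk ids are made up for illustration.
plan_expand() {
  pool=$1; shift
  echo "zpool set autoexpand=on $pool"
  for pair in "$@"; do            # each argument is old:new
    old=${pair%%:*}; new=${pair##*:}
    echo "zpool replace $pool $old $new   # wait for resilver before the next one"
  done
}

plan_expand NAS c3t0d0:c3t25d0 c3t1d0:c3t26d0
```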


    zpool status
    pool: NAS
    state: DEGRADED
    status: One or more devices is currently being resilvered. The pool will
    continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
    scan: resilver in progress since Mon Jul 31 23:59:50 2017
    9.92T scanned out of 39.2T at 248M/s, 34h23m to go
    866G resilvered, 25.30% done
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    NAS DEGRADED 0 6 28
    raidz1-0 DEGRADED 0 2 109
    replacing-0 DEGRADED 0 2 0
    c3t3d0/old FAULTED 0 0 0 corrupted data
    c3t3d0 ONLINE 3 25 0 (resilvering) 2 TB Hitachi HUA72302 S:0 H:0 T:0 YGGYKA5D
    c3t2d0 ONLINE 3 2 0 (resilvering) 2.2 TB TOSHIBA DT01ACA3 S:0 H:0 T:0 Z3GH157GS
    c3t1d0 ONLINE 0 0 0 (resilvering) 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00468
    c3t0d0 ONLINE 0 0 0 (resilvering) 1.5 TB SAMSUNG HD154UI - S1XWJ1LSC00466
    raidz1-1 ONLINE 0 2 27
    c3t6d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309403
    c3t5d0 ONLINE 0 0 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309410
    c3t4d0 ONLINE 3 2 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309168
    c3t7d0 ONLINE 3 2 0 (resilvering) 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309407
    raidz1-2 ONLINE 0 0 0
    c3t11d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502120
    c3t10d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502115
    c3t9d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9AB500578
    c3t8d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9AB500588
    raidz1-3 ONLINE 0 0 0
    c3t13d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111892
    c3t14d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J9BB502119
    c3t15d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111859
    c3t16d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111880
    raidz1-4 ONLINE 0 0 0
    c3t17d0 ONLINE 0 0 0 2.2 TB ST3000DM001-1CH1 S:0 H:0 T:0 Z1F25D2A
    c3t18d0 ONLINE 0 0 0 2 TB ST2000DM001-1CH1 S:0 H:0 T:0 W1E3YZCH
    c3t19d0 ONLINE 0 0 0 2 TB ST2000DM001-1CH1 S:0 H:0 T:0 W1E419W7
    c3t20d0 ONLINE 0 0 0 2 TB SAMSUNG HD204UI S:0 H:0 T:0 S2H7J90B111886
    raidz1-5 ONLINE 0 32 0
    c3t21d0 ONLINE 6 35 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJK0ZGS
    c3t22d0 ONLINE 6 35 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJ9X3GS
    c3t23d0 ONLINE 0 33 0 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJ5AXGS
    c3t24d0 ONLINE 0 33 11 2 TB TOSHIBA DT01ACA2 S:0 H:0 T:0 X3UJK0DGS

    errors: 31 data errors, use '-v' for a list

    pool: rpool
    state: ONLINE
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    rpool ONLINE 0 0 0
    c2t0d0 ONLINE 0 0 0 120 GB DREVO X1 SSD S:0 H:0 T:0 TA1762600550

    errors: No known data errors
     
  31. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
As long as the done value increases, let it resilver;
if it hangs, reboot and try to continue the resilver.
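One way to see whether the done value is still increasing is to pull the percentage out of the zpool status output and compare two readings a few minutes apart. A small sketch (the helper name is made up; it is demonstrated against a captured line so it runs anywhere):

```shell
#!/bin/sh
# Extract the percent-done figure from `zpool status` output. Run it twice
# some minutes apart - if the number does not move, the resilver is stuck.
pct_done() {
  awk '/% done/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) { sub(/%$/, "", $i); print $i } }'
}

# demo against a captured line; on the live box:  zpool status NAS | pct_done
echo "260G resilvered, 7.48% done" | pct_done
```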

If the replace of the faulted disk is finished but other disks remain "resilvering",
you can try a Pool > clear errors.

As it seems the pool is not mounted, check whether there is a regular folder /NAS.
If this is the case and the folder is empty, delete it and retry the mount.
http://docs.oracle.com/cd/E19253-01/819-5461/gaynd/index.html
     
  32. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
    Thank you for all your help _Gea.
     
  33. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
Is the below normal, or should I have just one line? The pool is called NAS and there is a folder called NAS_Media. Should the first line with NAS (pool) be there, or was it created when I installed OmniOS and changed "unknown" to NAS?

    ZFS (all properties) SMB NFS RSYNC FC,IB,iSCSI NBMAND REC AVAILABLE USED RES RFRES QUO RFQU SYNC COMPR DEDUP CRYPT FOLDER-ACL SHARE-ACL PERM RDONLY

    NAS (pool)- - - - off 128K 903G [3%] 29.3T none none none none standard off off n.a. default ACL - 755 off
    NAS/NAS_Media NAS_Media, guestok off off zfs unset on 128K 903G 29.3T none none none none standard off off n.a. default ACL - 755 off

    on problems with buffering=on, you can reload list with menu ZFS folder - reload
    Size example 1T means 1 TiB
     
  34. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
I think what happened is that when I changed OS, OmniOS could initially see only 2 vdevs, raidz1-0 and raidz1-1, and on OmniOS/napp-it the resilvering started on those 2 vdevs. I then reconnected Nexenta and exported; after reconnecting OmniOS I could see all 6 vdevs. I wonder whether the files under NAS are from the 2 vdevs that OmniOS could see originally, while the other 4 vdevs are showing under the proper subdirectory NAS\NAS_Media. How do I fix this now? At the console when I start OmniOS, I can see the below:

cannot mount '/NAS': directory is not empty
cannot mount 'NAS/NAS_MEDIA': I/O error
svc:/system/filesystem/local:default: warning: /usr/sbin/zfs mount -a failed: exit status 1
Aug 2 11:06:36 svc.startd[10]: svc:/system/filesystem/local:default: method "/lib/svc/method/fs-local" failed with exit status 95.
Aug 2 11:06:36 svc.startd[10]: system/filesystem/local:default failed fatally, transitioned to maintenance (see 'svcs -xv' for details)

I also have to restart napp-it every time I reboot. I am not sure how to fix this and get my data back.

I cannot access my files from my Windows 10 computer to check whether there is an empty NAS folder; is there a way to do this at the console?
     
  35. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
NAS is your pool and NAS/NAS_MEDIA is a ZFS filesystem below it.
As long as you cannot mount the pool, you cannot mount the filesystems below it,
you cannot access your data, and napp-it will have problems on boot.

Have you checked for a regular folder /NAS that hinders the mount? Delete it if it is empty.
You can use midnight commander at the console (mc) or WinSCP from Windows to browse your filer.

If you want to use WinSCP (a freeware tool), enable SSH with root allowed (menu Services) and connect from Windows via WinSCP as user root.
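For the console-minded, a hedged sketch of checking the relevant sshd setting (the helper name is made up, and the demo runs on a scratch file; on the real box you would point it at /etc/ssh/sshd_config, and restart the ssh service after any change):

```shell
#!/bin/sh
# Check whether an sshd config file permits root logins - the setting the
# napp-it Services menu toggles when enabling "SSH with root allowed".
root_login_allowed() {
  grep -iq '^[[:space:]]*PermitRootLogin[[:space:]][[:space:]]*yes' "$1"
}

# demo on a sample file so this is harmless to run anywhere
sample=$(mktemp)
printf 'Port 22\nPermitRootLogin yes\n' > "$sample"
if root_login_allowed "$sample"; then
  echo "root SSH login allowed"
else
  echo "root SSH login blocked"
fi
```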
     
  36. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
    I have this under winscp:

    http://imgur.com/a/S8ThR

The NAS under the mnt folder is empty, and so is the one below it with NAS_Media underneath it; that is where my files reside. Is it the folder under mnt that I need to delete?
Thanks for all your help _Gea.
     
  37. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
ZFS mounts a pool NAS under /NAS per default, not under /mnt, so /mnt/NAS is irrelevant (as long as you have not set it as the mountpoint - and napp-it wants the default mountpoint /NAS). What is the content of /NAS?

Is it empty besides an empty subfolder /NAS/NAS_MEDIA?
If so, delete /NAS, as this hinders your pool from being mounted there (with the data then under /NAS).
     
  38. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017
My data should be under a folder called NAS and then another folder called Nas_Media; if I delete the folder NAS, this will delete the subfolder Nas_Media as well, wouldn't it?

The content of /NAS/Nas_Media is where my data resides. I remember when I installed OmniOS for the first time it asked whether to change the name "unknown" and I changed that to NAS; maybe this is where the problem lies, but I don't know which folder to delete.
     
  39. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
The difference between an ordinary folder, a ZFS pool or filesystem, and a mountpoint can be confusing at the beginning. If you create a ZFS pool or filesystem it is not necessarily visible in the filesystem. You can set a mountpoint as a ZFS property (per default /poolname) where it becomes visible after a mount command like mount -a (all) that is executed after bootup.

Per default a pool cannot be mounted under /poolname if there is an ordinary folder named /poolname. This seems to be the case for you, as you have an error message that the pool NAS cannot be mounted because there is a nonempty folder /NAS.

You should now delete /NAS and the subfolder /NAS/NAS_MEDIA !! BUT ONLY !! if they are ordinary folders (empty, no content), to allow ZFS to mount its filesystems there.

If you are unsure, export the pool NAS. If /NAS and /NAS/NAS_MEDIA are only mountpoints they will disappear. If the folders remain, they are ordinary folders that must be deleted to allow ZFS to mount there.
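That export test can be sketched at the console; the helper name is made up, and the demo runs on scratch paths (on the real box the sequence would be: zpool export NAS, then classify /NAS, then zpool import NAS):

```shell
#!/bin/sh
# After `zpool export NAS`: a mountpoint disappears with the pool, while
# an ordinary folder stays behind. Classify what is left at a given path.
classify() {
  d=$1
  if [ ! -e "$d" ]; then
    echo "gone: $d was only a mountpoint"
  elif [ -d "$d" ] && [ -z "$(ls -A "$d")" ]; then
    echo "empty ordinary folder: safe to rmdir $d"
  else
    echo "non-empty folder: inspect $d before deleting anything"
  fi
}

# demo on scratch paths instead of the real /NAS
classify /no/such/path
d=$(mktemp -d)
classify "$d"
```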
     
  40. N Bates

    N Bates n00bie

    Messages:
    50
    Joined:
    Jul 15, 2017