
How to upgrade from Nexenta to OmniOS?

Discussion in 'SSDs & Data Storage' started by N Bates, Jul 15, 2017.

  1. N Bates

    Hi all,

    Really strange: I couldn't log in on the HardForum site, my user name was not recognised and I had to re-register. Is there a way to change my current name back to my old user name?
    Anyway, that's not why I am here. I am currently still running Nexenta on my home NAS, booting from an old internal HDD, and I have decided to upgrade to OmniOS with napp-it.

    My current NAS is as per below:

    nexenta appliance v. 0.500r nightly Jun.27.2011

    pool: NAS
    state: ONLINE
    scan: resilvered 0 in 0h0m with 0 errors on Fri Jul 7 03:53:54 2017
    config:

    NAME STATE READ WRITE CKSUM
    NAS ONLINE 0 0 0
      raidz1-0 ONLINE 0 0 0
        c2t3d0 ONLINE 0 0 0
        c2t2d0 ONLINE 0 0 0
        c2t1d0 ONLINE 0 0 0
        c2t0d0 ONLINE 0 0 0
      raidz1-1 ONLINE 0 0 0
        c2t6d0 ONLINE 0 0 0
        c2t5d0 ONLINE 0 0 0
        c2t4d0 ONLINE 0 0 0
        c2t7d0 ONLINE 0 0 0
      raidz1-2 ONLINE 0 0 0
        c2t11d0 ONLINE 0 0 0
        c2t10d0 ONLINE 0 0 0
        c2t9d0 ONLINE 0 0 0
        c2t8d0 ONLINE 0 0 0
      raidz1-3 ONLINE 0 0 0
        c2t13d0 ONLINE 0 0 0
        c2t14d0 ONLINE 0 0 0
        c2t15d0 ONLINE 0 0 0
        c2t16d0 ONLINE 0 0 0
      raidz1-4 ONLINE 0 0 0
        c2t17d0 ONLINE 0 0 0
        c2t18d0 ONLINE 0 0 0
        c2t19d0 ONLINE 0 0 0
        c2t20d0 ONLINE 0 0 0
      raidz1-5 ONLINE 0 0 0
        c2t21d0 ONLINE 0 0 0
        c2t22d0 ONLINE 0 0 0
        c2t23d0 ONLINE 0 0 0
        c2t24d0 ONLINE 0 0 0

    errors: No known data errors

    pool: syspool
    state: ONLINE
    status: The pool is formatted using an older on-disk format. The pool can
    still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
    pool will no longer be accessible on older software versions.
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM
    syspool ONLINE 0 0 0
      c1t0d0s0 ONLINE 0 0 0

    errors: No known data errors

    The system pool is formatted with an older on-disk version than the data pool itself: version 26 vs. version 28.

    Can I just go ahead and upgrade from my current 60 GB internal HDD to a 120 GB SSD? And should I use OmniOS or OI?

    My upgrade steps are as below (the rough zpool equivalents are sketched after the list); is this correct and safe?

    1) Log in to napp-it
    2) Export the pool via "export pool NAS" in napp-it
    3) Remove the current internal HDD
    4) Install the OS image, either OmniOS or OI (can this be prepared on a Windows machine first, or do I have to install on the NAS server itself?)
    5) Install the 120 GB SSD in the NAS
    6) Boot up into napp-it
    7) Import the pool via "import pool NAS"
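
    I believe steps 2 and 7 correspond roughly to these zpool commands (NAS being my pool name):

        # on the old Nexenta install, before swapping the boot disk:
        zpool export NAS
        # on the freshly installed OmniOS/OI system:
        zpool import          # list all importable pools found on the disks
        zpool import NAS      # import the pool by name
        zpool import -f NAS   # forced import, if the pool was never exported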

    Thanks for all your help
     
  2. _Gea

    The pool export should be done in NexentaStor,
    but this is not essential, as you can import a pool without a prior export, or even after a pool destroy, as long as you have all disks.

    Then install Solaris, OI or OmniOS from DVD/CD or USB installer stick onto the NAS, see
    http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf

    Then log in to napp-it and import the pool.
    Check whether the mountpoint is /pool; under Nexenta it was /volumes/pool.
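
    At the console this corresponds roughly to the following (pool name NAS assumed; the target mountpoint /NAS is only an example):

        zpool import NAS              # or: zpool import -f NAS, if there was no prior export
        zfs get mountpoint NAS        # Nexenta used /volumes/NAS
        zfs set mountpoint=/NAS NAS   # only needed if you want to move the mountpoint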
     
  3. N Bates


    Thank you Gea. Are Solaris and OI minimal server NAS OSs, or do they come with a desktop and get treated like a general PC OS rather than a minimal NAS/server OS?
     
  4. _Gea

    Oracle Solaris is a general-use enterprise Unix, targeting mainly top-500 enterprises with cloud or very large data usages. You can install Solaris as a minimal server-only version or as a GUI version with a desktop for easier local management.

    OpenIndiana is based on the free Solaris fork Illumos.
    Like Solaris, it is available as a minimal server edition and as a GUI edition with the MATE desktop.

    OmniOS is a very minimalistic server-only distribution, also based on Illumos, similar to the OpenIndiana minimal/text edition.

    None of these are NAS distributions; they are general-use Unix server distributions.
    As ZFS originated in Solaris, its integration into the OS and into services like iSCSI, NFS or SMB is mostly superior. This is why Solaris and its forks are best suited for a NAS or SAN. A pure NAS distribution of Illumos would be the commercial NexentaStor.

    napp-it is a web-based add-on application to manage the system and the storage-related features of Solaris, OpenIndiana and OmniOS (or Linux, but with a reduced feature set), similar to a pure NAS distribution. From a user's view, Solaris/Illumos + napp-it behaves like a dedicated NAS distribution.
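
    As a small illustration of that integration: on Solaris/Illumos, SMB sharing is a ZFS property served by the in-kernel SMB server, so a filesystem can be shared without a separate Samba configuration (the filesystem NAS/media below is just a placeholder):

        zfs create NAS/media                    # a filesystem on the pool
        zfs set sharesmb=name=media NAS/media   # share it as \\server\media via the kernel SMB server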
     
  5. N Bates

    The only question now is which OS is best for a media NAS server storing movie files from DVDs and Blu-rays, Solaris or OmniOS? I will be using SMB to share.

    I forgot to add: which of the two has the broadest hardware driver support for old and new systems?
     
  6. N Bates

    Which is the right image to download for the latest OmniOS, and which napp-it ToGo barebone file?
     
  7. _Gea

    The ESXi template can be downloaded from
    http://napp-it.org/downloads/napp-in-one_en.html or
    http://openzfs.hfg-gmuend.de/ as a mirror.

    The SATA template for an Intel S3510-80 can be found at the same location,
    but this is more of a sample. Cloning a disk image is more for distributors,
    as you always need the identical disk.

    Usually you do a regular barebone OS setup of OI, OmniOS or Solaris
    and add napp-it via the online wget installer.
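
    The online installer is a one-liner run as root on the freshly installed OS (the command as documented in the napp-it setup PDF linked above; check there for the current form):

        wget -O - www.napp-it.org/nappit | perl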
     
  8. AveryFreeman

    The OmniOS developers just recently abandoned the project. I was looking into it myself because I like its features, and it sounds like a solid OS, being descended from OpenSolaris, but it's dead.
     
  9. _Gea

    There is a continuation of OmniOS as a community distribution, currently already at its second update. Behind the community project are some firms that use OmniOS internally, among them one from ETH Zürich; see http://www.omniosce.org/ or https://gitter.im/omniosorg/Lobby

    btw
    OmniOS is not a direct descendant of OpenSolaris.
    OpenSolaris was forked into the Illumos project, where firms like Delphix, Joyent (a Samsung company), Nexenta and others combined their efforts to continue a free Solaris, either as a community distribution or as one with a commercial background. Some commercial distributions like OmniOS or SmartOS are free; others like Nexenta are not, or only with restrictions. Until now, OpenIndiana was the main community project.

    It's the commercial support option for OmniOS at OmniTI that is no longer available. Beside that, if you want an option, OpenIndiana is a nearly identical sister project with a different focus, including general use with a desktop and a repository with many services. The focus of OmniOS is a very stable and minimalistic "just enough" ZFS storage server approach for iSCSI, NFS and SMB.
     
  10. N Bates

    Thank you for the great info, Gea. I have installed the community distribution of OmniOS and imported the pool, but now I cannot see all of the vdevs. I had 5 vdevs of 4 disks each, and only the output below is showing; it also looks like I have a failed disk. How do I determine which disk has failed?

    Pool VER RAW SIZE/ USABLE ALLOC RES FRES AVAIL zfs [df -h/df -H] DEDUP FAILM EXP REPL ALT GUID HEALTH SYNC ENCRYPT ACTION ATIME Pri-Cache Sec-Cache
    NAS 6 12.7T/ 9.2TB 252K - - 9.19T [9.2T /11T] 1.00x wait off off - 12436642361445972779 DEGRADED standard n.a. clear errors - all all
    rpool - 111G/ 107.2GB 2.32G - - 104G [105G /112G] 1.00x wait off off - 6104832667924772593 ONLINE standard n.a. clear errors off all all
    Info: RAW pool size does not count redundancy; usable/available size is from zfs list. df -h displays size as a power of 1024, whereas df -H displays it as a power of 1000.



    zpool status
    pool: NAS
    state: DEGRADED
    status: One or more devices could not be used because the label is missing or
    invalid. Sufficient replicas exist for the pool to continue
    functioning in a degraded state.
    action: Replace the device using 'zpool replace'.
    see: http://illumos.org/msg/ZFS-8000-4J
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    NAS DEGRADED 0 0 0
      raidz1-0 DEGRADED 0 0 0
        c3t3d0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00467
        5503366839276267646 UNAVAIL 0 0 0 was /dev/ad6
        c3t1d0p0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00468
        c3t0d0p0 ONLINE 0 0 0 1.5 TB SAMSUNG HD154UI S:0 H:0 T:0 S1XWJ1LSC00466
      raidz1-1 ONLINE 0 0 0
        c3t6d0p0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309403
        c3t5d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309410
        c3t4d0p0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309168
        c3t7d0 ONLINE 0 0 0 2 TB SAMSUNG HD203WI S:0 H:0 T:0 S1UYJ1KZ309407

    errors: No known data errors

    pool: rpool
    state: ONLINE
    scan: none requested
    config:

    NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess SN/LUN
    rpool ONLINE 0 0 0
      c2t0d0 ONLINE 0 0 0 120 GB DREVO X1 SSD S:0 H:0 T:0 TA1762600550

    errors: No known data errors


    id part identify stat diskcap partcap error vendor product sn
    c2t0d0 (!parted) via dd ok 120 GB S:0 H:0 T:0 ATA DREVO X1 SSD TA1762600550
    c3t0d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00466
    c3t10d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502115
    c3t11d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502120
    c3t13d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111892
    c3t14d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502119
    c3t15d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111859
    c3t16d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111880
    c3t17d0 (!parted) via dd ok 2.2 TB S:0 H:0 T:0 ATA ST3000DM001-1CH1 Z1F25D2A
    c3t18d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA ST2000DM001-1CH1 W1E3YZCH
    c3t19d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA ST2000DM001-1CH1 W1E419W7
    c3t1d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00468
    c3t20d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111886
    c3t21d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJK0ZGS
    c3t22d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJ9X3GS
    c3t23d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJ5AXGS
    c3t24d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJK0DGS
    c3t2d0 (!parted) via dd ok 2.2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA3 Z3GH157GS
    c3t3d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00467
    c3t4d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309168
    c3t5d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309410
    c3t6d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309403
    c3t7d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309407
    c3t8d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9AB500588
    c3t9d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9AB500578

    This page is updated in realtime when using the monitor extension - otherwise you must reload manually.
    On errors, check menu disks - details - diskinfo for details.

    If new disks are missing, you need to initialize the disks, use menu disks - initialize

    Thanks for all your help.
     
  11. _Gea

    As the disk is completely missing, you lack the information about its serial number or controller port. If you have some sort of disk map with all disks, you can check for the one that is missing.

    A simple method would be to read from or write to the pool and check for the one disk whose activity LED is not flashing.
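
    A sketch of both approaches on Illumos (the device name is taken from the listing above; reading a few GB from a known disk makes its LED flash, so the missing disk is the one you can never make blink):

        iostat -En   # list all recognised disks with vendor, product and serial number
        # map a controller port to a physical disk by forcing activity on it:
        dd if=/dev/rdsk/c3t3d0p0 of=/dev/null bs=1M count=2048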

    btw
    If you have any chance for a backup, re-create the pool with Z2 vdevs. You have too many disks for Z1, where a second disk failure within a Z1 vdev means the whole pool is lost - especially as your disks seem quite old.

    Maybe a pool built from mirrors of modern 8-12 TB disks could replace the whole thing.
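
    A rough sketch of both suggested layouts (pool and device names are placeholders; zpool create destroys whatever is on the given disks):

        # one raidz2 vdev of six disks: any two disks may fail
        zpool create NAS2 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
        # or a pool of mirrors built from a few large modern disks:
        zpool create NAS2 mirror c4t0d0 c4t1d0 mirror c4t2d0 c4t3d0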
     
  12. N Bates

    That is strange; the disk is not completely missing, all disks are physically there and attached. I agree, my drives are old and I need to upgrade to newer, higher-capacity drives. I know Z1 is a bad idea; I set this up some time ago and know better now. When funds allow, I will get newer drives and back up the server.

    Is it right, though, that 3 of the raidz1 vdevs are not showing at all? Last time I had a drive failure, all drives showed online apart from the one that failed - that was in napp-it 5.0, though.

    Thanks for all your help.
     
  13. N Bates

    Something strange I have noticed: should the pool version be showing as 6? On Nexenta I was on version 28. Is this what is causing the problem, and how can I upgrade to version 28?
     
  14. _Gea

    ZFS version 6 with pool version 37 is Oracle Solaris 11.3.
    All Open-ZFS platforms are currently on ZFS v5 with pool version 5000 and feature flags.
    A pool update can be done via zpool, or in the napp-it menu Pools when you click on the old pool version 26 or 28.

    If all disks are shown under menu Disks but missing under Pools, you have an enumeration problem. This happens with port-based detection like c1t1d0 when the controller number has changed and ZFS expects the disk on a different controller (the newer WWN detection does not have this problem).

    To solve this, do a pool export + pool import, as ZFS then re-reads all disks.
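
    The corresponding console commands (pool name NAS assumed; note that a zpool upgrade is one-way - older systems can no longer read the pool afterwards):

        zpool upgrade -v    # show the versions/features this system supports
        zpool upgrade NAS   # upgrade the pool to the current version
        # cure an enumeration problem by letting ZFS re-read all disks:
        zpool export NAS
        zpool import NAS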
     
  15. N Bates

    Thank you Gea. I have tried to export the pool and then import it again. I imported one pool and tried to import the second, and it says that the pool "NAS" has already been imported and I should choose another name. Do I change the name? Originally I only had one pool, not two.
    Many thanks.
     
  16. _Gea

    During the import command, ZFS reads all disks for ZFS labels. It seems that you have disks with more than one label. This can happen if you reuse disks without a prior proper pool destroy or a complete re-initialisation.

    This is not critical, as you cannot import such ghost pools anyway - you do not have all of their disks. With more than one pool available on import, only the "last and correct" one can be imported.

    As you have imported your pool, check menu Pools for validity.
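
    When several pools share a name, zpool import lists each candidate with a numeric id, and you can import the right one by id instead of by name (the id below is the pool GUID from the napp-it overview earlier, used purely as an example):

        zpool import                        # list every importable pool with name, id and state
        zpool import 12436642361445972779   # import exactly this candidate by its id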
     
  17. N Bates

    Really weird, I can't even connect to napp-it now. Below is what I have done from start to finish:

    Initially I was on Nexenta and napp-it 05.00r:

    (Same "zpool status" output as in my first post above: pool NAS with all raidz1 vdevs ONLINE, and syspool on the older on-disk format, version 26 vs. 28.)


    All was OK on Nexenta. I upgraded the system pool from version 26 to version 28 using napp-it; the pool upgraded fine without issues.

    I took out the 60 GB 2.5" internal HDD and connected the 120 GB SSD.

    Installed OmniOS CE and napp-it.

    Imported the pool with napp-it; however, once imported I could only see 2 raidz vdevs where originally I had 5.

    I could see an error on one of the disks, as below:

    (Same degraded "zpool status" output as shown earlier: NAS DEGRADED, raidz1-0 with disk 5503366839276267646 UNAVAIL, raidz1-1 ONLINE, rpool ONLINE.)


    So I identified the degraded disk and swapped it for a new disk with napp-it's "replace" function.

    The system resilvered; however, I could still see only 2 raidz vdevs instead of 5.
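
    As I understand it, the napp-it replace corresponds to a zpool replace using the GUID that zpool status showed for the missing disk (the new device name below is a placeholder):

        zpool replace NAS 5503366839276267646 c3t2d0   # old disk by GUID, new disk by device name
        zpool status NAS                               # watch the resilver progress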

    I exported again in napp-it and imported again; the system only imported the 2 raidz vdevs again and the other 3 were still missing, although I could see all disks in napp-it:

    (Same disk listing as shown earlier: the boot SSD plus all of the c3t... data disks detected, with capacities and serial numbers.)

    When trying to import the three missing raidz vdevs, the system reported that the pool I was trying to import had the same name as the already imported one and that I should choose another name.

    Now I can't reach the server with napp-it at all.
     
  18. _Gea

    Let's recapitulate your current state.
    You have imported a pool NAS successfully with 3 x Z1 vdevs. The pool is reported degraded (one disk missing) but working, as this is within the redundancy of Z1. All disks are discovered and shown under Disks. Can you read valid data from the pool?

    Your former pool NAS was built from 5 x Z1 vdevs, and you saw several pools named NAS on your first import attempt.
    This leads to the assumption that you have not imported that pool, but maybe a prior state with 3 vdevs. If 2 vdevs out of 5 were really missing, your pool state would have been unavailable due to the missing vdevs, coming online again only when the missing vdevs return.

    It's hard to tell if you can fix that, as you have already done some replacements. Is there more than one option available on pool import? Had you exported the pool in NexentaCore, or only tried an import on OmniOS? Can you retry the old NexentaCore install and import the pool there again? If this works and you had not exported the pool in NexentaCore, you can try to export it and then import again in OmniOS; otherwise back up the data first.

    If you cannot import the proper state in OmniOS and cannot import again in NexentaCore, I suppose the pool is lost.

    btw
    You cannot import a vdev, only a pool with all the vdevs it is built from. On current ZFS versions I have not seen problems with different but same-named labels, for example when re-using disks, but ZFS v26 is a very old state.

    If napp-it is hanging, this is mostly due to a hanging zpool or format command, as these commands are called from within napp-it. Try these commands at the console and/or restart napp-it at the console via /etc/init.d/napp-it restart
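
    A minimal console check along those lines (format reads the disk list and will hang if a disk no longer answers; the restart path is napp-it's standard init script):

        zpool status                  # does it return, or does it hang?
        format </dev/null             # list all disks non-interactively; a hang points to a dead disk
        /etc/init.d/napp-it restart   # restart the napp-it web service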