How to upgrade from Nexenta to OmniOS?

Discussion in 'SSDs & Data Storage' started by N Bates, Jul 15, 2017.

  1. N Bates

    N Bates n00bie

    Messages:
    49
    Joined:
    Jul 15, 2017
    sorry for the photo, basically I have the below:

    name used avail refer mountpoint
    NAS 29.3T 903G 44.9K /NAS
    NAS@06.07.2017_04:39:25 0 - 44.9K -
    NAS/NAS_Media 29.3T 903G 29.3T /NAS/NAS_Media
    NAS/NAS_Media@06.07.2017_04:39:25 779K - 29.3T -

    it looks like under mountpoints there are /NAS and /NAS/NAS_Media.
     
    Last edited: Aug 2, 2017
  2. _Gea

    _Gea 2[H]4U

    Messages:
    3,636
    Joined:
    Dec 5, 2010
    The question remains:
    - did you see your data with WinSCP in /NAS/NAS_MEDIA?
    - or via an SMB share, when you enable SMB for NAS_MEDIA?
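    Enabling SMB sharing on OmniOS is a one-property operation (napp-it's ZFS Filesystems menu does the same thing). A minimal sketch as root, assuming the dataset and share names from this thread:

    ```shell
    # Sketch: publish the data filesystem over the illumos kernel SMB server.
    # Dataset/share names are assumptions taken from this thread; run as root.
    svcadm enable -r network/smb/server           # make sure the SMB service is online
    zfs set sharesmb=name=NAS_Media NAS/NAS_Media # share the filesystem as "NAS_Media"
    zfs get sharesmb NAS/NAS_Media                # verify the property took effect
    ```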
     
  3. N Bates

    Under WinSCP I cannot see anything under root/NAS/NAS_Media. I used to be able to access my data under Nexenta; however, I haven't yet enabled SMB under OmniOS. I am just going to see how to do that and try to enable SMB.
     
  4. N Bates

    I can see that the rpool is version 5000 while the NAS pool is version 28. Does this matter? As per below:

    Pool Version Pool GUID Vdev Ashift Asize Vdev GUID Disk Disk-GUID Cap Product/ Phys_Path/ Dev_Id/ Sn

    NAS 28 7798317525941449710 vdevs: 6
    vdev 1: raidz1 9 6.00 TB 6950219439857102803
    old 17038718821738473322
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@3,0:a


    c3t3d0 9882927582874716959 Hitachi HUA72302
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@3,0:a
    id1,sd@n5000cca224cd6fcf/a
    YGGYKA5D
    c3t2d0 7081115906011607086 TOSHIBA DT01ACA3
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@2,0:a
    id1,sd@n5000039ff4d4ed75/a
    Z3GH157GS
    c3t1d0 10467553274682141268 SAMSUNG HD154UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@1,0:a
    id1,sd@n50024e9002917026/a
    S1XWJ1LSC00468
    c3t0d0 9967460359315372551 SAMSUNG HD154UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@0,0:a
    id1,sd@n50024e900291701d/a
    S1XWJ1LSC00466
    vdev 2: raidz1 9 8.00 TB 15224434900887650974
    c3t6d0 2718933760062897973 SAMSUNG HD203WI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@6,0:a
    id1,sd@n50024e9003287e4d/a
    S1UYJ1KZ309403
    c3t5d0 626895432252130999 SAMSUNG HD203WI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@5,0:a
    id1,sd@n50024e9003287e7e/a
    S1UYJ1KZ309410
    c3t4d0 6328722537480642549 SAMSUNG HD203WI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@4,0:a
    id1,sd@n50024e9003281f9e/a
    S1UYJ1KZ309168
    c3t7d0 6281597696111527768 SAMSUNG HD203WI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@7,0:a
    id1,sd@n50024e9003287e62/a
    S1UYJ1KZ309407
    vdev 3: raidz1 9 8.00 TB 8280806386717594142
    c3t11d0 8226101300518590913 SAMSUNG HD204UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@b,0:a
    id1,sd@n50024e92051ee25b/a
    S2H7J9BB502120
    c3t10d0 2712526074840391349 SAMSUNG HD204UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@a,0:a
    id1,sd@n50024e92051ee222/a
    S2H7J9BB502115
    c3t9d0 11678664609715971279 SAMSUNG HD204UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@9,0:a
    id1,sd@n50024e92051f4f72/a
    S2H7J9AB500578
    c3t8d0 1972305866490771301 SAMSUNG HD204UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@8,0:a
    id1,sd@n50024e92051f5060/a
    S2H7J9AB500588
    vdev 4: raidz1 9 8.00 TB 4836090754279335689
    c3t13d0 480591903005358609 SAMSUNG HD204UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@d,0:a
    id1,sd@n50024e9204529cc0/a
    S2H7J90B111892
    c3t14d0 10123417681632744638 SAMSUNG HD204UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@e,0:a
    id1,sd@n50024e92051ee249/a
    S2H7J9BB502119
    c3t15d0 16165275705694604904 SAMSUNG HD204UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@f,0:a
    id1,sd@n50024e9204529c9d/a
    S2H7J90B111859
    c3t16d0 4837743188255832981 SAMSUNG HD204UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@10,0:a
    id1,sd@n50024e9204529cb4/a
    S2H7J90B111880
    vdev 5: raidz1 9 8.00 TB 14809020084613416570
    c3t17d0 8295336206426961396 ST3000DM001-1CH1
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@11,0:a
    id1,sd@n5000c5004f809e45/a
    Z1F25D2A
    c3t18d0 15093684025079135704 ST2000DM001-1CH1
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@12,0:a
    id1,sd@n5000c5006054d24f/a
    W1E3YZCH
    c3t19d0 6312253771393921480 ST2000DM001-1CH1
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@13,0:a
    id1,sd@n5000c50060649a7a/a
    W1E419W7
    c3t20d0 1629898401884167417 SAMSUNG HD204UI
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@14,0:a
    id1,sd@n50024e9204529cba/a
    S2H7J90B111886
    vdev 6: raidz1 9 8.00 TB 2122223289305753430
    c3t21d0 16627056423915713763 TOSHIBA DT01ACA2
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@15,0:a
    id1,sd@n5000039ff3e3b4e1/a
    X3UJK0ZGS
    c3t22d0 12575150286689658291 TOSHIBA DT01ACA2
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@16,0:a
    id1,sd@n5000039ff3e39a22/a
    X3UJ9X3GS
    c3t23d0 17284241658882970112 TOSHIBA DT01ACA2
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@17,0:a
    id1,sd@n5000039ff3e38909/a
    X3UJ5AXGS
    c3t24d0 10749613430234994158 TOSHIBA DT01ACA2
    /pci@0,0/pci10de,5d@e/pci103c,3229@0/sd@18,0:a
    id1,sd@n5000039ff3e3b4cf/a
    X3UJK0DGS

    rpool 5000 6104832667924772593 vdevs: 1
    vdev 1: disk 9 120.02 GB 3076587348466849758
    c2t0d0 DREVO X1 SSD
    /pci@0,0/pci147b,1c12@7/disk@0,0:a
    id1,sd@ADREVO_X1_SSD=TA1762600550/a
    TA1762600550
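    The version mismatch above is expected and harmless by itself: rpool shows 5000 (the feature-flags version) while the old pool stayed at legacy v28, which every newer OpenZFS platform can still import. A sketch for checking this directly (pool names from the thread):

    ```shell
    # Compare on-disk pool versions; run as root.
    zpool get version NAS rpool  # a feature-flag pool reports "-" (v5000), legacy pools a number
    zpool upgrade                # lists pools still formatted with an older version
    ```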
     
  5. N Bates

    I have exported the pool, then went into WinSCP, and the folder /NAS/NAS_Media was still there, so I deleted it. I turned the server back on and tried to import the pool, and found that there were two pools named NAS: one with a number starting 120....... and one starting 724...... I remembered that the pool I had exported started with a 7, so I chose that pool to import. I got an error that the pool could not be imported because of an I/O error, so I turned the server off and it rebooted by itself.

    I restarted napp-it and saw that the pool had imported. I checked in WinSCP and the /NAS/NAS_Media folder reappeared under root. Is this right? The pool is resilvering now... we'll see.
     
  6. _Gea

    Can you access your data in /NAS/NAS_Media?
    This does not depend on resilvering or pool scrubbing.
     
  7. N Bates

    No, I can't. It looks like it is in a never-ending resilvering loop. Can I use a snapshot that I took before upgrading the OS? If so, do I need to use the first one or the second one? My pool is now a different version (it was 28, now it's 5000); will that make a difference in using snapshots?

    ZFS NAME CREATION USED AVAIL REFER SNAPS: 5/5
    NAS/NAS_Media NAS/NAS_Media@06.07.2017_04:39:25 Thu Jul 06 04:39 2017 779K - 29.3T delete
    NAS NAS@06.07.2017_04:39:25 Thu Jul 06 04:39 2017 0 - 44.9K delete
     
  8. _Gea

    You have two ZFS filesystems; the first is NAS, which is used as a parent container for your other filesystems like NAS/NAS_MEDIA.

    As snaps are a filesystem property, you must check snaps for NAS/NAS_MEDIA, where your data are. You can check snaps either via SMB (Windows > Previous Versions) or directly within the filesystem, e.g. via WinSCP in the folder
    /NAS/NAS_MEDIA/.zfs/snapshot

    As this folder is hidden by default, open WinSCP and click on the header /NAS/NAS_MEDIA above the file content. A window opens where you can append /.zfs (or enter the full path) to access the hidden folder.

    The pool version does not make a difference, as snapshots are a feature from the very beginning of ZFS.
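    On the console, the same hidden directory can be inspected directly. A sketch using the snapshot name listed earlier in the thread (note that the dataset's exact case matters in the path):

    ```shell
    # The .zfs directory is hidden but can be entered by full path.
    ls /NAS/NAS_Media/.zfs/snapshot                       # one entry per snapshot
    ls /NAS/NAS_Media/.zfs/snapshot/06.07.2017_04:39:25   # read-only view of the frozen state
    ```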
     
  9. N Bates

    Apparently there aren't any there. Do I perhaps need to connect the HDD with Nexenta and somehow export the snapshots from there?
     
  10. _Gea

    That does not sound good.
    Snaps are frozen states of former data blocks prior to a modification/delete on a ZFS filesystem. They are part of the filesystem. If your filesystem is empty and has no snaps, there are no data.

    What is the output of
    zfs list -t snapshot
     
  11. N Bates

    Output as per below, hopefully it looks good?

    NAME USED AVAIL REFER MOUNTPOINT
    NAS@06.07.2017_04:39:25 0 - 44.9K -
    NAS/NAS_Media@06.07.2017_04:39:25 779K - 29.3T -
    rpool/ROOT/omnios@2017-07-23-10:53:38 6.47M - 752M -
    rpool/ROOT/omnios@2017-07-23-10:54:37 10.0M - 768M -
     
  12. _Gea

    You have one snapshot of your data filesystem
    NAS/NAS_Media@06.07.2017_04:39:25 779K - 29.3T

    If /NAS/NAS_MEDIA and /NAS/NAS_MEDIA/.zfs/snapshot/06.07.2017_04:39:25
    are empty, you have no data there.

    This is curious as you have
    name used avail refer mountpoint
    NAS 29.3T 903G 44.9K /NAS
    NAS/NAS_Media 29.3T 903G 29.3T /NAS/NAS_Media

    indicating a nearly full filesystem NAS/NAS_Media
     
    Last edited: Aug 5, 2017
  13. N Bates

    Yes, my system is nearly full. I just hope that I can somehow retrieve the data. If I execute the snapshot via napp-it, will this do anything?

    The below is what I have done; maybe that's what caused the system to screw up somehow:

    I unplugged the 60 GB 2.5" HDD that had NexentaCore 3
    I did not export the pool
    I attached a new 120 GB SSD, installed OmniOS and napp-it, and changed the hostname from unknown to NAS
    I imported the pool; somehow the import only imported raidz1-0 and raidz1-1, and the system started resilvering
    I detached the 120 GB SSD and reinstalled the 60 GB HDD with Nexenta and exported the pool
    I reattached the 120 GB SSD and imported the pool in OmniOS and napp-it
     
    Last edited: Aug 5, 2017
  14. N Bates

    I am getting a lot of these in my system services log:

    WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    [ Jul 25 19:09:29 Method "start" exited with status 95. ]
    [ Jul 25 19:33:01 Enabled. ]
    [ Jul 25 19:33:25 Executing start method ("/lib/svc/method/fs-local"). ]
     
  15. _Gea

    Seems you have a mount problem.
    Can you try mounting read-only manually (console, as root):

    zfs mount -o ro NAS
    zfs mount -o ro NAS/NAS_MEDIA


    What you can also check:
    create an ordinary folder /volumes, as Nexenta mounts filesystems there via the ZFS mountpoint property (see zfs set mountpoint)

    For mount options, see
    http://docs.oracle.com/cd/E19253-01/819-5461/gamns/index.html
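    Before and after the read-only attempts, it helps to see what ZFS itself believes about the mounts. A sketch (dataset names assumed from the thread):

    ```shell
    # Show the mount-related properties for the whole pool hierarchy; run as root.
    zfs get -r mounted,mountpoint,canmount NAS
    # Nexenta's default mount parent, in case a dataset still carries a
    # /volumes/... mountpoint from the old installation:
    mkdir -p /volumes
    ```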
     
    Last edited: Aug 6, 2017
  16. N Bates

    When I do zfs mount -o ro NAS, I get "cannot mount '/NAS': directory is not empty"; for zfs mount -o ro NAS_MEDIA, I get "cannot open 'NAS_MEDIA': dataset does not exist"; and when I try zfs mount -o ro NAS/NAS_MEDIA, I get "cannot mount 'NAS/NAS_MEDIA': I/O error".
     
  17. N Bates

    Really weird: I no longer get the I/O error message. I checked all the connections and they're all good, and napp-it sees all the attached disks, see below. Thanks for all your help, _Gea:

    id part identify stat diskcap partcap error vendor product sn
    c2t0d0 (!parted) via dd ok 120 GB S:0 H:0 T:0 ATA DREVO X1 SSD TA1762600550
    c3t0d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00466
    c3t10d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502115
    c3t11d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502120
    c3t13d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111892
    c3t14d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9BB502119
    c3t15d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111859
    c3t16d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111880
    c3t17d0 (!parted) via dd ok 2.2 TB S:0 H:0 T:0 ATA ST3000DM001-1CH1 Z1F25D2A
    c3t18d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA ST2000DM001-1CH1 W1E3YZCH
    c3t19d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA ST2000DM001-1CH1 W1E419W7
    c3t1d0 (!parted) via dd ok 1.5 TB S:0 H:0 T:0 ATA SAMSUNG HD154UI S1XWJ1LSC00468
    c3t20d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J90B111886
    c3t21d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJK0ZGS
    c3t22d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJ9X3GS
    c3t23d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJ5AXGS
    c3t24d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA2 X3UJK0DGS
    c3t2d0 (!parted) via dd ok 2.2 TB S:0 H:0 T:0 ATA TOSHIBA DT01ACA3 Z3GH157GS
    c3t3d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA Hitachi HUA72302 YGGYKA5D
    c3t4d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309168
    c3t5d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309410
    c3t6d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309403
    c3t7d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1KZ309407
    c3t8d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9AB500588
    c3t9d0 (!parted) via dd ok 2 TB S:0 H:0 T:0 ATA SAMSUNG HD204UI S2H7J9AB500578

    Edit:
    I have also created a folder /volumes under root and rebooted, but it is still the same.
     
    Last edited: Aug 6, 2017
  18. N Bates

    I have got the below:

    zfs list -r -o name,mountpoint

    NAME MOUNTPOINT
    NAS /NAS
    NAS/NAS_Media /NAS/NAS_Media
    rpool /rpool
    rpool/ROOT legacy
    rpool/ROOT/omnios legacy
    rpool/ROOT/omnios-backup-1 /
    rpool/ROOT/pre_napp-it-17.06free /
    rpool/dump -
    rpool/swap -

    And:

    zfs list -r -o name,mountpoint rpool/ROOT

    NAME MOUNTPOINT
    rpool/ROOT legacy
    rpool/ROOT/omnios legacy
    rpool/ROOT/omnios-backup-1 /
    rpool/ROOT/pre_napp-it-17.06free /

    And:

    zfs list -r -o name,mountpoint rpool/ROOT/c2t0d0
    cannot open 'rpool/ROOT/c2t0d0': dataset does not exist

    I am not sure whether the below relates to my issue or not:

    http://docs.oracle.com/cd/E19253-01/819-5461/ghnoq/index.html
     
    Last edited: Aug 6, 2017
  19. _Gea

    Do not confuse yourself with rpool or the disks.
    Your ZFS pool NAS is listed; you only need to care about mounting it and checking its content, e.g. via WinSCP.

    You had an ordinary folder /NAS after a pool export.
    Have you deleted it? It hinders a mount of the pool NAS under /NAS.

    What is the output of
    zfs get all NAS | grep mount

    it should give something like
    NAS mounted yes -
    NAS mountpoint /NAS local
    NAS canmount on default
     
  20. N Bates

    thanks _Gea, as per below:

    zfs get all NAS | grep mount

    NAS mounted no -
    NAS mountpoint /NAS local
    NAS canmount on default
     
  21. _Gea

    The question remains:
    do you have an ordinary folder /NAS?
     
  22. N Bates

    No, the server is called NAS and I have a folder called NAS_Media with my files in it.

    However under winSCP I do see a folder under root called NAS
    and then a folder in it called NAS_Media.
     
    Last edited: Aug 7, 2017
  23. _Gea

    OK,
    you have an ordinary (plain filesystem) folder /NAS that hinders the ZFS pool NAS from mounting there.
    You have a "subfolder" NAS_Media with your data. This means it is probably the mountpoint of the same-named ZFS child filesystem.

    This is not how it should be done.
    I would export NAS, rename the ordinary folder /NAS to /NAS.old (as I do not know its content) and import the pool again. It should then mount at /NAS, with the child ZFS filesystem NAS_Media at /NAS/NAS_Media.
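    The suggested sequence, sketched as console commands (run as root; pool and folder names taken from this thread):

    ```shell
    zpool export NAS
    mv /NAS /NAS.old      # move the ordinary folder out of the way
    zpool import NAS      # on import, NAS should mount at /NAS again
    zfs mount | grep NAS  # confirm NAS and NAS/NAS_Media are both mounted
    ```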
     
  24. N Bates

    After doing what you have suggested I am still getting the below:

    zfs get all NAS | grep mount

    NAS mounted no -
    NAS mountpoint /NAS local
    NAS canmount on default

    I have updated the OmniOS CE package but still have the same issue, and I still have to restart napp-it every time I switch the server off. Does the /NAS folder under /mnt do anything? Should I try to export, then delete or rename this folder and try again?
     
  25. _Gea

    Nexenta uses /volumes/poolname as the default mountpoint. Other ZFS platforms use /poolname.
    Your pool NAS uses the default /NAS mountpoint.

    ZFS systems do not use /mnt by default, so if you have a NAS folder there, it must be an ordinary filesystem folder that you have created (as pool NAS is not mounted) or the mountpoint of another filesystem.

    Check if your "data filesystem" is also on defaults
    zfs get all NAS/NAS_MEDIA | grep mount

    should give something like
    NAS/NAS_MEDIA mounted no
    NAS/NAS_MEDIA mountpoint /NAS/NAS_MEDIA
    NAS/NAS_MEDIA canmount on


    Try
    setting a new mountpoint at a nonexistent location:
    zfs set mountpoint=/NAS2 NAS
    zfs set mountpoint=/NAS_MEDIA2 NAS/NAS_MEDIA
    zfs mount -a

    and check for the folders /NAS2 and /NAS_MEDIA2 via WinSCP.
    If this does not work, I have no other idea.
     
  26. N Bates

    zfs get all NAS/NAS_MEDIA | grep mount returns:

    'NAS/NAS_MEDIA' dataset does not exist

    zfs set mountpoint=/NAS2 NAS:
    cannot mount 'NAS/NAS_MEDIA' : I/O error
    property may be set but unable to remount filesystem

    zfs set mountpoint=/NAS_MEDIA2 NAS/NAS_MEDIA
    cannot open 'NAS/NAS_MEDIA' : dataset does not exist

    In WinSCP, /NAS2 and a subfolder NAS_Media have been created.

    Thank you _Gea
     
  27. _Gea

    My fault.
    Your filesystem is NAS/NAS_Media, not 'NAS/NAS_MEDIA'.
     
  28. N Bates

    Yes, you are correct, I get the below:

    zfs get all NAS/NAS_Media | grep mount

    NAS/NAS_Media mounted no -
    NAS/NAS_Media mountpoint /NAS/NAS_Media inherited from NAS
    NAS/NAS_Media canmount on default
     
  29. _Gea

    We are now at an end point.

    If your data is not in /NAS2 (as NAS is now mounted there) or, for example, in a subfolder below it, and if you cannot mount NAS/NAS_Media at a new mountpoint, I have no more ideas.

    The mounting of a ZFS filesystem is controlled only by ZFS properties like canmount or mountpoint, and requires no more than that the mountpoint folder does not exist at the time of the mount command.

    Probably you are then at a point where you need a backup.
     
  30. N Bates

    I don't think a backup will make any difference now, or will it? I can't access my files at all and Samba does not seem to work. The data is still there, as the pool reports 29.3T of data; in napp-it I can see 29.3T in the pool area. The system goes into a resilver loop without ever finishing the job, and on boot-up I get the message that the ZFS pool is unsupported. Is there any other way I can access my data to back it up?

    I have even contemplated using FreeBSD or FreeNAS to see whether I can access my data there, but I am not sure whether this would make any difference.

    Many thanks for all the time you have spent to try and help me to resolve this issue, really appreciate it.
     
    Last edited: Aug 11, 2017
  31. _Gea

    All Open-ZFS platforms (BSD, Illumos and ZoL) use the same pool version 5000. If you get an unsupported-pool message, you are either on an old OS without support for pool v5000 (like Solaris, or NexentaCore with support up to pool v28), or the pool structure is corrupted.

    A backup must be done prior to a crash. Your only option now would be a mail to illumos-discuss, where ZFS developers are around. Maybe someone there has an additional suggestion.
     
  32. N Bates

    Thank you _Gea, I will make a post in the illumos forum and see whether there are other suggestions I can try. Would it be helpful if I link this thread?
     
  33. HammerSandwich

    HammerSandwich Gawd

    Messages:
    971
    Joined:
    Nov 18, 2004
    Good luck. If they can get you running, please post the solution here as well.
     
  34. HammerSandwich

    Actually, I have 1 thought. Don't hold your breath!

    Can you set the mountpoint with "zpool import -o", rather than changing the option after the pool's already online? (Apologies if I missed it while quickly rereading the thread.)
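    For reference, `zpool import -o` only accepts pool properties, so it cannot set a dataset mountpoint directly, but an alternate root achieves a similar effect at import time. A hedged sketch, keeping the pool read-only:

    ```shell
    zpool export NAS
    zpool import -o readonly=on -R /a NAS  # pool read-only, everything mounted under /a
    ls /a/NAS/NAS_Media                    # check whether the data shows up there
    ```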
     
  35. N Bates

    I sent an e-mail to the illumos group and, amazingly, I had a reply last night from a guy called Brad Stone; apparently he's the CEO of Menloware. He asked whether I would like him to log in and have a look (if so, send him my MAC address); alternatively, he linked me to a Menloware recovery program to use to try to recover my system.

    I would definitely post back if/when I find a solution.
     
  36. N Bates

    Brad Stone tried various things but I am getting the below errors. He suggested waiting, as I may get more ideas from others in the illumos ZFS group:

    errors: Permanent errors have been detected in the following files:

    NAS:<0x2>
    NAS/NAS_Media:<0x5>
    NAS/NAS_Media@06.07.2017_04:39:25:<0x5>
    NAS/NAS_Media@06.07.2017_04:39:25:<0x6>
    NAS/NAS_Media@06.07.2017_04:39:25:<0x16>
    NAS/NAS_Media@06.07.2017_04:39:25:<0x3b1c>
    NAS@06.07.2017_04:39:25:<0x2>
    NAS@06.07.2017_04:39:25:<0x6>
    NAS@06.07.2017_04:39:25:<0x7>
    NAS@06.07.2017_04:39:25:<0xd>
     
  37. N Bates

    This is what Brad Stone posted, for anyone else who may have an idea on how to recover (copied from the illumos ZFS group):

    Brad Stone
    Aug 12 (3 hours ago)
    A couple of comments in case anyone wants to help try to recover this system.

    Summary is that a disk went bad and was replaced using "zpool replace", but the raidz1 vdev has never recovered. In particular, resilvering seems to run for a while and then suddenly resets itself, starting again from the beginning. The pool goes from degraded to unavailable status and needs a reboot to get back to a degraded state. The pool has only one dataset, which can't be mounted (attempts to mount get an I/O error).

    Attempts to import the pool read-only, force import, etc. work, but aren't really the issue, since the pool could be imported anyway; it just can't be mounted. Pool status shows a handful of permanent errors.

    I wonder if there might be value in putting the bad drive back in place of its replacement and trying to import using a previous txg?

    Beyond my expertise, but hopefully someone else has some ideas.
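    The rewind Brad describes maps to `zpool import`'s recovery options. A hedged sketch (the `-F` rewind discards recent transactions, so dry-run first and keep the pool read-only where possible; illumos also has an undocumented `-T <txg>` for an explicit transaction group, which is very much a last resort):

    ```shell
    zpool export NAS
    zpool import -nF NAS                  # dry run: report whether discarding the last
                                          # few transactions would make the pool importable
    zpool import -F -o readonly=on NAS    # actually rewind, keeping the pool read-only
    ```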
     
  38. N Bates

    I think I am going to try one last thing to see whether I can get the pool back: I am going to insert the drive that went bad and see whether this does anything. I am not sure if I need to export the pool first, or if I need to do a drive replace. Any suggestions, anyone, as a last stab at getting my data back?
     
    Last edited: Aug 20, 2017 at 3:57 PM