How to format ZFS Disks & Create a New Pool.

Discussion in 'SSDs & Data Storage' started by N Bates, Feb 9, 2018.

  1. N Bates

    N Bates [H]Lite

    Messages:
    67
    Joined:
    Jul 15, 2017
    Can someone please point me in the right direction on how to format ZFS disks that already have data on them which I can no longer access, and then create a new pool? I will be using Napp-it to perform the pool creation.

    Also, which is quicker and better supported: OmniOS ce or Solaris 11.3 minimal?

    Many thanks.
     
  2. _Gea

    _Gea 2[H]4U

    Messages:
    3,723
    Joined:
    Dec 5, 2010
    You can try Disks > initialise in napp-it.
    If that is not enough, you must delete all partitions and reformat the disk with a different filesystem; then you can use it. (ZFS refuses to add a disk that still carries a valid ZFS label, i.e. a pool that was not destroyed.)
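
    If you prefer the console over the napp-it menu, the same idea looks roughly like the sketch below. The device name c0t0d0 is only a placeholder and exact device paths can differ; check the real names first and be sure you wipe the right disk.

      # list the disks the system sees (names like c0t0d0 are examples)
      format < /dev/null

      # remove an old ZFS label so the disk can be reused in a new pool
      zpool labelclear -f c0t0d0

      # if a stale label still lingers, zero the start of the disk
      dd if=/dev/zero of=/dev/rdsk/c0t0d0 bs=1024k count=1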

    Better supported?
    From a commercial point of view this is Solaris, as Oracle has promised support at least until 2034.

    OmniOS is open source with community support, but with a commercial support option:
    https://www.omniosce.org/invoice.html

    Feature- and performance-wise Solaris (currently 11.4b) is superior, with SMB3 (a ZFS/kernel-based SMB server), NFS 4.1, sequential resilvering, improved dedup, auditing and, best of all, performance. But it is not free (only for noncommercial demo and development use), and with ZFS v43 it is incompatible with Open-ZFS.

    This is why most are on OmniOS. It is free, close to Solaris, and compatible with Open-ZFS v5000.
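
    A quick way to see which side of that divide an existing pool sits on is to query its version: Open-ZFS pools report v5000 and use feature flags, while Oracle pools report a plain version number such as 43. The pool name tank below is only an example.

      # show the on-disk version of an existing pool
      zpool get version tank

      # list the versions/feature flags the installed ZFS supports
      zpool upgrade -v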
     
    N Bates likes this.
  3. N Bates

    N Bates [H]Lite

    Messages:
    67
    Joined:
    Jul 15, 2017
    Thank you _Gea, as always you're the first to answer. Thank you for all the help and your support on this and other forums.
     
  4. N Bates

    N Bates [H]Lite

    Messages:
    67
    Joined:
    Jul 15, 2017
    Really strange, I am getting the below when booting up:

    ZFS I/O block error: all block copies unavailable
    NAS pool not supported

    The above probably stems from my last setup. However, I have since created a simple volume on all 24 drives and formatted them to NTFS, then created a new ZFS pool called NASS (the server hostname is NAS) with 3 x 8 raidz2 vdevs.
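
    For reference, the layout I created corresponds roughly to the command below; napp-it picked the actual disks, and the c#t#d# device names here are just placeholders.

      # 3 vdevs of 8 disks each, every vdev a raidz2 (device names are placeholders)
      zpool create NASS \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
        raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0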

    Before deleting the drives my previous pool was called NAS (see the drama here if you can be bothered):

    https://hardforum.com/threads/how-to-upgrade-from-nexenta-to-omnios.1939794/

    I have also noticed the below which I haven't been able to fix yet:

    Sendmail: my unqualified host name unknown ; sleeping for retry
    NAS smbd dyndns: failed to get domain

    I have accessed the /etc/hosts via WinSCP and the file contains the below:

    ::1
    :127.1.0.0.1
    :127.1.0.0.1 localhost loghost

    ::1
    :NAS
    127.1.001 NAS

    I have edited the file with various online suggestions without success. Does anyone know what this file should contain for OmniOS?

    I have also noticed that 3 x 8 raidz2 is not as quick as 6 x 4 raidz1. I am not sure what my priority should be: speed or data integrity?

    Many thanks for all your help.
     
  5. _Gea

    _Gea 2[H]4U

    Messages:
    3,723
    Joined:
    Dec 5, 2010
    1. sendmail requires a fully qualified hostname.
    I would simply disable sendmail (menu Services); see the sketch after this list.

    2. /etc/hosts
    localhost = 127.0.0.1 (not 127.1.0.0.1); see the sketch after this list.

    3.
    Sequential performance scales with the number of data disks, iops with the number of vdevs.
    As performance in a Copy-on-Write raid is limited more by iops, a 6 vdev config is faster than a 3 vdev setup.

    On every single read/write, all heads in a raid/vdev must be positioned. This is why a Raid-Z [1-3] vdev has the same iops as a single disk.
    Despite this I would go Raid-Z2 due to its higher level of data security (any two disks are allowed to fail).
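
    A rough sketch of the first two fixes, assuming the hostname is NAS; the 192.168.1.10 address and the .local suffix are only examples, and the exact SMF service names can vary between releases, so check with svcs first.

      # 1. find and disable the sendmail services (also possible in napp-it, menu Services)
      svcs | grep sendmail
      svcadm disable svc:/network/smtp:sendmail
      svcadm disable svc:/network/sendmail-client:default

      # 2. /etc/hosts with a fully qualified name for NAS
      ::1            localhost
      127.0.0.1      localhost loghost
      192.168.1.10   NAS.local NAS

    On point 3: both layouts have 18 data disks (3 x (8-2) = 6 x (4-1) = 18), so sequential throughput is similar, but 6 x 4 raidz1 gives 6 vdevs and therefore roughly twice the iops of 3 x 8 raidz2 with its 3 vdevs.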
     
    N Bates likes this.