OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Discussion in 'SSDs & Data Storage' started by _Gea, Dec 30, 2010.

  1. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    I am not aware of problems with Hipster, but to be honest I do not use HD shutdown.
    One thing you need to ensure is that no service accesses the disks regularly, such as a napp-it alert job or the Solaris fault management daemon fmd, which you may need to disable.
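    If you want to check from the console, a minimal sketch (assuming the stock illumos FMRI for fmd; verify the exact name with svcs):

    # show the fault management daemon's state, then disable it (illumos/Solaris SMF)
    svcs -l svc:/system/fmd:default
    svcadm disable svc:/system/fmd:default
    # re-enable later with: svcadm enable svc:/system/fmd:default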
     
  2. spankit

    spankit Limp Gawd

    Messages:
    258
    Joined:
    Oct 18, 2010
    I have a RAIDZ2 comprised of 8x 3TB HGST 512-sector drives with over 6 years of power-on time (approx. 53,200 hrs). While I haven't lost a drive yet, I know my days are numbered. The pool was created with ashift=9, and I know that in the past OI/OmniOS wouldn't let you replace a drive with a modern 4k drive if the pool was configured this way. Do those limitations still exist, or do I need to re-create my pool with ashift=12 so that I can freely replace my drives when they start to kick the bucket?
     
  3. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    Yes, this problem persists (I suppose on any OpenZFS/ZFS).
    And even if you get it working with a 512e disk, this would result in a performance degradation.

    The only proper way is to recreate the pool with ashift=12. With 512n disks this happens when you mix a physical 4k disk into a vdev, or if you force it in sd.conf.
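    For illustration, a hedged sketch of the sd.conf route; the vendor/product string is an example and must match your disks (vendor field padded to 8 characters), and "tank" is a placeholder pool name:

    # /kernel/drv/sd.conf -- report a 4k physical block size for matching disks
    sd-config-list = "ATA     HGST HDN724040AL", "physical-block-size:4096";

    # reload the sd driver configuration, then verify the ashift of the new pool
    update_drv -vf sd
    zdb -C tank | grep ashift    # 9 = 512B sectors, 12 = 4k sectors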
     
  4. stevebaynet

    stevebaynet Limp Gawd

    Messages:
    199
    Joined:
    Nov 9, 2011
    Running OmniOS + Napp-IT 17.01

    Been working great so far, but just added 10 new disks (SAS disks connected via LSI 2007 HBA)

    OS seems fine, ZFS pools still working, but in Napp-IT, if I click on disks or pools, browser just spins.

    minilog
    --- _lib _lib/illumos/get-disk.pl &get_mydisk_format 20 <- admin.pl (eval) 833 <- admin.pl &load_lib 515 <- admin.pl &get_preloads 272 ---

    exe(get-disk.pl 86): format

    I can log into the box as root and run an iostat or zpool list, and those work fine.

    Rebooting would be a PITA. Any tips on where to check/look?
     
  5. stevebaynet

    stevebaynet Limp Gawd

    Messages:
    199
    Joined:
    Nov 9, 2011
    Looks like I spoke too soon. I let the browser spin and spin, wrote the above, came back 5 mins later, and it came up. Now when I hit Pools or Disks it comes up reasonably fast.

    Not sure if this has anything to do with it, but I'm fairly certain one of the newly added disks is bad (*shakes fist!!*)

    Best to update napp-it while I am here lurking?
     
  6. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    It seems that the format command is hanging, as that is what napp-it executes to detect connected disks. Try the format command and a zpool list at the console to check the behaviour. The format command can be ended after the listing with Ctrl-C.

    If format or zpool status hangs, remove the new disks and re-insert them disk by disk, wait a little, and check whether format lists each disk, to find the bad one.

    If format lists all disks but stalls on a particular disk for a while, remove that one.
    Updating napp-it to current is an option but would not help, as this seems to be a disk problem.
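    For example (the empty-stdin trick makes format non-interactive):

    format </dev/null    # prints the disk listing, then exits without the prompt
    zpool list           # should return promptly; a hang points to a bad disk
    iostat -En           # per-disk soft/hard/transport error counters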
     
  7. natkin

    natkin n00bie

    Messages:
    27
    Joined:
    Mar 31, 2014
    Gea,

    I do notice that as of 151022, when upgrading napp-it, the installation is broken until another login. (It actually took me a while to realize that it could be fixed with just a login; I had been rolling back to a pre-update BE on my test machine to await a working napp-it version.)

    This breakage is most inconvenient because root's run of auto.pl certainly does not wait for another login, so it is broken until then. Do you think napp-it might run the fix-up that happens during login w.r.t. OS-specific packages (e.g., /tools/omni_022/CGI/) just after the update, so as not to break auto.pl?

    Thank you!
     
  8. docjay

    docjay n00bie

    Messages:
    6
    Joined:
    Dec 1, 2014
    I'm adding another disk to napp-it (passed through via ESXi) just for snapshots. I'm wondering if there is a way to tell napp-it to create snapshots on my new disk?

    Also, it looks like there is a way to manage snapshots of my VMs through the napp-it interface? Jobs --> ESXi Hot-snaps

    Thanks for any help with this
     
  9. HammerSandwich

    HammerSandwich Gawd

    Messages:
    986
    Joined:
    Nov 18, 2004
    You'll need to create a new pool as the destination for zfs send/recv from your live pool. Older snaps can be removed from the live pool, but it must retain the last snapshot sent, so that incremental send/recv keeps working.
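    A sketch with example pool/dataset names (tank is the live pool, backup the new one):

    # initial full copy to the new pool
    zfs snapshot tank/data@snap1
    zfs send tank/data@snap1 | zfs recv backup/data

    # later: incremental send -- snap1 must still exist on both sides
    zfs snapshot tank/data@snap2
    zfs send -i tank/data@snap1 tank/data@snap2 | zfs recv backup/data

    # only now is it safe to remove the older snap from the live pool
    zfs destroy tank/data@snap1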
     
  10. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    A napp-it update via About > Update usually forces a logout/login and kills all running napp-it processes, so this should usually not be a problem.
     
  11. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    No, not possible. Because of the copy-on-write filesystem, a ZFS snapshot is simply a freeze of the current pool state, not a copy of any data.

    The ESXi snapshot option is there to take an ESXi hot snapshot, including its memory state, prior to the ZFS snap (which to a running VM is like a sudden power-off), so that the ESXi snapshot files are included in the ZFS snap. This allows an ESXi hot restore to the running/online VM state after a ZFS snap restore.
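    A sketch of the ordering this automates, using ESXi's standard vim-cmd tool (the VM id 42 and the dataset name are examples):

    # 1. ESXi hot snapshot incl. memory (args: vmid name description includeMemory quiesced)
    vim-cmd vmsvc/snapshot.create 42 hotsnap "pre ZFS snap" 1 0
    # 2. the ZFS snapshot now contains the ESXi snapshot files
    zfs snapshot tank/vmstore@hotsnap-1
    # 3. remove the ESXi snapshot again so its delta files do not grow
    vim-cmd vmsvc/snapshot.removeall 42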
     
    Last edited: Sep 2, 2017
  12. natkin

    natkin n00bie

    Messages:
    27
    Joined:
    Mar 31, 2014
    It's a problem every time now, and I'm saying there's a repugnant lack of grace in napp-it allowing auto.pl to become broken by an update and requiring a login to fix it.
     
  13. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    auto.pl is just a cronjob, executed for example every 15 minutes, that checks for jobs that are due to run, like a replication. So auto.pl itself is uncritical. You are probably talking about running jobs: an update/downgrade cancels jobs, as the whole napp-it environment may change, which would cause trouble for jobs still in flight. Usually cancelling a job is not a problem. You also need the logout after an update/downgrade because running parameters, menus and other internal state from the old version may no longer be valid.
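    For illustration, a typical root crontab entry (the script path is from a default napp-it setup; verify yours with crontab -l):

    # run the napp-it job scheduler every 15 minutes (Solaris cron has no */15 syntax)
    0,15,30,45 * * * * perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/auto.pl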

    What is your OS? A logout is usually forced after an update/downgrade;
    I have only had some problems with Linux and auto-logout.
     
  14. WishYou

    WishYou n00bie

    Messages:
    3
    Joined:
    Oct 19, 2016
    Hi Gea!

    I recently discovered a problem with snap jobs. As of version 17.06free, 'del zero' is not working anymore. None of my snap jobs remove empty snaps since I upgraded.

    Is this a known issue?
     
  15. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    No, but I will check that for the next update.
     
  16. stevebaynet

    stevebaynet Limp Gawd

    Messages:
    199
    Joined:
    Nov 9, 2011
    So I know which disk is bad and I am planning to pull it tonight and replace.

    Is there anything I need to do before pulling it? (it is not attached to any pool)

    Is there anything I should do before replacing it? (clear anything, etc)

    In the past, we have added drives a bunch at a time. My assumption now is I should add one drive, wait for it to be recognized, make sure it is good and does not lock up, then proceed to the next drive?
     
  17. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    On modern hotplug-capable hardware, just plug/unplug.
    OmniOS/OI etc. will detect the change after a few seconds.

    iostat keeps removed disks in its inventory until the next reboot, but this does not matter.
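    If a disk does not show up, a short console sketch:

    devfsadm -Cv         # rebuild /dev links; -C also removes stale entries
    format </dev/null    # the new disk should appear in this listing
    iostat -En           # removed disks stay listed here until the next reboot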
     
  18. Mastaba

    Mastaba Limp Gawd

    Messages:
    227
    Joined:
    Apr 2, 2011
    I got this:

    [image: disk status screenshot showing errors on one HDD]

    Does that mean the HDD is dead?

    Also, how do I set email alerts? I tried entering my Gmail address, but doesn't napp-it need to know my email account password to send me mail?
    I tried the test mail function and TLS (neither works?)
     
  19. stevebaynet

    stevebaynet Limp Gawd

    Messages:
    199
    Joined:
    Nov 9, 2011
    Yup, bad disk, you will need to replace it (you can do that inside napp-it, and it will resilver).
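    The console equivalent, in case it helps (pool and device names are examples):

    # replace the failed disk with the new one and watch the resilver
    zpool replace tank c1t5d0 c1t8d0
    zpool status -v tank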

    As for the email part, I'm curious to hear from Gea about this as well. (I don't recall if it is part of the licensed/paid add-ons, but I would like this too.)
     
    Last edited: Sep 7, 2017
  20. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    You can set a mailserver and password in About > Settings,
    then create an alert job that uses these settings (unencrypted, port 25).

    If you want to use encrypted mail, for example with Gmail, you must install SSL and TLS support on your OS,
    see https://www.napp-it.org/downloads/tls.html

    Then switch mail to TLS in menu Jobs.
    Your alerts will then use TLS.
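    Two generic connectivity checks that can help narrow down mail problems (hostnames are examples):

    telnet mail.example.com 25                                     # plain SMTP
    openssl s_client -starttls smtp -connect smtp.gmail.com:587    # TLS-capable SMTP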
     
  21. brutalizer

    brutalizer [H]ard|Gawd

    Messages:
    1,587
    Joined:
    Oct 23, 2010
    Gea,
    Have you looked at SmartOS? It is made for virtualization.

    Regarding Bryan Cantrill and Solaris: OpenIndiana is not Solaris. I view this like one Linux distro ceasing development while other Linux distros carry on. Illumos is open and thriving.
     
  22. _Gea

    _Gea 2[H]4U

    Messages:
    3,649
    Joined:
    Dec 5, 2010
    When Oracle closed ZFS and OpenSolaris in 2010, the last open-source ZFS and OpenSolaris bits were forked into the Illumos project and have since been developed independently of Oracle, as a common effort mainly by companies like Delphix, Joyent (owned by Samsung; SmartOS) and Nexenta, plus community projects like OmniOS and OpenIndiana.

    Some of the distributions, like OmniOS, OpenIndiana and SmartOS, are open source. They all share the same Illumos base, similar to Linux and its distributions like Debian or CentOS, but each with a different focus on use cases.

    While OmniOS is a minimalistic, stable distribution for a just-enough ZFS server, OpenIndiana adds a lot of services and optionally the Mate GUI for additional use cases. SmartOS is focused on being a cloud OS, with impressive virtualisation options around KVM, Solaris zones, LX zones and Docker support. SmartOS runs completely from RAM/read-only USB sticks, with limited options in the global zone. This keeps SmartOS from being a good base for a pure storage server compared to OmniOS or OI (or it would require some work to make global-zone settings persistent on a data pool).