OpenIndiana replacement

Discussion in 'SSDs & Data Storage' started by farscapesg1, Aug 20, 2015.

  1. farscapesg1

    farscapesg1 2[H]4U

    Messages:
    2,575
    Joined:
    Aug 4, 2004
So, it looks like my OpenIndiana/Napp-it server is on its last legs. It's been struggling along for a while now, but it's at the point that a recent Napp-it upgrade failed and it isn't happy anymore :( Can't get the desktop to load, and I have to select a "pre-upgrade" snapshot when restarting or else I can't get into Napp-it from a web page.

What I'm wondering is which OS would be better to run on it. OmniOS or some other flavor?

    The good news is that I have an identical motherboard and proc to build a new storage server with, then just export the drive pool settings, swap the OS drives, and import the configurations.

iSCSI and Fibre Channel card support is important as I use it as a datastore for ESXi hosts. I'm currently using fibre for that (2 hosts are direct-attached with a single fibre connection to a dual-port fibre card in the server). I'll be honest and say I've just been running it with sync disabled to avoid the cost of SLOG and L2ARC devices, but it is connected to a 1400VA UPS and has never suffered a hard shutdown from power loss in over 3 years of usage.
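For reference, sync is a per-dataset ZFS property, so the safety/latency trade-off can be made per datastore rather than pool-wide; a minimal sketch (pool and dataset names are placeholders):

```shell
# Disable synchronous write semantics on one datastore only
# (trades power-loss safety for write latency; names are examples)
zfs set sync=disabled tank/esxi-ssd

# Verify the current setting and where it was set
zfs get sync tank/esxi-ssd
```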

    I also need it to support/run a Crashplan client since I use it to store all my media and sync that with Crashplan.. or at least the important stuff (about 2.5TB synced). I'm sure that will be a fun experience getting the client setup on a new system without trying to upload everything again ;)

    The current configuration is:
    4x 240GB SSDs in a striped mirror serving as a datastore for VMs with higher IO needs
4x 250GB 2.5" WD Black drives in a striped mirror serving as a datastore for other VMs
    6x 3TB drives in a 2x3 striped mirror housing all my SMB shares as well as 2 iSCSI logical units for VMWare (one for large VMDKs like Server Essentials and a smaller one for reclaiming space by vmotioning VMs to).

I will most likely rebuild the 6 drives on the new system as RAIDZ (two 3-disk RAIDZ1 vdevs striped) for the extra space, since I don't really need the striped-mirror performance for file storage. All the data is backed up to a separate HP N54L running the Synology software, so it is just a matter of redirecting to those files until I can restore them to the main file server.
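The pool move and the rebuilt layout could look roughly like this; a sketch only, with placeholder pool and device names:

```shell
# On the old box: export the pool cleanly before swapping OS drives
zpool export tank

# On the new box: scan attached disks, then import by name
zpool import
zpool import tank

# Rebuilding the six 3TB disks as two striped 3-disk RAIDZ1 vdevs
# (~12TB usable instead of ~9TB from the 3x2 striped-mirror layout)
zpool create tank \
  raidz1 disk1 disk2 disk3 \
  raidz1 disk4 disk5 disk6
```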
     
  2. rsq

    rsq Limp Gawd

    Messages:
    246
    Joined:
    Jan 11, 2010
    I was in the same boat as you when snoracle killed the OpenSolaris development.

I never bothered with the OpenIndiana/Illumos projects.

    Because all my other servers were running Ubuntu Server, I tried to consolidate everything to Linux. Thanks to the LIO target and ZFS on Linux I was able to do that.

Today, there is nothing that Solaris can do that Linux cannot. Except run napp-it. For me this was no problem because I use the command line to manage my storage. For you this may be a dealbreaker.

    Features from LIO that I use and can certify that they work:
    1. iSCSI (1G and 10G ethernet)
    2. Fibre Channel (QLogic HBAs are a must; beware of the Emulex cards... no workey)
    3. SRP through infiniband

I recently acquired a nice converged Ethernet switch (Brocade MP8000-B), and nowadays I only export ZVOLs via Fibre Channel (on QLE2562 cards) through that switch as FCoE. The storage server has 2x 8Gbit FC to the switch; all servers have one 10G Ethernet/FCoE link from the switch.
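For anyone curious what the LIO side of this looks like, a minimal iSCSI sketch (the ZVOL path, IQNs and backstore name are all placeholders):

```shell
# Expose a ZFS ZVOL as a block backstore
targetcli /backstores/block create name=esxi_lun0 dev=/dev/zvol/tank/esxi-lun0

# Create an iSCSI target and attach the LUN to its default portal group
targetcli /iscsi create iqn.2015-08.lan.storage:esxi
targetcli /iscsi/iqn.2015-08.lan.storage:esxi/tpg1/luns \
    create /backstores/block/esxi_lun0

# Allow one ESXi initiator by its IQN
targetcli /iscsi/iqn.2015-08.lan.storage:esxi/tpg1/acls \
    create iqn.1998-01.com.vmware:esxi-host1

# Persist the configuration across reboots
targetcli saveconfig
```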

    I am supremely happy with this setup.
     
  3. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    Linux is mainstream and most ZFS installations will be on Linux in the future.
    Currently I prefer Solaris & Co. as everything just works out of the box:
    the Solaris CIFS server embedded in ZFS with Windows SID, ACL and
    Previous Versions support, the embedded NFS server, one way to identify disks (WWN),
    a ZFS system disk with bootable snapshots, the Comstar FC/iSCSI stack, virtual networking,
    working hot-spare support and many, many other features made by Sun for Solaris.
    The maker of the OS takes care of them, not a dozen of others, and you can pick one of them.

    If you want to stay with free Solaris forks based on Illumos, your first choice is OmniOS as
    it is stable with very active development. I support OmniOS with napp-it.
    Another option is NexentaStor CE, which is free for noncommercial use with the Nexenta GUI.

    You can also use Oracle Solaris. With ZFS encryption, SMB 2.1 support and LZ4, Solaris 11.3
    will again be a very strong option. Besides OmniOS, Solaris is the main option with napp-it.
    Like NexentaStor CE, Solaris is not free for commercial use.

    Other options are BSD-based distributions.
     
    Last edited: Aug 20, 2015
  4. farscapesg1

    farscapesg1 2[H]4U

    Messages:
    2,575
    Joined:
    Aug 4, 2004
    Thanks! I'll take another look at Linux. I like the Napp-it interface but I rarely access it unless I get an alert that a drive is failing (which has been happening a lot lately). I need to get more familiar with Linux anyways.

    Is Ubuntu as good as any other flavor for running ZFS on Linux? I'm still toying with rebuilding the server as another ESXi host as an all-in-one and just pass the disk controllers and fiber card through to it. Might even ditch the fiber setup and just go back to iscsi since it isn't like I'm vmotion-ing a lot of data usually. It was mostly just to play around with.
     
  5. cantalup

    cantalup Gawd

    Messages:
    758
    Joined:
    Feb 8, 2012
    In my experience:
    Ubuntu is the easiest way to install or update ZoL :D
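For reference, on Ubuntu 14.04 in this era the install went through the ZoL PPA; a sketch, assuming the zfs-native stable PPA that was current at the time:

```shell
# Add the ZFS on Linux stable PPA and install (Ubuntu 14.04-era packages)
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs

# Load the module and confirm the build that DKMS produced
sudo modprobe zfs
modinfo zfs | head -n 3
```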

    I am using CentOS 7.x for stability, since CentOS piggybacks on RHEL source code, which is very conservative at the kernel level and has long support in years.
    I have seen upgrading ZoL break DKMS, and I had to recover manually :D.
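When a ZoL upgrade leaves DKMS half-done, recovery is usually a remove/reinstall cycle against the running kernel; a sketch with example version numbers (match whatever `dkms status` actually reports):

```shell
# See which module versions DKMS thinks are built/installed
dkms status

# Rebuild spl and zfs for the running kernel
# (0.6.4 is an example version, not necessarily yours)
sudo dkms remove spl/0.6.4 --all
sudo dkms remove zfs/0.6.4 --all
sudo dkms install spl/0.6.4
sudo dkms install zfs/0.6.4

# Reload the module
sudo modprobe zfs
```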

    If you don't want to know the Linux command line, use a GUI :D.
    Frankly... I never install a GUI on my servers. It just wastes space and my precious memory :p


    I upgraded my laptop to Windows 10. No issue at all accessing my beloved SMB shares
     
  6. rsq

    rsq Limp Gawd

    Messages:
    246
    Joined:
    Jan 11, 2010
Ultimately, I am hoping that there will be a ZVOL equivalent for Btrfs. That is the way I would like to go in the future, but as of yet no code has been written.

I looked into the source code of the device mapper to try to hack this together myself. Despite being a professional software engineer, I was not smart enough to make it work. This would need kernel developers with in-depth knowledge of the kernel's filesystem handling and device mapper. As an outsider, I expect a six-month learning curve to get something working.
     
  7. ST3F

    ST3F Limp Gawd

    Messages:
    181
    Joined:
    Oct 19, 2011
For the past week, to replace OpenIndiana, I've been testing with
    • Asrock x79 Extreme 11
    • Xeon E5 2687w
    • 4x 4 GB ECC
    • System : SSD OCZ Vertex 3
    • HDD : 16x WD SE
    • HBA : 2x M1015 flashed in LSI 9211-8i with p19 fw
    • LAN : Intel 10GbE SR
    -> Xubuntu 14.04 LTS + ZoL v0.6.4.2 (stable PPA & deb) + Napp-it: Bonnie++ freezes the system :/
    -> OmniOS doesn't recognize the onboard Ethernet ports, but 10GbE is OK
    -> ZFSGuru: since 2012, this distribution has gotten better; currently testing 10.03.002 + web 0.3.0 with 7x mirrored vdevs; many optimizations & tuning options available, package installations... very nice!

    ++

    St3f
     
  8. farscapesg1

    farscapesg1 2[H]4U

    Messages:
    2,575
    Joined:
    Aug 4, 2004
    At this point I'm really leaning towards Linux for a couple reasons, but for those of you using it.. how is the iSCSI and Fiber support? Especially when it comes to providing storage for ESXi.

I'm using all Emulex 4Gb fibre gear currently... without a fibre switch, so just a single direct connection from each ESXi host back to the storage server. Switching back over to iSCSI, or even NFS, isn't much of a concern as I have the available ports on a Cisco 3750G switch and a couple of 2-port and 4-port NICs laying around anyway.

    My current thinking for switching over to Linux:
    1) Just more "hands on" with Linux
    2) Consolidate some existing VM services onto the file server (NUT for UPS monitoring, Plex server, Sabnzbd/Sickbeard/uTorrent)
    3) The server is running a Xeon X3440 proc, so definitely overkill for just file-server duties.
     
  9. rsq

    rsq Limp Gawd

    Messages:
    246
    Joined:
    Jan 11, 2010
    farscapesg1,

Linux now bundles a SCSI target (LIO) in the kernel that can be administered with targetcli.

    The iscsi support is good, I expect no problems for you.

For Fibre Channel, you will be disappointed to hear that only the QLogic cards are supported in target mode.

    The qlogic cards also need qlini_mode=disabled set in their module parameters.

I have QLE2462 and QLE2562 adapters in use, and they all work impeccably.
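Putting a QLogic HBA into target mode is that one module option plus a targetcli fabric entry; a sketch, with a placeholder WWPN and backstore name:

```shell
# Disable initiator mode on the qla2xxx driver (required for target mode)
echo "options qla2xxx qlini_mode=disabled" | \
    sudo tee /etc/modprobe.d/qla2xxx.conf

# Rebuild the initramfs so the option applies at boot (Debian/Ubuntu)
sudo update-initramfs -u

# After a reboot, create the FC target on the HBA port's WWPN
# (WWPN and backstore name here are placeholders)
targetcli /qla2xxx create naa.21000024ff000001
targetcli /qla2xxx/naa.21000024ff000001/luns \
    create /backstores/block/esxi_lun0
targetcli saveconfig
```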
     
  10. farscapesg1

    farscapesg1 2[H]4U

    Messages:
    2,575
    Joined:
    Aug 4, 2004
Thanks. As usual... shouldn't post in the morning before my coffee ;) I just took a quick look at the card in an ESXi host when I posted... but forgot that I am using Emulex in the hosts and a QLogic 2462 in the storage server, as OpenIndiana suffered from the same limitation of only QLogic cards working in target mode. I have a spare QLE2462 sitting here, so I should be good on that end ;)
     
  11. tsrtg

    tsrtg n00b

    Messages:
    32
    Joined:
    Sep 11, 2014
    I wonder why you don't mention sequential resilver as a factor. Considering that traditional resilver takes ages and there is a risk of losing additional drives in the process of resilvering because of extra stress, isn't sequential resilver an important feature for making the array safer?
     
  12. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
  13. brutalizer

    brutalizer [H]ard|Gawd

    Messages:
    1,593
    Joined:
    Oct 23, 2010
Sequential resilvering runs at full platter speed, 150MB/sec or so, which means resilvering is very fast (it does not take days in the worst case as with OpenZFS).