OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Discussion in 'SSDs & Data Storage' started by _Gea, Dec 30, 2010.

  1. brianmat

    brianmat [H]Lite

    Messages:
    114
    Joined:
    Sep 1, 2011
A couple of quick things, _Gea:

    -- For whatever reason the latest OmniOS would not work for us. We kept getting an APD (all paths down) issue with our NFS shares. We had to go with OmniOS v11 r151010 and everything worked perfectly.

    -- In the home > System > Network IB > ipib section the IP addresses do not display. The command line works and the interfaces are set up correctly. I can create new interfaces through it just fine, but I can't see them. This is with 0.9f6.

    -- SRP didn't work out of the box even though the menu choice is there. I had to run "pkg install storage-server" to get it to work. I didn't know if it was left off intentionally or accidentally.
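    In case it helps anyone else, here is roughly the sequence that got SRP going for us (assuming the standard COMSTAR/SRP SMF service names; verify them with svcs on your box):

        pkg install storage-server     # pull in the COMSTAR target packages
        svcadm enable -r ibsrp/target  # enable the SRP target service plus dependencies
        svcs ibsrp/target              # confirm it is online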
     
  2. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    It was quite a while ago that I ran some tests with a single user on OI/OmniOS with SMB1.
     
  3. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    OmniTI made some improvements to NFS in the current 151014. While there is no known general problem with ESXi, I would also stay with an earlier release for NFS if that was definitely more stable for you.

    As I do not use IB myself, the IB integration was done together with user Frank as a free and open community effort. You may check the menu actions at "/var/web-gui/data/napp-it/zfsos/03_system and network=-lin/033_Network_IB/" with common functions in /var/web-gui/data/napp-it/zfsos/_lib/illumos/iblib.pl. I can add missing pieces but need some help due to missing hardware.

    "pkg install storage-server" was only needed on Solaris in the past.
    I added this into the wget installer for OmniOS as well just to be sure everything is there.
     
    Last edited: Sep 26, 2015
  4. davewolfs

    davewolfs Limp Gawd

    Messages:
    333
    Joined:
    Nov 7, 2006
    Any thoughts on which would work better as a slog, an Intel 750 or an S3710?
     
  5. ToddW2

    ToddW2 2[H]4U

    Messages:
    4,019
    Joined:
    Nov 8, 2004
    The 750 isn't meant to hold up to the writes, but it will perform faster.
     
  6. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
    Get a 750. Honestly, could you do enough writes for it to get that messed up before you rebuild your array? I guess I shouldn't talk, I use an X25-E SLC slog. But still, get the 750 and save your money. An over-provisioned 750 should last forever.
     
  7. ST3F

    ST3F Limp Gawd

    Messages:
    181
    Joined:
    Oct 19, 2011
    An Intel S3500 or S3700
    .. or a Samsung SM863.
     
  8. ToddW2

    ToddW2 2[H]4U

    Messages:
    4,019
    Joined:
    Nov 8, 2004
    Good point, over-provision it like crazy and it should last :)
     
  9. CopyRunStart

    CopyRunStart Limp Gawd

    Messages:
    153
    Joined:
    Apr 3, 2014
    So are you saying that even with multiple clients ~300MB/s is the maximum speed of SMB1?
     
  10. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    I have only tried a single user without performance settings like jumbo frames that may be needed for higher values.
     
  11. CopyRunStart

    CopyRunStart Limp Gawd

    Messages:
    153
    Joined:
    Apr 3, 2014
    Ok, understood.

    I know this has been asked a million times but I'm having trouble grasping SMB share permissions on ZFS.

    Is the basic premise that on the Solaris system you just create the same usernames that already exist on the Windows boxes? And then assign permissions in Napp-it based on those usernames?
     
  12. spankit

    spankit Limp Gawd

    Messages:
    262
    Joined:
    Oct 18, 2010
    Edit: Rebooted both machines and my issues still seem to be happening.


    Removed my initial post as my issues are still not resolved. Will continue to work on it and post back.
     
    Last edited: Sep 29, 2015
  13. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    @CopyRunStart
    This is similar on any server OS.
    If you need access restrictions you must create users locally on your server. On access you must enter a known login name and password. If your local Windows username and password are the same, this login is skipped.

    The only other option is a centralized user database with Windows Active Directory. To use AD you must join your server to the AD.

    You can assign permissions remotely from Windows (optionally login as root to gain full permissions) or locally on OmniOS/Solaris.
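    A minimal sketch of both options from the CLI, assuming the kernel SMB server is running and PAM is set up to store SMB passwords (the napp-it installer takes care of that); "alice" and "mydomain.local" are placeholder names:

        # option 1: local users - create the user, then set the password
        useradd -m -d /export/home/alice alice
        passwd alice                   # also stores the SMB password via PAM

        # option 2: Active Directory - join the domain once, then use AD accounts
        smbadm join -u Administrator mydomain.local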
     
  14. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    Thanks for the info.
    I cannot say when this is needed.
    In my own setup (OmniOS, Windows 2012 AD, public domain) I do not need it.
    A simple join is enough.
     
  15. spankit

    spankit Limp Gawd

    Messages:
    262
    Joined:
    Oct 18, 2010
    Looks like my issues are still present. I rebooted both the OmniOS server and my workstation. I'm back to only being able to get onto the shares using the IP address. I'm playing around with Samba 4 at the moment, so this isn't a standard Windows domain controller. I'll get one of those running quickly and see if my issues persist.
     
  16. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    29,245
    Joined:
    Oct 29, 2000
    FreeNAS seems conspicuously absent from this discussion.
     
  17. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
    You're not serious, right? This is a thread about a solution that would obsolete FreeNAS and other pre-packaged NAS solutions.
     
  18. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    29,245
    Joined:
    Oct 29, 2000
    That is not clear from the first post. It explains what ZFS is and touts ZFS as the next best thing in storage :p FreeNAS is a ZFS implementation.

    Can you explain in what way this would be superior/preferable?
     
  19. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
    That's a topic for another thread, but I think it's clear from the post that ZFS is rad, and one way to leverage ZFS in an easier-to-use way is to use _Gea's front end.
     
  20. CopyRunStart

    CopyRunStart Limp Gawd

    Messages:
    153
    Joined:
    Apr 3, 2014

    I should have been more specific. Basically my confusion is about how *nix/Solaris treats permissions vs. how Windows treats permissions.

    I didn't realize I could just join the Solaris server to the AD. Does anyone know of any good guides on integrating AD with Solaris? I did some googling but the guides seem a bit incomplete.


    EDIT: Forgot to mention I did some testing regarding my earlier question on maxing out 10Gb with SMB.

    The test was: a Solaris x86 server connected to a 10Gb switch with a twinaxial cable, and 10 Windows hosts connected to the same switch with 1Gb connections. I had all 10 machines copy down the same 5GB file at once over SMB. According to the Solaris System Monitor, it was sending at about 8.8Gb/s, or 1.1GB/s.
     
    Last edited: Sep 29, 2015
  21. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    Which ZFS platform among BSD, OSX, Linux or Solaris is superior is always a very subjective question. Many ZFS problems are common to all of them, so there's no problem with discussing them here or talking about differences between platforms. But basically this is a Solarish (Oracle Solaris and its forks) thread, not a general ZFS thread. There are other forums with a focus on BSD, FreeNAS, NAS4Free, ZFSGuru or ZoL.
     
  22. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    Maybe it's because, from the outside, Solarish behaves almost identically to a Windows 2003 server after joining a domain. This covers management via the Windows server console, permissions regarding users, user identification with SIDs, snaps exposed as Previous Versions, and NTFS-like ACLs.

    You may read/google the docs from Oracle, but you can also use the docs from Microsoft, as the behaviour is nearly identical for a remote user. The same goes for handling a ZFS filesystem: you can move it without preparing any settings and permissions stay intact, just like an NTFS disk.
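    If you want to look at or change those ACLs locally on the Solarish side, a minimal sketch (the file path and user name are placeholders):

        # show the NFSv4 / Windows-style ACL of a file or folder
        ls -V /tank/share/report.doc

        # grant a (local or AD) user read access via an allow ACE
        chmod A+user:alice:read_data:allow /tank/share/report.doc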
     
  23. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    Info:
    There is an update for OmniOS 151014 long-term stable:
    - NVMe driver
    - NFS bugfixes

    ReleaseNotes/r151014
     
  24. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
  25. davewolfs

    davewolfs Limp Gawd

    Messages:
    333
    Joined:
    Nov 7, 2006
    Cool!

    A question regarding L2Arc. I have a Samsung 850 Pro lying around. If I want to add this but don't want my L2Arc to be the size of the entire disk, how can I control its size? Do I need to format and partition it, or do I add the entire disk and control the L2Arc size from somewhere else?
     
  26. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
    Partition it and assign it that way.
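    As a rough sketch on Solarish (the device c2t1d0 and the pool tank are placeholder names):

        # create a slice of the desired L2Arc size with format(1M),
        # e.g. a 100 GB s0, then add only that slice as cache
        format                         # interactive: select c2t1d0, partition, label
        zpool add tank cache c2t1d0s0

        # for comparison, handing over the whole disk would be:
        # zpool add tank cache c2t1d0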
     
  27. davewolfs

    davewolfs Limp Gawd

    Messages:
    333
    Joined:
    Nov 7, 2006
    Do I need to worry about 4k alignment or anything? Is it easy to partition a disk the wrong way?
     
  28. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
    I think, someone correct me if I'm wrong, but you can just partition it like normal and then add it without the need to specify alignment. I'm not sure if you can specify alignment on a slog/l2arc device.
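    For what it's worth, you can at least check afterwards what alignment (ashift) ZFS chose for each vdev; a quick sketch with tank as a placeholder pool name:

        # dump the pool config including the ashift of each vdev
        # (ashift=9 means 512B alignment, ashift=12 means 4K)
        zdb -C tank | grep ashift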
     
  29. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    Partitioning is one option.
    The other option is a fixed reservation = HPA (host protected area).
    You can create one with tools like hdat2, http://www.hdat2.com/files/cookbook_v11.pdf

    Main advantage: you can use the full disk. No need to care about partitions.
    This is similar to the reservations of enterprise SSDs.

    In any case, you should use a new SSD or do a secure erase to help the SSD firmware optimize data in the background.
     
  30. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
    you could also dd an image like

    # assumes you've formatted and mounted the drive to /slog
    sudo dd if=/dev/zero of=/slog/slog0.img bs=1M count=1024   # makes a 1 GB image
    sudo losetup /dev/loop0 /slog/slog0.img                    # create the block device ("IoSetup" should read losetup)
    sudo zpool add tank log /dev/loop0                         # add it to the zpool ("tank" is a placeholder pool name)

    granted this is convoluted so don't do it
     
  31. danswartz

    danswartz 2[H]4U

    Messages:
    3,641
    Joined:
    Feb 25, 2011
    Is this under Linux? The syntax looks like it. If so, not recommended, as there is a possible deadlock due to the block code and ZFS recursing or somesuch...
     
  32. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
    I use it on my pool to cache torrent writes. I use it in conjunction with forcing all writes to be synchronous to the slog drive so that my pool doesn't get too fragmented, or so the theory goes.
     
  33. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    The ZIL/slog does not affect regular writes to your pool.
    Without sync enabled, all small writes are cached in RAM and written after a few seconds as one large sequential write. On a power loss these few seconds are lost. As ZFS is a copy-on-write filesystem, your file structure is always intact, but not all committed writes are on disk.

    With sync enabled, the same thing happens. But additionally, every single small write command is logged to the ZIL device (on-pool or as a separate slog) and committed from the ZIL. This is what it takes to be transactionally safe. Example: a single financial transaction affects two accounts with two writes, one where you take the money from an account and one where it goes onto another. Both or none must be done, or some money can go to digital nirvana when only the first write is on disk.

    The same applies when you use the pool to store VMs with older filesystems, where an inline data update is followed by an update of the metadata. A crash in between can corrupt such filesystems. Your application or OS must be able to trust a commit on critical writes.

    The ZIL is only read after a crash, on the next reboot, to finish all committed writes. Think of it like the BBU on a hardware raid.
    In your case, just disable sync and use an SSD-only pool for torrents. With a regular SMB filer, sync is not used by default and is quite useless, as ZFS itself is always valid and a file that is being written during a power loss is corrupt anyway.
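    Since sync is a per-filesystem ZFS property, switching it is a one-liner; a sketch with tank/torrents and tank/vmstore as placeholder filesystem names:

        # skip ZIL logging entirely for the torrent filesystem
        zfs set sync=disabled tank/torrents

        # or force every write through the ZIL/slog for a VM datastore
        zfs set sync=always tank/vmstore

        # check the current setting
        zfs get sync tank/torrents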
     
    Last edited: Oct 3, 2015
  34. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
    Hmm. Thanks gea. How do I limit or prevent fragmentation of my larger pool without periodically recreating the pool?
     
  35. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    You cannot limit or prevent fragmentation.
    But this is only a problem if your pool is unbalanced and/or nearly full.

    So if the question is how to prevent performance degradation,
    the answer is: keep the fill rate below, say, 70-80%.

    If you use SSDs, fragmentation is no longer a problem.
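    If you want to enforce that fill rate instead of just watching it, a simple sketch is a quota on the top-level filesystem (the pool name tank and the 2.4T value for a hypothetical 3 TB pool are placeholders):

        # cap usable space at roughly 80% of a 3 TB pool
        zfs set quota=2.4T tank

        # watch the current fill rate (CAP column)
        zpool list tank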
     
  36. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,280
    Joined:
    Jun 22, 2004
    With SSDs becoming cheaper I'll move to them more and more. Thanks, Gea. I think what I'll do is create a zvol that is 75% of the size of my zpool and only use that volume. The pool I speak of is a single 3-terabyte spindle disk.
     
  37. davewolfs

    davewolfs Limp Gawd

    Messages:
    333
    Joined:
    Nov 7, 2006
    Didn't think of that one! I did this for my SLOG.

    My thought was that if I partitioned an ultra-high-speed NVMe device, I might be able to get away with using the other partition as well. Although I'm not sure if that is recommended.
     
  38. davewolfs

    davewolfs Limp Gawd

    Messages:
    333
    Joined:
    Nov 7, 2006
    Question regarding ARC hit rate: is there any way to increase this number, or is it just the nature of the algorithm and the data?
     
  39. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    Partitioning and using the other parts, e.g. as L2Arc, is an option.
    But the performance of an SSD depends on the interface, the controller and the flash quality.
    A faster interface alone is not enough.
     
  40. _Gea

    _Gea 2[H]4U

    Messages:
    3,892
    Joined:
    Dec 5, 2010
    The algorithms around the ZFS ARC are among the best there are. If you want to increase the hit rate, you can add more RAM or add an L2Arc SSD (not as fast as RAM).
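    To see where you stand, a quick sketch on illumos/Solaris:

        # raw ARC counters (hits, misses, current size) from the kernel
        kstat -n arcstats

        # or, if the arcstat script is installed, a rolling view every 5 s
        arcstat 5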