OpenIndiana/ napp-it + OpenSource Clustering/ High Availability

Just a heads-up, I won't have time to contribute to this any time soon. I was running into some major issues with UID mapping for an Active Directory domain between OmniOS and Linux, so I wasn't able to get NFS to play nicely. I ended up going with a FreeBSD-based solution, which offers a vanilla Samba implementation I could work with. Good luck y'all.
 
Without Solaris I would use FreeBSD too,
but this seems to be more a problem of the CIFS server (uses Windows SIDs) vs SAMBA (uses UIDs).

You can use SAMBA on Solaris as well, or the Unix extensions in AD to deploy UIDs from the Windows AD.
 
Ah. When I used the Active Directory functionality in napp-it, there were a couple of issues. The first was that users were not enumerated until I specifically searched for them on the command line. The second was that they were being assigned random UIDs.

Can your Active Directory capability in napp-it be used with Samba on Solaris?

I'd like to avoid RFC 2307 in Active Directory because it's just another thing to have to manage. I'd prefer for UID mappings to be algorithmically generated.
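
For reference, the kind of algorithmic mapping I mean is what Samba's idmap_rid backend does; a rough smb.conf fragment (the domain name and ID ranges are only placeholders):

Code:
[global]
   security = ads
   workgroup = MYDOM
   realm = MYDOM.LOCAL
   # fallback backend for BUILTIN and other well-known SIDs
   idmap config * : backend = tdb
   idmap config * : range = 3000-7999
   # algorithmic mapping: UID/GID = range start + RID, nothing to manage in AD
   idmap config MYDOM : backend = rid
   idmap config MYDOM : range = 100000-999999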
 
The idea of Solaris CIFS is: be a Windows server, not a Unix server,
and create a temporary, random, ephemeral UID during a session to be "Unix compatible".

In reality, the CIFS server uses Windows SIDs and nothing else.

This is the reason you need a mapping Windows SID -> Unix UID for other Unix services,
either with a mapping Windows user -> local Unix user or with the help of the Windows AD Unix
extensions that give you a real UID for a Windows user.
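
A minimal sketch of such a name-based mapping with the illumos idmap command (the user and domain names are only examples):

Code:
# map one AD user to a local Unix user (name-based rule)
idmap add 'winuser:alice@MYDOM.LOCAL' 'unixuser:alice'
# wildcard rule: map every AD user to the Unix user of the same name
idmap add 'winuser:*@MYDOM.LOCAL' 'unixuser:*'
# show the configured rules
idmap list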

It may be possible to add SAMBA support in napp-it,
but do not expect this from my side in the near future. I am happy with CIFS.
So SAMBA on Solaris is currently CLI only.

(I would prefer an algorithmically generated UID on CIFS as well, but that depends on Oracle/Illumos)
 
Hi Gea, I just found a wonderful article by a guy called Saso, who is very active in the OmniOS/OI/ZFS mailing lists.

Seems he did some really great work on HA/clustering of ZFS using Pacemaker:

http://zfs-create.blogspot.nl/

I'm testing it right now and must confess it looks VERY promising.
 
Looks very interesting.
A tested, documented and free HA option for OmniOS would be fantastic.
 
@ Gea, any chance of you incorporating this in the neat napp-it web-GUI?

From my point of view this is (maybe next to RAID rebalance) the only thing ZFS/napp-it is missing before I would be comfortable using it in a production env.

Yes, maybe I or a co-worker understand the CLI and what to do, but no company I know would want to create a people SPOF :)

And again, if you are in need of testers, I'd be very happy to be a crash test dummy.
 
I had been working on this lately myself, using two test VMs and a shared FC array.

But I put that on hold for now; I'm ordering some equipment and want to move from test VMs to real hardware. For me that will likely be in the October timeframe, when I'm hoping to actually give this a shot.
 
@ Gea, any chance of you incorporating this in the neat napp-it web-GUI?

From my point of view this is (maybe next to RAID rebalance) the only thing ZFS/napp-it is missing before I would be comfortable using it in a production env.

Yes, maybe I or a co-worker understand the CLI and what to do, but no company I know would want to create a people SPOF :)

And again, if you are in need of testers, I'd be very happy to be a crash test dummy.

My problem: this is very interesting, but it is hard to find time for HA because I currently do not need it myself and can hardly include it in napp-it on my own.

If someone writes and publishes a wget online-script to set up and configure HA for a particular service like CIFS, NFS or iSCSI, together with a how-to manual, I can help to push such a solution and help to add this as a free add-on or a nonfree extension to the napp-it web-UI. HA is not trivial, and without a community this is not an alternative to commercial solutions like high-availability.com.
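
Purely as an illustration of what I mean (no such script exists yet; the URL is a placeholder), the idea is a one-liner in the style of the napp-it installer that pulls a setup script and runs it on both nodes:

Code:
# placeholder URL - illustrative only, such a script does not exist yet
# run on both cluster nodes; the script would install the cluster stack
# and drop in the resource configuration for the chosen service (CIFS/NFS/iSCSI)
wget -O - http://www.example.org/ha-nfs-setup.sh | sh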

This was my original intention when I started this thread.

PS:
I would prefer a not-too-expensive extension solution where someone is working on it on a regular basis.
 
@ Gea,

I understand where you're coming from, and if I could I would; however, lack of time ...

Then again, looking at the stuff Saso did, it shouldn't be much more than adding a few HTML pages and some scripts, since Pacemaker is a continuously developed project.
 
I can add or help to add some menu items.
The rest must be done by others.
 
Sorry, given that ZFS is not a cluster file system, the best we can do is an active-passive setup rather than an active-active setup, right?
 
Yes, but nothing stops you from mounting one zpool on one system and another zpool on the other system, with each node being active for one pool and passive for the other, if you want to limit resource wastage.
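
A rough crm sketch of that split (assuming a ZFS pool resource agent such as the one from Saso's blog series is installed as ocf:heartbeat:ZFS; pool and node names are only examples):

Code:
# each node is the preferred home of one pool, so both nodes do useful work
# while still being able to take over the other pool on failure
primitive pool-a ocf:heartbeat:ZFS params pool="tank-a"
primitive pool-b ocf:heartbeat:ZFS params pool="tank-b"
location pool-a-pref pool-a 100: node1
location pool-b-pref pool-b 100: node2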
 
Yes, but nothing stops you from mounting one zpool on one system and another zpool on the other system, with each node being active for one pool and passive for the other, if you want to limit resource wastage.
Hi, I see. Is that how Nexenta works as well? If so, I guess they should really be calling it active-passive, not active-active.
 
Hello,

I'm following this article about creating a ZFS cluster:
http://zfs-create.blogspot.cz/2013/06/building-zfs-storage-appliance-part-1.html

But I have a problem with the Pacemaker configuration. I'm still not able to get the IPaddr resource to start.

Here is my crm status:

Code:
Last updated: Fri May 16 09:59:16 2014
Stack: Heartbeat
Current DC: dikobraz-ha2 (cb856380-ef15-473c-bd2b-c884e7dd13af) - partition with quorum
Version: 1.0.11-6e010d6b0d49a6b929d17c0114e9d2d934dc8e04
2 Nodes configured, 2 expected votes
1 Resources configured.
============

Online: [ dikobraz-ha2 dikobraz-ha1 ]


Failed actions:
    virtual-ip_start_0 (node=dikobraz-ha2, call=3, rc=5, status=complete): not installed
    virtual-ip_start_0 (node=dikobraz-ha1, call=3, rc=5, status=complete): not installed

Can you help me with the "not installed" status? Is there anything I'm doing wrong?
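
For reference, this is how I have been checking that the IPaddr agent is actually visible to Pacemaker on both nodes (the OCF path is just my assumption of the default location):

Code:
# list the ocf:heartbeat agents the cluster stack can see
crm ra list ocf heartbeat
# check that the agent script itself exists (path assumed to be the default)
ls -l /usr/lib/ocf/resource.d/heartbeat/IPaddr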
Here is my crm configuration:

Code:
node $id="cb856380-ef15-473c-bd2b-c884e7dd13af" dikobraz-ha2
node $id="f9acc262-21bd-c45e-9f55-e4dfcdbb9196" dikobraz-ha1
primitive virtual-ip ocf:heartbeat:IPaddr \
        params ip="192.168.254.36"
property $id="cib-bootstrap-options" \
        dc-version="1.0.11-6e010d6b0d49a6b929d17c0114e9d2d934dc8e04" \
        cluster-infrastructure="Heartbeat" \
        stonith-enabled="false" \
        last-lrm-refresh="1399902820" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"

Thank you for any help.
 
Good luck. I asked similar questions on Saso's blog and got an 'it works for me, I do not have time to debug'. If you look at the questions I asked, the issues were not the same as yours, but at the end I asked a very simple 'how do you do manual failover, given the crm package is not working as documented' and never got an answer despite repeated requests. So he is either not reading comments anymore, or he is ignoring me. Either way, it's not a viable option for me. I can get this working on FreeBSD, but apparently FreeBSD 10 has crappy NFS write performance. ZoL does also, for reasons I describe in ZoL issue https://github.com/zfsonlinux/zfs/issues/2373. So at the moment, I have very well supported cluster software for Linux (albeit with crappy write performance) or dead/incomplete/unsupported software for OpenSolaris derivatives. Fooey...
 