Recent content by huncut

  1. EonNas and High availability with ZFS

    Is there anyone who is using Infortrend EonNAS in a High Availability configuration? Their systems are based on OpenIndiana with ZFS, which is very interesting. Do you know how they implement active-active HA? Are they using RSF-1, as other ZFS-based systems do (like Nexenta), or do they have some...
  2. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Those disks are not bad for that price; we have been using them in many ZFS pools for about a year. Some of them were replaced because of failures, and their performance is worse than the WD RE4 (great disks, by the way). But now I would recommend the WD Red Pro (WD4001FFSX). They are quite new, but their specs...
  3. OpenIndiana/ napp-it + OpenSource Clustering/ High Availability

    Hello, I'm following this article about creating a ZFS cluster: http://zfs-create.blogspot.cz/2013/06/building-zfs-storage-appliance-part-1.html But I have a problem with the Pacemaker configuration. I'm still not able to get the IPaddr resource to start. Here is my crm status: Last updated...
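A common way to debug a Pacemaker resource that will not start is to run the resource agent by hand, outside the cluster manager, with verbose tracing. This is only a sketch: `ip_fs` is a placeholder for whatever the IPaddr primitive is actually named in the configuration, and `crm_resource --force-start` needs a reasonably recent Pacemaker:

```shell
# Show the resource's configured parameters (placeholder name: ip_fs).
crm configure show ip_fs

# Force-start the resource on this node with verbose agent output,
# which usually surfaces the actual error message from the IPaddr agent.
crm_resource --resource ip_fs --force-start -V
```

The verbose output typically pinpoints whether the agent is failing on the interface name, the netmask, or a missing binary, which a bare `crm status` does not show.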
  4. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hello, I'm using the arcstat.pl script to monitor ZFS ARC statistics (https://github.com/mharsch/arcstat), which takes its data from kstat. It is very interesting that on all of my storage servers there are, every 5 seconds, really large numbers of "total ARC accesses per second" (read column)...
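One way to isolate those periodic spikes is to filter arcstat's output for samples where the read column crosses a threshold. A minimal sketch; the sample data below is made up, and the column layout (whitespace-separated, `read` in column 2) is an assumption about the arcstat output format:

```shell
# Print the time and read columns for samples where reads exceed a
# threshold (100000 here). The here-document is hypothetical sample
# data; in practice, pipe arcstat.pl's output into the same filter.
awk 'NR > 1 && $2 > 100000 { print $1, $2 }' <<'EOF'
time read miss hit%
12:00:00 1200 14 98
12:00:05 2500000 20 99
12:00:10 1100 12 98
EOF
```

With live data something like `./arcstat.pl 5 | awk '$2 > 100000'` would do, though arcstat reprints its header line periodically, so the filter may need to skip non-numeric lines.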
  5. OmniOS and FibreChannel NPIV in target mode

    Is there anyone with working NPIV on any FC card in target mode on OmniOS?
  6. OmniOS and FibreChannel NPIV in target mode

    Hello, thank you for the reply. Yes, I have an NPIV-enabled switch, with NPIV support enabled on the active ports. I have no idea what is wrong and why I cannot create virtual ports in target mode. Do you have any other idea what to try? My emlxs.conf: console-notices=0; console-warnings=0...
  7. OmniOS and FibreChannel NPIV in target mode

    I'm trying to create NPIV ports on OmniOS on Emulex and QLogic cards in target mode, without success. I have these cards: Emulex LPe11000 and QLogic QLE2462. For the Emulex I'm using the emlxs driver, but if I enable NPIV in target mode, I see in my log "enable-npiv: Not supported in target...
  8. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hello, is there any simple way (i.e., other than sendmail) to configure OmniOS to send e-mails from the command line, for example with the mailx command? I used msmtp on Nexenta; is there anything similar on OmniOS? I need it for some script error-log notifications. Thank you all.
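For what it's worth, heirloom mailx (the mailx found on many illumos distributions) can talk to an SMTP relay directly, which sidesteps sendmail entirely. A sketch with placeholder host and addresses, assuming heirloom mailx and its `-S variable=value` option:

```shell
# Placeholders: mail.example.com (SMTP relay), root@example.com (recipient),
# storage@example.com (envelope sender). Assumes heirloom mailx, which
# understands the 'smtp' and 'from' variables.
echo "zpool scrub found errors" | mailx \
    -s "storage alert" \
    -S smtp=mail.example.com \
    -S from=storage@example.com \
    root@example.com
```

If the relay requires authentication or TLS, msmtp can also be built on illumos and dropped in as the transport instead.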
  9. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hmmm, they are using the zdb -mm <poolname> command for metaslab usage mappings. That's interesting. So maybe this will be the best way to find out how fragmented my pool is.
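The metaslab dump can be post-processed with awk into a quick free-space-per-metaslab listing. The here-document below is a hypothetical excerpt standing in for real `zdb -mm` output (the exact field layout can differ between versions); with a live pool you would pipe `zdb -mm <poolname>` into the same filter:

```shell
# Print each metaslab's number ($2) and its free space (last field)
# from zdb -mm style output. The sample lines are made up.
awk '/metaslab/ { print $2, $NF }' <<'EOF'
metaslab      0   offset            0   spacemap     39   free    3.02G
metaslab      1   offset   4000000000   spacemap     82   free     512M
EOF
```

Many small, mostly-full metaslabs with little free space left is the usual sign of a fragmented pool in this kind of listing.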
  10. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Thanks for the reply. I read that interesting article. I wonder how to make such nice metaslab tables, where I can see the amount of fragmentation as in that article :)
  11. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Thank you very much madrebel. Very useful scripts. 2013 Jul 2 14:35:10 storage-d2-a 1424 ms, 36 wMB 2 rMB 1897 wIops 275 rIops 23+1 dly+thr; dp_wrl 205 MB .. 207 MB; res_max: 206 MB; dp_thr: 207 2013 Jul 2 14:35:15 storage-d2-a 1143 ms, 33 wMB 1 rMB 2966 wIops 97 rIops 54+1 dly+thr...
  12. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I have performance problems with Nexenta. After one year of problem-free operation, I now have problems every day with heavy writes, which make the zpool slow. Nexenta is used only as an iSCSI target for virtual servers. I checked all ZVOLs used for iSCSI targets with the DTrace script zfsio.d...
  13. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hello, is there any way to monitor I/O on ZVOLs separately? Thank you.
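Since `zpool iostat -v` only breaks I/O down per vdev, not per dataset, one option is the DTrace `io` provider, which tags each I/O with the originating device's name; on illumos those names should include zvol block devices. A sketch (run as root; Ctrl-C prints the aggregation):

```shell
# Count I/O operations per device name. In the io provider, args[1]
# is a devinfo_t whose dev_statname identifies the device.
dtrace -n 'io:::start { @ops[args[1]->dev_statname] = count(); }'
```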
  14. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hmm, clearing the buffer does not help. And I'm not able to reproduce the problem on another server (I created a pool on Nexenta, imported it to OI, and after that removing the ZIL device was OK, no problem). So I'm not sure whether this is related to the system change. If anyone has some idea how to solve this...
  15. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Thanks _Gea for the tips, I will try that. It seems that this problem occurs only on pools originally created on Nexenta and later imported to OI. But even after a zpool upgrade, I still cannot remove the ZIL device. I will try the disk buffer, thanks.