Is there anyone using Infortrend EonNAS in a High Availability configuration? Their systems are based on OpenIndiana with ZFS, which is very interesting. Do you know how they implement active-active HA? Are they using RSF-1 like other ZFS-based systems (like Nexenta), or do they have some...
Those disks are not bad for that price; we have been using them in many ZFS pools for about a year. Some of them were replaced because of failure, and their performance is worse than the WD RE4 (great disks, by the way). But now I would recommend the WD Red Pro (WD4001FFSX). They are quite new, but their specs...
Hello,
I'm following this article about creating a ZFS cluster:
http://zfs-create.blogspot.cz/2013/06/building-zfs-storage-appliance-part-1.html
But I have a problem with the Pacemaker configuration. I'm still not able to get the IPaddr resource to start.
Here is my crm status output:
Last updated...
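In case it helps anyone following the same article: a minimal floating-IP resource definition through the crm shell usually looks like the sketch below. The resource name, IP address, and NIC name are placeholders I made up, and on a test setup without fencing the two property lines are often needed before any resource will start at all.

```shell
# Hedged sketch: minimal floating-IP resource via the crm shell.
# "clusterIP", the address, and the NIC name are placeholder values.
crm configure primitive clusterIP ocf:heartbeat:IPaddr \
    params ip=192.168.1.100 nic=e1000g0 \
    op monitor interval=30s

# Common gotchas on a fresh test cluster: with no STONITH device configured
# and no quorum, Pacemaker refuses to start resources unless told otherwise.
crm configure property stonith-enabled=false
crm configure property no-quorum-policy=ignore
```

If the resource still fails, `crm status` usually shows a failed-actions section with the agent's error string, which narrows it down.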
Hello,
I'm using the arcstat.pl script for monitoring ZFS ARC statistics (https://github.com/mharsch/arcstat), which uses data from kstat. Interestingly, on all of my storage servers I see, every 5 seconds, really large numbers for "total ARC accesses per second" (the read column)...
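For what it's worth, the read column is just the per-interval delta of ARC hits plus misses taken from kstat, so the numbers can be sanity-checked by hand. A small sketch of that arithmetic with made-up counter values (on OmniOS the real counters come from `kstat -p zfs:0:arcstats:hits` and `kstat -p zfs:0:arcstats:misses`):

```shell
# Sketch of the arithmetic behind arcstat's "read" column, using example
# counter values; real values come from the zfs:0:arcstats kstats.
hits1=1000;  miss1=50      # first sample
hits2=4000;  miss2=80      # second sample, taken 5 seconds later
interval=5
reads_per_sec=$(( ((hits2 + miss2) - (hits1 + miss1)) / interval ))
echo "$reads_per_sec"      # prints 606
```

If the raw kstat counters really do jump by huge amounts between samples, the script is reporting them faithfully and the question becomes what workload is hammering the ARC.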
Hello,
thank you for the reply. Yes, I have an NPIV-enabled switch, with NPIV support enabled on the active ports.
I have no idea what is wrong or why I cannot create virtual ports in target mode. Do you have any other ideas what to try?
My emlxs.conf
console-notices=0;
console-warnings=0...
I'm trying to create NPIV ports on OmniOS on Emulex and QLogic cards in target mode, without success.
I have these cards:
Emulex LPe11000 and QLogic QLE2462.
For the Emulex I'm using the emlxs driver, but if I enable NPIV in target mode, I can see in my log "enable-npiv: Not supported in target...
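For anyone debugging something similar: it may be worth first checking what the driver actually reports for the physical ports. The first command below is standard on illumos; the fcadm invocation is my assumption about how NPIV ports are normally created there (inherited from Solaris), and the WWNs are placeholders.

```shell
# List the physical FC ports; the output includes each port's WWN and state,
# which you need for any NPIV work.
fcinfo hba-port

# Assumed illumos/Solaris-style NPIV creation via fcadm; the port and node
# WWNs here are placeholders (fcadm can also auto-generate them).
fcadm create-npiv-port -p 210000e08b000001 -n 200000e08b000001 210000e08b909090
```

Note this is for initiator-mode NPIV; the "Not supported in target" log line suggests the emlxs driver simply refuses the combination of NPIV and target mode.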
Hello,
is there any simple way (simple, meaning other than sendmail) to configure OmniOS to send e-mails from the command line, for example with the mailx command? I used msmtp on Nexenta; is there anything similar on OmniOS?
I need it for error-log notifications from some scripts.
Thank you all.
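In case anyone lands here with the same question: assuming you can get msmtp onto OmniOS (e.g. built from pkgsrc or a third-party repo), the same setup from Nexenta carries over. A minimal sketch; the hostname and addresses are placeholders:

```shell
# Minimal msmtp setup sketch; smtp.example.com and the addresses are
# placeholder values for illustration.
cat > ~/.msmtprc <<'EOF'
defaults
auth off
account default
host smtp.example.com
from nas@example.com
EOF
chmod 600 ~/.msmtprc   # msmtp refuses a world-readable config

# Send a script-error notification directly, no local MTA needed:
printf 'Subject: zpool error on nas01\n\nsee /var/log/myscript.log\n' \
    | msmtp admin@example.com
</imports>
```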
Hmmm, they are using the zdb -mm <poolname> command for metaslab usage mappings. That's interesting. So maybe this will be the best way to find out how fragmented my pool is.
Thanks for the reply. I read that interesting article.
I wonder how to make such nice metaslab tables, where I can see the amount of fragmentation as in that article :)
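If it helps, tables like that can be approximated by post-processing zdb -mm output with awk. The sketch below runs against a captured two-line sample, because the exact column layout varies between zdb versions; on a live system you would pipe `zdb -mm <poolname>` into the same awk instead of the heredoc.

```shell
# Summarize per-metaslab free space from (sample) zdb -mm output.
# The heredoc is an abbreviated, made-up example of the per-metaslab lines;
# real output has more columns plus per-segment detail.
summary="$(awk '/metaslab/ { printf "metaslab %s free %s\n", $2, $NF }' <<'EOF'
        metaslab      0   offset            0   spacemap     39   free    12.1G
        metaslab      1   offset    400000000   spacemap     55   free     3.4G
EOF
)"
printf '%s\n' "$summary"
# prints:
# metaslab 0 free 12.1G
# metaslab 1 free 3.4G
```

Sorting that by the free column (or computing free as a percentage of metaslab size) gets you close to the fragmentation tables in the article.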
I have performance problems with Nexenta. After a year of problem-free operation, I now have problems every day with heavy writes, which make the zpool slow.
Nexenta is used only as an iSCSI target for virtual servers. I checked all ZVOLs used for iSCSI targets with the DTrace script zfsio.d...
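Not a fix, but while narrowing it down it can be useful to watch per-vdev write load alongside the DTrace data, since heavily uneven writes across vdevs often point at a nearly-full or degraded vdev. The pool name below is a placeholder:

```shell
# Per-vdev bandwidth and IOPS, refreshed every 5 seconds; "tank" is a
# placeholder pool name. Watch whether write bandwidth is spread evenly
# across the vdevs during the slow periods.
zpool iostat -v tank 5
```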
Hmm, clearing the buffer does not help.
And I'm not able to reproduce the problem on another server (I created a pool on Nexenta, imported it to OI, and after that removing the ZIL device was OK, no problem). So I'm not sure if it is related to the system change.
If anyone has any idea how to solve this...
Thanks _Gea for the tips, I will try that.
It seems that this problem occurs only on pools originally created on Nexenta and later imported to OI. But even after a zpool upgrade, I still cannot remove the ZIL device.
I will try the disk buffer, thanks.