Is there anyone who is using Infortrend EonNAS in a high-availability configuration? Their systems are based on OpenIndiana with ZFS, which is very interesting. Do you know how they implement active-active HA? Are they using RSF-1 like other ZFS-based systems (e.g. Nexenta), or do they have some...
Those disks are not bad for that price; we have been using them in many ZFS pools for about a year. Some of them were replaced because of failures, and their performance is worse than the WD RE4 (great disks, by the way). But now I would recommend the WD Red Pro (WD4001FFSX). They are quite new, but their specs...
Hello,
I'm following this article about building a ZFS cluster:
http://zfs-create.blogspot.cz/2013/06/building-zfs-storage-appliance-part-1.html
But I have a problem with the Pacemaker configuration. I'm still not able to get the IPaddr resource to start.
Here is my crm status output:
Last updated...
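For reference, the kind of primitive I'm trying to define looks roughly like this (the IP address, netmask and NIC name are placeholders, not my real values):

crm configure primitive failover-ip ocf:heartbeat:IPaddr \
  params ip=192.168.1.100 cidr_netmask=24 nic=e1000g0 \
  op monitor interval=10s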
Hello,
I'm using the arcstat.pl script to monitor ZFS ARC statistics (https://github.com/mharsch/arcstat), which takes its data from kstat. It is very interesting that on all of my storage servers I see, every 5 seconds, really large numbers of "total ARC accesses per second" (read column)...
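For reference, the raw counters behind that column can be read straight from kstat (a minimal check using the standard illumos arcstats names):

kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
# arcstat's read column is the per-interval delta of hits + misses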
Hello,
thank you for the reply. Yes, I have an NPIV-enabled switch, with NPIV support enabled on the active ports.
I have no idea what is wrong or why I cannot create virtual ports in target mode. Do you have any other ideas what to try?
My emlxs.conf:
console-notices=0;
console-warnings=0...
I'm trying, without success, to create NPIV ports in target mode on OmniOS, on both Emulex and QLogic cards.
I have these cards:
Emulex LPe11000 and QLogic QLE2462.
For the Emulex card I'm using the emlxs driver, but if I enable NPIV in target mode, I can see in my log: "enable-npiv: Not supported in target...
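For context, in initiator mode virtual ports can be created with fcadm like this (the WWNs below are made up), but I need the ports in target mode:

fcinfo hba-port                 # find the physical port WWN
fcadm create-npiv-port -p 2100001b32000001 -n 2000001b32000001 2100001b32909090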
Hello,
is there any simple way (simple, meaning other than sendmail) to configure OmniOS to send e-mails from the command line, for example with the mailx command? I used msmtp on Nexenta; is there anything similar on OmniOS?
I need it for error-log notifications from some scripts.
Thank you all.
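For context, on Nexenta all msmtp needed was a small ~/.msmtprc, roughly like this (host and addresses are just examples):

account default
host smtp.example.com
from storage@example.com
auth off

printf "Subject: zpool error\n\nsee log\n" | msmtp admin@example.com

I'm looking for something equally simple on OmniOS.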
Hmmm, they are using the zdb -mm <poolname> command for metaslab usage maps. That's interesting. So maybe this will be the best way to find out how fragmented my pool is.
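A minimal way to try it (the exact output format differs between illumos builds, so the grep pattern is only a guess):

zdb -mm tank | egrep -i 'metaslab|segments'
# many metaslabs with little free space and high segment counts would suggest fragmentation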
Thanks for the reply. I read that interesting article.
I wonder how to make such nice metaslab tables, where I can see the amount of fragmentation as in that article :)
I have performance problems with Nexenta. After one year of problem-free operation, I now have daily problems with heavy writes, which make the zpool slow.
Nexenta is used only as an iSCSI target for virtual servers. I checked all ZVOLs used for iSCSI targets with the DTrace script zfsio.d...
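In the meantime I'm watching the pool during the slow periods with (pool name and interval are examples):

zpool iostat -v tank 5     # per-vdev bandwidth and IOPS every 5 seconds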
Hmm, clearing the buffer does not help.
And I'm not able to reproduce the problem on another server (I created a pool on Nexenta, imported it to OI, and after that the ZIL device removal was OK, no problem). So I'm not sure whether it is related to the system change.
If anyone has some idea how to solve this...
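For completeness, the repro attempt was essentially this (device names are placeholders):

# on Nexenta
zpool create testpool c1t1d0 log c1t2d0
zpool export testpool
# on OpenIndiana
zpool import testpool
zpool remove testpool c1t2d0    # worked fine here, unlike on the original pools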
Thanks _Gea for the tips, I will try that.
It seems that this problem occurs only on pools originally created on Nexenta and later imported to OI. But even after I ran zpool upgrade, I still cannot remove the ZIL device.
I will try the disk buffer, thanks.
Thanks for the reply. It's only a single drive.
I have this problem on two storage servers with OpenIndiana and different hardware, but both with LSI host bus adapters.
On a third server with Nexenta (also with LSI) there was no problem with ZIL removal.
The only thing that I can do now with this device is...
I have a problem with ZIL device removal. I executed:
zpool remove tank ZILDEVICE
The command completed without errors, but the device is still there. I have OpenIndiana 151a7. Do you have any idea how I can really remove that device?
It seems that I'm not the only one with this issue, but after some...
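For anyone hitting the same thing: log device removal requires zpool version 19 or newer, so a quick sanity check (pool name as above):

zpool get version tank
zpool upgrade -v           # lists the versions this system supports
zpool status tank          # the log device still shows up here for me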
I made some other SSD tests for ZIL, following your benchmark. Maybe it will be interesting for others:
SLC Intel 311 20GB sync=disabled / SLC Intel 311 20GB sync=always
MLC Intel 330 60GB sync=disabled...
Yes, you are right, the Intel 311 seems to be slow for ZIL. I just ran the benchmark as you described.
Sequential write with sync=disabled -> 90 MB/s (over 1 Gb/s iSCSI Ethernet)
Sequential write with sync=always -> 41 MB/s (over 1 Gb/s iSCSI Ethernet)
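The test itself was basically a big sequential dd from the iSCSI client with the sync property toggled on the storage side (dataset name, mount point and sizes are just examples):

zfs set sync=always tank/iscsivol       # or sync=disabled for the other run
dd if=/dev/zero of=/mnt/iscsi/testfile bs=1M count=4096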
OK, I will try another SSD. Thanks.
Sharing a large...
Interesting benchmark document. Thanks for sharing.
We are using the SLC Intel 311 (20GB) for ZIL. I'm wondering whether some large MLC drive could be better, for example a 240GB Intel 330.
From our tests we know that a full SSD can be (because of its internal wear-leveling algorithm) as slow as...
So the only solution for storage pools under heavy load is to do some defragmentation from time to time (copy the data to another pool)? And do you think that large svc_t numbers in iostat are a good indicator of a fragmented pool?
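The numbers I mean come from plain iostat, e.g.:

iostat -x 5        # svc_t = average service time per disk, in milliseconds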
Thanks for 2), that's what I thought.
Hello,
I have two questions about the ZFS filesystem, and I would be very grateful if you could help me with the answers.
1) Do you have any experience with ZFS fragmentation? We had one storage server with five mirrors in a pool (10 disks + SSD ZIL), and after around one year this pool became very slow...