hello,
I've been playing around with ZFS for several weeks (using OpenIndiana and OmniOS to test) and I've run into a very strange issue.
If I add a ZIL device (a Hitachi 400GB SLC SSD, 4Gb FC) to my pool and set sync=always, the ZIL device gets (depending on the block size) 30-90 MBit/s of throughput, BUT IOPS stay between 2200 and 2500. The SSD can handle at least 8k write IOPS, so I decided to add a second device to the ZIL - now each of the two devices does just 1100-1250 IOPS (so the total is still not over 2500). Because that seemed strange, I added 2 more devices - and all 4 together are still at 2500.
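For reference, this is roughly how the setup above was configured (pool and device names here are placeholders, not the actual ones - adjust for your own system):

```shell
# add an SLC SSD as a dedicated log (ZIL/SLOG) device - device name is hypothetical
zpool add tank log c4t0d0

# force every write through the ZIL, even async ones
zfs set sync=always tank

# adding further log vdevs stripes ZIL allocations across them
zpool add tank log c4t1d0

# watch per-vdev throughput and IOPS while the workload runs
zpool iostat -v tank 1
```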
I removed the ZIL devices and created a pool from a single disk (not mirrored, just to test a standalone disk) - again only 2500 IOPS. I added 5 more disks to that pool (again as standalone vdevs) and I'm back to the same as above: I can't push more than 2500 IOPS with sync=always. Throughput ranges from 30 MBit/s up to 800 MBit/s (depending on block size), so I'm sure it's not the disks themselves.
I also tried the same with SATA-based SSDs - and I see pretty much the same IOPS behaviour.
If I switch to sync=standard, I get up to 40k IOPS, but I'm sure that's only because RAM is used to aggregate the writes into larger blocks.
I tried several kernel parameters, but nothing helped.
Any ideas? To me it looks like an IOPS limit per vdev - but I can't figure out where. The server has 96GB RAM and dual hexa-core Intel Xeons, so the hardware is certainly not the bottleneck either.
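One way to narrow this down is to measure how long a single synchronous write takes: with sync=always, a single writer can never exceed 1/latency IOPS, regardless of how many vdevs are behind the pool (2500 IOPS would correspond to ~400µs per round trip). Below is a minimal, generic sketch using O_SYNC writes from a single thread - it is not ZFS-specific tooling, just an illustration of the measurement; point it at a file on the pool in question:

```python
import os
import time
import tempfile

def measure_sync_write_iops(path, block_size=4096, duration=2.0):
    """Issue O_SYNC writes in a tight loop and report achieved IOPS.

    Each write() only returns once the data is stable on the device,
    so a single-threaded loop like this is bounded by per-write
    latency - which is roughly what the ZIL sees from one writer
    when sync=always is set.
    """
    buf = b"\0" * block_size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    try:
        count = 0
        start = time.monotonic()
        while time.monotonic() - start < duration:
            os.pwrite(fd, buf, 0)  # rewrite the same block each iteration
            count += 1
        elapsed = time.monotonic() - start
        return count / elapsed
    finally:
        os.close(fd)

if __name__ == "__main__":
    # replace with a path on the pool under test; a temp file is used here
    with tempfile.NamedTemporaryFile() as tmp:
        iops = measure_sync_write_iops(tmp.name)
        print(f"{iops:.0f} sync write IOPS "
              f"(~{1e6 / iops:.0f} us per write)")
```

If the per-write latency matches the observed ceiling, the limit is the latency of a single sync round trip rather than an aggregate vdev throughput cap - adding more queue depth (parallel writers) rather than more devices would then be the thing to test.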