Hello everyone,
I have a Solaris 11 VM on ESXi 5.1 with a Supermicro USAS2 (LSI2008) controller passed through. Attached to this controller are 2x 2.5 TB WD (WD25EZRX), 2x 500 GB WD, and 1x Crucial M4 128 GB SSD.
For over a week now I've been trying to track down all the bottlenecks. To make sure the disks aren't the limiting factor, I added a second Crucial M4 and created a very fast SSD mirror.
When I create an iSCSI LUN on this zpool and mount it from a PHYSICAL Win7 client, I can easily saturate the GBit link. Read/write performance is between 100 MB/s and 120 MB/s, just as expected.
But if I create an NFS or SMB share on the same zpool, the maximum throughput is ~20 MB/s. I've been googling and hunting for the cause for two days, with no success. You are my last chance before I give up on this!
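For reference, the shares were created with the standard ZFS share properties, roughly like this (the pool/dataset names are placeholders, not my actual layout, and the exact share syntax varies between Solaris releases):

```shell
# Hypothetical dataset names -- substitute the real pool/dataset.
# NFS share on the SSD mirror:
zfs set sharenfs=on ssdpool/share

# SMB share via the Solaris CIFS service:
zfs set sharesmb=on ssdpool/share

# Also worth checking: synchronous-write behaviour, since NFS
# commits are typically synchronous while iSCSI writes are often
# cached, which could explain the NFS/iSCSI gap:
zfs get sync ssdpool/share
```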
The VM uses vmxnet3 as its NIC, but that doesn't make a difference: I also tried e1000, and raw network performance to a physical client (measured with iperf) was the same, between 950 Mbit/s and 1000 Mbit/s.
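The iperf measurement was a plain TCP throughput test between the VM and the physical client, along these lines (the IP address is a placeholder):

```shell
# On the Solaris VM (server side):
iperf -s

# On the physical Win7 client (substitute the VM's real IP):
iperf -c 192.168.1.10 -t 30
```

So the network path itself delivers near wire speed; the slowdown only appears at the NFS/SMB layer.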
Many, many thanks in advance!