[ZFS / Solaris 11] Slow NFS / CIFS compared to iSCSI

huul1

Hello everyone,

I have a Solaris 11 VM on an ESXi 5.1 host with a Supermicro USAS2 (LSI 2008) controller passed through. Attached to this controller are 2x 2.5 TB WD25EZRX, 2x 500 GB WD drives and 1x Crucial M4 128 GB SSD.

For over a week now I've been trying to track down all the bottlenecks. To make sure the disks aren't the limit, I added a second Crucial M4 and created a very fast SSD mirror.

When I create an iSCSI LUN on this zpool and mount it from a physical Win7 client, I can easily saturate the GBit link. Read/write performance is between 100 MB/s and 120 MB/s, just as expected.
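
(For reference, a COMSTAR iSCSI LUN on a zpool is normally created along these lines; volume name and size are just placeholders, and my exact commands may have differed slightly:

zfs create -V 50g ssdpool/testlun
svcadm enable -r svc:/network/iscsi/target:default
stmfadm create-lu /dev/zvol/rdsk/ssdpool/testlun
stmfadm add-view <LU-GUID-printed-by-create-lu>
itadm create-target

The Win7 box then connects with the built-in Microsoft iSCSI initiator.)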

But if I create an NFS or SMB share on the same zpool, the maximum throughput is ~20 MB/s. I've been googling and trying to find the cause for two days with no success. You are my last chance before I give up on this!
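
(Also for reference, SMB/NFS sharing on a Solaris 11 dataset is normally enabled via the ZFS share properties, roughly like this; 'ssdpool/share' is a placeholder and my exact share options may have differed:

zfs create ssdpool/share
zfs set sharesmb=on ssdpool/share
zfs set sharenfs=on ssdpool/share
)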

The VM is using vmxnet3 as its NIC, but that doesn't make a difference. I also tried e1000, and the throughput to a physical client (measured with iperf) was the same: between 950 Mbit/s and 1000 Mbit/s.
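
(The iperf run was just the basic TCP test, something like 'iperf -s' on the Solaris VM and 'iperf -c <solaris-vm-ip> -t 30' on the Win7 client.)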

Many, many thanks in advance!
 
It may be the sync (ZIL) aspect. That would certainly limit your writes.

To test it, run 'zfs set sync=disabled pool/sharename'; that disables the ZIL for that dataset. (You can confirm with 'zfs get sync' to see the sync status of everything.)

Note: disabling sync is dangerous, as you can lose data if you lose power or the box crashes, so disable it, do your testing, and then enable it again.
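
In other words, something like this (substitute your actual pool/dataset name):

zfs get sync ssdpool/share
zfs set sync=disabled ssdpool/share
   (re-run the NFS/SMB tests)
zfs set sync=standard ssdpool/share

'sync=standard' is the default, so that puts things back to normal.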

If that speeds up your tests, then you have your answer.
 
Hi,

thank you for your answer. Unfortunately, I had already tried that, with no difference in speed. Since the pool is just a mirror of two SSDs, that is exactly what I would expect.

Also: read and write values are identical, always between 18 and 21 MB/s.
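
(If anyone wants to reproduce this: a quick local write test on the dataset, e.g. 'dd if=/dev/zero of=/ssdpool/share/ddtest bs=1024k count=8192', is a useful sanity check that the pool itself isn't the limit; path and size are arbitrary.)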

I just tried adding this SSD pool as both an NFS and an iSCSI datastore to an ESXi host. Then I created a Windows 7 VM on each datastore and benchmarked the disk with CrystalDiskMark.

Result: ~250 MB/s read and 200 MB/s write for iSCSI, 60 MB/s read/write for NFS.
Better than NFS to the Windows client, but still far behind what iSCSI shows is possible.
Note: the ESXi host was connected through 2x 1 GBit Intel NICs. Unfortunately, the Windows 7 client (my desktop) has only one NIC, and I don't have a spare PCIe NIC to test whether performance would change with 2x 1 GBit on the Win7 client side.
 
Try with one NIC at the host.
My suspicion is some issue between CIFS and NIC teaming.
 
Actually, the Win7 client is connected via one NIC only. The teamed NICs are dedicated to communication between the two ESXi hosts.

But to be sure, I disabled every NIC in that host except one. Still the same.

System Specs (In case it matters):

2x AMD Opteron 4226 (12x 2.7 GHz)
64 GB ECC RAM
 
How much vRAM have you given the Solaris VM? The more the better. Do you have any idea how big your "working set" is? For example, a Win7 VM that's 40 GB on disk could need as much as 40 GB to be cached in full; WinXP is merely 8 GB.
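
You can check how much of that RAM the ARC is actually using on the Solaris box with something like 'kstat -p zfs:0:arcstats:size' (and 'kstat -p zfs:0:arcstats:c_max' for the configured ceiling).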
 
What physical NICs are in your ESXi and Win7 boxes?
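
On the ESXi side you can list them with something like 'esxcli network nic list' from the host shell; on the Win7 box, Device Manager shows the adapter model.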

I've had major issues with crappy Realteks that seriously hurt certain protocols but seem fine with others. (In my case I had major issues with CIFS, but iSCSI seemed to be OK.)
 
Hmm, I can't seem to edit my own post, so I have to add it here: now that I have PCIe Intel NICs I have no problems, other than my own stupidity when setting up VLANs!
 