Need help: slow performance with DS712+ and VMware

FMZ
n00b · Joined Oct 8, 2009 · Messages: 14
Need help with my home lab, which I've been trying to get going for the past year or so but which has mostly been collecting dust. I've searched everywhere online but can't figure out how to fix this issue.

I have 2 identical machines hooked up to a Cisco SG200 switch. Every port on the switch is connected at 1 Gb/Full. Currently only one ESX host is powered on, running ESXi 5.0.0 off a USB stick. Shared storage is a Synology DS712+ running RAID 1 (1 TB).

I can get roughly 80-90 MB/s copying a ~3 GB .iso file from my Windows workstation (via Firefox) to a shared folder on the NAS, which takes roughly 20-30 seconds. Uploading the same file from ESX is extremely slow: copying it to the iSCSI datastore was estimated at roughly 30-40 minutes, and pointing ESX at an NFS share instead estimated roughly 2.5 minutes for the same file.
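For reference, those estimates line up with simple throughput arithmetic. This is just a sketch; the ~3 GB file size is from the post, and the per-path rates below are assumptions chosen to match the quoted times:

```python
# Rough copy-time estimates for a ~3 GB ISO at a few sustained throughputs.
FILE_MB = 3 * 1024  # ~3 GB expressed in MB

def copy_time_seconds(rate_mb_s: float) -> float:
    """Seconds to move FILE_MB megabytes at a sustained rate in MB/s."""
    return FILE_MB / rate_mb_s

# Workstation -> NAS at ~85 MB/s: about half a minute.
print(round(copy_time_seconds(85)))          # ~36 s
# ESX -> iSCSI datastore at ~1.5 MB/s: over half an hour.
print(round(copy_time_seconds(1.5) / 60))    # ~34 min
# ESX -> NFS at ~20 MB/s: about 2.5 minutes.
print(round(copy_time_seconds(20) / 60, 1))  # ~2.6 min
```

In other words, the 30-40 minute iSCSI estimate implies the datastore path is moving data at only a couple of MB/s.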

Now, I don't care about copying files per se, but while I was installing a VM onto the iSCSI datastore on the NAS, extracting the files was so painfully slow that I just powered off the VM.

The DS712+ is currently running in link aggregation (LA) mode, which I really don't think helps anything. I have taken it off LA mode and changed jumbo frame sizes, but still can't figure out why it is slow from ESX.

Currently, jumbo frames are disabled and LA is enabled. vmnic2 is connected to vSwitch1.

I really wanted to play around with iSCSI vs. NFS; that's why I created iSCSI targets for ESX.

Any pointers would be greatly appreciated.
 
How are you copying it? Using the datastore browser in the client? If so...there's your problem.
 
There are a few things ESX doesn't do well - NFC copies are one of them (anything using the datastore browser/etc). Unfortunately, SCP isn't much better (dropbear gets no resources).

The real question is how the vmkernel datamovers work on it - which should be a lot better.
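For what it's worth, you can exercise the vmkernel datamover directly from the ESXi shell with vmkfstools instead of going through the datastore browser. The datastore and disk paths below are placeholders; substitute your own:

```shell
# Clone a VMDK datastore-to-datastore using the vmkernel datamover
# rather than the slow NFC path the datastore browser uses.
# "-d thin" keeps the destination thin-provisioned; paths are examples.
vmkfstools -i /vmfs/volumes/nfs-ds/vm1/vm1.vmdk \
           -d thin \
           /vmfs/volumes/iscsi-ds/vm1/vm1-copy.vmdk
```

If that clone runs much faster than a datastore-browser upload of the same data, the storage path is fine and it's the NFC copy path that's slow.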
 
You're running the latest DSM, right?

Yes, I believe I'm using the latest DSM: 4.1-2668.

I would sniff it with Wireshark and see if anything looks wrong.

Not sure how to do that.
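If it helps, ESXi ships its own tcpdump build (tcpdump-uw) that you can run from the host shell, then copy the capture off and open it in Wireshark. The vmkernel interface name and output path below are just examples:

```shell
# Capture iSCSI traffic (TCP 3260) on the storage vmkernel interface.
# Stop with Ctrl-C, then copy /tmp/iscsi.pcap off the host (e.g. via the
# datastore browser or scp) and open it in Wireshark to look for
# retransmissions, window-size collapses, or fragmentation.
tcpdump-uw -i vmk0 -w /tmp/iscsi.pcap port 3260
```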


Yes, that's what I thought about the datastore browser myself. But installing VMs is slow as well. Granted, I'm using 7200 RPM SATA drives, but the performance shouldn't be this bad.
I can monitor the CPU / disk / network usage and they all stay pretty low.

If I copy the same files using DSM or Windows, it's pretty fast.
 
under "Inventory" (select your host) --> "Configuration" tab --> "Storage" --> "(your iscsi datastore)", does it say "Hardware Acceleration: Supported" in the bottom "Datastore Details" window?
 
Yes, Hardware Acceleration shows as Supported.
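You can also confirm VAAI support per device from the ESXi 5.x shell; the device identifiers in the output will be specific to your box:

```shell
# List hardware acceleration (VAAI) primitive support per storage device:
# ATS, Clone, Zero, and Delete status for each LUN.
esxcli storage core device vaai status get
```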
 
Try it on an NFS datastore. I've seen odd iSCSI issues at times between vSphere and Synology. Make sure you have the LATEST DSM 4.1; there was a bad vSphere 5.1 performance bug in the early 4.1 releases.
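Mounting an NFS export from the Synology as a datastore for comparison is a one-liner from the host shell. The server IP, export path, and datastore label below are placeholders:

```shell
# Mount the Synology NFS export as a datastore named "syn-nfs".
esxcfg-nas -a -o 192.168.1.10 -s /volume1/nfs syn-nfs
# List mounted NFS datastores to confirm.
esxcfg-nas -l
```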
 
Performance is pretty horrible with NFS as well. Monitoring the NAS, I can see network throughput sitting around 2 MB/s - 9 MB/s.

I am going to try something different: I'll swap out the network card itself and see if that makes a difference.
 
It makes no difference whether jumbo frames are on or off.

I even tried a totally different vmnic for storage. I am now going to download 5.1 and try that.
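One thing worth checking before moving on: jumbo frames only help if the MTU matches end to end across the vSwitch, the vmkernel port, the physical switch, and the NAS; a mismatch can cause exactly this kind of throughput collapse. On ESXi 5.x the host side can be set and verified roughly like this (vSwitch1 is from the thread; vmk1 and the NAS IP are assumptions):

```shell
# MTU must match end to end; set both the vSwitch and the storage
# vmkernel port to 9000, or leave everything at the default 1500.
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000

# Verify with a non-fragmenting ping to the NAS: 8972 bytes of payload
# plus 28 bytes of IP/ICMP headers fills a 9000-byte frame exactly.
vmkping -d -s 8972 192.168.1.10
```

If the vmkping fails with jumbo enabled, something in the path is still at 1500.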
 
From the small amount of testing I have done, it seems like injecting drivers for the Intel 82579LM adapter causes slowness on all my NICs. Before I injected those drivers, NFS was transferring data at about 30-40 MB/s.

http://hardforum.com/showthread.php?t=1607992&page=1

I am reinstalling ESXi 5.1, this time without injecting the custom drivers, and will see how it works.
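For anyone who wants to back out injected NIC drivers without a full reinstall, the VIB tooling can do it from the host shell. The net-e1000e package name below is an assumption; use whatever name actually shows up in the list on your host:

```shell
# Find the injected driver package among installed VIBs...
esxcli software vib list | grep -i e1000
# ...remove it by the exact name reported above (net-e1000e is an example),
# then reboot so the stock driver loads.
esxcli software vib remove -n net-e1000e
reboot
```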
 
OK, much better performance on both iSCSI and NFS.

So if anybody is experiencing slow performance and is using custom drivers for an unsupported Intel NIC, please remove the custom drivers and retry.
 