Slow Transfer to ESXi Datastore

Oswald_ · Weaksauce · Joined Apr 21, 2005 · 76 messages
I have enabled the terminal on my ESXi server and set up SSH access. The ESXi server is attached to my gigabit network. When I transferred a VM to the ESXi datastore using scp, my transfer rate was HORRENDOUS ... we're talking 800 kbps max.

So, I decided to try enabling FTP, thinking that the crypto overhead of scp might be the problem. I'm currently transferring at a new rate of 1-2 Mbps using FTP. What is causing the slowness here? Every other transfer on my gigabit network is flawless and FAST; why the slowdown to ESXi?
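Before blaming the hardware, it's worth putting a number on what 800 kbps actually means for a VM-sized file, and checking whether scp's cipher is the bottleneck. A quick sketch (the 8 GB size, host name, and datastore path are hypothetical placeholders, and arcfour availability depends on the SSH build on both ends):

```shell
# At 800 kbit/s, a hypothetical 8 GB VMDK would take roughly:
awk 'BEGIN { printf "%.1f hours\n", 8e9*8 / 800e3 / 3600 }'   # prints "22.2 hours"
# To rule out cipher overhead before abandoning scp, try a cheaper
# cipher plus compression (standard OpenSSH scp flags; host name and
# datastore path below are placeholders):
#   scp -c arcfour -C bigvm.vmdk root@esxi-host:/vmfs/volumes/datastore1/bigvm/
```

If the cheaper cipher barely moves the needle (and FTP only manages 1-2 Mbps anyway), the bottleneck is probably not encryption.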
 
What type of datastore is it?
What is the link speed of the datastore device (assuming it's network attached)?
Are you using your current workstation as an intermediary in the SCP/FTP operation?
What else is on your network?
Are the ESXi box and its datastore device (for the storage traffic) on a separate physical switch, or a VLAN?
Are the ESXi NICs on the I/O HCL?
Do they have a TOE onboard?
What is your switch?
Have you tried different physical switch ports?
Have you tested the NICs as good in another machine?
Have you tested the cable?
What are your ping times like?
Are the datastore drives busy doing other things, possibly causing a lot of head seek and significant slowdown?
 
So, not a valid test.

The busybox is massively bottlenecked, it is guaranteed to perform like total unadulterated crap. :) Use the datastore browser to upload through the vmkernel stack.
 
what type of datastore is it?
The datastore is hosted on an iSCSI server with a gigabit NIC.
what is the link speed connection on the datastore device (assuming network attached)?
Answered above, gigabit
Are you using your current workstation as an intermediary in the SCP/FTP operation?
The VMs are being transferred from an external hard drive attached to the workstation and uploaded via FTP.
what else is on your network?
Kinda confused about what you're looking for here ...
is the ESXi box and it's datastore device (for the storage traffic) on a separate physical switch, or a VLAN?
The ESXi box and the iSCSI server are on the same network, which is separate from the VM/client network.
are the ESXi NICs on the I/O HCL?
Not sure about the NICs, but the server is a PowerEdge 1900.
do they have a ToE onboard?
No
what is your switch?
ProCurve 2810
have you tried different physical switch ports?
No
have you tested the nics as good in another machine?
Really? Physically pull the NICs from the server? Nope.
have you tested the cable?
No?
what are your ping times like?
bytes=32 time<1ms TTL=64
are the datastore drives busy doing other things, possibly causing a lot of head-seek, and significant slowdown?
The iSCSI datastore these VMs are being transferred to is not in use at this time.
 
So, not a valid test.

The busybox is massively bottlenecked, it is guaranteed to perform like total unadulterated crap. :) Use the datastore browser to upload through the vmkernel stack.

I'll give this a go and post back the results.
 
Buffalo TeraStations are very much not supported. How's your actual in-VM performance? Try a network copy in-VM.
 
I'm also noticing slow transfers to the local datastore of the server. Out of curiosity I tried uploading the VM to the local datastore, and I was getting about 5-10 Mbps.

Would this have anything to do with the VMs running while I'm transferring data?
 
Yes and no - as long as you're going through the busybox, it's going to be horribly slow. The ESXi busybox is a teeny VM with almost no resources. The only real way to test is in the VMs - what kind of performance do you get copying something from a network store to a VM's hard drive?
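A rough way to put a number on that in-guest copy test, sketched for a Linux shell (the paths are placeholders - point the source at a file on the network store and the destination at the VM's own disk; on a Windows guest the same idea works with a timed robocopy):

```shell
# Rough in-guest copy benchmark: write a 100 MB file, copy it, and
# report MB/s. Paths are placeholders -- use a network-share source
# and a local-disk destination for the real test.
SRC=/tmp/testblob
DST=/tmp/testblob.copy
dd if=/dev/zero of="$SRC" bs=1M count=100 2>/dev/null
start=$(date +%s.%N)
cp "$SRC" "$DST"
end=$(date +%s.%N)
awk -v s="$start" -v e="$end" 'BEGIN { printf "%.0f MB/s\n", 100 / (e - s) }'
rm -f "$SRC" "$DST"
```

Anything near wire speed here, while busybox uploads crawl at a few Mbps, points the finger squarely at the management stack rather than the network or storage.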
 
I just ran iperf on the VM (the VM was acting as the server) this is the output from the VM:

Code:
C:\>iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1872] local 192.168.4.216 port 5001 connected with 192.168.4.11 port 2605
[ ID] Interval       Transfer     Bandwidth
[1872]  0.0- 9.7 sec   343 MBytes   297 Mbits/sec

And here is the reverse:
Code:
C:\>iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1856] local 192.168.4.11 port 5001 connected with 192.168.4.216 port 1111
[ ID] Interval       Transfer     Bandwidth
[1856]  0.0-10.1 sec   378 MBytes   313 Mbits/sec

Transfers look fine. I'm just more and more confused by this.
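For what it's worth, ~300 Mbit/s is about what an 8 KB TCP window allows: sustained throughput is bounded by window size divided by round-trip time. A sketch of the arithmetic (the 0.2 ms RTT is an assumed figure consistent with the sub-1 ms pings above), plus a re-run with a larger window to see whether the link itself is the limit (`-w` is a standard iperf option; the IP is taken from the output above):

```shell
# TCP throughput is bounded by window / RTT. With the 8 KB default
# window and an assumed ~0.2 ms RTT:
awk 'BEGIN { printf "max ~%.0f Mbit/s\n", 8192*8 / 0.0002 / 1e6 }'   # prints "max ~328 Mbit/s"
# Re-run with a larger window on BOTH ends to test the link itself:
#   server: iperf -s -w 256k
#   client: iperf -c 192.168.4.216 -w 256k -t 10
```

If the bigger window pushes iperf toward gigabit, the network is fine and the slow uploads really are a busybox problem.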
 
Wait, where did you run that? Was that an actual transfer to the VM's drive?

Yes, that was a transfer from a workstation to the VM (IIS server), and also from the VM to the workstation. The datastore for that VM is on the Buffalo.
 
Told ya. The ssh/external storage stack is extremely limited. It's not supposed to be fast.
 
Depends- IIRC, the host version can be slow as snot too on ESXi. VC (if you have it) would be faster. Try downloading the virtual center trial, see if that version is faster :)
 
Depends- IIRC, the host version can be slow as snot too on ESXi. VC (if you have it) would be faster. Try downloading the virtual center trial, see if that version is faster :)

I'll give it a whirl.

Is it possible for a VM to access the underlying datastore filesystem? If so, I could probably copy the VMs into a VM that lives on the datastore and move them over from inside it. If that makes any sense ...
 