Horrible disk performance under ESXi

tsrtg

n00b
Joined
Sep 11, 2014
Messages
32
Hello,

I've started building my first home server, planning to run 3 VMs: pfSense, a Solaris-based NAS and Win 7 x64. The hardware is a Supermicro X10SRH motherboard, a Xeon E5-1650 v3, and 64 GB RAM. ESXi 5.5 U2 is installed on an HGST 7K750EA (7200 RPM), which is connected to the on-board sSATA controller running in AHCI mode, and the same drive is used as the datastore.

When I connect to the ESXi via SSH and run "dd" I get about 8 MB/s; I'd expect roughly ten times that. Also, when I run dd with a 1G block size, I get an "out of memory" error. Is that normal?

/vmfs/volumes/54a55c07-9face633-0999-002590fc98c8 # time dd if=/dev/zero of=test bs=1G count=1
dd: out of memory
Command exited with non-zero status 1

/vmfs/volumes/54a55c07-9face633-0999-002590fc98c8 # time dd if=/dev/zero of=test bs=1M count=1024
1024+0 records in
1024+0 records out
real 2m 7.62s
user 0m 4.69s
sys 0m 0.00s
( 1024 MB / 127.62 s ≈ 8 MB/s !!! )
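The "out of memory" error is expected, as far as I can tell: the BusyBox dd in the ESXi shell allocates the whole block buffer in RAM, so bs=1G asks for a single 1 GiB allocation. A rough sketch of the workaround (same 1 GiB total, smaller blocks) plus the throughput math, using the 127.62 s 'real' time from the run above:

```shell
# BusyBox dd allocates one buffer of bs bytes, so keep bs modest and raise
# count instead (same 1 GiB total as the failing bs=1G run):
#   time dd if=/dev/zero of=test bs=1M count=1024
# Throughput from the reported 'real' time (127.62 s above):
awk -v mb=1024 -v s=127.62 'BEGIN { printf "%.1f MB/s\n", mb/s }'
```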

The only virtual machine running now is pfSense (8 GB disk, 1 vCPU/1 thread, 1 GB allocated memory).

I get normal performance (avg 96 MB/s) when I run an HD Tune test on the same hard drive (same system, same SATA port) from a bare-metal Windows installation.

Just for testing, I connected a new HGST He6 as a secondary (empty) datastore in ESXi and got similarly poor performance on that drive:

/vmfs/volumes/54a90025-f456a43c-493d-002590fc98c8 # time dd if=/dev/zero of=test bs=512M count=2
dd: /dev/zero: Cannot allocate memory
Command exited with non-zero status 1

/vmfs/volumes/54a90025-f456a43c-493d-002590fc98c8 # time dd if=/dev/zero of=test bs=1M count=1024
1024+0 records in
1024+0 records out
real 1m 43.70s
user 0m 5.20s
sys 0m 0.00s
( 1024 MB / 103.70 s ≈ 10 MB/s !!! Very low for an Ultrastar drive )

Moving the drives from the onboard AHCI controller to an IBM M1015 did not change performance at all.

What can be done to improve that?
 

lopoetve

Fully [H]
Joined
Oct 11, 2001
Messages
32,617
don't use dd from the troubleshooting console for a performance test - it'll always be dog slow.

Run a test from within a guest.
 

tsrtg

n00b
Joined
Sep 11, 2014
Messages
32
Horrible disk performance within guests is why I started measuring directly from the troubleshooting console, to see if it's any better there. It takes forever for a VM to even boot.

Here's the dd test from Solaris guest:

root@solaris:~# time sh -c "dd if=/dev/zero of=test bs=1073741824 count=1 && sync"
1+0 records in
1+0 records out

real 1m31.018s
user 0m0.003s
sys 0m3.317s

That's ~11 MB/s (1024 MB / 91 s), not much better.

root@solaris:~# rm test
root@solaris:~# time sh -c "dd if=/dev/zero of=test bs=1073741824 count=1 && sync"
1+0 records in
1+0 records out

real 1m31.733s
user 0m0.002s
sys 0m1.180s

~11 MB/s again; the result is stable.

Here's the dd test from the same Solaris guest, this time against a raidz2 pool built from disks on a controller passed through to the VM (PCI passthrough):

root@solaris:~# cd /zpool1/datastore/test
root@solaris:/zpool1/datastore/test# time sh -c "dd if=/dev/zero of=test bs=1073741824 count=1 && sync"
1+0 records in
1+0 records out

real 0m5.547s
user 0m0.002s
sys 0m1.201s

This is much better. But what's the problem with the datastore disks?
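For reference, the raw rates from the two runs above (same 1 GiB write, local datastore vmdk vs. passed-through raidz2), as a quick awk sanity check:

```shell
# 1 GiB written in each case; 'real' times taken from the dd output above.
awk 'BEGIN { printf "datastore vmdk: %.1f MB/s\n", 1024/91.018 }'
awk 'BEGIN { printf "raidz2 pool:    %.1f MB/s\n", 1024/5.547 }'
```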
 

Sp33dFr33k

2[H]4U
Joined
Apr 20, 2002
Messages
2,481
In my experience performance can be pretty bad on a mechanical hard drive when more than one OS lives on it, although yours seems exceptionally bad.

I have a very basic ESXi 5.5 host: ESXi installs and boots from a USB thumb drive, two cheap SSDs hold my operating systems, and a 1TB mechanical drive holds ISOs and other testing material.

Could also be some weird driver issue in VMware that's causing the problem.
 

vFX

n00b
Joined
Sep 28, 2013
Messages
55
This is very strange.

what's the speed of a scp transfer? (single ~1GB file)

do you have a controller or HBA for some tests?
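For the scp check, something like this would do (host name and datastore path are placeholders, not from the thread; /dev/urandom avoids any compression effects skewing the number):

```shell
# Generate a ~1 GiB test file, then time the copy to the ESXi host
# (replace <esxi-host> and <datastore> with real values):
#   dd if=/dev/urandom of=bigfile bs=1M count=1024
#   time scp bigfile root@<esxi-host>:/vmfs/volumes/<datastore>/
# For scale: 1 GiB at gigabit wire speed (~112 MB/s usable) takes roughly:
awk 'BEGIN { printf "%.1f s\n", 1024/112 }'
```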
 

lopoetve

Fully [H]
Joined
Oct 11, 2001
Messages
32,617
RAID-Z is fast because it's using RAM to cache the writes, unless that pool is set to synchronous (in which case your write may be big enough to fill the transaction queue and get committed immediately). ESX does no caching of any kind, by design, so your dd runs against it (being single-threaded and synchronous) are still going to be slow.
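If you want to sanity-check that, one way (dataset name taken from the earlier post, so adjust to taste) is to force synchronous writes on the ZFS dataset, rerun the same dd, and see whether the raidz2 number collapses toward the datastore number:

```shell
# Force every write on the dataset to be synchronous (removes the RAM-cache
# advantage for this comparison), rerun dd, then restore the default:
zfs set sync=always zpool1/datastore
#   time sh -c "dd if=/dev/zero of=test bs=1073741824 count=1 && sync"
zfs inherit sync zpool1/datastore
```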

Fire up a Linux guest with Dynamo (or grab IOAnalyzer - it's IOmeter's Dynamo with the queue patched) and run a 4k/16k/1M test from it; let me see the results.
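If Dynamo is a hassle to set up, fio in the Linux guest covers the same block sizes (assuming it's installed - it's not mentioned above). Echoed here so the exact invocation is visible without fio present:

```shell
# Print the fio command for each block size; drop the 'echo' to actually run.
for bs in 4k 16k 1m; do
  echo fio --name=write-$bs --filename=/tmp/fiotest --size=1g --bs=$bs \
       --rw=write --ioengine=psync --direct=1
done
```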
 