Hello,
I've started building my first home server, hoping to run 3 VMs: pfSense, a Solaris-based NAS, and Windows 7 x64. I have a Supermicro X10SRH motherboard, a Xeon E5-1650 v3, and 64 GB of RAM. ESXi 5.5 U2 is installed on an HGST 7K750EA (7200 RPM) connected to the onboard sSATA controller running in AHCI mode, and the same drive is used as the datastore.
When I connect to ESXi via SSH and run "dd", I get about 8 MB/s. I would expect roughly ten times that. Also, when I run dd with a 1 GB block size, I get an "out of memory" error. Is that normal?
/vmfs/volumes/54a55c07-9face633-0999-002590fc98c8 # time dd if=/dev/zero of=test bs=1G count=1
dd: out of memory
Command exited with non-zero status 1
/vmfs/volumes/54a55c07-9face633-0999-002590fc98c8 # time dd if=/dev/zero of=test bs=1M count=1024
1024+0 records in
1024+0 records out
real 2m 7.62s
user 0m 4.69s
sys 0m 0.00s
(1024 MB / 127.62 s ≈ 8 MB/s !!!)
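To separate the block-size question from the throughput question, I'm also planning to repeat the write with a mid-size block (same 1 GB total). This is just a sketch, assuming the busybox dd in the ESXi shell accepts the same bs/count options I used above:
time dd if=/dev/zero of=test bs=8M count=128
(128 writes of 8 MB is the same 1024 MB, so dividing 1024 MB by the elapsed time gives the throughput the same way as before.)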
The only virtual machine running right now is pfSense (8 GB disk, 1 vCPU/1 thread, 1 GB of allocated memory).
I get normal performance (averaging 96 MB/s) when I run an HD Tune test on the same hard drive (same system, same SATA port) from a bare-metal Windows installation.
Just for testing, I connected a new HGST He6 as a secondary (empty) datastore in ESXi, and I get similarly poor performance on that drive:
/vmfs/volumes/54a90025-f456a43c-493d-002590fc98c8 # time dd if=/dev/zero of=test bs=512M count=2
dd: /dev/zero: Cannot allocate memory
Command exited with non-zero status 1
/vmfs/volumes/54a90025-f456a43c-493d-002590fc98c8 # time dd if=/dev/zero of=test bs=1M count=1024
1024+0 records in
1024+0 records out
real 1m 43.70s
user 0m 5.20s
sys 0m 0.00s
(1024 MB / 103.70 s ≈ 10 MB/s !!! Very low for an Ultrastar drive.)
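I can also read the test file back to see whether reads are just as slow; something like this, again assuming the same busybox dd options:
time dd if=test of=/dev/null bs=1M count=1024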
Reconnecting the drives from the onboard AHCI controller to an IBM M1015 did not change performance at all.
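If latency numbers would help, I can capture them with esxtop. My understanding (an assumption on my part, from the esxtop documentation) is that its disk device screen shows per-device latency:
esxtop
# press 'u' for the disk device view; watch DAVG/cmd (device latency) and KAVG/cmd (VMkernel latency)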
What can be done to improve that?