I'm new to building an all-in-one setup using ESXi 6.0 and OmniOS. My basic setup is as follows:
Supermicro X10SL7-F with onboard LSI SAS flashed to non-RAID (IT) firmware
Xeon E3-1232v3
32GB ECC RAM
ESXi 6.0
2x Intel GbE
8x Seagate 2TB drives (on the LSI SAS controller, passed through to OmniOS)
250GB Samsung 850 SSD (ESXi local datastore)
OmniOS/ZFS VM
My primary OmniOS VM has 6GB memory and 4 cores, with the onboard LSI SAS controller passed through.
VMware tools installed
LSI SAS, E1000, 30GB vdisk on the local SSD datastore
I created a RAIDZ2 ZFS pool and shared it via SMB and NFS.
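For reference, the pool and shares were set up roughly like this (a sketch; the pool name "tank", filesystem name "share", and disk device IDs are placeholders, not my actual names):

```shell
# Hypothetical pool name and disk IDs; substitute your own (see "format" or "zpool status").
# Create the 8-disk RAIDZ2 pool:
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

# Create a filesystem and share it over SMB and NFS (illumos in-kernel sharing):
zfs create tank/share
zfs set sharesmb=on tank/share
zfs set sharenfs=on tank/share
```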
Windows 10 VM
2 cores, 6GB memory, 120GB disk on the OmniOS NFS share with thin provisioning
E1000, LSI SAS, Paravirtual controller
Windows 7 VM
2 cores, 6GB memory, 120GB disk on the local datastore with thin provisioning
E1000, LSI SAS, Paravirtual controller
I get OK performance, ~80MB/s, when copying from the ZFS Windows share to my laptop; a little better than I was getting with my older Windows system with hardware RAID6.
If I copy from the ZFS Windows share to the W7 machine on the local datastore I get 120MB/s, which is even faster.
I'm having performance problems with Windows VMs whose disks are stored on the ZFS NFS share.
If I try the same copy from the ZFS Windows share to the W10 VM's desktop, the transfer starts at ~50MB/s but very quickly falls to ~5MB/s. Overall Windows VM performance is sluggish, all pointing to slow disk I/O. Running esxtop seems to indicate low overall system load and CPU utilization.
Basically, I'm having severe performance issues whenever a VM's disk is stored on the ZFS NFS share: installs and boot-up are slow and sluggish.
I'm not that experienced in debugging slow disk access in an all-in-one setup, so I would greatly appreciate any feedback or suggestions on how to proceed with debugging, plus any tuning tips for my setup.
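In case it helps anyone suggest next steps, here is what I was planning to run while reproducing the slow copy (a sketch; "tank" is a placeholder pool name, and I'm assuming the standard illumos tools are available in the OmniOS VM):

```shell
# Inside the OmniOS VM, while the slow copy to the W10 VM is running:

# Per-vdev bandwidth and IOPS, refreshed every 5 seconds ("tank" = your pool name):
zpool iostat -v tank 5

# Check sync/compression/recordsize settings on the pool
# (ESXi issues synchronous NFS writes, so sync behavior matters here):
zfs get sync,compression,recordsize tank

# NFS server-side operation counters:
nfsstat -s
```

On the ESXi side I was going to keep esxtop open on the disk views and watch the latency columns for the NFS datastore while the copy runs.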