Proxmox VE vs VMware ESXi for Windows 10 VM

luckylinux

Dear all,

I know it might be a strange request but please bear with me ...
I started using VMware ESXi about five years ago as my virtualization platform of choice in my home lab. Since then I have switched to Proxmox VE (without subscription), due to its easier backup, scripting, and Debian base OS. All the latest updates have been applied and the machine has been rebooted.

For Linux / FreeBSD / OpenBSD VMs there is no problem at all. A Windows 10 VM, however, leads to very high CPU usage. I wanted to upgrade and virtualize my desktop using server-grade hardware (Supermicro X9DRi-LN4F+, 2 x E5-2697 v2, 256 GB ECC RAM). Even with no other VMs on the server, Windows 10 (8 virtual cores, 16 GB virtual RAM) takes about 5 minutes to boot, stays at 100% CPU usage on all 8 cores during that time, and then drops to about 30% CPU at idle.
Libvirt users seem to have fixed this kind of behaviour with the "hpet" option, but since Proxmox VE defaults to "-no-hpet", and a few other options have been introduced that supposedly fix the problem for Windows 10 (hyperv-stimer and another one), I am a bit at a loss about what to do ...
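For reference, the Hyper-V enlightenments can be forced explicitly through the `args:` line in the VM's config file. This is only a sketch: the VMID (100) is a placeholder, and Proxmox normally enables most of these flags automatically when `ostype` is set to `win10`, so check what your QEMU command line already contains before overriding it. Note that QEMU requires `hv_synic` and `hv_time` for `hv_stimer` to work.

```
# /etc/pve/qemu-server/100.conf  (VMID 100 is an example)
ostype: win10
cpu: host
args: -cpu host,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_time,hv_vpindex,hv_synic,hv_stimer
```

You can compare the effective flags with `qm showcmd 100` before and after the change.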

I would like to stay with Proxmox VE, but as it stands this is definitely not usable. I tried to pass through my GTX 1060, but so far that hasn't worked with Windows 10 either. On other machines I could pass through a GTX 1070/1080 to Linux VMs without any special option. For Windows 10 I tried adding the romfile option, but that didn't help either ...

Would you suggest switching over to ESXi for this application? I still have one napp-it subscription I could use, but I didn't like OmniOS too much (it seemed like a dead project at one point, with no more updates). Or ... should I just nest Proxmox VE inside ESXi and set up an NFS server on Debian/Proxmox :S :S :S ?

EDIT: ESXi's limitation of 8 vCPUs per VM is too tight IMHO. Other options with some management tool? XCP-ng with Xen Orchestra, maybe?
 

How are you configuring the virtual disk for your Windows 10 VM on Proxmox? Also, what type of physical disk is it on (SSD, HDD, NAS, etc.)?

I run/ran many Windows 10/Server/LTSC VMs on Proxmox and ESXi and I've never really noticed any boot-up issues with them (ESXi 6.7 and Proxmox 5.4). As far as GPU passthrough goes, you need to make some manual edits to the VM config, as well as use a modified ROM and driver package. Plus, the Windows install needs to be fully configured to use VirtIO hardware (in Proxmox at least).
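A Windows 10 VM fully on VirtIO hardware might look like the following config fragment. This is a sketch with placeholder names: the VMID (100), the storage IDs (`local-zfs`, `local`), and the MAC address are examples, not anything from this thread. The VirtIO driver ISO has to be attached during installation so Windows can see the VirtIO SCSI disk at all.

```
# /etc/pve/qemu-server/100.conf -- VMID and storage names are placeholders
ostype: win10
machine: q35
scsihw: virtio-scsi-pci
scsi0: local-zfs:vm-100-disk-0,cache=writeback,discard=on
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
ide2: local:iso/virtio-win.iso,media=cdrom
```

During Windows setup, point "Load driver" at the `vioscsi` folder on the attached ISO, then install the remaining VirtIO drivers (network, balloon) from the same ISO once Windows is up.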
 
Thank you for your answer.

The Windows 10 VM sits on an NVMe drive. Right now it's a striped pool (RAID0, no redundancy at all), so I'll have to move it, destroy the existing pool, and recreate it using a mirror vdev.
Right now that project has been put a bit to the side.
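The stripe-to-mirror rebuild described above could go roughly like this. This is only a sketch, not something to paste blindly: the pool names (`tank`, `backup`) and disk paths are placeholders, `zpool destroy` is irreversible, and you should verify the backup copy before destroying anything.

```
# Sketch only -- pool and disk names are placeholders, commands are destructive.

# 1. Replicate everything to a second pool (or use vzdump for the VM disks).
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F backup/tank

# 2. Destroy the RAID0 pool and recreate it as a mirror vdev.
zpool destroy tank
zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1

# 3. Restore the datasets onto the new mirrored pool.
zfs send -R backup/tank@migrate | zfs receive -F tank
```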

In order to get passthrough working and stop the NVIDIA drivers from complaining, I had to convert my BIOS installation to a UEFI boot installation and boot using OVMF instead of SeaBIOS. Nothing else helped.
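The OVMF-plus-passthrough setup described above corresponds to config entries along these lines. Again a sketch with assumed values: the VMID, the PCI address (`01:00`), the storage ID, and the ROM filename are examples. On Proxmox the `romfile` is looked up in `/usr/share/kvm/`, so the dumped ROM has to be placed there.

```
# /etc/pve/qemu-server/100.conf -- PCI address and romfile name are examples
bios: ovmf
machine: q35
efidisk0: local-zfs:vm-100-disk-1,size=4M
hostpci0: 01:00,pcie=1,x-vga=1,romfile=GTX1060.rom
```

The `efidisk0` entry stores the UEFI variables; without it OVMF boots but forgets its settings on every restart.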
 
Just set up an all-in-one Plex/FreeNAS box using ESXi 6.7 U3. I used DirectPath I/O passthrough for a Quadro P400 for Plex transcoding on Windows 10, and an H200 HBA in IT mode for FreeNAS. Everything is working as intended. Guest VMs are running on a 1 TB SATA SSD. I can get up to 300 MB/s on read and write using modest 5x 4 TB 5700 RPM CoolSpin drives. Got my ESXi license on eBay for super cheap. Everything just works.
 
Would you suggest switching over to ESXi for this application? I still have one napp-it subscription I could use, but I didn't like OmniOS too much (it seemed like a dead project at one point, with no more updates).

A little bit late, and maybe off topic, but why do you call OmniOS dead when it
- has a stable release every 6 months
- has a long-term stable release every 2 years
- has a bloody branch (ongoing development)
- offers ZFS in its native Solaris environment
- has unique features like the multithreaded, kernel-based SMB server
- even offers a commercial support option when needed

https://omnios.org/
 