Single core machines and VMs use 100% processor while searching for updates

dgingeri

For many years, I have run into an issue, dating back to single-core laptops on Windows XP SP2 and continuing today with Windows 2008 R2 and 2012 R2 VMs in both vSphere and Hyper-V: when running Windows Update, processor activity sits at 100%, rendering the machine useless until the updates finish and consistently making them take three times as long to complete. This has happened to me in every version of Windows for years, and I have not seen a single case where it doesn't happen.

This has been happening for over 12 years. I've run experiment after experiment, timing things with stopwatches, creating VM after VM, and reproducing the issue consistently, establishing the cause to my own satisfaction. All the while swearing off single-core ANYTHING because of it.

So, I came up with a rule that I will always create VMs with 2 cores. I despise single-core VMs because of this issue. Yesterday, I mentioned this in another forum and was met with a barrage of insults about how I don't understand how VMs work, how they've never run into this issue, and flat-out calling me stupid for this line of thought. Not a single response said they had run into the issue; nobody backed up my position as something they had seen themselves.

Surely 12 years of experience with this hasn't just been some masochistic fantasy of mine. Please, tell me SOMEBODY has run into this before.
 
Even though VMware best practices say to use the absolute minimum number of cores necessary, my rule is that every machine gets 2 sockets (2 cores) regardless of utilization. I don't have issues like you describe doing it this way, which you also seem to be doing now.

Keep in mind the issue you are having with 1 core may depend heavily on your server hardware. If you are using high-core-count, low-frequency CPUs in your servers, sure, 1 core could definitely get maxed out when scanning for updates. Hence the 2-socket minimum rule.

Also, only assign CPU sockets to your VMs; do not adjust the individual cores-per-socket setting unless you have software that is licensed per socket. I.e., if you want 2 cores per server, assign 2 sockets; if you need 4 cores, assign 4 sockets; etc. Only if you have socket-limited software should you cap the sockets and then adjust cores as needed, and make sure you match the sockets to the NUMA layout of the server (2 sockets or 4 sockets). Assigning your vCPUs this way will maximize your server's vCPU performance.
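If you script this, here is a minimal pyVmomi sketch of both layouts; it assumes an already-connected session, that `vm` is the VirtualMachine object you looked up, and the helper name is just for illustration:

# Minimal pyVmomi sketch (not tested against every vSphere version):
# reconfigure a powered-off VM's vCPU layout. Assumes you already have a
# connected session and have looked up the VirtualMachine object as `vm`.
from pyVmomi import vim

def set_vcpus(vm, total_vcpus, cores_per_socket=1):
    """Set total vCPUs and how many cores each virtual socket presents."""
    spec = vim.vm.ConfigSpec()
    spec.numCPUs = total_vcpus                 # total vCPUs the guest sees
    spec.numCoresPerSocket = cores_per_socket  # 1 = one single-core socket per vCPU
    return vm.ReconfigVM_Task(spec=spec)

# 4 vCPUs as 4 sockets x 1 core (the advice above):
#   set_vcpus(vm, 4, cores_per_socket=1)
# 4 vCPUs as 1 socket x 4 cores (when the OS or software licenses by socket):
#   set_vcpus(vm, 4, cores_per_socket=4)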
 
My experiences have been across many hardware platforms, from Dell R910s with 8-core 2.1GHz processors and FC storage spread across 16 drives in a RAID 6, down to a Core i7 6700 with a single 1TB WD Blue drive. It has ranged from Pentium M single-core laptops back in 2005 running Windows XP SP2 to a VM I made just yesterday (on the aforementioned Core i7 6700) in VirtualBox running Windows 2008 R2. I really don't see any bottlenecking in any part of the hardware platform, or any difference between operating systems.

I also use the opposite policy on sockets/cores because of Windows 2008 R2 licensing. Windows 2008 R2 Standard licenses only 2 sockets, requires an extra license for a second pair of sockets, and limits the OS to a maximum of 4 sockets. That isn't usually a problem since I mostly use just 2 vCPUs, but I have run into having to use 2 MSDN license keys to get a VM going. However, those experiences were just in vSphere 4.1 and VirtualBox, so perhaps Windows' detection of running as a VM has improved over time, or works better with other hypervisors. I still prefer to fill up a single socket with however many cores I need rather than the opposite.
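In case it helps anyone check this, here is a rough way (just a sketch, assuming pywin32 is installed in the guest) to see how many sockets and cores Windows actually recognizes, so you can tell when a socket-licensed edition is ignoring some of the vCPUs you assigned:

# Each Win32_Processor instance corresponds to one socket the OS sees.
import win32com.client

wmi = win32com.client.GetObject(r"winmgmts:\\.\root\cimv2")
cpus = wmi.ExecQuery(
    "SELECT NumberOfCores, NumberOfLogicalProcessors FROM Win32_Processor")

sockets = 0
cores = 0
for cpu in cpus:
    sockets += 1
    cores += cpu.NumberOfCores
print("Sockets: %d, total cores: %d" % (sockets, cores))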
 
Windows XP through at least Win 7 have all been subject to extremely long scans that leave the CPU pegged. It would be more annoying with a single core, but I don't think the core count has anything to do with the problem. The long scans have all come down to the complexity of the data being analyzed, or to implementation problems in the code doing the analyzing. The problem seems to disappear on Win 7 and Win 8 once you are up to date. There are recipes for getting Win 7 up to date with minimal fuss by manually applying certain downloaded updates before turning on Windows Update.
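Those recipes mostly boil down to installing a couple of downloaded .msu packages by hand before letting Windows Update scan; a hedged sketch of that step, with placeholder file names, looks like this:

# Install downloaded .msu packages silently before the first Windows Update scan.
# The file names below are placeholders only -- substitute the actual packages
# the recipe you follow calls for.
import subprocess

msu_files = [
    r"C:\updates\servicing-stack-prereq.msu",   # placeholder
    r"C:\updates\update-rollup.msu",            # placeholder
]

for msu in msu_files:
    # /quiet installs without prompts, /norestart defers the reboot.
    # wusa exits with 3010 when a reboot is pending, so don't treat nonzero as fatal.
    subprocess.run(["wusa.exe", msu, "/quiet", "/norestart"], check=False)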
 
I could have sworn I saw a Windows update last month that addressed the CPU pegging during updates.

For the few VMs I support, this wouldn't make much difference, though.
Hope you find a fix for it.

 
I haven't run single-core anything for decades. Why not just make your VMs dual-core?

/edit: Ahh, you're stuck with Microsoft licensing, i.e. self-induced problems. Too bad.
 

The comments from the other people in the thread were that using extra cores "for no reason" was "not in best practices" and could "increase wait times" for other VMs on the host. I think that's a bunch of BS, but they threw constant insults at me all day in response.

I retested it yesterday afternoon and last night with Windows 2012 R2, with VMs on both a Core i7 6700 and a Ryzen 1700X. I still get constant 100% processor usage, updates take three times as long to finish as on a dual-core VM, and the VM can't do anything else while it's updating.
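If anyone wants to repeat the experiment, a rough sketch of the timing run from inside the guest (assuming the pywin32 and psutil packages; the criteria string is the standard "not yet installed" filter) would look something like this:

# Start a Windows Update scan through the Windows Update Agent COM API and
# sample whole-machine CPU while it runs.
import time
import threading
import psutil
import win32com.client

samples, stop = [], threading.Event()

def sample_cpu():
    while not stop.is_set():
        samples.append(psutil.cpu_percent(interval=1))  # machine-wide CPU %

t = threading.Thread(target=sample_cpu)
t.start()

start = time.time()
session = win32com.client.Dispatch("Microsoft.Update.Session")
searcher = session.CreateUpdateSearcher()
result = searcher.Search("IsInstalled=0 and Type='Software'")  # the slow scan
elapsed = time.time() - start

stop.set()
t.join()
print("Found %d updates in %.0fs, average CPU %.0f%%" % (
    result.Updates.Count, elapsed, sum(samples) / max(len(samples), 1)))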
 
The comments from the other people in the thread were that using extra cores "for no reason" was "not in best practices" and could "increase wait times" for other VMs on the host.
It is true, because the hypervisor must schedule all of a VM's cores to run simultaneously, but using two cores on hosts that have 4 or more threads of execution available will not be a problem compared to staying single-core.
 

At least on Linux, wait times are easily monitored. I run dozens of VMs that run literally thousands of processes on some servers, and the CPU wait % is close to nonexistent. Granted, our servers have 16 or more cores to begin with. I typically give VMs 2 to 8 cores depending on the load they're planned for. The hypervisor does a good job of load balancing between the cores.
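If anyone wants to eyeball those numbers without a monitoring stack, a quick sketch that samples /proc/stat over a 5-second window and reports the iowait and steal share (steal being time the hypervisor didn't schedule this vCPU):

# Read /proc/stat twice and report iowait and steal percentages.
# No external packages needed.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    user, nice, system, idle, iowait, irq, softirq, steal = fields[:8]
    return iowait, steal, sum(fields[:8])

iowait1, steal1, total1 = cpu_times()
time.sleep(5)
iowait2, steal2, total2 = cpu_times()

delta = float(total2 - total1)
print("iowait: %.1f%%  steal: %.1f%%" % (
    100 * (iowait2 - iowait1) / delta,
    100 * (steal2 - steal1) / delta))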
 