ESXi 7 and CPUs... which is better, cores or threads?

DeaconFrost

I have a license for ESXi 7.0. I'm setting up a small home lab for my Linux servers and for my wife to get some practice with virtualization. We work for the same company and she's looking to build on her desktop support role, and I'm looking to build on my Linux server knowledge. I have two small computers available and wasn't sure which CPU would give me better results. I do not plan on running many VMs and nothing intensive.

One is a NUC with an i5-8259U CPU (4 cores, 8 threads)
The other is a Dell Optiplex Micro tower with an i5-8400T (6 cores, 6 threads)

Which would yield better virtualization results: 8 threads or 6 cores?
 
It's no different from bare-metal installs: dedicated cores are always going to be better than hyperthreaded ones.

One tip for you: when creating VMs, just give them 1 socket to start and only add more sockets as the workload needs them. A common mistake people new to virtualization make is over-allocating CPU and oversubscribing the host. Don't set machines up like you would a physical machine.
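If you ever want to script that, something like this pyVmomi sketch is one way to bump a VM's vCPU count only after the workload proves it needs it (the host name, credentials, and VM name below are made-up placeholders, not anything from this thread):

```python
# Rough pyVmomi sketch: grow a VM's vCPU allocation only when needed.
# Host, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab host with a self-signed cert
si = SmartConnect(host="esxi.lab.local", user="root", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name (walks everything under the root folder)
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "dev-web-01")

# Start small: 2 vCPUs presented as 1 socket x 2 cores; add more later only if
# the guest actually needs it.
spec = vim.vm.ConfigSpec(numCPUs=2, numCoresPerSocket=2)
vm.ReconfigVM_Task(spec=spec)   # VM must be powered off unless CPU hot-add is enabled

Disconnect(si)
```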
 
Same architecture, so more cores is the clear win. Maybe a higher TDP on the 8400T as well? The 8259U has a faster turbo, so it could win single-threaded. Some applications do see a big boost from hyperthreading, but it's usually not better than 50% more cores; roughly speaking, 4 cores with HT behave more like 5 cores' worth of throughput, versus 6 real cores on the 8400T. And with security mitigations on, I don't think hyperthreading helps virtualization much at all.
 
It's no different from bare-metal installs: dedicated cores are always going to be better than hyperthreaded ones.

One tip for you: when creating VMs, just give them 1 socket to start and only add more sockets as the workload needs them. A common mistake people new to virtualization make is over-allocating CPU and oversubscribing the host. Don't set machines up like you would a physical machine.
That's a constant battle I have with my developers and app team leads. Apparently, "virtual" means infinite. No, your DEV web server doesn't need 8 cores and 64 GB of memory to host a small UI.

I appreciate the tip. Good advice for someone who's new to the game, though I'm only new to running it at home. I've got two Dell vBlocks, two Dell VxRails, and likely more on the way (at work, not at home).
 
Right, there are instances where you can reduce the number of CPUs assigned to VMs and actually improve performance, if the host system's latency is too high. I can't remember how detailed the performance charting is on a standalone host, but when you use vCenter all of the metrics you need are there.
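Even a standalone host exposes the real-time counters through the API, so you can still eyeball CPU Ready without vCenter; high ready time is the usual sign a VM has more vCPUs than the host can cleanly schedule. A rough pyVmomi sketch (host, credentials, and VM name are placeholders):

```python
# Rough pyVmomi sketch: pull CPU Ready for one VM from the performance manager.
# Host, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.lab.local", user="root", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "dev-web-01")

perf = content.perfManager
# Map "group.name.rollup" counter names to their ids, then grab cpu.ready.summation
counters = {"%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
            for c in perf.perfCounter}
ready_id = counters["cpu.ready.summation"]

spec = vim.PerformanceManager.QuerySpec(
    entity=vm,
    metricId=[vim.PerformanceManager.MetricId(counterId=ready_id, instance="")],
    intervalId=20,    # 20-second real-time samples
    maxSample=15,     # roughly the last five minutes
)
for series in perf.QueryPerf(querySpec=[spec])[0].value:
    # Ready time is reported in ms per 20 s sample; ms / 20000 * 100 = CPU Ready %
    print(["%.1f%%" % (ms / 20000.0 * 100) for ms in series.value])

Disconnect(si)
```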
 
That's a constant battle I have with my developers and app team leads. Apparently, "virtual" means infinite. No, your DEV web server doesn't need 8 cores and 64 GB of memory to host a small UI.

I appreciate the tip. Good advice for someone who's new to the game, though I'm only new to running it at home. I've got two Dell vBlocks, two Dell VxRails, and likely more on the way (at work, not at home).
I feel you! And even when you show them the usage for their existing systems, which for the last 100 years have only used .00000001% of the resources they had, they still argue about it....
We did this with one client using vRealize Operations and added in the per-server cost for a 32-node, 3-cluster VxRail deployment we did; each node, depending on the cluster, runs from $50k to $120k. When we were doing reports for some right-sizing, we showed the Dev/SAP teams just how much their dev and test servers would cost if they basically went to Azure or AWS.... Kind of put it into perspective for them.

That wasn't even getting into the fun details of contention, as you noted...
 
Why not set up a Proxmox server for her and a VMware server for you?
Why not just use ESXi and be done with it instead of two different environments? And if this is for work, you won't find Proxmox in many business environments.
As others noted, more cores is better. CPU scheduling is pretty good these days; it's memory you want to make sure you're good on, so don't cheap out on it. 8GB is the bare minimum, 16-32GB for a couple of VMs, and like CPU, you don't have to give every VM 100000GB of RAM. Start low, and if you find performance is hurting, add 1GB (and no, you don't have to go 2 -> 4 -> 8 -> 16 either... go 2GB or 6GB if it needs it).
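If you end up scripting the memory bumps too, the same kind of pyVmomi sketch works; memoryMB doesn't have to be a power of two (again, host, credentials, and VM name are placeholders):

```python
# Rough pyVmomi sketch: resize a VM's memory in small, arbitrary steps.
# Host, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.lab.local", user="root", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "dev-web-01")

# 3 GB is a perfectly valid size -- no need to jump 2 -> 4 -> 8.
# Enabling memory hot-add means the next bump won't need a power-off.
spec = vim.vm.ConfigSpec(memoryMB=3072, memoryHotAddEnabled=True)
vm.ReconfigVM_Task(spec=spec)   # power off first, or have hot-add already enabled

Disconnect(si)
```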
 
Why not just use ESXi and be done with it instead of two different environments? And if this is for work, you won't find Proxmox in many business environments.
As others noted, more cores is better. CPU scheduling is pretty good these days; it's memory you want to make sure you're good on, so don't cheap out on it. 8GB is the bare minimum, 16-32GB for a couple of VMs, and like CPU, you don't have to give every VM 100000GB of RAM. Start low, and if you find performance is hurting, add 1GB (and no, you don't have to go 2 -> 4 -> 8 -> 16 either... go 2GB or 6GB if it needs it).
License costs :) If the goal is to get better at Linux hosts, it shouldn't matter what hypervisor is being used :)
 
I'm sticking with ESXi because that's what we use in the corporate setting. I have a license for it, so there isn't any question as to what hypervisor to use. The goal is to give her VMware experience while I get Linux experience. PS, we work for the same company and in the same department.
 
I would go with the physical cores as well, over hyperthreaded. Depending on what licensing you have, could you use both systems in a vCenter? If yes, you could do EVC to level-set the architecture but get a little more room to play. Getting people to understand how "less = more" in the virtual space is a continuous challenge. Even today.
 
I would go with the physical cores as well, over hyperthreaded. Depending on what licensing you have, could you use both systems in a vCenter? If yes, you could do EVC to level-set the architecture but get a little more room to play. Getting people to understand how "less = more" in the virtual space is a continuous challenge. Even today.
This is the free, community license, so until I get my company to pay for my VMUG membership, I'm pretty much stuck with using one host and no management appliances.
 
It looks like I should have done more research. The OptiPlex 3060 I bought uses a Realtek NIC, which seems to be completely unsupported on ESXi 7.0. I had to build a custom .iso for the NIC in the NUC, but that option doesn't seem to be workable for Realtek cards.
 
And here we are thinking you got the Optiplex for free from work or something and that's why you're trying to use it. Can you send it back and if so, what's your budget for a test box?
 
It looks like I should have done more research. The OptiPlex 3060 I bought uses a Realtek NIC, which seems to be completely unsupported on ESXi 7.0. I had to build a custom .iso for the NIC in the NUC, but that option doesn't seem to be workable for Realtek cards.

That OptiPlex has PCIe slots, right? Get an ex-server card off eBay. Lots of them are low-profile too.
 
The micro chassis doesn't, just an onboard M.2 2230 slot for a Wi-Fi card.

https://www.dell.com/support/manual...265af2-ea0f-4aae-9b9d-ba7caf18d0dc&lang=en-us

OK, yeah. Thanks, Dell, for having three systems with pretty much the same name (although I think I've seen this from HP and Lenovo too). I kind of assumed, since the OP said 'micro tower', that it was actually the 'mini tower', because the other two don't have 'tower' in the name, and I also didn't scroll all the way down to see that the micro doesn't have any slots. Does ESXi support any USB NICs? (I kind of doubt it, but maybe?)
 
And here we are thinking you got the Optiplex for free from work or something and that's why you're trying to use it. Can you send it back and if so, what's your budget for a test box?
I bought it on here cheap, and I can easily turn it into an Ubuntu server or use it for some other purpose.
 
I'm building a box with an i5-8600T, 16GB of RAM, and a dual-port Intel 10-gig NIC for Proxmox right now. I wouldn't worry about cores and threads too much, as ESXi is very good at oversubscribing CPU. Never oversubscribe memory. Intel NICs on the commercial side, regardless of what's happening on the consumer side, are always sure bets. I would expect that to continue for the foreseeable future.
 