VMware scaling

SpoonMaN-EQ

Limp Gawd
Joined
Dec 31, 2002
Messages
372
Hello everyone, I've been wondering how far the enterprise-level VMware SKUs scale. Assuming the system has a couple hundred GB of RAM in it and say 24-32 cores of CPU power, has anybody had the fortune to ever try this out?
 
VMware has a benchmark for exactly this question, called VMmark. Anandtech has run it on modern Nehalem-based server hardware, which gives you an idea of how well it scales on cutting-edge gear. I'm sure you can do the math to work out how many tiles of VMs you'd be looking at for your situation.
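
If you want to rough out the math yourself, a back-of-envelope sketch like this works. The per-tile footprints below are invented placeholders, not official VMmark figures; pull real values from the VMmark spec or the Anandtech writeup:

Code:
# Each VMmark tile is a fixed bundle of workload VMs (6 in VMmark 1.x).
VMS_PER_TILE = 6
RAM_PER_TILE_GB = 5    # ASSUMED footprint, illustration only
CORES_PER_TILE = 2     # ASSUMED footprint, illustration only

def max_tiles(host_ram_gb, host_cores):
    # whichever resource runs out first caps the tile count
    return min(host_ram_gb // RAM_PER_TILE_GB, host_cores // CORES_PER_TILE)

tiles = max_tiles(256, 32)   # the OP's hypothetical box
print(tiles, "tiles ->", tiles * VMS_PER_TILE, "workload VMs")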

Also, a note about how you've phrased your question: VMware's free hypervisor, ESXi, is not artificially handicapped in terms of performance compared to ESX (the more enterprise-level product). The functional differences come in with additional features like DRS, HA, etc. I'd also argue that in the Type 2 hypervisor world, VMware Server is not handicapped in regards to performance either; you'd simply miss out on the more advanced features that Workstation provides (like DX acceleration, etc.).
 
Hello everyone, I've been wondering how far the enterprise-level VMware SKUs scale. Assuming the system has a couple hundred GB of RAM in it and say 24-32 cores of CPU power, has anybody had the fortune to ever try this out?

Depends on what you're doing. We have a client that is running 2x IBM x3850 M2s, 256GB of RAM each, 3.0GHz quad-cores (4 quads per box), running off an IBM DS4800 with a mix of 10k FC, 15k FC, and 7.2k SATA drives. The servers are linked on the back end through IBM's x4 chipset architecture, which presents the entire set of CPUs, RAM, and PCI Express adapters as one gigantic host.

They have consolidated 57 physical servers so far onto these two boxes. By the end of the year, they expect that number to double, without having to upgrade the host machines. They plan to put another 2-stack at their DR site and run SRM over FCoE CNAs, with a 2Gb optical link between the two sites.

The count, so far:

512GB PC2-5300 ECC Reg
8x Intel Xeon 3.0GHz quad cores (32 cores x 3GHz = ESX sees 96GHz CPU power)
4x QLogic dual-port FC HBAs
4x quad-port Intel GbE cards with TOE
4x 73GB 2.5" 15k RPM boot drives (in 1+1 RAID-1, per host)
2x Force10 48-port true non-blocking 10GbE switches
4x QLogic 5802V 20-port 8Gb/sec FC switches (plus 4x ISLs per switch; in 2x 1+1 stacks for dual fabrics)
1x IBM DS4800 RAID controller
4x shelves of 10k FC HDDs (16 drives/shelf)
4x shelves of 7.2k SATA 1TB HDDs (16 drives/shelf)
2x shelves of 15k FC HDDs (16 drives/shelf)
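
For reference, a quick Python sanity check of the aggregate numbers above (the per-VM figure assumes an even split across the 57 guests, which real workloads won't honor):

Code:
# aggregate CPU, the way ESX reports it
sockets, cores_per_socket, ghz = 8, 4, 3.0
total_cores = sockets * cores_per_socket     # 32 cores
print(total_cores * ghz, "GHz aggregate")    # 96.0, matching the list above

# rough RAM density across the consolidated guests
total_ram_gb, vms = 512, 57
print(round(total_ram_gb / vms, 1), "GB RAM per VM, if split evenly")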

So what do you mean, as far as scaling...performance? Consolidation rate? Licensing implications (having a crapload of CPUs, RAM, or attached disk to license)? I think the customer I have could potentially answer your question...but I don't understand exactly what the question is.
 
Thanks for the replies. sabregen, you hit the nail on the head. I guess I meant to phrase it as how many VMs it could scale to; I didn't really mean performance. This was more a question to learn a bit more about VMware. I know how Hyper-V handles high numbers of VMs and was wondering about its competitors.
 
The basics of how high the solution can scale will really come down to just a few factors:

How much CPU horsepower you need / your hosts can accommodate
How much RAM you need / your hosts can accommodate
How much centralized disk you need / your hosts can accommodate
How many IO adapters you need / your hosts can accommodate

Using those four criteria, you'll eventually hit a bottleneck. RAM and add-in IO adapters are usually the holdup for most people. Virtualizing can chew through RAM and disk like you wouldn't believe. CPUs see less usage than you'd think, even with the hypervisor arbitrating all of the guest VM IO requests...this is part of the reason virtualization is attractive in the first place: better utilization.

Any one of those four factors could become the bottleneck. Depending on your chosen platform, you may be able to alleviate the bottleneck by adding more RAM, more IO adapters, more disk, or more CPU...but eventually you'll exhaust the possibilities of platform upgrades as a solution. Once you hit that point, you will have to move up to a more powerful system...and start all over again.
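
A toy way to see which of the four factors bites first; every number here is invented, so substitute your own measurements:

Code:
# Max VMs each resource alone would allow; the minimum is your bottleneck.
host   = {"cpu_ghz": 96.0, "ram_gb": 256, "disk_gb": 4000, "io_paths": 120}
per_vm = {"cpu_ghz": 0.5,  "ram_gb": 4,   "disk_gb": 40,   "io_paths": 1}

limits = {k: int(host[k] // per_vm[k]) for k in host}
print(limits)                                      # ram_gb allows only 64 VMs
print("bottleneck:", min(limits, key=limits.get))  # RAM, as is typical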

As you may know, VMware does not license on the amount of RAM, IO adapters, or attached/managed disk. They do, however, license per socket pair (2 CPU sockets = 1 license), and they also license all features that involve multiple-host interactions:

DRS, SRM, HA, VMotion/Storage VMotion, ACE, VDI, etc.
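
Put as arithmetic, the socket side of the licensing works out like this (a sketch assuming the 2-sockets-per-license rule above; check current VMware terms for specifics):

Code:
import math

def esx_licenses(hosts, sockets_per_host):
    # 2 CPU sockets = 1 license, rounded up per host
    return hosts * math.ceil(sockets_per_host / 2)

print(esx_licenses(2, 8))   # the 2x x3850 M2 setup above -> 8 licenses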
 
Hello everyone, I've been wondering how far the enterprise-level VMware SKUs scale. Assuming the system has a couple hundred GB of RAM in it and say 24-32 cores of CPU power, has anybody had the fortune to ever try this out?

Our new pre-production ESX cluster has 5x HP BL680c blades.

Quad hex-core CPUs @ 2.4GHz w/ 128GB RAM.

ESX Cluster Total: 288GHz / 640GB RAM

196 VMs running. Avg CPU 9% / 23% memory used.
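
Naively extrapolating from those averages (this assumes load scales linearly with VM count and ignores overcommit and HA failover headroom, so treat it as an upper bound, not a plan):

Code:
vms, cpu_used, mem_used = 196, 0.09, 0.23
ceiling = min(vms / cpu_used, vms / mem_used)      # memory is the tighter resource
print(int(ceiling), "VMs before memory saturates") # ~852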
 
Our new pre-production ESX cluster has 5x HP BL680c blades.

Quad hex-core CPUs @ 2.4GHz w/ 128GB RAM.

ESX Cluster Total: 288GHz / 640GB RAM

196 VMs running. Avg CPU 9% / 23% memory used.

I think you may have over-spec'd your host cluster A BIT...lol
 
I think you may have over-spec'd your host cluster A BIT...lol

We used to have three environments: development, pre-production, and production. Now the development teams want a new environment every few weeks to test. They only take and never give back. We also have two other clusters in pre-production that will be due back off lease in less than 12 months, so we'll get some more load on them in the next 6 months. We will probably end up ordering more before the end of the year.

I can't wait for the G6s to come out with the new CPUs. The blades will hold even more memory. IMHO, a 128GB memory cap is short for 24 cores on most hosts.
 