Memory bandwidth and guest performance in VMware

drshivas | n00b | Joined: Dec 1, 2011 | Messages: 21
Does anyone have info on memory bandwidth and guest OS performance in VMware? For example, would 16GB of DDR3 1866 have better performance than the same amount at 1333? I know in theory yes, but I am interested in practical experience or evidence.
 
Interesting question...

It's not just about the guest OS or virtual performance, but about the performance of the system underneath the virtual platform, so you can check out the benchmarks for the different memory. I always try to get the fastest memory available for servers to keep the clock speed up, but in my experience the performance increase is nominal.
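If you want a rough practical number rather than just published benchmarks, a quick copy test like the sketch below (my own rough example, not a proper benchmark; it assumes Python with numpy on the host) run against each memory config will show the usable bandwidth difference:

```python
# Quick-and-dirty host memory bandwidth check. Run on the bare host with
# each memory kit/speed setting and compare the GB/s figures.
import time
import numpy as np

N = 256 * 1024 * 1024 // 8   # 256 MB worth of float64, big enough to defeat the caches

def copy_bandwidth(runs=5):
    src = np.ones(N)
    dst = np.empty_like(src)
    best = 0.0
    for _ in range(runs):
        t0 = time.perf_counter()
        np.copyto(dst, src)             # streams 256 MB in and 256 MB out
        dt = time.perf_counter() - t0
        best = max(best, (2 * src.nbytes) / dt / 1e9)
    return best

if __name__ == "__main__":
    print("best copy bandwidth: %.1f GB/s" % copy_bandwidth())
```

Something like STREAM or the AIDA64 memory benchmark is the more rigorous way to make the same comparison.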
 
I only really noticed a difference on boxes that were running several VMs concurrently, and even then only when they were running RAM caches of one sort or another. If you're just running one VM on top of a pretty idle host it won't matter at all performance-wise. But it never hurts to get the higher-binned RAM if the price difference is reasonable, just to give you some breathing room in the future.
 
Interesting question...

It's not just about the guest OS or virtual performance, but about the performance of the system underneath the virtual platform, so you can check out the benchmarks for the different memory. I always try to get the fastest memory available for servers to keep the clock speed up, but in my experience the performance increase is nominal.

Tell a lie... if you have a larger amount of memory, and depending on what the servers are doing, I have noticed a difference between hosts with different memory configurations.
 
Thanks. My situation is multiple VMs running concurrently in VMware Workstation. I've added more RAM AND upgraded to 1866 while I was at it for nominal cost. Subjectively I can "feel" things are a bit more fluid when I keep the original workload that was running on 8GB of 1333. I know disk I/O is the biggest bottleneck, and multiple VMs swapping memory to disk is a situation best avoided; that's when everything grinds to a halt. So yes, a robust subsystem under the virtual layer will pay dividends, but only if the costs and benefits are balanced.
 
Tell a lie... if you have a larger amount of memory, and depending on what the servers are doing, I have noticed a difference between hosts with different memory configurations.

In my research, most of the talk is about 2 sticks vs. 4 sticks, and more RAM over less. Of course more is better, and faster is better, but by how much? The question is whether it's worth boosting the memory subsystem, since multiple VMs vying for resources can impact performance. I say yes, if it's cost effective. But again, just like in gaming, you may only gain 1-2 FPS from the higher bandwidth.
 
VMware Workstation? What type of functions do the VMs serve? Is it a lab?

One use case is development testing-- Alt+Tabbing between multiple running instances. The other is a server use case where Workstation is configured to run like the old VMware Server (with the VMs set to autostart), with multiple VMs serving different purposes that need 24/7 availability.
 
I know you're asking about memory and I don't like to diverge too much from the topic at hand, but if these are dedicated boxes I'd suggest going ESXi, even if it's not managed by vCenter. A Type 2 hypervisor will have many more bottlenecks than bare metal or Type 1. Even memory will be slower in this arrangement because of the extra layer. Type 1 should give a large boost in performance. However, I'm sure you have your reasons.

Disk I/O can be important, but that depends on the number of servers and their I/O load, such as DB servers. During data center migrations I have used SSDs to sustain servers temporarily while the SAN was relocated. This worked quite well, all things considered, and was a last resort due to performance and data protection concerns. This might help improve things in your VMware Workstation config if data protection isn't a consideration.

Starting all the servers at the same time will bog down your system. With ESXi you can set a start order or delay the start of VMs. I'm not sure about Workstation.

Back to the question on memory, I think it really depends on what the VMs are doing; however, VMware Workstation might not see a vast improvement.

If you let us know your system configs, people might spot something that could help performance at a reduced cost.
 
Thanks. I have looked at ESXi, but my CPUs don't have VT-d. I'm watching the used market for non-K 1155 CPUs with VT-d. The next concern with ESXi is how to back up the bare metal. I use WHS to back up my machines, which means the entire host gets backed up. I've pulled VM backups from WHS on many occasions when I failed to make snapshots, and so far that works great. Not to mention if I brick my host, I can get that back (to a degree at least).

I do understand that going to a bare-metal host would yield better overall performance since a layer of abstraction would be eliminated. I may do that with my server use case, but I need to check my motherboard specs. I know my other mobo supports VT-d.

Also, Workstation does let you set the boot order with delays in between. The only PITA is that you need to convert the VMs to "Shared VMs", which moves them around, but symlinks come in handy there. You know what's even worse than starting all the VMs at once? When you do Windows Update on the host and similar updates on the VMs at the same time. The reboot time is terrible, lol.
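For anyone who wants to skip the Shared VMs dance entirely, a rough sketch like this (the vmrun location, .vmx paths and delays are placeholders -- adjust for your own setup) run as a scheduled task at host boot staggers the starts through vmrun, the command-line tool that ships with Workstation:

```python
# Staggered autostart for Workstation VMs via vmrun (placeholder paths).
import subprocess
import time

VMRUN = r"C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe"
BOOT_ORDER = [
    (r"D:\VMs\webserver\webserver.vmx", 0),   # first VM, no delay
    (r"D:\VMs\db\db.vmx", 60),                # give the web server 60 s
    (r"D:\VMs\lab1\lab1.vmx", 120),           # then the lab box
]

for vmx, delay in BOOT_ORDER:
    time.sleep(delay)
    # "-T ws" targets Workstation, "nogui" starts the VM headless
    subprocess.run([VMRUN, "-T", "ws", "start", vmx, "nogui"], check=True)
```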

That's a good story about using SSDs to sustain servers during relocations.
 
You don't need VT-d; VT-d provides a means to pass hardware directly through to a VM. If it supports VT-x you should be OK on performance.

Backup is a completely different kettle of fish...

What is your hardware?
 
I thought VT-d was needed for ESXi. Hmm.

Well, my "lab" use case is just an ASUS Z68 board with an i3-3245, 16GB of RAM (1866), and a 1TB HDD. This has 6 VMs that don't necessarily all need to be on at the same time.

My server is an H61 board with a 2500K, 16GB of RAM (1333), and a 1TB HDD.

All are Windows hosts backed up by WHS-- hardly anything to brag about, but it works, since half of my VMs are Mac OS X from Lion up to Yosemite, with some Ubuntu and CentOS mixed in. I would lose that backup route if the ESXi host were no longer connected to WHS, hence my interest in backing up the entire host, lock, stock and barrel.
 
I'm sorry to burst your bubble, but the i3-3245 only supports 1333 and 1600 memory, so your memory is probably downclocked to 1600. There is a CPU utility you can use to verify that-- I think it's called CPU-Z, IIRC. I think your 2500K is maxed out at 1333.
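If you'd rather not install anything just to check, a rough sketch like this will dump what the DIMMs report (wmic is built into Windows; whether "Speed" shows the rated or the currently running value varies by board and BIOS, so CPU-Z is still the more trustworthy check):

```python
# Dump per-DIMM info as reported by WMI (Win32_PhysicalMemory).
import subprocess

out = subprocess.run(
    ["wmic", "memorychip", "get", "BankLabel,Capacity,Speed"],
    capture_output=True, text=True
)
print(out.stdout)
```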

I suspect you'll be OK virtualizing them with ESXi as far as compatibility goes, but you might need to sideload the network card drivers. Apart from that, I wouldn't expect to run too many VMs, and the load would need to be light. Saying that, you have 6 running...

As far as backup goes, I assume the budget is low or zero and the backups go to local storage. I know of a couple of options like Veeam, but the free route will need some investigation...

When you said 24/7 availability, what did you mean? Are these business-critical servers?
 
My Z68 mobo supports memory OC, so the RAM is 1866 and running at that speed. Memtest86+ and CPU-Z both confirm it, so I am set there :) I'd need to get a new board for the 2500K if I were to boost the RAM speed there. But then again, I'm not fast-switching those VMs, and once they're up, they work just fine as is.

Thanks for the tip on Veeam, I'll check it out.

For the server use case, I still do some hosting out of the house, so one VM is a dedicated web server that is up 24x7. I haven't gotten around to moving it to the cloud, where it should be.
 
We recently filled all 24 slots in our System x3650 server for performance testing. When all slots are filled, the RAM gets downclocked automatically. I don't know what the speed was before (it was the highest possible) or after, but when running a web page stress test there was a 30% difference in average loading time. Both times with the same test data, the same test scenarios, and multiple runs...
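For anyone who wants to reproduce that kind of comparison, a stripped-down sketch along these lines (the URL and run count are placeholders, and a real stress tool would add concurrency) is enough to show the trend when rerun against each memory configuration:

```python
# Time repeated loads of one page and report the average.
import time
import urllib.request

URL = "http://testserver.example/page"   # placeholder for the page under test
RUNS = 50

def average_load_time():
    total = 0.0
    for _ in range(RUNS):
        t0 = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        total += time.perf_counter() - t0
    return total / RUNS

if __name__ == "__main__":
    print("average load time over %d runs: %.3f s" % (RUNS, average_load_time()))
```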
 
Just like people who put K&N filters in their cars and claim their butt dyno tells them the car is faster, it's mostly in the eye of the beholder.

Is 1866MHz RAM faster than 1600MHz and 1333MHz? Yes, of course. Is the difference big enough that if you move an unsuspecting user's virtual desktop from a host with 1866MHz RAM to an otherwise identical host with 1333MHz RAM, they'll immediately notice and open a help desk ticket? No.
 