ESX VM's taking up more HD space than I thought

cyr0n_k0r

Supreme [H]ardness
Joined
Mar 30, 2001
Messages
5,360
I have some VMs inside ESX 3.5.
I gave each VM 20GB for its VMDK. Each one reports 20,971,520.00 KB, which is right around what I expect. However, the storage isn't matching up to how many VMs I should be able to store on the disks, so I looked at the datastore browser.

Why does each VM have a vswp file that is equal to the memory size for each VM? Shouldn't that be stored only in RAM? Why is it making a physical file?

This screws up all my calculations on how many VMs I can run per ESX host, because of the added 1GB file for each VM.
 
Because they enable ESX to swap a VM's RAM out to disk if you overcommit memory; it lets you use more memory than you actually have. It's a feature. If you don't want it to use a vswp file, set the memory reservation for the VM to 100% of its configured RAM.

It's a feature since disk is cheap and RAM is not. You don't want to fill the datastore, btw: keep 10% free, 5% minimum, for performance.
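
If you want to sanity-check the sizing, here's a quick sketch in Python (the numbers are just examples, not pulled from your setup):

# The .vswp file is sized at configured RAM minus the memory
# reservation, so a 100% reservation shrinks it to zero.
def vswp_size_gb(configured_ram_gb, reservation_gb=0):
    return max(configured_ram_gb - reservation_gb, 0)

# Disk actually consumed per powered-on VM: the VMDK plus the swap file.
def vm_footprint_gb(vmdk_gb, configured_ram_gb, reservation_gb=0):
    return vmdk_gb + vswp_size_gb(configured_ram_gb, reservation_gb)

print(vm_footprint_gb(20, 1))     # 21 GB with no reservation
print(vm_footprint_gb(20, 1, 1))  # 20 GB with a 100% reservation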
 
Psh, the servers have 16GB of RAM. No overloading is going to happen.
Thanks. Do I need to set the RAM reservation on EACH VM? What a pain... that's over 30 VMs I need to modify.
 
I adjusted the reservation on one of the VMs, but the vswp file remains unchanged. Does this require a reboot of the VM?
 
Not a reboot, but a shutdown and power-on. The VMX process needs to exit entirely; the vswp file is recreated at the new size the next time the VM powers on.

No offence, but 16GB is small for an ESX server. Most have 32-64GB, with a few up to 128GB. I've actually seen 150:1 server consolidation before.

Honestly, I'd leave the vswp alone: ESX has a very advanced memory manager that will let you greatly overcommit memory if needed by using a balloon driver. If you set the reservation to 100%, you can't do that at all. Just account for the extra space; disk is cheap. Are you on a SAN, or local storage only?
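
To put rough numbers on what overcommit buys you, here's a toy model in Python (figures are illustrative, not your environment):

# Toy overcommit model. With no reservations, ESX can promise guests
# more RAM than physically exists, as long as the balloon driver can
# reclaim the difference under pressure.
physical_ram_gb = 16
vm_count = 18
ram_per_vm_gb = 2                       # hypothetical per-VM allocation

configured = vm_count * ram_per_vm_gb   # 36 GB promised to guests
print(f"overcommit ratio: {configured / physical_ram_gb:.2f}x")

# With 100% reservations, every GB must be backed by physical RAM,
# so far fewer VMs can even power on.
print(f"max VMs at 100% reservation: {physical_ram_gb // ram_per_vm_gb}")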
 
Unfortunately, local storage only.

Disks might be cheap, but that extra GB for each VM changes the number of VMs per ESX host from 18 to 16, which means we need to buy another $3,000 server after only one more VM client goes live, instead of four.

Kinda pisses me off that I just keep having to throw money at this software.
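
For the record, here's the back-of-the-envelope I'm working from (datastore size rounded for illustration):

# Density math: swap files shave VMs off each host's local datastore.
datastore_gb = 360        # approximate usable local datastore
vmdk_gb = 20
vswp_gb = 2               # per-VM swap file (configured RAM, no reservation)

print(datastore_gb // vmdk_gb)              # 18 VMs without swap files
print(datastore_gb // (vmdk_gb + vswp_gb))  # 16 VMs with them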
 
If you read the manual and configuration guide, it explains this. If you're on local storage, then disk is very possibly more at a premium. Set the reservation to 100%, shut down the VM, and then power it back up.
 
Remember not to plan for 100% disk usage. You still need space for snapshots as well as the log files.
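
Something like this sketch (Python, all inputs hypothetical) shows how fast that headroom eats into density:

# Hold back the 10%-free rule of thumb plus some space for snapshots
# and logs before counting how many VMs fit.
def usable_gb(raw_gb, free_fraction=0.10, snapshot_log_reserve_gb=20):
    return raw_gb * (1 - free_fraction) - snapshot_log_reserve_gb

per_vm_gb = 22   # 20 GB VMDK + 2 GB vswp
print(int(usable_gb(360) // per_vm_gb), "VMs fit with headroom")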
 
To be honest, if you run your disks so tightly packed that just TWO more GB per VM makes or breaks the deal, then the problem isn't with ESX.

Having the swap file is a safeguard that ensures uptime should there be any issue with physical memory. Disabling it is bad practice, even though it will likely never be used.

It should cost you far less than $3k to add a bit more disk space; no need to buy another server.
 
It's TWO more GB times EIGHTEEN VMs, so that's an extra 36GB.
And no, it's not easy to add more disk space. These servers are 1U machines.
 
If you are looking at that many servers, an iSCSI device could allow you to mitigate this issue.
 
I understand what you are saying, and I agree that in a 1U config HD space isn't as abundant.

What I don't agree with is that you blame ESX ("Kinda pisses me off that I just keep having to throw money at this software") for what's apparently inadequate hardware. Use the right tool for the job.

Obviously I am in no position to tell you how to run your business. Having said that, running 18 production VMs on very tight, low-spindle-count local storage isn't good practice.

You said that 16GB is plenty of memory. Well, it depends. At 18 VMs you are a good candidate for oversubscribing your memory, and the vswp files are part of what ensures memory availability to your clients when they need it.

ESX will use a number of memory management techniques to optimize memory usage. When it runs out of physical memory it will start swapping to the guest OS swap file, and only when that space runs out will it swap to the vswp file.

Perhaps one solution that may work for you is to put the vswp files onto networked storage. An NFS mount will work, as will any other type of networked storage. You don't have to keep that file local, since the odds it will ever be used are small unless something goes wrong. I know you said "local storage only," but given the situation you may find a machine you can easily set up as very slow network storage to hold files that will likely never be used, especially since you don't need to give your clients access to that storage.
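
If you go that route, the per-VM knob is, if I remember the 3.5 docs right, the sched.swap.dir entry in the .vmx (double-check the configuration guide). A rough sketch of scripting it in Python, with hypothetical paths:

# Point a VM's swap file at another datastore by setting sched.swap.dir
# in its .vmx. The option name is from my memory of the ESX 3.5 docs --
# verify it before relying on this. Paths are hypothetical, and the VM
# must be powered off; the change takes effect at the next power-on.
def set_swap_dir(vmx_path, swap_dir):
    with open(vmx_path) as f:
        lines = [l for l in f if not l.startswith("sched.swap.dir")]
    lines.append(f'sched.swap.dir = "{swap_dir}"\n')
    with open(vmx_path, "w") as f:
        f.writelines(lines)

set_swap_dir("/vmfs/volumes/local1/vm01/vm01.vmx",
             "/vmfs/volumes/nfs-swap/vm01")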

While you could essentially disable it, that would IMHO only be asking for difficulties that may be tricky to troubleshoot, because when they occur no one may remember that you got rid of the vswp file.
 
I would submit that the OP is using 1U pizza boxes, he's maxed out his RAM slots (and likely the motherboard's capability), and he's past his budget at this point. He sounds cranky enough for this guess to be at least close.

At any rate, OP, you're starving your VMs, IMO. Go get one of those cheap HP home storage server appliances, configure NFS, get a cheap 8-port GbE switch, and see if you can either add RAM to your servers or pick up a used one on eBay or something. ESXi might be free, but running it on ill-equipped hardware due to lack of planning is not the software's fault.
 