Virtual machine on HP server using VMware goes dark to the network after a period

dalearyous

[H]ard|Gawd
Joined
Jun 21, 2008
Messages
1,922
I deployed a new HP server to a remote site with a few virtual machines. The VM that is the domain controller, for whatever reason, goes dark to the network after a period of time (a couple of hours). I can't RDP, ping, or do anything to reach the server. However, if I hop on vSphere and connect to the host, I can launch the virtual machine console and everything seems fine. It is running and acting normally.

If I restart the virtual machine, it will work like normal for a few hours. I have also found that if I run a continuous ping to the VM from another server, it will stay alive.
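In case it helps, the keep-alive workaround is literally just a never-ending ping from another always-on box at the site (the address below is made up):

Code:
# Hypothetical address for the DC; run from another server that stays up.
# The steady traffic seems to keep the VM reachable on the network.
ping -t 10.10.0.10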

I know this is vague, but does anyone have any idea where to begin?
 
VMware Tools is installed, and I doubt it's a power issue.

Accessing the server via the vSphere console does not "wake" the device or make it accessible on the network again; only a restart does. Also, when it is not working, the network card shows no internet access.
 
Was this a P2V or a new OS install? What version of vSphere are you using? Did you use the HP vSphere ISO?
Based on the info, it does sound like power-saving settings kicking in and putting the NIC to sleep.
 
Let's say you connect to the console and then ping out from the VM. Can you then ping it and connect to it remotely?
 
Was this a P2V or a new OS install? What version of vSphere are you using? Did you use the HP vSphere ISO?
Based on the info, it does sound like power-saving settings kicking in and putting the NIC to sleep.

Server 2012, ESXi 6, and I used the Server 2012 ISO. I have deployed maybe 20 of these in the last 6 months with no issues.

Let's say you connect to the console and then ping out from the VM. Can you then ping it and connect to it remotely?

When you ping out, let's say to google.com or 8.8.8.8, you get nothing. I cannot ping the broken VM from another VM on the same host. However, another VM on the same host works just fine.
 
Sounds like a gateway or routing issue. You could also build a new VM, dcpromo it, and retire the old one to see if it's a goofy Windows issue.
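If you go that route, note that 2012 does the promotion through PowerShell instead of the old dcpromo wizard. A rough sketch, assuming the new VM is already joined to the domain (the domain name and credentials below are placeholders):

Code:
# Add the AD DS role first (names below are examples only).
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote this server to an additional domain controller in the existing domain.
Install-ADDSDomainController `
    -DomainName "corp.example.com" `
    -InstallDns `
    -Credential (Get-Credential CORP\Administrator)

After replication settles, you would transfer the FSMO roles and then demote the old DC.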
 
Does the NIC have the setting "Allow the computer to turn off this device to save power" turned off? This is on by default in Windows installations. We found this would prevent end users from connecting to their VDI desktops through View until the VM was rebooted.
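For what it's worth, on 2012 you can check and clear that setting from PowerShell instead of clicking through Device Manager on every VM. The adapter name below is just an example:

Code:
# Show the current power management settings for the adapter (name is an example).
Get-NetAdapterPowerManagement -Name "Ethernet0"

# Stop Windows from powering down the NIC to save energy.
Disable-NetAdapterPowerManagement -Name "Ethernet0"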
 
Does the NIC have the setting "Allow the computer to turn off this device to save power" turned off? This is on by default in Windows installations. We found this would prevent end users from connecting to their VDI desktops through View until the VM was rebooted.

It does; I unchecked it. Will see if that helps.
 
The only other thing I can think of is related to routing issues through VLANs. I've seen this happen a few times with VPN connections if split-tunnel is turned on, but I don't think that would apply in your case. A static route usually fixed those.
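If it does turn out to be routing, adding the static route is just a one-liner from an elevated prompt (the network, mask, and gateway below are made up):

Code:
# -p makes the route persistent across reboots; addresses are placeholders.
route -p add 10.20.0.0 mask 255.255.0.0 10.10.0.1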

Any chance these are on different networks, VLANs or subnets? I'm not a networking guy, but I would start to agree with Vengance_01 above.
 
I swear I ran across a VMware forum post recently about this exact issue on recent HP servers and the current 5.5/6.x VMware releases. I can't find it now, but I seem to remember one workaround being to switch from the E1000/E1000E NIC to the VMXNET3 NIC. It was similar to this article, but I don't think this is the exact problem I was reading about:
http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=2109922
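If you want to try that, you can swap the adapter type from PowerCLI rather than editing the VM by hand. A sketch, assuming the vCenter address and VM name below; the VM should be powered off, and Windows will treat it as a new NIC, so the static IP has to be re-entered:

Code:
# vCenter address and VM name are examples only.
Connect-VIServer -Server vcenter.example.local

# Change the virtual NIC type to VMXNET3 (VM powered off).
Get-VM -Name "DC01" | Get-NetworkAdapter |
    Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false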
 
I have the same issue: when I clone a Windows VM, it actually ends up with 2 default routes listed. netstat -r should show this. It is an issue with the VMware Tools driver for the vmxnet interface, and VMware also advises using the Intel one for Windows.
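If you do see two default routes, you can clean it up without rebuilding the VM by deleting the stale one (the gateway below is a placeholder for whichever entry is wrong):

Code:
# List the routing table and look for duplicate 0.0.0.0 default routes.
route print

# Delete the default route that points at the old/wrong gateway (example address).
route delete 0.0.0.0 mask 0.0.0.0 192.168.1.254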
 
My netstat -r:

(screenshot: Screen_Shot_2015_12_08_at_10_01_07_AM.png)
I have changed the NIC type to VMXNET3 ... will see how that goes.
 