Hi all, I was wondering if you could give your opinions on an issue we're having.
We have a cluster of three ESXi 5.1 hosts on an HP Gen8 blade system.
I set up a vSphere Distributed Switch (vDS) to give us easier centralized management of networking and to let us use private VLANs (pVLANs).
All hosts are now part of the switch and have NICs assigned to the uplinks (default trunking).
Now to the issue: if I have two VMs on the same pVLAN (both community), they can communicate as long as they're on the same ESXi host; as soon as I move one VM to another host, the communication stops.
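For reference, here's roughly how I'm pulling the pVLAN map off the distributed switch to double-check the primary/secondary IDs. This is a minimal pyVmomi sketch; the vCenter address, credentials, and the switch name "dvSwitch0" are placeholders, not our real values:

```python
# Minimal sketch: dump the pVLAN map of a distributed switch via pyVmomi.
# "vcenter.local", the credentials, and "dvSwitch0" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab shortcut: skip certificate validation. Use proper certs in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the inventory for VMware distributed switches.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    for dvs in view.view:
        if dvs.name != "dvSwitch0":
            continue
        # Each entry maps a primary pVLAN ID to a secondary ID and a type
        # (promiscuous, community, or isolated).
        for entry in dvs.config.pvlanConfig:
            print(f"primary={entry.primaryPvlanId} "
                  f"secondary={entry.secondaryPvlanId} "
                  f"type={entry.pvlanType}")
finally:
    Disconnect(si)
```

As far as I understand, every VLAN ID this prints (primary and secondary) would have to be carried by everything sitting between the hosts.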
I think the issue is with the Virtual Connect modules the blades use to communicate with each other inside the enclosure. I'm not an expert here, but I did notice an enclosure setting labeled "VLAN tunneling", which apparently needs to be checked for the modules to pass VLAN tags. If I understand pVLANs correctly, traffic between hosts leaves the uplink tagged with the secondary VLAN ID, so anything in the path that strips or drops those tags would break exactly the inter-host case, which matches what we're seeing.
This setting is currently unchecked... The guy who set up the blades claims this isn't the issue and that the problem is with my distributed switch settings.
What's your take on this?