Child of Wonder
2[H]4U
Joined: May 22, 2006
Messages: 3,270
How have you guys been setting this up?
One of the advantages of UCS is that, if architected correctly, we can ensure vMotion traffic stays local to the Fabric Interconnects. Prior to vSphere 5, we'd pin vMotion to Fabric A or B only, with one vmnic active and the other standby.
However, doing the same thing with vSphere 5.x and multi-NIC vMotion is more problematic.
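That pre-vSphere 5 pinning can be done on a standard vSwitch with esxcli; a minimal sketch, assuming a portgroup named "vMotion" with the Fabric A uplink on vmnic0 and the Fabric B uplink on vmnic1 (names are placeholders for your environment):

```shell
# Pin the vMotion portgroup to the Fabric A uplink (vmnic0 here),
# keeping the Fabric B uplink (vmnic1) as standby only.
# Portgroup and vmnic names are examples - adjust to your host.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "vMotion" \
    --active-uplinks vmnic0 \
    --standby-uplinks vmnic1

# Verify the resulting teaming policy
esxcli network vswitch standard portgroup policy failover get \
    --portgroup-name "vMotion"
```

With this in place, vMotion only fails over to Fabric B if the Fabric A path goes down entirely.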
In an ideal scenario, we'd like to keep all vMotion traffic local to the Fabric Interconnects rather than traversing the client's LAN. In testing, I've found there's simply no way to do this while adhering to VMware's best practice of putting both vMotion vmkernels on the host in the same subnet and VLAN. If we do that, the vmkernel on Fabric A will try to talk to the vmkernel on Fabric B, and that vMotion traffic has to go north into the LAN.
The only way I've found to prevent this is to use two subnets for vMotion: one for Fabric A and one for Fabric B. While this works, it falls outside of VMware best practices. If I use the same subnet but try to prevent Fabric A vmkernels from connecting to Fabric B vmkernels, all vMotions fail.
I've heard of some people bucking best practices and using different subnets, while others go ahead with one subnet and don't worry about the vMotion traffic going north of the FIs.
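For reference, the two-subnet layout could be set up on each host along these lines; the portgroup names, vmk numbers, and addresses below are illustrative, not from any particular environment:

```shell
# Two vMotion vmkernels, one pinned to each fabric, each in its own subnet.
# "vMotion-A" / "vMotion-B" would be portgroups pinned to Fabric A / B uplinks.
# vmk numbers, portgroup names, and addresses are examples only.
esxcli network ip interface add --interface-name vmk1 --portgroup-name "vMotion-A"
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.10.11 -N 255.255.255.0 -t static

esxcli network ip interface add --interface-name vmk2 --portgroup-name "vMotion-B"
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.20.11 -N 255.255.255.0 -t static

# Enable vMotion on both vmkernels (ESXi 5.x)
vim-cmd hostsvc/vmotion/vnic_set vmk1
vim-cmd hostsvc/vmotion/vnic_set vmk2
```

Because the two vmkernels sit in different subnets, the Fabric A interface on one host only ever talks to the Fabric A interface on the peer host, so each stream stays on its own Fabric Interconnect.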
What do you guys prefer in your implementations?