Nexus 1000v question

kalex1114

Hello

I have a question for those who run the Nexus 1000V in their environment.
We are doing migrations, and we have VMs that need to move from a cluster on Nexus 1000V A to a cluster on Nexus 1000V B.
Currently we can't do this without first moving those VMs to standard vSwitches, which creates extra work on both the source and destination clusters.

Is it possible to join one ESXi host to both Nexus 1000V A and Nexus 1000V B, move VMs onto that host, and use it as a proxy, doing a network migration from one Nexus to the other?


Thanks
 
Per VMware Communities, this limitation has existed for as long as I have used the 1KV, and it still exists today.


http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/qa_c67-556624.html

Q. The service console and virtual machines typically are connected to two different vSwitches. Can two Cisco Nexus 1000V instances be started?
A. The Cisco Nexus 1000V architecture, with a single VEM per host and advance networking capabilities, allows proper segmentation of VMware ESX functions while still providing a consistent management entity. Only one VEM instance is required per physical host.

System Overview
The Cisco Nexus 1000V Series is a software-based switch that spans multiple hosts running VMware ESX or ESXi 4.0. It consists of two components: the Virtual Supervisor Module, or VSM, and the Virtual Ethernet Module, or VEM. The VSMs are deployed in pairs that act as the switch's supervisors. One or more VEMs are deployed; these act like line cards within the switch.
The VSM is a virtual appliance that can be installed independent of the VEM: that is, the VSM can run on a VMware ESX server that does not have the VEM installed. The VEM is installed on each VMware ESX server to provide packet-forwarding capability. The VSM pair and VEMs make up a single Cisco Nexus 1000V Series Switch, which appears as a single modular switch to the network administrator.
Each instance of the Cisco Nexus 1000V Series Switch is represented in VMware vCenter Server as a vNetwork Distributed Switch, or vDS. A vDS is a VMware concept that enables a single virtual switch to span multiple VMware ESX hosts. The Cisco Nexus 1000V Series is created in VMware vCenter Server by establishing a link between the VSM and VMware vCenter Server using the VMware VIM API.
VMware's management hierarchy is divided into two main elements: a data center and a cluster. A data center contains all components of a VMware deployment, including hosts, virtual machines, and network switches, including the Cisco Nexus 1000V Series.

Note: A VMware ESX host can have only a single VEM installed.
 
Probably easier just to shut the VMs down, change their network, and boot them back up; that's what I did the last time I had to migrate between clusters using different 1000Vs.
 
If only. The VMs are in production, and getting approval to shut them all down is like herding cats :)

I also have about 1500 VMs to move across different clusters, so I will have to do the standard vSwitch dance.
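For what it's worth, since each host can carry only one VEM, every VM ends up doing the same two-hop reassignment: 1000V A portgroup → standard vSwitch portgroup → 1000V B portgroup (with the vMotion in between). A rough sketch of planning that in batches might look like the following; the portgroup names and batch size are made-up placeholders, and the actual reassignments would be done through vCenter or PowerCLI, not this code:

```python
# Sketch only: plan the "standard vSwitch dance" for a bulk migration.
# Portgroup names ("n1kv-A-pg", "vss-staging-pg", "n1kv-B-pg") and the
# batch size are hypothetical, not from the thread.

def plan_migration(vms, src_pg, staging_pg, dst_pg, batch_size=50):
    """Return per-batch step lists: (vm, from_portgroup, to_portgroup)."""
    batches = [vms[i:i + batch_size] for i in range(0, len(vms), batch_size)]
    plan = []
    for batch in batches:
        plan.append(
            # Hop 1: off Nexus 1000V A onto the standard vSwitch
            [(vm, src_pg, staging_pg) for vm in batch]
            # Hop 2: after vMotion to the destination cluster, onto 1000V B
            + [(vm, staging_pg, dst_pg) for vm in batch]
        )
    return plan

vms = [f"vm{n:04d}" for n in range(1500)]
plan = plan_migration(vms, "n1kv-A-pg", "vss-staging-pg", "n1kv-B-pg")
print(len(plan))   # number of batches
print(plan[0][0])  # first reassignment step
```

Batching like this keeps each change window small, which matters when nothing can be shut down for long.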
 
Yeah, this is one of the reasons we decided against implementing the 1000V when deploying our 3 UCS domains. The other main one was a planned NSX deployment.
 
Pretty sure the recommendation at this point is to never implement the 1000V, no matter what, right? :D
 
I've taken it out more often than I've implemented it... :eek:
 
Wasn't my decision :) It looks like we are moving away from it, but not in the near future.
 