vSphere Design Considerations - Network ?'s

So on the VIC1280 connected to a 2208 I can get 80Gb per blade?
 
Ahhh...btw..was offered a very good position at a Cisco/VMware/Dell VAR in Syracuse today. I'm very excited to start. It's been a long journey to get me here but it's finally paying off.

It's for a Sr Data Center/Virtualization Engineer. I know I still have a lot to learn and an uphill climb before me, but now I feel I'm on the right path and that's what matters.
 

Congrats!
 

Congratulations!

Edit: If you don't mind me asking, what was your background before you went down this career path? You have given me some inspiration to polish up my resume and see if I can get a job with a VAR. For me it has been a lot of back and forth over the last couple of years on which path I wanted to take for my career. I have 10 years of experience with traditional networks, Cisco Unified Communications, and now data center networking and virtualization. I was leaning heavily towards traditional networking, going so far as studying for my CCIE R&S cert. I have found, however, that I enjoy working on the data center side of things much more.

Good Luck with your new job. It sounds like you will really enjoy it.
 
mct, check your PM as I don't want to get this thread locked for being off topic. Thanks to everyone, especially NetJunkie, for helping me along. I look forward to learning and discussing what I'm learning/doing with everyone in the near future!
 

Excellent! Are you from Upstate, NY?
 
I know I'm digging this post up from the grave, but it IS the original subject.

Yesterday, I was at a customer site to begin a vSphere upgrade/SRM project. Part of this was to upgrade their current UCS-B blades by doubling the memory capacity. I ran a Health Analyzer, etc., in a previous small project and found some glaring things there that I addressed. I started to evacuate the hosts to add the memory and found that the host would not go into maintenance mode, and on top of that the VMs that did migrate could not connect to the network on the other hosts. I validated all the VLANs on all the hosts, etc., and did some network troubleshooting on the northbound switch. What I found was that the northbound switch's MAC table was not accurate with respect to the network interface in the virtual machines' network settings.

I recalled NetJunkie talking about Active/Standby load balancing in the TrainSignal UCS-B course and immediately validated what was happening.

The previous partner that implemented this had set up all vNICs within UCS-B for hardware failover AND on the A fabric. They also had a teaming policy of Active/Active for the two vNICs, etc.

To get around this, I set the other vNIC to standby in the vSphere load balancing policy. I also found the NFS vSwitch was set up this way as well.
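
For anyone cleaning up a similar config, this is roughly what that fix looks like in PowerCLI. It's just a sketch: the vCenter/host names, port group names, and vmnic numbers are placeholders for whatever your environment actually uses.

Code:
Connect-VIServer -Server vcenter.lab.local   # placeholder vCenter

$vmhost  = Get-VMHost -Name esx01.lab.local                                   # placeholder host
$fabricA = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0    # Fabric A vNIC
$fabricB = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic1    # Fabric B vNIC

# Set the failover order to active/standby on the VM and NFS port groups
foreach ($pgName in "VM Network", "NFS") {
    Get-VirtualPortGroup -VMHost $vmhost -Name $pgName |
        Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive $fabricA -MakeNicStandby $fabricB
}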

I wanted to clarify this:

1. You should either do hardware failover in UCS or use Active/Standby within the VMkernel or VM port group teaming settings?

2. If you choose to do teaming on the vSphere side with multiple vNICs, you should choose one from Fabric A and one from Fabric B, with no hardware failover in UCS, and you can use Active/Active based on Port ID?

3. For traffic that you do not want to traverse the upstream switch, such as vMotion, you can use one vNIC with hardware failover?

I know that there are multiple ways of doing this but need some clarification...for you UCS-B gurus. I'll admit, for UCS-B, though I've taken courses, have one in my lab, and played with it fully, I'm still fairly new to this product.

When I start the vSphere upgrade, I want to do this right, so my plan is to set up a new service profile with the correct config and apply it to each blade as I do the upgrade.
 
1. If you do hardware failover there really isn't a need to put vNICs on both fabrics. If Fabric A fails it'll move the vNIC to Fabric B. This happens without the vSphere host being aware, so you'd never activate that standby vNIC.

2. Yes...but I only do that for VM traffic. Not for vMotion/FT/IP Storage. I want those things to stay on the same fabric..but for VM traffic I'll usually do active/active, UNLESS I know there is a lot of app traffic between VMs and then I may do something else. Again, try to reduce traffic leaving one FI and coming back to the other.

3. Yes. Or Active/Standby and no hardware failover (up to you).
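
To make #2 concrete, here's a rough PowerCLI sketch of a VM-traffic port group with one vNIC from each fabric active and Route Based on Originating Virtual Port ID. The host, port group, and vmnic names are just examples, not anything from the environment discussed above.

Code:
$vmhost  = Get-VMHost -Name esx01.lab.local                                   # placeholder host
$fabricA = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic2    # vNIC on Fabric A
$fabricB = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic3    # vNIC on Fabric B

# VM traffic: both fabrics active, load balanced by originating virtual port ID
Get-VirtualPortGroup -VMHost $vmhost -Name "VM Network" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId `
        -MakeNicActive $fabricA, $fabricB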

You see a lot of UCS installs get this stuff wrong. We clean up a lot of other partner installs.

And FYI, one of our engineers, Jeremy Waldrop, just updated his UCS healthcheck script the other day:

http://jeremywaldrop.wordpress.com/2012/04/04/cisco-ucs-powershell-health-check-report/
 
Awesome man..thanks. One more question: NFS. The previous partner set up a VNX along with UCS-B, and they attached the VNX to the northbound switch and didn't use appliance ports on the FIs. I understand how the block protocols should be set up with FC/FCoE/iSCSI, but this really is my first foray with NFS in production with UCS-B. Any pointers on how that should be set up? My thought was two vNICs, one on Fabric A, one on Fabric B, no hardware failover, then active/active teaming on the vSwitch?

I was also thinking back to Chris Wahl's blog posts about NFS load balancing. Right now, the previous vendor sold them Enterprise...so no dvSwitch, which sucks, but I'm trying to set this up optimally. I think I'll be SOL though because there is only one NFS IP, etc., and I can't touch the VNX since we are not an EMC partner...I know how to do it, but it's too much liability. Any thoughts on that, Jason?
 
NFS is easy. There is no such thing as NFS load balancing, so just pick a fabric, make a vNIC for it, enable fabric failover, and call it good. If you want to do any load balancing you'd have to create multiple exports on the VNX and manually put some VMs on one export, others on the other...etc.

Just not much flexibility.
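
If it helps, a minimal PowerCLI sketch of that layout might look like this: one NFS vmkernel on a single fabric (with fabric failover handled in UCS), and two VNX exports mounted as separate datastores so VMs can be split between them by hand. The IPs, export paths, and names are made up for illustration.

Code:
$vmhost = Get-VMHost -Name esx01.lab.local                      # placeholder host
$vsw    = Get-VirtualSwitch -VMHost $vmhost -Name vSwitch1      # vSwitch with the NFS vNIC uplink

# NFS vmkernel port; the underlying vNIC has fabric failover enabled in UCS
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw -PortGroup "NFS" `
    -IP 192.168.50.21 -SubnetMask 255.255.255.0

# Two exports from the VNX mounted as two datastores for manual VM placement
New-Datastore -VMHost $vmhost -Nfs -Name "VNX-NFS-01" -NfsHost 192.168.50.10 -Path "/vol/nfs01"
New-Datastore -VMHost $vmhost -Nfs -Name "VNX-NFS-02" -NfsHost 192.168.50.10 -Path "/vol/nfs02"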
 
Yeah..I figured..ok..thanks again man! BTW, I'm looking at a new partner on the job front that has a bigger Cisco data center focused business so I can really work with UCS-B regularly...this job was a great start but I'm thinking it's time to move on....eventually I would like to get into some product management within VMware..etc...we'll see...
 
I really want to..the oldest has two years left of HS and I don't want to pull her out..BUT, I'm starting to get my wife open to it...so that's a plus. I really would like to be near the Wrightsville Beach area or even the northern South Carolina coast above Myrtle Beach, though I've heard people raving about Charlotte...and of course western North Carolina is somewhat like upstate NY from what I've seen..so..we'll see, oh yeah..plus I could own an assault rifle...lol.
 

I, like you, would love to live in Asheville, NC or the Charlotte area.
 

I have family down in Wilmington so I am down that way a lot. Wrightsville Beach is nice. It is my preferred beach to go to with my wife and son. I live close to Charlotte and there are many nice areas in and around it to live. I don't think you could go wrong with either place.
 
Yup, moving to the Carolinas is in my wife's and my five-year plan as well. My last name is Wilmington, so I find it rather fitting. :p We've been to the Carolinas numerous times and she wants to be near the beach. Love it down there.
 
Resurrecting again.

Let's discuss stretched cluster configs with UCS-B and networking. Obviously we have a need to vMotion both intra- and inter-datacenter.

For example, a campus that will have two separate UCS-B "pods" across campus; however, vSphere is set up in a single cluster configuration.

While using site affinity we still want workloads to vMotion within their fabric, etc., but during an HA event a workload could be restarted in DC-A or DC-B and DRS affinity would pull it back.
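
For the site-affinity piece, something like the following PowerCLI sketch is what I have in mind: host and VM groups per DC plus "should run on" rules, so HA can still restart a workload on the other side and DRS pulls it back. The cluster, host, and VM names are placeholders, and this assumes a PowerCLI version that includes the DRS group/rule cmdlets.

Code:
$cluster = Get-Cluster -Name "Stretched-Cluster"                 # placeholder cluster

$dcAHostGroup = New-DrsClusterGroup -Name "DC-A-Hosts" -Cluster $cluster `
    -VMHost (Get-VMHost -Name esx-a-01.lab.local, esx-a-02.lab.local)
$dcBHostGroup = New-DrsClusterGroup -Name "DC-B-Hosts" -Cluster $cluster `
    -VMHost (Get-VMHost -Name esx-b-01.lab.local, esx-b-02.lab.local)

$dcAVmGroup = New-DrsClusterGroup -Name "DC-A-VMs" -Cluster $cluster -VM (Get-VM -Name "app01", "app02")
$dcBVmGroup = New-DrsClusterGroup -Name "DC-B-VMs" -Cluster $cluster -VM (Get-VM -Name "app03", "app04")

# "Should" rules (not "must") so HA is free to restart VMs in the other DC
New-DrsVMHostRule -Name "DC-A-Affinity" -Cluster $cluster -VMGroup $dcAVmGroup `
    -VMHostGroup $dcAHostGroup -Type ShouldRunOn
New-DrsVMHostRule -Name "DC-B-Affinity" -Cluster $cluster -VMGroup $dcBVmGroup `
    -VMHostGroup $dcBHostGroup -Type ShouldRunOn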
 
Going through a similar scenario with a client of mine as well. In their case, they want two completely redundant pairs of UCS, networking, and SAN in the same datacenter with a vSphere cluster stretched across them. Then they'll run all their VMs in clustered pairs, one half in one physical domain and the other half in the other physical domain. Since they're required to give maximum redundancy to their customers, this is the strategy they currently use, and they want to maintain it even while moving to UCS.

Looking into using VPLEX to virtualize the storage between the two separate SANs to achieve this.
 
The answer to this is VPLEX for storage virtualization so you get true active/active storage. Split a cluster across the sites. Use affinity rules to keep apps in the right DC, but let HA fail them over to the other side. We've done a number of deployments like this.

The key is connectivity between the sites...latency, since it's sync replication (basically). You'll need stretched Layer 2 adjacency, which we do with stretched VLANs or OTV.
 

We are going with a NetApp stretched cluster in this scenario. First deployment of UCS Central as well. We looked at VPLEX and wanted VPLEX; unfortunately, the VAR we asked to assist with EMC could not come to the table quickly enough and fundamentally couldn't quite grasp that this is a software-driven design. Of course the hardware must be supported, but the overall design is really driven by vSphere.

I've vetted all the requirements, etc., so we're good on the latency/bandwidth front.

Since there are essentially two UCS domains and vMotion would have to traverse between the two, I was curious how you would configure the vMotion network to accommodate that AND at the same time accommodate inter-fabric and intra-DC vMotion, etc.
 
Your intra-fabric vMotion config doesn't change. Just bind the vMotion interface to one fabric.
 
Ahhh..ok..thanks Jason.

Yeah...and that way, if you're doing vMotion within a cluster it'll stay on the same fabric. If you need to vMotion to another cluster it'll go outside the fabric, but it'll just exit via the fabric you designated. No difference.
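
In other words, something like this PowerCLI sketch for the vMotion port group, with only the Fabric A vNIC active (the vmnic names and the port group name are placeholders):

Code:
$vmhost  = Get-VMHost -Name esx01.lab.local                                   # placeholder host
$fabricA = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic2    # designated fabric
$fabricB = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic3    # other fabric

# Pin vMotion to the designated fabric; the other vNIC isn't used at all
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive $fabricA -MakeNicUnused $fabricB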
 

Intra-chassis vMotion will continue to be multi-vmkernel, with each vmkernel pinned to one fabric. Beyond that, DRS rules will be in place to ensure the even nodes of guest clusters remain in UCS domain 1 and the odd nodes stay in UCS domain 2.

For inter-domain vMotion, we'd need to carry the QoS we set in UCS through the LAN from one physical domain to the other to ensure we're not killing the link between the two physical domains. The better solution would be a way to link the FIs from the two UCS domains together and allow only vMotion traffic to traverse that link, but there isn't a way to do that, so we have no choice but to send the vMotion northbound and then over to the other domain.

However, the frequency of inter-domain vMotions will be very low. In this client's case, we may not even do cross-domain vSphere clusters simply because their clusters are so large.
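
For reference, a rough PowerCLI sketch of that multi-vmkernel vMotion layout, with one vmkernel pinned to each fabric's vNIC. All names, IPs, and vmnic numbers here are made up for illustration.

Code:
$vmhost  = Get-VMHost -Name esx01.lab.local                                   # placeholder host
$vsw     = Get-VirtualSwitch -VMHost $vmhost -Name vSwitch0
$fabricA = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic2    # Fabric A vNIC
$fabricB = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic3    # Fabric B vNIC

# Two vMotion vmkernels, one per fabric
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw -PortGroup "vMotion-A" `
    -IP 10.10.10.21 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw -PortGroup "vMotion-B" `
    -IP 10.10.10.22 -SubnetMask 255.255.255.0 -VMotionEnabled:$true

# Pin each vmkernel's port group to its own fabric
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-A" | Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive $fabricA -MakeNicUnused $fabricB
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-B" | Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive $fabricB -MakeNicUnused $fabricA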
 
This is why I love posts like these...open discussion, seeing what others are doing and how they're handling it....
 
The hard part of a setup like this is the networking. VPLEX and NetApp MetroCluster (kinda) solve the storage problem. You have to do some work on vSphere but it's easy...it's the network that gets tough. You need to make sure that the VMs have a local gateway at the site and don't trombone traffic. You need to account for external application access, if required...you need to worry about Layer 2 adjacency and all that it involves. That's usually the hardest part.
 
I've been looking into just that very issue the past few days. Some articles I found very informative:

http://kacangisnuts.com/


The particular customer I'm working with is a college campus; the DCs are of course in separate buildings, but very close. They have redundant 10Gb dark fiber between the two with latency in the 1-2ms range.

Now I need to figure out how to handle the ingress/egress traffic at both DCs. From the article it seems we can implement an FHRP using the same gateway in both DCs and traffic "should" take the most optimized path. That makes sense for egress traffic; it's ingress I'm more concerned about.
 