ESX Network Design - Proposed design, did I miss anything?

Why are you using standby adapters... shouldn't they all be active? Also, for IP Hash to work properly I believe link aggregation needs to be set up on the switch side, and the uplinks need to be connected to the same physical switch.
 
Well the switches are stacked so from what I understand they will be seen as "the same" switch. I'm not entirely sure on this though.

Standby adapters were suggested to keep redundancy but to not use the adapters unless absolutely necessary.
 
IMO it's not worth it to use Active/Standby.

A lot of people say, well, you can have dedicated channels for this, that, and the other, and ensure you still have a dedicated channel "IF" something breaks... IF being the key word in that sentence.

Personally, I would rather set up my boxes for maximum performance by utilizing all available hardware and have them configured to fail over (even if at a reduced performance level) "WHEN" something breaks. If your shop has 99% uptime, is your network performance for, say, vMotion that critical for that 1% of the time? That is assuming that 1% affected your network path and was not, say, an HDD failure (which is much more common) or something else not network related.

Basically, I would rather have 100% of my resources available 99% of the time, as opposed to 75% of them 100% of the time.
 
The thing we were trying to avoid would be the vMotion/FT traffic affecting the VM traffic. VM traffic is much more important to us than the vMotion/FT traffic.
 
Okay, so I booted up my ESXi VM and did some of this networking stuff.

[Attachment: mgmt_nicteaming.png]

[Attachment: subnet_nicteaming.png]

[Attachment: vmotionft_nicteaming.png]

[Attachment: vswitch_overview.PNG]
 
The thing we were trying to avoid would be the vMotion/FT traffic affecting the VM traffic. VM traffic is much more important to us than the vMotion/FT traffic.

No, I understand; my point is there are other ways to do that.
 
I would change all the adapters to Active. Set up the adapters on your physical switch with link aggregation. Change the load balancing to IP hash and failover detection to Beacon Probing; that way it detects misconfigurations as well.

You have logically segmented your traffic using VLANs. I wouldn't be too concerned about vMotion traffic, FT, etc. during failover. How often do you estimate that's going to occur? If you're that worried about it, physically segment the vMotion/FT network adapters. You just won't get the benefits of using all 4 in the team.

I agree with Netjunkie and nitro... I would rather be able to pull from all resources for any function, with the exception of IP-based storage.
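For anyone following along, here's a minimal ESX 4.x service-console sketch of that kind of all-active layout. The vSwitch name, vmnic numbers, port group names, and VLAN IDs are placeholders for illustration; the teaming policy itself (IP hash with all four adapters active) still gets set on the vSwitch/port group properties in the vSphere Client.

Code:
# List the current vSwitches, uplinks, and port groups
esxcfg-vswitch -l

# Attach all four uplinks to one vSwitch (names and numbers assumed)
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0
esxcfg-vswitch -L vmnic3 vSwitch0

# VLAN-tagged port groups for each traffic type (example VLAN IDs)
esxcfg-vswitch -A "VM Network" vSwitch0
esxcfg-vswitch -v 10 -p "VM Network" vSwitch0
esxcfg-vswitch -A "vMotion" vSwitch0
esxcfg-vswitch -v 20 -p "vMotion" vSwitch0
esxcfg-vswitch -A "FT" vSwitch0
esxcfg-vswitch -v 30 -p "FT" vSwitch0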
 
Well the switches are stacked so from what I understand they will be seen as "the same" switch. I'm not entirely sure on this though.

Standby adapters were suggested to keep redundancy but to not use the adapters unless absolutely necessary.

Yes, a stacked switch should be able to port-channel ports on different switch members and act as one, so you can use "hash by IP".

And yeah... looks like you can't do it with the standby config I mentioned, so carry on.
 
Or buy the Enterprise Plus license and use Network I/O Control. ;)

Licensing already purchased. As of right now the features of Plus weren't needed. We will deal with I/O control by separating out the traffic on different cards and lose the redundancy if we need to go that route.

Hopefully later down the line I will get the budget to upgrade to Enterprise Plus.
 
Licensing already purchased. As of right now the features of Plus weren't needed. We will deal with I/O control by separating out the traffic on different cards and lose the redundancy if we need to go that route.

Hopefully later down the line I will get the budget to upgrade to Enterprise Plus.

Could be worse. I just did a design for a customer with blades that had 2 Gb ports and that was it.
 
I would change all the adapters to Active. Set up the adapters on your physical switch with link aggregation. Change the load balancing to IP hash and failover detection to Beacon Probing; that way it detects misconfigurations as well.

You have logically segmented your traffic using VLANs. I wouldn't be too concerned about vMotion traffic, FT, etc. during failover. How often do you estimate that's going to occur? If you're that worried about it, physically segment the vMotion/FT network adapters. You just won't get the benefits of using all 4 in the team.

I agree with Netjunkie and nitro... I would rather be able to pull from all resources for any function, with the exception of IP-based storage.

Looks like IP HASH and Beacon Probing aren't allowed together. The error in vSphere says to use Link Status only.
 
Yeah... that's my bad... I was just reading about this and should've picked up on it... yes, Link Status only, but definitely link aggregation on the physical switch side. What are you using for the stack, a few 3750's?
 
Does Enterprise Plus work with ESXi and ESX 4.x?

And say you have only the two Gb ports: the vSwitch is the one logically load balancing the console, vMotion, and VM traffic to the physical NICs, which might be using LACP to the physical switch?
 
Does Enterprise Plus work with ESXi and ESX 4.x?

And say you have only the two Gb ports: the vSwitch is the one logically load balancing the console, vMotion, and VM traffic to the physical NICs, which might be using LACP to the physical switch?

Yes, ESXi and ESX can both be licensed in the same ways. No difference there.

vSphere does not support LACP unless you do the Nexus 1000v. You just do hard-set (static) port-channels.
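To illustrate the "hard set" part, here's a rough IOS-style sketch of a static cross-stack EtherChannel. The Dell 62xx CLI is similar in concept but not in syntax, so treat the interface names, channel-group number, and VLAN list as assumptions and check the 62xx guide for the real commands. The important bits are "mode on" (no LACP negotiation) and an IP-based hash to match the vSwitch's "Route based on IP hash" policy.

Code:
! Global setting: hash on source+destination IP, matching the vSwitch IP hash policy
port-channel load-balance src-dst-ip
!
! One member port per stack unit (example ports), bundled statically - no LACP/PAgP
interface range GigabitEthernet1/0/1, GigabitEthernet2/0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 channel-group 1 mode on
!
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30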
 
I'll be damned, I got something right. I'm just so proud of myself I had to reply. Glad it all got tested out; very useful information for sure. I wonder how many times this thread will get hit with Google searches :).
 
Yeah... that's my bad... I was just reading about this and should've picked up on it... yes, Link Status only, but definitely link aggregation on the physical switch side. What are you using for the stack, a few 3750's?

Stacking Dell 62xx switches.
 
I'll be damned, I got something right. I'm just so proud of myself I had to reply. Glad it all got tested out; very useful information for sure. I wonder how many times this thread will get hit with Google searches :)

I tell ya... a lot of this is covered very well in Scott Lowe's Mastering vSphere book, including setup and configuration of the Cisco Nexus 1000v.

I have a feeling that book will be by my side for a while! I can't praise it enough. ;)
 
Now here's more of a networking question. With the stacking modules, would there be any reason to stack both the front-end and the back-end switches?
 
I racked the CX4 and the two back-end switches today. Started configuring the switches, and they are pretty much complete besides the VLAN for the iSCSI traffic to separate it from the management port.
 
Okay, so for the back-end network traffic: would I gain any performance by setting up a LAG between the two iSCSI ports on the servers to each of the switches, or should I just leave them as they are?

I'm not a big networking guy; most of my network experience is basic, and while I understand the concepts well, I want to squeeze every bit I can out of this hardware.
 
I wanted to ask you, really just in general: why do I rarely see anyone using distributed switches (vDS) instead of creating multiple vSwitches for each ESXi host? It seems that it would be much easier to manage one or two distributed switches than to set up and manage the same vSwitch configuration for every host.

General question really, but I wanted to see if you considered that for your build. Since we are a Cisco shop, and for the reasons mentioned above, we are currently looking into a couple of Cisco Nexus 1000v switches. Looking at the cost, it looks to be a bargain.
 
I wanted to ask you, really just in general: why do I rarely see anyone using distributed switches (vDS) instead of creating multiple vSwitches for each ESXi host? It seems that it would be much easier to manage one or two distributed switches than to set up and manage the same vSwitch configuration for every host.

General question really, but I wanted to see if you considered that for your build. Since we are a Cisco shop, and for the reasons mentioned above, we are currently looking into a couple of Cisco Nexus 1000v switches. Looking at the cost, it looks to be a bargain.

Yeah, Enterprise Plus was just out of our budget. We might upgrade to it at some point but just couldn't fit it in this time around.
 
Ahh... I thought you had Enterprise Plus. I just rarely see a lot of users implementing distributed switches; of course, I'm not in the sales world, so I don't see a lot of designs, etc. Anyway, looking around the internet, I figured that would be a benefit most would take advantage of.
 
Okay, some networking questions again. How important is it to use IP HASH load balancing?

Originally we were going to use 62xx series switches for front-end connectivity, but due to budget this got bumped down to 54xx series switches.

I'm not even sure I could have done a single 4-port LAG across two 62xx series switches.

I want to keep two NIC ports going to each switch for redundancy; should I fall back to the active/standby setup or do something different?
 
IP Hash gives you better load balancing than, say, Virtual Port ID or Source MAC (which are basically the same thing). Will it be a problem? Very doubtful. The majority of vSphere clusters use the default Virtual Port ID. I wouldn't stress over it. If it turns out to be a problem, just add two more NICs to each host.
 
IP Hash gives you better load balancing than, say, Virtual Port ID or Source MAC (which are basically the same thing). Will it be a problem? Very doubtful. The majority of vSphere clusters use the default Virtual Port ID. I wouldn't stress over it. If it turns out to be a problem, just add two more NICs to each host.

Okay, so should I go with the Active/Standby setup, or should I make all the NICs active (two on each switch)? Maybe make all of them active for VM traffic from each VLAN and limit FT/vMotion traffic to one vmnic with one failover vmnic.
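For illustration, the kind of per-port-group override being described might look something like the sketch below, assuming the default Virtual Port ID load balancing suggested above and made-up vmnic numbers/VLAN IDs. Port groups can override the vSwitch-level failover order, so VM traffic can use everything while vMotion/FT stays pinned to one uplink with a standby.

Code:
vSwitch0 uplinks: vmnic0, vmnic1 (switch A) / vmnic2, vmnic3 (switch B)
  Port group "VM Network" (VLAN 10): Active = vmnic0, vmnic1, vmnic2, vmnic3
  Port group "vMotion"    (VLAN 20): Active = vmnic1, Standby = vmnic3 (overrides vSwitch order)
  Port group "FT"         (VLAN 30): Active = vmnic3, Standby = vmnic1 (overrides vSwitch order)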


Okay, so I followed the directions in this article. It looks like the article used a single-subnet design. When I did the vmkNIC binding to the iSCSI initiator, I lost all connections to the AX4-5i. Before I did the port binding, it looked like I had four connections to the AX4-5i. I read somewhere that due to a bug in FLARE, having a single subnet was problematic.

EDIT: Don't mind me, I'm stupid and made an IP typo when I remade vmk1 and vmk2. damnit!

[Attachment: navisphere_paths.PNG]


[Attachment: High Level View of SAN Network Design.jpg]
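For anyone landing here from a search, the ESX/ESXi 4.x vmkernel port-binding steps boil down to roughly the sketch below. The port group names, IP addresses, and the vmhba33 software iSCSI adapter name are placeholders (vmk1/vmk2 match the edit above); each iSCSI port group also gets pinned to a single active vmnic first, per the article's procedure.

Code:
# Two iSCSI port groups and vmkernel ports, one per physical uplink (names/IPs assumed)
esxcfg-vswitch -A iSCSI1 vSwitch2
esxcfg-vswitch -A iSCSI2 vSwitch2
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 iSCSI2

# Bind both vmkernel ports to the software iSCSI adapter (adapter name assumed), then verify
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33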
 
 
To bump an older thread: I was wondering how this setup would change if I used distributed switches. Do I need to do all the fancy port teaming, or does that magically happen when setting up a vDS?
 
To bump an older thread: I was wondering how this setup would change if I used distributed switches. Do I need to do all the fancy port teaming, or does that magically happen when setting up a vDS?

You still have to do the same work, except you set the policy across the entire "datacenter". You have some more options too, like LBT (load-based teaming) in 4.1. Kind of nice not having to set it up again per host. If you're talking about binding vmkernel ports to the software iSCSI initiator... you still have to do that.
 