LACP on ESXi 5.5

Zarathustra[H]

Hey all,

A few questions.

1.) I have read in the past that ESXi is terrible at LACP and it is best to avoid it. Is this a historical concern, or still the case with ESXi 5.5?

My ESXi host does A LOT of network traffic to quite a few clients, but I only have gigabit Ethernet on my network.

Currently I have dual Intel Ethernet adapters Direct I/O forwarded to my heaviest network-using guests, and I have bonded the two ports in the guest and on my ProCurve switch using the "Trunking" menu.

This is obviously not ideal. I don't use vMotion, but I would like to regain some RAM flexibility by dropping Direct I/O forwarding, as well as make more flexible use of my network capacity.

2.) I figured a setup like this would work:

Each traffic-heavy guest gets a VMXNET3 virtual 10-gig interface to the vSwitch. Then, instead of Direct I/O forwarding the adapters, I assign them all to that vSwitch as uplinks and use LACP to connect them to my physical ProCurve switch using trunking.

Is this a good way to do things?


3.) Looking at the "teaming" settings in the ESXi 5.5 client, it would appear that the terminology is very different from what I am used to (though this doesn't surprise me, as every vendor seems to have their own name for this stuff: Unix LAGG, Linux bonding, HP trunking, Intel teaming, etc.).

Can anyone tell me which of the settings correspond to proper LACP that will be compatible with the LACP in the Trunking menu on my ProCurve 1810G-24?

Much obliged!

--Matt
 
LACP has worked fine on the 11 ESXi hosts I have had in production over the last year... no problems at all, and they are all on 5.5.
 
Damn.

This video suggests I need vCenter so I can access the web management interface.

I'm just running a home server in my basement, and as such, it is a standalone server with a free license managed using the client :(

Is there any way to do this without a paid license? VMware's enterprise pricing would put this solidly outside my home server budget.
 
You need to be running a vDS for LACP; otherwise you can just do a static EtherChannel. So that means:

1. Need vCenter
2. Need Enterprise Plus licensing (or VSAN licensing).
 
You need to be running a vDS for LACP; otherwise you can just do a static EtherChannel. So that means:

1. Need vCenter
2. Need Enterprise Plus licensing (or VSAN licensing).

Well, that stinks.

Thanks for the heads up.

Yeah, I just saw this video, which tells me that EtherChannel is not compatible with HP's LACP modes, and thus, for use with IP hash, static trunking is the only way. I'm not even sure if static trunking is worth it, or if I am better off just keeping the Direct I/O forwarded setup I have...
 
You're truly saturating a 1Gbps link in a home lab?

Yep.

Not constantly, but at peak, when my backups are running, my MythTV DVR is recording multiple shows, I am transferring files to my NAS, and multiple clients are syncing to the NAS at the same time, I have peaked at about 5 Gbps across all interfaces.

Most of the time it hums along at WELL below the capability of a single gigabit interface. My link aggregation setups are really just there to help resolve video/recordings skipping, or slow file transfers when multiple things are going on at once.

I mean, my ZFS volume alone supports ~900 MB/s (roughly 7.2 Gbps) sequential transfers, and that's before involving anything else.

My setup is really less of a "lab" per se and more of a "home production system" supporting everything in the house. My primary motivation for going virtual was to save money through consolidation, and to reduce power usage by having one server provide the overhead, rather than eight.
 
So, just so I know what I am dealing with: is "Route Based on IP Hash" pretty much the same thing as the round-robin balancing in the Linux bonding driver?

I'm trying to decide if I could live with it, but I am not quite sure what I'd actually wind up living with.
 
Zarathustra[H];1041439361 said:
So, just so I know what I am dealing with: is "Route Based on IP Hash" pretty much the same thing as the round-robin balancing in the Linux bonding driver?

I'm trying to decide if I could live with it, but I am not quite sure what I'd actually wind up living with.

This blog post seems to suggest that for most usage scenarios static vs active LACP won't make much of a difference...


That is good to know, so maybe I will do this after all...
 
You're better off adding NICs anyway; even with LACP or EtherChannel you can only get a maximum of 1 Gb per flow. LAGs are just round-robin (or some other load-balancing method) across individual 1 Gb links.
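
To put that in concrete terms, here's a rough Python sketch (made-up IPs and a four-uplink team assumed; VMware's actual hash may differ in its details) of how an IP-hash style policy pins each source/destination conversation to a single uplink, versus Linux balance-rr, which deals the packets of even a single flow out across all the links:

```python
# Rough illustration only -- not VMware's actual implementation.
import ipaddress
from itertools import cycle

UPLINKS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]  # assumed 4-port team

def ip_hash_uplink(src: str, dst: str) -> str:
    """Deterministically pick an uplink from the source/destination IP pair."""
    key = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    return UPLINKS[key % len(UPLINKS)]

# Every packet of one NAS transfer rides the same uplink, so it tops out at 1 Gb:
print(ip_hash_uplink("192.168.1.10", "192.168.1.20"))  # same vmnic every time

# A second client may hash to a different uplink -- that's where the >1 Gb
# aggregate comes from:
print(ip_hash_uplink("192.168.1.11", "192.168.1.20"))

# Linux balance-rr, by contrast, stripes successive packets of a single flow
# across all slaves (and can reorder them in the process):
rr = cycle(UPLINKS)
for pkt in range(6):
    print(f"packet {pkt} -> {next(rr)}")
```

So "Route Based on IP Hash" is not the same thing as Linux round-robin bonding: it's deterministic per conversation, which is exactly why a single flow never exceeds one link.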
 
Zarathustra[H];1041439387 said:
This blog post seems to suggest that for most usage scenarios static vs active LACP won't make much of a difference...


That is good to know, so maybe I will do this after all...

I went through this same predicament. I've since got vCenter going for the vDS (among other things), but I ran a static port group (IP hash) to a 3560X for over a year with no problems. There is no performance difference, LACP vs. static. I can't speak to HP gear at all, though.
 
Appreciate all the feedback here.

So, I switched over to teaming using "Route Based on IP Hash", set up my ProCurve switch with a four-port static trunk, and it appears to be working perfectly.

As far as the naysayers go, yes, in a home use scenario, I have already seen total network utilization well above what a single gigabit adapter can handle.



I did have a small hiccup, as I didn't realize the Management Network port group also needed to be switched to "Route Based on IP Hash" separately from the vSwitch, or you lose access to it. I had assumed its settings would just follow the vSwitch it's attached to, but apparently not.
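
For what it's worth, you can sanity-check that kind of mismatch without clicking through the dialogs. Here is a rough, read-only pyVmomi sketch (the host address and credentials are made up) that lists which load-balancing policy each standard vSwitch and each port group is actually using; a port group that defines its own NIC-teaming policy overrides the vSwitch setting:

```python
# Read-only sketch, assuming pyVmomi is installed and the connection details
# below are replaced with real ones.  'loadbalance_ip' == Route Based on IP Hash.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home lab: self-signed certificate
si = SmartConnect(host="esxi.home.lan", user="root", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    net = host.configManager.networkSystem

    for vsw in net.networkInfo.vswitch:
        teaming = vsw.spec.policy.nicTeaming if vsw.spec.policy else None
        uplinks = getattr(vsw.spec.bridge, "nicDevice", [])
        print(f"vSwitch {vsw.name}: policy={teaming.policy if teaming else None}, "
              f"uplinks={list(uplinks)}")

    for pg in net.networkInfo.portgroup:
        teaming = pg.spec.policy.nicTeaming if pg.spec.policy else None
        override = teaming.policy if teaming and teaming.policy else "(inherits vSwitch)"
        print(f"  portgroup {pg.spec.name}: policy={override}")
finally:
    Disconnect(si)
```

Anything in that list showing its own policy instead of "(inherits vSwitch)" has to be changed separately, just like the Management Network did.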

I DO have a follow-up question, though.

The teaming configuration window does not seem to have a setting to tell it which network adapters are part of a team, and which aren't.



Does it just assume that ALL adapters attached to the vSwitch are part of that team?

What would happen if I were to add a fifth gigabit adapter to the vSwitch, but attach it as a single link to a different switch (or directly to another server)?

Would it work at all, or would ESXi just assume that I now have a five-NIC team and refuse to work?

I would complain about how this is TERRIBLE functionality on VMware's end, but I realize full well that I am using the free version without vCenter, and vCenter has more functionality. I just wish I could use the web interface, with its more advanced configuration options, without the rather extreme cost.

I'd even be willing to pay for a one-time ~$100 home-use license (Windows style); I just can't justify the enterprise-style licensing VMware products are based on :(

Is anyone aware of a cheaper way to get full web interface configurability?
 
Zarathustra[H];1041455975 said:
That is pretty nice, but still $200 per year is on the hefty side...
Seriously? Wow...
Anyway, glad you found a solution to your problem. IMO, if you are using that much bandwidth, you'd be better off upgrading to 10G.
 
Seriously? Wow...
Anyway, glad you found a solution to your problem. IMO, if you are using that much bandwidth, you'd be better off upgrading to 10G.

Don't get me wrong, it is much cheaper than the enterprise licenses, but for a non-production learning license? I mean, Microsoft gives these things away to students through their MSDNAA program.

I was thinking a one-time perpetual license for home/learning use would be worth just north of $100 to me. For a perpetual license I guess I could be persuaded to go up to $200, but $200 per year is just ridiculous for my home server in my basement, especially considering every other piece of software I use is free and open source. I'm just not used to paying for server software.


I agree, going 10-gig would be great. I actually have two Brocade BR1020 adapters I could use, I have found transceivers for them for only $18 apiece, and OM3 duplex LC fiber is dirt cheap, making it cheaper than getting a compatible twinax direct-attach copper cable. I would only need a switch with a single 10-gig uplink port, as I am only concerned about multiple combined client loads exceeding 1 gig. These switches are ridiculously expensive though, and most models with uplink ports only work switch-to-switch and won't connect to an ESXi host.

10GBASE-T looks to be even better than fiber, but it is still well out of reach price-wise.

I simply cannot justify paying more than $80 for a dual-port network adapter, or more than $300 for a managed switch. This is a home server, not an enterprise server.
 
I feel your pain there. Enterprise licensing sucks for those of us using these at home!

Yeah, it's too bad they can't throw us a bone. I'd be willing to pay them money if a reasonably priced home license existed. I know there probably aren't that many of us (I mean, how many people do virtualization at home at all, let alone on a dedicated server?), but still, there should be SOME.

Maybe they are just afraid it is going to cut into their enterprise licensing, with people cheating the system by buying home licenses and using them for production systems?
 
Zarathustra[H];1041460927 said:
Maybe they are just afraid it is going to cut into their enterprise licensing, with people cheating the system by buying home licenses and using them for production systems?

Then they should re-evaluate their license lock-out model, make an example of those they catch cheating, etc.

I'm one of those people who fail to understand why these orgs kick their biggest advocates in the nuts but shower it all over university students who have no plans to recommend the products to anyone, and barely any plans to graduate. Bunch of pud-pulling "yay look at my cool smartphone, yay look at my free software" circle-jerkin' mofos. Just thinking about it makes my blood boil. I'm going down the hall and punching the intern right now.
 