Is it possible to force link up on an ESXi NIC?

danswartz

2[H]4U
Joined
Feb 25, 2011
Messages
3,715
My lab has two ESXi 6.7 hosts connected back to back by a high-speed Mellanox interface. Each host has a VSA running, using iSCSI multipathing, so one host going down won't hose the other. For max performance, each host has a virtual function of the Mellanox NIC passed in via SR-IOV. Sadly, I have discovered that if I take either host down (powered off), the still-up host loses link (expected), and the VSA and ESXi are unable to communicate, even though no packet would ever need to leave the box (not expected). Is there a way to configure a NIC/vSwitch/VMkernel adapter to ignore the state of the link? Or to force it up? Thanks!
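
For reference, this is roughly how I look at the link state and the SR-IOV side from the ESXi shell; vmnic2 below is just a stand-in for whatever the Mellanox port is called on your host:

# List physical NICs and their link state (the Mellanox port reports Down once the peer host is powered off)
esxcli network nic list

# List SR-IOV capable NICs and the virtual functions carved out of the Mellanox port
esxcli network sriovnic list
esxcli network sriovnic vf list -n vmnic2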
 
If I understand this correctly, you leave the second link as passive in the configuration and it still goes down? Is the second path on the same subnet or a different one?
 
If I understand you, yes. Nothing special in the config, just two high-speed links back to back. When host B goes down (power-wise), host A loses link, and ESXi apparently will not let any traffic pass through the physical NIC, so ESXi sending a packet into the physical NIC and back out a virtual function to the VSA (or vice versa) doesn't work. I couldn't figure out any way to make this work, so what I settled on was three paths to the LUN on each host: one via the high-speed link locally, one via a 1Gb link locally, and one via a 1Gb link on the other host. Set the high-speed link as the preferred path under the VMware Fixed policy, and it seems to work just fine...
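
For anyone wanting to replicate the "fixed preferred" part, it's roughly these esxcli commands; the device ID and path name below are placeholders, so grab your own from esxcli storage nmp path list first:

# Put the iSCSI LUN on the Fixed path selection policy (device ID is an example)
esxcli storage nmp device set -d naa.600144f0example -P VMW_PSP_FIXED

# Mark the path over the high-speed link as preferred (path name is an example)
esxcli storage nmp psp fixed deviceconfig set -d naa.600144f0example -p vmhba64:C0:T1:L0

# Verify which path is now preferred and active
esxcli storage nmp path list -d naa.600144f0example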
 
Nope :( For now, I've hacked around it by setting up a 1Gb link through my switch and setting the iSCSI pathing to prefer the high-speed link.
 
Not sure what you mean? Team the 50Gb and 1Gb connections? And use link loss detection to switch to the 1Gb link?
 
If the above is what you mean, I think I'm SOL. The reason I am passing an SR-IOV virtual function through is to take advantage of iSER, and it works really well. If I'm going to forgo that, I may as well set up a vSwitch with vmxnet3 vNICs on each storage appliance...
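
For context, the host-side iSER initiator is what I'd be giving up; on 6.7 it gets enabled roughly like this (the RDMA device and vmhba names it reports will vary per host):

# List the RDMA-capable devices backed by the Mellanox NIC
esxcli rdma device list

# Add the software iSER adapter so the iSCSI stack can use RDMA
esxcli rdma iser add

# Confirm the new iSER vmhba shows up
esxcli iscsi adapter list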
 
Not sure what you mean? Team the 50Gb and 1Gb connections? And use link loss detection to switch to the 1Gb link?

yep.

In a correct setup you would have two storage appliances running behind load balancing and failover; each storage appliance would have two NIC teams, plus a direct connection between them, either through an HBA or a fiber mirror.
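
On the teaming piece, a rough sketch on a standard vSwitch would look like this, assuming vmnic2 is the 50Gb uplink and vmnic3 is the 1Gb one (names are just examples):

# Add both uplinks to the storage vSwitch
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic3

# Use the 50Gb port as active and the 1Gb port as standby, with link-state failure detection
esxcli network vswitch standard policy failover set -v vSwitch1 -a vmnic2 -s vmnic3 -f link

Note this only covers traffic that actually goes through the vSwitch; an SR-IOV virtual function passed straight into the VSA bypasses the vSwitch, which is the tension with iSER raised earlier in the thread.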
 
Those are very bad and buggy. I hope you never have to deal with anything like that (same goes for InfiniBand) - they sound very nice on paper, but reality is a tough b.
 