Dual NIC, bonding, and IPMI

Red Squirrel

I am looking into setting up Ethernet bonding for my storage and VM servers. My VM server has a dedicated IPMI port, so that should not be an issue, but my file server only has 2 ports, with IPMI sharing one of them. Is there a way I can get bonding to work while also keeping IPMI working? I will also want the bonded interface to be a VLAN trunk, to add even more complexity. (Not even sure if this can be done; I have to read up further.)
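For reference, a minimal sketch of what bonding plus a VLAN trunk looks like on Linux with iproute2. The interface names, bond mode (802.3ad/LACP, which needs matching switch-side config), VLAN IDs, and addresses are all placeholders, and these commands don't persist across reboots; the permanent version would go in your distro's network config files:

    # create the bond and enslave both onboard ports (slaves must be down first)
    ip link add bond0 type bond mode 802.3ad
    ip link set eth0 down; ip link set eth0 master bond0
    ip link set eth1 down; ip link set eth1 master bond0
    ip link set bond0 up

    # tagged VLAN interfaces on top of the bond make it a trunk
    ip link add link bond0 name bond0.10 type vlan id 10   # storage VLAN (example ID)
    ip link add link bond0 name bond0.20 type vlan id 20   # general LAN VLAN (example ID)
    ip addr add 192.168.10.2/24 dev bond0.10
    ip addr add 192.168.20.2/24 dev bond0.20
    ip link set bond0.10 up; ip link set bond0.20 up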

Will I still be able to make IPMI work? If not, is there anything I can buy to make this work, maybe some kind of USB-based IPMI? I would just buy a dual-NIC card, but because it's a file server, all the slots are taken up by the SATA cards needed to provide enough ports for the 24 bays. Unless someone can recommend a non-RAID, Linux-compatible card that has more than 2 SAS ports (each SAS port handles 4 drives).

Worst case scenario, I can stick to doing bonding only on the VM side; that's probably where it's more important anyway, since the network has to be used simultaneously for storage and actual networking (e.g. file transfers over the LAN from a VM). Though bonding on the file server side would be nice too.
 
Does the file server have 2 MAC addresses on one NIC? That was pretty common for a while. If it has 2 MAC addresses you should be fine.
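One quick way to check from the OS side, assuming a reasonably current Linux with iproute2: list the MACs the OS sees on the onboard ports and compare them against the BMC MAC shown in the IPMI web interface (the BMC MAC typically won't show up in the OS at all, since it sits on a sideband of the shared port):

    ip link show    # lists each interface and its MAC address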

Lantronix SecureLinx Spider USB is also an option for IP KVM.

The LSI LSI00244 (9201-16i) is another option for the HBA you mentioned needing.

Does your VM server not have provisions for round-robin access to datastores? Give the file server 2 IPs and round-robin across them.
 
I suppose that's another option: I can just give the file server two IPs. I can split up how I access LUNs, e.g. access LUN1 through one IP and LUN2 through the other, and alternate like that. In this case the LUNs are basically just NFS shares.
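If you go the two-IP route, the split can live entirely in the client-side mounts. A sketch, with made-up addresses and export paths, assuming the shares are mounted on the VM host via NFS:

    # /etc/fstab on the VM host: half the shares pinned to one file-server IP, half to the other
    192.168.10.50:/exports/lun1  /mnt/lun1  nfs  defaults,hard  0 0
    192.168.10.51:/exports/lun2  /mnt/lun2  nfs  defaults,hard  0 0

Each mount's TCP connection still rides a single link, so the gain only shows up when both shares are busy at the same time.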

Also, is it even possible to do what I mentioned: port aggregation AND make the virtual port a VLAN trunk port? Glad the VM server has dedicated IPMI at least, since this kind of messing around with networking is definitely not doable remotely. Well, if I knew 100% what I was doing I imagine it would be. :p

That LSI card looks pretty nice too... tempting to do that, then buy a 2-port NIC. Maybe for now I'll just leave things alone and do this as a future upgrade. I sorta can't reboot that server anyway, because I'm pretty sure I messed up the OS beyond repair by accident: I once accidentally moved all of / into another folder. I put everything back, but who knows if the permissions are right and if it would actually boot up. Whenever I do decide to reboot that server I have to be prepared to reinstall the OS. Not a very complex box to set up, mind you: basic Linux with a few extra packages such as NFS.
 
OK, I just realized something weird with IPMI. Even on my other server, which has a dedicated port, it seems it's not actually using that port; it's using one of the system ports. Whaaa? I really don't want one of those ports being IPMI and having to team only the other port; that will screw up my labeling and cable management. Is there a way I can change this? I want this to be proper, where the two ports next to each other are actual system ports and the port on the other end is IPMI. A bit OCD, I guess, but it's just tidier that way.

Edit: never mind, it seems there's an option for failover, so if the port is unplugged or shut down (which is what I did to test) it fails over. Oddly, now it won't turn back on... Anyway, I'll troubleshoot that; different issue.
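On boards that do the dedicated/shared failover trick, the selection usually lives in the BMC web interface or BIOS (often as Dedicated / Shared / Failover); forcing it to Dedicated should pin it to the IPMI port. Either way, ipmitool can at least show what the BMC is currently bound to; channel 1 is typical but varies by board:

    ipmitool lan print 1    # BMC IP, netmask, and MAC on LAN channel 1
    ipmitool mc info        # sanity check that the BMC is responding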
 
I can confirm bonding + VLAN trunk does work. Though I still have to test whether I'll actually be getting 2 Gbps with the bond; I'll have to start a couple of separate file transfers. (I understand a single connection can only use one NIC.)
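A single stream only ever uses one slave, and even with two streams it depends on the bond's xmit_hash_policy (and the switch's hash) whether they land on different links, so testing from two different client machines is the surest way to see the full 2 Gbps. A quick synthetic test, assuming iperf is installed on both ends and the file server's address is a placeholder:

    # on the file server
    iperf -s
    # on each client (run from two machines at once), 30-second test with 4 parallel streams
    iperf -c 192.168.10.50 -P 4 -t 30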

As for my original question, I think I will keep it simple and just use two separate IPs until I can swap some cards around and put in a dedicated NIC, keeping the onboard port dedicated for IPMI.
 
If your hypervisor supports round-robin multipath with failover, do that. I've gotten really good performance with that setup; ESXi handles it very well. One datastore (or even multiple stores) accessed alternately over each link yields a good performance gain over a single link. On top of that, if one link fails the working one takes over completely.
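For reference, on ESXi 5.x the path policy is set per device with esxcli; this applies to block storage (FC/iSCSI) rather than NFS datastores, and the naa identifier below is just a placeholder:

    esxcli storage nmp device list                                         # find the naa.* ID of the datastore's device
    esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR   # switch that device to round robin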

What hypervisor do you use? If it isn't ESXi, I really recommend giving it a try. Personally I still run 5.0 U3 at home; just plain rock-solid stable and fast. I am trying out 5.1 on my new home setup. I gave 5.5 a try and just wasn't happy with the limited driver support; many drivers were removed in 5.5, which pissed me off. Even my QLE2460s didn't work right.
 
Have not decided yet; still experimenting. I want to use KVM if I can get more performance out of it.
 