Can someone tell me more about NIC teaming/bonding?

I see there is a 4-port gigabit Intel NIC, and in one review the reviewer says they have it set up with 802.3ad trunking and are getting 450 MB/s through their pipe.

Is this type of setup best suited to a NAS or SAN that uses iSCSI, to help speed up file transfers?
 
For our XenServer clusters, on both the iSCSI networks and the production networks, we team two gigabit NIC ports together and the bond shows up as a 2-gigabit link. We use LACP (802.3ad) for bonding.

Then we bond the SANs with LACP as well, so everything on the back end is a 2-gigabit link.
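
If you want to sanity-check a bond like that from the host side, here's a minimal sketch, assuming a Linux-based host (e.g. a XenServer dom0) using the standard kernel bonding driver with a bond named bond0 (the name is a placeholder; adjust for your setup):

# Minimal sketch (assumes a Linux host with the kernel bonding driver and
# an 802.3ad bond already configured as "bond0"). It just parses
# /proc/net/bonding/bond0 to confirm the mode and which member links are up.
from pathlib import Path

BOND = "bond0"  # hypothetical bond name; yours may differ

def bond_summary(name: str = BOND) -> None:
    text = Path(f"/proc/net/bonding/{name}").read_text()
    for line in text.splitlines():
        if line.startswith(("Bonding Mode:", "Slave Interface:", "MII Status:", "Speed:")):
            print(line.strip())

if __name__ == "__main__":
    bond_summary()

It just echoes the mode, the member NICs, and whether their links are up, which is usually enough to spot a port that has fallen out of the LACP aggregate.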
 
It can be useful. It's called Link Aggregation (802.3ad, I believe). Its main purpose is to provide fault tolerance at layer 2, but you can also use it to increase bandwidth.

Keep in mind that, unless you have more than a few spindles, your limiting factor is going to be the disks. When you start moving beyond a small number of spindles, fiber starts to make more sense anyway.
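
To put rough numbers behind that, here's a back-of-the-envelope sketch; the ~120 MB/s per-spindle figure is an assumption for a typical 7200 RPM SATA disk doing sequential I/O, not a measurement from this thread:

# Back-of-the-envelope only; the per-disk figure is an assumed ballpark
# for a 7200 RPM SATA spindle doing sequential reads.
GIGE_MB_S = 125          # 1 Gb/s ~= 125 MB/s before protocol overhead
PER_SPINDLE_MB_S = 120   # rough sequential rate of one spindle

for spindles in (1, 2, 4, 8):
    disk = spindles * PER_SPINDLE_MB_S
    links_needed = -(-disk // GIGE_MB_S)   # ceiling division
    print(f"{spindles} spindles ~{disk} MB/s sequential -> ~{links_needed} x 1 GbE links to keep up")

Once you're past a handful of spindles, the math says you need more than a couple of GigE links anyway, which is where 10 Gb or fiber starts to look attractive.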
 
Yes, you can channel them together. It won't help a single client talking to a single server, though... that connection will ride on one channel (one NIC). If you have multiple clients talking to a server, they'll be distributed across the NICs and see a benefit.
 
Are there any methods of bonding that make it function as a single logical interface, so you can get single threads >1 Gbps?
 
Are there any methods of bonding that make it function as a single logical interface, so you can get single threads >1 Gbps?
Link aggregation - Wikipedia: http://en.wikipedia.org/wiki/Link_aggregation

That does a good job covering the topic.
 
Are there any methods of bonding that make it function as a single logical interface, so you can get single threads >1 Gbps?

No. Basically, as you balance across connections, you use a method to "hash" those conversations between a client and a server. Usually it's by src/dst MAC address, IP address, or port number... But for a single connection all of those stay the same, so all of its frames get hashed onto a single physical link. That's why multiple clients talking to a server see a benefit: each client has a different MAC or IP address, and you can hash based on those.

And this is why port-channels suck for a lot of things. If you need raw speed you end up moving to 10 Gb.
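
To make the hashing point concrete, here's a toy sketch; the CRC-based hash and the field choices are illustrative only, not any particular vendor's algorithm:

# Toy illustration of src/dst hashing: the tuple that identifies a
# conversation is hashed, and the result picks one member link. Because the
# tuple never changes for a given flow, every frame of that flow lands on
# the same physical NIC.
import zlib

LINKS = 4  # members in the port-channel

def pick_link(src_mac: str, dst_mac: str, src_port: int, dst_port: int) -> int:
    key = f"{src_mac}-{dst_mac}-{src_port}-{dst_port}".encode()
    return zlib.crc32(key) % LINKS

# One client/server conversation: same tuple every time, so same link.
print(pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:99", 51515, 445))
print(pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:99", 51515, 445))
# Many clients: tuples differ, so the flows spread across the links.
for i in range(4):
    print(pick_link(f"aa:bb:cc:00:00:0{i}", "aa:bb:cc:00:00:99", 51515, 445))

Run it and the first two calls print the same link index every time, while the loop over different client MACs spreads across the members. That's the whole story: one conversation, one hash input, one physical link.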
 
Yes, you can channel them together. It won't help a single client talking to a single server, though... that connection will ride on one channel (one NIC). If you have multiple clients talking to a server, they'll be distributed across the NICs and see a benefit.

That is not really true for teaming. A single client and a single server could absolutely use 4 bonded NICs (4 Gb) as long as there were no other bottlenecks. I use teaming (2 NICs) between my home server and home desktop. When I need to back up or move a few terabytes of data from one to the other, it absolutely helps (because the RAID on the server and the SSDs in my workstation are able to saturate a single 1 Gbit connection).

What you said is true about load balancing or redundancy, though. If you don't team the NICs, or both ends of the connection don't fully support teaming, you would only have two (or more) redundant 1 Gbit connections, and normal file transfers would only use one NIC at a time.

Are there any methods of bonding that make it function as a single logical interface, so you can get single threads >1 Gbps?

The actual correct answer to this question is "yes". If both sides of the network link support teaming, then you will have the full, combined speed of all the network links that are 'teamed' available. You need it to be supported on the client, the server, and any gear in between, though.

Your network will always be as fast as the slowest link.
 
That is not really true for teaming. A single client and a single server could absolutely use 4 bonded NICs (4 Gb) as long as there were no other bottlenecks. I use teaming (2 NICs) between my home server and home desktop. When I need to back up or move a few terabytes of data from one to the other, it absolutely helps (because the RAID on the server and the SSDs in my workstation are able to saturate a single 1 Gbit connection).

What you said is true about load balancing or redundancy, though. If you don't team the NICs, or both ends of the connection don't fully support teaming, you would only have two (or more) redundant 1 Gbit connections, and normal file transfers would only use one NIC at a time.


I'm sorry but you're incorrect. Teaming is just a word some vendors use to describe port-channels and failover teams. There is no standard way to bond multiple NICs together and have a single connection use more than one. You have to split the connections.

What type of team/NIC/driver are you using on each side? What are you using to transfer the data? What speeds are you seeing from a single host to another single host?
 
I'm sorry but you're incorrect. Teaming is just a word some vendors use to describe port-channels and failover teams. There is no standard way to bond multiple NICs together and have a single connection use more than one. You have to split the connections.

What type of team/NIC/driver are you using on each side? What are you using to transfer the data? What speeds are you seeing from a single host to another single host?

MPIO would do it, for storage anyway.
 
What type of team/NIC/driver are you using on each side? What are you using to transfer the data? What speeds are you seeing from a single host to another single host?

My workstation and the server both have two Intel PRO/1000 NICs; one side uses the Solaris dladm driver and the other uses Linux's lagg driver.

The switch in between is a run-of-the-mill Netgear GS108-something that supports link aggregation, so there are two cables from the workstation to the switch and two cables from the switch to the server.

I can do file transfers or jperf tests between the two PCs with the NICs acting as separate interfaces (no aggregation), and I get speeds around 100-120 MBytes/sec, which is what you would expect from a single GigE connection. When I turn teaming on (the network interface shows up as a single 2 Gbit connection), both types of tests roughly double, to around 210-235 MBytes/sec depending on the files or protocols used.

That holds for both a single file transfer (just a "copy this file" command) and a jperf test. It absolutely uses both connections when they are teamed/trunked/aggregated/whatever you want to call it, so I have no idea why you say this doesn't exist. I must have some magical pieces of hardware then :rolleyes:
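
For anyone who wants to reproduce this kind of test without jperf, here's a rough single-stream sketch; the address, port, and sizes are placeholders, and it's a crude check, not a replacement for a real tool:

# Crude single-stream TCP throughput check. Run with argument "receiver" on
# one box, no argument on the other. Run one sender for a single-flow number,
# or several senders at once to see whether the aggregate goes higher.
import socket, sys, time

HOST, PORT = "192.168.1.10", 5001        # placeholder receiver address
CHUNK, TOTAL_MB = 64 * 1024, 2000        # 2 GB pushed per stream

def receiver() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def sender() -> None:
    payload = b"\x00" * CHUNK
    start = time.time()
    with socket.create_connection((HOST, PORT)) as s:
        for _ in range((TOTAL_MB * 1024 * 1024) // CHUNK):
            s.sendall(payload)
    print(f"{TOTAL_MB / (time.time() - start):.1f} MB/s")

if __name__ == "__main__":
    receiver() if sys.argv[1:] == ["receiver"] else sender()

Run one sender for the single-flow number, then run two or three senders at the same time and compare the aggregate; that tells you whether the bond is really spreading one flow or only many.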
 
^ People here on this forum claim it does not work with a single connection. If it does, I would like to know how!
 
I think your testing methodology is flawed somewhere. The GS108 supports LACP, so I assume that's the config you're using for the lagg. LACP uses the standard hash methods, so there is no way for a single connection to use multiple links. So either something is splitting your traffic or there is a flaw in how you're testing it.

I see that lagg also supports a "roundrobin" mode, and the documentation doesn't give much info on it, so maybe that's your setting and it is spraying packets across both links. That won't work on most switches: you'll start getting "MAC flapping" errors and the switch will shut the port down when it sees a single MAC on multiple ports. Also, see this note:

"Distributes outgoing traffic using a round-robin scheduler through all active ports and accepts incoming traffic from any active port. This mode violates Ethernet Frame ordering and should be used with caution."

That's why it's not a supported function on any enterprise-level switch: no guarantee of frame order. So if you're doing that and it works, great... but you won't see it on a production network anywhere.
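
Here's a toy model of that ordering problem; the latencies are made-up numbers, purely to illustrate why spraying one flow's frames across two links can deliver them out of sequence:

# Toy model of why round-robin violates frame ordering: frames of one flow
# alternate across two links with slightly different (hypothetical) latencies,
# so they can arrive out of sequence at the far end.
LINK_LATENCY_MS = [0.20, 0.35]   # made-up per-link latencies

frames = list(range(1, 9))
arrivals = []
for i, frame in enumerate(frames):
    link = i % 2                              # round-robin link choice
    send_time = i * 0.05                      # frames sent 50 us apart
    arrivals.append((send_time + LINK_LATENCY_MS[link], frame, link))

print([frame for _, frame, _ in sorted(arrivals)])  # e.g. [1, 3, 2, 5, 4, ...]

Out-of-order delivery like that can push TCP into duplicate ACKs and spurious retransmits, which is exactly why the round-robin mode carries that warning.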
 