NIC Teaming (link aggregation) bandwidth testing

I have a question about testing the actual bandwidth benefit of link aggregation. My boss asked me to look into it for servers that will be running several virtual machines, but I'm not sure I have the hardware to conduct a meaningful test, so I'm looking for suggestions.

My first idea was to hook two servers up to a switch that supports the standard, enable link aggregation on both, and just copy some big files to check my throughput. I don't have a switch to use for this, though; they are all taken.

My boss suggested that I could just use two crossover cables and go server -> server, but I have no idea how this would negotiate a link aggregation connection, or whether it would work at all.

Any help is appreciated.
 
I can't answer about NIC teaming, but for bandwidth testing, iperf is the standard. Copying large files between servers can be bottlenecked by hard drives, memory, etc., but iperf is almost purely bandwidth-limited.
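
If for some reason you can't put iperf on the boxes, here is a crude stand-in I'd hack together in Python. It just shoves zero-filled buffers over a TCP socket straight from memory, so the disks never enter into it. This is only a rough sketch: the port, the 10-second run length, and the hostnames you pass in are placeholders, and it's nowhere near as accurate or tunable as iperf.

# throughput.py -- crude memory-to-memory TCP throughput test (iperf stand-in)
import socket
import sys
import time

PORT = 5001            # arbitrary test port (placeholder)
CHUNK = b"\0" * 65536  # 64 KB of zeros, sent straight from memory
DURATION = 10          # seconds the client transmits (placeholder)

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _addr = srv.accept()
    total = 0
    start = time.time()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print("received %.0f MB in %.1f s = %.0f Mbit/s"
          % (total / 1e6, elapsed, total * 8 / elapsed / 1e6))

def client(host):
    sock = socket.create_connection((host, PORT))
    end = time.time() + DURATION
    while time.time() < end:
        sock.sendall(CHUNK)
    sock.close()

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[1])

Run "python throughput.py server" on one end and "python throughput.py <server-ip>" on the other; the server end prints the measured rate.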
 
Are you trying to get an aggregate of two 100 Mbps connections? If you are, just upgrade to a gigabit controller. I can't see any server managed by anyone who posts here using more than a gigabit, unless it's streaming porn. If it already has gigabit and you are trying to tie two together, what the hell are you doing with that server that it can saturate a gigabit?
 
Yes, you can use a pair of cables to do some forms of link aggregation, but other forms need dedicated switch support. The place to start looking at those details is your NIC teaming software and its documentation.

You need to understand, however, that in general no single connection will benefit from more than one link's worth of bandwidth (1 Gb/s here), so you need at least two simultaneous connections to see any benefit.

Especially with slower, disk-limited clients or clients accessing small data sets, you'd need a number of them making accesses at the same time before you saw a benefit. And under heavy concurrent load, there's a good chance you'll be bottlenecked by the storage system before the network, if the network is fast to start with.
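
If you want to see that effect directly, something like this rough Python sketch drives iperf with 1, 2, and 4 parallel streams so you can compare the single-stream number against the aggregate. It assumes iperf (v2) is installed on both ends and already running as a server ("iperf -s") on the target; the address is a placeholder. Keep in mind that whether the extra streams actually land on different physical links depends on the teaming mode and hash policy, so some modes will show no gain at all.

# compare_streams.py -- run iperf with 1, 2, and 4 parallel streams and
# compare the aggregate numbers.  Assumes iperf (v2) is installed and that
# "iperf -s" is already running on the target box; HOST is a placeholder.
import subprocess

HOST = "192.168.1.10"   # placeholder for the teamed server's address

for streams in (1, 2, 4):
    print("=== %d parallel stream(s) ===" % streams)
    # -c: client mode, -P: number of parallel streams, -t: seconds to run
    subprocess.call(["iperf", "-c", HOST, "-P", str(streams), "-t", "20"])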
 
For aggregated connections you can either use a switch that supports it, multiple independent switches, or a direct connection. Once you've got the link actually working (pings go through and so on), you can just use iperf. Transferring big files, especially on very fast links (2 Gbit+), doesn't work so well since you are severely limited by the disks. I used this methodology when bonding quad gigabit links together and it worked pretty well. If you are actually moving into multi-gigabit territory, you will need a quality NIC; I couldn't hit 2 Gbit with the cruddy onboard controllers without thrashing the CPU straight to hell. If you want this for a more permanent setup, you will need a quality switch, as most switches will not be able to handle the multiple MAC addresses, and those that can usually do not use a round-robin scheme for forwarding packets.
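
On Linux, before you bother with iperf, it's worth confirming the bond actually came up in the mode you expect. A quick sketch, assuming the standard Linux bonding driver and a bond named bond0 (adjust the name to whatever yours is):

# bondcheck.py -- quick sanity check of a Linux bond before running iperf.
# Assumes the Linux bonding driver and a bond named bond0; adjust to taste.
BOND_STATUS = "/proc/net/bonding/bond0"

with open(BOND_STATUS) as f:
    slave = None
    for line in f:
        line = line.strip()
        if line.startswith("Bonding Mode:"):
            print(line)          # should mention 802.3ad if that's what you set
        elif line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and slave:
            print("%s -> %s" % (slave, line))   # each slave should be "up"
            slave = None

Every slave should report its MII status as up, and for 802.3ad the mode line should say dynamic link aggregation; if a slave is missing or down, fix that before reading anything into your throughput numbers.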
 
Thanks for the responses.

This is just for testing at the moment, but the server I eventually plan to implement this on should actually be able to saturate a gigabit connection. It will be hosting about 15 virtual machines in addition to some other tasks, and it will have a SAS RAID 5 array that should provide adequate I/O.

I tried to test this using several client machines to pull a ~5 GB file through a switch that supports link aggregation, but I saw no benefit at all. For the time being, I'll assume that this is because our network is a mess and I don't actually know where each cable is running.

I'm not sure that straight link aggregation (IEEE 802.3ad) is supported over direct cabling, but I do have multiple servers capable of it, so that might be my next test (with iperf); a rough sketch of what I have in mind is below.
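
In case it helps, this is roughly the setup I was picturing for the two-cable test on the Linux side. It's only a sketch: the interface names (eth0/eth1), the bond name, and the 10.0.0.x addresses are placeholders, the other server gets the mirror-image config, and older boxes would do this with ifenslave and modprobe options rather than iproute2.

# direct_bond.py -- rough sketch of bringing up an 802.3ad bond on one Linux
# box for the back-to-back two-cable test.  The interface names (eth0/eth1),
# the bond name, and the 10.0.0.x address are placeholders; the other server
# gets the mirror-image config (e.g. 10.0.0.2).  Older distros would do this
# with ifenslave and modprobe options rather than iproute2.
import subprocess

def sh(*args):
    print("+ " + " ".join(args))
    subprocess.check_call(args)

sh("ip", "link", "add", "bond0", "type", "bond", "mode", "802.3ad", "miimon", "100")
for nic in ("eth0", "eth1"):
    sh("ip", "link", "set", nic, "down")
    sh("ip", "link", "set", nic, "master", "bond0")
sh("ip", "link", "set", "bond0", "up")
sh("ip", "addr", "add", "10.0.0.1/24", "dev", "bond0")
# LACP negotiates directly between the two servers over the cables, so no
# switch is needed; /proc/net/bonding/bond0 should show both slaves in the
# aggregator once the other end is up.

Then running iperf between 10.0.0.1 and 10.0.0.2 with a few parallel streams should show whether the second cable buys anything.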

Any other suggestions or specific testing scenarios?
 