Brocade 1020 slower with jumbo frames

Hi everyone,

I am looking for some insight into why my Brocade 1020 cards do not seem to like jumbo frames.

I set up a test with a direct link between two machines, using Brocade SFP+ modules and a 1 m fibre cable.

I have some sysctl settings in place to speed up the TCP/IP stack, but the settings were the same for both tests.

Code:
rsq@cluster3:~$ iperf -c 192.168.0.6
------------------------------------------------------------
Client connecting to 192.168.0.6, TCP port 5001
TCP window size: 9.54 MByte (default)
------------------------------------------------------------
[  3] local 192.168.0.5 port 45415 connected with 192.168.0.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  10.2 GBytes  8.77 Gbits/sec
rsq@cluster3:~$ sudo ifconfig p5p1 mtu 9000
rsq@cluster3:~$ iperf -c 192.168.0.6
------------------------------------------------------------
Client connecting to 192.168.0.6, TCP port 5001
TCP window size: 9.54 MByte (default)
------------------------------------------------------------
[  3] local 192.168.0.5 port 45416 connected with 192.168.0.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.03 GBytes  7.76 Gbits/sec
rsq@cluster3:~$

Performance is reduced (8.77 vs. 7.76 Gbits/sec) by enabling jumbo frames. Does anyone know why?

I am completely baffled by this result... I had expected jumbo frames to lead in performance.
 
I assume that you set the MTU to be identical on both servers?
 
Yes, both MTUs were the same.

I have redone the test with both nodes connected to the switch, and the result was the same.
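
One quick sanity check that jumbo frames really make it through the switch end to end is a do-not-fragment ping sized just under the MTU (a sketch, assuming a 9000-byte MTU and the Linux ping):
Code:
# 9000-byte MTU minus 20 bytes IPv4 header and 8 bytes ICMP header = 8972-byte payload
ping -M do -s 8972 -c 3 192.168.0.6

If this fails while a plain ping works, something in the path is not passing jumbo frames.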
 
I did some additional testing and I came to an interesting result.

The cards seem to get faster and faster until the MTU is around 7000; going larger than that, performance drops off quite sharply.

I tested with this command:
Code:
iperf -c 192.168.0.6 -t 100 -i 10

My numbers were:
Code:
MTU  | Speed (Gbit/s)
-----+-------------
1500 | 8.88
2048 | 8.42
4096 | 9.24
6144 | 9.73
6750 | 9.52
7000 | 9.86
7168 | 9.86
7500 | 9.63
8192 | 8.68
9000 | 7.62

The MTU sweet spot seems to be between 7000 and 7168. These MTUs max out the connection, even on a single stream.
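
For anyone who wants to repeat this, here is roughly how the sweep could be scripted (a sketch, assuming the interface is p5p1, an iperf server is already running on 192.168.0.6, and the server-side MTU is changed to match before each run):
Code:
#!/bin/bash
# Sweep a list of MTUs and record the single-stream iperf throughput.
# NOTE: the far end must be set to the same MTU before each run,
# otherwise large packets are dropped or fragmented.
for mtu in 1500 2048 4096 6144 6750 7000 7168 7500 8192 9000; do
    sudo ifconfig p5p1 mtu "$mtu"
    echo "=== MTU $mtu ==="
    iperf -c 192.168.0.6 -t 100 -i 10 | tail -n 1
done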

I had the following optimizations on the TCP stack for this test:
From /etc/sysctl.conf
Code:
# -- 10gbe tuning  -- #

# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

# TCP memory: tcp_rmem/tcp_wmem are min/default/max in bytes,
# tcp_mem is min/pressure/max in pages
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000

net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
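
For reference, after editing /etc/sysctl.conf the settings can be applied without a reboot:
Code:
# reload /etc/sysctl.conf and print the values that were applied
sudo sysctl -p

# spot-check a single setting
sysctl net.ipv4.tcp_rmem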
 
Did you update the drivers to the latest version they can go to? I've seen bad firmware cause issues exactly like this before.
 
I decided to risk it and updated the boot code of my cards.

The boot code is definitely updated, but I am guessing the firmware is not, because the output of modinfo bna has not changed:
Code:
rsq@bitbucket2:~$ modinfo bna
filename:       /lib/modules/3.13.0-35-generic/kernel/drivers/net/ethernet/brocade/bna/bna.ko
firmware:       ct2fw-3.2.1.1.bin
firmware:       ctfw-3.2.1.1.bin
version:        3.2.21.1
description:    Brocade 10G PCIe Ethernet driver
license:        GPL
author:         Brocade
srcversion:     FB68613D4FBA721A7B63E4D
alias:          pci:v00001657d00000022sv*sd*bc02sc00i*
alias:          pci:v00001657d00000014sv*sd*bc02sc00i*
depends:        
intree:         Y
vermagic:       3.13.0-35-generic SMP mod_unload modversions 
signer:         Magrathea: Glacier signing key
sig_key:        B1:41:4A:E9:6C:1B:0E:BB:7C:14:1F:A4:05:C1:F6:C9:8E:8A:66:F0
sig_hashalgo:   sha512
parm:           bnad_msix_disable:Disable MSIX mode (uint)
parm:           bnad_ioc_auto_recover:Enable / Disable auto recovery (uint)
parm:           bna_debugfs_enable:Enables debugfs feature, default=1, Range[false:0|true:1] (uint)

How does one update the firmware of one of these cards?
 
Some more info:
On Linux, the bna/bfa kernel driver will load the firmware from the initrd (firmware files are also in /lib/firmware).

This means that the driver must also be at the latest possible version for the latest firmware to be loaded.
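
A rough way to see what is actually in use, and to refresh the initrd after dropping newer firmware into /lib/firmware (a sketch, assuming Ubuntu and that the interface is still p5p1):
Code:
# the running driver reports its version and the firmware version it loaded
ethtool -i p5p1

# after copying newer ctfw-*.bin / ct2fw-*.bin files into /lib/firmware,
# rebuild the initramfs so the new files are picked up at boot
sudo update-initramfs -u
sudo reboot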

The Ubuntu 14.04.1 LTS inbox driver works, but it is outdated. Unfortunately, there is no Ubuntu support from QLogic.

I will try to hack the noarch package to compile on Ubuntu. Should be fun... :)
 