Jumbo Frames device questions

AMD_Gamer
Fully [H] | Joined: Jan 20, 2002 | Messages: 18,287
I just got a switch that supports jumbo frames, and it will plug into my normal router, which does not support them. How does this work when I go to send data out to the internet through the router?

I will set the computers connected to this switch to use jumbo frames, but what happens when frames destined for the internet are sent through the non-jumbo-frame router?
 
For TCP traffic (web browsing, etc.) there's no problem. TCP negotiates a maximum segment size (MSS) that works. The initial connection request message (SYN) is always a small packet, so it will go through no problem.
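(A rough sketch of the arithmetic behind that negotiation, assuming plain IPv4 and TCP headers with no options; real stacks can differ slightly.)

# How the usable TCP payload (MSS) relates to the link MTU,
# assuming a 20-byte IPv4 header and a 20-byte TCP header with no options.
def mss_for_mtu(mtu: int) -> int:
    return mtu - 20 - 20

print(mss_for_mtu(1500))  # 1460 - what the non-jumbo side ends up with
print(mss_for_mtu(9000))  # 8960 - what a jumbo-frame host would offer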

For UDP traffic, any packets larger than the MTU of your router's physical interface are dropped. For many/most applications this is no problem because few applications use large UDP packets. The most common failures will be some older UDP-based IPsec VPN implementations. A really good example of a UDP-based application that pretty much fails with MTU mismatches is Microsoft Terminal Services (Remote Desktop). But most of the time, if you use it at all, you'd be using it inside your LAN where the MTUs all match.
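(For reference, a quick sketch of the UDP size limits involved, assuming plain IPv4 with no options; the drop described above happens downstream at the 1500-byte hop.)

# Largest UDP payload that still fits in a single frame at a given MTU
# (20-byte IPv4 header + 8-byte UDP header).
def max_udp_payload(mtu: int) -> int:
    return mtu - 20 - 8

print(max_udp_payload(1500))  # 1472 - safe through the non-jumbo router
print(max_udp_payload(9000))  # 8972 - fine on the jumbo LAN, dropped at the 1500-byte hop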

For other less common transport layer protocols, your mileage may vary. Some will adapt, others will not.

I happen to have this very configuration at my home. All my PCs, etc., and my layer-2 switch are configured for jumbo frames, while my internet router is limited to a 1522-byte MTU (Mikrotik RB-450G). For almost all day-to-day use it works just fine. There are a few quirks from time to time, but it's worth it for the extra performance to/from my file servers.

Note that most internet connections are limited to a 1500-byte MTU at layer 3 (1522 bytes at layer 2). If your router DOES support jumbo frames on the interface, then jumbo-frame traffic outbound to the internet will get fragmented by the router rather than dropped. That's better, but it introduces a whole new set of issues to deal with.
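(If you want to check what actually fits on a given path, one way to probe it is a don't-fragment ping; this sketch assumes Linux iputils ping, where -M do sets the DF bit and -s is the ICMP payload size. On Windows the equivalent flags are -f and -l.)

import subprocess

def fits_path_mtu(host: str, mtu: int) -> bool:
    # ICMP payload = MTU minus 20-byte IP header and 8-byte ICMP header
    payload = mtu - 28
    result = subprocess.run(
        ["ping", "-c", "1", "-M", "do", "-s", str(payload), host],
        capture_output=True,
    )
    return result.returncode == 0

print(fits_path_mtu("8.8.8.8", 1500))  # usually True on a typical connection
print(fits_path_mtu("8.8.8.8", 9000))  # almost always False across the internet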
 
If you're doing 1Gb, I'd like to see you do a real test showing the performance gain from jumbo frames. Very minimal. No harm if you understand the quirks, like PigLover said, but don't expect much of an increase.
 
From my testing, it's about 10% using Samba shares and fast RAID configurations on both ends. It's the difference between getting 85-90 MB/s and 95-100 MB/s on file transfers. Not huge. If you don't have fast I/O on both ends - if you are reading/writing single-disk configs or non-RAID shares like WHS - don't bother, because your disks will be your bottleneck long before your network.

I did my testing over a year ago and it isn't documented in a way I can share.
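(For context on where numbers like that sit against theory, here is a back-of-the-envelope wire-efficiency calculation, assuming full-size frames and plain IPv4 + TCP with no options.)

# Wire efficiency = TCP payload / total bytes on the wire per frame.
def efficiency(mtu: int) -> float:
    eth_overhead = 38        # preamble 8 + header 14 + FCS 4 + inter-frame gap 12
    payload = mtu - 20 - 20  # minus IPv4 and TCP headers
    return payload / (mtu + eth_overhead)

std, jumbo = efficiency(1500), efficiency(9000)
print(f"1500-byte MTU: {std:.1%}")                 # ~94.9%
print(f"9000-byte MTU: {jumbo:.1%}")               # ~99.1%
print(f"theoretical gain: {jumbo / std - 1:.1%}")  # ~4.4%; anything beyond that is saved per-packet CPU work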
 
Guys, I am just learning here, playing around with stuff.

That's what we are all doing here - learning and sharing what we have learned. I don't think anyone was attacking.

BTW, one other consideration: from your username, I take it you are a PC gamer. Don't forget that in some cases using jumbo frames can increase packet latency (each packet is bigger, so even if you have QoS set up correctly, your gaming packets still have to wait for any packet already in flight to finish). Milliseconds at worst, but many gamers might care.
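(Rough numbers for that wait, assuming a single full-size frame is already on the wire ahead of the gaming packet.)

# Serialization delay: time for one frame to finish going out on the link.
def serialization_delay_us(frame_bytes: int, link_bps: float = 1e9) -> float:
    return frame_bytes * 8 / link_bps * 1e6

print(serialization_delay_us(1500))         # ~12 microseconds at gigabit
print(serialization_delay_us(9000))         # ~72 microseconds at gigabit
print(serialization_delay_us(9000, 100e6))  # ~720 microseconds on a 100 Mbps link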
 
10% is too much. If you're really seeing that much, your switching gear must be really inefficient somewhere. On 1Gb it's usually low single digits. In most implementations we see, people don't bother until they go 10Gb.

A well-done test by Jason Boche:

http://www.boche.net/blog/index.php...mparison-testing-with-ip-storage-and-vmotion/
 
Oh, I don't know about that. I can only report what happens in my network.

Switch: HP 1810G-24. Don't think that's too inefficient, is it?
NICs are all Intel Proset.
The test was done between my workstation (dual Xeon X5550s on a Supermicro X8DAH+) and my file server (Xeon X3460 on a Supermicro X8SIA-F). Hardware RAID on both ends.

If there is any inefficiency in this setup, it is Microsoft and Samba itself - but I just can't run over to Linux and NFS blindly, since the apps I use every day are only available on Windows (or Mac if I were so inclined, but I'm not). Note that Jason was testing NFS, iSCSI and vMotion - all of which are pretty optimized network applications. Samba is anything but optimized and would tend to magnify the gains.
 
I have read this before and don't argue with the minimal-improvement-versus-hassle point, but I do have a question for anyone with first-hand knowledge.

I was messing with an ESXi 4.1 server and had jumbo frames enabled on everything, including the guests using the vmxnet3 drivers. I did a very informal test of copying a 100MB file from a W2008 guest to an EQL PS6000E, and unless I did my math wrong it seemed like a 17% improvement over standard frames. Did I make a mistake, or does Windows benefit more than iSCSI?
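(The math itself is simple; the timings below are made up purely to illustrate how a 17% figure would fall out of a timed copy, not the poster's actual measurements.)

size_mb = 100
t_standard, t_jumbo = 1.17, 1.00  # seconds - hypothetical example timings
rate_standard = size_mb / t_standard
rate_jumbo = size_mb / t_jumbo
print(f"{rate_jumbo / rate_standard - 1:.0%} faster with jumbo frames")  # ~17%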
 
Agreed, that's about the same thing I read on the topic last time I looked into it.
Minimal improvements (if any) versus the hassle, so why bother for the most part.
 
http://63.196.71.246/~phil/jumbo.html is an old article (over 11 years) that provides some basic info about jumbo frames. In that testing, they got 409 Mbps at 100% CPU usage with 1.5K frames and 602 Mbps at 55% CPU usage with 9K frames. Just to keep things in perspective, that was on dual 300MHz Sun servers running Solaris 2.5.1.
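(Using just the numbers quoted above, the efficiency angle is easier to see as throughput per unit of CPU.)

print(409 / 100)  # ~4.1 Mbps per percent of CPU with 1.5K frames
print(602 / 55)   # ~10.9 Mbps per percent of CPU with 9K frames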

[Image: pktsize_hist.gif - histogram of packet sizes on an InternetMCI backbone OC3 link]

The above graph is from a study[1] of traffic on the InternetMCI backbone in 1998. It shows the distribution of packet sizes flowing over a particular backbone OC3 link. There is clearly a wall at 1500 bytes (the Ethernet limit), but there is also traffic up to the 4000-byte FDDI MTU. But here is a more surprising fact: while the number of packets larger than 1500 bytes appears small, more than 50% of the bytes were carried by such packets because of their larger size.

The basic idea of jumbo frames is to carry more data with the same per-packet overhead. Going from 1500 to 9000, you essentially have six times as much data per unit of overhead. As long as your equipment supports it, you're essentially just putting more data in each packet to make things more efficient.
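(One way to see the "six times" figure: how many frames per second it takes to fill a gigabit link at each MTU, assuming full frames and the usual 38 bytes of Ethernet framing overhead.)

def frames_per_second(mtu: int, link_bps: float = 1e9) -> float:
    wire_bytes = mtu + 38  # preamble, header, FCS, inter-frame gap
    return link_bps / (wire_bytes * 8)

print(f"{frames_per_second(1500):,.0f} frames/s at 1500 MTU")  # ~81,000
print(f"{frames_per_second(9000):,.0f} frames/s at 9000 MTU")  # ~13,800 - roughly 6x less per-packet work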

However, current CPUs and NICs are more powerful than they used to be. I'm pretty sure even the worst onboard NIC today won't use even the "improved" 55% CPU shown above to get anywhere near 1Gbps. Then you have the compatibility issues with devices that don't support jumbo frames. And if you're dealing with packet loss, then each lost packet is now losing 6 times the data.

Jumbo frames are generally better because they're more efficient, but our current systems are basically so overpowered that we don't really need that extra efficiency like we did before.
 
Interesting.

Makes a lot of sense, and I'm getting flashbacks on the topic, but really it depends what you're using your network for.
Myself, being a gamer, smaller packets sent more often are better for what I use my network for, seeing as I don't want the network to be on hold while it waits for enough data before sending out a packet.

When it comes to online gaming, ping/response times are more important, IMHO, than monster-sized packets.

For raw data transfer, though, it does seem that going for the jumbo-size tactic is the way to go.

As for my internal network data shares, I have messed with the jumbo-size tactic, but sadly, regardless of whether I use FireWire or Gigabit Ethernet, it seems I'm capped at 35 MB/s of bandwidth. My best guess is the PCI bus is maxed out.
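(For a rough sanity check on that guess: the classic shared 32-bit/33 MHz PCI bus has a hard theoretical ceiling, and every device on the bus shares it, so a NIC on it typically sees well under that in practice.)

bus_width_bits = 32
clock_hz = 33.33e6
print(bus_width_bits * clock_hz / 8 / 1e6, "MB/s theoretical peak, shared by every device on the bus")  # ~133 MB/s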

As for integrated NIC hardware in this day and age, I'd love to see a review where someone tests at the command line, or with some mega-low-level OS command line, and limits the hardware they're using/testing to 25 MHz of CPU and 4 MB of main system RAM; and if the driver for the NIC in question tries to use more than 25K of conventional memory, then that means the driver is bloatware ;)

I mean, the driver should be nothing more than a DMA channel, an IRQ, and a whack of input/output ranges, no? If we're dealing with a 100% hardware-based NIC.

Hey man, L'Eggo my 640K RAM!
Old NIC card review: Pentium 166 MHz vs. Pentium II 400 MHz
 