Jumbo frames for iSCSI?

danswartz

So I got my SAN box up and running. The ESXi 5 server has the vmnetwork port on the regular LAN segment. The QLogic iSCSI HBA is connected directly to a second gigabit NIC on the SAN box, in a different subnet. I have applied the rollup fix package to the vSphere box and am going to start moving some VMs from the NFS datastore to the iSCSI datastore. The question: is it worth switching the dedicated iSCSI link to use jumbo frames? Since it is a point-to-point link, there is no MTU conflict with any other host or switch. Any thoughts appreciated....
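
For reference, if I do try it, the SAN side would be something like this (a rough sketch; assuming the SAN box runs Linux, and eth1 and 10.10.10.2 are made-up names for the dedicated NIC and the HBA's address on the point-to-point link):

# bump the dedicated NIC to a 9000-byte MTU
ip link set eth1 mtu 9000
# verify 9000-byte frames actually pass without fragmenting:
# -M do sets don't-fragment, 8972 = 9000 minus 28 bytes of IP+ICMP headers
ping -M do -s 8972 10.10.10.2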
 
no no, no no, no no, no no, no no no, no no no no!

We laugh about it.


Oh, and the qla4xxx series doesn't support jumbo frames, so don't even try ;) (last I checked)
 
Wow, okay :) I could have sworn the BIOS setting for the HBA had an MTU field, but maybe I was misremembering. I was mostly interested in whether there was any kind of performance benefit. Sounds like no, though :)
 
We have discussed the performance benefit, or lack thereof, a few times on here. For 1Gb it's not worth it.
 
Okay, thanks. I hadn't seen those threads when they were first posted, since I wasn't interested in the subject. Yeah, for 1Gb, it sure seems like a waste of time...
 
So here is a better jumbo frames question.

If the network is already set up end-to-end for jumbo frame support, should I go ahead and enable it for a new VNX, or just leave it off and turn off jumbo frame support on the ESX hosts?
 
So here is a better jumbo frames question.

If the network is already set up end-to-end for jumbo frame support, should I go ahead and enable it for a new VNX, or just leave it off and turn off jumbo frame support on the ESX hosts?

What is your fabric, 10Gb?
 
No, it's all 1Gb.

When we did our deployment a year or so ago, jumbo frames were still being pushed.
 
To me, it's still not worth it. You're just getting a small increase in SOME cases across four links.

We've tested jumbo frames in our lab and saw only about a 3% performance improvement. For the massive PITA it is in some environments to enable JFs, it's just not worth it.

10Gb is a different story.
 
We've tested jumbo frames in our lab and saw only about a 3% performance improvement. For the massive PITA it is in some environments to enable JFs, it's just not worth it.

10Gb is a different story.

Yep... not to mention one day at 2am something is going to break, you'll rebuild a vmkernel, and you'll wonder why your datastores keep disconnecting and VMs keep dropping off... and then... oh yeah! The MTU was reset back to 1500! Pass.
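
If you do run jumbos anyway, at least keep the rebuild steps and a sanity check in your runbook, something like this (a sketch; vSwitch1, vmk1, and 10.10.10.2 are placeholder names for your iSCSI vSwitch, vmkernel port, and target):

# the MTU has to be set on BOTH the vSwitch and the vmkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# prove jumbos pass end-to-end: -d = don't fragment, 8972 = 9000 - 28 header bytes
vmkping -d -s 8972 10.10.10.2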
 
Well, the vSphere on VNX guide does state that you should enable jumbo frames to handle heavy I/O workloads. While our VNX fabric is Fibre Channel, our LeftHand is 10GbE, and jumbo frames provided a nice improvement there, but again, that's 10GbE.

Here is a link to the guide.

http://www.emc.com/collateral/hardware/technical-documentation/h8229-vnx-vmware-tb.pdf

For 10Gb, sure. For non-10Gb, no... And yeah, the VNX sheet will say to do it, just like every slide deck at conferences says to do it. That annoys me to no end.
 
Umm, okay. I guess I was getting the impression there was more of a qualitative difference :)
 
Umm, okay. I guess I was getting the impression there was more of a qualitative difference :)

I was told once that it also helps more at 10Gb because of the far larger number of frames per second going down a 10Gb pipe vs a 1Gb pipe, which makes the per-frame CPU savings more noticeable.
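
Back-of-the-envelope numbers on that, counting the ~38 bytes of preamble/header/FCS/inter-frame gap on top of each 1500-byte frame (so 1538 bytes on the wire per frame):

1 Gb/s  / (1538 B * 8 bits)  =  ~81,000 frames/s
10 Gb/s / (1538 B * 8 bits)  = ~813,000 frames/s
at MTU 9000 (9038 B on the wire): ~13,800 and ~138,000 frames/s

So whatever per-frame CPU overhead you save is roughly 10x bigger at 10Gb, which matches what I was told.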
 
We've tested jumbo frames in our lab and saw only about a 3% performance improvement. For the massive PITA it is in some environments to enable JFs, it's just not worth it.

10Gb is a different story.

Don't forget the retransmit penalty: overload slightly, start getting aborts, and the kaboom comes much faster.
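
Rough math on why: a dropped MTU-9000 frame means 6x the bytes to resend vs a dropped 1500-byte frame (9000 / 1500 = 6), and at a given bit-error rate the bigger frame is also ~6x more likely to get corrupted in the first place, so losses compound fast.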
 