Intel NIC settings...a tweak too far?

daglesj

Supreme [H]ardness
Joined
May 7, 2005
Messages
5,847
Okay I've been using Intel NICs for ages. They work fine and have plenty of configuration settings.

Now I like to tinker a little and I've gone through all the advanced settings switching them on and off, putting the buffers to max etc. Results seem to vary here and there. Plus I'm just not anal enough to spend hours and hours benching every setting. Why do that when I have access to a site full of folks who love doing that all day? ;)

You look up articles on tweaking Intel NIC settings and it seems half of them say switch everything on and the other half say switch it all off except the buffers, which go to max. :confused:

So what I'm asking is: is there any consensus on what to have on or off in your Intel NIC settings?

Do or don't you bother?:cool:
 
Nope, I don't ever bother, because I am never transferring enough data over my network to even care. If it works and latency is good along with near-expected throughput, then who cares?
 
Doesn't matter at all. You can tweak till you're blue in the face, but you're probably talking about going from ~940mbps to like ~960mbps. Just leave the defaults, as Intel probably set them that way for a reason, so unless something is really off there shouldn't be a need to change it.
 
Yeah, but your responses are lame and not [H], and he didn't ask for your excuses.

OP, there is no real global set of options that works for everyone. A lot of network performance is dependent on your switch. What kind of switch do you have? You can tailor your settings to mesh nicely with your switch.

For instance, jumbo frames play nicely with large sequential files and harshly with small files. But jumbo frames make no real difference over a 1Gbit pipe; they're best served on 10Gb connections.
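That "no real difference over 1Gbit" claim is easy to sanity-check with framing math. A rough sketch (shell + awk; it counts only the fixed ~38 bytes of Ethernet overhead per frame, not IP/TCP headers):

```shell
# Wire efficiency at standard vs jumbo MTU.
# Fixed per-frame Ethernet overhead:
# preamble 8 + header 14 + FCS 4 + inter-frame gap 12 = 38 bytes.
for mtu in 1500 9000; do
  awk -v m="$mtu" 'BEGIN { printf "MTU %d: %.1f%% of line rate\n", m, 100 * m / (m + 38) }'
done
```

So even under ideal conditions, the jump from 1500 to 9000 buys roughly 2% on a saturated gigabit link, i.e. on the order of 20Mbps.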
 

Yeah, I did play around with jumbo frames a bit a few years ago, but that's the one setting that seems to be universally listed as "Was useful for five minutes back in 2002, but no longer!"

I just have a simple metal gigabit SOHO TP-Link smart switch, so nothing fancy. All my gear has to sit in the living room, so enterprise gear with fans is a no-no. Do I do lots of data transfers? Not really, but it's nice to know one's network is running as slick as it can with the minimum of effort.

At the end of the day, I was just wondering what folks' thoughts were and if there are one or two key options that tend to make the most difference.
 
I tried messing around with jumbo frame sizes, but it only got me seriously degraded performance.

Without jumbo frames, my file transfers over Windows file sharing ran around 115MB/s (peaking at 125MB/s).
As soon as I enabled jumbo frames of any size, my transfer rate would drop to around 35MB/s to 55MB/s, depending on the size.
 
The only settings I mess with are the buffer sizes, maxing them out, as this makes a difference on slower CPUs. There's also a setting for CPU interrupts which you can tweak depending on what you want (in ProSet), but I just leave that alone.
 
Defaults for me. Maybe just jumbo frames at 9k
 
If you really want to know....

Disable all LSO (it's a security issue, and slower than the host processor these days). Disable interrupt moderation (there are two sections for this). Leave checksum offload alone. TX and RX buffers max out at 2048 with interrupt moderation disabled. If you have RSS queues, set it to 2 queues on a quad core and 4 (if possible) with 6 to 8 cores (better with 8). Disable EEE. That is pretty much it.
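For anyone wanting to try the same recipe on Linux, these are roughly the equivalent knobs via ethtool. This is a hedged sketch: eth0 is a placeholder, 2048 may not be your adapter's actual maximum, and not every NIC/driver exposes every option.

```shell
# Disable large send offload (TSO/GSO) -- the "LSO" setting in the Windows driver
ethtool -K eth0 tso off gso off
# Disable interrupt moderation (adaptive and fixed coalescing)
ethtool -C eth0 adaptive-rx off adaptive-tx off rx-usecs 0 tx-usecs 0
# Max out the TX/RX ring buffers (check the real limits with: ethtool -g eth0)
ethtool -G eth0 rx 2048 tx 2048
# Limit RSS to 4 queues (per the 6-8 core suggestion above)
ethtool -L eth0 combined 4
# Disable Energy-Efficient Ethernet
ethtool --set-eee eth0 eee off
```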
 
Jumbo frames only if doing iSCSI on that LAN; everything else usually default. Sometimes RSS scaling should be disabled, sometimes not; SR-IOV sometimes, depending on OS, bla bla bla.

For desktop use, turn off power saving and that's about all I do
 
Seems like no clear winner here. I'll keep looking around.

Thanks for your input folks.:)
 
And there really isn't. It's really about your network environment: managed vs unmanaged, other nodes, overall traffic, etc. Some of the settings depend on your CPU as well, but only really if the CPU is underpowered (i.e., using server cards to help offload CPU processing a bit).
 
Samir is right, there's no one way to set it up. Depending on the OS and application, some settings work better than others.
 
And one of the best tools to compare the results of messing with settings is iperf. That's how I figured out what not to change, lol.
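For reference, a minimal iperf3 before/after comparison looks like this (the address is a placeholder for the machine running the server side):

```shell
# On one machine, start a server:
iperf3 -s
# On the other, run a 10-second test, then repeat with -R for the reverse direction:
iperf3 -c 192.168.1.10 -t 10
iperf3 -c 192.168.1.10 -t 10 -R
```

Change one NIC setting at a time between runs; otherwise you can't tell which knob moved the number.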
 
........And that would be because something in your networking chain does not support Jumbo Frames or at least the frame size you selected.
 
You are incorrect here.
All equipment is verified for jumbo frames up to 9k, some even 12k.
Performance testing at both 3k and 9k was terrible for some reason, but jumbo frame support was not the cause.
 

I'm not going to argue about whether you should enable jumbo frames, as you'll probably not get out of it what you are looking for.

But operationally, the proof is in the testing.

Assuming QoS or VLAN tagging is not involved, did you test at 1500, 4000 and 9000?

Did you enable and match the maximum frame size on both ends?

If you are using tagging, did you edit the registry values for the frame size to allow for the extra 4 bytes of the VLAN tag?
 
You kinda already know from my thread here
https://hardforum.com/threads/slow-50mb-s-file-sharing-in-win7.1944311/ ;)

I tried 1500, 4k and 9k.
Verified with ping -f that it was going through without packet fragmentation.

1500 was the best option for me with up to 125MB/s transfer speed in windows file sharing
4000 was second best at around 85MB/s
9000 was third best with a drop all the way down to 55MB/s

However, both NICs are Realtek, so it might well just be a Realtek issue. Driver updates were tried, though.

The connection was pc > cable > Buffalo 24port 1gbit switch > cable > other PC.

I am definitely not debating that 9k jumbo frames should be the better option for raw file transfer speed,
just that you have to test it afterwards, as not everything runs as expected when it comes to jumbo frames.

QoS and VLAN tagging were not involved.
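For anyone repeating that ping -f check: the payload size you pass has to leave room for the 28 bytes of IP and ICMP headers, so a 9000-byte MTU is probed with a payload of 8972 (the address is a placeholder):

```shell
# Windows: -f sets Don't Fragment, -l sets the ICMP payload size.
# 9000 MTU - 20 (IP header) - 8 (ICMP header) = 8972.
ping -f -l 8972 192.168.1.10
# Linux equivalent:
# ping -M do -s 8972 192.168.1.10
```

If that ping fails while a plain ping works, something in the path is not passing frames at that size.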
 
I can't remember the last time I've read about a positive experience with jumbo frames.
 

Yeah, every article I see on jumbo frames just says it's obsolete and not to touch it. I also guess the fact that everything else on the network has to be configured the same adds in potential issues and makes it too much of a liability. I've tried all the settings between my PCs and NAS, and the 1500 setting just worked the most reliably.

If it was really worth it the default would be 9000 I guess.
 
And what's the real potential gain of jumbo frames anyway? 42 bytes x 6 = 252 more bytes? I could understand if the packets per second were the same and the payload were 9000 vs 1500; then that would be a significant increase (and would be beyond 1Gbps). But it seems that all jumbo frames do is eke out just a bit more payload compared to a regular string of packets. I think 2.5/5Gbps or LAGs have more potential than jumbo frames.
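Another way to frame the math above: frames per second at gigabit line rate. A quick sketch (shell + awk, assuming ~38 bytes of fixed Ethernet overhead per frame); fewer frames roughly means less per-packet work for the host, which is arguably a bigger win than the ~2% payload gain:

```shell
# Frames per second at 1Gbps line rate: 1e9 bits / ((MTU + 38) bytes * 8 bits/byte)
for mtu in 1500 9000; do
  awk -v m="$mtu" 'BEGIN { printf "MTU %d: about %d frames/sec\n", m, 1e9 / ((m + 38) * 8) }'
done
```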
 
Just remember, if you go for jumbo frames, the entire path between the two points has to allow it. If you run 4000-byte frames between the server and switch, but the client PC is connected to a switch that only does 1500 bytes, you are going to get fragmentation, which can cause plenty of issues.

SamirD also mentioned LAGs. Note these are never perfectly load balanced; they use an algorithm to balance traffic between the ports. These days most solutions do it based on source/destination IP address and TCP/UDP port, but this means a single flow between two points for a specific application is still limited to one link.
 