Looking into toying with jumbo frames and have a question or two...

jimthebob

Gawd
Joined
Mar 23, 2013
Messages
889
So I know my switch can support an MTU of 9000, and I know my PCs and server can, but what I'm not sure about is what would happen to my Roku stick or nVidia Shield if I up the frame size.

Another option, which I'm not sure is possible, is to utilize both gigabit connections on my main PC and server (since those are the two that carry most of the large file traffic). Would it be possible to configure one of the gigabit connections to use a standard MTU for video streaming to local devices and configure the other to use jumbo frames only when talking to other devices that support them?
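For what it's worth, a split like that is doable in principle on Linux, though the MTU is a per-interface setting, so it only helps if the jumbo-capable peers are actually reached through the jumbo interface (e.g. by putting them on their own subnet). A rough sketch, where the interface names and the subnet are placeholders, not anything from this thread:

```shell
# Per-interface MTU split on Linux (requires root; eth0/eth1 and the
# 192.168.50.0/24 subnet are hypothetical placeholders).
ip link set dev eth0 mtu 1500    # streaming/general traffic: standard MTU
ip link set dev eth1 mtu 9000    # storage traffic to jumbo-capable hosts

# Traffic is chosen by routing, not by MTU, so route the storage
# subnet out the jumbo interface:
ip route add 192.168.50.0/24 dev eth1

# Verify the setting took:
ip link show eth1 | grep -o 'mtu [0-9]*'
```

On Windows the equivalent would be the "Jumbo Packet" property on each adapter; either way the switch just needs to pass the large frames through.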
 
I've played around with MTU many times in the past to little benefit. I've also looked up loads of articles and they all pretty much say don't bother with it. I think it was a feature that 'seemed like a good idea at the time'.

Basically, hardware has moved on and made it of little use now.
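The back-of-the-envelope numbers bear that out: even in the best case, jumbo frames only buy a few percent of wire efficiency. A quick check using standard Ethernet framing overheads (nothing specific to any hardware in this thread):

```shell
# Best-case TCP payload efficiency per Ethernet frame.
# On-wire overhead per frame: 8 (preamble/SFD) + 14 (Ethernet header)
# + 4 (FCS) + 12 (interframe gap) = 38 bytes, plus 20 (IP) + 20 (TCP)
# headers inside the payload.
std_payload=$((1500 - 40))      # 1460 bytes of TCP data per frame
std_wire=$((1500 + 38))         # 1538 bytes on the wire
jumbo_payload=$((9000 - 40))    # 8960
jumbo_wire=$((9000 + 38))       # 9038

# Efficiency scaled by 10000 to stay in integer math:
std_eff=$((std_payload * 10000 / std_wire))       # 9492 -> 94.92 %
jumbo_eff=$((jumbo_payload * 10000 / jumbo_wire)) # 9913 -> 99.13 %
echo "standard: ${std_eff}/10000, jumbo: ${jumbo_eff}/10000"
```

So the theoretical ceiling moves from about 94.9% to about 99.1%. The bigger historical win was fewer packets per second and therefore less CPU/interrupt load, which modern NIC offloads largely take care of anyway.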
 

What model switch is it?
 
TRENDnet TEG-S81g

I know it's not a high-end switch, but I did check the manufacturer's site and it "said" it supported it...
 
If your server has jumbo frames enabled but your client does not, you will get I/O errors. Both endpoints need the same setting. Enabling jumbo frames on your switch just means that it will accept them; it does not convert frames from jumbo to non-jumbo or vice versa.
If the Roku or Shield do not connect to the server, they will be fine.
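One quick way to check whether a jumbo path actually works end to end is a don't-fragment ping (Linux ping syntax shown; the target address is a placeholder, and Windows would use `ping -f -l 8972` instead):

```shell
# Probe a 9000-byte MTU path with the Don't Fragment bit set.
# ICMP payload = desired MTU minus 28 bytes (20 IP + 8 ICMP headers).
payload=$((9000 - 28))   # 8972
ping -M do -s "$payload" -c 3 -w 3 192.168.1.50 \
  || echo "9000-byte frames did not survive the path"
```

If any hop (or the far endpoint) can't handle the frame size, the ping fails instead of silently fragmenting, which is exactly the mismatch that causes those I/O errors.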
 

Unfortunately they do; the server streams to all my devices in the house. I have seen where you can make the change on the nVidia Shield, but I don't think the same is true for the Roku. So my answer is clear: replace all my Roku sticks with nVidia Shields, it's the clear choice!!
 
The only time it is okay to use jumbo frames is when your enabled network is physically segregated from the non-enabled network. Otherwise you will run into issues down the road - and troubleshooting those kind of network issues is never fun. The only real application I've ever used jumbo frames for is on a SAN where everything is running jumbo and you are sure of it.
 

Yep, we use jumbo frames for iSCSI SAN connections and not for general network purposes here as well. Too much hassle and no real benefit. We don't even enable it for ESXi hosts with 10-gig connections, and they have no problem maxing out those 10G NICs.
 
Thanks for the input, guys. I was looking into this option because, for some reason, transfers between my main PC (all SSD based) and server (4 TB RAID 1 HGST drives) show inconsistent write performance when copying files. Sometimes the connection will hit 98%+ saturation at well over 100 MB/s, but most of the time it only does about 80% saturation at around 85 MB/s. This is when I have no clients streaming and no updates or backups of any kind running, and it's confusing me. Both PCs are more than capable (i7 on the main rig, i3 on the server, both with SATA 6G), so I don't know where my issue could lie.
 

Big files, small files, etc. make a difference. SMB has a lot of overhead; it's a chatty protocol.

If you really want to test your network performance, put iperf on both sides, run various tests with different thread counts, and see what kind of speed you get. My guess is it will be near 98-99% every time you run iperf.
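A minimal iperf3 session might look like this (the server address is a placeholder, and iperf3 is assumed to be installed on both machines):

```shell
# On the server (blocks until stopped; 192.168.1.10 is a placeholder):
#   iperf3 -s

# On the client, try a few variations:
iperf3 -c 192.168.1.10 -t 5          # single TCP stream, 5 seconds
iperf3 -c 192.168.1.10 -t 5 -P 4     # four parallel streams
iperf3 -c 192.168.1.10 -t 5 -R       # reverse: server sends, client receives
```

If iperf shows ~940 Mbit/s both ways but SMB copies don't, the bottleneck is the protocol or the disks, not the network.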
 

I'll have to try that when I get off work and back home. As far as file size goes, most of the stuff I transfer is TV shows/movies ranging anywhere from 500 MB to 4+ GB in size, with occasional ISO copying when I want to play a game, but not too much of that. It's not like I'm copying gigabyte upon gigabyte of small pictures or other random files.
 
I'm running just a 3K MTU here because it's the only size I could find that all my cards supported. Some of my Realtek cards don't do 9K even though they specify it.
 