Jumbo frames? & GS116

I picked up a GS116 from eBay; I saw that another member had purchased one from the same seller. I don't have the switch yet.

I later found out that the switch does not support jumbo frames.

We store all our data on a server in the basement (docs, media, etc.), and we do need to move large amounts of data over the network (ISOs, backups, etc.).

Will the lack of jumbo frame support be that noticeable? Is it worth going through the hassle of getting a different switch?
 
Probably not. I have never found a huge difference from using jumbo frames, TBH. In most environments they tend to cause more issues than they solve.
 
I don't see what OS you are using, but I have heard Vista doesn't always play nice with that setting. I have a GS116 connected to CAT6 wiring and find that the limiting factors in my system have been wiring issues and PC hardware limitations. I run 30-60 MB/s and seem to be more limited by the old hard drives in my server and the onboard NICs.
 
Some revisions of the GS116 do support jumbo frames, which can help, especially with older or lower-end hardware such as some consumer NAS boxes, but it is probably not worth the trouble otherwise.
 
We don't have any Vista machines on our network, just XP and various Linux boxes.

I went ahead and had the seller cancel the transaction...

I can get the non-jumbo-frame GS116 for about $60 shipped, or the one on Newegg.

I just don't want to shortchange us on performance.
 
Jumbo frames definitely do help and are worth it. Well, unless you have Marvell NICs, which I have not been able to get to work right. Realtek and especially Intel do well in my experience. Marvell just freezes the entire computer when I try to enable the setting, and this is with the latest drivers (as of a couple of months ago, when I last tried it, at least).
 
Is Newegg the cheapest place to pick up a GS116 that does support jumbo frames?

I can't seem to find many places that can tell me whether theirs supports jumbo frames.

Anyone else care to chime in? I don't want to spend the extra 70 bucks if it just won't be worth it.

The NAS/server is a 2800+ with numerous 750 and 500 GB SATA drives.
 
A new unit should have a lifetime warranty (except for the power supply), while a Netgear refurb would have a warranty of around 30 days and a somewhat higher chance of failure. I'd go with a new unit for that reason if I could afford it.

According to an old Netgear support page, jumbo frames are supported starting with these revisions: GS116 serial numbers beginning with 19E, 140x5B, or 140x5C.
 
Remember that with gigabit networking and jumbo frames you're severely limited by everything else in your network:
1) Your server must be decent enough to sustain a constant transfer rate
2) Your server needs a dedicated NIC (Intel preferred, PCI-E if possible)
3) Your switch must support jumbo frames (you know this)
4) Your wiring must be good and test out at gigabit speeds
5) Your workstations must have Intel/Realtek NICs (dedicated preferred)

So if everything else will work and all you need is a gigabit switch, great. If not, be prepared to spend a bit getting everything else up to speed so you can even use jumbo frames. (One quick way to check each machine's current MTU is sketched below.)
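
One quick, hedged check for point 5 and the workstations generally: on a Linux box you can read each interface's configured MTU straight out of sysfs. This is only a minimal Linux-only sketch under that assumption; the 9000-byte threshold is just an example, and Windows machines would need to be checked in the NIC driver's advanced properties instead.

```python
# Minimal Linux-only sketch: list each network interface's configured MTU
# via sysfs so you can spot machines still running a standard 1500-byte MTU.
# Assumes /sys/class/net exists (any reasonably modern Linux).
from pathlib import Path

def interface_mtus():
    """Yield (interface_name, mtu) pairs read from /sys/class/net."""
    for iface in sorted(Path("/sys/class/net").iterdir()):
        if iface.name == "lo":
            continue  # loopback reports a huge MTU; not interesting here
        try:
            mtu = int((iface / "mtu").read_text())
        except (OSError, ValueError):
            continue  # skip anything we cannot read
        yield iface.name, mtu

if __name__ == "__main__":
    for name, mtu in interface_mtus():
        note = "jumbo-sized" if mtu >= 9000 else "standard"
        print(f"{name}: MTU {mtu} ({note})")
```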
 
The NAS/server is a 2800+ with numerous 750 and 500 GB SATA drives.

More details? Motherboard, NIC, OS, storage controller, file system?

Contrary to some other opinions, I'd want jumbo frame support especially when the system is less than stellar.
 
Most Realtek onboard network chips will do jumbo frames. Of course, the performance won't be near that of a dedicated Intel card, but the jumbo frames still help.

That said, I'd take a PCI Intel card without jumbo frames over onboard Realtek with jumbo frames. An Intel card with jumbo frames, especially a PCI-E one, tilts things even further towards Intel.
 
http://sd.wareonearth.com/~phil/jumbo.html has a lot of good technical info on jumbo frames.

Essentially you're handling bigger chunks of data for the same amount of per-packet overhead. This leads to higher throughput with less processing. One study showed a 50% increase in throughput with 45% less CPU load.

If you can use it, you should. Be aware that everything on the network (or VLAN) needs to support it, though.
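
To put rough numbers on the "bigger chunks, same per-packet overhead" point, here is a back-of-the-envelope calculation of my own (not taken from the linked page): it compares how many frames, and how many overhead bytes, a roughly 4.4 GB DVD ISO costs at a 1500-byte versus a 9000-byte MTU, assuming 40 bytes of TCP/IP headers inside each packet and 38 bytes of Ethernet framing around it.

```python
# Back-of-the-envelope comparison of per-packet overhead at standard vs.
# jumbo MTU.  The figures are illustrative assumptions: a ~4.4 GB DVD ISO,
# 20-byte IP + 20-byte TCP headers inside the MTU, and 38 bytes of Ethernet
# framing (header, FCS, preamble, inter-frame gap) on the wire per frame.
ISO_BYTES = 4.4e9
IP_TCP_HEADERS = 40
ETHERNET_FRAMING = 38

def transfer_cost(mtu):
    payload = mtu - IP_TCP_HEADERS          # usable data bytes per frame
    frames = ISO_BYTES / payload            # frames needed for the whole ISO
    overhead = frames * (IP_TCP_HEADERS + ETHERNET_FRAMING)
    return frames, overhead

for mtu in (1500, 9000):
    frames, overhead = transfer_cost(mtu)
    print(f"MTU {mtu}: ~{frames:,.0f} frames, ~{overhead / 1e6:.0f} MB of overhead")
```

The raw byte savings come out to only a few percent; the bigger win is roughly six times fewer frames for the NICs, switch, and CPUs to handle, which is where the reduced processing load comes from.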
 
The NAS is an NF7-M running XP Pro with 2 GB RAM and a Realtek 8169.

As far as the controller goes, we're running two of these:

http://www.newegg.com/Product/Product.aspx?Item=N82E16815124020

We're looking for options for a better controller.

As it looks, we may just get the cheaper non-jumbo-frame switch.

From what I've read we'd gain about 15 MB/s?

By everything, that just means every device from end to end, correct? There are a number of machines that do not have gigabit and never will need gigabit.
 
Looking at the specs, it looks like sneakernet may be a more efficient solution than jumbo frames, at least for the really big files.
 
The NAS is an NF7-M running XP Pro with 2 GB RAM and a Realtek 8169.

As far as the controller goes, we're running two of these:

http://www.newegg.com/Product/Product.aspx?Item=N82E16815124020

We're looking for options for a better controller.

As it looks, we may just get the cheaper non-jumbo-frame switch.

From what I've read we'd gain about 15 MB/s?

By everything, that just means every device from end to end, correct? There are a number of machines that do not have gigabit and never will need gigabit.

The NF2 chipset has a reputation of being about the best chipset of its kind, but unfortunately, despite that, it's the biggest limitation of your current setup, and improving on it amounts to replacing the motherboard, CPU, etc.

Chipsets of that generation force you to use PCI gigabit NICs and sometimes, as in your case, PCI storage controllers. These then compete for the same limited bandwidth and cap your maximum throughput (rough numbers below). The only way out is a newer chipset that isn't so tied to the PCI bus, with onboard or add-in PCIe NICs and storage controllers.
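
To make the shared-bus point concrete, here is the rough arithmetic under the usual assumptions: classic 32-bit/33 MHz PCI peaks at about 133 MB/s, real-world efficiency is lower (the 70% figure below is just an illustrative guess), and during a network file transfer the data crosses that one bus twice, once through the PCI storage controller and once through the PCI NIC.

```python
# Rough shared-PCI-bus arithmetic.  Assumptions (mine, for illustration):
# classic 32-bit / 33 MHz PCI, ~70% of the theoretical peak achievable in
# practice, and file data crossing the bus twice (disk controller -> RAM,
# then RAM -> NIC) when the server streams a file over the network.
BUS_WIDTH_BYTES = 4          # 32-bit PCI
BUS_CLOCK_MHZ = 33.33

theoretical = BUS_WIDTH_BYTES * BUS_CLOCK_MHZ        # ~133 MB/s peak
practical = 0.70 * theoretical                       # assumed usable share
per_stream_ceiling = practical / 2                   # NIC and storage compete

print(f"Theoretical PCI peak:            {theoretical:.0f} MB/s")
print(f"Assumed practical bus bandwidth: {practical:.0f} MB/s")
print(f"Ceiling for one network stream:  ~{per_stream_ceiling:.0f} MB/s")
```

Gigabit line rate is about 125 MB/s by itself, so on this class of board the bus, not the switch, is what keeps you well below that.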

If you're going to stick with this system, it is a good idea to get a jumbo-capable switch, as that will reduce the PCI and CPU overhead. But another way to look at it is that you're still going to be limited by the rest of the system, and from that perspective you'd be better off investing the difference towards a later overhaul that gets you off the PCI bus, and perhaps also towards a newer OS.

The OS is another limitation: while the PCI bandwidth constraints will cap the maximum throughput possible, so will the OS itself, and the practical difference between an ideal hardware setup and what you have under that OS might not be as large as one might assume. If you're OK with, say, 2-3x the throughput you get with 100 Mb/s (roughly 20-35 MB/s), then you might be able to achieve that without any further computer upgrades, rendering the issues I've pointed out above moot.

Moreover, cabling is jumbo-capable by definition, and gigabit NICs typically do auto-crossover, so you could set up a direct cable connection between two machines and test the difference that jumbo frames make for yourself.
 
I don't see us upgrading the system for at least 4-6 months. Anything I get my hands on that may have PCI-E would be used in the theater as a new HTPC. *prays to the DD gods*

The 500 and 750 GB drives push 62 and 65 MB/s according to a chart I dug up on the net.

We will be adding a few newer TB+ drives sometime this summer.

I don't think I can justify the $70+ for the (possible?) 15 MB/s increase.

I checked my roomie's workstations and only one of them will support jumbo frames. He would need a few new NICs (all PCI...).

Exactly what situations would reap the benefits of jumbo frames?
 
Exactly what situations would reap the benefits of jumbo frames?

You should be able to see the difference in "raw" networking benchmarks, in both throughput and CPU utilization, and similarly during file transfers.

The best way to check this, as with anything performance-related, is to do actual measurements in context. In your case that would amount to running a cable directly between the server and a jumbo-capable client, enabling jumbo frames on both ends, and using some benchmark method: at the very least a very large file transfer (e.g. a DVD ISO), optionally a "raw" networking benchmark as well to tell you just what the network itself is doing. A network-only number is not very meaningful in your case, as the PCI bus is a shared bottleneck.

A 15 MB/s improvement sounds optimistic to me, as that would be a large gain relative to the base throughput, but trying it and seeing is still the only way to know.
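
If you do wire two machines directly to test, something like the following rough sketch can stand in for a "raw" networking benchmark when a proper tool isn't handy (a real tool such as iperf is the better choice if you can get it). It's a minimal, hypothetical helper, not anyone's established method: the port number and the 1 GiB transfer size are arbitrary assumptions. Run the server half on one box and the client half on the other, once at MTU 1500 and once at MTU 9000 on both NICs, and compare the MB/s figures.

```python
# Minimal TCP throughput test sketch (hypothetical, illustration only).
# Usage:  python nettest.py server            on the receiving machine
#         python nettest.py client <host-ip>  on the sending machine
import socket
import sys
import time

PORT = 5001                   # arbitrary test port (assumption)
TOTAL_BYTES = 1024 ** 3       # send 1 GiB of dummy data
CHUNK = bytes(64 * 1024)      # 64 KiB of zeros per send/recv call

def server():
    """Accept one connection, drain it, and report a rough receive rate."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.time()   # rough: timer starts at accept, not first byte
            while True:
                data = conn.recv(len(CHUNK))
                if not data:
                    break
                received += len(data)
            elapsed = time.time() - start
            print(f"Received {received / 1e6:.0f} MB from {addr[0]} "
                  f"at {received / 1e6 / elapsed:.1f} MB/s")

def client(host):
    """Send TOTAL_BYTES of zeros and report the send rate."""
    start = time.time()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((host, PORT))
        sent = 0
        while sent < TOTAL_BYTES:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
    elapsed = time.time() - start
    print(f"Sent {sent / 1e6:.0f} MB at {sent / 1e6 / elapsed:.1f} MB/s")

if __name__ == "__main__":
    if len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()
```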
 