GigE network but not gigE speeds?

Fryguy8

[H]ard|Gawd
Joined
Sep 26, 2001
Messages
1,707
2 computers, both running Linux, with the same network card (Intel gigE PCI-E), connected via a Netgear ProSafe switch. For testing purposes they are currently not connected to anything else.

Both cards are properly detected, and ethtool shows them both set up at 1000Mb/s, and the switch is lighting up both ports as gigE connections. Both are coming up as full duplex, etc etc etc.

But transfers are at typical 100Mbps speeds (3-4 MB/s), tested via iperf, scp, and ftp (proftpd).

What else can I do to cause this gigE hardware to actually run at gigE speeds?
 
if CAT5e, while technically it supports 'gigabit' Ethernet, realistic transfer speeds in many cases top out around 350Mbps, less with poor-quality cable. For true gigabit Ethernet speeds you need CAT6 cable, properly terminated.

This is a complete fabrication.
 
Those NICs work very well under Linux. What kernel are you using? Also, are you using the kernel-supplied driver or building one from source? Another thing, like others mentioned: make sure your cables are at least Cat5e, preferably Cat6. The speeds you're getting (3-4 MB/s) are pretty slow even for 100Mbit. Also, did you try running a test without using a physical storage device? Like dd through netcat or something?
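One quick way to take the disks out of the picture entirely (a rough sketch; the address and port are placeholders, and some netcat builds want "nc -l 5000" without the -p):

Code:
# on the receiving box: discard everything that arrives on TCP port 5000
nc -l -p 5000 > /dev/null
# on the sending box: push 1 GB straight from memory, no disk involved
dd if=/dev/zero bs=1M count=1024 | nc 192.168.1.10 5000

GNU dd prints the transfer rate when it finishes, so you get a network-only number.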
 
But transfers are at typical 100Mbps speeds (3-4 MB/s), tested via iperf, scp, and ftp (proftpd).

3-4 MB/sec is nowhere near 100Mb/s speed; you should be closer to 12 megabytes/sec with 100bT. Sounds like a speed/duplex mismatch. Check 'ifconfig' on each end for errors; can your switch report interface speed & errors?


Without any tuning of the base OS or using non-default parameters, iperf on GigE should easily hit 300 megabit/sec.
 
Flint beat me to it; what you are seeing is definitely not typical of 100Mbit Ethernet. A realistic number is somewhere in the neighborhood of 8-11 MB/s once you account for protocol overhead during transmission.

Another thing that I want to add: I have seen the EXACT same thing you are seeing with scp, 3-4 MB/s when using WinSCP. I never really looked into whether it was a limitation of the protocol or the client. I opened a quick Samba share on the same box and saw my gigabit speeds using Cat5e.


What category of ethernet cable are you using? If CAT5, then 100Mbps is normal; if CAT5e, while technically it supports 'gigabit' Ethernet, realistic transfer speeds in many cases top out around 350Mbps, less with poor-quality cable. For true gigabit Ethernet speeds you need CAT6 cable, properly terminated.
Ugh, I guess this is what I have to deal with in a few weeks when I move to NC. Horrible knowledge from the only cable provider in the area. Are you serious with this statement? Do your homework, son. :rolleyes:
 
3-4 MB/sec is nowhere near 100Mb/s speed; you should be closer to 12 megabytes/sec with 100bT. Sounds like a speed/duplex mismatch. Check 'ifconfig' on each end for errors; can your switch report interface speed & errors?


Without any tuning of the base OS or using non-default parameters, iperf on GigE should easily hit 300 megabit/sec.

Exactly. You should be getting well more than 3-4 MB/s even with 100meg. 12.5 MB/s is the theoretical max for a 100Mb connection (12.5 * 8 = 100).

Maybe you can try bypassing your switch and connecting the cards directly with a crossover cable?

iperf is a great tool to test with. Do you get the same results if you reverse which machine is the server and which one is the client?
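For reference, a minimal iperf run looks like this (iperf 2 syntax; 192.168.1.10 stands in for the server's address):

Code:
# on machine A (server)
iperf -s
# on machine B (client): 30-second TCP test, reporting every 5 seconds
iperf -c 192.168.1.10 -t 30 -i 5

Then swap which box runs -s and which runs -c and compare the two numbers.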
 
Intel NICs are pretty good on Windows too.
I use some dual-port Intel gigabit cards in servers running Win 2k3 and they deliver ~900-950 Mbit/s.
What switch do you use?
 
Eliminate possibilities. Try a crossover cable first and see if that changes anything. Also, try both small and large files, and make sure any kind of file protection/checking is turned off.
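If you need known file sizes to test with, dd will generate them quickly (the paths and sizes here are arbitrary):

Code:
# a small and a large test file full of zeros
dd if=/dev/zero of=/tmp/small.bin bs=1M count=100
dd if=/dev/zero of=/tmp/large.bin bs=1M count=4096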
 
Crossover cables are a no-no on GigE NICs, since they have auto-crossover built in.

Then a crossover wouldn't matter; either way, he's trying a direct connection and he's trying a new cable.
 
K, thanks for the advice, those of you who actually bothered to read my whole post.

I'm using Cat6 cable. One of them was excessively long (50+ ft when it only needs to be like 15), so I swapped it out just to make sure. No change.

I next did a direct connection without the switch, no change.

And for those of you who've been asking me to check with multiple programs, please note that I have. As for speed/duplex matching, I thought I made it perfectly clear that I'm 100% sure both cards are running at 1000Mb/s, full duplex.

Both machines are running Ubuntu Linux with stock kernels (though not the same kernel; one is the server kernel, the other the standard desktop kernel). I haven't compiled a kernel in quite some time, and I wanted to use these machines to get work done rather than to learn and tweak Linux (I went through that with 2 years of LFS to learn Linux). The driver seems to get detected fine, and again, the link is being detected and set up as a gigE full-duplex connection.

Anything else I should be checking?
 
There could be updated drivers for your NICs, especially if they are fairly new. You may want to search the Ubuntu forums as well, if you haven't already. There might be a bug with auto-detection and it's using an inferior driver (I had this issue with a Broadcom chipset in Fedora Core 6 a while back).

You might want to try forcing 100Mbit/full duplex on each card and see if you obtain higher speeds; 3~4 MB/sec seems pretty slow even for 100M, especially in seemingly ideal conditions (short Cat6 cable, direct connection). You may be able to force a 1000Mbit/full duplex connection as well. I've seen auto-negotiation cause problems with GigE cards; unlikely with the same brand, but still possible.
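For reference, forcing the link with ethtool looks something like this (eth0 is a placeholder; note that 1000BASE-T requires auto-negotiation, so "forcing" gigabit really means advertising only the 1000/full mode):

Code:
# lock the link at 100 Mbit, full duplex, auto-negotiation off
sudo ethtool -s eth0 speed 100 duplex full autoneg off
# "force" gigabit by advertising only 1000/full
sudo ethtool -s eth0 speed 1000 duplex full autoneg on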

You may want to look into Ethernet testing/sniffing programs to determine whether you're getting errors on your link; if so, it could be either a driver issue or a bad NIC, since you've pretty much eliminated bad cable/switch.
 
No updates on the drivers. The reason I picked Intel NICs in the first place was that they are generally considered among the best available for desktop/personal use, and they have exceptional Linux/driver support.

Manually forcing 100Mbit doesn't change anything, and neither does manually forcing 1000Mbit. Both yield the exact same results.
 
Yea, the only issues I've ever had with Intel NICs are typically related to bad cable or auto-negotiation, usually caused by whatever I've connected them to.

Well, you've also knocked out auto neg and drivers as possible causes.

At this point, I'd try some Ethernet testing programs to see if you're getting any CRC/alignment errors or collisions; these can cause bad speed issues as well. While running the programs, try transferring large and small files, singly and in batches, and see if there is any common ground as to when the errors occur (for me it's a curiosity thing; just moving large files will probably be enough to generate errors if any exist). Here's a reference on some common Ethernet errors:
http://www.ncat.co.uk/cisco-documents/Trouble-shooting-understanding-Ethernet-errors.htm

If you want to get really radical, you could also try another (boot-CD based) Linux distro and see if the same issues occur. Might also want to try Windows; you can use BartPE to create a small, bootable XP disc if you have a copy of XP lying around, so you don't have to deal with the whole reformatting thing.

If all that comes back negative, you might want to try another brand or another model. You've pretty much eliminated just about everything except the cards themselves.
 
At this point, I'd try some Ethernet testing programs to see if you're getting any CRC/alignment errors or collisions; these can cause bad speed issues as well.

That's what I'd say too, put a sniffer on it and see if it stinks.
 
Do you have a third machine (at any speed, 10/100 or gigabit) that you could test against? 3-4 MB/s is awfully slow, especially with something like iperf. What does iperf's UDP test show?
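A UDP run takes TCP windowing out of the equation entirely (iperf 2 flags; the address and offered rate are just examples):

Code:
# server side
iperf -s -u
# client side: offer 500 Mbit/s of UDP for 10 seconds, then check
# the loss percentage reported back by the server
iperf -c 192.168.1.10 -u -b 500M -t 10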
 
Can you type this on your machines and send us the results:
"sudo hdparm /dev/sda"

Let's leave hard drives out of a networking issue; if iperf is slow, then hard drive speeds/specs/setup have nothing to do with it.
 
Glad to see incorrect info posted on some website.

Auto-crossover is part of the GigE spec. Every GigE copper device has it. So no crossover cable is needed for GigE, ever.

That is not true; auto MDI-X is not a requirement of the gigabit Ethernet specifications. Unless you can cite the exact portion of IEEE 802.3ab that states auto MDI-X detection is required, I think you are just repeating some BS you heard from someone else. Someone may have confused you by saying auto-negotiation is required; auto-negotiation covers speed (100Mbps or 1000Mbps), not MDI-X detection, and auto-negotiation is part of the spec, not auto MDI-X.
 
Glad to see incorrect info posted on some website.

Auto-crossover is part of the GigE spec. Every GigE copper device has it. So no crossover cable is needed for GigE, ever.

Not to burst your bubble, but you shouldn't speak in absolutes. I have run into many issues where I had to disable MDI-X auto-negotiation on a GigE copper link. This was usually due to devices of different brands not negotiating properly. Most commonly I have had this between my devices and an ISP device. 95% of the time it works properly, but not always.
 
Sorry to cause this controversy by suggesting a crossover cable. I wasn't thinking about gigabit using all 8 wires. I was just thinking how to eliminate the switch from the equation.

Another option is a live CD; maybe Ubuntu is doing something funky? I'm a fan of sysresccd. It doesn't come with iperf, but you should be able to grab it with wget. Worst case, you waste a CD-R and a few minutes of your time.

I'm still curious whether your tests report the same for both test directions. I had a similar gigabit issue, but my bottleneck was only one way. Turned out moving the NIC to another slot fixed it for me.
 
I would also suggest checking the MTU size on your adapter with ifconfig (assuming your adapter supports jumbo frames). If it's somewhere around 1500, try setting it to the card's maximum, or to the "least common denominator" among the cards on your network.

For example, I had the same problem with GigE speeds, so I increased the MTUs on both computers, which were linked to each other via crossover cable. The maximum MTU on "Computer 1" was ~9000 and 7200 on "Computer 2", so I set both computers to an MTU of 7200. According to iperf, my speeds went from ~40 Mbit/s to ~700 Mbit/s.

IIRC the command to set the MTU size on an adapter in Linux is:

Code:
ifconfig ethN mtu <size>

where N is the number of your ethernet adapter and <size> is your MTU in bytes
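If you're not sure what frame size actually survives the path, a non-fragmenting ping is a quick check (Linux ping syntax; the address is a placeholder):

Code:
# 8972 bytes of payload + 28 bytes of IP/ICMP header = a 9000-byte packet
ping -M do -s 8972 192.168.1.10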

Hope that helps
 
I think you are just repeating some BS you heard from someone else.

Nope.

Someone may have confused you by saying auto-negotiation is required; auto-negotiation covers speed (100Mbps or 1000Mbps), not MDI-X detection, and auto-negotiation is part of the spec, not auto MDI-X.


Yeah, thanks, I know the difference.
 
Not to burst your bubble, but you shouldn't speak in absolutes. I have run into many issues where I had to disable MDI-X auto-negotiation on a GigE copper link. This was usually due to devices of different brands not negotiating properly. Most commonly I have had this between my devices and an ISP device. 95% of the time it works properly, but not always.

Ahh yes, this I would agree with. So ok, I concede that once in a blue moon you would need a crossover.
 
Tests are the same in both directions. To the people suggesting it might be the HD: it's not. bonnie++ and hdparm are both reporting non-cached reads from the drives well in excess of 100 MB/s (it's a RAID5 array, /dev/md0).

I would set a different MTU and use jumbo frames; the problem, however, is that this network will eventually be part of a larger network containing several 10/100 devices, which pretty much eliminates the possibility of jumbo frames, from what I understand.

Testing with a third machine doesn't help matters. Using a Dell laptop with a 10/100 port plugged into the gigabit switch gives me the same transfer speeds, about 3 MB/s to both gigabit devices, using iperf. Testing the Dell laptop against each machine individually, direct-connected with a crossover cable, yields the same results.

Edit: I might as well make this clear so people don't waste time asking: yes, the network is disconnected from all 10/100 devices when testing.
 
I thought I'd pop in, because I'm having similar issues as the OP and don't want to repost a thread... if that's cool...

I just got my gigabit switch and Cat6 cables today, set everything up, works flawlessly, shows up as 1Gbps on the router itself and in Windows... transfer speeds are slow at ~30MB/s... before I was getting about 10MB/s, which makes sense for 10/100... but 30 seems really low for gigabit...

I tried a crossover (just a Cat6 to each computer, set static IPs and accessed files that way), same transfer speeds of ~30MB/s... :confused:

Any ideas? Both computers have dual gigabit; I'll try the other ports to see if it makes a difference, but right now it's slow... and meh...
 
I just got my gigabit switch and Cat6 cables today, set everything up, works flawlessly, shows up as 1Gbps on the router itself and in Windows... transfer speeds are slow at ~30MB/s... before I was getting about 10MB/s, which makes sense for 10/100... but 30 seems really low for gigabit...

30 MB/s is a very reasonable figure for actual file transfer performance over gigabit. It's around 3x as fast as good 100 Mb/s, which is a big improvement in real terms.

Synthetic network-only performance tests generally show faster capability though -- typically anywhere from around 400 Mb/s (50 MB/s) to nearly 1 Gb/s (~120 MB/s).

The difference between synthetic network performance and actual file transfer performance is a difficult "gray zone" where numerous potential issues come into play -- HD performance, file transfer protocol overhead, OS performance, tuning, PCI/bus performance, and even CPU load/performance in some cases. Windows/SMB in particular (at least before Vista/SMB 2.0) has some performance issues. Vista has its own bunch of bleeding-edge issues, but has at times performed very well for me.

Often synthetic network-only performance can be tuned higher -- starting with using different test tools or parameters, including things such as driver and network stack options. Sometimes there are pesky PCI/driver/other implementation issues that prevent high performance. Sometimes tuning the network layer itself doesn't help much with file transfer performance.
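On the Linux side, the usual first knobs are the kernel's socket buffer ceilings; a sketch (these values are purely illustrative, not recommendations):

Code:
# allow larger TCP windows than the defaults
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"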
 
Well that sucks :eek:

I checked again; it's averaging more like 250Mbps... I was just guessing before (transferring a 700MB file and dividing by the time Windows reported).

It's better, I just was hoping for more...

I have 2 Raptors in RAID-0, and it's transferring from a SATA 500GB 7200.10 drive... I'd think it would be able to handle more than 250Mbps...?

I tried transferring from 3 different HDDs (7200.7 120GB, 7200.10 500GB, and 7200.10 250GB) and all got 250Mbps... with different files of different sizes.

Just tried the 500GB IDE drive, same stuff... 250Mbps...
 
valve1138
Would you mind citing your source, then? I have seen no mention of auto MDI-X negotiation being a required part of the spec. What I have seen is a standard method for implementing it, but nowhere have I understood it to be "required".
 
I'm sure I'm going to look like a douche for pointing this out, but I can't resist...

According to 40.4.4 (Automatic MDI/MDI-X Configuration) in Section 3 of IEEE Standard 802.3-2005 (obtainable at http://standards.ieee.org/getieee802/802.3.html):

"Automatic MDI/MDI-X Configuration is intended to eliminate the need for crossover cables between similar devices. Implementation of an automatic MDI/MDI-X configuration is optional for 1000BASE-T devices. If an automatic configuration method is used, it shall comply with the following specifications. The assignment of pin-outs for a 1000BASE-T crossover function cable is shown in Table 40&#8211;12 in 40.8."

As of the 2005 spec (and unlikely to change in the near future) auto MDI-X is optional, not required.

Fryguy8, have you tried checking for collisions and/or CRC/alignment issues?
 
I'm sure I'm going to look like a douche for pointing this out, but I can't resist...

According to 40.4.4 (Automatic MDI/MDI-X Configuration) in Section 3 of IEEE Standard 802.3-2005 (obtainable at http://standards.ieee.org/getieee802/802.3.html):

"Automatic MDI/MDI-X Configuration is intended to eliminate the need for crossover cables between similar devices. Implementation of an automatic MDI/MDI-X configuration is optional for 1000BASE-T devices. If an automatic configuration method is used, it shall comply with the following specifications. The assignment of pin-outs for a 1000BASE-T crossover function cable is shown in Table 40–12 in 40.8."

As of the 2005 spec (and unlikely to change in the near future) auto MDI-X is optional, not required.

Thank you drgnCabe, I was having difficulty finding that myself.
 
Well that sucks

Well, I think ~3X improvement is pretty good.

If you have lots of RAM, and aren't afraid of the registry, etc., you could try setting LargeSystemCache=1, rebooting, and retesting.

http://technet2.microsoft.com/windo...a031-4461-9e72-59197a7507b61033.mspx?mfr=true

Clarification: with a relatively "small" test file such as yours, this suggestion could give misleading results for at least part of the test. That was not my intention -- increase the test file size to 4 GB or more, beyond whatever amount of RAM you have, so that large amounts of caching aren't what determine the results.
 
Well, I think ~3X improvement is pretty good.

If you have lots of RAM, and aren't afraid of the registry, etc., you could try setting LargeSystemCache=1, rebooting, and retesting.

http://technet2.microsoft.com/windo...a031-4461-9e72-59197a7507b61033.mspx?mfr=true

It's good... I was hoping for more

I have 4 gigs in this machine, and 2 gigs in the other machine (Which has all the HDD's and Files)

Only a 32-bit OS, so I only see 3.25 gigs, but I'll try that...

EDIT: Nope, speeds dropped to 170mbps, I changed it back and restarted and they're back to 250...

Oh and I'm copying a 4.4 gig file... same speeds... I tried 10-12 gig files too... no difference... I don't think I have single files much bigger :)
 
How can I do this?

ifconfig will give you basic stats (errors, dropped packets, overruns, framing, carrier, etc.)

Code:
eth0    Link encap:Ethernet  HWaddr 00:0B:CD:93:B7:BB
          inet addr:192.168.11.68  Bcast:192.168.11.255  Mask:255.255.255.0
          UP BROADCAST NOTRAILERS RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1016782 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1807075 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:67919945 (64.7 Mb)  TX bytes:2674922619 (2551.0 Mb)
          Interrupt:10 Base address:0x5000

The above is an example from one of my Slackware boxes. You can also use ethtool to gather information, change settings, and run self-tests ("ethtool -t <adapter> offline" runs the full test suite and will interrupt the link while it runs; "ethtool -t <adapter> online" runs a limited set of tests while the adapter stays connected). There is also mii-diag (man page: http://linux.die.net/man/8/mii-diag), which may be a separate install depending on your distro.

If you've rebooted, try transferring a large amount of data (100MB, though I'd do a 1GB file) through the interface first; ifconfig only captures interface stats since the last reboot.
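To catch errors as they happen, you can also watch the counters update during a transfer (eth0 is a placeholder):

Code:
# refresh the interface stats every second; any climbing errors/dropped/
# collisions figure during a transfer is a red flag
watch -n 1 ifconfig eth0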
 