My gigabit is only going 100Mbps


Apr 15, 2006

I have been having a bit of a problem with my gigabit connection; basically I don't get that speed when I transfer files over the network.

I have 2 pc's next to each other connected with a crossover cable.

Each PC has 2 network cards: one going to my router with a 192.168.x IP address, and another connected to the other PC with the crossover cable, and I gave them both 169.254.x addresses.

My server is a P3 600MHz with an IDE hard drive and a PCI gigabit network card.

My workstation is a P4 with 2 built-in gigabit network cards. The mobo is an Asus P5WD2 Premium, and the "Marvell Yukon 88E8001/8003/8010" NIC is the one connected to the server.

When I check the LAN status on both PCs, they say I am connected at 1Gbps.

Alright, so I mapped a drive and made sure it's connected via the 169 address (\\\drive). When I transfer a file by dragging it over, Task Manager shows I don't go above 10% of my network utilization.

I connect via FTP and make sure it's connecting to the 169.254.x.x address: same thing. I only transfer at 100Mbps.

I can't think of what the problem may be. Any insight please?
It could be that the P3 box simply can't move data any faster than that, especially with an IDE drive. It could also be that the path through your router (which I'm assuming is 10/100) has been given a higher priority route. Did you try unplugging the connection to your router?
The PCI bus (as well as IDE) has limited bandwidth. It's not unlikely that you're saturating the PCI bus with the transfers. My suggestion, though, is to try multiple concurrent transfers (multiple FTP downloads/uploads) to see if the cap might be on a per-connection basis for some reason.
Actually, I have not tried it with the network card unplugged or disabled. It's worth a shot.

I was also wondering about the write speed of the hard drive on my P3, but from my understanding the IDE drive can write at speeds of 40MB/s, therefore my transfer should go at least 320Mbps.
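The unit conversion behind that figure is worth making explicit. A quick sketch (the 40MB/s number is the poster's quoted drive spec, not a measurement):

```python
# Convert disk throughput in megabytes/s to network megabits/s.
# 1 byte = 8 bits, so MB/s * 8 = Mb/s.
def mbytes_to_mbits(mbytes_per_s):
    return mbytes_per_s * 8

print(mbytes_to_mbits(40))    # 40 MB/s drive -> 320 Mb/s
print(mbytes_to_mbits(12.5))  # 12.5 MB/s -> 100 Mb/s, i.e. Fast Ethernet wire speed
```

Note that the ~12MB/s FTP cap mentioned later in the thread works out to roughly 100Mb/s, which is exactly why it looks like a Fast Ethernet-speed link.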

And if it is a priority issue with my router, I would assume this is something I can fix in the BIOS or somewhere? How would I go about changing that? I assume it would have something to do with the processor.


Ohhh, PS, another big thing I forgot to mention: I handmade my cable and made sure all 8 wires were crimped. It's RadioShack wire, and I've read different sites: some said you need a Cat6 cable, others said no, a Cat5 wire will do just fine.

My status does state I am connected at 1Gbps, so I don't know what's true... hopefully someone can clear the air on that one.
For a real test, try using a bandwidth-testing program like iperf. It will take your hard drive out of the equation and test just the network bandwidth.
Thanks Flint.

I downloaded the software, but I have no idea what switches to use, or whether I need to run this on both PCs (one as client and the other as server?).

Can you give me a basic set of switches to test the bandwidth on the network?

iperf somethingsomething

Thanks bud
First, on the "server" (the receiving PC), run:
iperf.exe -s -w 65500

(The -w 65500 part isn't necessary, but it increases the TCP window size to give more accurate results.)

On the "client" (the sending PC), run:

iperf -c <server's IP address here>
Thanks Flint,

On the default 8K window size I got speeds of 220Mbit.

I ran the calculations in the documentation, and my window size should be 122K.

I did a -w 122K and it peaked at 336Mbit. If I raise the window size any higher, speed decreases.
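For reference, the window size those docs have you calculate is the bandwidth-delay product: the amount of data that can be "in flight" on the link. A sketch of the arithmetic (the 1ms round-trip time here is an assumed value for a short direct link, not a number from the thread):

```python
# TCP window (in bytes) needed to keep a link full = bandwidth * round-trip time.
def bdp_bytes(bandwidth_bps, rtt_seconds):
    return bandwidth_bps * rtt_seconds / 8  # divide by 8: bits -> bytes

window = bdp_bytes(1_000_000_000, 0.001)  # gigabit link, assumed 1ms RTT
print(window)         # 125000 bytes
print(window / 1024)  # ~122 KB, matching the figure calculated above
```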

I am going to try a different cable. Anyone know a good website to get a short crossover cable for a gigabit network? (1-2 feet max)

I ran the following registry changes on my Windows XP machine and the Windows 2003 server: one sets the TCP window size to 128KB, the other enables window sizes higher than 64KB.
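The thread doesn't quote the actual registry entries, but the usual XP/2003 tweak matching that description looks roughly like the following .reg fragment. Treat the exact values as illustrative: 0x20000 is 128KB, and Tcp1323Opts=3 turns on the RFC 1323 window scaling needed to exceed a 64KB window.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; TCP receive window, in bytes (0x20000 = 131072 = 128KB)
"TcpWindowSize"=dword:00020000
"GlobalMaxTcpWindowSize"=dword:00020000
; Enable RFC 1323 window scaling (1) + timestamps (2); required for windows > 64KB
"Tcp1323Opts"=dword:00000003
```

A reboot is needed for these to take effect.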

I am still capped at 12MB/s through FTP. Probably the hard drive or the processor has something to do with it... I have no idea!
You shouldn't use a crossover cable with GigE.

Try a straight-through cable; the NIC will auto-detect and cross over automagically.
flixxx said:
OHhhh PS, another big thing I forgot to mention. I handmade my cable and made sure all 8 wires were crimped.

Ding, ding, ding, ding, ding.

Chances of Joe Average crimping a cable and getting gigabit speeds are remote at best. GigE is much more sensitive to cabling issues, and throughput suffers as a direct result.

X-over is fine, by the way, but he's right that the GigE NICs should auto-detect and adjust to the type of cable you have. I'd get a professionally made regular straight-through, or X-over for that matter, and benchmark. You won't get 125MB/s with an IDE interface (nor 320Mb/s either, for that matter), but you will likely see a good bump over a homemade gigabit copper cable.
flixxx said:
I am going to try a different cable. Anyone know a good website to get a short crossover cable for a gigabit network? (1-2 feet max)

We have found the problem. The minimum length a cable can be is 3 feet. This doesn't cause any problems with 10/100, but it can cause a lot of trouble with gigabit. The issue can be aggravated by a direct machine-to-machine path; switches seem to handle it better. How long is the cable you are currently using?

I use both my homemade and professionally made cables and have found no difference between the two. I have also found no difference between Cat5, 5e, and 6. Every single one of them tends to max out close to my hard drive's transfer rate. (This advice only applies to small home networks.)
Ha, who would have known a short cable might do it? My cable is about 6 inches long because the PCs are right next to each other.

I appreciate everyone's input. I'm leaving work now; I'll go make a "straight" 4-foot cable and see what results iperf gives me.

In the meantime, I emailed a few people from eBay for price quotes on some Cat6 or 7 cables... honestly they all look the same to me.

I don't even mind transferring at the speeds I am getting now, but dammit, I paid for gigabit and I want to get gigabit.

P.S.: What do people think of those PCI SATA cards? Do they get good speeds? I'd like a cheap upgrade to get my server onto SATA.
flixxx said:
In the meantime, I emailed a few people from eBay for price quotes on some Cat6 or 7 cables... honestly they all look the same to me.

Last time I talked to my corp's cable vendor... Cat7 wasn't even attempted yet. They have been working on Cat6a (Cat6 augmented), but it sounded like it won't be available until sometime around September to November... and even then, those cables (at least the sample they had on hand) were the size of my pinky finger. Cat6 will do 10Gbps at 55 feet, or so we were told... depending on the quality of the line.
I'm surprised that nobody has suggested you examine the configuration of your network card drivers. Most cards ship with default settings that aren't at all efficient; in particular, they'll be configured to use a really small frame size.
flixxx said:
In the meantime, I emailed a few people from eBay for price quotes on some Cat6 or 7 cables... honestly they all look the same to me.

I don't know if you have a Fry's nearby, but they sell cheap short cables and you'd have it right now. For short home runs, without much risk of outside signal interference, you don't really need Cat6. I won't go into the electrical engineering aspect of it, but basically, if you aren't running a gigantic electrical coil next to it, you are fine with regular Cat5.
I'd say the speed of the P3 is the biggest limiting factor. I've got a dual P3 Xeon 600MHz server that barely pushes anything faster than 100Mbps with a gigabit card. When I move the data over to a dual 3.0GHz P4 Xeon, it flies.

I've heard that with anything under 1GHz, you shouldn't expect much of a speed increase. There are just so many interrupts generated that they overwhelm the system. If you can enable jumbo packets, it *might* help the speed, but I really wouldn't expect much from such a low-clocked machine.
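The interrupt-load point is easy to put in numbers. A rough sketch, ignoring header overhead and assuming one interrupt per frame (interrupt moderation reduces this in practice):

```python
# Frames per second needed to saturate a link at a given frame size.
def frames_per_second(link_bps, frame_bytes):
    return link_bps / (frame_bytes * 8)

# A gigabit link with standard 1500-byte frames vs. 9000-byte jumbo frames:
print(frames_per_second(1_000_000_000, 1500))  # ~83,333 frames/s
print(frames_per_second(1_000_000_000, 9000))  # ~13,889 frames/s
```

Jumbo frames cut the per-packet work by roughly a factor of six, which is exactly what a slow CPU needs.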
I have a dual proc P3 running at 1GHz that gets more than 900 megabits/second with netspd, no problem.
Back to basics:

Check the CPU and RAM usage on both systems during file transfers. If your CPU is maxing out, that can limit your speed.

Check to see if your server's hard drive needs to be defragged. Then do a read speed test on it and see what actual speeds you get.

What NICs do you have? Not all NICs are created equal, especially Gb. Do a little research on your NIC, see if there are tweaks or new drivers that will help performance.

Use a 6- to 10-foot straight-through patch cable, preferably prefab.

Bear in mind that if you're using 32-bit PCI, you'll never achieve true Gb speeds.
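That last point can be quantified. A sketch of the theoretical ceiling (real-world PCI throughput is lower still, since the bus is shared with the IDE controller and every other PCI device):

```python
# Theoretical bandwidth of classic 32-bit/33MHz PCI vs. gigabit Ethernet.
pci_bytes_per_s = 33_000_000 * 4       # 33MHz clock * 4 bytes per transfer
pci_mbits = pci_bytes_per_s * 8 / 1e6  # -> megabits/s

print(pci_mbits)  # 1056 Mb/s: barely above gigabit wire speed, and it's shared
print(pci_mbits > 1000)
```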
Thanks for all the responses.

I'll run all the tests on the server when I get home tonight.

As far as that goes, the P3 is running a PCI 32-bit D-Link gigabit card (I'll get the model when I go home), and I have the option to enable "jumbo frames". (I've read about some of the downsides of jumbo frames and I'm not worried; I have another network card in it streaming all my videos around the home.)

Anyway, my P4 (Asus P5WD2 Premium, reviewed by HardOCP :p ) has the "Marvell Yukon".
Here are the settings:

802.1p Support: Off
Flow Control: On
Hardware Checksumming: On
Interrupt Moderation: On
Jumbo Frames: Disabled
Max IRQ per Sec: 5000
Network Address: Not Present
Number of Receive Buffers: 50
Number of Transmit Buffers: 50
Speed & Duplex: Auto-Sense

Now that I look at it, I get the feeling the buffers might have something to do with it, though I thought the "TCPWindowSize" registry setting was in control of that.

*UPDATE*: I am also going to give you the settings of the second network card on my P4. This one is connected to my router, but I noticed it has more options and seems more detailed. (Basically, I don't have a problem swapping the cards, connecting this one to the server and the other to my router.)

Intel PRO/1000 PM

Adaptive Inter-Frame Spacing: Enabled
Enable PME: OS Controlled
Flow Control: Generate & Respond
Interrupt Moderation Rate: Adaptive
Jumbo Frames: Disabled
Link Speed & Duplex: Auto-Detect
Locally Administered Address: Not Present
Log Link State Event: Enabled
Offload Receive IP Checksum: On
Offload Receive TCP Checksum: On
Offload Transmit IP Checksum: On
Offload Transmit TCP Checksum: On
QoS Packet Tagging: Disabled
Receive Descriptors: 256
Transmit Descriptors: 256

Anyway, since i can't change anything on the server right now, i'll wait for your responses and suggestions.
The good news is that you're using some Intel hardware; they're the most stable and have the best drivers. The Marvell drivers aren't so bad, but there's some real junk out there -- unfortunately, D-Link is one of 'em. (In my opinion, anyway. Linksys also has some problems.)

If you can, you should use the Intel card on the "internal" network and hook the Marvell card to your "external" network.

For your internal network, your biggest problem is not using jumbo frames. After that, jack the Receive and Transmit Descriptors settings on both machines as high as you can.
Rock on, thanks bud.

I like the Intel drivers; they don't just have basic "Enable/Disable" features. They've got options like:

Jumbo Frames: Disabled, 9014 Bytes, 4088 bytes, 16128 bytes.

Unfortunately, I don't believe the D-Link drivers have those kinds of options, so I'll have to settle for whatever D-Link gives me, or I can chuck the D-Link NIC and buy an Intel PCI card (if they make any).

I'll go home tonight and make some changes.
About three years ago, the company I worked for reviewed every server NIC we could get our hands on. Intel was hands-down the fastest on the market, about 5% above anyone else, and the CPU usage when the NIC was maxed out was actually well below average. They've got some insanely efficient drivers.

FYI, we also found that LSI made the fastest SCSI RAID controllers and Fujitsu made the fastest SCSI drives. Oh, and Windows Server 2003 was 15-20% faster than Windows XP on hard drive reads. We did some cool stuff there :D