Minimum system requirements for Gigabit?

imzjustplayin

[H]ard|Gawd
Joined
Jan 24, 2006
Messages
1,171
I directly attached an Ethernet cable between my laptop, which has integrated gigabit Ethernet (by Intel, thankfully), and a PIII 1 GHz desktop in which I installed an Intel Pro 1000MT Ethernet card. When I transfer files over the network between the two systems, the transfer rates are much slower than I believe they should be, and I'm not quite sure why. I'm pretty sure the cable is Cat5e, and the CPU utilization isn't peaking at 100%, so...
 
So the Intel Pro 1000 MT card is a PCI card, and the most you will probably ever get is about 300 Mb/s because of the limitations of the PCI bus.
 
At the lower end of gigabit file transfers, you're likely to be limited by the hard drives, not the networking. Laptop hard drives are generally a fair bit slower than modern desktop drives.
 
So the Intel Pro 1000 MT card is a PCI card, and the most you will probably ever get is about 300 Mb/s because of the limitations of the PCI bus.

That makes no sense whatsoever...

The PCI bus has 133 MB/s of bandwidth. Gigabit Ethernet is only 125 MB/s.

And if you're going to argue that you can't utilize all of that bandwidth, you're wrong, because I've seen RAID controller cards use up to 100 MB/s, and those were with some old-ass drives.
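For reference, the raw numbers as a rough Python sketch (theoretical figures only; the PCI bus is shared, and arbitration overhead eats into it in practice):

# Theoretical bandwidth of classic 32-bit/33 MHz PCI vs. gigabit Ethernet wire speed.
pci_MBps = 33.33e6 * 4 / 1e6     # 33.33 MHz x 4 bytes per cycle ~= 133 MB/s (shared bus)
gbe_MBps = 1e9 / 8 / 1e6         # 1000 Mb/s = 125 MB/s

print(f"PCI bus (theoretical): {pci_MBps:.0f} MB/s")
print(f"GbE wire speed:        {gbe_MBps:.0f} MB/s")

So on paper the bus does have more bandwidth than the NIC needs.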

At the lower end of gigabit file transfers, you're likely to be limited by the hard drives, not the networking. Laptop hard drives are generally a fair bit slower than modern desktop drives.

Good point, but IIRC I wasn't getting the speeds my laptop hard drive can reach... something like only 12 MB/s, a lot less than what it's able to do.

Also, when in 100 Mb network mode and transferring files to and from the server, the most I can utilize of the network is only 60%, and I'm wondering if the hub could be the issue, despite there being no other traffic on it.
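For what it's worth, 60% utilization of a 100 Mb/s link works out to a pretty modest file-transfer rate:

# 60% of a 100 Mb/s link, expressed as a file-transfer rate in MB/s.
link_Mbps = 100
utilization = 0.60
print(f"{link_Mbps * utilization / 8:.1f} MB/s")   # 7.5 MB/s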
 
I would be curious to know the answer here as well.

I have done transfers via 100 Mb and I get around 11 MB/s, and with my gigabit setup I get only around 14-16 MB/s.

One end is my desktop with onboard gigabit and a Raptor X HDD. The other end is a server with a gigabit PCI card and onboard SCSI RAID.

Seems like gigabit just can't be used to anywhere near its full speed, but still, it seems like it should be faster.
 
I would be curious to know the answer here as well.

I have done transfers via 100 Mb and I get around 11 MB/s, and with my gigabit setup I get only around 14-16 MB/s.

One end is my desktop with onboard gigabit and a Raptor X HDD. The other end is a server with a gigabit PCI card and onboard SCSI RAID.

Seems like gigabit just can't be used to anywhere near its full speed, but still, it seems like it should be faster.

Well, there is one thing: you probably should be using jumbo frames, as there is too much overhead with the default 1500-byte frame size used in 10/100 networks. Use a frame size of 9600 bytes and you'll likely see a performance improvement...
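For a sense of how much overhead we're actually talking about, here's a rough Python sketch using nominal header sizes (20-byte IP + 20-byte TCP inside the frame, 18 bytes of Ethernet framing, and 20 bytes of preamble plus inter-frame gap on the wire):

# Rough TCP-over-Ethernet payload efficiency for a given MTU.
def efficiency(mtu):
    payload = mtu - 40          # subtract IP + TCP headers
    on_wire = mtu + 18 + 20     # add Ethernet framing, preamble, inter-frame gap
    return payload / on_wire

for mtu in (1500, 9000, 9600):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} payload efficiency")
# MTU 1500: ~94.9%; MTU 9000: ~99.1%; MTU 9600: ~99.2%

So raw framing overhead only accounts for a few percent; the bigger win from jumbo frames is fewer frames, and therefore fewer interrupts and less per-packet CPU work.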
 
I have done transfers via 100 Mb and I get around 11 MB/s, and with my gigabit setup I get only around 14-16 MB/s.

One end is my desktop with onboard gigabit and a Raptor X HDD. The other end is a server with a gigabit PCI card and onboard SCSI RAID.

What size files are you using for testing? Small files will always transfer more slowly due to overhead. I'd say a minimum of around 100 MB to get a meaningful result, and then at least as large as your RAM in order to (somewhat) factor out the file system cache and measure sustained disk + network performance.

Really large files are more appropriate when you're hitting high speeds (because then you don't want to be misled by cache speed), but they're also painful when you're dealing with slow speeds, so ramping up is a good idea.
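One way to rule out the cache is to test with incompressible data bigger than RAM. A hypothetical helper (the names here are made up, not a tool from this thread):

import os

# Write an incompressible test file larger than RAM so the file system
# cache can't hide the real disk + network speed on repeat runs.
def make_test_file(path, size_bytes, chunk=1 << 20):
    with open(path, "wb") as f:
        remaining = size_bytes
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n

make_test_file("test.bin", 2 * 1024**3)   # e.g. 2 GiB on a box with 1 GiB of RAM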

More details on hardware and software might help us guess what the max performance of this setup might be.

Generally, performance improvement is done via analysis of bottlenecks -- measuring subsystem performance in isolation and as an integrated system.

There are no "minimum hardware requirements for gigabit" beyond the obvious, because actual 1000 Mb/s file transfers cannot be achieved in practice. In practice, "gigabit" is a range of possible values, where even 15 MB/s, being roughly 50% higher than the typical maximum for 100 Mb/s networking, is a good performance improvement, and one that can be achieved inexpensively these days.
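To put those figures side by side (a quick sketch; the 11 MB/s and 15 MB/s values are just the typical numbers cited above):

# Express a measured transfer rate as a percentage of nominal line rate.
def pct_of_line_rate(measured_MBps, line_Mbps):
    return measured_MBps * 8 / line_Mbps * 100

print(f"{pct_of_line_rate(11, 100):.0f}% of 100 Mb/s")   # ~88%: near-saturated
print(f"{pct_of_line_rate(15, 1000):.0f}% of gigabit")   # ~12%: lots of headroom left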
 
That makes no sense whatsoever...

The PCI bus has 133 MB/s of bandwidth. Gigabit Ethernet is only 125 MB/s.

And if you're going to argue that you can't utilize all of that bandwidth, you're wrong, because I've seen RAID controller cards use up to 100 MB/s, and those were with some old-ass drives.


Yes, the entire bus, which is typically responsible for more than just the NIC you put in. Typical PCI gigabit NICs get about 300 Mb/s. If you get anything above 500 Mb/s on a PCI card, you're doing better than most people (even if you're pushing it directly from a RAM disk).
 
Well, there is one thing: you probably should be using jumbo frames, as there is too much overhead with the default 1500-byte frame size used in 10/100 networks. Use a frame size of 9600 bytes and you'll likely see a performance improvement...

Not quite. Jumbo frames may increase performance to a degree, but not to a large degree. If his CPU is reasonable, it can feed a gigabit connection. I'm running an iSCSI SAN with default frame sizes, because the core switch doesn't support larger frames, and I'm able to push ~100 MB/s sustained read/write to it.
 
Back on topic: yes, the hard drive would be a big limitation, network card or not.

I've done 110 or so MB/s over gigabit, pulling from (I think) 8 sources on a LAN to a RAID array. But I average a mere 20 MB/s to a single drive (usually more, though).
 
I can BARELY get 100 MB/sec with my RAID 5 array... in a synthetic benchmark...
 
All these apparent contradictions show that this area is not generally well understood and that there are several different issues that can come into play.

Are PCI GbE NICs actually limited to around 500 Mb/s, let alone 300 Mb/s? No. Such figures are valid for file transfers using single drives, but that's largely because single drives are themselves limited to around 500 Mb/s (62.5 MB/s). Actually, that'd be a very good figure for a single-drive-to-single-drive transfer, making the 300 Mb/s more valid for the usual case.

E.g., an actual file transfer using a PCI NIC, without jumbo frames: rate ~ 68 MB/s, i.e. ~ 544 Mb/s. Well, that's not much more than 500 Mb/s, but it is more, and with more effort I might get it a bit higher. And if I can do real file transfers at this rate, I can do synthetic network-only transfers that are faster.

ftp> get 5.gb
200 Port command successful
150 Opening data channel for file transfer.
226 Transfer OK
ftp: 5000000000 bytes received in 73.56Seconds 67969.88Kbytes/sec.

Oh wait, what about jumbo frames? Are jumbo frames necessary and beneficial in all cases? No. However, they certainly can be significantly beneficial in some cases, especially when dealing with PCI NICs (though not always).

ftp> get 5.gb
200 Port command successful
150 Opening data channel for file transfer.
226 Transfer OK
ftp: 5000000000 bytes received in 58.08Seconds 86091.12Kbytes/sec.

Yep, they help here, and at ~689 Mb/s they blow away the supposed 500 Mb/s limit. Note that these were obviously not done with single-drive-to-single-drive transfers. Moreover, note that I was able to hit 68 MB/s without jumbo frames, so clearly jumbo frames don't define some sort of magic bottleneck that explains performance at the level of 15-20 MB/s.
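You can sanity-check both runs from the quoted logs (the Windows FTP client reports KB with K = 1000):

# Recompute the rates for the two 5 GB FTP transfers quoted above.
size = 5_000_000_000   # bytes

for label, seconds in (("standard frames", 73.56), ("jumbo frames", 58.08)):
    MBps = size / seconds / 1e6
    print(f"{label}: {MBps:.1f} MB/s = {MBps * 8:.0f} Mb/s")
# standard frames: ~68.0 MB/s = ~544 Mb/s
# jumbo frames:    ~86.1 MB/s = ~689 Mb/s (about 27% faster)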
 
My motherboard has onboard video and sound, yet the onboard video is a part of the PCI bus. Some motherboards have the onboard IDE on the PCI bus. You might not have just that gigabit card using the bus...
 
My motherboard has onboard video and sound, yet the onboard video is a part of the PCI bus. Some motherboards have the onboard IDE on the PCI bus. You might not have just that gigabit card using the bus...
True, but if you're using the motherboard's own IDE headers, those are supposed to (IMO) be on the north bridge; they shouldn't be utilizing the PCI bus for their operations, they should have their own direct link... When did they start doing that?
 
Those speeds are measured in megabits, right?... So, converted, 1 megabit = 0.125 megabytes.

So max speeds are only 125 megabytes a second?

Correct me if I'm wrong.
 
B = byte = 8 bits = 8 b ( = "octet")

MB = megabyte = 10^6 bytes = 8 * 10^6 bits = 8 megabits = 8 Mb

Gb = gigabit = 10^9 bits = 1000 * 10^6 bits = 1000 megabits = 125 megabytes = 125 MB.

http://en.wikipedia.org/wiki/Megabyte
http://en.wikipedia.org/wiki/Mebibyte

But there's plenty of confusion between MB and MiB. iperf, for example, gives its measurements in Mb (decimal) or in MiB (which it labels MBytes).

http://dast.nlanr.net/Projects/Iperf/iperfdocs_1.7.0.html

-f, --format [bkmaBKMA]

A letter specifying the format to print bandwidth numbers in. Supported formats are

'b' = bits/sec 'B' = Bytes/sec
'k' = Kbits/sec 'K' = KBytes/sec
'm' = Mbits/sec 'M' = MBytes/sec
'g' = Gbits/sec 'G' = GBytes/sec
'a' = adaptive bits/sec 'A' = adaptive Bytes/sec


The adaptive formats choose between kilo- and mega- as appropriate. Fields other than bandwidth always print bytes, but otherwise follow the requested format. Default is 'a'.
NOTE: here Kilo = 1024, Mega = 1024^2 and Giga = 1024^3 when dealing with bytes. Commonly in networking, Kilo = 1000, Mega = 1000^2, and Giga = 1000^3 so we use this when dealing with bits. If this really bothers you, use -f b and do the math.

E.g iperf output:

[ ID] Interval Transfer Bandwidth
[540] 0.0- 3.0 sec 354 MBytes 990 Mbits/sec

The "Transfer" column uses M = 2^20, and the "Bandwidth" column here uses M = 10^6.

/grumble

Windows generally uses MiB and calls it MB (i.e. M = 2^20)

/grumble

The Windows FTP tool however uses KB properly (i.e. K = 10^3).
 