Data transfer rate in wireless network...What is the limiting factor?

sram

I have an old laptop which I decided to sell, so I'm now in the process of saving all the data on it to my main storage machine. Of course I could have done this in many different ways, but I decided to network the laptop with my main machine over WiFi. I enabled sharing for the laptop's data drive so that I can map it on my machine and copy the data over. All is fine, except that I can't find a logical reason for the super low transfer speed. It fluctuates between 1.5 MB/s and 700 KB/s, and in a few instances it dropped to 5 KB/s :confused:. I can live with it, but why this speed?


The WiFi limit can't be less than 54 Mbps, which works out to about 6.75 MB/s.

The slowest drive, which would be the laptop's, is at worst a SATA II 7,200 RPM drive, so this shouldn't factor in...

The interface is either SATA I or SATA II, so this shouldn't factor in either.

Only one other machine is wirelessly hooked to my WiFi router, and it's just used for internet access.

Does the way the files/folders are organized play a role in this? For example, how different is it to copy one large 20 GB file vs. copying 10,000 files spread across 500 folders with a total size of 20 GB?

Edit: The laptop is placed 2 meters from the wireless router.

Thanks.
 
Quite a lot of factors, really:
  • Laptop's wireless card
  • The router itself
  • Network protocol (FTP is sometimes faster, Samba is sometimes slower, etc.)
  • Interference from microwaves, cordless phones, thick walls, other wireless devices
  • Type of files being moved, i.e. a large number of small files (like pictures) will severely impact transfer speeds
  • Possible anti-virus influence
 
Some important questions for you to answer:

1. Is it just PC-to-PC transfers that are slow, or is the internet slow too? Try speedtest.net and compare results with your main machine. Is the main machine on ethernet?

2. How is the performance for the other wireless computer that is on your router?
 
Every active device connected to a WiFi access point shares the same airtime, so each additional active device cuts into the bandwidth available to the rest. Network overhead cuts into that bandwidth as well, so even with only one device you get well under 54 Mbps with the best possible signal. Then accounting for microwaves, cordless phones, other WiFi access points within 3 channels of yours, and all sorts of other noise sources means even less available bandwidth.

Also, 10,000 files will certainly take a lot longer to copy than a single file of the same total size. Every file requires a sequence of I/O operations just to find where its data is located on the disk, and each of those operations can take several milliseconds if the system hasn't already cached it from previous accesses. So it could be 15 I/O operations to look up the data for one file (and more, the deeper into the folder hierarchy you go) versus 150,000 for 10,000 files, on top of the I/O needed to actually transfer the data in the files.
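
Just to put rough numbers on that, here is a back-of-the-envelope sketch in Python; the 15 lookups per file and ~5 ms per lookup are assumptions for illustration, not measurements:

    # Rough estimate of the metadata/seek overhead of copying many small files
    # versus one large file. All numbers are illustrative assumptions.
    lookups_per_file = 15     # assumed I/O operations to locate one file's data
    seek_time_s = 0.005       # assumed ~5 ms per uncached lookup on a 7200 RPM disk

    def lookup_overhead_s(num_files):
        """Seconds spent just finding files, before any data moves."""
        return num_files * lookups_per_file * seek_time_s

    print(lookup_overhead_s(1))        # one 20 GB file:     ~0.075 s of lookups
    print(lookup_overhead_s(10_000))   # 10,000 small files: ~750 s (12+ minutes)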
 
Quite a lot of factors, really:
  • Laptop's wireless card
  • The router itself
  • Network protocol (FTP is sometimes faster, Samba is sometimes slower, etc.)
  • Interference from microwaves, cordless phones, thick walls, other wireless devices
  • Type of files being moved, i.e. a large number of small files (like pictures) will severely impact transfer speeds
  • Possible anti-virus influence

Like I suspected. I have a huge number of files. I think this is it.
 
Some important questions for you to answer:

1. Is it just PC-to-PC transfers that are slow, or is the internet slow too? Try speedtest.net and compare results with your main machine. Is the main machine on ethernet?

2. How is the performance for the other wireless computer that is on your router?

The internet is fine, actually. Yes, the main machine is connected via ethernet, and performance for the other wireless computer isn't bad.
 
WiFi uses collision avoidance and a lot of error-correction. The result is that data throughput is roughly half of the advertised link speed.

The fastest actual data throughput you'll ever see theoretically out of a 54 Mbps wireless G connection is 27 Mbps. In practice it's even lower, and others have described those additional factors.
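
As a quick sanity check on that, a small sketch assuming the 50% overhead rule of thumb above:

    # Convert the advertised 802.11g link rate to a realistic payload rate,
    # assuming the rough 50% protocol-overhead rule of thumb.
    link_rate_mbps = 54
    usable_mbps = link_rate_mbps * 0.5   # ~27 Mbps after CSMA/CA, retries, headers
    usable_MBps = usable_mbps / 8        # 8 bits per byte -> ~3.4 MB/s
    print(f"{usable_mbps:.0f} Mbps ~= {usable_MBps:.1f} MB/s")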
 
WiFi uses collision avoidance and a lot of error-correction. The result is that data throughput is roughly half of the advertised link speed.

The fastest actual data throughput you'll ever see theoretically out of a 54 Mbps wireless G connection is 27 Mbps. In practice it's even lower, and others have described those additional factors.

This. Link speed is not the same as actual transfer rate. There are a lot of protocols involved that introduce overhead, some of them necessary to overcome the limitations of the physical layer.
 
This. Link speed is not the same as actual transfer rate. There are a lot of protocols involved that introduce overhead, some of them necessary to overcome the limitations of the physical layer.

Indeed. If only the enterprise IT managers who call me complaining that they can't get 100 Mbps of data throughput between a 100 Mbps fiber link on the west coast and a 100 Mbps fiber link on the east coast understood this.

The best I can do to explain is that I can send 100 Mbps worth of 1s and 0s coast to coast, but in order to ensure the traffic actually goes where it's supposed to, ends up in the right order, and isn't missing pieces, there's a lot of other data that needs to be sent across the line to ensure the above.

Then there are factors like the speed of light that play into it as well, as often the sending and receiving ends have to stop sending all those 1s and 0s for a moment to await confirmation from the other end that the data was received intact, or that some of it needs to be retransmitted (TCP does this). One of the most noticeable and common examples of this is when people try to use Samba (notably CIFS/SMBv1), a protocol designed for use on LANs, across WAN VPN links. There is a lot of back and forth between transmissions of blocks of data (limited I think to 64 KB or something like that, I'd have to look it up) to confirm the integrity of the received data, and that adds a lot of waiting time between the intervals in which further data is actually transmitted.

On a LAN, with a 2 ms response time between the two devices that are exchanging files, sending a half-dozen packets back and forth to confirm that a 64 KB block was properly transmitted and received means only a dozen milliseconds between data blocks, and good throughput rates result. On a WAN link where it takes 80 ms for data to travel coast-to-coast, you'd end up with nearly half a second of wait time between 64 KB blocks, which absolutely destroys throughput. It is also worth noting that SMB is a higher-level protocol that operates on top of TCP, so add in the time it takes for TCP to send its own ACKs for its own error detection.
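
To put that in throughput terms, here's a simplified model in Python, reusing the illustrative ~12 ms and ~480 ms wait times from the example above (not actual protocol figures):

    # Simplified model: throughput when the sender must wait between fixed-size
    # blocks for acknowledgments. Block size and wait times are illustrative.
    block_bytes = 64 * 1024

    def ceiling_MB_per_s(wait_between_blocks_s):
        return block_bytes / wait_between_blocks_s / 1e6

    print(ceiling_MB_per_s(0.012))   # LAN, ~12 ms between blocks  -> ~5.5 MB/s cap
    print(ceiling_MB_per_s(0.480))   # WAN, ~480 ms between blocks -> ~0.14 MB/s cap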

Anyhow, link speed versus throughput is commonly misunderstood because everyone sells the theoretical values (hardware providers, ISPs, etc.). This isn't just because we want to make ourselves look better; it's because we have no way of predicting what kind of protocols will be used to transport data over our medium.
 
WiFi uses collision avoidance and a lot of error-correction. The result is that data throughput is roughly half of the advertised link speed.

The fastest actual data throughput you'll ever see theoretically out of a 54 Mbps wireless G connection is 27 Mbps. In practice it's even lower, and others have described those additional factors.

This. Link speed is not the same as actual transfer rate. There are a lot of protocols involved that introduce overhead, some of them necessary to overcome the limitations of the physical layer.

Indeed. If only the enterprise IT managers who call me complaining that they can't get 100 Mbps of data throughput between a 100 Mbps fiber link on the west coast and a 100 Mbps fiber link on the east coast understood this.

The best I can do to explain is that I can send 100 Mbps worth of 1s and 0s coast to coast, but in order to ensure the traffic actually goes where it's supposed to, ends up in the right order, and isn't missing pieces, there's a lot of other data that needs to be sent across the line to ensure the above.

Then there are factors like the speed of light that play into it as well, as often the sending and receiving ends have to stop sending all those 1s and 0s for a moment to await confirmation from the other end that the data was received intact, or that some of it needs to be retransmitted (TCP does this). One of the most noticeable and common examples of this is when people try to use Samba (notably CIFS/SMBv1), a protocol designed for use on LANs, across WAN VPN links. There is a lot of back and forth between transmissions of blocks of data (limited I think to 64 KB or something like that, I'd have to look it up) to confirm the integrity of the received data, and that adds a lot of waiting time between the intervals in which further data is actually transmitted.

On a LAN, with a 2 ms response time between the two devices that are exchanging files, sending a half-dozen packets back and forth to confirm that a 64 KB block was properly transmitted and received means only a dozen milliseconds between data blocks, and good throughput rates result. On a WAN link where it takes 80 ms for data to travel coast-to-coast, you'd end up with nearly half a second of wait time between 64 KB blocks, which absolutely destroys throughput. It is also worth noting that SMB is a higher-level protocol that operates on top of TCP, so add in the time it takes for TCP to send its own ACKs for its own error detection.

Anyhow, link speed versus throughput is commonly misunderstood because everyone sells the theoretical values (hardware providers, ISPs, etc.). This isn't just because we want to make ourselves look better; it's because we have no way of predicting what kind of protocols will be used to transport data over our medium.


WOW, a very detailed answer!!! I like it. Thanks guys... especially you, Electrofreak, it's all clear now.
 
If you can, I would zip/rar as many of the files as possible so you're transmitting a few large files rather than a lot of small ones.
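
If you have Python handy, something like this would do it (a sketch; the folder path and archive name are placeholders, not your actual paths):

    # Bundle a folder full of small files into one archive, so the network copy
    # is one big sequential file instead of thousands of tiny ones.
    import shutil

    source_dir = r"D:\laptop_data"   # placeholder: the folder you want to back up
    archive = shutil.make_archive("laptop_backup", "zip", root_dir=source_dir)
    print("Created", archive)        # copy this single .zip over the share instead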
 
In practice, a perfect 2x2 MIMO 54G connection using WPA PSK gets approx. 23 Mbps of usable throughput, or about 2.8 MB/s unidirectional.

A perfect 2x2, 20 MHz, 2.4 GHz wireless N connection negotiated at 144 Mbps using WPA2 AES PSK can achieve 80 Mbps of usable throughput, or 10 MB/s unidirectional.
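
For reference, the Mbps-to-MB/s conversions (8 bits per byte) work out like this:

    # Quick check of the Mbps -> MB/s conversions above.
    for mbps in (23, 80):
        print(f"{mbps} Mbps ~= {mbps / 8:.2f} MB/s")   # ~2.88 MB/s and 10.00 MB/s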
 
Walk your ass over to the router and connect the laptop with a cable. That will save you hours.
 
Walk your ass over to the router and connect the laptop with a cable. That will save you hours.

Indeed. But my ass is too big to move, unfortunately :D. And as I already said, this can be done in so many other ways. I was mainly interested in knowing the reasons behind the low WiFi transfer rate.
 

Aren't buffers made big enough to take your link and route delays into account and keep the pipeline full? Well, my understanding comes more from local board buses, so I guess maybe the internet is too chaotic for that to work?
 
Aren't buffers made big enough to take your link and route delays into account and keep the pipeline full? Well, my understanding comes more from local board buses, so I guess maybe the internet is too chaotic for that to work?

Buffers can only do so much with a slow connection, but it's not really buffering that's the issue. It's the fact that only so many packets will be transmitted before the sender stops to wait for an acknowledgment that those packets actually got where they were going. In his explanation, you have Samba sending ACKs for its own use, then TCP wanting its own ACKs for the ones that Samba is sending back and forth... it gets nasty.
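
A rough way to see why a bigger buffer alone doesn't fix it (a sketch; the window sizes and 80 ms round trip are assumptions): the ceiling is roughly the amount of data allowed in flight divided by the round-trip time, and it only rises if the protocol actually lets that much data stay unacknowledged.

    # Throughput ceiling = data allowed in flight / round-trip time.
    # Window sizes and RTT below are illustrative assumptions.
    rtt_s = 0.080                           # assumed 80 ms coast-to-coast round trip

    def cap_MB_per_s(window_bytes):
        return window_bytes / rtt_s / 1e6

    for window_kb in (64, 256, 1024):
        print(f"{window_kb} KB in flight -> ~{cap_MB_per_s(window_kb * 1024):.2f} MB/s cap")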
 
As previously mentioned, CSMA/CA is a big factor.

You'd see similar behavior if you used a standard hub in a wired network, with collisions and so on.
 
Aren't buffers made big enough to take your link and route delays into account and keep the pipeline full? Well, my understanding comes more from local board buses, so I guess maybe the internet is too chaotic for that to work?

Buffers actually are the main contributors to delays, because a buffer is a way of delaying traffic until it can be sent. This is because the amount of traffic being sent varies dramatically moment to moment. This generally happens at aggregation points, but can even happen at an access port, where a single device (like a computer) can burst traffic at rates that exceed the capability of the port on the switch to process and forward the data.

In short, your traffic is buffered when it travels through your LAN, and through the WAN. The buffers are necessary, but delay the traffic. The internet is indeed chaotic. Your data is being pushed over high-capacity connections along with massive amounts of other data from other users, and this data can peak and ebb in the same way it can on a LAN, and buffers are constantly in use.

I guess what I'm getting at is that the client and the server need to communicate over long distances and have to await responses from one another between exchanging blocks of data. As that data is sent out across the LAN and across the internet, it's delayed just like road traffic when lanes merge and a traffic jam forms. Sometimes, if it's delayed long enough, it's dropped and has to be resent. This certainly doesn't help the server and the client communicate any more quickly, as they still need to wait for a packet to make its way across the cloud and a response to work its way back before they can exchange further information.
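
To put a number on the buffering delay itself (a sketch with made-up figures, not measurements from any real link):

    # Queueing delay added by a buffer: the backlog ahead of you divided by the
    # rate at which the outgoing port can drain it. Figures are illustrative.
    port_rate_Bps = 100e6 / 8      # 100 Mbps outgoing port, in bytes per second
    backlog_bytes = 256 * 1024     # assumed 256 KB already queued ahead of your packet

    delay_ms = backlog_bytes / port_rate_Bps * 1000
    print(f"~{delay_ms:.0f} ms of added delay just from the queue")   # ~21 ms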
 
Your 2.4 GHz band may be rendered worthless in your area by other WAPs and interference. You could try getting a dual-band router and a 5 GHz dongle for the laptop and see if that helps. In my area 2.4 GHz stutters, stalls, and sometimes comes to a complete halt regardless of channel, while with 5 GHz I get about 50 Mb/s of actual transfer rate.

Or try a powerline adapter.
 