How am I getting 200-220MB/s over standard GigE between 2 computers?

westrock2000

[H]F Junkie
Joined
Jun 3, 2005
Messages
9,250
I have 2 computers right next to each other and each one has a dedicated GigE NIC with a crossover cable between the two. I give them dedicated IP addresses and use that for Windows mapping network drives. For some reason I get 200MB/s between these 2. I have used Jumbo Frames and stuff in the past, but normally that's only good for like 110-120MB/s.

[attached screenshot of the transfer]
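As a quick sanity check on that number, here is the back-of-the-envelope math for what a single gigabit link can carry. The overhead figures are standard Ethernet framing constants, not measurements from these machines:

```python
# Sanity check: what can a single gigabit link actually deliver?
# Overheads below are standard Ethernet framing figures for a 1500-byte MTU.

LINE_RATE_BPS = 1_000_000_000  # 1 Gb/s signalling rate

# Per-frame overhead: 14 B Ethernet header + 4 B FCS
# + 8 B preamble + 12 B interframe gap = 38 B
payload = 1500
overhead = 14 + 4 + 8 + 12
efficiency = payload / (payload + overhead)

max_mb_per_s = LINE_RATE_BPS * efficiency / 8 / 1_000_000
print(f"Theoretical max over one GigE link: ~{max_mb_per_s:.0f} MB/s")
# TCP/IP headers eat a bit more, so real file transfers top out
# around 110-118 MB/s. A sustained 200 MB/s cannot fit on one
# gigabit link; at least two links must be carrying traffic.
```

So the ~200MB/s reading by itself is strong evidence that a second path is in play.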
 

pendragon1

Fully [H]
Joined
Oct 7, 2000
Messages
25,288
I have 2 computers right next to each other and each one has a dedicated GigE NIC with a crossover cable between the two. I give them dedicated IP addresses and use that for Windows mapping network drives. For some reason I get 200MB/s between these 2. I have used Jumbo Frames and stuff in the past, but normally that's only good for like 110-120MB/s.
magic, be happy. ;) what NICs are you using (models)?
 

westrock2000

[H]F Junkie
Joined
Jun 3, 2005
Messages
9,250
What do your ethernet ports show?
Well this is interesting. What's happening is that both NICs are being utilized: the dedicated link between the two computers, and also the normal network NIC that the computers communicate publicly on. So it's aggregating the bandwidth of both NICs. I didn't know this was done by default; I thought it was only possible with special setups.

[attached screenshot showing both NICs active]
 

pendragon1

Fully [H]
Joined
Oct 7, 2000
Messages
25,288
did you enable NIC teaming, or does either card have OEM software installed that might enable it?
 

bman212121

[H]ard|Gawd
Joined
Aug 18, 2011
Messages
1,780
For any program to saturate multiple links, you need at least as many streams in flight as you have links. Normally you'd bond two ports together via LACP to act as one, but even then you still need multiple streams to take advantage of it.

What you've stumbled upon is that SMB 3.0 was designed to recognize multiple connections and use them in aggregation. This has actually been a thing since Windows 8 / Server 2012. I can't find the blog post anymore, but I remember reading about it and testing it out back in 2012, and it worked flawlessly out of the box. It is link agnostic, and I think we even proved you could augment bandwidth using multiple wireless links. It's pretty neat stuff that a lot of people probably don't realize is baked right into every Windows OS.


EDIT: Here you go!

https://docs.microsoft.com/en-us/ar...-a-feature-of-windows-server-2012-and-smb-3-0


Courtesy of this [H] post from 2014.

https://hardforum.com/threads/windows-server-2012-r2-and-smb-multichannel.1824605/
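The striping idea behind SMB Multichannel can be sketched like this. This is a toy illustration of the concept, not the real SMB3 wire protocol: the client discovers multiple usable interface pairs, opens a connection per pair, and spreads I/O requests across them so every link stays busy.

```python
# Toy model of the idea behind SMB Multichannel (illustrative only,
# not the actual SMB3 protocol): spread a file's reads round-robin
# across several connections so all links carry traffic at once.

def stripe(requests, n_connections):
    """Assign requests round-robin across n_connections lanes."""
    lanes = [[] for _ in range(n_connections)]
    for i, req in enumerate(requests):
        lanes[i % n_connections].append(req)
    return lanes

# Eight sequential 1 MB reads of a file, spread over 2 NICs:
reads = [f"read(offset={i}MB, len=1MB)" for i in range(8)]
for lane_no, lane in enumerate(stripe(reads, 2)):
    print(f"connection {lane_no}: {len(lane)} requests")
```

Because each connection carries its own share of the requests, a single file copy becomes the "multiple streams" that link aggregation alone can't give you.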
 

westrock2000

[H]F Junkie
Joined
Jun 3, 2005
Messages
9,250
did you enable NIC teaming, or does either card have OEM software installed that might enable it?
Nope, nothing special done. Using whatever drivers Windows installed. After realizing what was happening, I looked up link aggregation and saw that teaming stuff. But that looked to be something else (and I'm not sure it's even supported anymore on Windows Pro).
 

bman212121

[H]ard|Gawd
Joined
Aug 18, 2011
Messages
1,780
Nope, nothing special done. Using whatever drivers Windows installed. After realizing what was happening, I looked up link aggregation and saw that teaming stuff. But that looked to be something else (and I'm not sure it's even supported anymore on Windows Pro).

Correct, this has nothing to do with teaming or Link Aggregation. You would use those features to complement this in order to provide fault tolerance on the link. SMB Multichannel is purely a software implementation, so it works on any combination of hardware that provides multiple connections.
 

SamirD

2[H]4U
Joined
Mar 22, 2015
Messages
4,059
Very cool to see this in action and how little it took to make it happen. Makes me wonder how much faster a 2.5GbE card in each system would make this setup (or 10GbE). If the systems are physically close enough, 10GbE cards with a DAC can be had for as cheap as $60 shipped.
 

bman212121

[H]ard|Gawd
Joined
Aug 18, 2011
Messages
1,780
Even now, storage is still the limiting factor. If you had 2 x 2.5GbE cards, you'd need to be able to handle roughly 625MB/s of traffic. That's more than SATA3 offers, so you'd need NVMe SSDs to saturate it. If it were spinners, that's about the amount of bandwidth I can squeeze out of a full DAS shelf with 12 3.5" HDDs. From what I've seen with 10GbE, you'd probably need to start making tweaks, as it's not trivial to saturate even one link, let alone two.
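The comparison works out like this. The storage throughput figures below are typical real-world ballpark numbers, not benchmarks from this thread:

```python
# Arithmetic behind "storage is the limiting factor": compare
# aggregated link bandwidth against common storage ceilings.
# Storage speeds are typical ballpark figures, not measurements.

links_gbps = [2.5, 2.5]                 # two 2.5GbE NICs
link_mb_s = sum(links_gbps) * 1000 / 8  # 5 Gb/s = 625 MB/s of line rate

storage_mb_s = {
    "single 7200rpm HDD": 180,
    "SATA3 SSD (interface cap)": 600,
    "NVMe SSD (PCIe 3.0 x4)": 3500,
}

for name, speed in storage_mb_s.items():
    verdict = "can saturate" if speed >= link_mb_s else "bottlenecks"
    print(f"{name}: {speed} MB/s -> {verdict} {link_mb_s:.0f} MB/s of links")
```

Only NVMe clears the bar on its own; spinners need to be pooled (as with the 12-drive DAS shelf mentioned above) to keep two 2.5GbE links fed.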
 

sram

[H]ard|Gawd
Joined
Jul 30, 2007
Messages
1,391
Well guys, this is interesting, but I still don't see how data can move at a rate faster than the link (1Gbps = 125 MB/s) allows. The only way I can see that happening is if data is exiting Machine 1 through its NIC at a rate faster than 1Gbps, and entering Machine 2 through its NIC also at a rate faster than 1Gbps, which would simply mean the NICs are capable of more than a 1Gbps transfer rate. Why are they advertised as 1Gbps if they can do more? I simply don't see it.

Somebody might be able to explain it in simple English to me.

No, I didn't read the whole thing. I'm too lazy to do that ^_^ Maybe later
 

Nobu

Supreme [H]ardness
Joined
Jun 7, 2007
Messages
5,409
Well this is interesting. What's happening is that both NICs are being utilized: the dedicated link between the two computers, and also the normal network NIC that the computers communicate publicly on. So it's aggregating the bandwidth of both NICs. I didn't know this was done by default; I thought it was only possible with special setups.

[attached screenshot]

Well guys, this is interesting, but I still don't see how data can move at a rate faster than the link (1Gbps = 125 MB/s) allows. The only way I can see that happening is if data is exiting Machine 1 through its NIC at a rate faster than 1Gbps, and entering Machine 2 through its NIC also at a rate faster than 1Gbps, which would simply mean the NICs are capable of more than a 1Gbps transfer rate. Why are they advertised as 1Gbps if they can do more? I simply don't see it.

Somebody might be able to explain it in simple English to me.

No, I didn't read the whole thing. I'm too lazy to do that ^_^ Maybe later
It's using two network links, the direct crossover link and the regular network connection, to transfer the files. Combined theoretical bandwidth is 2Gbit, or about 250MB/s.
 

sram

[H]ard|Gawd
Joined
Jul 30, 2007
Messages
1,391
It's using two network links, the direct crossover link and the regular network connection, to transfer the files. Combined theoretical bandwidth is 2Gbit, or about 250MB/s.
Oh okay. I see it now.

"then also the normal network NIC that the computers communicate publicly on" . I missed that!

But wait, just to make sure: so we have 4 NICs in total for this whole setup, right? Computer A has a NIC that connects to the internet, and a second one that links it via a crossover cable to Computer B, which also has a NIC that connects it to the internet in addition to the one the crossover cable is hooked up to. Am I right?

It is indeed interesting. You learn something new every day. Thanks.
 

GotNoRice

[H]F Junkie
Joined
Jul 11, 2001
Messages
10,019
SMB multichannel works great, and is a fantastic, cheap way to get network speeds above gigabit. Gigabit switches with 16-24+ ports are cheap and easy to find. Network cards, including nice Intel dual-port and quad-port PCIe adapters, can also be found cheap.

I use 3x gigabit in my main computer and my file server (in both cases, onboard gigabit + a dual-port PCIe card). When transferring files between my main computer and my file server, I can get speeds above 2.5Gb/s.

It's also flexible. For example, you could have a server connected to a 10GbE port, and a client with 4x gigabit connections, and be able to transfer at 4x Gigabit speeds. That makes these switches that only have one or two 10GbE ports, but 16+ gigabit ports, a lot more useful.

There are some limitations, some more obvious than others:
-Transfers are only as fast as the slowest link between the two computers. So if you had computers with 4x gigabit connections connected to totally different switches, with a 1Gb link connecting the two switches, you would be limited by that 1Gb link. On the other hand, if you had a 10GbE link connecting the two switches, then you would be good to go.
-This technology applies to SMB only, which is great for file transfers, but obviously not everything that uses the network uses SMB...
-The connections need to be of identical speeds, as your combined speed will be a multiple of the slowest connection. So if you have a 1Gb connection and a 100Mb connection, you won't get 1100Mbps; instead your combined speed using SMB multichannel will be 200Mbps (2x 100Mb), slower than if you had just used the gigabit connection by itself. This isn't usually an issue, but it can be. I tried to add a USB 2.0 gigabit adapter to a laptop that already had a gigabit connection, hoping to use the combined bandwidth of the two. But since it was a USB 2.0 adapter, the speed was limited to ~300Mbps, which gave me a combined ~600Mbps - again, it ended up being faster to just use the regular gigabit connection. For PCIe or USB 3.0+ gigabit adapters, this shouldn't be an issue.
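The "multiple of the slowest link" pitfall described above comes down to a one-line calculation. This assumes traffic is striped evenly across all links, matching the behavior reported in this thread rather than anything guaranteed by the SMB specification:

```python
# The "multiple of the slowest link" pitfall as arithmetic.
# Assumes even striping across all links, as reported above;
# not a guarantee from the SMB specification.

def multichannel_rate(link_speeds_mbps):
    """Effective rate with even striping: n links * slowest link."""
    return len(link_speeds_mbps) * min(link_speeds_mbps)

# Gigabit NIC + 100Mb NIC, from the example above:
print(multichannel_rate([1000, 100]))   # 200 Mb/s, worse than 1000 alone

# Gigabit NIC + USB 2.0-limited adapter (~300 Mb/s usable):
print(multichannel_rate([1000, 300]))   # 600 Mb/s, still worse than 1000

# Two matched gigabit NICs, as in the original post:
print(multichannel_rate([1000, 1000]))  # 2000 Mb/s ~= 250 MB/s
```

So mixing link speeds can actively hurt; matched links are what turn two NICs into double the throughput.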
 

westrock2000

[H]F Junkie
Joined
Jun 3, 2005
Messages
9,250
But wait, just to make sure: so we have 4 NICs in total for this whole setup, right? Computer A has a NIC that connects to the internet, and a second one that links it via a crossover cable to Computer B, which also has a NIC that connects it to the internet in addition to the one the crossover cable is hooked up to. Am I right?

That's right. The built-in Intel NIC on the motherboard, and then an add-on PCI-E card (TP-Link brand in this case).
 