Server to Server Network Speed Anomaly

PCMusicGuy

So I have 6 servers (Dell, Server 2012 R2) here in a test bed, each with 6 network ports. Each server has a team consisting of two ports, where the 1st port goes to one Hirschmann Mach104 Gigabit switch and the 2nd port goes to another. The switches can be thought of as an A/B setup for redundancy, and the two switches are uplinked/bridged (or whatever term you prefer) to each other.
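
For reference, a team like this would be built with the in-box LBFO cmdlets. A minimal sketch, with made-up adapter/team names, assuming switch-independent mode (I don't know the actual settings used here):

    # Team the port going to switch A with the port going to switch B
    # ("NIC-A", "NIC-B", and "TeamA" are hypothetical names)
    New-NetLbfoTeam -Name "TeamA" -TeamMembers "NIC-A","NIC-B" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Confirm the team and its member states
    Get-NetLbfoTeam
    Get-NetLbfoTeamMember

Switch-independent mode matters in an A/B design like this, because LACP/static teaming generally needs both ports landing on the same switch (or a stack).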

Transferring files between the computers using the default admin network share results in gigabit-speed transfers (~110 MB/s). Let's call this network A.

One of the 6 servers connects to an additional network (call this network B), also in a teamed fashion, through another separate pair of gigabit switches. On this other network there are some 100Mb-limited devices that this server can communicate with. Assume all computers are off and I boot them: this server can achieve full gigabit speed on both networks (using a large file transfer as a test) when talking to the other servers. But if I use this server to talk to a 100Mb device, then this server becomes limited to 100Mb speeds on both the A and B networks. The other five servers, which are only on network A, keep working at the expected gigabit speeds.

I've checked the Hirschmann management software for all 4 switches, and the 4 ports for this server all indicate that the server is connected at 1Gbps everywhere. The OS also reports 1Gbps for all ports. For some reason, though, the actual throughput is limited to 100Mb on all ports. If I disconnect the B network, the server achieves gigabit speeds again.
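
For anyone wanting to double-check the OS side of this, the negotiated speed per port can be pulled from PowerShell; nothing in this sketch is specific to the setup above:

    # Negotiated link speed as Windows sees it, per adapter
    Get-NetAdapter | Format-Table Name, Status, LinkSpeed

    # Member status for the 2012 R2 in-box team
    Get-NetLbfoTeamMember

If these all report 1 Gbps while actual throughput sits at 100Mb, the limit is happening above the physical link layer, which would point at the teaming/load-balancing logic rather than negotiation.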

The only other bit of information I can provide from memory is that we use the built-in Server 2012 R2 teaming. I am trying to understand why it behaves this way. Any ideas?
 
Have you tried hard-coding the interfaces you're testing the 1G connection on (network B) to 1G rather than autonegotiate? I'm wondering whether, when one port negotiates a slower throughput, it affects the other ports as well (bug?).
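
For what it's worth, that can be scripted rather than clicked through the driver GUI. A sketch; the adapter name is made up, and the exact "Speed & Duplex" display name/values vary by NIC driver:

    # Restrict a port to 1 Gbps full duplex instead of plain autonegotiate
    Set-NetAdapterAdvancedProperty -Name "Ethernet 3" `
        -DisplayName "Speed & Duplex" -DisplayValue "1.0 Gbps Full Duplex"

    # Verify what the driver actually applied
    Get-NetAdapterAdvancedProperty -Name "Ethernet 3" -DisplayName "Speed & Duplex"

(Note that 1000BASE-T still autonegotiates under the hood; this setting mostly restricts which speeds the port advertises.)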
 
Sounds like a subnet issue. Are the switches/network adapters on different subnets?

Also, Mb = megabit, MB = megabyte (1 MB = 8 Mb). Gotta keep the terms straight to make it easier to understand/help.
 
I have not tried setting the speeds manually. Everything is set to auto-negotiate. It is something that could certainly be tried though.

It could be a subnet issue, but I'm not sure why that would affect speeds. Network A is on 192.168.2.x, mask 255.255.255.0, while network B is on 192.168.1.x, mask 255.255.255.0.
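
A quick way to sanity-check which interface sits on which subnet, if it helps (standard cmdlet, nothing assumed beyond IPv4):

    # List IPv4 address and prefix length per interface
    Get-NetIPAddress -AddressFamily IPv4 | Format-Table InterfaceAlias, IPAddress, PrefixLength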

Also, I've edited the post so I am now writing 100Mb instead of 100MB, to keep the Mbps/MBps terms straight.

When I get to the office this week I will see if I can do a little more digging.
 
Off the top of my head it sounds like it's working as expected... So basically you're saying that the server is downgrading its throughput to 100Mb when it detects a 100Mb link. Because the two adapters are bound together, they are both going to downgrade their speed at the same time. I don't understand why these are "teamed" in the first place, though. If you have two separate networks, there is no need to team these adapters together, because you're not gaining anything by doing so (as shown by the fact that you're only getting ~100 MB/s instead of 200 MB/s).
 