2x10Gbps SMB multichannel adventure

ochadd

Attempting to upgrade my home Server 2022 Hyper-V server and my Windows 11 desktop by adding Intel X550-T2 NICs with both ports active. Had solid performance for a single 10 gig connection but nothing more. This is my first time messing with SMB multichannel, and all the reading I've done says it "just works". I spent half a day wrenching on this, so here are my notes in case they help anyone in the future.

Created a NIC SET team on the server, created a Hyper-V vSwitch, and assigned it to my file server VM. Everything shows 20 gbps in the VM, but I can't get anything over 9.3 gbps through it with iperf and 16 streams, and a single stream does 3.2 gbps. CPU isn't maxed out, and the card is in a x16 physical slot that should have PCIe 3.0 x4 throughput. SMB multichannel shows enabled. Standard MTU of 1500; I can't do jumbo frames due to incompatible devices that need to communicate with the server.
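For reference, the test was along these lines (server address hypothetical; I'm assuming iperf3 syntax here):

iperf3 -s                  # on the file server VM
iperf3 -c 10.0.0.10 -P 16  # on the desktop: 16 parallel streams to the server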

Real-life performance: I did a full backup of my PC, and both NIC ports (2 ports on the same card) each loaded to about 2 gbps, roughly 480 MBps combined. A straight SMB file copy between the two peaks at about 730 MBps. CrystalDiskMark to a shared drive shows about 1 GBps. Both NIC ports in each machine are being leveraged, but it's no better than a single 10 gig link for single-client workloads.



The host server has 4 NICs:
1 gbps = dedicated WAN port passed through to the firewall
2.5 gbps = dedicated LAN port for the firewall
10 gbps X550-T2 = both ports in SETTEAM



I decided to break the team and pass the NICs through individually to the VM. I get nearly the same performance with a single X550 port as I did with two in a team. Tested both ports by themselves.



So I decided to pass the second port through to the VM as well, which is how I imagine Microsoft envisions SMB multichannel being used: two individual NICs with separate IP addresses.
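A minimal sketch of that setup, assuming a second external vSwitch bound to the second port (the VM and switch names here are hypothetical):

New-VMSwitch -Name "X550-Port2" -NetAdapterName "x5502" -AllowManagementOS $false   # second external vSwitch on the second port
Add-VMNetworkAdapter -VMName "FileServer" -SwitchName "X550-Port2"                  # attach a second vNIC to the VM

Then inside the VM, give the new adapter its own IP address so SMB multichannel sees two distinct paths.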



Better. Getting about 13 gbps total. Each NIC loaded at around 6.5 gbps.
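To watch per-NIC load during a transfer, the standard performance counters work too (values are bytes/sec, so multiply by 8 for bits):

Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' -SampleInterval 1 -MaxSamples 10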


iperf still reported a little over 9 gbps.


During a full backup I saw a peak of about 680 MBps, a 200 MBps (roughly 1.6 gbps) improvement. A tangible difference in favor of SMB multichannel over the NIC team.

I didn't know before I started that Microsoft removed the ability to create a NIC team in Windows 11; using the Intel utilities no longer works either.
Here are some PowerShell commands that could be handy. SETTEAM is the name I gave the team; x5501 and x5502 are the names of the NIC adapters.

Get NIC adapter names:
Get-NetAdapter

To create the SET team (Switch Embedded Teaming), since LBFO teaming is now deprecated (didn't know that either). You can still create an LBFO team in Server Manager, but Hyper-V will not allow creating a vSwitch with it. There is no GUI for this:
New-VMSwitch -Name SETTEAM -NetAdapterName "x5501","x5502"
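For what it's worth, specifying multiple adapters enables embedded teaming automatically; if you want to be explicit, and keep the management OS off the switch, these optional parameters exist:

New-VMSwitch -Name SETTEAM -NetAdapterName "x5501","x5502" -EnableEmbeddedTeaming $true -AllowManagementOS $false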

To set a load balancing type:
Set-VMSwitchTeam -Name SETTEAM -LoadBalancingAlgorithm HyperVPort
or
Set-VMSwitchTeam -Name SETTEAM -LoadBalancingAlgorithm Dynamic
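To verify the team members and the current algorithm afterwards:

Get-VMSwitchTeam -Name SETTEAM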

Check if SMB multichannel is active:
Get-SmbMultichannelConnection
Better formatted version:
Get-SmbMultichannelConnection | Format-List
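And to confirm multichannel hasn't been turned off on either end (it's on by default):

Get-SmbClientConfiguration | Select-Object EnableMultiChannel
Get-SmbServerConfiguration | Select-Object EnableMultiChannel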

Remove/delete the SET NIC team:
Remove-VMSwitch "SETTEAM"

Edited for clarity and wording.
 

Interesting. My understanding of SMB multichannel is that it's simply another link from one host to another, i.e., separate IPs on each end in addition to the primary link, which also has its own IP.

So for example, if you have 2x 10Gb dual-port NICs on host1 and a dual-port 10Gb NIC on host2, SMB multichannel would create 4 links from host1 to the 2 ports on host2. In theory this would max out the bandwidth going into host2, since host1 is capable of 40Gb and host2 can do 20Gb.

Now, I don't have anything new enough to play with SMB multichannel, so I just have different IP subnets on each interface and manually set up my copies to use a particular link by IP address.
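A hypothetical example of pinning a copy to one link by targeting the per-NIC IP of the server (the address, share, and paths here are made up):

robocopy \\10.20.0.2\share C:\dest /E   # addressing one specific interface forces traffic over that link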
 
I've been having internet issues on my main PC for a few months now and can't figure out what is going on, but today, when I was messing with it with a USB 2.5GbE NIC attached, I noticed that it was only going at 1.1-1.2 Gbps.
I then noticed that the onboard 2.5GbE NIC was also doing 1.2 Gbps.
I didn't do anything in Windows to "team" the NICs. Is that what is happening, the NICs are teaming together?
[screenshot: 2.5GbE over 2 NICs]
 
It wouldn't be teaming, but it could be SMB multichannel, depending on how your network is configured. Your routing device or server would need to have two NICs as well. Teaming is a specific technology for bonding network cards together and presenting them as a single logical interface to the operating system. You set this up intentionally via PowerShell or Server Manager, and a load balancing algorithm balances the traffic between the two physical NICs. It's typically used for multi-threaded, multi-user environments; think of a file server with 300 users, or a server connection to a SAN via iSCSI for storage traffic.
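On Server SKUs, that kind of traditional LBFO team is created like this (team and member names here are hypothetical; this cmdlet is gone from Windows 11):

New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic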

Your file copy speed is exactly what I got from my 2.5gbps NIC setup before this upgrade.
 
I didn't set up anything at all, either in PowerShell or Server Manager. All I did was unplug the onboard NIC's cable from the switch and plug in the USB one with its own cable to the switch.
When I was done I just plugged the onboard back in and noticed it was splitting the transfer between the two 2.5GbE NICs. I've always had the Intel gigabit NIC plugged in as well, but all traffic has always gone through the onboard Realtek.

The strange issue I am having is that network transfers go at full speed, but internet traffic only goes at 1300 Mbps. All the other machines do 2300 Mbps.
The USB adapter gets 2300 on every machine other than this one, so that NIC works perfectly; this is some kind of Windows thing that I can't figure out.
 

This is the correct behavior for SMB multichannel. It's all automatic and enabled by default. Teaming is separate, something you have to specifically configure, and not required for SMB multichannel to function. In your file copy example, you were being limited to ~1.2Gbps on each of the two NICs because you only had a single 2.5GbE NIC on the other end.
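If you ever want to rule multichannel in or out while testing, it can be toggled on the client side (run as admin; the same switch exists in Set-SmbServerConfiguration):

Set-SmbClientConfiguration -EnableMultiChannel $false   # disable for testing
Set-SmbClientConfiguration -EnableMultiChannel $true    # re-enable afterwards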


Whatever the issue is with your internet speeds, it is NOT related to SMB multichannel. SMB is a protocol for file transfers on a LAN. It doesn't affect internet traffic.

My guess would be a USB issue. Even older USB 3.0 (aka 3.1 Gen 1, aka 3.2 Gen 1) is 5Gbps, but that is the theoretical limit of the spec, not necessarily what you will get in practice with every USB 3.0 port. It could be that the USB 2.5GbE NIC on that system is simply being limited to ~1.4Gbps due to USB issues. When you're transferring files to another system on your LAN using SMB, and that other system has a single 2.5GbE adapter, your system with the USB 2.5GbE NIC and the onboard 2.5GbE NIC splits that traffic in half, putting the work assigned to the USB NIC under that ~1.4Gbps ceiling, which is why you still see full 2.5GbE combined speeds across the two NICs.

I personally use 4x 1GbE adapters in my primary file server, 4x 1GbE adapters in my main rig (3 of which are USB). No teaming. All plugged into the same switch. It all works seamlessly.

[screenshot: multipath SMB across 4x 1GbE adapters]


One thing I have noticed, particularly in the past, is odd behavior when using mismatched nic speeds. For example, if you had a system with a 100Mbps nic and a 1Gb nic, SMB would transfer files at 200Mbps (2x slowest link speed) instead of 1100Mbps. At some point in the not too distant past, that did seem to have been fixed (I have now seen examples of it fully utilizing multiple network interfaces with mismatched speeds), but it still seems to be somewhat inconsistent. I think that using multiple nics with the same link speed, and disabling any slower extra interfaces you have, is still the most reliable way to go.
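Disabling a slower interface for the duration of a transfer is easy enough from PowerShell (the adapter name here is hypothetical; run as admin):

Disable-NetAdapter -Name "Onboard 1GbE" -Confirm:$false   # take the slow link out of play
Enable-NetAdapter -Name "Onboard 1GbE"                    # bring it back when done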
 
I didn't mention it in the original post, but I may have seen that. I had the 20 gbps NIC team and a 2.5 gbps NIC passed to the VM and was getting about 5 gbps. I assumed it was just my NIC team not performing how I'd hoped, but it could have been defaulting to 2x the slowest connection.
 
One thing is for sure: diagnosing multichannel SMB will test your math skills! :D

In fact, these would have been fascinating word problems compared to the 'train A is moving and train B is moving' ones.
 