ochadd
[H]ard|Gawd
- Joined
- May 9, 2008
- Messages
- 1,421
Attempting to upgrade my home Server 2022 Hyper-V server and my Windows 11 desktop by adding Intel X550-T2 NICs with both ports active. I had solid performance over a single 10 gig connection, but nothing more. This was my first time messing with SMB multichannel, and all the reading I'd done said it "just works". I spent half a day wrenching on this, so here are my notes in case they help anyone in the future.
Created a NIC SET team on the server, created a Hyper-V vSwitch on it, and assigned it to my file server VM. Everything shows 20 gbps in the VM, but I can't get anything over 9.3 gbps through it with iperf and 16 parallel streams; a single stream does 3.2 gbps. The CPU isn't maxed out, and the card is in a x16 physical slot, so it should have full PCIe 3.0 x4 throughput. SMB multichannel shows as enabled. MTU is the standard 1500; I can't use jumbo frames because of incompatible devices that need to communicate with the server.
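For anyone who wants to reproduce the test, this is roughly the kind of iperf3 run I mean; the server IP is a placeholder for your own environment:

```powershell
# On the file server VM: start iperf3 listening (default port 5201).
iperf3 -s

# On the Windows 11 client: 16 parallel TCP streams for 30 seconds.
# 192.168.1.10 is a placeholder for the file server's IP.
iperf3 -c 192.168.1.10 -P 16 -t 30
```

Multiple parallel streams (-P) matter here because a single TCP stream won't spread across team members or multichannel connections.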
Real-life performance: during a full backup of my PC, both NICs (the two ports on the same card) each loaded to about 2 gbps, roughly 480 MB/s combined. A straight SMB file copy between the two peaks at about 730 MB/s. CrystalDiskMark to a shared drive hits about 1 GB/s. Both NICs in each machine are being leveraged, but it's no better than a single 10 gig link for single-client workloads.
The host server has 4 NICs.
1 gbps = Dedicated WAN port passed through to firewall
2.5 gbps = Dedicated LAN port for firewall.
10 gbps X550-T2 = Both ports in SETTEAM
I decided to break the team and pass the NICs through to the VM individually. I got nearly the same performance with a single X550 port as I did with two in a team; I tested both ports by themselves.
So I passed the second port through to the VM as well, which is how I imagine Microsoft envisions SMB multichannel being used: two individual NICs with separate IP addresses.
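For reference, setting up the two adapters inside the VM looks something like this; the adapter aliases, IPs, and subnet are all placeholders for whatever your environment uses:

```powershell
# Assumes the VM sees two adapters named "Ethernet" and "Ethernet 2";
# substitute your own names (see Get-NetAdapter) and addressing.
New-NetIPAddress -InterfaceAlias "Ethernet"   -IPAddress 192.168.1.10 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 192.168.1.11 -PrefixLength 24

# SMB multichannel should then pick up both paths on its own
# once a transfer is running:
Get-SmbMultichannelConnection
```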
Better. Getting about 13 gbps total. Each NIC loaded at around 6.5 gbps.
iperf still reporting a little over 9 gbps
On a full backup I saw a peak of about 680 MB/s, a 200 MB/s (roughly 1.6 gbps) improvement. A tangible difference in favor of SMB multichannel over the NIC team.
I didn't know before I started that Microsoft removed the ability to create a NIC team in Windows 11; the Intel teaming utilities no longer work either.
Some PowerShell commands that could be handy. SETTEAM is the name I gave the team; x5501 and x5502 are the names of the NIC adapters.
Get NIC adapter names:
Get-NetAdapter
To create the SET team (Switch Embedded Teaming), since LBFO teaming is now deprecated (didn't know that either). You can still create an LBFO team in Server Manager, but Hyper-V will not allow creating a vSwitch on it, and there is no GUI for SET.
New-VMSwitch -Name SETTEAM -NetAdapterName "x5501","x5502"
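Passing multiple adapters to -NetAdapterName already implies SET on Server 2022, so as far as I can tell this is equivalent, just with the optional flags spelled out (and a host-side vNIC kept):

```powershell
# -EnableEmbeddedTeaming makes the SET intent explicit;
# -AllowManagementOS keeps a virtual NIC for the host itself.
New-VMSwitch -Name SETTEAM -NetAdapterName "x5501","x5502" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true
```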
To set a load balancing type:
Set-VMSwitchTeam -Name SETTEAM -LoadBalancingAlgorithm HyperVPort
or
Set-VMSwitchTeam -Name SETTEAM -LoadBalancingAlgorithm Dynamic
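To verify the team members and which load-balancing algorithm is actually in effect afterward:

```powershell
# Shows NetAdapterInterfaceDescription, TeamingMode,
# and LoadBalancingAlgorithm for the SET team.
Get-VMSwitchTeam -Name SETTEAM | Format-List
```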
Check if SMB multichannel is active:
Get-SmbMultichannelConnection
Better-formatted version:
Get-SmbMultichannelConnection | Format-List
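Since multichannel decides which paths to use based on what each side advertises (link speed, RSS, RDMA), these are also worth a look when it isn't behaving; both take no arguments:

```powershell
# What the local SMB client sees on each of its interfaces:
Get-SmbClientNetworkInterface

# Run on the file server: what it advertises to clients:
Get-SmbServerNetworkInterface
```

If RSS shows as False on an interface, multichannel may only open one connection per NIC, which can cap single-NIC throughput.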
Remove/delete the SET NIC team.
Remove-VMSwitch "SETTEAM"
Edited for clarity and wording.