Windows Server 2012 R2 and SMB Multichannel?

vr.

I've been poking around YouTube and reading Microsoft blog posts about SMB Multichannel and it sounds great! Except I can't find hard specifics of what is truly necessary, like knowing what it takes to bake the bread before going to the grocery store to get the ingredients!

I'm thinking I'd like to take a couple of Windows Server 2012 R2 servers with some internal disk, add the Hyper-V role to them, and distribute their 4 x 1 GbE as 2 x 1 GbE to separate switches.

Assuming a single 1 GbE up-linking those switches is not enough bandwidth, do you have to LACP "N" x 1 GbE on those separate switches to get 'r done?

If not LACP, do you have to stack these switches for SMB Multichannel to auto-magically discover a "supported configuration"?

And what's this talk about not teaming adapters and letting DHCP hand out an IP to each NIC in each server? Won't the server be confused about its host identity and FQDN?
 
All you need is IP connectivity on all the links between the client and server. I prefer to use separate subnets per SMB Multichannel path just so I can ensure static control over where the SMB traffic goes, but it's not required.
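
If you want that kind of static control without dedicating separate subnets, SMB Multichannel constraints are another option. A minimal sketch, run on the SMB client (the server name FS01 and the interface aliases are hypothetical):

Code:
# Limit SMB traffic destined for FS01 to these two client interfaces only.
New-SmbMultichannelConstraint -ServerName "FS01" -InterfaceAlias "Ethernet 1", "Ethernet 2"

# Review the constraint, or remove it later if you change the design.
Get-SmbMultichannelConstraint
Remove-SmbMultichannelConstraint -ServerName "FS01" -InterfaceAlias "Ethernet 1"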
 
http://blogs.technet.com/b/josebda/...ature-of-windows-server-2012-and-smb-3-0.aspx

1.2. Requirements

SMB Multichannel requires the following:

At least two computers running Windows Server 2012 or Windows 8.

At least one of the configurations below:

Multiple network adapters
One or more network adapters that support RSS (Receive Side Scaling)
One or more network adapters configured with NIC Teaming
One or more network adapters that support RDMA (Remote Direct Memory Access)

Windows Server 2012 and Windows 8 both support SMB 3.0.
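
As a rough way to check those boxes from PowerShell on either end (the adapter output will obviously depend on your hardware):

Code:
# Multichannel is on by default with SMB 3.0; confirm it hasn't been disabled.
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
Get-SmbServerConfiguration | Select-Object EnableMultiChannel

# See which adapters report RSS and/or RDMA capability.
Get-NetAdapterRss
Get-NetAdapterRdma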

It works with or without LACP; however, in my tests I found it to be faster without LACP.

Your concerns about multiple IPs and FQDN are misplaced. The only real issue is with adapter binding orders, or if you're pointing at the servers by IP address instead of DNS name. Obviously, if your switch-to-switch connections are only 1 Gb, there's not a lot of benefit when traversing those links. However, if A) everything is on the same switch, B) you have LACP or 10 Gb switch connections, or C) switch stacks, then you will see huge improvements.

However, keep in mind that LACP isn't a perfect load-balancing method either: it hashes traffic across the links, and depending on how the switch hashes, even an LACP link might not help. You could distribute the load over multiple switches with multiple uplinks instead. SMB 3.0 is supposed to dynamically determine whether there is a path to the endpoint.
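
If you want to run that kind of with/without comparison yourself, Multichannel can be toggled on the client for a test pass; a sketch:

Code:
# Disable SMB Multichannel on the client to get a single-channel baseline.
Set-SmbClientConfiguration -EnableMultiChannel $false

# ...run the same file copy test in both states...

# Re-enable it and force existing connections to renegotiate their channels.
Set-SmbClientConfiguration -EnableMultiChannel $true
Update-SmbMultichannelConnection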
 
I pretty much have the exact same question: tons of articles about the theory behind the tech, but not one video or article that says "plug in 2 NICs, right-click on one, select SMB Multichannel, configure an IP," or anything like that.
 
[SOLVED] 10GbE: SMB Direct and SMB Multichannel on Windows 8.1 | SmallNetBuilder Forums
From what I've tested, the NIC needs RSS to work, and not just a switch to turn it on; there needs to be an option to set the CPU number for the NIC.
I've tested only one subnet without LACP. It works on two NICs between Server 2012 R2 and Windows 8.1. All the NICs were different: one Intel CT, one Realtek, one Broadcom, and another Realtek.
An Intel 1000PT dual didn't work.
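
For anyone wanting to check that on their own hardware, the RSS state and the per-NIC processor assignment described above are both visible from PowerShell. A sketch, with "Ethernet" standing in for your adapter name:

Code:
# Show RSS capability and the current processor assignment for the adapter.
Get-NetAdapterRss -Name "Ethernet"

# Turn RSS on and pin it to a processor range (the numbers are just examples).
Enable-NetAdapterRss -Name "Ethernet"
Set-NetAdapterRss -Name "Ethernet" -BaseProcessorNumber 2 -MaxProcessors 4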
 
Your concerns about multiple IPs and FQDN are misplaced. The only real issue is with adapter binding orders, or if you're pointing at the servers by IP address instead of DNS name.

How are my concerns about the IPs and FQDN misplaced? If those 4 NICs on each host are connected to different subnets, the DNS records for each host will be in a fairly constant state of uncertainty, since the default behavior has the "Register this connection's addresses in DNS" checkbox ticked. All 4 adapters on both servers will re-register their IPs periodically. If the servers are connected to subnets A and B and those subnets are not isolated, meaning a 2012 R2 client on subnet A can talk to subnet B, then whether the client takes the shortest path to the server becomes a question mark.
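
(For what it's worth, that checkbox maps to a per-interface DNS client setting, so one way to tame the registration churn is to let only the management NIC register; a sketch with hypothetical interface aliases:)

Code:
# Stop the SMB-only adapters from registering their addresses in DNS.
Set-DnsClient -InterfaceAlias "SMB1", "SMB2" -RegisterThisConnectionsAddress $false

# Confirm which interfaces will still register.
Get-DnsClient | Select-Object InterfaceAlias, RegisterThisConnectionsAddress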
 
From what I've seen, your data is always going to use the shortest route (the one with the lowest metric) in your routing table. SMB 3.0 with dynamic teaming (one IP for the team) always starts out using the primary NIC when servicing a request, and when responding it sends an ARP request to the local switch so that the requested data travels to the second NIC's MAC. This way you get most of the benefits of having a team (LAG or LACP) without actually making one.

Seriously, you're overthinking it. I used it on the server end, connected to HP 2910 switches that are 10 GbE connected back to our main switch stack. The result is that I occasionally see throughput higher than 1 Gbps to and from the server when there are simultaneous requests from multiple users. I probably have 8 out of 70 servers using it in this fashion, and it just works.
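
A quick way to see that behavior for yourself is to look at the live SMB sessions and channels from the client while a transfer is running; a sketch:

Code:
# List active SMB sessions and the interfaces each Multichannel connection is using.
Get-SmbConnection
Get-SmbMultichannelConnection

# Match the interface indexes in that output against your adapters.
Get-NetAdapter | Select-Object Name, InterfaceIndex, LinkSpeed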
 
That's a very good idea, actually!

You can cluster a couple of free Hyper-V Server boxes to behave like a dual-controller iSCSI SAN / SMB3 NAS.

SMB 3.0 File Server on Free Microsoft Hyper-V Server 2012 R2 (Clustered)

Nice step-by-step guide.
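
The guide walks through it via the GUI; the core of it can also be scripted. A very rough sketch, assuming two nodes HV1 and HV2, a cluster name and IP, and shared storage already presented to both nodes (all names here are hypothetical):

Code:
# From a box with the Failover Clustering PowerShell tools installed.
New-Cluster -Name "SMBCLU" -Node "HV1", "HV2" -StaticAddress 192.168.1.50

# Add the Scale-Out File Server role and publish a share for Hyper-V workloads.
Add-ClusterScaleOutFileServerRole -Name "SOFS"
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\Shares\VMs" -FullAccess "DOMAIN\HV1$", "DOMAIN\HV2$"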

 