XenServer, NFS, and link aggregation

wizdum

I'm setting up a XenServer cluster with two hosts and one NFS storage array, and I'm getting a little lost in the terminology around LACP and bonded NICs. The NFS server has 5x 1Gb ports in it. Both hosts have 2x 1Gb ports that I'd like to dedicate to storage. The NFS server will be connected to the hosts via a gigabit managed switch that is dedicated to the storage network.

Is it possible to set up LACP on both the hosts and the NFS server to achieve close to 2Gbps between them? Or will I still be limited to 1Gbps since each set of bonded NICs shares the same MAC address?

Advice? Is there a better way that I am overlooking?

Last question: am I over-complicating this for no reason? The NFS storage array has 12x 7200RPM drives in RAID10; will it even be able to saturate a 1Gb link?
 
You can aggregate links with LACP, no problem, but no single transfer will exceed 1Gb/s. Depending on the hashing method the switch uses, it'll send traffic down the different links in the LACP bundle based on source/destination IP and port.
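
To illustrate why (rough Python sketch only; real switches and bonding drivers each use their own hash, and the IPs/ports below are made-up examples): the flow's 4-tuple always hashes to the same member link, so one NFS session never uses more than one 1Gb port.

Code:
# Toy layer3+4 hash: a flow's 4-tuple always maps to the same member link,
# which is why a single NFS transfer can't exceed 1Gb/s over an LACP bundle.
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, num_links):
    """Pick a member link from the flow's source/destination IP and port."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

# One Xen host talking NFS to the storage array: the tuple is the same for
# every packet of that session, so every packet rides the same 1Gb link.
print(pick_link("10.0.0.11", "10.0.0.50", 745, 2049, 2))
print(pick_link("10.0.0.11", "10.0.0.50", 745, 2049, 2))  # identical result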

Yes, the storage array should be able to saturate 1Gb/s no problem. Gigabit tops out around 110-115MB/s of real throughput, and 12x 7200RPM drives in RAID10 will push well past that on sequential reads. I've got a NAS with 5 drives that will saturate 1Gb/s easily.

If you're really looking for fast storage, your next upgrade should be 10Gb Ethernet for the storage network.
 
Yeah, I knew that any one transfer won't exceed 1Gbps. I was hoping that I could get 1Gbps to multiple VMs simultaneously if the VM host had several NICs bonded together. For example, if 4 requests to 4 separate VMs all come in at the same time, the requests would be balanced across 4 1Gb NICs. After looking over the XenServer docs, I don't think this is possible. I may have to live with 1Gbps for now.

I don't think 10Gbps is an option at this point in time; that may be an upgrade further down the road. Any recommendations on some cheap 10GbE NICs and a small switch? I'm talking used-on-eBay cheap here.
 
Correct, the VM isn't actually talking directly to the storage; the Xen host is. Meaning even with multiple VMs, the source/destination IP pair on all the packets is the same, so it all hashes to the same link.

Now, let's say you had 10 Xen hosts and one storage appliance with 4x GigE in an LACP bundle. The different Xen hosts would get hashed onto different links in the bundle, but that's about it.
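
Same toy sketch extended to that scenario (again, made-up IPs and a stand-in hash, not any vendor's actual algorithm): with 10 different source IPs there's something to spread, so the hosts land on different member links, but each host still tops out at 1Gb/s to the array.

Code:
# Toy sketch of 10 Xen hosts hitting one storage appliance over a 4-link bundle:
# with many source IPs there is more to hash on, so traffic spreads across the
# bundle (not necessarily evenly), while each individual host stays on one link.
from collections import Counter
import hashlib

def pick_link(src_ip, dst_ip, num_links):
    """Layer-3-style hash: pick a member link from the IP pair."""
    key = f"{src_ip}-{dst_ip}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

storage_ip = "10.0.0.50"                          # hypothetical storage appliance
hosts = [f"10.0.0.{10 + n}" for n in range(10)]   # 10 hypothetical Xen hosts

links = Counter(pick_link(h, storage_ip, 4) for h in hosts)
print(links)  # hosts spread across the 4 links, though not perfectly evenly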

As for switch recommendations, I'm pretty out of the loop. The 10Gb stuff I work with is all routing-type equipment (Mikrotik Cloud Core routers with SFP+ interfaces, etc.).
 
Having 10x hosts is more the direction that I want to head in anyway. They don't need fast storage links; the problem is that they have too many VMs on each host. We can get the hosts dirt cheap on eBay anyway. It would still be worth it to bond the LAN interfaces on the XenServer hosts though, right, since that would be a "many to one" situation?

To sidetrack this topic, how do you like the Cloud Core Routers and CC switches? I just picked up 3 of the switches, and I'm looking at some of the routers to use to segment a very large /8 flat network (6+ separate sites linked with fiber, local and remote resources, etc.).
 
Personally, I wouldn't bond the interfaces on the Xen hosts. For storage traffic at least, it's a one-to-one relationship: the Xen server talking to the storage array. Now, the actual VM traffic to the internet would be one-to-many, but I doubt you need that kind of bandwidth per host for the VMs. Just my thoughts.

Now, the interfaces on the storage array I would bond. As you scale into more physical hosts, the bonding will become more useful.

You could apply the same theory to a NAS in an office. One person accessing it over a bonded interface will only get hashed onto one member link, but as you add more "users" (in your case, Xen hosts) you get more information to hash on, and the traffic is better split.

Never touched the switches, and I don't know that I want to, given Mikrotik's track record. I'd rather just have an HP switch (like a 1810-24G) if I'm only switching.

As for the routers, we love them. I've got 20 or so deployed in various scenarios, mostly as tower core routers (we're a WISP). Some of them are pushing close to line-rate GigE on a few interfaces and don't even break a sweat. We're running OSPF/MPLS on the core, with no BGP except at the borders (x86 routers).
 