How many 10 GbE NICs should you use

Our biggest issue is the budget, but I want everything to run optimally. Considering what we are coming from, it probably doesn't take much to make a huge improvement.

Current setup:

ESXi hosts: 6 - 1 Gb NICs per host
2 - Management
2 - Storage (NFS)
2 - Production

NetApp FAS3240: 3 - 1 Gb NICs per head, two heads Active/Active

Approximately 400 VMs running in this environment.

Due to the budget, we are thinking about just adding two 10 Gb NICs to each ESXi server and to the NetApp. The switches will be Arista 7050s and 7150s. We will also be adding a FAS8040 with 10 Gb, but that is going into a different datacenter, which will eventually be our primary location for VMware. The FAS8040 will be iSCSI.

If we are stuck with only two 10 Gb NICs per ESXi host, what should we run over them? Obviously at least storage. But what about vMotion? I am pretty sure running vMotion on the storage network is not best practice. Plus, I've read that with 10 Gb and a 9000 MTU, Storage vMotion is actually slower.

And what about production? Do we really want to add production onto the same 10 Gb NICs?

Ideally I would love to have six 10 Gb NICs: two for prod, two for storage, and two for vMotion.

What are your thoughts?
 
The problem with only having two and separating them is that you have no failover.

I would personally bond them and run everything over the two with QoS to ensure priority to the storage and vMotion traffic.
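To make the "bond them and use QoS" idea concrete, here's a rough Python sketch of how share-based prioritization (NIOC-style) carves up a saturated 10 Gb uplink. The traffic classes and share values below are illustrative assumptions on my part, not a recommendation:

```python
# Illustrative only: how proportional "shares" divide a saturated uplink,
# the way NIOC-style QoS does. The classes and share values are made-up
# examples, not tuning advice.

UPLINK_GBPS = 10  # shares are applied per physical 10 GbE uplink

shares = {
    "management":  20,
    "vmotion":     50,
    "nfs_storage": 100,
    "vm_traffic":  50,
}

total = sum(shares.values())
for traffic_class, share in shares.items():
    floor = UPLINK_GBPS * share / total
    print(f"{traffic_class:>12}: ~{floor:.1f} Gb/s guaranteed when the link is saturated")
```

The point being: each class only gets pinned to its floor when the link is actually contended, and can burst to the full 10 Gb/s otherwise, which is what makes converging storage, vMotion, and production on two ports tolerable.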

If you're going to keep the six 1 Gb NICs, then you could use those for production and management traffic. I've never been a big fan of dedicated NICs for anything other than ultra-secure or ultra-important traffic. I'd much rather logically separate traffic and maintain maximum failover and throughput for everything.

If I were you, I'd fight for four 10 Gb ports per server. Two dual-port NICs would do well, leaving two ports for back-end traffic and two ports for production traffic.
 
Two 10 Gbps NICs (two ports) are plenty.
The major question here is: do you plan to scale up to actually use 10 Gbps of throughput, and how critical is uptime?
The old highway analogy works here: larger road, but same number of cars?

Things to consider:
- You are not pushing more than 1 Gbps per flow at the moment. Do you see link saturation in your production network today?
- Storage can eat up ports easily, but you are only using up to 2 Gbps right now. Do you have a bandwidth issue here, or more of a latency issue because of the packets/IOPS/transactions per second going over these NICs?

Reducing network port counts can cause latency issues. Dropping from six ports to two will reduce your "packet queue" to one-third of what it was. Physical network adapters are time-division multiplexers: everything gets a time slice. Bigger pipe, bigger slice. Fewer ports, fewer pies to slice up (at the same time)... mmm, pie.
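To put rough numbers on that trade-off, here's a quick back-of-the-envelope sketch (plain Python; the port counts come from this thread, the rest is just arithmetic):

```python
# Back-of-the-envelope comparison of the current 6 x 1 GbE layout vs. a
# converged 2 x 10 GbE layout per host.

old_ports, old_speed = 6, 1    # Gb/s per port today
new_ports, new_speed = 2, 10   # Gb/s per port after the upgrade

print(f"Aggregate bandwidth  : {old_ports * old_speed} Gb/s -> {new_ports * new_speed} Gb/s")
print(f"Physical ports/queues: {old_ports} -> {new_ports} "
      f"({new_ports / old_ports:.0%} of what you had)")
print(f"Per-flow ceiling     : {old_speed} Gb/s -> {new_speed} Gb/s")
```

Bandwidth and per-flow headroom go way up while the number of independent ports (and their queues) drops to a third, which is exactly the trade-off described above.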

But enough doom and gloom about port reduction. As MysticRyuujin said, if you can fight for two 10 Gbps adapters with two ports each (four 10 Gbps ports total), you should be golden. Consider the extra ports as being for redundancy and uptime more than bandwidth. Telling the boss you need to drop an extra $800 per server to ensure uptime will (almost) always work better than saying you need faster toys. If the services are not that critical, eh, more ports just means more cables and more things to monitor and manage.

I rarely suggest hitting a book, but in this case "Networking for VMware Administrators" covers this topic 100%. The last few chapters give rock-solid examples of why and how to use 1 Gbps links or two/four 10 Gbps links.

http://www.amazon.com/Networking-VMware-Administrators-Press-Technology/dp/0133511081

I've had my hands in VMware since 2006 (15,000+ VMs under management, vExpert, VCP, etc.), and I still learned a few things from the book.

Quick side note: the Intel X540-Tx (copper 10 Gbps) adapters have a driver bug that Intel and VMware won't fix... If you plan to use these adapters, ping me and I can tell you the issue and the fix.

Nicholas Farmer
http://pcli.me
 
We are not currently saturating our 1 Gb storage network, but it is a bit dated and there is a mix of Brocade and Cisco.

I believe the main issue with four 10 Gb ports per ESXi host is that we will not have enough ports on the new 10 Gb switches for all the servers. I'll have to confirm for sure.

I was actually planning on using Intel X520-DA2 NICs in all the servers. Do you see a problem with using those? Is there much of a difference between the X520 and X540?
 
The X540 I believe is copper, whereas the X520 is SFP+, although I could be mistaken. I would echo the two dual-port cards in each host for card redundancy, as I am a huge fan of having zero single points of failure in the environments I design. Alternatively, you could have a separate vSwitch running the 1 Gb NICs as your failover VM network instead, but you will have to manually intervene to make the change.
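If you do go the manual-failover route, the annoying part is noticing that the 10 Gb uplinks dropped in the first place. Something like the pyVmomi sketch below (host name, credentials, and the overall approach are placeholder assumptions on my part, not a finished tool) could report per-NIC link state so you know when to move the VM network portgroups over to the 1 Gb vSwitch:

```python
# Hypothetical monitoring sketch using pyVmomi: list each physical NIC's link
# state and speed on the ESXi hosts, so you can tell when the 10 Gb uplinks
# are down and the VM Network needs to be moved to the 1 Gb standby vSwitch
# by hand. Host name and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skips certificate checks
si = SmartConnect(host="esxi01.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for pnic in host.config.network.pnic:
            if pnic.linkSpeed:  # unset means the link is down
                print(f"  {pnic.device}: up at {pnic.linkSpeed.speedMb} Mb/s")
            else:
                print(f"  {pnic.device}: LINK DOWN")
finally:
    Disconnect(si)
```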

Also, are you already sold on the Arista switches? I just deployed a pair of Brocade VDX 6740s in VCS mode and absolutely love them.
 
Yeah, the X520s are definitely SFP+, which is fine for us. The Brocades are what we originally wanted, but we worked out a deal with someone who has the Arista gear, so there is a pretty big cost savings.
 
The Intel X520 line wasn't impacted by the driver bug; I could only reproduce it on the X540s. The X520s use a slightly different driver.
Side note: I have one of the new Netgear 10 Gb copper switches in my home lab, and it can do line rate like any Cisco switch. It's worth a look if cost is an issue and you don't technically need enterprise class with support.

Nicholas Farmer
 