10Gb between 2 ESXi Hosts

jad0083

Good Day,
I want to set up a cheap 10Gb direct connection between two ESXi hosts (5.0 and 5.5u2). As I understand it, I set up a vSwitch on each host, assign one of the 10Gb ports to that switch, and then connect the ports directly. I'm planning to run two Brocade 1020 CNA cards off of fleabay and connect the two cards directly (without a switch); the question is, how do I connect them?
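
For reference, a minimal sketch of the per-host vSwitch side of this with esxcli (the switch/portgroup names and vmnic4 are just placeholders; use whatever the 10Gb port shows up as):

# dedicated standard vSwitch with the 10Gb port as its only uplink
esxcli network vswitch standard add --vswitch-name=vSwitch10G
esxcli network vswitch standard uplink add --vswitch-name=vSwitch10G --uplink-name=vmnic4

# portgroup for the VMs that should use the 10Gb link
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch10G --portgroup-name=10G-Net

# jumbo frames on the vSwitch (set the same MTU inside the guests)
esxcli network vswitch standard set --vswitch-name=vSwitch10G --mtu=9000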

Would these work?

http://www.ebay.com/itm/NEW-Genuine...194?pt=LH_DefaultDomain_0&hash=item259fbbe142

Or is it better to get 2 SFP+ transceivers with OM3 fiber?

2x - http://www.ebay.com/itm/Genuine-Bro...494?pt=LH_DefaultDomain_0&hash=item259ab17aae
1x - http://www.ebay.com/itm/15M-LC-LC-D...548?pt=LH_DefaultDomain_0&hash=item5b005f6c2c

I was looking around, and this is pretty much what I could scour up on the net. I'm new to 10Gb and FCoE/SFP+, so I wanted to run it by you experts first :)
 
No point going with fiber. The Brocade cable will work just fine.
 
Alright, I ended up getting one of those active FCoE SFP+ to SFP+ cables; I'll update once I get it.
 
Just a quick update; I've installed the two BR-1020s and I'm using a DAC cable:

[Image: iperf results showing roughly 9.3-9.4 Gbit/s]


This is between a Solaris VM and a Debian VM on different hosts, just using a quick single-threaded test. MTU 9000 throughout the vSwitches and VMs.
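
For anyone who wants to repeat the test, this is roughly what a single-stream iperf run plus a jumbo-frame check looks like (the IP address is a placeholder):

# on the receiving VM
iperf -s

# on the sending VM: one TCP stream for 30 seconds
iperf -c 10.10.10.2 -t 30

# from the Debian VM: confirm jumbo frames survive end to end
# (8972 bytes payload + 28 bytes IP/ICMP headers = 9000)
ping -M do -s 8972 10.10.10.2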

Also waiting on 10Gb transceivers and OM3 fibers from Fiberstore; once they get here, I'll test and see if there's any difference performance-wise.

Note:
I've also updated both hosts to ESXi 6.0 and used the latest QLogic networking drivers for the HBAs.
10Gb goodness for < $200 :)
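
If anyone wants to double-check which driver/firmware the CNAs ended up with, the host can tell you (vmnic4 is a placeholder):

# list NICs with their driver, link state and speed
esxcli network nic list

# driver and firmware details for the 10Gb port
esxcli network nic get -n vmnic4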
 
I can't see your pics to see what you've got going on. If it is a performance issue, the question would be: what do the disks at each end bench at? 10GbE can carry more than 1200MB/s.

I am using some Emulex adapters with a DAC cable as well. Typically I am seeing 200-600MB/s, depending on where it is reading from / writing to.
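
A rough way to bench the disks at each end on the Linux side (the path and size are placeholders; on ZFS with compression enabled, /dev/zero will give inflated numbers):

# sequential write, bypassing the page cache (GNU dd, e.g. on the Debian VM)
dd if=/dev/zero of=/tank/testfile bs=1M count=8192 oflag=direct

# sequential read back
dd if=/tank/testfile of=/dev/null bs=1M iflag=direct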
 
Nah, he's pretty happy. Images show iperf results of 9.27-9.42Gbps.
 
Transceivers and OM3 cable arrived. Throughput's pretty much the same. Here's a real-world SMB transfer between my Windows box and one of the hosts via the new transceivers, in case anyone is curious:
[Image: SMB file-transfer screenshot]
 
Yeah, SMB sucks. It is only able to use one CPU core for calculations. If it could use all available cores and the storage subsystem could handle the throughput, I bet it would max out the link.

I use Intel SR cards and a Cisco 4948 top-of-rack switch for 10Gb in my home, and through Windows I get around 275-350MiB/s throughput. iperf shows 100% utilization at around 9-point-something Gbit/s.

I'm using a ZFS (FreeNAS) server with 8 WD Reds in RAIDZ2, and internal testing shows well in excess of 600MiB/s transfer rates. My Windows 7 box is using a 250GB Samsung Pro SSD.
 
You need to tweak Samba quite a bit to get 400+.
I pretty much max out 1Gbit on my old C2D box using ZFS and GZIP-4 with about 40% CPU load, so a beefier machine (CPU) and Samba 4.1+ should pull it off quite nicely.

@ tangoseal
You aren't running ZFS 9; you have ZFS v5 and most likely FreeBSD 9.2 or 9.3.

...

You probably need something like this to get faster speeds... (or possibly change 64240 to 128480)
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=64240 SO_SNDBUF=64240
server max protocol = SMB3

and in /etc/sysctl.conf

# default TCP send/receive buffer sizes
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
# upper limits for TCP buffer auto-tuning
net.inet.tcp.sendbuf_max=4194304
net.inet.tcp.recvbuf_max=4194304
# local (Unix-domain) socket buffer sizes
net.local.stream.recvspace=65536
net.local.stream.sendspace=65536
# kernel socket buffer / connection queue / mbuf cluster limits
kern.ipc.maxsockbuf=16777216
kern.ipc.somaxconn=8192
kern.ipc.nmbclusters=262144
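
A quick way to sanity-check the above after editing (generic FreeBSD/Samba commands, adjust for how FreeNAS manages its config):

# verify smb.conf parses cleanly
testparm -s

# apply a sysctl value by hand (or reboot to pick up /etc/sysctl.conf)
sysctl kern.ipc.maxsockbuf=16777216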

//Danne
 
I use Intel SR cards and a Cisco 4948 top-of-rack switch for 10Gb in my home ...

You have a 4948 in your house?

:D
 
@ tangoseal
You aren't running ZFS 9; you have ZFS v5 and most likely FreeBSD 9.2 or 9.3.

The 9 was a keystroke error I didn't catch.

FreeNAS 9.3 (not FreeBSD), even though it is BSD-based, is running ZFS v28 I believe. I can check later, but I don't have time right now to do so.
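
For whoever wants to check, something like this should do it from a shell (the pool name is a placeholder):

# pool on-disk format version (shows '-' on a feature-flags pool, i.e. v5000)
zpool get version tank

# ZFS filesystem version of the dataset
zfs get version tank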
 
You have a 4948 in your house?

:D

Yes, but not the E series. It is the 4948 server switch, and it's absolutely overkill in every way imaginable.

I got a great deal on the switch from an upgrade pull, so I bought it and never looked back.

It is absolutely unstoppable speed, period! A home, a small shop, or hell, even a multi-thousand-user enterprise couldn't peg this switch as a top-of-rack server switch.
 