10gb without switch for ESXi?

farscapesg1

2[H]4U
Joined
Aug 4, 2004
Messages
2,648
I don't have the money for a 10gb switch right now, so I was wondering if it's possible (though I'm sure it's not officially supported) to use 10gb for vmotion between 2 ESXi hosts and one storage server without a switch?

I've been using iSCSI via 4gb fiber in my old setup for a couple of years (with sync=disabled), but I needed to rebuild the storage server and thought I would try moving to NFS over 10gb since I already had 2 X520-DA2 cards and cables. So I ordered an X520-DA1 for the second host, with the idea that I could put one dual-port card in the storage server, one in my primary host, and the DA1 in the secondary host, and just directly attach both hosts to the storage.

During my testing I configured one 10gb port on the storage server with an IP address, set up a standard vSwitch with a vmkernel port on the primary host (for vmotion only) with an IP address... and it worked. However, I'm confused about how to handle the second host. Do I just need to assign a separate IP to the second storage port and configure a vswitch on the second host? If the NFS storage for each host is named the same, will that allow vmotion between hosts... as well as storage vmotion between the SAN storage and local host storage?
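For what it's worth, the second-host side of that setup would only be a handful of esxcli commands. This is just a sketch — the vSwitch name, vmnic number, portgroup name, IPs, and NFS export path below are all made up, and it assumes the second storage port sits on its own point-to-point subnet:

```shell
# On the second ESXi host (all names and addresses hypothetical).
# Create a vSwitch with the 10gb NIC as its only uplink:
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2

# Add a port group and a vmkernel interface on the point-to-point subnet:
esxcli network vswitch standard portgroup add -v vSwitch1 -p Storage10G
esxcli network ip interface add -i vmk1 -p Storage10G
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.10.2.11 -N 255.255.255.0

# Tag the interface for vMotion traffic:
esxcli network ip interface tag add -i vmk1 -t VMotion

# Mount the NFS export (second storage port assumed to be 10.10.2.10),
# giving the datastore the same label as on the first host:
esxcli storage nfs add -H 10.10.2.10 -s /tank/vmstore -v datastore-nfs
```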

Or... am I wasting my time in a home environment without a 10gb switch, and should I just stick with iSCSI over fiber, add a SLOG, set sync=always and be done? It's not like I "need" better performance for my limited use (Plex server, Torrent/Usenet, Unifi controller, WSE server, and 3-4 testing VMs). I was looking at NFS mainly for easier management (i.e., snapshots and simpler backup options).
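For reference, the iSCSI fallback described above is only a couple of ZFS commands on the storage side (pool, dataset, and device names here are made up):

```shell
# Hypothetical pool/dataset/device names.
# Add a SLOG to the pool (mirrored, so a failed log device can't lose in-flight writes):
zpool add tank log mirror c2t0d0 c2t1d0

# Force synchronous write semantics on the dataset backing the iSCSI LUN:
zfs set sync=always tank/iscsi
```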
 
Hmm, so would I bridge the 10gb ports with one of the 1gb ports... or just bridge the two 10gb ports together?
 
You can bridge some or all of them.
The only problem to be aware of: if you assign an IP to one link and that link goes down, you can no longer reach that IP over the bridge.
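If the storage box runs an illumos/Solaris-based OS (the thread doesn't say which OS it is, and the link and bridge names below are hypothetical), the bridge could be set up roughly like this:

```shell
# Bridge both 10gb ports so the two ESXi hosts share one L2 segment:
dladm create-bridge -l ixgbe0 -l ixgbe1 stor0

# The IP still lives on one member link, e.g.:
ipadm create-if ixgbe0
ipadm create-addr -T static -a 10.10.1.10/24 ixgbe0/v4
# Caveat noted above: if ixgbe0's link goes down, this IP becomes
# unreachable even though the bridge and the other link stay up.
```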
 