I assume it didn't work on vSwitch1 because the home router is on vSwitch0 (the management network), which is presumably connected to the rest of his home network. By default pfSense sets DHCP on the WAN NIC, so it would be expected to get an IP from the home router. I don't understand why you created an extra switch - there was no need for that. Delete vSwitch2, rename vSwitch1 to "LAB Virtual LAN", and move the two guests on vSwitch2 over to vSwitch1 (that's what I was suggesting to begin with...)
Curious why it didn't work, then - probably some funky vSwitch setting. I guess I'm confused: if the hosts on the physical LAN can't talk to the management network subnet, you can't access/manage the vSphere host; if they can, it isn't isolated. Not sure what you were getting at...
Zarathustra[H];1041886318 said:
I have eliminated the consumer router altogether in favor of pfSense.
Just in case there are any security concerns with vswitches, I have passed one of my server NICs through directly to the pfSense guest and use that as WAN.
Over the years I have used two separate setups for the lan side.
1.) pfSense LAN port -> vSwitch (with guests) -> second server NIC -> external hardware switch for the LAN.
2.) pfSense LAN port passed through directly to the second NIC -> external hardware switch (with home clients) -> back into an internal vSwitch using a third NIC.
#2 gets (marginally) better latency for home clients (virtual NICs and virtual switches add more latency than passed-through NICs and hardware switches). The difference is extremely tiny, though (0.1 to 0.2 ms?). The downside of this approach is that you need to install an additional NIC, as I have yet to encounter a server motherboard with three NICs.
Wow.. just Wow.
1. If there were security concerns with vswitches, there would be many businesses in a lot of trouble. You are not operating a bank from home. Best to use virtual switches where they are meant to be used.
2. What you're doing unnecessarily uses extra hardware and loses features (e.g. vMotion).
3. Each gigabit hop adds about 0.3-0.7 ms, so you're adding more latency than you suggest - test it with hrping (http://www.cfos.de/en/ping/ping.htm). Total latency with your proposed setup would be on the order of 2-3 ms, at a guess.
4. Paravirtualised NICs have very little latency through ESXi - where possible, use vmxnet3 with open-vm-tools/VMware Tools. The long and short of it is that you're talking about a 10 microsecond latency difference between passthrough and paravirtualised (https://www.vmware.com/files/pdf/techpaper/network-io-latency-perf-vsphere5.pdf) - only high-frequency traders should be worried about this.
5. Latency across vswitches between VMs is extremely small
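For what it's worth, point 3 is easy to verify yourself. Here's a rough cross-platform stand-in for hrping that I knocked up (my own sketch, not anything official): it times UDP round trips to an echo socket and reports the median in milliseconds. Here both ends run on localhost purely so it's self-contained; run the echo end on a box across the hop you actually want to measure.

```python
# Rough RTT measurement sketch (a stand-in for hrping).
# Times UDP round trips to an echo socket and reports the median in ms.
# On localhost this only measures stack overhead; put the echo end on
# the far side of the hop (vSwitch, physical switch, pfSense) to test it.
import socket
import statistics
import threading
import time

def udp_echo_server(sock):
    """Echo every datagram back to its sender until told to stop."""
    while True:
        data, addr = sock.recvfrom(2048)
        if data == b"stop":
            return
        sock.sendto(data, addr)

def measure_rtt(target, count=100):
    """Return the median round-trip time to `target` in milliseconds."""
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(1.0)
    samples = []
    for _ in range(count):
        start = time.perf_counter()
        client.sendto(b"ping", target)
        client.recvfrom(2048)
        samples.append((time.perf_counter() - start) * 1000.0)
    client.sendto(b"stop", target)
    client.close()
    return statistics.median(samples)

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))  # OS picks a free port
    target = server.getsockname()
    threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()
    print(f"median RTT: {measure_rtt(target):.3f} ms")
```

Compare the medians with the echo end placed across each hop and you'll see exactly how much each vSwitch or physical switch is costing you.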
The solution is simple:
WAN NIC (vSwitch0) -> pfSense
LAN NIC (vSwitch1) -> pfSense / other VMs / network
You will save 10 microseconds of latency by passing through the WAN NIC, and another 10 microseconds by passing through the LAN NIC - IMO this is within the margin of error.
If you want to isolate your lab from the rest of the network, set up VLANs using pfSense and a VLAN-aware switch.
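To make the VLAN suggestion concrete: isolation works because every frame on a trunk link carries an 802.1Q tag, and the switch only forwards a frame out ports that belong to its VLAN ID. A minimal sketch of the tag format (the VLAN IDs 10 and 20 below are hypothetical, purely for illustration):

```python
# Sketch of the 4-byte 802.1Q VLAN tag: TPID (0x8100) followed by the
# TCI, which packs priority (PCP), drop-eligible (DEI), and a 12-bit
# VLAN ID. Switches forward a tagged frame only to member ports of that
# VLAN ID, which is what keeps lab and home traffic apart.
import struct

TPID = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def vlan_tag(vid, pcp=0, dei=0):
    """Build the 4-byte 802.1Q tag for the given VLAN ID."""
    if not 1 <= vid <= 4094:  # 0 and 4095 are reserved by the standard
        raise ValueError("usable VLAN IDs are 1-4094")
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", TPID, tci)

lab_tag = vlan_tag(10)   # hypothetical lab VLAN
home_tag = vlan_tag(20)  # hypothetical home VLAN
print(lab_tag.hex(), home_tag.hex())
```

With pfSense as the router between them, each VLAN gets its own interface and firewall rules, so lab traffic (VID 10 here) never reaches home clients (VID 20 here) unless you explicitly allow it.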