Network design for home lab environment (DMZ, clients, etc) v2.0

Hey everyone,

I made this thread once before, but some things have changed so I need more help. I have a pretty vanilla setup here; I don't think it's anything unusual. I am looking for "best practices" when dealing with clients, servers, virtual guests, DMZ, etc. Does anyone have any examples (hopefully network diagrams) of how a home lab network should be set up? Thanks!

I have it all working right now, but I'm not happy because I know it's not secure. I don't have a DMZ or VLANs set up, and there is no NAT; everything is connected with static routes. Basically, I have it set up very simply, like this:

PHYSICAL:
Code:
10.0.0.0/24 - ASUS RT-N16 and all wired/wireless devices
10.0.0.1 - Gateway for wired/wireless devices (this is the ASUS)
10.0.0.10 - ESXi vmnic0 (vSwitch0) - Dedicated for VMKernel management
10.0.0.254 - ESXi vmnic1 (vSwitch1) - only the pfSense gateway has access; this is its WAN IP
10.0.0.100-150 - DHCP for clients

INSIDE THE ESXi ALL IN ONE: All network traffic stays virtualized within the box, with the exception of pfSense's WAN
Code:
10.0.1.0/24 - All ESXi guests are connected to vSwitch5 (no physical NIC)
10.0.1.254 - Gateway for all virtual guests (virtualized pfSense 2.0; its WAN port connects to the ASUS above)
10.0.1.100-150 - Server range
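
Since there's no NAT, the ASUS has to know that 10.0.1.0/24 lives behind pfSense's WAN address, or return traffic never finds the virtual subnet. A minimal sketch of that route, assuming shell access on the Tomato firmware (addresses taken from the layout above):

Code:
# On the ASUS RT-N16 (Tomato shell): send the virtual subnet via pfSense's WAN IP
route add -net 10.0.1.0 netmask 255.255.255.0 gw 10.0.0.254

# Confirm the kernel routing table picked it up
route -n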

I have available to me:

  • My ISP's cablemodem w/ single port
  • ASUS RT-N16 Router (4x GigE ports) running Tomato firmware (VLAN capable)
  • Linksys WRT54G Router (4x 100Mbit ports) running Tomato firmware (VLAN capable)
  • 8-port unmanaged gigabit switch
  • Netgear 48-port Managed Switch (old as hell, GUI sucks, don't really want to use it)
  • ESXi/Solaris "All in One" which has:
    - 4 Gigabit NICs
    - 20 virtual guests, 5 of which provide services to the internet and should be in DMZ
  • Multiple client machines/wireless devices in trusted private LAN
 
Despite this being posted in the virtualization forum, it seems to be more about networking, so you'd probably fare better there.
 
I'm not sure I understand the point of the firewall only being for the VMs. The DMZ being behind your home network doesn't make much sense to me.

I'm not familiar with pfSense, but I imagine it is very similar functionally to IPCop, which I do have experience with.

Here is how I run ESXi w/ a virtualized firewall:

Cable modem > NIC in ESXi dedicated to IPCop as WAN ("red" in IPCop).

IPCop then provides a DMZ ("orange") on a virtual NIC/switch to whatever guests I want there (a basic web server is all I'm running at the moment).

IPCop then provides LAN to a dedicated NIC ("green") which connects to a physical switch. That switch has my wireless routers set up as access points, and this is how the rest of the devices in my network connect to the internet (HTPC, laptops, etc.). Connect a virtual switch to this to give any non-DMZ VMs internet/network access.

The management network can go two ways: either connect it virtually to the green zone (the LAN/non-DMZ NIC), or give it its own dedicated NIC connected straight to the physical switch. You have enough NICs that you could just dedicate one to it; I believe that is the preferred route.
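
As a rough sketch of how that layout maps onto ESXi vSwitches (esxcli syntax from ESXi 5.x; the vSwitch names and vmnic assignments here are illustrative, not anyone's actual config):

Code:
# WAN ("red"): dedicated uplink fed straight from the cable modem
esxcli network vswitch standard add --vswitch-name=vSwitch-WAN
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-WAN --uplink-name=vmnic0

# DMZ ("orange"): internal-only vSwitch, no physical uplink at all
esxcli network vswitch standard add --vswitch-name=vSwitch-DMZ

# LAN ("green"): uplink to the physical switch the access points hang off
esxcli network vswitch standard add --vswitch-name=vSwitch-LAN
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-LAN --uplink-name=vmnic1

The IPCop VM then gets one vNIC on each of the three vSwitches and does all the routing between them.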

The only downside to this setup is that if the ESXi machine goes down, all internet traffic goes down with it, so a UPS is highly suggested. If the power went out briefly, the ESXi machine didn't come back up correctly, and I wasn't home, my wife would be a complete wreck not knowing how to get it back up and running.

This is where I guess it could be beneficial to have a wireless router in front of the ESXi machine, but that somewhat defeats the purpose of running a firewall + DMZ on the ESXi machine if your clients are all exposed anyway.

IMO, you need to choose between security and reliability in that case.
 
I'm not sure I understand the point of the firewall only being for the VMs. The DMZ being behind your home network doesn't make much sense to me.

Actually, the firewall is disabled; I'm sorry I used that word, and I will edit my original post to clarify. I am using it purely as a gateway/router, to keep all guest traffic virtualized within the vSwitch. pfSense just routes traffic between the physical and virtual switches.

Here is how I run ESXi w/ a virtualized firewall:

Cable modem > NIC in ESXi dedicated to IPCop as WAN ("red" in IPCop).

I personally would not choose to virtualize my entire network's WAN. Too risky. I chose to use a low-power SoHo network appliance with open source firmware based on m0n0wall. Recovering from a power loss or ESXi failure is easy, since I just connect to its VPN server and use IPMI to get everything back up.
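
For reference, that recovery path over IPMI might look roughly like this (the BMC address and credentials are hypothetical; assumes the board's BMC speaks standard IPMI-over-LAN and ipmitool is installed wherever the VPN lands you):

Code:
# Is the host even powered on?
ipmitool -I lanplus -H 10.0.0.20 -U admin -P secret chassis power status

# Power-cycle a hung ESXi box
ipmitool -I lanplus -H 10.0.0.20 -U admin -P secret chassis power cycle

# Watch the console over Serial-over-LAN while it boots
ipmitool -I lanplus -H 10.0.0.20 -U admin -P secret sol activate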
 
I personally would not choose to virtualize my entire network's WAN. Too risky. I chose to use a low-power SoHo network appliance with open source firmware based on m0n0wall. Recovering from a power loss or ESXi failure is easy, since I just connect to its VPN server and use IPMI to get everything back up.

Why is it risky?
 
If you have a managed switch that can do VLANs, you can just use that instead of dedicating NICs for any specific purpose. I run Untangle in my vSphere cluster as my firewall/NAT/whatever using VLANs with shared NICs; see the sketch below. No need to dedicate.
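
The VLAN approach boils down to tagging port groups instead of burning a physical NIC per zone. A sketch in ESXi 5.x esxcli terms (vSwitch names and VLAN IDs are made up for illustration; the matching switch port has to be a trunk carrying those VLANs):

Code:
# One vSwitch, one trunked uplink to the managed switch
esxcli network vswitch standard add --vswitch-name=vSwitch-Trunk
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-Trunk --uplink-name=vmnic0

# Each zone becomes a tagged port group on the same shared uplink
esxcli network vswitch standard portgroup add --portgroup-name=LAN --vswitch-name=vSwitch-Trunk
esxcli network vswitch standard portgroup set --portgroup-name=LAN --vlan-id=10
esxcli network vswitch standard portgroup add --portgroup-name=DMZ --vswitch-name=vSwitch-Trunk
esxcli network vswitch standard portgroup set --portgroup-name=DMZ --vlan-id=30

The firewall VM gets a vNIC in each port group and routes between the VLANs.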
 
Why is it risky?

Very simple... not only is my ESXi "all in one" far from a production environment, its very purpose is to serve as my test/dev platform. If anything should happen to it, my trusty VPN router will let me remote in and fix it via my server's console redirection. A little $75 router pays for itself after one crash.
 
So, I now understand that I can set up another vSwitch for DMZ. No NIC needed, since all DMZ servers are virtual guests. Can anyone help me understand what firewall rules need to be in place to make this happen? I can't seem to wrap my head around how it's supposed to work.
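
The usual starting policy for a DMZ is: DMZ hosts may reach the internet but never the trusted LAN, the LAN may reach the DMZ, and the WAN only gets pinholes to the published services. pfSense is configured through its GUI rather than a config file, but since it runs pf underneath, the intent can be sketched in pf-style rules (all interface names and addresses below are illustrative assumptions, not anyone's actual config):

Code:
# Illustrative macros - substitute your own interfaces and subnets
lan_if  = "em0"
dmz_if  = "em1"
wan_if  = "em2"
lan_net = "10.0.0.0/24"
dmz_net = "10.0.2.0/24"
web_srv = "10.0.2.10"

# DMZ can reach the internet, but is blocked from the trusted LAN
block in quick on $dmz_if from $dmz_net to $lan_net
pass  in on $dmz_if from $dmz_net to any

# Trusted LAN can reach the DMZ (and everything else)
pass  in on $lan_if from $lan_net to any

# WAN: only the published services are reachable, e.g. a DMZ web server
pass  in on $wan_if proto tcp from any to $web_srv port { 80, 443 }

In pfSense's GUI these become per-interface rules on the DMZ, LAN, and WAN tabs, evaluated first-match from the top.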
 
Very simple... not only is my ESXi "all in one" far from a production environment, its very purpose is to serve as my test/dev platform. If anything should happen to it, my trusty VPN router will let me remote in and fix it via my server's console redirection. A little $75 router pays for itself after one crash.

Gotcha. Risky for your config. :) Makes sense. I have 3 nodes in my vSphere cluster so if one host dies my Untangle VM will just restart on another automatically.
 
Gotcha. Risky for your config. :) Makes sense. I have 3 nodes in my vSphere cluster so if one host dies my Untangle VM will just restart on another automatically.

Did you say you have 3 ESXi nodes in a vSphere cluster? Is that free??
 