VLANs and Hyper-V

I'm helping a colleague set up a colocated server running Windows Server 2012 Datacenter with Hyper-V, to be used in a multi-tenant environment. My colleague's company developed an in-house application that their customers use, each customer with their own VM on the Hyper-V server. We need to isolate each VM as much as possible network-wise.
Currently there are 3 customers, with plans to expand rapidly.

Here's the current setup:

Datacenter rack -> Fortigate firewall (managed by the datacenter) with 3 public IPs available -> RFC1918 addressing -> D-Link switch (supports up to 32 VLANs) -> Intel server with 4 physical NICs.
Server NICs #1-4 connect to switch ports #1-4.
Firewall LAN port #1 connects to switch port #16.
More public IPs are available as needed.
Customers will connect by web or RDP or both, depending on their needs.

So if I understand this correctly:
1. Each guest VM will be put onto its own VLAN by going into Hyper-V Manager, editing the guest VM's network adapter settings, and setting the VLAN ID (customer AAAA VLAN 100, customer BBBB VLAN 101, customer CCCC VLAN 102, etc.). See the PowerShell sketch after this list.
2. All customers run off the same virtual switch, or, if bandwidth becomes a concern, split some of them onto a 2nd virtual switch running off the 2nd physical NIC.
3. Add VLAN 50 (any number, just an example) to the virtual switch's management OS adapter in Hyper-V, for hypervisor management access.
4. On the D-Link switch, where physical NIC #1 connects to switch port #1, enable trunking and VLAN membership 50,100,101,102.
5. Ask the datacenter to add VLAN tagging to their firewall's LAN port #1.
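
If it helps, here is roughly what steps 1-3 look like in PowerShell instead of clicking through Hyper-V Manager. This is just a sketch: the switch, NIC, and VM names are placeholders I made up, and it assumes the guest adapters are already connected to the virtual switch.

Code:
# Create one external virtual switch bound to physical NIC #1
New-VMSwitch -Name "TenantSwitch" -NetAdapterName "NIC1" -AllowManagementOS $false

# Step 1: put each guest VM's adapter into access mode on its own VLAN
Set-VMNetworkAdapterVlan -VMName "CustomerAAAA" -Access -VlanId 100
Set-VMNetworkAdapterVlan -VMName "CustomerBBBB" -Access -VlanId 101
Set-VMNetworkAdapterVlan -VMName "CustomerCCCC" -Access -VlanId 102

# Step 3: add a management OS adapter on the same switch, tagged VLAN 50
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "TenantSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 50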

Not sure how to handle untagged traffic. There shouldn't really be any, if I understand this correctly. My colleague will probably dump the D-Link switch before the VLANs are all used up; I recommended he get something from Cisco or HP.

Am I missing something else?
 
Is each static IP being PAT'd to each subnet, then, or is it a 1:1 NAT straight to the server?

What I currently do at home:

- 4 NICs in an LACP bundle to the switch. This port channel is also a trunk.
- Each VM gets assigned to the appropriate VLAN.
- The firewall does inter-VLAN routing with firewall rules and handles the NAT.

Theoretically there shouldn't be any untagged traffic, as the virtual switch should tag egress traffic from the VMs and trunk it to the firewall. The firewall would do the same. I would still assign the native VLAN on the switch to a dummy VLAN, though.
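
On the Hyper-V side you can at least sanity-check that nothing is left untagged. A quick look, assuming the Hyper-V PowerShell module is loaded:

Code:
# Any adapter showing "Untagged" here would put frames onto the
# trunk without a VLAN tag
Get-VMNetworkAdapterVlan -VMName *
Get-VMNetworkAdapterVlan -ManagementOS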
 
Well, first of all, you should consider doing NIC teaming. If your D-Link supports it, use LACP; otherwise use a switch-independent active/active mode.

This will save you from trying to spread your VLANs across your NICs. It also makes managing the Hyper-V switch stuff much easier and provides fault tolerance in case a NIC dies or you need to unplug a cable or something. See the sketch below.
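
On Server 2012 that's a couple of PowerShell lines. A minimal sketch, reusing the placeholder names from earlier; whether you can use LACP depends on what the switch side is set up for:

Code:
# Team all four NICs. Use -TeamingMode Lacp if the switch ports are
# configured as an LACP port channel; SwitchIndependent needs no
# switch-side configuration
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Bind the virtual switch to the team instead of a single NIC
New-VMSwitch -Name "TenantSwitch" -NetAdapterName "Team1" -AllowManagementOS $false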

As /usr/home said, make sure your switch isn't doing any inter-VLAN routing, as you want that traffic to have to pass through the firewall.

The rest of your understanding is good. Each VM gets its own VLAN.
 
Right now each static IP is just being PAT'd to the NAT IP of each guest VM. The guest VMs only need a couple of ports open to the outside, hence the NAT. Right now all of the guests share a single RFC1918 subnet (i.e. 192.168.115.0/24). Each guest VM is assigned (on paper) its own public IP address. Right now it's just a /29, but the datacenter has more IPs available with proof of usage.

Once I get to VLANs, will each guest VM be put onto its own RFC1918 subnet? (e.g. guest VM #1 gets 192.168.116.0/24, guest VM #2 gets 192.168.117.0/24, etc.)

I thought about doing NIC teaming, but I read somewhere online that it was not recommended (can't find the article at the moment). I also wanted to keep this simple, as by our estimates most of the guest VMs will see low traffic: just RDP and web traffic from employees of my colleague's clients.

The switch currently being used is a D-Link DGS-1100-16, a 16-port "smart switch". It does support link aggregation, which is what D-Link calls LACP/NIC teaming, if I understand all of the marketing terms correctly. The switch only supports 32 VLANs, however. But if the firewall will be doing all of the VLAN routing, great. I'm not sure what model Fortigate the datacenter provided to my colleague; the datacenter manages it for them. I kind of wish my colleague had their own firewall, as it would make changes a lot easier.
 
You could also use the public IPs right on the VMs, as long as you firewall everything properly.

Personally, I'd get a used Cisco 3560 if all you need is 10/100 and replace that D-Stink.
 
I would love to see an article that says NIC teaming is not recommended, heh... I'll show you a billion that say it is.

And yes, each VLAN should be its own subnet... though you don't have to give them full /24s.
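
For example (illustrative numbers only), you could carve a single /24 into /28s and get 14 usable addresses per customer, which is plenty for a couple of VMs each:

Code:
VLAN 100  customer AAAA  192.168.116.0/28   (hosts .1 - .14)
VLAN 101  customer BBBB  192.168.116.16/28  (hosts .17 - .30)
VLAN 102  customer CCCC  192.168.116.32/28  (hosts .33 - .46)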

Even if the firewall supports more than 32 VLANs, it won't matter if your switch only supports 32... you're limited to 32 either way.
 
I thought about using public IPs directly on the VMs. However, some of the potential customers may need more than one guest VM for a hosted AD DC, Exchange, etc., so I think sticking with RFC1918 private LANs and keeping it a homogeneous network is probably the best approach administratively.

I cannot find that article, but I could have sworn it recommended against teaming. Maybe the article was old. Oh well, no big deal.

The switch, as I already mentioned to my colleague and in this thread, will be replaced at some point, definitely with a higher-end unit like a Cisco or HP ProCurve.
 