Nested VM options

feistyfish

n00b
Joined
Oct 25, 2014
Messages
13
I'm starting a home lab club at my university. Many people are showing interest but don't have a spare computer they can dedicate to running a host. To remedy this, I'm wondering if I can set up a host that can run virtual hosts. The reason for this is that I want people to be able to manage an entire lab environment including their own hosts, not just guest administration.

So the question I have is:

Do ESXi, Hyper-V, or XenServer allow for this type of use case?

Ideally I would want a platform that allows any of the hypervisors as guests, i.e. if ESXi will let you run ESXi, Hyper-V, and Xen hosts that can in turn function as proper hosts for guests.

I've been looking at Hyper-V, as I have the most experience with it, but it seems like it will let you install a nested host while any of that host's guests will crash.

Does anyone have any experience or knowledge with this stuff?

Hyper-V was looking promising, but it seems to have a problem with nested Xen scenarios.
 
Yes, you can run VMware inside of VMware without much of an issue. Never did it with Xen.
 
I've been doing nested hypervisors in my home lab for over a year now. I used to have a physical ESXi 6.0U2 host with two nested Hyper-V 2012R2 hosts. This worked fairly well, except that Linux guests would randomly kernel panic on the Hyper-V side, and special instruction set extensions were not stable (e.g. AVX, SSE).
VMware officially considers nested virtualization experimental even in 6.5, but since the release of Windows Server 2016, Microsoft fully supports nested virtualization in Hyper-V.

Since 2016 launched, I've migrated everything over to it and now have a physical 2016 Nano Server host with a nested 2016 Hyper-V Core hypervisor. It has been running for about a month now without a hitch. It takes a bit of PowerShell to get it all running, but it actually works very well. I would definitely recommend 2016 Hyper-V for the nested lab.

As for Xen, Citrix XenServer is out, but Xen Project does have experimental support as of version 4.5, I believe. Still, this would NOT be a simple setup at all. I would recommend just going with Hyper-V.

In order to set up a nested hypervisor guest with Hyper-V, you will need to do the following:
1. Create the VM (gen 1 or 2 doesn't matter). Be sure to include a NIC.
2. Edit the VM configuration and, under the advanced features of the NIC, enable MAC address spoofing.
3. Open a PowerShell session on the physical host.
4. Run the following command to expose virtualization extensions to the VM: Set-VMProcessor -VMName <Name of guest> -ExposeVirtualizationExtensions $true
5. (Optional) If you want to do VLAN trunking to the guest hypervisor, also run: Set-VMNetworkAdapterVlan -VMName <Name of Guest> -Trunk -NativeVlanId <native vlan> -AllowedVlanIdList <allowed vlans>

From here, you can boot up the VM and install the OS.
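For reference, the steps above can be collected into one short PowerShell sketch. The VM name, switch name, memory size, and VLAN IDs are placeholders for your own lab; Set-VMNetworkAdapter covers the MAC-spoofing step without touching the GUI:

```powershell
# Assumes the Hyper-V role and its PowerShell module on the physical host.
# "LabHost01", "LabSwitch", and the VLAN IDs are placeholders - adjust to taste.
$vmName = "LabHost01"

# 1. Create the VM with a NIC attached to an existing virtual switch
New-VM -Name $vmName -Generation 2 -MemoryStartupBytes 8GB -SwitchName "LabSwitch"

# Nested VMs can't use dynamic memory, so pin it to a fixed allocation
Set-VMMemory -VMName $vmName -DynamicMemoryEnabled $false

# 2. Allow MAC spoofing so the nested hypervisor's own guests can reach the network
Set-VMNetworkAdapter -VMName $vmName -MacAddressSpoofing On

# 3/4. Expose the virtualization extensions to the VM
Set-VMProcessor -VMName $vmName -ExposeVirtualizationExtensions $true

# 5. (Optional) Trunk VLANs through to the guest hypervisor
Set-VMNetworkAdapterVlan -VMName $vmName -Trunk -NativeVlanId 10 -AllowedVlanIdList "10,20,30"
```

Run it elevated on the physical host; after it completes, boot the VM and install the guest hypervisor as usual.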

ALSO, note that your processor MUST have VT-d extensions and that they are turned on in the BIOS. If it doesn't, this obviously won't work. Windows Server 2016 will not even let you install the Hyper-V role if these extensions aren't present, so you'll find out pretty quickly. Any Intel processor Nehalem or newer (Xeon X5600 chips... any E4 or E5 will be good) will be fine.
 

vt-d? why would you need that for nested virtualization? did you mean vt-x?
 

what do you use nested virtualization for? lab use only or is there any production scenario already?
 
vt-d? why would you need that for nested virtualization? did you mean vt-x?

You are correct. I had a brain fart at the time and defaulted to VT-d; VT-x is what I meant. Still, Server 2016 requires VT-x to even install the Hyper-V role, so a lot of very old hardware will be stuck on 2012R2 for Hyper-V.
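If you're not sure whether a given box qualifies, you can check before attempting the role install. The Win32_Processor WMI class exposes the relevant CPU flags, so a quick PowerShell query (a sketch; run it before the Hyper-V role is installed, since afterwards the hypervisor owns the extensions and the parent OS reports them as absent) shows whether VT-x and SLAT are present and enabled:

```powershell
# Check the host CPU's virtualization support before installing Hyper-V.
# VMMonitorModeExtensions                 -> VT-x / AMD-V present
# VirtualizationFirmwareEnabled           -> enabled in BIOS/UEFI
# SecondLevelAddressTranslationExtensions -> SLAT (EPT/RVI), required by Hyper-V
Get-CimInstance Win32_Processor |
    Select-Object Name,
                  VMMonitorModeExtensions,
                  VirtualizationFirmwareEnabled,
                  SecondLevelAddressTranslationExtensions
```

All three need to come back True for the Hyper-V role to install; `systeminfo` reports the same information under its "Hyper-V Requirements" section.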

what do you use nested virtualization for? lab use only or is there any production scenario already?

I actually use it as a mix of work and personal. I own an IT consulting firm with a few partners, and my personal servers host a semi-production setup for AD, SCCM, and VDI. I abstracted the physical servers to the first layer of virtualization, so layer 1 effectively holds the servers that would traditionally be physical:
Layer 1 (simulating the physical servers and devices):
--Personal hypervisor
--Company hypervisor
--Personal AD domain controller
--Company AD domain controller
--Virtual firewalls/routers
Layer 2 (exists on the layer 1 hypervisors):
--All other would be servers or VDI

I have three physical HP DL380 G7s, each with 72GB of RAM, running 2016 Nano Server (joined to my personal domain). I used to have an EqualLogic SAN that I used with iSCSI for storage, but have since moved to Storage Spaces Direct (Microsoft's take on VMware vSAN).
On each physical host, I have two virtual hypervisors, one for personal and one for company. The physical hosts are also clustered for the layer 1 AD servers and firewalls, but the virtual hypervisors are not HA protected since they form their own clusters. I thankfully have an old Cisco 3750 switch that ties this all together so I can maintain VLANs and VRFs, which means the personal and company resources have fully isolated networking all the way to the public internet.

Being that the virtual hosts are clustered, they don't need any HA at layer 1. Dynamic memory and live migration of layer 2 VMs works great too. I've yet to see any issues with the nested virtualization on 2016. On the roadmap, I've been intending to implement a network controller and shielded VMs, which would effectively make my on site compute run the same as it does in Azure.

In practice, I only see two standard implementations for nested hypervisors:
1. Fully controlled private clouds. If you want to have systems hosted in a private cloud but want full control of even the hypervisors. This scenario would be rare, but it could greatly simplify billing and security by taking over a significant portion of the infrastructure internally.
2. Training labs. This is the ideal situation. You can give someone a complete sandbox at extremely low cost. Want to train someone on how to build a cluster? All you need are two VMs. I can imagine this really taking off in some schools, where you build an environment as you progress through courses and maintain your own personally built environment from start to finish.

There are other fringe cases, but those are at least the two that may become common in the industry. At least in my case, it allows me to share physical resources across two separate and isolated environments.
 
Well, 1 is a bit paranoid, but hey, "only the paranoid survive" (c) A. Grove

:)

2 is the most common case indeed!
 
Yes, you can do a nested VM. But when I did my certification, I ran ESXi on bare metal on my W520 laptop. It's been happily running for 5 years, no reboot needed. It maxes out at 32GB of RAM and has 3 drives in the laptop, but you can attach as many as you want: it has eSATA on the laptop, eSATA on the dock, and a ton of USB 3.0 ports (which work since you can pass USB ports through to the VM), plus the ExpressCard slot, and the CPU and motherboard support VT-d. It also has a built-in UPS in the form of the extended battery: when the power goes out, my ESXi host keeps running happily for 7 hours.
 