Hyper-V vs VMware

***DISCLAIMER***
This thread is not intended to start a religious war over Hyper-V vs VMware! I recently held a session at a technical conference intended to outline the actual differences between Hyper-V and VMware that aren't found in the marketing fluff, blogs, and FUD out there on the internet. A lot of people are genuinely interested in both products, and my goal is to give a fair, objective, and balanced look at the key differentiators between the two products and what matters to IT and the businesses they support.

For example --

  • Did you know you can hot-add a SCSI controller in VMware but not Hyper-V?
  • That both offer a Site Recovery Manager product but VMware's is installed on site and Hyper-V's must live in Azure?
  • Hyper-V can automatically migrate a running VM if the virtual switch the VM is on loses network connectivity but the same switch is up on a different host?
  • How is Hyper-V's Dynamic Memory different than VMware's memory strategy? When should it be used?
  • VMware offers hyper-converged options like VSAN but Microsoft does not believe in hyper-convergence?
  • VMware can convert a virtual disk between thick and thin provisioning while the VM is running but Hyper-V cannot?

We'll even eventually talk about price, and I don't mean the "Hyper-V is free!!!!" line you may read on blogs. Hyper-V's licensing model is less expensive, but what about other costs? Are the two private cloud suites of products really apples to apples? What about the cost of migrating to Hyper-V? What about support? Personnel costs to manage it? Just because the licenses are cheaper doesn't mean the solution is.....

I'll build out this thread over time and we'll hopefully get more and more discussion on what I have to say. I'll begin with how both products are the same and then move into the differences with regard to compute, high availability, memory, networking, storage, administration, and private cloud.

More content coming soon and I expect this thread to be a living, breathing document as we (yes, WE! I am by no means the authority on both products and fully expect to be corrected on points as we discuss) add more information over time....

***ADDITIONAL DISCLAIMER***

I work for an IT VAR that partners with both VMware and Microsoft. I have worked with both products as a pre-sales architect and post-sales engineer. I hold both VMware and Microsoft certifications -- VCP 3/4/5, VCAP5-DCA, VCAP5-DCD, MCSA 2012, and MCSE Private Cloud.

EDIT - 12/10/2014

These comparisons will all be between vSphere 5.5 and Hyper-V 2012 R2. While both are expecting new major releases next year, I don't think comparing what's coming (even for those of us with inside knowledge) is going to give us a true apples to apples comparison so I want to stick with what's already been released. Next year when the new products are released I'll come back and add to the thread.
 
COMPUTE

VMware CPU Requirements

ESXi 5.5 will install and run only on servers with 64-bit x86 CPUs.
ESXi 5.5 requires a host machine with at least two cores.
ESXi 5.5 supports only LAHF and SAHF CPU instructions.
ESXi 5.5 requires the NX/XD bit to be enabled for the CPU in the BIOS.
To support 64-bit virtual machines, support for hardware virtualization (Intel VT-x or AMD RVI) must be enabled on x64 CPUs.

https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.install.doc/GUID-DEB8086A-306B-4239-BF76-E354679202FC.html

Hyper-V CPU Requirements

Minimum: A 1.4 GHz 64-bit processor with hardware-assisted virtualization. This is available in processors that include a virtualization option—specifically, processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) technology.
A NX bit-compatible CPU must be available and Hardware Data Execution Prevention (DEP) must be enabled.
A CPU with second-level address translation support (SLAT) is required for client Hyper-V in Windows 8 and 8.1.

http://technet.microsoft.com/en-us/library/jj647784

How They're the Same

Up to 320 Logical CPUs per host
Up to 64 Virtual CPUs per guest
NUMA topology can be exposed to guest
Live Migrate VMs to hosts with different CPU generation but not different CPU vendor
-VMware EVC, Hyper-V CPU Compatibility
CPU reservations, limits, and weight per guest

How They're Different

CPU Hot Plug

-VMware can hot add CPUs to a running VM. This capability can be enabled on a per VM basis. Hyper-V does not have CPU hot plug, but one could start a VM with a higher number of vCPUs and use limits to artificially neuter the vCPU count, then raise those limits while the VM is running. I wouldn't recommend this because it'd be a nightmare to maintain and manage. Just bounce the VM and give it more vCPUs. :p
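
If you do take the "bounce the VM" route, the change is a few lines of PowerShell on the Hyper-V side (the VM name below is hypothetical):

```powershell
# Hyper-V has no CPU hot plug, so stop the VM, change the vCPU count, and start it again
Stop-VM -Name "App01"
Set-VMProcessor -VMName "App01" -Count 8   # also accepts -Reserve, -Maximum, and -RelativeWeight
Start-VM -Name "App01"
```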

Latency Sensitivity

-VMware introduced a feature in 5.5 called Latency Sensitivity which essentially is a means of telling the CPU Scheduler you want a VM's vCPUs to have exclusive access to physical CPU cores and to bypass the virtualization layer to reduce CPU latency and jitter. Applications that are sensitive to latency can benefit from this feature. Hyper-V has no such feature, just the ability to reserve CPU resources for a VM.

http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf Page 5

CPU Affinity

-In VMware you can manually assign a vCPU to a pCPU. Cisco Unified Communications actually requires this to ensure there is no resource contention for the application. Can't do it in Hyper-V.

CPU Scheduling

-VMware's CPU Scheduler is built on a legacy Unix scheduler, while Hyper-V schedules vCPUs based on Windows thread management just as it does for any other multithreaded application. VMware's CPU Scheduler prefers to keep a multi-vCPU guest's vCPUs in lock step if possible, but in newer versions of VMware, Relaxed Co-Scheduling has loosened the lock-step requirement and multiple vCPUs no longer need to wait for exactly the same number of pCPUs to become available before processing can occur.

-CPU Ready time is an important metric to measure in VMware. Keeping tabs on CPU Ready is critical, and the more vCPUs you add to a host, the higher CPU Ready becomes, which means all VMs should start with the fewest vCPUs possible and only be increased if needed. The general rule of thumb is to keep CPU Ready below 10% (as measured in esxtop) or 2,000 ms (as measured from the VMware client real time), but you should really keep those numbers less than half that. In Hyper-V, due to the different means of scheduling vCPUs, you can be more liberal with assigning vCPUs. In fact, Microsoft used to recommend a maximum vCPU/pCPU ratio of 8:1, then 10:1, but has now thrown it out completely. Watch VM performance and adjust as needed. To monitor "CPU Ready" in Hyper-V, open Performance Monitor on the host and check "Hyper-V Hypervisor Virtual Processor - CPU Wait Time per Dispatch" and select a vCPU. Bear in mind this is measuring CPU Ready in nanoseconds, so a reading of 10,000 means 0.010 ms. Performance Monitor is also reporting the time in 1 second increments while VMware is reporting a summation over 20 seconds. This means when Performance Monitor reports 50,000 ns of CPU Wait Time per Dispatch, that's roughly the same as VMware reporting 1 ms using Real Time in the VMware client.
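
A quick way to sample that counter without opening Performance Monitor is Get-Counter; the comments show the rough conversion described above:

```powershell
# Sample every vCPU's wait time (reported in nanoseconds) once per second, five times
Get-Counter -Counter "\Hyper-V Hypervisor Virtual Processor(*)\CPU Wait Time Per Dispatch" `
    -SampleInterval 1 -MaxSamples 5

# Rough comparison to VMware's Real Time chart (20-second summation):
#   value_in_ns * 20 / 1,000,000 = approximate ms of CPU Ready per 20-second interval
#   e.g. 50,000 ns -> ~1 ms
```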

This means for general workloads you should be able to attain higher vCPU counts per VM and per host without suffering the same level of negative effects as you would in VMware. In my lab, I have a VM with 8 vCPU assigned (the host has a single 8 core pCPU) and even when the other host is in Maintenance Mode and all my VMs are running on a single host (all 25 of them), the "CPU Ready" time of my 8 vCPU VM only averages around 1 ms per vCPU in Perf Mon, which would be the same as 20ms in the VMware client Real Time graph for CPU Ready. 20ms per vCPU of CPU Ready time is extremely low for an 8 vCPU VMware VM. But bear in mind under most workloads in a properly architected environment you'll run out of RAM, disk, or network bandwidth long before CPU becomes an issue.

http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-resource-management-guide.pdf
http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-CPU-Sched-Perf.pdf
http://www.virtuallycloud9.com/index.php/2013/08/virtual-processor-scheduling-how-vmware-and-microsoft-hypervisors-work-at-the-cpu-level/
http://www.altaro.com/hyper-v/hyper-v-virtual-cpus-explained/

vNUMA

-Both hypervisors can expose the underlying NUMA topology to the guest OS. In VMware, this is disabled by default until you either enable it manually or assign 9+ vCPU to a VM. Hyper-V exposes NUMA by default and can be disabled on a per host basis. Also, if you enable CPU hot plug in VMware, NUMA exposure is disabled and the same is true if you use Dynamic Memory in Hyper-V.

http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf Page 44

vCPUs

-VMware gives you the option to assign vCPUs as either virtual sockets or virtual CPU cores to the guest OS. This can be useful when an operating system or application has a limit of so many CPU sockets and much more useful when an application is licensed on a per socket basis. Hyper-V vCPUs are assigned as virtual cores only, but Windows 2012 or newer guests recognize them as virtual processors.
 
HIGH AVAILABILITY

VMware HA Requirements

vSphere High Availability requires 2-32 ESXi hosts to be clustered together
- Clustering the hosts requires vCenter server
- High Availability is licensed and requires Essentials Plus licensing or higher
- High Availability requires shared storage (FC or iSCSI shared block storage or NFS shared file storage)
- Persistent IP addresses for hosts (either static IP or DHCP reservations)

https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.avail.doc/GUID-BA85FEC4-A37C-45BA-938D-37B309010D93.html

Hyper-V HA Requirements

Microsoft Clustering Service (MSCS) is used for virtual machine HA
- MSCS requires a trusted Active Directory domain to be set up
- MSCS is a feature installed in Windows Server and comes with the OS
- MSCS is included in Windows Server 2012 R2 Standard, Datacenter, and Hyper-V 2012 R2 Core
- MSCS does not require System Center Virtual Machine Manager
- High Availability requires shared storage (FC or iSCSI shared block storage or SMB3 shared file storage)

http://technet.microsoft.com/en-us/library/jj612869.aspx

How They're the Same

- High Availability for VMs (automatically restart VMs if the host they reside on fails)
- Both require shared storage for HA
- Storage agnostic Virtual Machine replication technology (replicate running VM to another host/cluster or VMware/Microsoft public cloud)
- Live migration of running VMs from one host to another and/or one datastore to another
- Site Recovery Manager products to provide DR and failover orchestration

How They're Different

HA Maximums

- VMware 32 hosts and 4,000 VMs per cluster, Hyper-V 64 hosts and 8,000 VMs per cluster

http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf

http://technet.microsoft.com/en-us/library/jj680093.aspx

VMware FT

- VMware offers a feature called Fault Tolerance that takes High Availability a step further. The state (not storage) of a VM is actively mirrored to another host in the cluster. If the host the VM's primary mirror is running on fails, the secondary VM immediately takes over and the VM experiences no downtime, unlike HA where the VM goes down hard and is restarted on another host.

- Fault Tolerance is a great technology but has a lot of hardware maximums that make it difficult to run for most applications, such as the VM being unable to have more than 1 vCPU. FT also requires Enterprise Plus licensing.

- Fault Tolerance is built into ESXi and requires a vmkernel configured to send FT logging to the other hosts in the cluster. 1Gb networking will work, but 10Gb is recommended since FT is actively keeping the VM’s memory in sync between the mirrors.

https://pubs.vmware.com/vsphere-55/...UID-83FE5A45-8260-436B-A603-B8CBD2A1A611.html

https://pubs.vmware.com/vsphere-55/...UID-7525F8DD-9B8F-4089-B020-BAA4AC6509D2.html

VMware App HA

- New in vSphere 5.5 is Application High Availability. Like FT, it, too, requires Enterprise Plus licensing. App HA can monitor select applications running inside VMs and attempt to restart their services or the entire VM if they stop. It will also send alerts or trigger user defined actions should a service fail.

- App HA is deployed as a virtual appliance which requires 2 vCPU, 4GB RAM, 20GB storage, and 1Gb networking. App HA also requires vRealize Hyperic, a part of the vRealize suite which incurs an extra cost beyond vCenter and ESXi.

http://pubs.vmware.com/appha-11/ind...UID-4659356B-3173-44F5-91D4-94ED26B6EC93.html

Hyper-V Protected Networks

- Hyper-V has its own unique HA feature called Protected Networks. If enabled on a VM, should the vSwitch the VM is attached to lose all network uplinks, Hyper-V will Live Migrate the VM to another host in the cluster whose vSwitch of the same name is still up. Although a properly architected environment will usually be designed to avoid these kinds of failures, it can still be useful for smaller hosts or branch offices where normal network design scenarios aren’t viable, such as spreading vSwitch uplinks across more than one physical network card.

- Protected Network has no requirements and is part of MSCS.

http://technet.microsoft.com/en-us/library/dn265972.aspx#BKMK_VMHealth

VMware Replication vs Hyper-V Replication

- Both hypervisors offer a storage agnostic, asynchronous VM replication solution. vSphere Replication is a virtual appliance that is deployed on the ESXi host. It requires Essentials Plus or higher licensing, vCenter, and the appliance needs 2 vCPU, 4GB RAM, 12GB of storage, and an IP address. Each appliance can handle up to 500 replication sessions. Additional appliances can be added and only require 512MB of RAM each.

- vSphere Replication does not offer any automation or orchestration when you want to failover the virtual machines. Site Recovery Manager (or scripting) is required for that. The lowest possible RPO (Recovery Point Objective, or how often the VM is re-synchronized) you can achieve with vSphere Replication is 15 minutes and the maximum is 24 hours. You can also create several point in time replicas so a VM can be restored from different time points.

- vSphere Replication does not compress or encrypt the traffic sent to the receiving appliance, though to save bandwidth for the initial replication you can seed the VM being replicated by other means. vSphere Replication does not support FT.

- Hyper-V Replication is built into the OS and does not require SCVMM. Replication can be performed between standalone hosts, clusters, or a mix of the two. To enable Replication for a cluster, the Hyper-V Replica Broker cluster role must be added to the cluster and an IP address and DNS name assigned to it.

- Like vSphere Replication, Hyper-V also offers multiple point in time replicas and the ability to seed the VM rather than sending the initial sync over the network.

- Hyper-V Replication can achieve a minimum RPO of 30 seconds and a maximum of 15 minutes. It can also optionally compress the replication traffic sent over the network and encrypt it. Compression does require CPU resources on the host, but it will only use available resources to perform the compression. During a replication sync, if the host's CPU resources are demanded by the VMs, replication compression will be scaled back or disabled so there is no disruption to the VMs.

- Both products can replicate to and from any type of datastore: same to same, block to file, file to block.
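
As a rough sketch of the Hyper-V side (host and VM names are hypothetical, and Kerberos/HTTP on port 80 is assumed):

```powershell
# On the replica host: accept inbound replication over Kerberos/HTTP
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"

# On the primary host: replicate a VM every 30 seconds with compression enabled
Enable-VMReplication -VMName "SQL01" `
    -ReplicaServerName "hv02.corp.local" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -CompressionEnabled $true `
    -ReplicationFrequencySec 30

# Send the initial copy over the network (seeding from exported media is also an option)
Start-VMInitialReplication -VMName "SQL01"
```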

https://pubs.vmware.com/vsphere-55/...UID-6F4E1F93-C901-4D51-835F-43D93E5D154B.html

https://pubs.vmware.com/vsphere-55/...UID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
http://technet.microsoft.com/en-us/library/jj134172.aspx

Live Migration and VMotion

- VMware and Hyper-V are both capable of migrating a running VM from one host to another without disruption to the VM. In VMware this is called VMotion and Hyper-V calls it Live Migration. Shared storage is not a requirement for either product. Each also offers Storage VMotion/Live Storage Migration which allows you to move a running VM’s virtual disks from one datastore to another. However, an oddity on Hyper-V’s part is that after a VM is moved to another datastore, it won’t always clean up the empty folders left behind.

- In both products, segregating the migration traffic is important since it can easily saturate the network adapters on the host. Proper architecture is critical both from a performance and security perspective. Migration traffic should not be routed (save for circumstances like stretched clusters across two different geographies) and should exist in its own subnet and VLAN accessible only by the hosts. As a migration occurs, the contents of the VM’s memory are being sent over the network so security is crucial. One advantage with Hyper-V is that you do have the option of using IPSec to encrypt the migration traffic while VMware offers no encryption option.

- VMware's VMotion is very simple to set up. Checkmark "VMotion" on a vmkernel on the host and, so long as the VMotion vmkernels on each host can communicate, VMotion will work. On Hyper-V it will attempt to use any network for Live Migration by default and I highly recommend trimming that down to only the network(s) you want it to use. Also, by default Live Migration only uses CredSSP authentication to establish the Live Migration between hosts, which means you can only kick one off while logged on to the source host. You can enable Kerberos auth by simply checking it on the host and then configuring constrained delegation on each Hyper-V AD computer object so they can use the Live Migration protocol with each other. You can also specify a subnet to attempt a Live Migration on first, then subsequent subnets to attempt if the first subnet has no connectivity.

- There are a few things that will break VMotion in VMware if enabled, such as SR-IOV (unless you’re using Cisco UCS VM-FEX *shameless Cisco UCS plug*) or sharing SCSI busses between VMs in virtual mode. Hyper-V does not have this limitation and Microsoft has taken the stance that no new features will break Live Migration.

- Both VMware and Hyper-V offer ways to speed up migration beyond a single network connection. VMware has the ability to use up to 16x 1Gb or 4x 10Gb NICs for VMotion. Setting this up requires binding the individual vmkernels to physical NICs, similar to what you'd do with iSCSI vmkernels, and the use of a single subnet and VLAN (multiple will work but is not supported). In Hyper-V you simply configure each host with the subnets you'd like it to use for Live Migration and then enable SMB3 Live Migration. Alternatively, if using multiple network ports is not an option, Hyper-V offers Live Migration Compression to get nearly double the performance from a single NIC. Like replication compression, it will only use available CPU resources to avoid hurting VM performance on the hosts. Hyper-V also supports RDMA to offload the Live Migration workload to the network adapter's chipset, further increasing the migration speed. In fact, with RDMA and SMB3 Live Migration, with enough adapters the host's RAM bus will eventually become your bottleneck.
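
A minimal sketch of the Hyper-V settings described above (subnets are hypothetical, and Kerberos still requires the constrained delegation mentioned earlier to be configured in AD):

```powershell
# Enable Live Migration, switch from CredSSP to Kerberos, and pick a performance option
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
           -VirtualMachineMigrationPerformanceOption Compression `
           -MaximumVirtualMachineMigrations 2

# Trim Live Migration down to the dedicated migration subnets only
Add-VMMigrationNetwork "10.0.50.0/24"
Add-VMMigrationNetwork "10.0.51.0/24"
```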

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=2007467

http://www.aidanfinn.com/?p=14907
http://technet.microsoft.com/en-us/library/dn550728.aspx

vCenter and SCVMM High Availability

- vCenter does not currently have a highly available option. You cannot use MSCS to protect a vCenter installation but it is a good idea to virtualize vCenter itself because it can take advantage of HA, VMotion, Storage VMotion, etc. Since vCenter requires at least 2 vCPUs, FT for a virtualized vCenter VM is not supported. VMware did offer a product called vCenter Heartbeat which provided clustering of vCenter but it has gone End of Sale. vCenter does, however, support SQL clustering of its database.

- System Center Virtual Machine Manager does support MSCS clustering of VMM itself and of the SQL database. The Library server VMM uses for storing templates, ISOs, host deployment VHDs, etc. also supports clustering with the Scale Out File Server cluster role. Bear in mind that fewer Hyper-V features rely on VMM staying up than VMware features rely on vCenter, so a highly available VMM server is less important than a highly available vCenter.

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1024051

http://technet.microsoft.com/en-us/library/gg610675.aspx

Site Recovery Manager

- VMware's Site Recovery Manager product is awesome. I've personally installed it many times and have walked away from many clients feeling relieved that they now have a DR strategy in place they can test and validate. Hyper-V also offers Site Recovery with most of the same functionality as VMware, but with key design differences. One is that Microsoft acquired a product called InMage that allows VMware VMs and physical servers to be replicated and failed over to Azure. This is a powerful tool for heterogeneous environments that want to replicate to a public cloud. However, as of now you can only replicate a Generation 1 Hyper-V virtual machine to Azure, presumably since Azure is only running Hyper-V 2012 and not R2.

- VMware SRM is installed on premises in the client's source and destination datacenters while Hyper-V Site Recovery must live in Azure. This means internet connectivity at the remote location is a requirement to actually orchestrate the failover of your datacenter there. It does NOT mean your VMs can only fail over to Azure itself. Like VMware, replicating and failing over your VMs to their public cloud is an option but not a requirement.

- Both offer the ability to replicate VMs to their public cloud, however if you use Azure's InMage product to replicate VMware VMs or physical servers to Azure, you can't fail them back to your private cloud.
 
MEMORY

Requirements

VMware - 4GB required, 6GB if using VSAN
Hyper-V - 1GB required for Hyper-V 2012 R2 Core, 2GB for Windows 2012 R2 Hyper-V
(seriously, why would you go smaller?)

How They're the Same

Maximum 4TB per host
Maximum 1TB per guest
Guest memory weight

How They're Different

Reliable Memory

-VMware introduced a feature in vSphere 5.5 called Reliable Memory which essentially works with the memory in the host to identify memory errors and avoid using those regions. It can also interface with hardware vendors to better detect and predict memory faults. Hyper-V doesn't offer this.

http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf

Memory Overcommitment

-VMware allows admins to oversubscribe memory on a host. If a host has 16GB of RAM, there is nothing stopping you from turning on 5 virtual machines, each with 4GB of vRAM. To compensate for the overcommitment, VMware will begin using memory reclamation techniques, like Transparent Page Sharing, ballooning, compression, and swapping, to still give the VMs the RAM they need.

Ballooning is a driver in VMware Tools that will begin consuming available memory inside the guest OS and informing the hypervisor it may assign this memory to other guests. Compression, as the name suggests, compresses memory to reduce usage. Finally, swapping begins using datastore disk space as RAM for the VMs which causes high disk IO and hurts performance not only for the VM or VMs swapping, but any other VMs residing on the datastore where the swapping occurs. Hyper-V does not allow overcommitment when using static memory to assign vRAM to the VMs.

If you try powering up a 20GB static vRAM VM on a 16GB host, it will fail to start. The exception is if you're using Dynamic Memory (explained below). If the VM has a startup memory value that would cause the host to be over-committed but a minimum memory value that would alleviate the over-commitment, Hyper-V will use swap space to start the VM and then balloon memory away, hopefully eliminating the over-commitment. If this doesn't happen, the VM will continue using swap space and performance will greatly suffer.

http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-resource-management-guide.pdf

Local VM SSD swap space

-A local SSD can be designated as local swap space on a VMware host so that if the host runs out of memory for the VMs, swapping occurs there rather than on the datastores. Not only is the local SSD likely going to be faster and lower latency than the shared datastore, it also keeps that toxic swapping IO off the datastores. In Hyper-V, there is no equivalent feature, but you can individually designate on each VM where you'd like its virtual swap file to reside, which could be a local host SSD.

https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.resmgmt.doc%2FGUID-4505B56B-57B1-413F-ACE3-BA2A85C3EC88.html

Memory Strategy, VMware vs. Hyper-V Dynamic Memory

-Here's a big differentiator between the two that I touched on a bit when discussing memory overcommitment, which is the overall memory strategy of the two products.

-When you assign a VM 4GB of memory in VMware, you're statically assigning 4GB of RAM to it. Beyond that, you can reserve memory on the host for the VM, preventing that memory from being shared with other guests or reclaimed by the hypervisor during memory contention. You can also set a limit below 4GB on the VM, so the guest sees 4GB of RAM available but, for example, only 2GB is really backed by physical memory, with any active memory above 2GB being provided by swap (please, NEVER set a memory limit on a VM!). Finally, you can increase or decrease the VM's memory weight, which gives its memory a higher or lower priority than other VMs during periods of memory contention. Hyper-V offers a weight, but not a reservation or limit (in reality Hyper-V always reserves the memory for the VM since there is no overcommitment, page sharing, or compression).

-In VMware a feature called Transparent Page Sharing attempts to deduplicate memory between VMs in an effort to save on memory usage. However, since most modern OSes use large memory pages, the effectiveness of TPS has declined. Also, a security vulnerability has been discovered in TPS that has prompted VMware to disable it by default. Hyper-V does not deduplicate memory in any way.

-Hyper-V assigns memory to VMs in one of two ways: static or dynamic. Static simply reserves X amount of RAM for the VM and the amount doesn't change. Dynamic Memory is essentially thin provisioning your VM's memory. You assign a startup amount, minimum, and maximum. When the VM boots, the guest OS sees whatever the startup RAM amount is. However, after booting is complete and the Hyper-V integration tools start (similar to VMware Tools), the amount of memory the VM is assigned will fluctuate based on demand. If the VM begins using less than the startup memory assigned to it, Hyper-V will actively balloon memory away from the VM allowing the hypervisor to reclaim that RAM. If the VM's demand increases beyond the startup amount, Hyper-V will hot plug RAM into the guest OS. With Hyper-V as the hypervisor and Windows as the guest OS, Microsoft does a very good job at ballooning away unneeded memory. I have VMs in my lab consuming less than 400MB of memory since that's all they really demand. Should they need more, first the balloon driver will give memory back, then Hyper-V will hot plug more memory into the VM up to the maximum memory number I specify. Bear in mind, not all operating systems and applications support Dynamic Memory, so be sure to validate this before using it on a VM (or validate that they support Memory Hot Plug or Hot Add if you can't confirm whether they support Dynamic Memory specifically).
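
For illustration, Dynamic Memory is configured per VM; a sketch with hypothetical values:

```powershell
# Startup 2GB, allowed to balloon down to 512MB or hot-plug up to 8GB
Set-VMMemory -VMName "Web01" -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 512MB -MaximumBytes 8GB `
    -Buffer 20 -Priority 80   # Buffer = % headroom to keep free, Priority = memory weight
```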

-Dynamic Memory has a few caveats as well. First, by default a Hyper-V VM will go into a saved (suspended) state if the host is gracefully shut down without putting it in Maintenance Mode. Hyper-V creates a .bin file that lives on the datastore with the VM which is always the same size as the memory assigned to the VM. That way, if the VM needs to be suspended, adequate datastore space is already reserved for the VM's memory contents to be dumped to. However, since Dynamic Memory can change the amount of memory assigned to the VM, this .bin file will also grow, which causes disk IO on the datastore. Make sure to set the VM's automatic stop action to "Turn Off" or "Shut Down the Guest OS" if you use Dynamic Memory, which eliminates this .bin file.
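
The automatic stop action is also a per-VM setting; a one-liner sketch (VM name hypothetical):

```powershell
# Default is "Save"; "ShutDown" (or "TurnOff") avoids reserving the .bin file on the datastore
Set-VM -Name "Web01" -AutomaticStopAction ShutDown
```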

-Once you get into applications that require large amounts of RAM, Microsoft recommends using static RAM instead of Dynamic Memory. Keep this in mind when architecting a solution.

http://blogs.technet.com/b/danstolts/archive/2013/03/06/virtual_2d00_memory_2d00_management_2d00_dynamic_2d00_memory_2d00_much_2d00_different_2d00_than_2d00_memory_2d00_over_2d00_commit_2d00_become_2d00_a_2d00_virtualization_2d00_expert_2d00_part_2d00_3_2d00_of_2d00_20.aspx
http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-resource-management-guide.pdf

VMware RAM Hot Plug

-You can enable RAM Hot Plugging on each VM in VMware so long as the guest OS and application supports it. Hyper-V does not have this feature for static memory assignments, but you can increase the maximum memory of a VM using Dynamic Memory while it's running.

http://www.virtualizationadmin.com/blogs/lowe/news/vsphere-51-hot-add-ram-and-cpu.html
 
NETWORKING

VMware Network Requirements

- At least one Gigabit network interface on the VMware HCL
- Two or more recommended to separate management traffic from VM traffic
- Portfast enabled on connected switches

https://pubs.vmware.com/vsphere-55/...UID-DEB8086A-306B-4239-BF76-E354679202FC.html


Hyper-V Network Requirements

- At least one Gigabit network interface with Microsoft WHQL drivers
- Portfast enabled on connected switches
- Properly configure Network Adapter Binding order (Management first, CSV second)

http://technet.microsoft.com/en-us/library/dn550728.aspx


How They're the Same

- Virtual switches (VMware offers two, Standard and Distributed (vDS) but the latter requires Ent Plus licensing and vCenter)
- 3rd party Virtual Switches available like Nexus 1kv, IBM 5000v (vDS only in VMware)
- Bandwidth management, can control egress maximum bandwidth
- Network Quality of Service (QoS) (vDS only in VMware)
- NIC teaming with various options including source port hash, LACP, active/standby (LACP vDS only in VMware)
- Change vNIC MAC addresses
- Virtual Machine Device Queuing – each VM NIC is given a queue on the physical NIC
- Support 10Gb and 40Gb network adapters
- Port ACLs in the vSwitch (vDS only in VMware)
- PVLANs and VLAN Trunking (PVLANs vDS only in VMware)
- SR-IOV offered


How They're Different

VMware vSwitches

- VMware offers two types of virtual switches (vSwitches): Standard and Distributed. Standard vSwitches come with all versions of ESXi and support NIC teaming. VM virtual NICs can be distributed across the NICs using Source Port ID Hash, IP Hash, MAC Address Hash, or a simple Active/Standby configuration. Standard vSwitches can also perform traffic shaping by limiting egress traffic bandwidth.

- Distributed vSwitches (vDS) are available only with Enterprise Plus licensing and offer many more configuration options and features. They also allow for faster network configuration of new hosts as they serve as a template for how a host’s network is to be configured. Additional features available on a vDS include: 3rd party virtual switching, additional Load Balancing options, Netflow, CDP/LLDP, and Network IO Control.

o 3rd party virtual switches such as the Cisco Nexus 1000v and IBM 5000v can be loaded into ESXi to replace the VMware vSwitch. This allows the virtual switch to be managed via CLI just as any Cisco or IBM switch would be. Most companies find this attractive since it silos network management of the virtual environment off to the network team rather than having the VMware admins perform this function.

o A vDS offers two new load balancing options over a Standard vSwitch: LBT and LACP. Load Based Teaming (LBT) will automatically move virtual interfaces (VM and vmkernel ports) to different vSwitch network uplinks if an uplink experiences more than 75% throughput for 30 seconds. Rather than LACP, which hashes on various properties to balance traffic across the uplinks, LBT actually responds to network congestion as it occurs. When possible, LBT is usually the preferred method of load balancing, but LACP offers over 20 different methods of load balancing as well; none, however, are reactive like LBT.

o Use of a vDS supports Netflow, a protocol that allows collecting and monitoring IP traffic. This is helpful when monitoring web traffic, for example, from the VMs.

o The VMware vDS also supports Cisco Discovery Protocol and Link Layer Discovery Protocol, both can be used to receive and advertise information to other devices on the network. CDP and LLDP are useful when you’d like network devices, like switches, to automatically broadcast their system name, location, management IP or other handy information so it’s easily viewable by other network devices.

o Network IO Control takes bandwidth management on the Standard vSwitch much further by performing actual network QoS. While a standard vSwitch can only limit outgoing bandwidth, a vDS can limit outgoing and incoming. When using Network IO Control, it can also prioritize certain types of network traffic to ensure they have access to minimum bandwidth requirements during times of congestion and suffer less latency. When used with Load Based Teaming, this provides a proactive and prioritized form of network quality of service for the VMs and host network traffic.

https://pubs.vmware.com/vsphere-55/...UID-350344DE-483A-42ED-B0E2-C811EE927D59.html

https://pubs.vmware.com/vsphere-55/...UID-B15C6A13-797E-4BCB-B9D9-5CBC5A60C3A6.html

https://pubs.vmware.com/vsphere-55/...UID-0D1EF5B4-7581-480B-B99D-5714B42CD7A9.html


Hyper-V vSwitch

- Hyper-V also has a vSwitch but only one kind. No additional licensing is needed for any of the features, unless you load a 3rd party vSwitch like the Nexus 1000v which means you’d pay Cisco.

- The Hyper-V vSwitch can only have a single network adapter or network adapter team connected to it whereas VMware allows for multiple adapters to be connected to it and VMware creates the team itself. As of Windows 2012, Windows itself can create network teams which you can use to connect to the vSwitch. Many network adapter vendors like Intel and Broadcom offer their own NIC teaming drivers as well. However, Windows teaming works very well and offers more configuration options.

- A NIC team in Windows can be load balanced either using Hyper-V Port, LACP, or Dynamic Load Balancing options. The Hyper-V Port option is just like VMware’s default load balancing option of source port ID and simply pins a virtual NIC (host or VM) to a specific interface when it’s connected either through powering on or being migrated to the host. LACP is also an option but there are only a small number of load balancing algorithms to choose from. Adapters can also be added as standby interfaces to a team as well. Dynamic Load Balancing is a step beyond the Hyper-V Port option by randomly assigning VM vNICs to members of the team for incoming traffic but uses a TCP Port Hash to load balance outgoing traffic across all members of the team. This is the recommended method of NIC Team Load Balancing for Hyper-V 2012 R2. You can also use Dynamic Load Balancing in Switch Dependent mode and have the switch use EtherChannel to load balance incoming traffic as well. This is not an intelligent method of load balancing like VMware's LBT but more like LACP.
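
For illustration, creating such a team and hooking it to a vSwitch looks roughly like this (adapter and switch names are made up):

```powershell
# Switch-independent team using the recommended Dynamic load balancing mode
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Attach the team to a Hyper-V vSwitch used only for guest traffic
New-VMSwitch -Name "GuestSwitch" -NetAdapterName "GuestTeam" -AllowManagementOS $false
```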

- Hyper-V’s vSwitch is considered an open extensible switch, meaning companies are free to write and develop extensions for the switch. Cloudbase and 5Nine both offer extensions for the Hyper-V switch as well as many other partners, many of them focused on security.

- Hyper-V also offers some unique security settings in the vSwitch: DHCP Guard and Router Advertisement Guard.

o DHCP Guard, as the name suggests, stops a VM from making DHCP offers. In other words, you can enable this setting for all VMs that are NOT an authorized DHCP server and leave it disabled for those that are. Default is disabled.

o Router Advertisement Guard can be used to stop a VM from advertising itself as a router. When enabled the following packets will be discarded by the vSwitch: ICMPv4 Type 5 (Redirect message), ICMPv4 Type 9 (Router Advertisement), ICMPv6 Type 134 (Router Advertisement), ICMPv6 Type 137 (Redirect message).
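
Both guards are simple per-vNIC settings; for example (VM name hypothetical):

```powershell
# Drop DHCP offers and router advertisements coming from this VM's vNIC
Set-VMNetworkAdapter -VMName "Web01" -DhcpGuard On -RouterGuard On
```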

- A few other cool features are vRSS (virtual Receive Side Scaling), IPSec offload, and RDMA.

o vRSS is basically RSS for virtual machines. RSS allows multiple CPU cores on a server to handle network workloads. vRSS enables this feature for the VM so it, too, can benefit by spreading network workloads across multiple cores.

o IPSec offload allows VMs to use IPSecTO which offloads IPSec encryption algorithm calculations to the host’s physical network adapter.

o RDMA or SMB Direct is another offloading feature available on high speed network adapters (10Gb and 40Gb iWARP, Infiniband, or RoCE Converged Network Adapters). This technology allows the network adapter to access host memory directly and use very little CPU power. This can drastically increase throughput when using SMB Direct connections to SMB3 storage or when using SMB Direct Live Migrations. In fact, when using SMB Direct for Live Migration you are only limited to the speed of the host’s RAM bus when performing Live Migrations if you give it enough network adapters.

http://msdn.microsoft.com/en-us/library/windows/hardware/jj673961(v=vs.85).aspx

http://blogs.technet.com/b/keithmay...er-2012-do-i-need-to-configure-my-switch.aspx

http://blogs.msdn.com/b/virtual_pc_guy/archive/2014/03/24/hyper-v-networking-dhcp-guard.aspx

http://blogs.msdn.com/b/virtual_pc_guy/archive/2014/03/25/hyper-v-networking-router-guard.aspx

http://technet.microsoft.com/en-us/library/dn383582.aspx

http://technet.microsoft.com/en-us/library/jj134210.aspx


VMware and Hyper-V Network Design

- I won’t get into the weeds and nitty gritty here but I do want to point out some key points to note when architecting a VMware network design vs a Hyper-V one.

- Typically I've found designing the network for VMware tends to be easier than Hyper-V. One of the hardest aspects of Hyper-V for me as a long time VMware nerd was understanding the differences in networking. For example, in VMware you add physical NICs to a vSwitch, which creates a NIC Team, but in Hyper-V you first create a NIC Team and then assign it to a vSwitch.

- In Hyper-V when you create a vSwitch you have the option of checking "allow management OS to share this network adapter" which simply creates a virtual NIC for the management OS to use. Think of it as a vmkernel port in VMware. You can either leave this virtual NIC untagged or assign it a VLAN from the Virtual Switch Manager. However, you can create multiple vNICs for the management OS on a single vSwitch, but it can only be done through Powershell (see the sketch after the example below). This matters because I often do just this, such as creating a Management vNIC and a CSV vNIC for the management OS. Or what if your host has only 2x 10Gb connections? You'll most likely want to team them for redundancy, but then ALL the management OS's network adapters will need to be vNICs on this team, such as Management, CSV, iSCSI, Live Migration, etc. Big caveat to this… by using a vNIC for SMB or Live Migration, for example, you lose the advanced features of the physical network adapter like RSS and RDMA! Plan carefully!

o Example: a typical Hyper-V network design could be the following:

2x 1Gb NICs teamed using Hyper-V Team Load Balancing and attached to a vSwitch with 2 vNICs presented to management OS, one for Management and one for CSV
2x 1Gb NICs not teamed used for Live Migration traffic, each on a different subnet, SMB Live Migration selected for a total of 2Gb of Live Migration bandwidth
2x 1Gb NICs teamed using Hyper-V Team Load Balancing and attached to a vSwitch with no vNICs presented to management OS, this vSwitch is for guest traffic only
2x 1Gb NICs not teamed used for iSCSI or SMB storage traffic, each on a different subnet, use Powershell to set SMB Multi-channel constraints to limit SMB storage access through these NICs only
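
A sketch of the management-OS vNIC portion of a design like the first item above, assuming the teamed vSwitch ("MgmtSwitch") already exists; names and the VLAN ID are made up:

```powershell
# Two management-OS vNICs on one vSwitch -- the GUI only lets you create one
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "MgmtSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "CSV"        -SwitchName "MgmtSwitch"

# Tag the CSV vNIC with its VLAN; leave Management untagged
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV" -Access -VlanId 20
```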

- To further complicate things, in VMware you can create Port Groups which allow you to easily assign VLANs to your VMs’ vNICs. Hyper-V does not have Port Groups and forces you to assign the VLAN individually to each VM vNIC. If you’re comfortable with scripting this isn’t such a huge deal but is a manual step you don’t need to worry about in VMware. If you’re running Virtual Machine Manager (VMM) then you can use Logical Networks to assign VLANs automatically similar to a Port Group.

- However, a Logical Network does NOT equal a Port Group! A Logical Network acts as a means of creating an identity for a certain type of traffic on a singular or per-tenant basis that exists across multiple clusters and datacenters. If you had two Hyper-V clusters, one in New York and one in Dallas, you could create a Logical Network called “SQL Data” for SQL VMs and specify two Network Sites inside the Logical Network, one stating that SQL Data is VLAN 201 and 10.1.20.0/24 in New York but the other site says VLAN 401 and subnet 192.168.4.0/22 is SQL Data in Dallas. If you assign the “SQL Data” Logical Network to a VM you create on the Dallas cluster, it will get the Dallas VLAN. If you create or move the VM to New York it will get the New York VLAN!

http://pubs.vmware.com/vsphere-55/i...UID-0BBDC715-2F93-4460-BF07-5778658C66D1.html

http://technet.microsoft.com/en-us/library/jj721568.aspx


VMM IP Pools, MAC Pools vs vCenter IP Pools

- Both vCenter and VMM allow you to create IP Pools so you can let them assign IPs for you automatically rather than keeping track of them yourself. Key difference though is that VMware’s IP Pools are designed for vApps (think vCOPs, vDP, etc.) not for individual virtual machines. When creating a VM in vCenter you can’t point it at an IP Pool and have vCenter give the new VM an IP. In VMM, however, you can assign an IP Pool to a Logical Network/VM Network and then VMM will hand out IPs automatically but only when you create a VM from a template. VMM will interact with the VM through the Hyper-V integration tools and configure the guest’s vNIC with the static IP address and the VM will retain that IP until you delete the VM. This is really useful for Private and Public Cloud environments since you can give tenants pools of IP addresses and let the software hand them out. You can also integrate IPAM with VMM so who has what IP is monitored there as well.

- In VMM you can also create MAC Pools which operate the same as IP Pools.

https://pubs.vmware.com/vsphere-55/...UID-4BEEFDA9-E6FF-4D28-B0F3-B02864D8795B.html

http://technet.microsoft.com/en-us/library/jj721568.aspx


Network Virtualization

- VMware recently introduced a product called NSX which is a revolutionary product for networking just as ESX was for servers. NSX essentially virtualizes and manages all aspects of the network in software: VLANs, firewalls, load balancing, routing, etc. This is extremely powerful, especially for large Private and Public Clouds. NSX is available as a separately licensed product for VMware but is not limited to virtualizing only VMware networks. It can do the same for other hypervisors as well. This technology is based on VXLAN and requires you to increase the MTU on your network infrastructure and to flatten the entire network, since NSX handles VLANs and so on itself. If I'm slaughtering NSX in this description, I'm sure lopoetve or NetJunkie can correct me. :)

- VMM also offers network virtualization built in, though not to the same degree as NSX. It uses NVGRE and IP Rewrite (similar to NAT) to virtualize networks allowing cloud tenants to use the same VLANs and subnets while still segregating their traffic. Like NSX, Windows also has a software load balancer that can be used with VMM (basically the legacy Windows NLB we all know). However, there are no software firewalls like what NSX offers besides what runs in the guest. The other options are to use vSwitch ACLs or a 3rd party firewall extension for the vSwitch.

- Hyper-V and VMM network virtualization is heavily dependent on Powershell and scripting so if you’re not familiar with those, managing it is going to be a huge challenge.

http://www.vmware.com/files/pdf/products/nsx/VMware-NSX-Datasheet.pdf

http://www.virtualizationadmin.com/...ive-hyper-v-network-virtualization-part1.html


SR-IOV

- Both products offer SR-IOV, a feature that allows the virtual NIC of a VM to bypass the hypervisor and get direct access to the physical network adapter so long as that adapter supports SR-IOV. This is attractive for applications that require minimal network latency and the highest possible network performance. Wall Street trading firms are a great example of this where every nanosecond of network latency matters.

- However, since the physical network adapter is now being passed through, in a way, to the VM, this prevents VMotion and a slew of other features from functioning in VMware. This isn't the case with Hyper-V as it will briefly decouple the virtual network adapter from the physical adapter during the Live Migration to allow the VM to move to another host. This can also be achieved in VMware but only if you're running on Cisco UCS and using the virtual interface card, like a VIC1240, through a feature called VM-FEX. Downside to this is another layer of complexity and another component that must be managed and maintained along with VMware and Cisco UCS itself. I've done upgrades on environments using VM-FEX and it's not fun! The fact that Microsoft implemented SR-IOV while still allowing Live Migration to function is part of their overall strategy to not introduce any new features that will break Live Migration.
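
For reference, here's roughly what enabling SR-IOV looks like on the Hyper-V side (switch, NIC, and VM names are hypothetical):

```powershell
# SR-IOV cannot be turned on for an existing vSwitch; create the switch with IOV enabled
New-VMSwitch -Name "IovSwitch" -NetAdapterName "NIC5" -EnableIov $true

# A non-zero IovWeight assigns the VM's vNIC a virtual function on the physical NIC
Set-VMNetworkAdapter -VMName "Trading01" -IovWeight 100
```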

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=2038739

http://blogs.technet.com/b/jhoward/...d-to-know-about-sr-iov-in-hyper-v-part-1.aspx

http://www.cisco.com/c/en/us/suppor...ng/ucs-manager/116374-configure-vmfex-00.html
 
STORAGE

VMware Storage Requirements

- Hardware RAID, local drive, SAN, or USB/SD flash to boot from
- Minimum of 1GB boot device, 5.2GB required for VMFS volume and 4GB scratch space
- VM Datastores can be local hardware RAID, local disk(s), SAN, or NFS NAS

https://pubs.vmware.com/vsphere-55/...UID-DEB8086A-306B-4239-BF76-E354679202FC.html

Hyper-V Storage Requirements

- Hardware or software RAID, local drive, or SAN to boot from
- Minimum of 32GB boot device for Windows with full GUI, 8GB minimum for Hyper-V Core but 20GB recommended
- VM Datastores can be local hardware or software RAID, local disk(s), SAN, or SMB3 share
- Technically you can find a way to boot Hyper-V from USB flash or SD but I wouldn’t recommend it

http://technet.microsoft.com/en-us/library/dn610883.aspx

How They're the Same

MPIO, Including Round Robin
Storage Offloading (VAAI for VMware and ODX for Hyper-V)
Storage VMotion (requires vCenter in VMware)
Block Protocols - FC, FCoE, iSCSI
Thick and Thin Virtual Disks
Pass-through Disks and Shared Virtual Disks for VM Clustering
Use NPIV to present FC LUNs directly to guest (VMware presents as physical RDM which guest sees as SCSI disk, Hyper-V as virtual FC card guest uses to access FC LUNs just like a physical server)
VM Snapshots
Using Labels to Identify Datastores Based on Performance, SAN, etc. (requires vCenter for VMware and VMM for Hyper-V)

How They're Different

File Protocols - NFS3 vs SMB3

- VMware uses NFS as a file protocol for datastores while Hyper-V uses SMB3. NFS v3 is the same NFS we’ve all come to know and love while SMB3 is a new protocol introduced with Windows 2012.

- NFS v3 does not support any sort of MPIO. You'll still want to provide at least 2 uplinks to the vSwitch your NFS vmkernel lives on, but it won't load balance across those two uplinks unless you change things up, like mounting datastores with different IPs (NFS Datastore1 mounts via IP1, NFS Datastore2 mounts via IP2 on the NAS, and so on) or DNS Round Robin (which I wouldn't do since I don't want NFS relying on DNS). Even then, you'll still only get a single uplink's speed when accessing a single datastore, even if you use a vDS and LACP. Because of this, 10Gb networking is definitely a big plus when using NFS. Compared to SMB3, however, NFS is very simple to set up and manage.

- SMB3 does perform load balancing and path failover. The more network adapters you throw at SMB3 on the client and NAS, the more bandwidth you can get. For example, in my lab each Hyper-V host has 4x 1Gb dedicated connections for access to my SMB3 NAS which also has 4 links. Because SMB3 actually load balances the traffic across all 4 NICs, I can get 4Gb of bandwidth. Inside a VM, I can read and write at 4Gb to its virtual disk (yay all SSD NAS!). If one NIC or path goes down, I’ll still get 3Gb without interruption.

- SMB3 support is still emerging on a lot of 3rd party storage products and even those that support it may not support all the features yet. Netapp comes to mind in that they support path failover but not load balancing yet (at least, as of 4 months ago when I last checked). Also, you may experience some quirks when trying to set up the 3rd party SMB3 storage. EMC VNX supports SMB3 but it isn't as simple as creating a share. You'll need to go into the CLI of the VNX to enable some features and create the share in a specific way. On top of all this, you'll need to ensure share and NTFS permissions are all set properly. You'll also want to use SMB Multichannel constraints (a Powershell cmdlet) to limit which interfaces are used to access the SMB3 shares; otherwise, if your NAS also serves storage on the management subnet your host uses, the host will use that path to access the NAS as well.
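
The constraint itself is a single cmdlet; for example (server and interface names are hypothetical):

```powershell
# Only use the two dedicated storage NICs when talking SMB3 to this NAS
New-SmbMultichannelConstraint -ServerName "NAS01" -InterfaceAlias "Storage1","Storage2"
```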

- To make matters worse, some 3rd party products have difficulty working with VMs living on SMB3 shares. Up until several months ago, Veeam backups didn’t work properly if you used SMB3 storage exclusively. Even some of Microsoft’s own products, like Windows Server Backup, won’t work. You also can’t perform a P2V or V2V directly onto a SMB3 share. You’d first have to convert the server and store on a block device then Live Storage Migrate it to a SMB3 share.

- Both NFS and SMB3 support offloading such as VAAI and ODX which enables supported storage arrays to handle certain tasks rather than the host, like cloning files.

http://www.vmware.com/files/pdf/techpaper/VMware-NFS-Best-Practices-WP-EN-New.pdf
http://technet.microsoft.com/en-us/library/jj134187.aspx
http://blogs.technet.com/b/yungchou...s-server-2012-hyper-v-over-smb-explained.aspx

Block Protocols – VMFS vs CSV

- In the block protocol arena, I feel VMware has a big advantage here. The VMFS file system was built specifically for virtualized workloads on block storage. Windows still uses NTFS, which is a great file system but it wasn’t built with virtualization in mind. As such, Microsoft had to create Cluster Shared Volumes (CSV), so NTFS could be shared between multiple Hyper-V hosts. CSVs are basically a file system over top of NTFS so Hyper-V can use it as a shared block datastore.

- A CSV works by allowing all the members of a Hyper-V cluster to simultaneously read and write to a shared block device, while one of the cluster members owns the metadata of the file system. This works fine under normal conditions with only a very small performance hit for a cluster member writing to a CSV it does not currently own. However, if access to a LUN is lost by a cluster member or during certain operations (initiated intentionally or unintentionally), the CSV can go into Redirected Mode. This means all access to the block device MUST go through the cluster member that owns the metadata. Essentially the other cluster members access the block device via SMB over the CSV network. As you can imagine, performance in this scenario is very poor. Bear in mind, the situations that trigger Redirected Mode have been reduced in 2012 R2 versus earlier versions of Hyper-V, but it is still a consideration, whereas it is not in VMware.

- CSVs do have two advantages: CSV Encryption and Caching.

o CSVs can be encrypted using Bitlocker which natively comes with Windows Server. This can be helpful if your company requires everything to be encrypted. With Hyper-V you can do so right through the OS rather than encrypting at the guest level or using a 3rd party solution.

o You can also use host RAM as read cache on a CSV. This works great for avoiding VDI boot storms or simply taking some of the IO off the storage array. Technically you can allocate up to 80% of the host's RAM as cache, but Microsoft doesn't recommend more than 64GB. Bear in mind, this amount of cache is per CSV, so if you set Cache to 2GB and your cluster has 4x CSVs, then each host will allocate 8GB of cache (2GB times 4 CSVs).

o VMware does not offer RAM caching unless you purchase VMware Horizon (formerly View), in which it is designed to help combat boot storms.
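
Enabling the CSV cache is a cluster-level property; a minimal sketch, assuming a 2012 R2 cluster where the size is given in MB:

```powershell
# Reserve 2GB of host RAM for CSV read caching
Import-Module FailoverClusters
(Get-Cluster).BlockCacheSize = 2048
```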

- Both VMware and Hyper-V support offloading such as VAAI and ODX for block datastores as well so long as the storage supports it.

http://technet.microsoft.com/en-us/library/dn265972.aspx

vFRC or local SSD swap caching

- VMware does not offer their own local RAM caching, but they do offer local SSD caching, called vFRC. This feature is available only in Enterprise Plus, but enables you to use local SSD space as read cache for VMDKs. vFRC is enabled on a per VMDK basis so you’ll need to manually manage which VM and VMDK get how much cache. It’s a powerful tool if you want to accelerate the reads on some VMs and keep their heavy IO off the storage array.

- In VMware you can also use local SSD for VM swap files. This way, if a host runs out of RAM and is forced to use swap space to serve VMs, that swap can come from local SSD and not shared storage. When VMs are forced to swap on shared storage it kills performance. At least this way, the VMs will still suffer a performance hit from having to swap, albeit to fast SSD, but it won't affect every other VM on the shared storage whose hosts are NOT swapping.

- Hyper-V does not offer local SSD caching but you can manually select where a VM’s swap file is to go which could be local SSD if you wanted but that same local path needs to exist on all the hosts in the cluster.

https://pubs.vmware.com/vsphere-55/...UID-07ADB946-2337-4642-B660-34212F237E71.html
https://pubs.vmware.com/vsphere-55/...A85C3EC88.html?resultof=%22%73%77%61%70%22%20

vSAN

- VMware offers an add-on product called vSAN which enables you to use local SSD and hard drives in the hosts as a shared datastore. This eliminates the need for a shared storage array and is an excellent product.

- VMware even offers a product called the vSphere Storage Appliance (lopo can correct me here but I think it’s eventually going away) which uses virtual appliances to virtualize the host’s storage to leverage it as a shared datastore whereas vSAN actually runs in the hypervisor itself. It, too, is an add-on product.

- As of now, Microsoft’s official stance is that they do not believe in hyper-convergence because compute and storage resources do not scale the same. Their focus is on the Scale Out File Server cluster which works great as a highly available SMB3 storage option for Hyper-V virtual machines but is not hyper-convergence (like Simplivity or Nutanix). 3rd parties like Starwind do offer products that enable hyper-convergence on Hyper-V but MS has no official plans to offer anything of their own.

http://www.yellow-bricks.com/2013/08/26/introduction-vmware-vsphere-virtual-san/
https://pubs.vmware.com/vsphere-55/...UID-7DC7C2DD-73ED-4716-B70D-5D98D02F545B.html

VMware Storage IO Control and SDRS

- VMware offers two cool storage features: Storage IO Control and Storage DRS. Storage IO Control acts as a Quality of Service mechanism for all the VMs accessing a datastore. By using shares, you can grant certain VMDKs higher priority over others for when a datastore is experiencing periods of high latency (30ms is the default). This feature can be highly beneficial by curtailing "noisy neighbors" from hogging all the IO on a datastore and choking out the other VMs. Hyper-V offers nothing like Storage IO Control except the ability to set minimum and maximum IOPS on each virtual disk.
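
For the Hyper-V side, those per-virtual-disk limits are set with Set-VMHardDiskDrive in 2012 R2 (VM name and controller location are hypothetical; IOPS are normalized to 8KB):

```powershell
# Cap a noisy virtual disk at 2,000 IOPS and guarantee it at least 100
Set-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 2000
```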

- VMware also has Storage DRS. Like regular DRS, Storage DRS can automatically assign VMs to datastores based on available capacity and can automatically move VMs between datastores based on performance imbalance. You can create a cluster of block or file datastores (you can’t mix and match block and file), so when you Storage VMotion a VM or create a new one, you can simply point at the datastore cluster and let SDRS decide where it should go. However, bear in mind that in some scenarios, such as when using a storage array that does tiering, you don’t want the automated VM migrations to occur since they will appear as hot data to the storage array, causing it to needlessly re-tier data. You can also use Storage DRS to put a datastore in maintenance mode and, like a host in maintenance mode, all the VMs on the datastore will automatically be evacuated so you can be sure nothing is running on it.

- Hyper-V does offer the ability to label datastores and assign them to a cloud. It will also assign new VMs to the datastore with the most available free space out of the datastores contained within that label, but it does not take performance into account nor does it monitor datastore performance and proactively migrate VMs around to balance the load.

http://www.yellow-bricks.com/2010/09/29/storage-io-fairness/
http://www.yellow-bricks.com/2012/05/22/an-introduction-to-storage-drs/
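Here’s a minimal sketch of the Hyper-V per-disk IOPS limits mentioned above. VM names and controller locations are hypothetical; in 2012 R2 the values are normalized to 8KB IOs, and the minimum is a threshold that raises an event when it can’t be met rather than a hard guarantee:

# Cap a noisy VM's data disk at 500 IOPS
Set-VMHardDiskDrive -VMName "NoisyVM" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 -MaximumIOPS 500
# Set a minimum IOPS threshold on a more important VM's disk
Set-VMHardDiskDrive -VMName "ImportantVM" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MinimumIOPS 300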

Software RAID

- VMware will not install to software RAID or “fake” RAID. For most hardware this isn’t an issue since many servers come with hardware RAID of some sort. Windows, on the other hand, does support software RAID on supported drives, and Windows itself can create software RAID after installation so you can mirror your boot disk.

VMware boot from SD/USB flash

- VMware can install to SD cards or USB flash disks. This is very convenient when you don’t want to waste hard drives on the ESXi host, and once ESXi boots it’s just running in RAM anyway, so even if the flash card/drive fails, ESXi will continue to run; it just can’t boot up again. While you can install Windows on the same media, I would strongly advise against it. Even Hyper-V core is more disk intensive than ESXi and performance in the host OS will suffer. Being able to boot to SD or USB flash is a great bonus with VMware.

Converting disks from Thick to Thin and vice versa

- Both hypervisors offer thin and thick provisioned virtual disks. However, only VMware allows you to change a virtual disk from thick to thin or thin to thick while the VM is powered on, by using Storage VMotion. In Hyper-V the VM has to be powered off to perform the conversion.
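For the Hyper-V side, a minimal sketch of the offline conversion (VM name and paths are hypothetical):

# The VM must be off; convert a fixed (thick) VHDX to a dynamic (thin) copy, then swap it in
Stop-VM -Name "FileServer01"
Convert-VHD -Path "D:\VMs\FileServer01\disk1.vhdx" -DestinationPath "D:\VMs\FileServer01\disk1-dynamic.vhdx" -VHDType Dynamic
Set-VMHardDiskDrive -VMName "FileServer01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -Path "D:\VMs\FileServer01\disk1-dynamic.vhdx"
Start-VM -Name "FileServer01"
# Delete the original fixed disk once the VM boots cleanly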

Hyper-V Differencing Disks

- Hyper-V does offer a type of virtual disk that VMware does not: the differencing disk. A differencing disk is really just a snapshot of a parent virtual disk. You can use a differencing disk to test changes on a VM without actually affecting the real data; when you’re done, just delete the differencing disk. There is a performance hit for using a differencing disk, just like for snapshots, and you don’t want to keep it around too long since the more writes occur, the bigger the differencing disk gets. It can be handy for VDI deployments, though, if the storage array can handle the load and you’re not using them as persistent desktops. A quick sketch follows the link below.

- VMware Horizon’s linked clone technology is similar to differencing disks, but it can only be used for VDI deployments, and you have to purchase Horizon to get it.

http://technet.microsoft.com/en-us/library/hh368991.aspx
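A minimal sketch of creating one (paths and VM name are hypothetical; don’t modify the parent while the child exists):

# Create a differencing disk on top of a parent VHDX and attach it to a test VM
New-VHD -Path "D:\VMs\Test\app-diff.vhdx" -ParentPath "D:\VMs\Gold\app-parent.vhdx" -Differencing
Add-VMHardDiskDrive -VMName "TestVM" -ControllerType SCSI -Path "D:\VMs\Test\app-diff.vhdx"
# When the test is done, detach and delete app-diff.vhdx and the parent is untouched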

CBT – Changed Block Tracking

- VMware has a feature called Changed Block Tracking or CBT. Many backup products rely on CBT to tell them what blocks have changed in a VMDK since the last backup so the VM can be backed up much more efficiently and without needing the software to scan the VM’s file system. Hyper-V has nothing like CBT right now and must rely on 3rd party storage filter drivers to perform the same task. This works, but adds another layer of complexity to Hyper-V and yet another 3rd party add-on that can fail. Sometimes these 3rd party drivers can even cause a CSV to go into Redirected Mode which will really hurt performance on the cluster.

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1020128

VHD/VHDX disks

- One cool thing about VHD and VHDX disks is that they’re easily mountable in any modern Windows OS. Simply open Disk Management, choose Attach VHD from the Action menu, and browse to the file’s location. Very easy way to connect up a VHD and grab data out of it.
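You can do the same thing from PowerShell on any Windows 8 / Server 2012 or newer box (the path is hypothetical):

# Attach a VHDX read-only, copy what you need from the drive letter that appears, then detach
Mount-DiskImage -ImagePath "D:\Backups\fileserver-data.vhdx" -Access ReadOnly
Dismount-DiskImage -ImagePath "D:\Backups\fileserver-data.vhdx"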

Various Hyper-V Storage Weirdness

- Can’t mount a local ISO or one from a SAN datastore in Hyper-V like you can in VMware. The ISO must either be on the host, on a network share, or in a Library Share in VMM, and when you do use a Library Share you’ll need to set up constrained delegation in AD for the Library server so the hosts can mount the ISO without copying it locally first! Much easier to mount up an ISO in VMware. (A quick sketch of mounting from a share follows this list.)

- Can’t hot add a SCSI controller to a VM in Hyper-V but have been able to in VMWare for a long, long time.

- Hyper-V still requires a virtual IDE controller to boot a Generation 1 virtual machine. Hyper-V has Gen 1 and Gen 2 VMs, roughly analogous to VMware’s virtual hardware versions. A Generation 1 VM must boot from a virtual IDE disk; only Windows 8/2012 or newer guest OS’s can be Generation 2 VMs, which can boot from a virtual SCSI disk.

- When you Live Storage Migrate a VM to another datastore, the folder on the old datastore isn’t deleted. First noticed this in Windows 2012 and figured it would be corrected in 2012 R2 but it wasn’t. Doesn’t affect anything but does make it confusing when you look at the folder structure inside a datastore.
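For the ISO point above, a minimal sketch of attaching one from a plain SMB share (share path and VM name are hypothetical; the host’s computer account needs read access to the share, plus constrained delegation if you’re managing the host remotely):

# Attach an ISO from a share to the VM's virtual DVD drive, then eject it when done
Set-VMDvdDrive -VMName "Web01" -Path "\\fileserver\iso\WindowsServer2012R2.iso"
Set-VMDvdDrive -VMName "Web01" -Path $null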
 
ADMINISTRATION

VMware Administration Requirements

- VMware Client required to manage individual hosts or vCenter (no 5.5 features available in legacy client)
- vSphere Web Client required to manage vCenter and all 5.5 features plus many of the latest versions of add on products like VDP

Hyper-V Administration Requirements

- Hyper-V Manager to manage individual or multiple hosts
- Failover Cluster Manager to manage individual or multiple clusters
- Virtual Machine Manager (VMM) optional to manage individual hosts and clusters

How They're the Same

Powershell!
Remote Administration tools
Wide array of 3rd party tools and plugins

How They're Different

vCenter and VMM (not versus)

- Every VMware admin is familiar with vCenter and what it brings to the table. vCenter not only serves as a management solution for vSphere but is also a requirement for many features (HA, VMotion, vDS, DRS, and FT, to name a few). vCenter can be managed through a legacy client you install on your workstation or through a web client. VMware has chosen to retire the legacy client, and all new features will only be manageable in the web client. While this is progress of a kind, many people still prefer the old client: the web client is slower, and for those of us who have used the legacy client for nearly 10 years it takes time to learn a new interface. Still, the web client is a good product and not very hard to learn.

- vCenter comes in Foundation or Standard Edition. The only difference between them is that Foundation can only manage 3 hosts maximum.

- A common misconception I hear when discussing Hyper-V with VMware folks is that VMM is the Hyper-V equivalent of vCenter, and this isn’t the case. While you can manage several Hyper-V hosts and clusters through VMM, VMM is NOT vCenter. VMM was designed to be a Private Cloud enablement platform first and a management portal second. This means that while you can perform many management tasks on your Hyper-V environment through VMM, its primary purpose is to enable a software defined datacenter by controlling all aspects of your Private Cloud: servers, storage, networking, load balancing, VM and application provisioning, content libraries, and patching across one or many datacenters. Also, very few of the advanced features in Hyper-V require VMM to function. Live Migration, Live Storage Migration, HA, Replication, and many other features do not require VMM.

- vCenter does not come with ESXi and is purchased separately per vCenter instance; it is not included in the vCloud bundles either. VMM is part of the System Center suite, which means you don’t just buy VMM, you buy everything in System Center and license each host to use it. vCenter is much cheaper than VMM because of this, but it’s not an apples to apples comparison since the entire vCloud Suite would more closely align to System Center.

Rolling Upgrades

- This is a massive pain point for Hyper-V. For those of us who have worked with VMware for years, we know how easy it is to upgrade to a new version. Upgrade vCenter, put a host in maintenance mode to VMotion VMs away, upgrade or reinstall, move on to the next host. Hyper-V does not offer this option. To go from Hyper-V 2008 R2 to 2012 or 2012 R2, you’ll have to build a new cluster either with new servers or by evicting an existing server to start a new cluster. On top of this, you cannot Live Migrate a VM from 2008/2008 R2 to newer Hyper-V which means the upgrade will require EVERY VM TO GO OFFLINE. If you’re using CSVs, the process isn’t too bad since you can swing an entire CSV to the new cluster quickly which means you simply power down each VM on that CSV, run a wizard to attach the CSV to the new cluster, then power the VMs back up. Without a CSV, you’re looking at manual export and imports or using a 3rd party product to move the VMs quickly such as Veeam Replication.

- Once you’re on Hyper-V 2012 or 2012 R2, upgrades become easier as you can Live Migrate from 2012 to 2012 R2. You still need to create a new cluster, and in order to Live Migrate a VM from one cluster to another, each cluster will need its own storage (see the shared-nothing Live Migration sketch below). If you don’t have the resources to evict and build a new cluster with its own storage, then you’ll have no choice but to incur VM downtime.

- After the upgrade, you’ll update the Hyper-V Integration tools just like you would VMware Tools in VMware.

- I know I said I wouldn’t mention upcoming releases, but because the lack of a rolling upgrade is a HUGE deal in Hyper-V, this limitation is going away in the next major release due Fall 2015.
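As a rough sketch of the 2012-to-2012 R2 path, here’s a shared-nothing Live Migration of a single VM that isn’t clustered (or has already been removed from its cluster role). Names and the destination path are hypothetical, and both hosts need Live Migration enabled with compatible processors (or processor compatibility mode on the VM):

# Move the VM and its storage to the new host with no downtime
Move-VM -Name "App01" -DestinationHost "NewHost01" -IncludeStorage -DestinationStoragePath "C:\ClusterStorage\Volume1\App01"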

Hot Add/Remove Virtual Hardware

- VMware allows you to hot add and remove most virtual hardware. However, Hyper-V only allows hot adding disks right now. You cannot hot add or remove a virtual NIC, CPU, virtual FC adapter, or SCSI controller.
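A minimal sketch of the one hot-add Hyper-V does allow (VM name and path are hypothetical):

# Create a new dynamic VHDX and hot-add it to a running VM's virtual SCSI controller
New-VHD -Path "D:\VMs\App01\data2.vhdx" -SizeBytes 100GB -Dynamic
Add-VMHardDiskDrive -VMName "App01" -ControllerType SCSI -Path "D:\VMs\App01\data2.vhdx"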

Host and VM Deployment

- Auto Deploy is a nice feature from VMware that allows you to quickly deploy VMware hosts. The hosts can be stateful or stateless, meaning they can either be installed to a hard disk or flash drive (stateful) or you can simply let ESXi pull its files over the network and run solely in RAM (stateless). Host Profiles are used to ensure the host is configured correctly. Some even go so far as to run a stateless-only environment, which means they only need to patch the base image and then reboot the hosts at their convenience, and each will boot with the latest patches installed. vCenter and Enterprise Plus licensing are required for Auto Deploy.

- VMM and Hyper-V offer host deployment as well using Windows Deployment Services and VHD templates. You create a template for your hosts not too differently than you would for a VM, PXE boot the host, and it pulls down the VHD from VMM and installs it to disk. Personally, I find other methods better at deploying hosts such as SCCM or MDT with Powershell, but the ability is there. Unlike Auto Deploy, VMM can do stateful deployments only. However, you can use VMM to create a Microsoft Failover Cluster between hosts, rather than using Failover Cluster Manager, which is nice.

- VMM also has the ability to deploy applications inside VMs created from templates. SQL even has its own template type, which allows you to create a true “SQL VM” template that not only deploys the new VM with a name and IP and joins it to the domain, but also automatically installs and configures SQL Server. Templates can likewise install IIS, Active Directory Domain Services, or any other Windows role or feature, or an application virtualized with App-V (another System Center product).

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=2005131
http://technet.microsoft.com/en-us/library/gg610653.aspx

vApps

- VMware can place multiple VMs in a logical container called a vApp. This allows you to place all VMs that pertain to a particular application together so you can control in what order the VMs start up or shut down.

- Hyper-V does not have anything equivalent to a vApp but there are Service Templates. Unlike vApps, Service Templates are not for managing multiple VMs but for deploying and updating multiple VMs. It’s essentially taking VM Templates a step further by deploying multiple VMs together. For example, you could create a Service Template for a Remote Desktop Services farm comprised of a RDS Broker, Gateway, and 2 Session servers. Not only can you deploy the VMs, but you can also configure the Service Template to allow for provisioning additional Remote Desktop Session servers when load is particularly high. Go into the Service Template, highlight the Session server portion of the Template, right click, and deploy another one. The Service Template can be set up to provision the new VM and automatically configure it and join the farm. When not needed, you can decommission the VM, too.

https://pubs.vmware.com/vsphere-55/...UID-E6E9D2A9-D358-4996-9BC7-F8D9D9645290.html
http://technet.microsoft.com/en-us/library/gg675074.aspx

Hyper-V Automatic VM Activation

- Hyper-V now allows for automatically activating other Windows Server 2012 R2 VMs if they’re running on a 2012 R2 host. Simply use the AVMA key (found on TechNet) and the VM will talk to the host through the Hyper-V Integration tools. “Hey host. Do you have an activated Datacenter license? Yes? Great, then I’m automatically activated.” Licensing the host with Windows Server Datacenter automatically licenses all Windows Server VMs running on that host. This feature is extremely useful when using Datacenter licensing, especially for VMs running in the DMZ since they won’t need to talk to Microsoft over the internet or an internal KMS server to activate.

http://technet.microsoft.com/en-us/library/dn303421.aspx
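Inside the guest it’s a one-liner; the key below is only a placeholder for the published AVMA key in the TechNet article above:

# Install the AVMA key; the guest then activates against the Datacenter-licensed host automatically
cscript.exe C:\Windows\System32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX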

Hyper-V Enhanced Session Mode

- Another handy feature is Enhanced Session Mode. It essentially allows you to RDP to a VM through the host it’s running on rather than having to rely on a legacy console connection. This means you can get a true RDP session to a VM even if you don’t have network access to it; all you need is network access to the host.

http://technet.microsoft.com/en-us/library/dn282274.aspx
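A minimal sketch (host and VM names are hypothetical; the guest needs to be 8.1/2012 R2 or newer with the Remote Desktop Services service running):

# Enable Enhanced Session Mode on the host, then connect with VMConnect
Set-VMHost -ComputerName "HV01" -EnableEnhancedSessionMode $true
vmconnect.exe HV01 "Web01"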

Performance Monitoring

- This is a pet peeve of mine. I love going into vCenter and easily seeing performance metrics in near real time for the VMs and hosts. VMM and Hyper-V Manager do not offer this and you’ll be forced to use Windows tools like Performance Monitor, SCOM, or Powershell to monitor your hosts’ or VMs’ performance (from outside the VM). There are 3rd party solutions as well such as Veeam One which can give a more “vCenter-ish” performance view.
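If you just need rough per-VM numbers from PowerShell, Hyper-V resource metering is built in (VM name is hypothetical):

# Start collecting, wait a while, then pull average CPU/RAM and total disk/network figures
Enable-VMResourceMetering -VMName "Web01"
Measure-VM -VMName "Web01"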

Security

- One advantage with ESXi is that it has a very small attack surface. There are effectively no viruses (correct me if I’m wrong) for ESXi and the hypervisor comes heavily locked down by default. You can take it a step further by enabling Lockdown Mode so the host can only be managed through vCenter.

- Hyper-V is running a Windows kernel which has MANY viruses out there for it. You can run Hyper-V with a full blown GUI just like any other Windows server or you can scale the GUI back so only Server Manager and other management MMCs are available. You can take it even further and use Hyper-V core which is a command line only interface. It is recommended to use Hyper-V Core to minimize Hyper-V’s attack surface and minimize the number of patches needed. I’d echo this with one caveat: if you’re not comfortable with Powershell or command line, stick with the GUI for starters. Once you get the hang of things, you can always put the host in maintenance mode, run a command or wizard, and uninstall the GUI and reboot or reverse the process and re-install the GUI.

- For patching Hyper-V, you can use traditional methods just like any other Windows box. You can also use Cluster Aware Updating, which will patch the cluster in its entirety by putting a host in Maintenance Mode, patching, rebooting, stopping Maintenance Mode, testing the host’s connectivity, then moving on to the next host automatically. VMware Update Manager can do this as well for VMware clusters, but I trust it a bit less based on personal experience. Not that it would crash the cluster, but an odd error would pop up and the update wouldn’t complete successfully, which means I’m more likely to have to babysit the updates. (A quick PowerShell sketch of trimming the GUI and kicking off a CAU run follows the links below.)

http://technet.microsoft.com/en-us/library/dn741280.aspx
http://blogs.technet.com/b/mspfe/ar...er-aware-updating-in-windows-server-2012.aspx
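A rough sketch of both ideas above (the cluster name is hypothetical, and the CAU command is normally run from a machine that isn’t a cluster node):

# Drop the GUI from a host once you're comfortable at the command line (reboots the host)
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

# Kick off a one-time Cluster Aware Updating run against the whole cluster
Import-Module ClusterAwareUpdating
Invoke-CauRun -ClusterName "HVCluster01" -MaxFailedNodes 1 -RequireAllNodesOnline -Force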

Overall Manageability and Architecture

- As a long time VMware guy, learning Hyper-V took some time. The way much of it works was pretty alien at first. Since I wasn’t a Windows guy either, it probably took even more time than it should have. This is a key differentiator between the two hypervisors. VMware is relatively easy to pick up and go. Many times I’ve sat with a client who knew little about VMware and, after a day of knowledge transfer, they’ve felt pretty comfortable with the day to day management tasks of the product. Hyper-V, on the other hand, has a steeper learning curve and more oddities that make you scratch your head. I’d say VMware is easy to learn but has many, many layers under the surface which take a lot of research and experience to master. Hyper-V is more difficult to learn and it, too, has layers beneath the surface that will require a great deal of time to master. On top of that, a good Hyper-V admin should also be a good Windows admin in general, since knowledge of AD and general Windows management is also key.

- That being said, Hyper-V is also a lot touchier than VMware is. Just like VMware in its earlier days (better make that swap partition big enough and give the console more memory!), Hyper-V is temperamental. If any of you work in the consulting field, you know the dread of meeting a client’s “VMware guy” for the first time. He may be great, but too often he doesn’t know a vmkernel from a vmnic. The kind of havoc someone like that can bring to a VMware environment is twice as bad with Hyper-V. Invest in competent personnel to manage Hyper-V or make sure to train the people you already have!

VM Conversion Tools

- VMware Converter has been out for a long, long time and is an indispensable tool in a VMware admin’s tool belt. It allows for easy P2V and V2V conversions and works great if you need to shrink a VM’s VMDK. Hyper-V has Microsoft Virtual Machine Converter (now version 3.1) which can do the same thing but can also convert a physical machine or VM to an Azure VM. But if you’re using SMB3 shares as your storage backend for VMs, you’re out of luck. I have yet to find ANY VM Conversion software that will P2V or V2V a VM directly to a SMB3 share. For now you’ll have to first convert onto a block storage device (host local storage or CSV) then Live Storage Migrate to your SMB3 share. However, lopo tells me Tintri will soon have a conversion product for their storage that will do this. Hallelujah!

http://blogs.technet.com/b/tommypat...available-for-download-p2v-support-added.aspx

Various Hyper-V Administration Weirdness

- You cannot rename a running VM in VMM, but can through Hyper-V Manager and Failover Cluster Manager. Huh?

- If you create a VM in VMM then want to use the Copy Cluster Roles Wizard to move the CSV that VM lives on to another cluster (like during a Hyper-V 2008 R2 upgrade to a new 2012 R2 cluster), the VM will disappear. You’ll have to use Powershell to get it back (one possible fix is sketched at the end of this list). Say what?

- Cannot PXE boot a Generation 1 VM with a Synthetic Network Adapter (the higher performing virtual NIC), only a Legacy Network Adapter (much slower). Only Generation 2 VMs (which require the VM to be running Windows 8/2012 or newer) can PXE boot from a Synthetic Network Adapter. Seriously?

- Remember how awesome SMB3 and RDMA Direct Live Migrations were? Throw as many 10Gb or 40Gb connections at your Live Migration network as you want and watch VMs migrate at the speed of RAM. Unless it’s a Generation 1 VM, in which case it’ll only use a single link; only Generation 2 VMs benefit from this. Doh! Keep this in mind if your environment is mostly Windows 7/2008 R2 or older guest OS’s!

- I'm sure I'll think of more....
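For the Copy Cluster Roles issue above, one way to get the VM back with PowerShell is to re-register it as a clustered role; a rough sketch (names are hypothetical, and your exact recovery steps may differ):

# Make the orphaned VM highly available again on the new cluster
Import-Module FailoverClusters
Add-ClusterVirtualMachineRole -VMName "App01" -Cluster "NewCluster01"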
 
Things I find interesting:
#1. Drivers - It's often hard to find drivers for ESXi for non-enterprise grade hardware, but with Hyper-V being Windows Server, it seems fairly easy to get it running on non-enterprise hardware if you should so desire.

#2. SMB - Hyper-V can take advantage of some cool storage and networking features with SMB 3.0 including scale out file servers, native NIC teaming, multipath, failover.

#3. Management - vCenter seems like it should be more flexible, and last I checked you couldn't have active/standby vCenter servers (has that changed?), but I also find the configuration for management of a Hyper-V cluster to be needlessly complex. Having to configure constrained delegation / Kerberos double hopping, firewall rules, and just configuring the cluster itself is annoying.

#4. Management - System Center = ughhhhh. I am not a systems engineer, I'm not a Microsoft expert, but the little experience I have trying to install and configure that stuff makes me hate it. I set up an entire Hyper-V cluster at home, starting from scratch, and just getting VMM installed was so painful. Also, every time you move up (2008, 2008 R2, 2012, 2012 R2) you need a new management OS just to get all the features out of the Hyper-V tools.

vCenter = meh. vCenter is alright and I haven't messed with the new versions but in the past the things I've noticed mostly revolve around disaster recovery limitations and the need to license that stuff. There were some funny methods people came up with to have a backup vCenter server.

#5. Costs - For a Windows shop starting from scratch, Hyper-V wins, I think. If you're not starting from scratch, or not a Windows shop, then probably VMware? This one is iffy.
 
1. True
2. That's coming with the next version of ESX and NFS4.
3. That's coming, eventually ;)
 
Awesome, looking forward to this. I've used vSphere for years and have been meaning to get more experience with Hyper-V so that I can give our clients the best objective information (and continue to work on stamping out XenServer ;) ).
 
This is going to be good :). I have used ESX clusters for years now, and had to build up a Hyper-V cluster at an old employer, with SCCM / VMM. It was a bad experience, but that was a couple of years ago. It'll be good to get hopefully mostly neutral, fact based info.
 
Also, for purposes of this discussion let's try to steer clear of future offerings. Both products have new features road mapped but we're months out from the next major release of either platform and no one should be implementing either for months after that.

When the new vSphere and Hyper-V are out next year then we'll update the thread with new info.

I'm going to try hitting each subject once a week starting with compute soon....
 
Since you mention SRM, I'm assuming that any "add-ons" are fair game. Are you going to include 3rd party tools and such as well (free and paid)?
 
Awesome, looking forward to this. I've used vSphere for years and have been meaning to get more experience with Hyper-V so that I can give our clients the best objective information (and continue to work on stamping out XenServer ;) ).

why the xen hate?
 
Interested, just because. I don't leg hump one over the other but everything at the house is hyper-v/windows/linux. Years back it was esxi/windows.
 
I would've also reserved space to discuss comparisons around Enablement, Support, Community, etc, which is where Microsoft falls completely flat.:D
 
why the xen hate?

Dead product, more or less. Good hypervisor, open-source, but no momentum or growth behind it at all any more, and no "new" customer interest. It's dead, jim. XenApp/XenDesktop are still alive and well though.

Right now it's KVM/ESX/Hyper-V/Docker for the "virtualization" world. Xen/Jails/Zones/Qemu are either specialty products for very specific use cases, or dead and dying.
 
Compute info added!

I'm curious if there are any oddball CPU requirements that are different between the two? For instance for the Windows 8(.1) version of Hyper-V you must have SLAT but not for the Server version. Are there any CPU features that one can take advantage of (or require) that the other cannot?
 
I'm curious if there are any oddball CPU requirements that are different between the two? For instance for the Windows 8(.1) version of Hyper-V you must have SLAT but not for the Server version. Are there any CPU features that one can take advantage of (or require) that the other cannot?

Added CPU requirements.
 
I like the integration of hyper-v with windows guests. For smaller deployments, I also think it is way quicker to deploy and manage. Finally I like the ease of a normal windows server as the host for file management and driver setup.
 
I like the integration of hyper-v with windows guests. For smaller deployments, I also think it is way quicker to deploy and manage. Finally I like the ease of a normal windows server as the host for file management and driver setup.

I'll be talking about that in the Administration section. :)
 
Can VMWare do dynamic memory on Windows 7 VMs? Hyper-V cannot unless it's 7 enterprise or 8+ plus I've never really noticed dynamic memory doing much on Hyper-V for the Ubuntu guests I have.
 
I use both VMware and Hyper-V. Hyper-V is more affordable as it includes advanced features such as clustering, Hyper-V Replica, and site recovery at no additional cost. 90% of vendors who offer virtual appliances for VMware also have a Hyper-V equivalent (Barracuda, F5, Cisco, EMC, CheckPoint, etc.). Cisco Nexus 1000V switches are free for Hyper-V, but cost money for VMware.

System Center Datacenter is only 4K per Socket and includes the entire System Center Suite for unlimited VMs.

Each hypervisor has its benefits and weaknesses, but when low cost (relative), ease of management (Windows admins are easy to find), integration into existing infrastructure, and overprovisioning are required, Hyper-V is usually the solution.

I am currently working on a high profile project that was planned for VMware 5.5.1; we had to change to Hyper-V in order to meet the client's requirements without compromise.

1. MSCS on Hyper-V supports Shared VHDX
A) Both support raw disk, disk pass-through, iSCSI, or virtual Fibre Channel, but it's more complex and negates many benefits of virtualization
B) VSAN, not supported by MSCS
C) Shared VMDK Supported, but you lose DRS and storage DRS

2. CPU Scheduling (Hyper-V supports a threaded model not requiring Lock Step)
A) VMware Gang Scheduler is a deal breaker if you are working with large VMs and over provisioning.


http://www.virtuallycloud9.com/inde...-microsoft-hypervisors-work-at-the-cpu-level/

Hyper-V supports Azure Site Recovery both On-Premise to Azure and On-Premise to On-Premise.

http://azure.microsoft.com/en-us/documentation/articles/hyper-v-recovery-manager-configure-vault/

I actually use System Center Orchestrator for my site recovery automation.

http://blogs.technet.com/b/privatec...-with-system-center-for-planned-failover.aspx
 
Can VMWare do dynamic memory on Windows 7 VMs? Hyper-V cannot unless it's 7 enterprise or 8+ plus I've never really noticed dynamic memory doing much on Hyper-V for the Ubuntu guests I have.

VMware can't do Dynamic Memory at all; it's a Hyper-V feature. It only works if the guest OS supports Dynamic Memory or memory hot add.

If the Ubuntu guest is new enough, Dynamic Memory should be supported.

http://technet.microsoft.com/en-us/library/dn531029.aspx
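Enabling it is straightforward once the guest supports it; a minimal sketch (VM name and sizes are hypothetical, and the VM must be off to toggle Dynamic Memory on or off):

# Turn on Dynamic Memory with startup/minimum/maximum values
Set-VMMemory -VMName "Ubuntu01" -DynamicMemoryEnabled $true -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB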
 
ESXi 5.5 requires 4GB of RAM, 6GB for VSAN.

A) VMware Gang Scheduler is a deal breaker if you are working with large VMs and over provisioning.
Wait, what?

C) Shared VMDK Supported, but you lose DRS and storage DRS
Storage drs doesn't work, but DRS sure does.
 
Let me Clarify:

The gang scheduler has issues with large VMs and sharing overcommitted CPUs, since VMware has to wait for all of the VM's vCPUs to have free pCPUs before it can execute. Example: I have 7 hosts, each with 48 cores (4 x 12) and 1TB of memory, and multiple VMs with 32 vCPUs. I saw significant wait times because 32 pCPUs had to be free in order to execute.

http://blogs.vmware.com/vsphere/2014/02/overcommit-vcpupcpu-monster-vms.html



DRS is not supported on 5.5.1 with a shared VMDK; MSCS is only supported with RDMs.

The guests must be locked to specific hosts and can only move when powered down.

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1037959

http://longwhiteclouds.com/2013/09/16/vsphere-5-5-windows-failover-clustering-support/

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1004617
 
In theory it should, but not according to the documentation and my initial testing. I only have 3 months to bring 2 complete sites online with Site resiliency from the ground up.

vMotion Requirements

To enable the use of DRS migration recommendations, the hosts in your cluster must be part of a vMotion network. If the hosts are not in the vMotion network, DRS can still make initial placement recommendations.

To be configured for vMotion, each host in the cluster must meet the following requirements:


■ The virtual machine configuration file for ESX/ESXi hosts must reside on a VMware Virtual Machine File System (VMFS).

■ vMotion does not support raw disks or migration of applications clustered using Microsoft Cluster Service (MSCS).

■ vMotion requires a private Gigabit Ethernet migration network between all of the vMotion enabled managed hosts. When vMotion is enabled on a managed host, configure a unique network identity object for the managed host and connect it to the private migration network.

I expect this to be resolved with ESXi 6 and/or better guest support. As for the gang scheduler, that is a downside of using a *nix based hypervisor, and I'm not sure it will be resolved any time soon.

Also, CPU hot add is a great feature of VMware, but unless you are running the latest and greatest guest OSs you still have to reboot.

http://www.petri.com/vsphere-hot-add-memory-and-cpu.htm
 
vMotion does not support raw disks or migration of applications clustered using Microsoft Cluster Service (MSCS).

Yes, without changing the default settings you can't share a VMDK between two clustered guests and still VMotion. However, VMware does support enabling the multi-writer flag on a VMDK which would allow VMotion to work so long as the clustered guests and applications also support this so corruption inside the VMDK doesn't occur.

As for Gang Scheduling, this is just how the CPU Scheduler in VMware works and they've made a lot of improvements on it since the early days. Like I recently added to the CPU post, a properly architected environment usually isn't going to experience CPU issues before running out of RAM, disk, or network first. Apples to apples, a VMware environment will have higher CPU Ready time than an exactly duplicated Hyper-V environment, but a VMware VM with 100ms of CPU Ready vs a Hyper-V VM with 1ms will not have a noticeable performance impact to users.
 
Let me Clarify:

The gang scheduler has issues with large VMs and sharing overcommitted CPUs, since VMware has to wait for all of the VM's vCPUs to have free pCPUs before it can execute. Example: I have 7 hosts, each with 48 cores (4 x 12) and 1TB of memory, and multiple VMs with 32 vCPUs. I saw significant wait times because 32 pCPUs had to be free in order to execute.

http://blogs.vmware.com/vsphere/2014/02/overcommit-vcpupcpu-monster-vms.html
Yes, if you're sharing it with many other guests. Properly configured, NUMA mapped, etc., you can make that work perfectly with a 1:1 mapping of vCore to pCore, and do it with effectively zero overhead too if you're willing to do a bunch of tuning.

So, there are some discrepancies there between what actually works and what doesn't - I'll double check a few things.

Fixed.



Depends on whether he means a shared VMDK with multi-writer enabled or SCSI bus sharing. With the former, VMotion would still work but with the latter it wouldn't, correct?

So, all the docs still say you can't vmotion with shared busses - but oddly enough, you actually can. ;) Unless they disabled that again...

In theory it should, but not according to the documentation and my initial testing. I only have 3 months to bring 2 complete sites online with Site resiliency from the ground up.

vMotion Requirements

To enable the use of DRS migration recommendations, the hosts in your cluster must be part of a vMotion network. If the hosts are not in the vMotion network, DRS can still make initial placement recommendations.

To be configured for vMotion, each host in the cluster must meet the following requirements:


■ The virtual machine configuration file for ESX/ESXi hosts must reside on a VMware Virtual Machine File System (VMFS).

■ vMotion does not support raw disks or migration of applications clustered using Microsoft Cluster Service (MSCS).

■ vMotion requires a private Gigabit Ethernet migration network between all of the vMotion enabled managed hosts. When vMotion is enabled on a managed host, configure a unique network identity object for the managed host and connect it to the private migration network.

I expect this to be resolved with ESXi 6 and/or better guest support. As for the gang scheduler, that is a downside of using a *nix based hypervisor, and I'm not sure it will be resolved any time soon.

Also, CPU hot add is a great feature of VMware, but unless you are running the latest and greatest guest OSs you still have to reboot.

http://www.petri.com/vsphere-hot-add-memory-and-cpu.htm

A few wrong things in that list - VMFS isn't a requirement (obviously - NFS works too, as does VSAN and VVOL). And MSCS was, at least briefly, supported with vMotion as well (although that was a quiet feature - one VMware didn't talk about much). I'm wondering if it was a mistake or something that didn't quite work like they wanted and they changed after the fact (with 5.5 GA you could, at least in the lab I used to have).
 
So, all the docs still say you can't vmotion with shared busses - but oddly enough, you actually can. ;) Unless they disabled that again...

I don't think it was ever enabled. If you enable SCSI bus sharing VMotion simply isn't possible since it's to allow VMs on the same host to share the VMDKs. VMotioning the VM would break that. There is a virtual and physical mode for bus sharing and the VMware documentation says virtual allows sharing on the same host, physical on any host, but I'm pretty sure I've seen VMotion fail for either setting. :p

I've had a few clients with this enabled and it was a massive PITA. One had P2V'd a Windows 2000 MS cluster and had it turned on. Even though they had evicted a node from the cluster and all that was left was the single VM, the bus sharing had to be left alone or it would break the application.

EDIT: Even thinking about this gives me a headache.

Here's more documentation I found on it.

https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.mscs.doc/GUID-E794B860-E9AB-43B9-A6D0-F7DE222695A1.html?resultof=%2522%256d%2573%2563%2573%2522%2520%2522%256d%2573%2563%2522%2520

Like Sheriff Buford T. Justice said, my rule of thumb is "you can think about it. But don't do it."
 
Oh no, it was - trust me on that, I filed the issue to engineering at the time to say "WTF, why is this suddenly 'working', and is it supposed to be?" The answer we got was: "We think it's supposed to be working all right". At one point in time recently, you could indeed migrate shared bus clusters >_<
 
Oh no, it was - trust me on that, I filed the issue to engineering at the time to say "WTF, why is this suddenly 'working', and is it supposed to be?" The answer we got was: "We think it's supposed to be working all right". At one point in time recently, you could indeed migrate shared bus clusters >_<

Another example of a vendor quietly releasing a new feature and letting the customers beta test it? :)
 