Hyper-V vs VMware

Or someone deciding that a limitation was strange, "fixing" it, and not telling anyone... and it perhaps getting rolled back later :p
 
Pretty sure there is a free version of the Cisco 1000v for vSphere too; although, to use it, you do have to have distributed switches. Perhaps that was what was meant, vs. Cisco's pricing.
 

Yes there is a free version for VMware and it does require Enterprise Plus licensing.
 
What about VirtualBox? *dons flame suit* :p
Until I figure out a few hardware things, I'm stuck on VirtualBox. Mainly wireless adapter compatibility.
 
I've been using Hyper-V for a while (we use it in our company) and just recently started using VMware with a customer.

So far I like Hyper-V better, if only because it feels more familiar. In the coming months I hope I have enough experience to see which is better and in what case.
 
Hyper-V 2012 R2 is better than ESXi 5.5.

For Hyper-V 2012 and below, ESXi is better.

That's kind of a broad-brush statement. I think the comparison is more nuanced than that (and thanks to Child Of Wonder for pointing out a lot of those nuances).

For some users your claim here might be true. For others, not so much. It depends on a lot of details about your application, your requirements, and your experience.
 

Exactly. Like if you want to use iSCSI or not. Right now I am working with a client to add more iSCSI paths to their Hyper-V host and I want to scoop my eyes out with a spoon. I hate MS iSCSI..... :mad:
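
For anyone fighting the same battle, here's a rough PowerShell sketch of adding a second iSCSI path with the in-box initiator cmdlets. The IPs and IQN are placeholders, and it assumes the MPIO feature is already installed:

# Have the Microsoft DSM claim iSCSI LUNs so MPIO actually manages the extra path
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register a second target portal on the second storage NIC
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.10 -InitiatorPortalAddress 192.168.20.101

# Log in over that portal as an additional, persistent MPIO path
Connect-IscsiTarget -NodeAddress "iqn.2010-01.com.example:lun1" -TargetPortalAddress 192.168.20.10 `
    -InitiatorPortalAddress 192.168.20.101 -IsMultipathEnabled $true -IsPersistent $true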
 
Both hypervisors have a slew of new features planned for their releases next year. We're keeping the discussion based on what's available today, but will update when the new products are released.
 
Informative thread!

EDIT: shared this thread with several colleagues and co-workers
 
Child,

Thinking of moving from napp-it as my primary storage to Server 2012 R2. Thinking of using four 2TB hard drives as a 3+1 in tiered Storage Spaces with two columns, with two 120GB SSDs as a tier, then passing it back to ESXi via iSCSI. Mainly for home experimentation, but I have 5 family members at home who share internet/file storage, etc., and I'm finding it a pain in the ass to do permissions correctly on ZFS when it would be ten times easier on R2. Thoughts? Not overly concerned about speed, and I figure this will be fairly fast for its intended use.
 

Not really the place for this discussion, should be in the Storage forum.

Short answer, you can't do 3+1 in tiered storage spaces. You're limited to mirrored and simple vdisks only. Your 4 drives would have to be 2+2.
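
If it helps, here's a rough PowerShell sketch of what that 2+2 tiered mirror would look like on 2012 R2. Pool/tier names and tier sizes are just examples, and it assumes a single Storage Spaces subsystem on the box:

# Pool every eligible disk (the four HDDs and two SSDs)
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Define the SSD and HDD tiers
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Mirrored, tiered vdisk. If I recall correctly, with only two SSDs a mirror is limited
# to one column, since the column count has to be satisfied by both tiers.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredMirror" `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,1800GB `
    -ResiliencySettingName Mirror -NumberOfColumns 1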

Also, for iSCSI I wouldn't use Microsoft iSCSI Target. Just grab Starwind's free iSCSI target. It actually supports VAAI. You can also use vdisks out of Storage Spaces with Starwind iSCSI images for your ESXi datastores.
 

If you connect to a Solaris server and a Windows server from your Windows desktop, you will see no difference in ACL and inheritance settings (besides the fact that Windows applies deny rules before allow rules, while Solaris respects the ACL order; this only matters for deny rules). From the Solaris side there are some additional ZFS restrictions with aclinherit and aclmode, plus a mapping option for Unix users. If you keep the default pass-through mode and ignore mappings, it is 100% Windows-alike.
 
Let me start off by saying that yes, I am more familiar with Hyper-V in Windows Server 2012 R2 than with ESXi, but I have worked with both and as has been stated here, there are strengths and limitations on both sides.

Now, that being said, almost every server rollout that I've done for customers has been with Hyper-V. The customers that I work with are small businesses, usually offices with ten or fewer employees but sometimes up to fifty. In situations like this, budget is the most limiting factor in determining what can be done with the server, and that's probably the primary reason why we stick with Hyper-V. You have no additional licensing cost this way. Your one license of Server 2012 R2 Standard includes all the features of Hyper-V at no additional cost, and you have two virtual machines included with that licensing. If we wanted to instead use ESXi, we would be severely handicapped in capabilities and versions using just the free version, and purchasing vSphere with all of the live migration capabilities that already come included with Windows would cost an additional several thousand dollars that the customer just isn't willing to spend.

When it comes to evaluating the use of either Hyper-V or ESXi for my own usage or recommendations, it really comes down more to usage than features. After all, a lot of the features are going to be the same or nearly the same between the hypervisors. Again, while Hyper-V requires more host resources to run, it's also more familiar for those used to a Windows environment. Not always, but oftentimes my customers need to access their server to do things like check up on backups, updates, even just restarting. Without a Windows environment this would be nearly impossible for pretty much any of my customers, who are usually not skilled enough in this area to comfortably do these tasks.

There are some things I really do love about ESXi, though, and wish could be done with Windows Hyper-V. For example, the ability to remotely manage the hypervisor (or a group of hypervisors) just through a web browser. The downside, of course, is that the web management console requires a fully licensed vSphere setup, not the free ESXi. I also like the ease of recovery for an ESXi system if the primary OS installation or storage medium fails.

Basically, if ESXi is installed on a basic flash drive and something happens with the install or the flash drive, you just use Rufus to make a bootable USB with the ESXi installer, install ESXi to a new flash drive, configure the network again, and then use the vSphere Client to reimport the virtual machine information from your datastore. Recovering from the same situation in Windows Server 2012 R2 Standard with Hyper-V would require loading the recovery manager, locating a recent backup from Windows Server Backup (or another similar backup system), and hoping that the backup is good and that it will actually recover.

Unfortunately I've had bad luck with some recoveries on Hyper-V. On more than one occasion, the backups haven't been good for some reason (Windows Server Backup just won't recover the image) or the recovery will complete, but the virtual machine afterwards will be inoperable and you have to recreate one or more of your virtual machines. All said, it's much more of a hassle from what I've found than what should be possible with ESXi.

Long story short, if you have a small environment with just one or two servers, your best option for features and cost is ALMOST always going to be Hyper-V. But if you have the budget and have two or more servers, then ESXi with vSphere licensing is probably going to be a good option, with all the features of Hyper-V plus somewhat easier management and configuration of things such as clustering and shared storage.
 

I've heard the same sentiment with my clients, too. When money is an issue, many of them look at the feature sets of Hyper-V and VMware, see that both have the same "core" features like HA, VMotion, etc., look at the cost of licensing, and say "Hyper-V is good enough."

There's no doubt that VMware is the feature leader in this market and I love the product. But I don't think we live in the days of "if you're virtual, that means you're running VMware" anymore. More people are taking a long hard look at alternatives like Hyper-V, and while there are more features in VMware, a lot of customers don't even use them. Now that Hyper-V is closing the gap on the feature race and even has some unique features of its own, for many clients weighing the cost versus benefit of each product, Hyper-V literally is "good enough." It's not a steaming pile of shit anymore like it was in 2008 and 2008 R2.

That being said, like VMware in the early days when not many knew how to architect and manage it well, Hyper-V is even more sensitive to misconfiguration and cowboy admins who don't know what they're doing, which makes it even more important to have truly knowledgeable people out there helping clients and telling it like it is, or the experience with Hyper-V can go south real fast.
 
Child made a very good point above about misconfiguration in Hyper-V.

To me, learning Hyper-V was pretty easy because it was in a Windows-based environment that I was familiar with. When I started looking at virtualization about five or six years ago, it wasn't something that was easy to get a grasp of. When I was just out of school, nobody talked about this new thing called virtualization, and there weren't many options or things to play with outside of the datacenter and thousands upon thousands of dollars of investment costs. I was able to get a license of Windows Server 2008 R2 through Dreamspark and that's where I started learning Hyper-V. At the same time I was also looking at ESXi on the same physical hardware.

ESXi seemed a little more finicky to get working right (driver support, etc.), but once it was set up it was very simple to configure and run virtual machines. I've learned from using Hyper-V that it can be a little more complex to get your virtual networking and configuration right sometimes. For instance, sometimes you have to use the right type of legacy hardware for certain OS installs, especially with Linux. Sometimes VLAN information doesn't work right. And if things aren't operating 100% optimally, other things may not work right with the Hyper-V platform core, such as performing backups with Windows Server Backup.

So yes, misconfiguration and continual maintenance become more of an issue with Hyper-V than with ESXi. As an example, we set up two servers at almost the same time for two clients: one wanted Server 2012 (non-R2) and the other chose ESXi for the physical host (just the free version). We had to do some workarounds to get backups running properly on the ESXi system (there wasn't a good means that we could find at the time to back up the full virtual machines from the server with ESXi, while Windows had several options that could do that).

After two years in service, the Windows Server 2012 system has been taken down for maintenance, some changes, and installing updates four times, which means about a half day of business downtime each time that is done. The ESXi system has been rebooted once following a power outage where the virtual machine didn't want to start up properly.
 
I had a bunch of Xen, VMware and Hyper-V boxes running while I learned virtualization...

everything is on Hyper-V now, it's free, it does a lot more than ESXi free does, and it's not nearly as painful to use... I don't miss VMware *at all*
 
/me is waiting for storage to pounce :)

That's next. Hoping to get that out in the next week. Each post takes me 2-3 hours to type from scratch so finding free time is the hard part.

In the meantime, correct me anywhere else you see fit. I want this thread to spur discussion!
 
That one is hyper-v only.

True, Hyper-V offers an actual virtual FC HBA, so the guest OS thinks it has an FC card and is natively accessing FC storage, while VMware uses NPIV to let you present a physical RDM directly to the guest, but it still appears as a SCSI disk to the guest.

I'll change the post to reflect that.
 

and it requires an RDM, so the host still sees it as well.
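
For reference, the Hyper-V side of this is just a virtual SAN on the host plus a synthetic FC HBA on the VM. A minimal PowerShell sketch, with made-up SAN/VM names and assuming the virtual SAN already exists:

# List the virtual SANs defined on the host
Get-VMSan

# Give the guest a virtual FC HBA tied to the "Production" virtual SAN
Add-VMFibreChannelHba -VMName "SQL01" -SanName "Production"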
 
STORAGE

VMware Storage Requirements

- Hardware RAID, local drive, SAN, or USB/SD flash to boot from
- Minimum of 1GB boot device, 5.2GB required for VMFS volume and 4GB scratch space
- VM Datastores can be local hardware RAID, local disk(s), SAN, or NFS NAS

https://pubs.vmware.com/vsphere-55/...UID-DEB8086A-306B-4239-BF76-E354679202FC.html

Hyper-V Storage Requirements

- Hardware or software RAID, local drive, or SAN to boot from
- Minimum of 32GB boot device for Windows with full GUI, 8GB minimum for Hyper-V Core but 20GB recommended
- VM Datastores can be local hardware or software RAID, local disk(s), SAN, or SMB3 share
- Technically you can find a way to boot Hyper-V from USB flash or SD but I wouldn’t recommend it

http://technet.microsoft.com/en-us/library/dn610883.aspx

How They're the Same

MPIO, Including Round Robin
Storage Offloading (VAAI for VMware and ODX for Hyper-V)
Storage VMotion (requires vCenter in VMware)
Block Protocols - FC, FCoE, iSCSI
Thick and Thin Virtual Disks
Pass-through Disks and Shared Virtual Disks for VM Clustering
Use NPIV to present FC LUNs directly to guest (VMware presents as physical RDM which guest sees as SCSI disk, Hyper-V as virtual FC card guest uses to access FC LUNs just like a physical server)
VM Snapshots
Using Labels to Identify Datastores Based on Performance, SAN, etc. (requires vCenter for VMware and VMM for Hyper-V)

How They're Different

File Protocols - NFS3 vs SMB3

- VMware uses NFS as a file protocol for datastores while Hyper-V uses SMB3. NFS v3 is the same NFS we’ve all come to know and love while SMB3 is a new protocol introduced with Windows 2012.

- NFS v3 does not support any sort of MPIO. You’ll still want to provide at least 2 uplinks to the vSwitch your NFS vmkernel lives on, but it won’t load balance across those two uplinks unless you change things up, like mounting datastores with different IPs (NFS Datastore1 mounts via IP1, NFS Datastore2 mounts via IP2 on the NAS, and so on) or DNS Round Robin (which I wouldn’t do since I don’t want NFS relying on DNS), but you’ll still only get a single uplink’s speed when accessing a single datastore, even if you use a vDS and LACP. Because of this, 10Gb networking is definitely a big plus when using NFS. Compared to SMB3, however, NFS is very simple to set up and manage.
I'll second the "don't do the DNS trick" recommendation here - it's a bad idea and some really goofy things can happen if you try. Also, NFS is stupid fast these days - especially on 10G, so there's not a performance problem for lacking MPIO. NFSv4 will fix this in the next release of ESX.
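
To illustrate the multiple-IP approach, a quick PowerCLI sketch (assumes PowerCLI is installed and connected to vCenter or the host; IPs, exports, and names are placeholders):

# Mount two datastores from the same NAS, each through a different target IP
$esx = Get-VMHost "esx01.lab.local"
New-Datastore -VMHost $esx -Nfs -Name "NFS-DS1" -NfsHost "192.168.10.11" -Path "/vol/ds1"
New-Datastore -VMHost $esx -Nfs -Name "NFS-DS2" -NfsHost "192.168.10.12" -Path "/vol/ds2"
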
- SMB3 does perform load balancing and path failover. The more network adapters you throw at SMB3 on the client and NAS, the more bandwidth you can get. For example, in my lab each Hyper-V host has 4x 1Gb dedicated connections for access to my SMB3 NAS which also has 4 links. Because SMB3 actually load balances the traffic across all 4 NICs, I can get 4Gb of bandwidth. Inside a VM, I can read and write at 4Gb to its virtual disk (yay all SSD NAS!). If one NIC or path goes down, I’ll still get 3Gb without interruption.

- SMB3 support is still emerging on a lot of 3rd party storage products, and even those that support it may not support all the features yet. NetApp comes to mind in that they support path failover but not load balancing yet (at least, as of 4 months ago when I last checked). Also, you may experience some quirks when trying to set up 3rd party SMB3 storage. EMC VNX supports SMB3 but it isn’t as simple as creating a share. You’ll need to go into the CLI of the VNX to enable some features and create the share in a specific way. On top of all this, you’ll need to ensure share and NTFS permissions are all set properly. You’ll also want to use SMB Multichannel Constraints (a PowerShell cmdlet) to limit which interfaces are used to access the SMB3 shares; otherwise, if your NAS is also serving storage on the management subnet your host uses, it will use that path to access the NAS as well.
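
A minimal sketch of that constraint, run on the Hyper-V host. The NAS name and NIC aliases are just examples:

# Only use the two dedicated storage NICs when talking SMB3 to NAS01
New-SmbMultichannelConstraint -ServerName "NAS01" -InterfaceAlias "Storage1","Storage2"

# Verify which NICs SMB Multichannel is actually using
Get-SmbMultichannelConnection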

- To make matters worse, some 3rd party products have difficulty working with VMs living on SMB3 shares. Up until several months ago, Veeam backups didn’t work properly if you used SMB3 storage exclusively. Even some of Microsoft’s own products, like Windows Server Backup, won’t work. You also can’t perform a P2V or V2V directly onto a SMB3 share. You’d first have to convert the server and store on a block device then Live Storage Migrate it to a SMB3 share.

- Both NFS and SMB3 support offloading such as VAAI and ODX which enables supported storage arrays to handle certain tasks rather than the host, like cloning files.

http://www.vmware.com/files/pdf/techpaper/VMware-NFS-Best-Practices-WP-EN-New.pdf
http://technet.microsoft.com/en-us/library/jj134187.aspx
http://blogs.technet.com/b/yungchou...s-server-2012-hyper-v-over-smb-explained.aspx

Block Protocols – VMFS vs CSV

- In the block protocol arena, I feel VMware has a big advantage here. The VMFS file system was built specifically for virtualized workloads on block storage. Windows still uses NTFS, which is a great file system but it wasn’t built with virtualization in mind. As such, Microsoft had to create Cluster Shared Volumes (CSV), so NTFS could be shared between multiple Hyper-V hosts. CSVs are basically a file system over top of NTFS so Hyper-V can use it as a shared block datastore.

- A CSV works by allowing all the members of a Hyper-V cluster to simultaneously read and write to a shared block device, but one of the cluster members owns the metadata of the file system. This works fine under normal conditions, with only a very small performance hit for a cluster member writing to a CSV it does not currently own. However, if a cluster member loses access to a LUN, or during certain operations (initiated intentionally or unintentionally), the CSV can go into Redirected Mode. This means all access to the block device MUST go through the cluster member that owns the metadata. Essentially the other cluster members access the block device via SMB over the CSV network. As you can imagine, performance in this scenario is very poor. Bear in mind, how often Redirected Mode occurs has been reduced in 2012 R2 vs earlier versions of Hyper-V, but it is still a consideration, whereas it is not in VMware.
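
If you want to see whether a CSV is running direct or redirected, 2012 R2 exposes it in PowerShell. A quick check, run on any cluster node, looks something like this:

# Shows Direct, FileSystemRedirected, or BlockRedirected per CSV, per node
Get-ClusterSharedVolumeState

# The CSVs themselves, with their current owner nodes
Get-ClusterSharedVolume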

- CSVs do have two advantages: CSV Encryption and Caching.

o CSVs can be encrypted using BitLocker, which comes natively with Windows Server. This can be helpful if your company requires everything to be encrypted. With Hyper-V you can do so right through the OS rather than encrypting at the guest level or using a 3rd party solution.
you can encrypt VMFS as well, assuming the array supports native encryption, or you have an inline encryption device (I'll point out that I really don't recommend inline devices - it's quite amusing when they go sideways).
o You can also use host RAM as read cache for a CSV. This works great for avoiding VDI boot storms or simply taking some of the IO off the storage array. Technically you can allocate up to 80% of the host’s RAM as cache, but Microsoft doesn’t recommend more than 64GB. Bear in mind, this amount of cache is per CSV, so if you set the cache to 2GB and your cluster has 4x CSVs, then each host will allocate 8GB of cache (2GB times 4 CSVs). A quick PowerShell sketch of the cache setting follows below.

o VMware does not offer RAM caching unless you purchase VMware Horizon (formerly View) in which it is designed for helping combat boot storms.
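
Here's that CSV cache sketch mentioned above. On 2012 R2 it's a single cluster property set in MB (2GB here is just an example):

# Reserve 2GB of host RAM for CSV read cache (value is in MB)
(Get-Cluster).BlockCacheSize = 2048

# Confirm the setting
(Get-Cluster).BlockCacheSize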

- Both VMware and Hyper-V support offloading such as VAAI and ODX for block datastores as well so long as the storage supports it.

http://technet.microsoft.com/en-us/library/dn265972.aspx

vFRC or local SSD swap caching

- VMware does not offer their own local RAM caching, but they do offer local SSD caching, called vFRC. This feature is available only in Enterprise Plus, but enables you to use local SSD space as read cache for VMDKs. vFRC is enabled on a per VMDK basis so you’ll need to manually manage which VM and VMDK get how much cache. It’s a powerful tool if you want to accelerate the reads on some VMs and keep their heavy IO off the storage array.

- In VMware you can also use local SSD for VM swap files. This way, if a host runs out of RAM and is forced to use swap space to serve a VM, that swap can come from local SSD and not shared storage. When VMs are forced to swap on shared storage it kills performance. At least this way the VMs will still suffer a performance hit from having to swap, albeit to fast SSD, but it won’t affect every other VM on the shared storage whose hosts are NOT swapping.

- Hyper-V does not offer local SSD caching, but you can manually select where a VM’s swap file goes, which could be local SSD if you wanted, though that same local path needs to exist on all the hosts in the cluster.
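
On the Hyper-V side the knob is the Smart Paging file location per VM. A minimal sketch, assuming the same local SSD path exists on every host in the cluster:

# Point the VM's Smart Paging file at local SSD
Set-VM -Name "VM01" -SmartPagingFilePath "S:\SmartPaging"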

https://pubs.vmware.com/vsphere-55/...UID-07ADB946-2337-4642-B660-34212F237E71.html
https://pubs.vmware.com/vsphere-55/...A85C3EC88.html?resultof=%22%73%77%61%70%22%20

vSAN

- VMware offers an add-on product called vSAN which enables you to use local SSD and hard drives in the hosts as a shared datastore. This eliminates the need for a shared storage array and is an excellent product.
Separate licensing as well again, and some limitations in the product (great for VDI though!)
- VMware even offers a product called the vSphere Storage Appliance (lopo can correct me here but I think it’s eventually going away) which uses virtual appliances to virtualize the host’s storage to leverage it as a shared datastore whereas vSAN actually runs in the hypervisor itself. It, too, is an add-on product.
Dead as a doornail, and let's hope the door doesn't hit it on the way out.
- As of now, Microsoft’s official stance is that they do not believe in hyper-convergence because compute and storage resources do not scale the same. Their focus is on the Scale Out File Server cluster which works great as a highly available SMB3 storage option for Hyper-V virtual machines but is not hyper-convergence (like Simplivity or Nutanix). 3rd parties like Starwind do offer products that enable hyper-convergence on Hyper-V but MS has no official plans to offer anything of their own.
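
Not hyper-convergence, but for completeness, standing up a SOFS role and an SMB3 share for VM storage is roughly this. Names and paths are placeholders, and the Hyper-V hosts' computer accounts still need share and NTFS permissions:

# Add the Scale-Out File Server role to an existing cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Share out a folder on a CSV for the Hyper-V hosts (create the folder first)
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\Shares\VMs" `
    -FullAccess "LAB\HV01$","LAB\HV02$","LAB\Hyper-V Admins"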

http://www.yellow-bricks.com/2013/08/26/introduction-vmware-vsphere-virtual-san/
https://pubs.vmware.com/vsphere-55/...UID-7DC7C2DD-73ED-4716-B70D-5D98D02F545B.html

VMware Storage IO Control and SDRS

- VMware offers two cool storage features: Storage IO Control and Storage DRS. Storage IO Control acts as a Quality of Service mechanism for all the VMs accessing a datastore. By using shares, you can grant certain VMDKs higher priority over others for when a datastore is experiencing periods of high latency (30ms is the default). This feature can be highly beneficial by curtailing “noisy neighbors” from hogging all the IO on a datastore and choking out the other VMs. Hyper-V offers nothing like Storage IO Control except the ability to set minimum and maximum IOPs on each virtual disk.
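
For what it's worth, those Hyper-V per-disk IOPS limits are set like this in 2012 R2. The VM name and values are examples, and I believe Hyper-V counts IOPS in 8KB increments:

# Reserve 100 IOPS and cap the first SCSI disk at 500 IOPS
Set-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 500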

- VMware also has Storage DRS. Like regular DRS, Storage DRS can automatically assign VMs to datastores based on available capacity and can automatically move VMs between datastores based on performance imbalance. You can create a cluster of block or file datastores (can’t mix and match block and file), so when you Storage VMotion or create a VM, you can simply point at the datastore cluster and let SDRS decide where it should go. However, bear in mind that in some scenarios, such as when using a storage array that does tiering, you don’t want the automated VM migrations to occur since this will appear as hot data to the storage array, causing it to needlessly tier data that doesn’t need to be. You can also use Storage DRS to put a datastore in maintenance mode and, like a host in maintenance mode, all the VMs on the datastore will automatically be evacuated so you can be sure nothing is running on it.

- Hyper-V does offer the ability to label datastores and assign them to a cloud. It will also assign new VMs to the datastore with the most available free space out of the datastores contained within that label, but it does not take performance into account nor does it monitor datastore performance and proactively migrate VMs around to balance the load.

http://www.yellow-bricks.com/2010/09/29/storage-io-fairness/
http://www.yellow-bricks.com/2012/05/22/an-introduction-to-storage-drs/

Software RAID

- VMware will not install to software RAID or “fake” RAID. For most hardware this isn’t an issue since many servers come with hardware RAID of some sort. Windows does support software RAID if you’re using supported drives and Windows itself can create software RAID after installation so you can mirror your boot disk.

VMware boot from SD/USB flash

- VMware can install to SD cards or USB flash drives. This is very convenient when you don’t want to waste hard drives on the ESXi host, and once ESXi boots it’s just running in RAM anyway, so even if the flash card/drive fails, ESXi will continue to run; it just can’t boot up again. While you can install Windows on the same media, I would strongly advise against it. Even Hyper-V Core is more disk intensive than ESXi and performance in the host OS will suffer. Being able to boot to SD or USB flash is a great bonus with VMware.

Converting disks from Thick to Thin and vice versa

- Both hypervisors offer thin and thick provisioned virtual disks. However, only VMware allows you to change a virtual disk from thick to thin or thin to thick while a VM is powered on by using Storage VMotion. In Hyper-V the VM has to be powered off to perform the conversion.
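
The Hyper-V offline conversion looks like this. The VM has to be powered off, and Convert-VHD writes out a new copy rather than converting in place (paths are placeholders):

# Convert a fixed (thick) VHDX to a dynamic (thin) copy; swap the types to go the other way
Convert-VHD -Path "D:\VMs\VM01\disk0.vhdx" -DestinationPath "D:\VMs\VM01\disk0-dynamic.vhdx" -VHDType Dynamic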

Hyper-V Differencing Disks

- Hyper-V does offer a type of virtual disk that VMWare does not: the differencing disk. A differencing disk is really just a snapshot of a parent virtual disk. You can use a differencing disk to test changes on a VM without actually affecting the real data. When you’re done, just delete the differencing disk. There is a performance hit for using a differencing disk just like for snapshots and you don’t want to keep it around too long as the more writes occur, the bigger the differencing disk gets. It can be handy for VDI deployments, though, if the storage array can handle the load and you’re not using them as persistent desktops.
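
Creating one is a one-liner (paths are placeholders):

# Child disk that records writes while gold.vhdx stays untouched
New-VHD -Path "D:\VMs\test-child.vhdx" -ParentPath "D:\VMs\gold.vhdx" -Differencing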

- VMware Horizon’s linked clone technology is similar to Differencing disks but can only be used for VDI deployments, not to mention purchasing Horizon.

http://technet.microsoft.com/en-us/library/hh368991.aspx

CBT – Changed Block Tracking

- VMware has a feature called Changed Block Tracking or CBT. Many backup products rely on CBT to tell them what blocks have changed in a VMDK since the last backup so the VM can be backed up much more efficiently and without needing the software to scan the VM’s file system. Hyper-V has nothing like CBT right now and must rely on 3rd party storage filter drivers to perform the same task. This works, but adds another layer of complexity to Hyper-V and yet another 3rd party add-on that can fail. Sometimes these 3rd party drivers can even cause a CSV to go into Redirected Mode which will really hurt performance on the cluster.
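
For reference, CBT on the VMware side is just a per-VM advanced setting. Most backup products flip it on for you, but a hedged PowerCLI sketch would be:

# Enable CBT on a VM; it takes effect after the next power cycle or snapshot create/remove
New-AdvancedSetting -Entity (Get-VM "VM01") -Name "ctkEnabled" -Value $true -Confirm:$false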

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1020128

VHD/VHDX disks

- One cool thing about VHD and VHDX disks is that they’re easily mountable in any modern Windows OS. Simply go to Disk Management, choose to attach a VHD, and browse to its location. Very easy way to connect up a VHD and grab data out of it.
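
Or from PowerShell on Windows 8/2012 or newer, either of these works (the path is a placeholder):

# On a box with the Hyper-V module installed
Mount-VHD -Path "D:\VMs\data.vhdx" -ReadOnly

# On any modern Windows machine via the Storage module
Mount-DiskImage -ImagePath "D:\VMs\data.vhdx"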

Various Hyper-V Storage Weirdness

- Can’t mount a local ISO or one from a SAN datastore in Hyper-V like you can in VMware. Must either be in the host, on a network share, or in a Library Share in VMM, and when you do use a Library Share you’ll need to set up Constrained Delegation in AD for the Library server so the hosts can mount the ISO without copying it locally first! Much easier to mount up an ISO in VMware.

- Can’t hot add a SCSI controller to a VM in Hyper-V but have been able to in VMWare for a long, long time.

- Hyper-V still requires a virtual IDE controller to boot a Generation 1 virtual machine. Hyper-V has Gen 1 and Gen 2 VMs, something analogous to VMware’s virtual hardware versions. If a VM is Generation 1 it must boot from a virtual IDE disk. Only Windows 8/2012 or newer guest OSes can be Generation 2 VMs, which can boot from a virtual SCSI disk.
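
Creating a Generation 2 VM, and tacking on an extra SCSI controller while it's still powered off since that can't be hot added, looks roughly like this. Names, sizes, and paths are examples:

# Gen 2 VM: UEFI firmware, boots from a virtual SCSI disk, no legacy IDE
New-VM -Name "VM02" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VMs\VM02\disk0.vhdx" -NewVHDSizeBytes 60GB

# Extra SCSI controllers have to be added while the VM is powered off
Add-VMScsiController -VMName "VM02"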

- When you Live Storage Migrate a VM to another datastore, the folder on the old datastore isn’t deleted. First noticed this in Windows 2012 and figured it would be corrected in 2012 R2 but it wasn’t. Doesn’t affect anything but does make it confusing when you look at the folder structure inside a datastore.

Nice job!
 
Is either going to have a non-Windows management client in the near future? Love both, but I hate having to find a Windows box or a Windows VM to manage with, without a whole bunch of infrastructure.

Oh well, it gives me a reason to maintain the RDS box. :)

Not a real issue, just a pet peeve. I spend a lot of time around Linux and Macs.

Child of Wonder, great write-ups. I'm having a blast reading through them
 
The Web client is the way of the future with VMware. I don't use anything but Windows so how well the web client works on an Apple I have no clue. :)

As for Hyper-V, that will likely always be Windows management clients and tools only. Same goes for clients provisioning their own VMs; App Controller uses Silverlight.

Thanks for the feedback. I'm happy people are finding these interesting and helpful. At least I get to pour all this information in my head out somewhere. Was hoping I would get to work with Hyper-V and VMware both in my job I recently started but it's not turning out that way.
 

At work the web client works OK on Apple machines and you can get it to work on Linux in Chrome, but it takes some voodoo.

I've just been too lazy at home to set up vCenter. I already had an RDS server so I've just been leaning on that more and more.
 
Nothing about SR-IOV in networking? Last I checked (which was quite a while ago) VMware didn't support migrating guests with SR-IOV enabled where Hyper-V did. I think Hyper-V was easier to get SR-IOV working on as well but that was a long time ago when BIOS and drivers were still being updated to support it.
 

It's easy to get working on both, but not a huge use case for enterprise yet.
 

I did briefly mention it in the High Availability section as Hyper-V allows Live Migration of VMs using SR-IOV, but it should have its own blurb under the Networking section.
 