ESXi vs Hyper-V for new lab setup, VM questions

Acejam (Weaksauce) · Joined Nov 4, 2009 · Messages: 65
Hey all,

I'm in the process of setting up a lab at home to tinker around with some VMs and learn a few things. I already use Server 2008 R2 w/ Hyper-V at work, and also use SCVMM to manage all of it. However, that's a very basic testing environment, and we don't really get into the advanced features of SCVMM.

I have my old box lying around at home, which has some decent specs, so I thought about making it a VM host. I managed to install ESXi 4.0 via USB to my local SATA HDD and played around with it for a bit. I got bored and then installed Server 2008 R2 and added the Hyper-V role. After getting bored with Windows, I'm thinking about switching back to ESXi and learning how to set up iSCSI.

My initial thought was to set up a few Ubuntu VMs and learn to use their Ubuntu Enterprise Cloud setup (looks pretty nifty). In addition, I'd also like to learn how to use iSCSI, and it seems like Openfiler is a great way to do this (not sure if it would run on Hyper-V). My specs are as follows:

Intel Core2Duo E6600 (stock 2.4GHz for now; I have stable 3.2GHz settings though)
6GB RAM
Asus P5B-E Mobo
Onboard Intel ICH8R Disk Controller
1 x 250GB SATA HDD
2 x 1TB Seagate HDD

Unfortunately, it appears that ESXi won't support the RAID function of my ICH8R controller. However, Windows Server 2008 R2 does fine, and one of the benefits to running Windows natively is that I could RAID0 the two large 1TB disks, and then backup my desktop to it. Another thought would be to run ESXi, and then create a Server 2008 backup VM, and just give it a huge virtual disk.

Thoughts?
 
Well ..., Ubuntu and Enterprise don't belong in the same sentence, imho anyway.
Backups don't belong on RAID0.

It's correct that ESX won't support Intel local RAID.

At the end of the day the question is: why? Why do any of this? The answer to that determines which product you should use. If you want to test stuff that you can't easily test at work, but it's for work, then Hyper-V would seem to be the way to go. If you're looking to get out of your current job/system, then ESX would be better, to get you some experience with it. If you're just screwing around, then it really doesn't matter what you install; it could be Xen or KVM or whatever.

The hardware is fine for testing. Performance may tank if you have a lot of I/O-intensive VMs, but it's not like it matters whether you have to wait 1 or 4 seconds for something to happen.
 

Yes, I'm well aware of the Ubuntu/enterprise thing. I just wanted to test it out, since I'm a big fan of Ubuntu desktop/server for dev/coding/etc.

I already have backups of my desktop's RAID5 array on an external HDD. This would be a 2nd backup, which I don't even really need. All of my "super important" data has been burned to DVDs.

This has nothing to do with my work; I'm merely trying to create a cool home lab where I can throw up a VM quickly to test various scenarios and things. I'd like to learn a bit more about ESXi for sure, and I think the challenge of setting up an Openfiler iSCSI target, and then having ESXi use it, would be a good start.

As for the hardware - my desktop is high-end enough for me as it stands now, I'm just looking to use what I already have lying around.
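For anyone following along with the Openfiler idea, the ESXi side of the exercise is roughly this (a hedged sketch from the 4.0-era console/remote CLI; the target IP 192.168.1.50 and the adapter name vmhba33 are placeholders, since the software iSCSI vmhba number varies by host):

```shell
# Sketch only; verify adapter names on your own host first
esxcfg-swiscsi -e                          # enable the software iSCSI initiator
vmkiscsi-tool -D -a 192.168.1.50 vmhba33   # add the Openfiler box as a dynamic discovery target
esxcfg-rescan vmhba33                      # rescan so the exported LUN shows up
```

After the rescan, the LUN should appear under storage adapters in the vSphere Client, where you can format it as a VMFS datastore.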
 
ESXi is a PITA to get going on non-certified hardware... sometimes you get lucky but mostly it just fails miserably.

Hyper-V is much more forgiving, though it does not support Linux guests except for RHEL and other enterprise-level Linux OSes... convenient, right? You've already paid for Hyper-V, but now you have to pay for the guest OSes instead of using open source (CentOS, Fedora, etc.).

Both have their ups and downs.
 
So, using both... I will say that generally speaking, ESXi does a much better job hosting whatever you throw at it, OS-wise. Hyper-V + FreeNAS, for example, yields Legacy NIC-only performance, which is really bad. Integration services for Hyper-V can be rough. For example, Ubuntu 10.04 had Hyper-V integration services installed by default... but if you wanted to use the GUI, you couldn't use a mouse. On the other hand, setting up Windows-based VMs is really easy with Hyper-V on 2008 R2.

Now on the flip side, discounting the Windows ecosystem is probably a bad idea. For example, I found a few Mellanox InfiniBand dual-port adapters for $40 each. In 2008 R2: download drivers, double-click the install package, everything works. OpenSolaris (as a VirtualBox host) does not like the Mellanox MemFree adapters, and does not like Intel ICH-based "RAID". ESXi has just been a PITA to get working with the adapters and is a lot pickier regarding hardware. I would offer that most hardware vendors make sure their hardware works on Windows at this point.
 
I would have to agree: most Linux/Unix OSes are not huge fans of Hyper-V, though Hyper-V does support a number of them. I've had much better luck with VMware in regard to Linux. Although I do run CentOS with Nagios under Hyper-V, I feel there will be a lot of improvements with SP1, hopefully some in the hardware support department (a Legacy Adapter resolution would be nice).
 
I got ESXi up and running again without issue. I used the mkesxiaio script to create a bootable USB flash drive, which contained a custom oem.tgz for my Asus P5B-E's Atheros L1 Gigabit NIC.

Worked like a charm, and I've now got my 2x1TB drives pooled into a single datastore using an extent. The base VM files are also running on an old WD 250GB HDD.
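For anyone wanting to repeat the Atheros L1 trick: the oem.tgz is just a tarball that gets overlaid onto the ESXi ramdisk at boot. A rough look at what's inside (layout from memory of the 4.0-era community packages, so treat the paths and the atl1 driver name as assumptions; 1969:1048 is the Attansic/Atheros L1's PCI vendor:device ID):

```shell
# Unpack a community oem.tgz to see how the driver gets wired in
mkdir oem && tar xzf oem.tgz -C oem
cat oem/etc/vmware/simple.map
# A line like the following maps the NIC's PCI IDs to the driver module:
#   1969:1048 0000:0000 network atl1
# Repack after any edits:
tar czf oem.tgz -C oem .
```

The repacked oem.tgz then goes back onto the USB stick, which is what scripts like mkesxiaio automate.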
 