Microsoft has released its own embedded hypervisor for free.
http://www.microsoft.com/servers/hyper-v-server/default.mspx
BOOO, their licensing model is horrible. They limit you to 4 VMs on Enterprise?
Nothing has swayed me from VMware. I still feel it blows the doors off most other products.
It already has advantages over ESX. ESX can't do network load balancing or SAN FC balancing. What kind of BS is that for an enterprise-level product? Hyper-V snapshotting is better IMHO as well.
What?
http://blog.scottlowe.org/2006/12/04/esx-server-nic-teaming-and-vlan-trunking/
You most certainly can team NICs in ESX. That's been possible for quite a while (since ESX 2, I believe).
FC doesn't support MPIO load balancing, but you can split paths across LUNs to load balance manually. Failover is fully supported.
Round robin load balancing for FC is currently experimental - http://www.vmware.com/pdf/vi3_35_25_roundrobin.pdf
Actually, there's nothing stopping multiple VMs running within VMware ESX from forming an NLB cluster, and standard MSCS clustering is supported again as of 3.5 Update 2. There is, however, a specific method you need to follow to set it up, and a white paper is available on the subject.
Additionally, true in/out network load balancing on the ESX host is achievable by enabling etherchannel and configuring the vswitch to forward traffic based on IP hash rather than originating port ID. You don't need to do this, however, since the built-in round-robin load balancing works quite well in most situations and etherchannel can be more pain than it's worth.
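For anyone curious what that etherchannel + IP-hash setup roughly looks like, here's a sketch. The vmnic numbers, vswitch name, and switch port-channel numbering are all made-up examples, not a reference config:

```shell
# On the ESX 3.x service console: make sure the vswitch has both uplinks.
# esxcfg-vswitch is the standard ESX 3.x CLI; vmnic1 is an example NIC.
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -l    # list vswitches and confirm both uplinks are attached

# The "Route based on ip hash" teaming policy itself is set in the
# VI Client (vSwitch Properties > NIC Teaming > Load Balancing).

# On the Cisco side: bundle the ports into a STATIC etherchannel.
# ESX doesn't negotiate LACP/PAgP here, so the channel must be mode "on":
#
#   interface Port-channel1
#    switchport mode trunk
#   interface range GigabitEthernet0/1 - 2
#    channel-group 1 mode on
```

Note the static "mode on" requirement — a dynamically negotiated channel on the physical switch is the classic way this setup silently fails.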
But I digress, I partly agree with your summation of the situation between Microsoft and VMware - in that Microsoft has some distinct advantages over VMware, not the least of which is the ability to cut deals on licensing at an enterprise level. Still, being certified in all three of the major players in the virtualization space, and having used them each in-depth, I see advantages and disadvantages to each. Yet the only player 100% ready for the enterprise at present remains VMware's ESX; but that doesn't mean things won't change over the next year to year and a half. They most certainly will.
Another thing that bugs me is VCB. It just seems like a great product that was released before it was finished. Sure, I can write a bunch of PowerShell scripts to schedule LAN-free backups, but some integration into VC would be nice.

Actually, I would argue that its non-integration is one of its strong points. Yeah, I couldn't believe that it didn't have a GUI and I had to edit text files when I first started playing with it (when it first came out), but the fact that it's a stand-alone product means I can install it virtually anywhere. For instance, if I am running a compatible product, I can install it directly onto the media/master server and, should I be using FC, back up directly from fibre to my tape library. I can even install it on desktops and let people do over-the-network "dumps" of their virtual machines using simple vcbmounter scripts. In fact, my only real desire for VCB is that they make it play nice with Linux.
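For reference, the kind of one-liner vcbmounter "dump" being described looks something like this. The host name, credentials, VM name, and export path are all placeholders, and this assumes the VCB 1.x command-line syntax:

```shell
#!/bin/sh
# Hypothetical full-VM export via vcbmounter (VCB 1.x style).
# All values below are placeholders for illustration only.
VC_HOST="virtualcenter.example.com"   # VirtualCenter (or ESX) host
VM_NAME="myvm"                        # display name of the VM to dump
DEST="/backups/myvm"                  # local export directory

vcbmounter -h "$VC_HOST" -u backupuser -p 'secret' \
           -a name:"$VM_NAME" -r "$DEST" -t fullvm
```

Wrap that in a scheduled task or cron job per VM and you have the over-the-network dump workflow mentioned above.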
Meh. I'm not sure where you don't see the cost savings. If you buy Datacenter you can run as many Windows VMs as you want and your virtualization platform is free. You have to buy Windows licenses for your guests on ESX anyway. SCVMM probably makes managing them easier, but then you're comparing VC to SCVMM, not ESX to Hyper-V Server 2008.

I see the cost savings with both Hyper-V and Xen; however, if you look at the bigger picture I see increased administrative effort, re-training, and real potential for data corruption and disastrous environment configurations, with Hyper-V especially. For instance, the ability to use "base" disks and run multiple servers from one underlying base in Hyper-V might sound like a benefit -- but what happens if that one base disk becomes corrupt? And don't tell me it won't happen... because it will. And what about the fact that, because NTFS is not a clustered file system, you have to migrate all of the VMs on one LUN from host to host at the same time with Hyper-V? There are third-party clustered file systems out there, but that adds additional cost. Another way around it would be to put one or two VMs per LUN, but that could quickly become a nightmare to manage in a large environment.
MPIO will be nice, but there just aren't many systems pushing more than 4Gb/sec I/O throughput.

That's especially true if you are virtualizing the right things; virtualization is not about cramming everything that was on multiple physical platforms onto a single one, it's about making smart investments and choices in your environment -- going after low-hanging fruit, etc. Do not virtualize anything that already takes full advantage of its hardware. By that rule, things that would require clustering, such as enterprise-level SQL or Exchange, are never good virtualization candidates. I didn't rule them out for small business, but in the enterprise there are much better ways of providing high availability for SQL/Exchange than to virtualize them.
Do not virtualize anything that already takes full advantage of its hardware. By that law, things that would require clustering - such as Enterprise-level SQL or Exchange, are never good virtualization candidates. I didn't rule them out for small-business but in the enterprise there are much better ways of providing high availability for SQL/Exchange than to virtualize them.
I know plenty of people that virtualize one app on an ESX server.

I do as well - from Citrix to HPCC. From one ESX host to fifty.
Enterprise Exchange? Absolutely can virtualize that with great results. Very large-scale servers are done every day.

Just because you -can- do something doesn't mean you should, and just because something was done in the past or by another company doesn't mean it's the best choice. It all depends on sizing and your individual acceptable risk. Up until a couple of weeks ago you took a huge risk as to whether or not Microsoft was going to support you should you ever need to call them. They've since modified that, though they still have some caveats regarding configuration and hypervisor choice. But even if you set licensing and support aside, which many companies do not, there are many more pieces of the puzzle to consider: virtual hardware limitations, contention, available network I/O, the hypervisor layer itself, and limitations of the support contracts of the hypervisor you chose - for instance, being told that clustering is "unsupported" in one version and "supported" in the next.
You gain a lot with virtualization, not just consolidating servers. No-downtime maintenance... the ability to move to faster hardware with a simple VMotion... etc. It's all about your architecture and design. Architect for scale.

Absolutely. But that doesn't make it the best choice for every application everywhere.
The "don't virtualize anything that already utilizes its present hardware" rule is a very safe and often correct piece of guidance for anyone thinking of bringing virtualization into their datacenter, regardless of their hypervisor choice. I've been in over a hundred datacenters just in the past two years alone, and every single one of them had so much low-hanging fruit (servers only 5-15% utilized) that it would make no business sense to virtualize business platforms like Exchange right away. Even if your Exchange infrastructure was near EOL, it would make better business sense to wait - even if it meant having to invest additional money up-front to upgrade. The cost savings from virtualizing ~200-300 physical servers would far outweigh, in most cases, the cost of keeping Exchange physical.
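The back-of-the-envelope math behind that claim can be sketched like this. The server count and consolidation ratio below are assumptions for illustration, not figures from any real datacenter:

```shell
# Hypothetical: 250 under-utilized physical servers, and a conservative
# 15 VMs per ESX host for workloads running at 5-15% utilization.
PHYSICAL=250
RATIO=15

# ESX hosts needed (rounded up) and physical boxes eliminated.
HOSTS=$(( (PHYSICAL + RATIO - 1) / RATIO ))
ELIMINATED=$(( PHYSICAL - HOSTS ))

echo "ESX hosts needed:   $HOSTS"
echo "Physical boxes cut: $ELIMINATED"
```

Even at a modest 15:1 ratio, 250 servers collapse onto 17 hosts — hundreds of boxes' worth of hardware, power, and cooling, which is why the low-hanging fruit dwarfs the savings from forcing Exchange into a VM.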
And hey, let's not forget that Exchange 2007 can be heavily modularized. There's no reason you couldn't virtualize the less intensive boxes and leave the more I/O-driven mailbox servers to a physical cluster or two or three, depending on the size and seat count.
I never said it was impossible - just that simply because you can doesn't mean it's the best choice. But each to his own...
That's just the point. Very few systems utilize their hardware to a point where you wouldn't consider virtualization. Consider Exchange 2003. It's idle on new hardware (dual quad-core w/ 32GB RAM). It's a 32-bit app and can't use more than 4GB of RAM. So why not put it on an ESX host with multiple Exchange mailbox servers?

CPU and memory resources have not classically been the downfall of an Exchange 2003 infrastructure, though; disk and network I/O have. Again, it comes back to how big a company you're talking about. I'm talking about "Enterprise", which I define as 10,000-30,000-50,000-75,000 seats, not a company with a hundred or even a thousand mailboxes.
I don't think anyone is saying virtualize EVERYTHING, but in the vast majority of organizations you can get REAL close for x86-based applications. I'd much rather have two virtualized Exchange servers than a single larger physical server. You just gain so much more flexibility. The key there is "dynamic infrastructure".

I never said a single larger physical server - I said there are better ways of providing redundancy to enterprise Exchange and SQL than to virtualize them. In other words, I always recommend redundancy - it's just that for large implementations I would much rather see, and therefore recommend, a physical cluster rather than a single VM or multiple VMs.
Most people aren't worried about support contracts. I only heard that from a few clients. The vendors will support you. If they don't, they'll be out of business in no time. At the last VMware User Group meeting it was asked how many people were denied support by MS for being virtualized. No one raised their hand.

Difference in perspective, I suppose. I work for and support a lot of Fortune companies. They are always worried about support. They want to know that when they are down and losing millions if not billions of dollars, they will have people on the phone and on-site. They don't want to even take the risk that somewhere, someone might decide to go by the letter of the law. Sure, it might mean that when the fog clears, the company that decided not to provide support is history -- but that doesn't fix the issue.
Hardware limitations, I/O, and contention are all architecture decisions. Architect to scale. There is plenty of knowledge out there on correct and highly scalable virtual environments. I work very closely with EMC and they have an abundance of reference architecture and build documents. Not hard to find.

I work very closely with a lot of vendors too, including EMC and VMware, and there are some things (as we've both said) they don't recommend virtualizing, even with the perfect infrastructure.