Microsoft releases its FREE hypervisor

BOOO, their licensing model is horrible. They limit you to 4 VMs on Enterprise?
 
Nothing has swayed me from VMware. I still feel it blows the doors off most other products.
 
BOOO, their licensing model is horrible. They limit you to 4 VMs on Enterprise?

No, you get 4 free OS licenses with Server 2008 ENT. That means you buy one license for Server 2008 ENT, run Hyper-V, and you can run 4 guests without any additional cost.
 
Nothing has swayed me from VMware. I still feel it blows the doors off most other products.

VMware ESX is good. Better than Hyper-V, but Hyper-V is a great product for a first-generation product. It already has advantages over ESX. ESX can't do network load balancing or SAN FC balancing. What kind of BS is that for an enterprise-level product? Hyper-V snapshotting is better IMHO as well.
 
It already has advantages over ESX. ESX can't do network load balancing or SAN FC balancing. What kind of BS is that for an enterprise-level product? Hyper-V snapshotting is better IMHO as well.

What?

http://blog.scottlowe.org/2006/12/04/esx-server-nic-teaming-and-vlan-trunking/

You most certainly can team NICs in ESX. That's been possible for quite a while (since ESX 2, I believe).

FC doesn't support MPIO, but you can split paths to manually load balance. Failover is fully supported.
Round robin load balancing for FC is currently experimental - http://www.vmware.com/pdf/vi3_35_25_roundrobin.pdf
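The difference between manually pinning LUNs to paths and round-robin can be sketched in a few lines (the path and LUN names below are hypothetical, and this is a conceptual model, not ESX's actual path-policy code):

```python
# Sketch: two ways to spread LUN traffic across FC paths.
# Path names and LUN labels are made up for illustration; real ESX
# path policies are configured per LUN, not written in code.

from itertools import cycle

paths = ["vmhba1:0:1", "vmhba2:0:1"]  # two HBA paths to one LUN

# Manual "load balancing": statically pin each LUN to a preferred path.
luns = ["lun0", "lun1", "lun2", "lun3"]
pinned = {lun: paths[i % len(paths)] for i, lun in enumerate(luns)}

# Round-robin: rotate every I/O across the available paths instead.
rr = cycle(paths)
def next_path():
    return next(rr)

print(pinned)  # half the LUNs prefer each path
```

The manual scheme balances only as well as your up-front guess about per-LUN load, which is the scaling complaint raised below; round-robin rotates blindly but needs no per-LUN planning.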
 
What?

http://blog.scottlowe.org/2006/12/04/esx-server-nic-teaming-and-vlan-trunking/

You most certainly can team NICs in ESX. That's been possible for quite a while (since ESX 2, I believe).

FC doesn't support MPIO, but you can split paths to manually load balance. Failover is fully supported.
Round robin load balancing for FC is currently experimental - http://www.vmware.com/pdf/vi3_35_25_roundrobin.pdf

Blindly throwing packets down multiple paths is not load balancing. It's round-robin at best. And manually specifying storage paths to particular ports doesn't scale very well. I'm not saying these features don't help at all. I'm saying I expect more from a third-generation product.

We are a 100% ESX server company with hundreds of VMs. Our VMware enterprise license was in the millions. Yet we are currently 100% licensed for Hyper-V. I'm just saying Hyper-V has some advantages and a ways to go in other areas. However, I expect Microsoft to continue to close the gap. VMware's free rein over the market is over.

The platform most VMware systems virtualize is Windows. I think Microsoft has some better insight into how to squeeze the most out of their own OS. VMware is going to be looking over their shoulders from here on out. I'm taking my VCP next week.
 
Actually, there's nothing stopping multiple VMs running within VMware ESX from forming an NLB cluster, and standard MSCS clustering is supported again as of 3.5 Update 2. There is, however, a specific method you need to follow to set it up, and a white paper is available on the subject.

Additionally, true in/out network load balancing on the ESX host is achievable by enabling EtherChannel and configuring the vSwitch to forward traffic based on IP hash rather than originating port ID. You don't need to do this, however, since the built-in round-robin load balancing works quite well in most situations, and EtherChannel can be more pain than it's worth.
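The practical difference between the two vSwitch policies can be sketched like this (a simplified model with made-up NIC names; VMware's actual hashing implementation differs):

```python
# Sketch: how a vswitch might pick an uplink NIC for outbound traffic.
# Illustrative only; not the real ESX teaming algorithm.

import zlib

uplinks = ["vmnic0", "vmnic1"]

def by_port_id(vm_port_id: int) -> str:
    # Originating-port-ID policy: all traffic from a given VM's virtual
    # port sticks to one uplink, so a single VM never exceeds one NIC.
    return uplinks[vm_port_id % len(uplinks)]

def by_ip_hash(src_ip: str, dst_ip: str) -> str:
    # IP-hash policy (requires EtherChannel on the physical switch):
    # the uplink depends on the src/dst pair, so one VM's different
    # conversations can spread across multiple NICs.
    h = zlib.crc32(f"{src_ip}->{dst_ip}".encode())
    return uplinks[h % len(uplinks)]

# One VM (port 7) always lands on the same uplink with port-ID policy,
# while IP-hash may split its flows by destination.
print(by_port_id(7), by_ip_hash("10.0.0.5", "10.0.0.9"))
```

The design trade-off matches the text above: IP-hash buys per-flow spreading at the cost of coordinating EtherChannel on the physical switch.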

But I digress, I partly agree with your summation of the situation between Microsoft and VMware - in that Microsoft has some distinct advantages over VMware, not the least of which is the ability to cut deals on licensing at an enterprise level. Still, being certified in all three of the major players in the virtualization space, and having used them each in-depth, I see advantages and disadvantages to each. Yet the only player 100% ready for the enterprise at present remains VMware's ESX; but that doesn't mean things won't change over the next year to year and a half. They most certainly will.
 
Actually, there's nothing stopping multiple VMs running within VMware ESX from forming an NLB cluster, and standard MSCS clustering is supported again as of 3.5 Update 2. There is, however, a specific method you need to follow to set it up, and a white paper is available on the subject.

Additionally, true in/out network load balancing on the ESX host is achievable by enabling EtherChannel and configuring the vSwitch to forward traffic based on IP hash rather than originating port ID. You don't need to do this, however, since the built-in round-robin load balancing works quite well in most situations, and EtherChannel can be more pain than it's worth.

But I digress, I partly agree with your summation of the situation between Microsoft and VMware - in that Microsoft has some distinct advantages over VMware, not the least of which is the ability to cut deals on licensing at an enterprise level. Still, being certified in all three of the major players in the virtualization space, and having used them each in-depth, I see advantages and disadvantages to each. Yet the only player 100% ready for the enterprise at present remains VMware's ESX; but that doesn't mean things won't change over the next year to year and a half. They most certainly will.

Their support for MSCS (even in U2, where it no longer requires having the OS VMDK on local storage) is weak at best. Limited to 2 nodes and a 32-bit OS. Also there is no support for Server 2008 failover clusters (U3?) because 2008 doesn't support parallel SCSI in failover clusters. VMware doesn't support clustering using iSCSI, so you're SOL.

The FC HBA manual pathing should have been fixed. Round-robin for FC HBAs is still being worked on, yet Microsoft has MPIO built into all of their Server 2008 platforms.

Another thing that bugs me is VCB. It just seems like a great product that was released before it was finished. Sure, I can write a bunch of PowerShell scripts to schedule LAN-free backups, but some integration into VC would be nice.

I'm not sure where you don't see the cost savings. If you buy Datacenter you can run as many Windows VMs as you want, and your virtualization platform is free. You have to buy Windows licenses for your guests on ESX anyway. SCVMM probably makes managing them easier, but then you're comparing VC to SCVMM, not ESX to Hyper-V Server 2008.
 
You have to compare the whole suite of products, not just ESX to Hyper-V. MS has a long way to go to compete with a complete product, and VMware isn't sitting still. Take a look at something like Site Recovery Manager. Great product. Many of your problems with ESX will be fixed in VI4, along with new features that are way down the roadmap for Hyper-V.

MPIO will be nice, but there just aren't many systems pushing more than 4 Gb/sec of I/O throughput. Yes, I know some exist, but they're few and far between. VI4 will fix that anyway.
 
Another thing that bugs me is VCB. It just seems like a great product that was released before it was finished. Sure, I can write a bunch of PowerShell scripts to schedule LAN-free backups, but some integration into VC would be nice.
Actually, I would argue that its non-integration is one of its strong points. Yeah, I couldn't believe it didn't have a GUI and I had to edit text files when I first started playing with it (when it first came out), but the fact that it's a stand-alone product means I can install it virtually anywhere. For instance, if I am running a compatible product, I can install it directly onto the media/master server and, should I be using FC, back up directly from fibre to my tape library. I can even install it on desktops and let people do over-the-network "dumps" of their virtual machines using simple vcbmounter scripts. In fact, my only real desire for VCB is that they make it play nice with Linux. :)

I'm not sure where you don't see the cost savings. If you buy Datacenter you can run as many Windows VMs as you want, and your virtualization platform is free. You have to buy Windows licenses for your guests on ESX anyway. SCVMM probably makes managing them easier, but then you're comparing VC to SCVMM, not ESX to Hyper-V Server 2008.
I see the cost savings with both Hyper-V and Xen; however, if you look at the bigger picture I see increased administrative effort, re-training, and real potential for data corruption and disastrous environment configurations, with Hyper-V especially. For instance, the ability to use "base" disks and run multiple servers from one underlying base in Hyper-V might sound like a benefit -- but what happens if that one base disk becomes corrupt? And don't tell me it won't happen... because it will. :) And what about the fact that, because NTFS is not a clustered file system, you have to migrate all of the VMs on a LUN from host to host at the same time with Hyper-V? There are third-party clustered file systems out there, but that adds additional cost. Another way of getting around it would be to put one or two VMs per LUN, but that could quickly become a nightmare to manage in a large environment. Meh.

The fact that Hyper-V is easy to configure and "only $28" makes me wonder how many datacenter disasters we'll have this year from undertrained people standing up bad configurations or from third-party drivers corrupting data.

As for purchasing Datacenter licensing -- you can do that for VMware as well. :) In fact, that's not a new thing -- it has been around for a while.

Yes, competition is a good thing and if you look at how far XenServer has come since Citrix purchased it (yes, I know it's just one flavor of Xen but it's arguably the most advanced one), it's really shaping up to compete head-on with VMware. They just need to fix the management quirks and they'll have a really nice product. I am sure that Hyper-V will be a truly competitive product in the next year to year and a half.

Hyper-V markets itself on a short-sighted value proposition, "We're Microsoft so we can do it better." As another poster mentioned, you must look at and consider the total cost of the management suite when considering an enterprise solution. You'll also need to consider the cost of coexistence and man-hours in migrating existing servers (if there are any) to another solution, re-training people, retaining people, certifying, etc. But to put it lightly, Hyper-V has a price-point you can't beat... until you add SCVMM, SCOM, SCCM, a clustered file system vendor, etc. Then, like it or not, it comes very close to the others out there.

The only unique position Microsoft has when you include everything else is that they own everything at that point (OS, Hypervisor, Management, etc) and because of that have the capability to shave margin to make their position more marketable. Which, knowing them, is exactly what they will do. "We'll give you unlimited licenses free if you'll adopt Hyper-V and do a case study for us." :cool:

For small to medium businesses this doesn't matter, which is where Microsoft has openly stated they will be focusing this (Hyper-V Server) and the premier-line (Server 2008 virtualization) products. While they've made some strides in the enterprise environment, Microsoft hopes to pull the rug out from under VMware. After all, this is why VMware made 3i free to download. :)

But I digress, there's no reason to necessarily stick to one vendor here. Download them all, play with them, explore them... in the end, pick what is truly best for your environment but remember to count the true costs and not just the up-front price tag.

MPIO will be nice, but there just aren't many systems pushing more than 4 Gb/sec of I/O throughput.
That's especially true if you are virtualizing the right things; virtualization is not about cramming everything that was on multiple physical platforms onto a single one, it's about making smart investments and choices in your environment. Going after low-hanging fruit, etc. Do not virtualize anything that already takes full advantage of its hardware. By that rule, things that would require clustering - such as enterprise-level SQL or Exchange - are never good virtualization candidates. I didn't rule them out for small business, but in the enterprise there are much better ways of providing high availability for SQL/Exchange than to virtualize them.
 
Do not virtualize anything that already takes full advantage of its hardware. By that rule, things that would require clustering - such as enterprise-level SQL or Exchange - are never good virtualization candidates. I didn't rule them out for small business, but in the enterprise there are much better ways of providing high availability for SQL/Exchange than to virtualize them.

I know plenty of people that virtualize one app on an ESX server. Enterprise Exchange? Absolutely can virtualize that with great results. Very large-scale servers are done every day. You gain a lot with virtualization, not just consolidating servers: no-downtime maintenance... the ability to move to faster hardware with a simple VMotion... etc. It's all about your architecture and design. Architect for scale.
 
I know plenty of people that virtualize one app on an ESX server.
I do as well - from Citrix to HPCC. From one ESX host to fifty.

Enterprise Exchange? Absolutely can virtualize that with great results. Very large scale servers are done every day.
Just because you -can- do something doesn't mean you should, and just because something was done in the past or by another company doesn't mean it's the best choice. It all depends on sizing and your individual acceptable risk. Up until a couple of weeks ago you took a huge risk in whether or not Microsoft was going to support you should you ever need to call them. They've since modified that, though they still have some caveats regarding configuration and hypervisor choice. But even if you treat licensing and support as a non-issue, which many companies do not, there are many more pieces of the puzzle to consider: virtual hardware limitations, contention, available network I/O, the hypervisor layer itself, and the limitations of the support contracts of the hypervisor you chose - for instance, being told that clustering is "unsupported" in one version and "supported" in the next.

You gain a lot with virtualization, not just consolidating servers: no-downtime maintenance... the ability to move to faster hardware with a simple VMotion... etc. It's all about your architecture and design. Architect for scale.
Absolutely. But that doesn't make it the best choice for every application everywhere.

The "don't virtualize anything that already utilizes its present hardware" is a very safe and often correct summation of what a guidance anyone thinking of bringing virtualization into their datacenter, regardless of their hypervisor choice. I've been in over a hundred datacenters just in the past two years alone and every single one of them had so many low hanging fruit (servers only being 5-15% utilized), it would make no business sense to virtualize business platforms like Exchange right away. Even if your exchange infrastructure was near EOL, it would make better business sense to wait - even if it meant having to invest additional money up-front to upgrade. The cost savings from virtualizing ~200-300 physical servers would far outweigh, in most cases, the cost of keepign exchange physical.

And hey, let's not forget that Exchange 2007 can be heavily modularized. There's no reason you couldn't virtualize the less intensive boxes and leave the more I/O-driven mailbox servers to a physical cluster or two or three, all depending on the size and seat count.

I never said it was impossible - just that simply because you can doesn't mean it's the best choice. But each to his own...
 

The "don't virtualize anything that already utilizes its present hardware" is a very safe and often correct summation of what a guidance anyone thinking of bringing virtualization into their datacenter, regardless of their hypervisor choice. I've been in over a hundred datacenters just in the past two years alone and every single one of them had so many low hanging fruit (servers only being 5-15% utilized), it would make no business sense to virtualize business platforms like Exchange right away. Even if your exchange infrastructure was near EOL, it would make better business sense to wait - even if it meant having to invest additional money up-front to upgrade. The cost savings from virtualizing ~200-300 physical servers would far outweigh, in most cases, the cost of keepign exchange physical.

And hey, let's not forget that Exchange 2007 can be heavily modularized. There's no reason you couldn't virtualize the less intensive boxes and leave the more I/O-driven mailbox servers to a physical cluster or two or three, all depending on the size and seat count.

I never said it was impossible - just that simply because you can doesn't mean it's the best choice. But each to his own...


That's just the point. Very few systems utilize their hardware to a point where you wouldn't consider virtualization. Consider Exchange 2003. It's idle on new hardware (dual quad-core with 32GB of RAM). It's a 32-bit app and can't use more than 4GB of RAM. So why not put it on an ESX host with multiple Exchange mailbox servers?

You save space, power, and money. Even if you're only getting 4-to-1 consolidation (we are 20-to-1 in production, and the hosts aren't even at 33% load for most ESX servers), you are saving quite a bit.
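The savings math behind those consolidation ratios is easy to sketch; the server count, wattage, and power price below are illustrative assumptions, not figures from this thread:

```python
# Back-of-envelope consolidation savings at a 20:1 ratio.
# All inputs are made-up assumptions for illustration.

physical_servers = 200        # servers before virtualization (assumed)
ratio = 20                    # VMs per ESX host (20:1, as in the post)
watts_per_server = 400        # assumed average draw per physical box
kwh_cost = 0.10               # assumed $/kWh

hosts_needed = -(-physical_servers // ratio)   # ceiling division
servers_removed = physical_servers - hosts_needed

annual_kwh_saved = servers_removed * watts_per_server * 24 * 365 / 1000
annual_power_savings = annual_kwh_saved * kwh_cost

print(f"{hosts_needed} hosts replace {physical_servers} servers")
print(f"~${annual_power_savings:,.0f}/yr saved in power alone")
```

Under these assumptions, 10 hosts replace 200 servers and the power bill alone drops by tens of thousands of dollars a year, before counting rack space, cooling, or hardware refresh costs.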
 
I never said it was impossible - just that simply because you can doesn't mean it's the best choice. But each to his own...

I don't think anyone is saying virtualize EVERYTHING, but in the vast majority of organizations you can get REAL close for x86 based applications. I'd much rather have two virtualized Exchange servers than a single larger physical server. You just gain so much more flexibility. The key there is "dynamic infrastructure".

Hardware performance is fast outpacing the needs of the majority. The new consolidated I/O fabrics with things like Hypervisor bypass will make it even better. Large enterprises are virtualizing very quickly. They are out of space. Out of power. Out of cooling. They started virtualizing everything they could and then figured out that the side benefits are very compelling and now are moving toward the largest apps.

EVERYTHING? No. At the bank I worked on a project with a 7TB database and we were looking at very large boxes. Not a good candidate just due to that particular system. The HPC clusters? No need to do that. But almost everything else, sure.

Most people aren't worried about support contracts. I only heard that from a few clients. The vendors will support you. If they don't they'll be out of business in no time. At the last VMware User's Group meeting it was asked how many people were denied support by MS for being virtualized. No one raised their hand.

Hardware limitations, I/O, and contention are all architecture decisions. Architect to scale. Plenty of knowledge out there on correct and highly scalable virtual environments. I work very closely with EMC and they have an abundant amount of reference architecture and build documents. Not hard to find.
 
That's just the point. Very few systems utilize their hardware to a point where you wouldn't consider virtualization. Consider Exchange 2003. It's idle on new hardware (dual quad-core with 32GB of RAM). It's a 32-bit app and can't use more than 4GB of RAM. So why not put it on an ESX host with multiple Exchange mailbox servers?
CPU and memory resources have not classically been the downfall of Exchange 2003 infrastructure, though; disk and network I/O have. Again, it comes back to how big a company you're talking about. I'm talking about "Enterprise," which I define as 10,000-75,000 seats. Not a company with a hundred or even a thousand mailboxes.

But there are many other subtle things to consider that aren't worth quibbling over; for instance, whether those 30,000 people are accessing 10MB mailboxes or TBs worth of data.

Now if we're talking Exchange 2007, much of the I/O load has been either shifted around or optimized so that it is considerably less than Exchange 2003's, even with the same number of seats, making it actually a better candidate than its older brother. My recommendation, and the recommendation of many firms out there, is to virtualize components of Exchange when necessary to reduce cost, but to keep some, if not all (depending on the size), on physical platforms and provide redundancy through another means.

I don't think anyone is saying virtualize EVERYTHING, but in the vast majority of organizations you can get REAL close for x86 based applications. I'd much rather have two virtualized Exchange servers than a single larger physical server. You just gain so much more flexibility. The key there is "dynamic infrastructure".
I never said a single larger physical server - I said there are better ways of providing redundancy to enterprise Exchange and SQL than to virtualize them. In other words, I always recommend redundancy; it's just that for large implementations I would much rather see, and therefore recommend, a physical cluster rather than a single VM or multiple VMs.

Most people aren't worried about support contracts. I only heard that from a few clients. The vendors will support you. If they don't they'll be out of business in no time. At the last VMware User's Group meeting it was asked how many people were denied support by MS for being virtualized. No one raised their hand.
Difference in perspective, I suppose. I work for and support a lot of Fortune companies. They are always worried about support. They want to know that when they are down and losing millions, if not billions, of dollars, they will have people on the phone and on site. They don't want to even take the risk that somewhere, someone might decide to go by the letter of the law. Sure, it might mean that when the fog clears, the company that decided not to provide support is history -- but that doesn't fix the issue.

Hardware limitations, I/O, and contention are all architecture decisions. Architect to scale. Plenty of knowledge out there on correct and highly scalable virtual environments. I work very closely with EMC and they have an abundant amount of reference architecture and build documents. Not hard to find.
I work very closely with a lot of vendors too, including EMC and VMware and there are some things (as we've both said) they don't recommend virtualizing, even with the perfect infrastructure.

This thread has gotten way off topic, so here it is: all I am saying is that there are a lot of decisions to be made when a given company considers virtualization, regardless of hypervisor. (And yes, I know that's obvious for most.) There is absolutely no justifiable reason to jump head-first into virtualization with the most critical of business systems, which in most cases is their mail servers. You take a huge risk in doing so, including whether or not that company will fully adopt virtualization. If something goes wrong and performance isn't "just right" (for any number of reasons), you risk the company reconsidering.

In the vast majority of environments there are far better candidates to be virtualized first. When you get done with them, usually counting in the hundreds, you could sit down and discuss Exchange or SQL. But I digress, that's all I am saying.
 