Why use virtualisation in a scenario where everything could run on one server anyway?

Concentric

[H]ard|Gawd
Joined
Oct 15, 2007
Messages
1,028
Imagine that you have a relatively small enterprise with one physical server and a Windows domain where you need services like DC, DNS, file and printer serving, WSUS, etc. Just your standard domain basics - not even Exchange or databases or anything.

I have seen such a setup, and they were using Hyper-V to host two virtual Windows Servers to split up the roles (for example, one VM was a DC, DNS and WSUS, the other did the rest).

I don't understand what the benefit of using virtualisation in that way would be.
There is only one physical server, so there's no HA and you aren't consolidating any hardware or saving any power costs.

Why not just run one Windows Server directly on the hardware and have it perform all of the roles?

In this scenario the only benefit I can think of with virtualising would be if you had a separate VM for every single role, so that you can deal with any problems on one without disrupting the others?
But in the scenario I saw they weren't even doing that - they had several roles on each VM :confused:

I have a feeling that they were virtualising for the sake of it - not because it was really necessary. :p
Am I missing something?
 
To scale up and add additional roles in the future? Easier backups? That server is most likely not being used to capacity so it may give some flexibility down the road.
 
To scale up and add additional roles in the future? Easier backups? That server is most likely not being used to capacity so it may give some flexibility down the road.

You could just add more roles to a single, native server in the same way? Still don't see how it helps to have the current roles split between two VMs.

How does it make backups easier? Could you elaborate?

I guess what I'm asking is whether I'm right in thinking that it's best to either virtualise each role individually or not at all?
 
You could just add more roles to a single, native server in the same way? Still don't see how it helps to have the current roles split between two VMs.

How does it make backups easier? Could you elaborate?

I guess what I'm asking is whether I'm right in thinking that it's best to either virtualise each role individually or not at all?

Backups are made easier with VM snapshots. Service availability can be optimized, and you also have the benefit of being able to adjust the resources of the server to meet future needs by allocating the server resources across the VMs. I would definitely separate the print server; all the Windows domain roles are probably fine on one VM.

I can give you a real world scenario...

Your single server running AD, DNS, DHCP, Printing and WSUS etc.

You go to install a new printer and Windows chokes on the driver. That machine needs to be rebooted, taken offline, repaired, whatever. That leads to some potentially serious downtime for the end users.

Having two VMs, one for the domain services and one for the print server:

Before making a change to the print server you take a snapshot. You go to install a new printer and Windows chokes on the driver. Power down the print server VM, revert to the previous snapshot. Back online in 15 minutes.
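On Hyper-V, that workflow is only a handful of PowerShell commands. A sketch, assuming the Hyper-V PowerShell module and a hypothetical print server VM named PrintSrv:

```powershell
# Take a checkpoint (snapshot) before touching the print server
Checkpoint-VM -Name PrintSrv -SnapshotName "pre-driver-install"

# ...the driver install goes sideways...

# Power the VM off and roll back to the checkpoint
Stop-VM -Name PrintSrv -TurnOff
Restore-VMSnapshot -VMName PrintSrv -Name "pre-driver-install" -Confirm:$false
Start-VM -Name PrintSrv

# Once everything checks out, remove the checkpoint
Remove-VMSnapshot -VMName PrintSrv -Name "pre-driver-install"
```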
 
Backups are made easier with VM snapshots. Service availability can be optimized, and you also have the benefit of being able to adjust the resources of the server to meet future needs by allocating the server resources across the VMs.

I can give you a real world scenario...

Your single server running AD, DNS, DHCP, Printing and WSUS etc.

You go to install a new printer and Windows chokes on the driver. That machine needs to be rebooted, taken offline, repaired, whatever. That leads to some potentially serious downtime for the end users.

Having two VMs, one for the domain services and one for the print server:

Before making a change to the print server you take a snapshot. You go to install a new printer and Windows chokes on the driver. Power down the print server VM, revert to the previous snapshot. Back online in 15 minutes.

I get that. That's what I meant in the OP when I said:
In this scenario the only benefit I can think of with virtualising would be if you had a separate VM for every single role, so that you can deal with any problems on one without disrupting the others?

So the best idea would be to put each role on an individual VM?
 
Sorry, I edited my original post to answer that question.

All the domain services are typically fine on one VM. You definitely need a separate print server; they occasionally need reboots, and you're usually dealing with third-party drivers, which can cause instability.
 
Remember what virtualization is all about: it's pooling resources and applying those resources where needed. If you ran a multi-role server, think about an individual role consuming resources that are shared with the other roles. By splitting some of the roles into individual virtual machines, you can assign specific resources to each one, and if MANAGED PROPERLY, you shouldn't see one virtual machine with certain roles negatively impact other virtual machines with different roles.
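On Hyper-V, that per-VM resource carving is straightforward from PowerShell. A sketch, assuming hypothetical VM names DC1 and PrintSrv (VMs generally need to be off for these changes):

```powershell
# Pin down the domain controller's share of the box
Set-VMProcessor -VMName DC1 -Count 2
Set-VMMemory    -VMName DC1 -StartupBytes 4GB

# Give the print server less, with dynamic memory so it can
# grow a bit under load without starving the DC
Set-VMProcessor -VMName PrintSrv -Count 1
Set-VMMemory    -VMName PrintSrv -DynamicMemoryEnabled $true `
                -MinimumBytes 1GB -StartupBytes 1GB -MaximumBytes 2GB
```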

Now I'm not saying that this is how it should be done in all scenarios, but it is certainly what I see in most situations, and I usually also see at least an HA pair.
 
A few of the reasons I prefer virtualization...

Isolation during development -
Helps me manage impact and unit/system testing during development and tuning cycles. Significantly easier to isolate the heavy systems into their own sandbox... removes "finger pointing" and allows for shorter cycle times (refactoring, etc.)

Isolation during runtime -
Though OSGi helps minimize this headache... no better way to isolate:
- systems,
- configuration (version dependencies, etc),
- and library incompatibility
- some software-system vendors don't support n+1 instances per platform (fixed ports, support persons have trouble with port #... )

Just break components apart into their own runtime containers.

Fail-over, redundancy, etc
- Thinking of some of the ESXi features.

I have, though, collapsed systems back into a single host OS when needed. A few reasons to collapse previously broken-out subsystems:
- Operations support couldn't handle the additional overhead associated with the additional instances.
- Cost was based on "n" hosts (so... we just reduced the number of hosts)
- Physically ran tight on hardware resources (couldn't afford the extra RAM/disk storage in our production platform)
 
Think of this scenario:

You're deploying a new piece of server software, installing it on your one and only unvirtualized server, and something goes wrong, completely hosing the machine.

Good job, dumbass: you just broke your one and only DC, DNS, DHCP, and file server. The whole business is now hard down, it's probably going to take you days to restore from backups, and anything your business did since the backup is also lost. Hope it's a Friday, and hope you didn't have any plans this weekend.

Let's try it again with virtualization:

Make a new VM to install the new server software on. The install completely breaks the VM, but who cares? Just make a new one and start again; nobody outside of you knows or cares.
 
You could just add more roles to a single, native server in the same way? Still don't see how it helps to have the current roles split between two VMs.

How does it make backups easier? Could you elaborate?

I guess what I'm asking is whether I'm right in thinking that it's best to either virtualise each role individually or not at all?

Backups are a big thing. Using something like Veeam, where you are taking image-based backups, if your primary (or only) server fails you can restore to different hardware quickly with no problems, since the virtual machine is hardware-agnostic. Even to a workstation in a pinch.

Snapshots are also great when applying updates or making changes: if something gets borked, just restore to the snap in a couple of minutes.
 
Another one is when upgrading that physical server. Buy the new server, configure your host, and migrate the guests over. Easy peasy.
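With Hyper-V, that migration can be as little as one command. A sketch, assuming live migration is enabled on both hosts, plus a hypothetical guest PrintSrv, new host NEWHOST, and export share paths:

```powershell
# Shared-nothing live migration: move the VM and its storage
# to the new server without taking it down
Move-VM -Name PrintSrv -DestinationHost NEWHOST `
        -IncludeStorage -DestinationStoragePath D:\VMs\PrintSrv

# Or, offline: export on the old host, then import on the new one
Export-VM -Name PrintSrv -Path \\nas\vm-export
```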
 
The big one is probably scalability. While backups are easier, the tools required to do VM-level backups are usually really expensive, so probably not something a small company would invest in anyway. They'd still do regular backups.

Snapshots are nice too, though. Before you apply updates or make a big change, you can take a snapshot. If shit hits the fan, you revert. Just don't accidentally do a revert when you go to delete the snapshot. :eek:

At home I actually have a hybrid. My main server has a lot of different roles such as files, DNS, mail, domain (don't really use that anymore), web, etc., but I also have VMs for some stuff like VPN, P2P, and dev/test environments.
 
And not to mention, you can have a specific RAID array for the DNS, DC, and VPN, and use a larger array for things such as file storage, the print server, and other VMs that you have for testing and dev. I personally use four 60 GB HDDs in RAID 10 for DNS, DC, and VPN, and then four 1 TB HDDs for everything else.
 
My question would be why wouldn't you virtualize?

There's no basis for the "all or nothing" theory. Two Windows Server instances still break up the failure domain somewhat, regardless of whether or not they're running on the same physical hardware. Sure, separate hardware would be better, but not having it isn't a reason to throw everything on one Windows install.

HP's crappy print driver just took down your print server again? Good thing you aren't running that print server on your DC, or you'd have just taken that down as well.

Buy a new server and want to move your virtual servers around? Super portable and easy in a VM infrastructure. Not so with a new physical server.

Virtualizing gives you a ton of flexibility that just isn't possible with physical servers, and if you're not taking advantage of it you're shooting yourself in the foot for no reason.

I would (and do) run a hypervisor on physical servers where I only ever plan to have a single virtual machine running on that server. Running multiple instances on one piece of hardware is only one of many benefits.

Anyway, I thought running a DC in a VM was a recipe for trouble.

I don't know who told you this, but they would instantly lose any credibility in my eyes. There are a few potential issues if you don't know what you're doing, but that's true of anything in life. Don't use snapshots on a DC, make sure you've properly configured time synchronization, and you're fine.
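The usual recipe for the time-sync part is to stop the hypervisor from pushing its clock into the DC and have it sync from an external source instead. A sketch, assuming Hyper-V and a hypothetical DC VM named DC1 (pool.ntp.org is just an example NTP source):

```powershell
# On the Hyper-V host: disable the time-sync integration service for the DC
Disable-VMIntegrationService -VMName DC1 -Name "Time Synchronization"

# Inside the DC guest: point Windows Time at an external NTP pool
w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /reliable:yes /update
w32tm /resync
```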
 
PCI compliance is another reason. Different functions running on different servers is needed.
 
One big benefit is Fault Tolerance:
A VM can run from two physical hosts such that if one host has a hardware failure, the VM will keep running and not fail.
 
One big benefit is Fault Tolerance:
A VM can run from two physical hosts such that if one host has a hardware failure, the VM will keep running and not fail.
Yeah, plus load management. One server running everything gets too overloaded? Oops.

One server running a bunch of single-purpose VMs and one starts getting lots of use? Get another server and migrate that VM over. (In the right environment, one can do that without downtime, I think.)
 
Yuck. I hate WSUS on the DC.

IMO, management is eased by using dedicated VMs for functions and roles. This is for patching, management, backup, service availability, etc.

Some roles can be combined, but many can cause weird issues down the line and conflict with one another.
 