Client Advice: Should I virtualize?

KaosDG

One of my clients is a medium business (small based on employee count, but medium based on financial holdings).

Right now they are chugging along happily on a Win2K3 AD, fairly simple setup.

However, they have several servers to attend to, and I wanted to see what you virtualizers thought about my ideas.

Right now their structure is (across 2 offices):

CT:
Domain Controller (2k3, also print server) - 2x Xeon 2.80GHz, 1GB RAM
Pri. Mail server (2k3, Exchange 2k3) - 4x Xeon 2GHz, 2GB RAM
SQL Server (2k3, SQL Server 2000) - 4x Xeon 3GHz, 1GB RAM
Terminal Server (2000 Server) - 4x Xeon 2GHz, 512MB RAM

NY:
Domain Controller (2k3, print server as well) - same config as CT DC
Secondary mail server (2k3, Exchange 2k3) - same config as CT mail server
VPN Server (2000 Server) - P3 866MHz, 128MB RAM
Accounting server (SCO Unix, custom DB software) - 2x P2 266MHz, 128MB RAM
Utility web server (crappy server for my locally installed inventory software, firewall reports, WAMP stuff, etc.) - 2x P3 700MHz, 512MB RAM


My idea was basically to virtualize all that I could - probably starting with the Terminal Server and the utility server (the utility server could live anywhere, so I have no problem "moving" it to CT).

After that, I would want to virtualize the SCO accounting server and VPN Server, as they are both running on extremely old hardware, and I really don't need a 6U Pentium 2 server taking up my precious rack space :(

My concerns are the DC, Exchange and SQL stuff...

1 - How reliable/recommended is it to virtualize a domain controller? Honestly I see it as being as stable as the box it is running on... so I'm not that worried... but I have been wrong before.

2 - Exchange to me seems to be one of those things that "needs" its own hardware. From all the Exchange sizing guides, best practices, etc., I don't see how virtualizing it could be advantageous. Your thoughts? Anyone doing it on a small scale like this?

3 - SQL Server. This may be problematic, as the server technically doesn't "belong" to the company (they own the server hardware, but the software running on it is leased from, and maintained by, the DB software vendor... I'll have to check what our limitations are on this). But in any case, it seems like this should have its own hardware as well?



Any thoughts?

If they only buy two new servers and virtualize just the older hardware, then I'm ahead of the curve.
 
I had typed a really nice and concise set of responses to each of your questions... then I hit some key combination that took me back in my browser... and it's all gone... dammit. So now the response will be shorter.

As my company's virtualization go-to, I can tell you that I have virtualized every server that we had, with the exception of the backup server, because it needs the physical connection to the FC tape drive. I have virtualized:

2k3 x64 Ent AD DC
2k3 Ent SQL server (x3)
2k3 Ent Exchange Server
2k3 Std Web server
and created several VMs for test/dev work based on various OSes

When you're doing your conversion from physical to virtual, each server type will have its own "gotchas" that you need to look out for. Exchange is perfectly happy as a VM, and so is SQL, but before you convert them, make sure that you've got no one accessing them, flush your transactional queues, and turn off the services (Exchange, SQL, whatever). Not doing this will result in a borked conversion, and you'll have to do it again. Also, be aware that although most servers are good candidates for being virtualized and don't need physical hardware to run on, making a virtual machine that has really high requirements can be problematic.
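
Going back to the "turn off the services before you convert" bit for a second, here's roughly what I mean, as a sketch only. The service names (MSExchangeIS, MSSQLSERVER, etc.) are the standard short names on 2k3-era boxes, but check services.msc on your actual servers before running anything like this:

Code:
import subprocess

# Services to quiesce before the P2V run -- the short names below are the
# usual ones for Exchange 2k3 / SQL Server; verify them on your own boxes.
# Dependent services (POP3, IMAP4, SMTP, etc.) may need stopping first.
SERVICES_TO_STOP = [
    "MSExchangeIS",   # Exchange Information Store
    "MSExchangeSA",   # Exchange System Attendant
    "MSSQLSERVER",    # default SQL Server instance
]

def stop_service(name):
    """Stop a Windows service via 'net stop' and complain loudly if it fails."""
    result = subprocess.run(["net", "stop", name], capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("Could not stop %s:\n%s%s" % (name, result.stdout, result.stderr))
    print("Stopped %s" % name)

if __name__ == "__main__":
    for svc in SERVICES_TO_STOP:
        stop_service(svc)
    print("Services are down, safe to kick off the conversion.")

In practice you'd probably just do this from services.msc or a two-line batch file, but the point is the same: nothing should be writing to the stores while the converter is copying them.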

I see that your Exchange server has 4 CPUs and 2GB of RAM. Perfectly normal for a physical server. Best practice from most virtualization vendors is to give every VM 1 CPU unless you find that isn't sufficient. The reason is that most servers are sitting idle, or at the least underutilized. 1 CPU in a VM is often enough to handle the load, even though the physical machine it ran on had much more. If you find 1 CPU isn't enough, you're talking about 5 minutes of downtime to shut down the VM, add another CPU or more RAM, and fire it back up. It's very simple to do.
That being said, you would not want to virtualize a physical machine that is utilizing a great deal of its available resources. For instance, your Exchange server: if that server is keeping all 4 CPUs more than 60% busy, you'll need a host capable of delivering twice that much power to run it as a VM. You don't want to go through the trouble of creating a VM from a physical machine and have the VM be so big that you need an entire host machine just to run it; that defeats the purpose, right? You might as well have saved yourself the trouble, unless you just want bragging rights that you did it.
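
If you want to put actual numbers to that, the math is nothing fancy: total up the cores each candidate server is really using and compare it to a host with some headroom left over. The figures below are made up for illustration; pull real averages out of perfmon before you decide anything.

Code:
# Made-up utilization figures -- replace them with real perfmon averages.
candidates = {
    # name: (physical cores, average CPU utilization 0-1)
    "dc":       (2, 0.05),
    "terminal": (4, 0.20),
    "utility":  (2, 0.10),
    "exchange": (4, 0.60),   # the kind of box the warning above is about
}

cores_used = sum(cores * util for cores, util in candidates.values())

host_cores = 8        # hypothetical dual quad-core host
headroom   = 0.5      # keep roughly half the box free for spikes and growth

print("Cores actually in use: %.1f" % cores_used)
print("Budget on one host:    %.1f" % (host_cores * headroom))
print("Fits" if cores_used <= host_cores * headroom else "Too big, rethink it")

In that example the Exchange box alone eats 2.4 cores' worth, which is why it's the one you size the host around.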

I will tell you that the thing that makes virtualization cool is the ability to move running VMs from one host machine to another, for things like maintenance, upgrades, etc. Stuff that normally would have required after-hours scheduled downtime can now be done during normal work hours. There's a catch, though. You have to license those features, you have to have two similar host machines, you have to have shared storage, and the host machines must each be capable of running all of the VMs in the environment (if all of the VMs are production, that is... if you can tolerate some non-essentials going down during maintenance, then disregard this).
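
A quick way to sanity-check that last requirement is to add up the RAM of every production VM and make sure one host could carry the whole lot by itself. The numbers here are hypothetical, substitute your own:

Code:
# Can ONE host carry every production VM while the other is down for patching?
# RAM is usually the number that decides it. All figures are hypothetical.
vm_ram_gb = {"dc": 2, "exchange": 4, "sql": 4, "terminal": 2, "utility": 1, "vpn": 1}
host_ram_gb = 16          # RAM in the surviving host
hypervisor_overhead = 2   # rough allowance for the hypervisor itself

needed    = sum(vm_ram_gb.values())
available = host_ram_gb - hypervisor_overhead

print("RAM the VMs want: %d GB" % needed)
print("RAM one host has: %d GB" % available)
print("OK for in-hours maintenance" if needed <= available else "Buy more RAM first")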

Did I answer most/all of your questions? If not, or if you have additional ones, I'll check back in.
 
Things like the accounting server, TS server, VPN server, and utility server are all good candidates for putting in VMs.

The DC could be virtualized as well without any big issues. I know a few people running SQL and Exchange in VMs, but I'm generally against it. In the case of the SQL server, I'd leave it alone as it is supported by another company. Don't give them any extra reasons to blame you for issues with it.

What kind of hardware are you looking at for the VMs, or are you planning on keeping your old servers and using them?
 
My guess is that, based on the number of users, you're probably OK with pretty much everything. As was stated above, SQL and Exchange will be the biggest ones to worry about; with a low user count, SQL will most likely be fine and Exchange will be your PITA.

To answer each of your questions directly...

1. We run our DCs virtualized and have no issues; they are prime candidates for virtualization as they generally don't do squat anyway.
2. Again, Exchange will be the one to be most cautious about. Disk I/O and such are important for Exchange, and when it's sharing that with other VMs you can encounter performance issues. Maybe save that one for last, do some disk I/O testing on your VMs prior to trying Exchange, and see if you'd take a hit.
3. If it's a vendor-supported SQL server, basically it's up to them. If they will support it virtualized, then go for it. Some companies still don't, so if they won't support it, clearly don't move it.
 
sabregen:

Wow, thanks for that. How does disk usage work for your VM'd Exchange and SQL servers? Are you using some external SANs or something? Honestly, our email/SQL usage is sort of limited, but I am still running the Exchange IS on a RAID 5, the logs on a separate RAID 1+0, and the OS on yet another separate RAID 1. Right now the SQL DB is only running on a RAID 1... I don't trust this, but I may be paranoid.
I think the disk usage is my biggest concern, because of how everything "should" be organized according to MS. How would all that work?

swatbat:
Right now, if I can get away with virtualizing the DCs, I may just use the DC servers. Apparently they are 80546K Irwindale Xeons, so I think they'd be good at least for the DC and utility servers.
I'll probably max out on the memory though.
 
Our internal infrastructure utilizes FC disk on the back end. None of our physical servers has a single internal disk; they all boot from SAN. You're on the right track with putting Exchange datastores on one volume, logs on another, etc. I don't see anything glaringly wrong with your current use of RAID levels for each of the intended data types. However, there are some caveats with virtualization, specifically relating to disk storage. I'll try to make this a simple explanation, but in order to do that, I'm going to assume that you already have a SAN, understand RAID, and have a general idea of the MS recommendations for where to put what type of data, and on what RAID levels.

In the physical server realm, if you're running a SAN, you have an HBA inside the server, and it's connected either directly to the storage trays, or to the switch and then to the storage trays. You have different logical volumes at different RAID levels. The OS installed on the server has to have multipathing drivers installed, and probably storage management software of some sort, plus SAN management software for the switches (if there are any).

In contrast, when you're running a physical server that has a hypervisor installed, and your previously physical servers are now VMs, here's what happens:

Your virtual machine host server, with the hypervisor installed on it, handles the multipathing from the HBA to the storage. VMware has built-in multipathing IO drivers for all of the supported IO products on the ESX HCL (so I'm going to use them as an example, because it's the easiest to understand). If you have multiple physical virtualization hosts, and you want to do the neat things like VMotion and Storage VMotion, etc., then all the hosts need to be able to see, and have permissions to, the shared storage (whichever type you are using). The hypervisor, in all cases, manages the multipathing IO requests from the physical hosts to the storage, and all hosts can see the same pool (obviously, with the exception of their own respective boot LUNs, which you do not share between hosts).

When you create a virtual machine's "hard drive" in the shared storage pool, what is actually created is a flat file (actually, there are several... but it's not relevant to discuss that now) on that storage medium. Each drive in each VM gets a flat file, and they will typically be stored inside the directory for the VM that contains all of the other files making up the configuration of the VM. For instance, your directory structure for a VM might look like this:

SAN LUN 1 - mapped to all virtualization hosts
\storage_1 - directory inside of LUN1, all hosts can see this
\storage_1\vm_1 - directory for the first vm
\storage_1\vm_1\disk1.vmdk - first hard drive for vm1
\storage_1\vm_1\disk2.vmdk - second hard drive for vm1

And on and on, a .VMDK flat file for each of the VM's hard drives. Because the physical host is handling the paths to the physical storage medium, and your SAN volumes have been configured however you need them (logical volumes, hot spares, RAID levels, etc), the VM doesn't need to know about how it's getting the disks presented to it. As far as the VM is concerned, the "disk" that it's using is a local disk of some type (usually it's going to be presented as SCSI, but there are exceptions). So the VM just thinks it has a local SCSI disk, and uses it like it would any other disk. This is the logical view to the VM, of the storage that it has.

In reality, however, the virtual machine is running on a set of virtual hardware, all of which is defined in configuration files on a SAN LUN that is shared between multiple physical host machines.

Did I make that more confusing, or more clear?
 
That makes perfect sense, actually... and I'm kind of surprised that I didn't see how it works before asking it. (It's completely logical, honestly. I need more coffee)
 
swatbat:
Right now, if I can get away with virtualizing the DCs, I may just use the DC servers. Apparently they are 80546K Irwindale Xeons, so I think they'd be good at least for the DC and utility servers.
I'll probably max out on the memory though.

Yeah, add some memory and run the others off their own array. Should be fine.
 
When you virtualize, you'll definitely see memory pressure before you typically see any other resource contention issues. Most clients I see start out with a "small" amount of RAM, usually 32GB to start. After they get going on it, they're adding more memory a few months down the road, as that's the only constraint on being able to add more VMs to their environment.
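
The arithmetic behind that is pretty simple: an idle-ish VM barely touches CPU, but it still wants its RAM. A rough sketch with made-up numbers:

Code:
# Why RAM is the first wall you hit: divide the host both ways.
# All numbers below are hypothetical.
host_ram_gb   = 32
host_cores    = 8
avg_vm_ram_gb = 2.0      # a typical small 2k3-era VM
avg_vm_cores  = 0.25     # cores each VM actually keeps busy, on average

ram_limited = int((host_ram_gb - 2) / avg_vm_ram_gb)   # reserve ~2GB for the hypervisor
cpu_limited = int(host_cores / avg_vm_cores)

print("VM count limited by RAM: %d" % ram_limited)   # 15 in this example
print("VM count limited by CPU: %d" % cpu_limited)   # 32 in this example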
 
Accounting server (SCO Unix, custom DB software) - 2x P2 266MHz, 128MB RAM
Where I work, our entire accounting and inventory setup is on a SCO box circa 1995 running WDS-IIE, on an eMachines 1GHz Celeron w/ 512MB RAM.

Thank god it's the one computer thing I don't deal with.
 