Virtualize WHS & SBS :: Thoughts?

Velkruel

Limp Gawd
Joined
Oct 4, 2001
Messages
345
Hello -

I've been going over some setups in my head in order to rein in all the boxes I have down in the basement due to overcrowding and power issues. I was originally going to deploy two boxes - one for SBS 2008 and one for WHS.

Now, thinking about it, I'd almost prefer a single, more powerful box where I can throw on Server 2008 Core with Hyper-V and run SBS and WHS as VMs.

This would also give me the flexibility to run a couple of Linux VMs for misc web/TeamSpeak/server use and consolidate an old Dell box I have sucking power.

I was thinking of something similar to the following setup -

  • Intel Quad Core -- they seem to use less power than any of the AMD quad cores
  • 4-8 GB of DDR2
  • 2x 500 GB WD GP Drives in RAID1 for the host OS, VMs, and WHS VM OS partition
  • 2x 1 TB WD GP Drives in JBOD, each with one VM disk, so WHS will see them as 2 drives in its data pool
  • Antec EarthWatts PS
  • I-STAR 5x hot-swap case with doors. The 500 GB drives would be internal and the 1 TB drives would use two of the five hot-swap bays.

My original plan was two boxes using the same hardware for each, except the CPUs would be AMD 45 W dual cores; the 2x 500 GB drives would go into the SBS box, the 2x 1 TB drives into the WHS box, and the RAM would be split 2 GB for WHS and 4 GB for SBS.

The more I thought about it, the more it made sense to bump to a higher-powered chip that won't draw much more than two 45 W machines running together, and make everything one box. I'd get more use out of it with less power consumption.

Just wanted to get some people's thoughts before taking the plunge. Price would be a bit less for one box as well, but not by much, due to the extra $$$ for the better chip and the larger amount of RAM.

Thoughts?
 
Already doing this, but I also have a 2nd box running OpenFiler, the goal of which is to host my VM images via iSCSI. Still playing with it; for now my DC VM image is local to the quad-core VM box. But it is a fun project. The most annoying part is setting up rights to manage the 2008 Core install when your workstation is part of another domain (until you get the server up and running, in other words).
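For anyone else hitting that wall: John Howard's HVRemote script (hvremote.wsf, from his MSDN blog) automates most of the rights/DCOM/firewall setup for managing a Core or Hyper-V Server box from a workstation in another domain or workgroup. A minimal sketch, with the domain, user, and server names as placeholders:

    rem on the Core server: grant your management account access
    cscript hvremote.wsf /add:YOURDOMAIN\youruser

    rem on the management workstation (other domain or workgroup):
    cscript hvremote.wsf /mode:client /anondcom:grant
    rem cache credentials for the server so the Hyper-V Manager MMC can connect
    cmdkey /add:yourserver /user:YOURDOMAIN\youruser /pass:yourpassword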
 
Excellent -- good to hear. It just seemed to me like such a waste, and after reading from a few people that WHS will see each virtual disk as a separate drive in its pool, I should be able to maintain its out-of-the-box duplication function, and if a drive goes bad it would be no different than swapping one out on a stand-alone WHS box.

Plus, with pass-through disk access in Hyper-V, I think my performance should be equal to a stand-alone WHS installation, as the VM stack doesn't interfere with the disk I/O, so I should be able to get peak performance on the TB drives.

How much memory are you devoting to the Core host OS vs. your VMs? I was thinking of allocating 512 MB but wasn't sure if I'd hit any bottlenecks.
 
This brings up a question I've had for quite a while. Can WHS run in a VM in a reasonable way for, say, a 5-machine home network? If so, that would be great!
 
From what I've read, WHS has no issue running as a VM, and you can use the aforementioned pass-through disk access feature of Hyper-V to present a set of drives as individual disks, which would be equal to just having WHS on its own box with those drives (in theory).

That way you can allocate other disks in RAID1/5/etc. to the host OS and other VMs and not waste a ton of boxes on different functions.
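If it helps anyone, the pass-through setup on the Hyper-V side is pretty short. The physical disk has to be offline in the parent partition before Hyper-V will offer it as a pass-through disk. A rough sketch (disk numbers are placeholders for your data drives):

    C:\> diskpart
    DISKPART> list disk
    DISKPART> select disk 2
    DISKPART> offline disk
    DISKPART> exit

After that, in the VM's settings in Hyper-V Manager, add a hard drive to the IDE or SCSI controller and pick "Physical hard disk" instead of a VHD.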

I want to hear a few more opinions but I think I am going to try out the setup -- I can always post results for those interested.
 
I can't really speak to WHS and built-in drives, since I am playing with OpenFiler instead, but your logic seems right.

I don't recall that you actually "allocate" memory to the host OS; rather, it gets all of it and allocates memory out to the guest OSes.

This article: http://searchsystemschannel.techtarget.com/generic/0,295582,sid99_gci1297398,00.html recommends saving 2 GB for the host OS, but that seems like overkill to me, especially running Core. I have 8 GB in mine and am only running 2 VMs so far with 2 GB given to each, so I am not yet pushing my RAM.
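To put rough numbers on it: 8 GB total minus 2 VMs x 2 GB leaves about 4 GB for the parent partition, and a Core install only needs a fraction of that, so at this scale the host is nowhere near being a bottleneck.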
 
Cool, thanks for the info.

By "allocate" I just meant whatever RAM is left to the host OS after the guest VMs take theirs. What are the specs of your box, if I might ask?
 
Whoa, WHS in a VM would be way cool. Has anyone ever migrated a physical WHS install (with a bunch of HDDs) into a VM?
 
Intel Q9550, Intel DP35 desktop board, 8 GB Corsair DDR2, some old SATA boot drive, and I added an Intel PRO/1000 gigabit card for iSCSI. (iSCSI is on a separate gigabit jumbo-frame switch, too.)

The OpenFiler box that will host the VM images is an old dual 2.4 GHz Xeon IBM server with 4 GB of Rambus, an Adaptec 2024 PCI-X SATA controller, and 4 Seagate 500 GB drives.

So I do have two boxes, but the shares are hosted by the VMs, not the OpenFiler box. I did this as much to play with iSCSI as to play with VMs.
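In case anyone wants to replicate the iSCSI part on a Core box, the built-in iscsicli tool does it all from the command line. A minimal sketch, with the portal IP and target IQN as placeholders:

    rem make sure the initiator service is running first
    sc config msiscsi start= auto & net start msiscsi
    rem point the initiator at the OpenFiler portal
    iscsicli AddTargetPortal 192.168.10.50 3260
    rem see which target IQNs the portal advertises
    iscsicli ListTargets
    rem quick-login to the target
    iscsicli QLoginTarget iqn.2006-01.com.openfiler:tsn.example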
 
What a good idea! I was torn between Server 2008 and WHS earlier this year. I settled on the [proper] server so I could replicate some of the systems I work with at home, instead of another home entertainment device. Running WHS in a VM could be the answer.

Maybe I'll treat myself in a month or so...
 
So, as a follow-up to anyone: how did you go about installing WHS in a VM on your server? Right now I'm running Server 2008 and have been trying to figure out the best way to go about installing a virtualization package and using it for WHS.
 
Since you are installing SBS, I'm assuming you will be using that for AD and whatnot.
So you don't need Server 2008 Core.
You can just use Microsoft Hyper-V Server; it's similar to ESXi and has a much smaller footprint, as it's only the Hyper-V components. Best part is that it's free.
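For what it's worth, the standalone Hyper-V Server installs like Core: you get a bare command prompt plus a text config menu for domain join, networking, remote management, and so on, and you do everything else from the Hyper-V Manager MMC on another machine. If I remember right, the menu can be relaunched by name if you close it:

    rem text config menu (hvconfig on Hyper-V Server 2008; sconfig on the R2 release)
    hvconfig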
 
Well, I am running Core as I said before, so from the workstation that has the Hyper-V management UI on it, I connect to the server, create a new VM, and specify that it will boot off the CD. Then I start the VM and it loads setup off the CD.
 
i have a "server" in my sig runing vista ultimate for vista media center...... and using vmware server 2 running a vm WHS with 300gb datashare 1 core and 512mb of ram.

it works, it was my first virtual experiment.

i would love to have physical access to disks rather than virtual disks.....
 
How do you do this? From when I tried Hyper-V, I thought you could only use it after installing the full Windows Server 2008, not even Core. I am really interested in this, because trying to set up an ESXi machine was extremely difficult for me, likely because of hardware compatibility. Windows didn't have hardware compatibility issues, so I am very interested.

-darkmatter08
 
http://www.microsoft.com/servers/hyper-v-server/default.mspx
 
Just curious: what is the point of SBS and then a VM of WHS? If you're running that at your house, I think you'd get more benefit from just getting each machine on the domain. You can still get some streaming software going (Sage maybe?)
 
Well, I've got WHS installed in a VM now through VMware Server 2, but I'm trying to find out if there's a way to set up direct physical access to the 4 hard drives that are going to be used exclusively for WHS. I've been digging around and haven't been able to find anything. Any suggestions?
 
I have WHS on VMware Server 2 also, and I do not think there is direct disk access, only virtual. I think I read that VMware Workstation has physical disk access.

I just use a virtual 300 GB disk, for backups only, on my WHS VM.
 
In VMware Server 2, I'm 90% sure that it does not allow you to do direct disk access. You must set up a datastore.
 
Yeah, that's what I have found.

Can someone confirm that Workstation gives you direct access?
 
Workstation gives you direct access :D (although sometimes it's a bit finicky)... I would download the trial and test it before I threw down some $.

I wish Vista Ultimate could have Hyper-V, or Server 2008 could have Media Center. Media Center and Hyper-V on the same OS would be killer, IMO.
 
Yeah, basically that is what I am trying for: WHS + VMC on one rig... I have seen WHS as the host OS and VMC as the guest, but that limits you to network or USB tuners, no PCI.

Well, if Workstation gives physical access, that might be a good route.
 
If you're just looking to use it to record TV, you can install MCE 2005 on WHS.
 
FYI --

One year later, I got this working. I'm using an Intel Quad Core with 8 GB of RAM, 2x 500 GB WD Green HDDs in RAID1 for the VMs, and 3x 750 GB WD Green HDDs as the data pool. This is all running ESXi v4.

I created a datastore on each 750 GB drive and added them to the WHS VM. My current HW setup won't allow RDM (direct disk), and there doesn't seem to be much advantage in doing so for my application: http://www.vmware.com/files/pdf/vmfs_rdm_perf.pdf, http://communities.vmware.com/message/1263022
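For anyone whose hardware does allow it, creating an RDM is just a couple of console commands on ESXi. A sketch, with the device ID and paths as placeholders:

    # see which physical disks ESXi can address
    ls /vmfs/devices/disks/
    # create a physical-mode RDM pointer file on an existing datastore,
    # then attach the .vmdk to the WHS VM like any other disk
    vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/datastore1/whs/whs_disk1.vmdk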

In case of a disk failure, I would simply remove it from the pool, swap in the new disk, and create another datastore. Disk additions would work the same way.

Thanks.
 
I did this a few weeks ago on my Windows Hyper-V Server 2008 R2 box.

Direct disk access is available and works well. I currently have 6 VMs running on my Core 2 Quad Q9400 box with 8 GB RAM. I have 2x 500 GB drives for the VMs and 10x 1.5 TB drives as the storage volumes. Each is directly attached to the WHS VM.
 
I have done similar with ESXi.

Intel DQ45CB mATX motherboard with Intel VT-d support
Intel SASUC8I PCI-e x8 SAS controller (LSI 1068e based)
8 GB DDR2, Intel gigabit NICs, four 500 GB SATA test drives

ESXi has a feature called VMDirectPath that allows passthrough of PCI(e) devices directly to a virtual machine. With this, the Intel/LSI SAS controller appears in the virtual machine OS without any virtualization layer, thereby maximizing I/O. It is available on select motherboards with select chipsets.
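Roughly, enabling it on ESXi 4 goes like this (from memory, so treat it as a sketch):

  1. In the vSphere Client: Configuration > Advanced Settings (under Hardware) > Configure Passthrough, and check the device.
  2. Reboot the host.
  3. Edit the VM's settings and add a "PCI Device", selecting the controller.

Note the VM needs a full memory reservation for passthrough to work.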

I could even enable the RAID functionality of the controller and have the RAID volumes appear in the OS.

I am using OpenSolaris with ZFS (software RAID only) as iSCSI storage.

BTW, I also run pfSense and other OSes on the same system.
 
What processor are you using on this DQ45CB?
I want to build a PVR in a virtual ESXi machine with 2x Hauppauge HVR-2200 dual tuners and use passthrough for the tuners.
 
Pentium E6300. A Nehalem processor is not required; an older 65 nm Conroe processor will work as well, when paired with the right motherboard (one with the right BIOS coding and chipset for Intel VT-d).

Have a backup plan in case the passthrough is not transparent. I have had no luck passing through an LSI 9260-8i SAS controller, whereas I have had no problems with Intel NICs and other LSI SAS controllers.

Hyper-V and Parallels might have similar features.
 
This stuff is freaking awesome... serious cutting-edge crap.

I now have a server with a Celeron 430 in it, but will be adding a C2Q soon with 8 GB of RAM.

My goal will be to run Hyper-V with WHS and W7 Home Premium (for CableCARD recordings, USB tuner).

The real question will be whether the new internal multi-tuner CableCARD tuners will work with passthrough to a guest OS.

Only time will tell.
 
I am reading horror stories on the Intel forum that the DQ45CB would not boot after a restart when a PCI-e RAID card is inserted in the 16x slot. Did you experience any problems with booting, and did you try any other RAID cards besides the LSI?
I have an Areca 1210 in my current setup which I would prefer to use in the DQ45CB.
 
Hyper-V supports USB passthrough, but not PCI(e) passthrough.
ESX(i) supports PCI(e) passthrough, but not USB passthrough.

(By ESX(i) I mean ESX and ESXi; by PCI(e) I mean PCI, PCI-X, and PCI-e.)

I did encounter slow boots, but not failed restarts (regardless of BIOS version). Disabling Intel AMT, network booting, and other miscellaneous chipset features sped up booting considerably (2 to 3 times faster). The BIOS (sometimes) hides the RAID card's "pop-up" BIOS, so you have to continually press Ctrl+M or whatever to get into the RAID BIOS.

I should note: do NOT use BIOS version 0093. Disabling Intel AMT (the remote management feature for vPro) stalls boot time by 30-40 seconds, PCI and PCI-e devices freeze for no reason at all, and, worst of all, rebooting doesn't fix it! I had to make a recovery BIOS DVD, take out the recovery jumper, and blind-boot from the DVD to restore the BIOS! There is at least one account of this happening on the Intel forums.

I have since reverted to 0083, and it has been running flawlessly for months straight.
So overall, I would recommend the board (there isn't really a competitor; I was hoping SuperMicro would make an equivalent).

I have used an LSI 9260-8i (the newest SATA 6 Gbps generation card) and an LSI 3041 (if I remember correctly, based on the LSI 1068E chip) without issues.

I would check the vm-help.com forums to see if anybody has used the Areca 1210 card with ESX(i), if you are planning that route. It may be that even though your card is hardware-accelerated, ESX(i) will not recognize RAID arrays on the drives connected to the card. It may recognize them in IT (initiator-target) mode, as single disks. It is for this reason that I decided to pass the card I am using through to an OpenSolaris VM that can manage the card and also create a highly reliable software RAID with ZFS.
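For reference, the OpenSolaris side is only a few commands. A minimal sketch, with pool, volume, and device names as placeholders (this uses the old pre-COMSTAR shareiscsi property):

    # build a raidz pool from the disks on the passed-through controller
    zpool create tank raidz c8t0d0 c8t1d0 c8t2d0 c8t3d0
    # carve out a volume and export it as an iSCSI target
    zfs create -V 500G tank/vmstore
    zfs set shareiscsi=on tank/vmstore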

Hope this helps!
 
Thanks for this info. I have now posted a new thread on the DQ45CB with the question about the RAID card, not necessarily in combination with ESXi. We'll see what comes out of that.
 
I purchased the board and was able to build a virtual machine with an HVR-2200 dual tuner card in passthrough mode. It is working perfectly!
BTW: are you running your RAID card passively or actively cooled? I am planning to purchase an Adaptec 3805 or 5805, but they seem to run very hot.
 