AMD build for Server 2008 R2 Hyper-V

Konowl
Limp Gawd
Joined: Jan 16, 2005
Messages: 139
Looking to get a TechNet subscription and build a new server supporting Hyper-V. I want to run Server 2008 R2 with Hyper-V and toy around with multiple VMs of server OSes in various roles, while also having my WHS V2 in a VM.

Problem is I need to do this on the relatively cheap. Is an AMD 1055T X6 doable for Server 2008?
 
Looking to get a TechNet subscription and build a new server supporting Hyper-V. I want to run Server 2008 R2 with Hyper-V and toy around with multiple VMs of server OSes in various roles, while also having my WHS V2 in a VM.

Problem is I need to do this on the relatively cheap. Is an AMD 1055T X6 doable for Server 2008?

The only advice I can *STRONGLY* suggest is to look at the MAX RAM your board can take.

THEN max it out :) 16 GB gets used up FAST. 32 GB just cost me $300 for my quad-core Xeon 1U server :)

Welcome to Hyper-V :) You will love it!
 
I'm just asking because I've read various sources saying there are no Server 2008 drivers for any AM3 chipsets.

I figure 6 cores will be enough.

Thanks for the 32 GB suggestion - DDR3 is relatively cheap now. I just want to do Server 2008 on a budget - I'd LOVE to get a dual-CPU Xeon board with 32 GB of RAM, but I have a wedding coming up :)
 
What sources say Server 2008 doesn't work on AM3? I have Server 2008 R2 running just fine on my AM3 motherboard as my Hyper-V host. 6 cores will be plenty. You usually bottleneck disks and RAM before cores.
 
The more RAM the better - figure 4-8 GB per virtual server, and it disappears fast.
Also, install the full Server 2008 R2, then add the Hyper-V role. It is a lot easier to manage that way than running the standalone Hyper-V Server.
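For anyone following along, the role can be added from Server Manager or from an elevated prompt. On 2008 R2, something like this should work (double-check on your own box before running; a reboot is required):

```shell
rem Enable the Hyper-V role with DISM on Server 2008 R2:
dism /online /enable-feature /featurename:Microsoft-Hyper-V

rem Or via the ServerManager PowerShell module:
powershell -Command "Import-Module ServerManager; Add-WindowsFeature Hyper-V -Restart"
```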
 
The more RAM the better - figure 4-8 GB per virtual server, and it disappears fast.
Also, install the full Server 2008 R2, then add the Hyper-V role. It is a lot easier to manage that way than running the standalone Hyper-V Server.

+1 to this; do the full server install and add the Hyper-V role. Makes life much easier.
 
The more RAM the better - figure 4-8 GB per virtual server, and it disappears fast.
Also, install the full Server 2008 R2, then add the Hyper-V role. It is a lot easier to manage that way than running the standalone Hyper-V Server.

Fine idea if it's for a lab, but this is an utter waste of resources for a production machine.
 
When you have a machine with dual, quad, or hex cores and 70+ GB of RAM, the amount of system resources you waste running full Server 2008 R2 is negligible.
 
When you have a machine with dual, quad, or hex cores and 70+ GB of RAM, the amount of system resources you waste running full Server 2008 R2 is negligible.

If you're not using a centralized management scheme on big iron (64+ GB of RAM), you have bigger problems to worry about than resource usage.
 
can you elaborate?

I think he is saying that you shouldn't run the Hyper-V management software on the local machine; you should use another machine on the network.
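For what it's worth, on a standalone (non-domain) host, the usual way to wire up remote Hyper-V Manager access is John Howard's HVRemote script; roughly like below (flags from memory, and the account/domain names are just placeholders - check the script's own documentation):

```shell
rem On the Hyper-V host, grant your management account access
rem (MYDOMAIN\myadmin is a hypothetical account):
cscript hvremote.wsf /add:MYDOMAIN\myadmin

rem On the client machine running Hyper-V Manager, allow anonymous DCOM callbacks:
cscript hvremote.wsf /anondcom:grant
```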

However, I'm pretty sure live migration actually requires a full OS, not just Hyper-V Server without the Windows Explorer shell, and live migration is something you would want in a large enterprise deployment.
 
If you're not using a centralized management scheme on big iron (64+ GB of RAM), you have bigger problems to worry about than resource usage.

Not everyone can afford it, and not everyone can convince the higher-ups that they need it.

The reason most people use Hyper-V is that it's free, so why would I want to go pay MS another $5k+ for SCVMM for 1 or 2 hosts?

If I were willing to pay that much for centralized VM management, I wouldn't be using Hyper-V.
 
Not everyone can afford it, and not everyone can convince the higher-ups that they need it.

The reason most people use Hyper-V is that it's free, so why would I want to go pay MS another $5k+ for SCVMM for 1 or 2 hosts?

If I were willing to pay that much for centralized VM management, I wouldn't be using Hyper-V.

Hyper-V doesn't require a local Windows Server install to be managed effectively. By "centralized management scheme" I was not specifically referring to SCVMM, rather simply a dedicated machine/VM/management interface for managing all of your VM hosts in a consolidated way, as Exavior suggested. If your organization can afford multiple 64GB+ VM hosts but not any way to manage them all together, it has endemic problems.

However, I'm pretty sure live migration actually requires a full OS, not just Hyper-V Server without the Windows Explorer shell, and live migration is something you would want in a large enterprise deployment.

I was unaware of this. If correct, that's a disappointing oversight on Microsoft's part.
 
Just looked it up: you must be running Enterprise or Datacenter to use live migration, which makes sense, as those are the only versions that support clustering. You can install the Core version of them, however.

That said, a 64 GB server isn't anything special, and having only a few of those wouldn't give you much need for a centralized program; I would just use the built-in Windows Hyper-V Manager for that. You do need at least System Center Essentials to control live migration, but if you only care about each host being standalone, I myself wouldn't worry about buying special software until I was looking at more than 5 or so servers like that.
 
Just looked it up: you must be running Enterprise or Datacenter to use live migration

This is incorrect. You can cluster and use live migration with Hyper-V Server 2008 R2. It can be managed with the RSAT tools after setup, using either Failover Cluster Manager if in a cluster or the Hyper-V RSAT tools. You couldn't do live or quick migration in the 2008 non-R2 version.
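For reference, once both nodes are in a failover cluster, a live migration can also be kicked off from PowerShell with the FailoverClusters module, not just from Failover Cluster Manager. Roughly (the VM and node names below are hypothetical):

```shell
rem From an elevated prompt on a cluster node
rem (one-liner form so it can be pasted into cmd as well):
powershell -Command "Import-Module FailoverClusters; Move-ClusterVirtualMachineRole -Name 'TestVM' -Node HV-NODE2"
```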
 
Thanks for correcting me. I was trying to find the chart that shows which features each version supports and couldn't find it anymore; it looks like they redid the site and got rid of it. What they did have stated that you had to have the Enterprise or Datacenter version of Server, but that must have been referring to just the three editions of the full Server product.
 
I'm now thinking about getting a proper server board with a Xeon CPU and doing things "right". I'm contemplating what to do with my storage, which is currently on WHS 2011 (SickBeard, CouchPotato, and SABnzbd). I have various older parts available and was thinking about building an unRAID box or something equivalent, or instead putting it all on the server, running it in a VM under ESXi or Hyper-V, giving the VM direct disk access, and sharing it back to the server via NFS or iSCSI.

I've spent so much time researching that I'm now at the point where I have no idea what I want to do, but the itch really, really needs scratching.
 
I'm now thinking about getting a proper server board with a Xeon CPU and doing things "right". I'm contemplating what to do with my storage, which is currently on WHS 2011 (SickBeard, CouchPotato, and SABnzbd). I have various older parts available and was thinking about building an unRAID box or something equivalent, or instead putting it all on the server, running it in a VM under ESXi or Hyper-V, giving the VM direct disk access, and sharing it back to the server via NFS or iSCSI.

I've spent so much time researching that I'm now at the point where I have no idea what I want to do, but the itch really, really needs scratching.



You don't need some high-end unit to toy with; you can build a nice Hyper-V box for cheap and go from there if you need more. My first Hyper-V box was an Acer Core 2 Duo with 4 GB of RAM and a 160 GB HDD. I went from there after learning it and playing with it.
 
If you're thinking about a virtualized storage solution and are willing to go with a Xeon, I'd give serious thought to Gea's "all-in-one" ZFS solution.
 
You don't need some high-end unit to toy with; you can build a nice Hyper-V box for cheap and go from there if you need more. My first Hyper-V box was an Acer Core 2 Duo with 4 GB of RAM and a 160 GB HDD. I went from there after learning it and playing with it.

Plus, 32 GB is your max for RAM unless you install Enterprise or Datacenter 2008.
 
If you're thinking about a virtualized storage solution and are willing to go with a Xeon, I'd give serious thought to Gea's "all-in-one" ZFS solution.

I was definitely leaning that way - but the price of 8 GB UDIMMs is out of this world!
 
For the moment, I am living with 16 GB. Depending on your guests, you can over-commit pretty heavily and still be okay.
 
For the moment, I am living with 16 GB. Depending on your guests, you can over-commit pretty heavily and still be okay.

There won't be a lot of "guests". It will be an all-in-one play rig at the moment: one VM for OpenIndiana/napp-it, and all the rest to "play". I'm considering a VM for WHS 2011 for simple client backups and web access to my files, maybe one VM for pfSense, and a couple for Server 2008.
 
16 GB should be fine, then. I have 6 GB for my NexentaCore SAN VM, two XP Pro VMs (1 GB and 2 GB), an Ubuntu mail/web VM with 2 GB, an OpenVPN appliance with 1 GB, a PBX in a Flash VM (CentOS 5) with 1 GB, and an Astaro Security Gateway VM with 2 GB, and it seems to hang around 14-15 GB total under ESXi.
 
Oh yeah, dunno how necessary it is, but make sure you have VMware Tools installed on all guests. At the very least, this gets you the balloon memory driver, which helps when overcommitting...
 