Five Servers Into One?

Hurin

Hi there,

So, I've got the following five servers in an academic environment.

  1. File/Print server - I'd say no more than 20-25 users ever using it at once.
  2. Web server - light to medium academic duty.
  3. MS SQL server - provides all the database-driven backends to the web server above.
  4. DNS & Primary AD server - The primary MS domain controller and assorted other things (Symantec Corporate Edition server, backup server, etc.)
  5. Linux LAMP server - Linux, Apache, MySQL, PHP.
So, here's the question. On a currently cash-strapped academic budget. . . given that usage of our resources doesn't tend to be terribly high (we're not e-commerce, these are academic sites). . . is it feasible/advisable to move these five individual servers to a single uber-server running Windows Server 2008 and Hyper-V? I'd essentially just duplicate the above, but make them virtual servers.

The Hyper-V server itself would be a:
Core 2 Quad
8GB of DDR2
2-4TB RAID5 array
Gigabit network connectivity to the backbone

A couple of the servers are aging. So the goals here are to avoid buying multiple new servers and to cut down on the heat and noise in the server room (which is also my freakin' office!).

I realize I won't know for sure until I just start testing. But if anyone has any experience with Hyper-V and its performance, I'd love to hear if I should expect performance to be on par with our present-day, discrete server setup.
 
I would recommend putting the AD controller on a separate machine, and depending on what kind of performance you need, you can stick the others on a virtualized server. I would highly recommend, though, going with 32GB of RAM and using ESXi instead of Hyper-V.
 
The problem with virtualization at your size is HA. In order to get high availability you need some type of shared storage. You can easily run a 2-node ESX or Hyper-V cluster if you buy shared storage. 32GB of RAM would be overkill for 5 virtual systems. 16GB would be more than enough and allow for some growth.
 
So if I'm understanding you correctly, you're saying that the problem for me would be that if that single server dies, I've got nothing. So I should look into some form of redundancy beyond RAID (given that more than just a hard drive or a RAID controller can fail!).

Is that right?

Since this is an academic environment, it wouldn't be the end of the world if we lost everything for a few hours (or even a day) while I had to take the (regularly backed up) VHDs and pop them on another Hyper-V server that I'd probably have ready on a basic workstation-class machine as a "hot-swap" server. Does that address this concern?
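
For what it's worth, here's roughly the sort of VHD copy job I'm picturing (just a sketch; the paths and the standby share name are made up, and the guests would need to be shut down or exported first so the copies are consistent):

import shutil
from datetime import datetime
from pathlib import Path

# Made-up paths: wherever Hyper-V keeps the VHDs, and a share on the
# standby workstation-class box. Adjust to taste.
SOURCE = Path(r"D:\Hyper-V\Virtual Hard Disks")
DEST_ROOT = Path(r"\\standby-box\vhd-backups")

def backup_vhds():
    """Copy every VHD into a dated folder on the standby machine."""
    dest = DEST_ROOT / datetime.now().strftime("%Y-%m-%d")
    dest.mkdir(parents=True, exist_ok=True)
    for vhd in SOURCE.glob("*.vhd"):
        # The guests should be shut down (or exported) first, otherwise the
        # copied VHDs may not be in a consistent state.
        print("Copying", vhd.name)
        shutil.copy2(vhd, dest / vhd.name)

if __name__ == "__main__":
    backup_vhds()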

Sorry if I'm just talking out of my rear. I'm only in the initial "spit-balling" stages of this plan. :D
 
Which flavor of Linux are you planning on running? If Red Hat/RHEL, Hyper-V is out of the question support-wise. The only one they officially support is SUSE Enterprise/SLES 10. No Unix, no Novell, just Windows and one Linux distro.

http://www.microsoft.com/windowsserver2008/en/us/hyperv-supported-guest-os.aspx

That was one of the reasons my organization turned Hyper-V down and continued to use ESX. ESX has a way broader OS support scope, and VMware was providing virtualization products before it became a big deal in the last couple of years. Also, I haven't read too much into it, but resource-wise on the physical host, I'd rather use ESX and its small, efficient VMkernel/hypervisor than use Hyper-V sitting on top of Windows Server 2008. Again, I haven't read up on the efficiency of Hyper-V running on top of 2008, so go easy.
 
You're in a tough spot. I will second the recommendation to get more RAM. I'd say 8GB to start, and plan to move up to 16GB should you need it (keep it in mind for budgetary purposes). JonBoy69 is correct: Hyper-V only supports one Linux distro currently, and that means you'll be running Server 2k8, which means more licensing. I would like to tell you that there's an easy, direct way to transfer over what you currently have to a reliable platform without incurring extra costs, but that's just not realistic, to be honest. In your situation, though, here's what I would do (this is all keeping budget concerns in mind, so I hope I don't get bashed for this):

1) Determine whether your current server is on the ESX(i) certified platform list. If these are whiteboxes, are the chipsets and add-in adapters on the ESX I/O HCL?
2) Get some sort of shared storage. If these servers have internal disks of any decent size or recent production, consider getting a NAS enclosure for them, and get the drives out of the physical servers themselves (I'll tell you why in a second). Keep one drive to install ESXi onto and boot from.
3) Immediately install 8GB of RAM. RAM is the most used and most frequently contended resource in a virtual environment.
4) Install ESXi on the internal drive.
5) Set up your NAS box, and make sure that all your VLANs (if you're using any) are properly configured so that the ESXi server can see its targets.
6) Create VMs on the NAS.

When you're to the point that you're giving thought to bringing up the second ESXi server, you'll already have your shared storage set up. You just add the storage to the storage configuration on the new host, and off you go. This will get you almost ready to do the next steps, which are VMotion, HA, and DRS. All three of these require licensing, however, and that's never free. I believe, to start out, you'll be looking at $1500 per physical server. That will also get you a Virtual Center server license to manage the environment. At that point, you'll be able to sustain a single host failure, migrate machines from one host to another, etc.

Your initial buy-in would be cheap (basically just needing an additional 4GB of RAM and a NAS enclosure), and it gives you a planned upgrade path. Initially, there's no additional licensing that you'd have to buy, as ESXi in a single-host environment without Virtual Center, HA, DRS, and VMotion is completely free.
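
Just to put very rough numbers on the two phases (the RAM and NAS prices below are placeholder guesses, not quotes; the only figure taken from above is the ~$1500-per-host licensing ballpark):

# Very rough cost sketch of the phased approach above. The RAM and NAS
# prices are placeholder guesses, not quotes; the per-host VMware licensing
# figure is the ~$1500 ballpark mentioned earlier.
phase_1 = {
    "extra 4GB of RAM": 100,                   # guess
    "NAS enclosure": 300,                      # guess
    "ESXi on a single host": 0,                # free without VC/HA/DRS/VMotion
}
phase_2 = {
    "second host (repurposed hardware)": 0,
    "VMware licensing for 2 hosts": 2 * 1500,  # VMotion/HA/DRS + Virtual Center
}

print("Phase 1 buy-in: $%d" % sum(phase_1.values()))   # ~$400 with these guesses
print("Phase 2 add-on: $%d" % sum(phase_2.values()))   # $3000 in licensing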

Thoughts, anyone?
 
I have read all the feedback with interest, and it is much appreciated (and impressive!). But I must say that I think it's all a bit overdone for our needs. I think you guys are all thinking a bit higher-end, where virtualization is concerned, than I was.

Since we're a humanities department at a public university with a very limited budget, I was thinking something more along the lines of: "Build a Core 2 Quad machine with 8GB of RAM. Throw on Windows Server 2008 with Hyper-V. Create four or five virtual servers. . . move my crap over. . . test. . . done." With maybe a second physical machine (one of the current servers) acting as a backup domain controller in case the Hyper-V server ever quits.

My main concern was with performance. I suspect that with our (relatively) low utilization, we'd probably be okay. But I really don't know how well things scale in this sort of environment.

Oh, and we use Fedora for our Linux server(s). I'm told Hyper-V can be made to get along with Fedora, though perhaps not with the full "virtualization recognition" available to other Linux flavors. Why Fedora? I've always just liked it. And we're not smashing atoms over here, so we didn't need "industrial strength" Linux. :D
 
Other flavors of Linux will work; just test them out and try. I have Debian working just fine under Hyper-V. Also, if you buy Server 2008 Enterprise, which will be cheap for education... I think $300... it comes with 4 more licenses for Windows VMs.
 
re: Hyper-V
I have Debian and CentOS running in Hyper-V without any issues. In my experience, most people who whine about Hyper-V and Linux are people who A) have never actually used Hyper-V, or B) don't have enough technical background or patience to exercise due diligence and install a legacy network adapter when installing Linux on Hyper-V.

In an academic environment, "support" is overrated anyway, because most installations are part of a volume license or an MSDN Academic Alliance license where you don't really get any support to begin with.


re: RAM when running under Hyper-V
Having said that, 8 GB isn't enough RAM for the host and 4 VMs.
Different people will have different opinions on this, but FWIW here's my take as someone who is actually currently running Windows 2008 + Hyper-V: 2 GB for the host and 2 GB for each VM.

Rationale:
The Windows 2008 Server host needs 2 GB, no two ways about that. Save yourself the hassle and do a full install, not Server Core. Each additional Windows 2008 VM needs 2 GB if you run IIS, or SQL, or AD, or anything other than what comes with a "no roles" and "no features" install. Linux VM RAM needs vary widely; 512 MB is often adequate unless you run memory-intensive tasks.

The #1 performance limiter in virtualization is file I/O. More RAM means less file I/O (yes, there are exceptions; let's just go with the generalization on this one), and RAM is cheap.
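
To put numbers on that rule of thumb against the five servers from the first post (back-of-the-envelope only; the per-VM figures are the guidelines above, not measurements):

# Back-of-the-envelope RAM budget using the rule of thumb above:
# 2 GB for the 2008 host, 2 GB per Windows VM running a real role,
# roughly 0.5 GB for a lightly loaded Linux VM.
HOST_GB = 2.0

vm_ram_gb = {
    "file/print (Win 2008)": 2.0,
    "web/IIS (Win 2008)": 2.0,
    "MS SQL (Win 2008)": 2.0,
    "AD/DNS (Win 2008)": 2.0,
    "LAMP (Linux)": 0.5,
}

total = HOST_GB + sum(vm_ram_gb.values())
print("Estimated RAM needed: %.1f GB" % total)                    # 10.5 GB
print("Shortfall on an 8 GB box: %.1f GB" % max(0.0, total - 8))  # 2.5 GB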


re: VMware ESX(i)
ESX(i)'s main issue is its very restricted hardware compatibility, especially when it comes to storage controllers. If you happen to have one that is compatible, good for you; otherwise you are in for a big investment.

That's the only thing I am going to say about ESXi. Most people tend to bring up ESX functionality when talking about ESXi, forgetting that the licensing is a killer for academic environments (VMware really should give academia more of a break; the current academic licensing is just lip service).

For academia the "best buy" is VI3 Foundation (or VI3 Standard if you need HA), but that works on a 2-socket basis, so if you only have a single CPU machine, you sort of overpay. VI3 Foundation academic is $1400 (license and support, can't buy seperately), and it will NOT get you Virtual Center (which you don't need anyway).


I work in academia myself, and here is what I did, and what worked for me, YMMV.
My first virtual environment was running Virtuozzo (commercial product, $1500/year/2-sockets). Then I switched to Hyper-V (free, for practical considerations) which met our needs better than Virtuozzo.

Now, a year and a half into virtualization, I am moving to VMware Virtual Infrastructure (which is the de facto industry standard for virtualization).

My point? When you are on a budget, and you need something that works, Hyper-V is a good solution if you don't run "mission critical" systems. With Hyper-V you can get going while at the same time being able to later demonstrate the need for a better system and hopefully receive funding for it.

If you by some miracle get ESXi to work on your "consumer grade" hardware, then you may just run into the issue of: "Oh, you are already using VMware! Great! No need for us to spend any funds on buying it!"

The difference between ESXi and ESX may be obvious to us, but to administrators one VMware product is just like the other, it's "VMware after all!", and it may be difficult to argue the need for funding if you are "already using it".
 
Thanks for the info, Thuleman. I'll definitely be doubling the RAM to 16GB since it's most likely going to be dirt-cheap DDR2 anyway.
 
One thing to add is that I would recommend a workstation-class board by Supermicro or something along those lines. If possible, set up the same configuration on one of the other servers for backup purposes, just in case the main server goes down; at least you'd have something. It may be slow, but at least it's functional. Just a thought as I was reading through this thread. :)
 
I would recommend putting the AD controller on a separate machine.....

From experience I second this. You need to have your primary authentication on a physical server, not a virtual one.
 
The simple, technical answer is yes: from a performance perspective, reasonable performance can be expected from this configuration.

Others have given good advice. Use a separate physical machine for redundant AD; a desktop will do. Additional memory would be the best upgrade if possible. 8GB will likely buy you some time for 4GB sticks to come down in price.

A shared storage solution will aid in high availability. OpenFiler is an inexpensive option.
 
Sorry to dig up a month-old post, but I have to say a few things here...

First and foremost, everyone has been pushing overly complicated setups, and the OP just wants to run a basic setup. I'm going to guess, since he's a department techie, he's probably not expecting 30 million hits to his servers in 20 minutes...

I'm guessing the hardware he's running currently is 3-5 years old (hand-me-downs from labs and such). So a C2Q machine with a buttload of RAM in it would be quite a step up.

Here's my 2 cents:

I have had 10 VMs running under a Windows XP host on modest hardware (all commodity) without issue. Now, I will say that when I PURPOSEFULLY tried to crash a machine, I could (i.e., runaway SQL queries eating CPU and RAM, the latest Apache exploits, etc.).

I would say that a C2Q is good enough for now (and possibly in the future, depending on what kind of traffic you generate), but definitely get the RAM upgraded. 16GB will probably keep you good until you or your successor decide to redo the system. The NAS idea isn't a bad one, but you could also get by with running RAID 0/1 drives.

I have a LAMP server, an AD server, Win2k3 SBS, and a putz-around Linux distro installed and running under Windows XP Pro, 2x 80GB mirrored drives, 4GB RAM, and a Celeron 2.8... Yeah, it's painful some days, but it serves up just fine for about 50-60 users hitting the LAMP VM for custom CRM apps and cvn all day long... the SBS is strictly handling SharePoint and BlackBerry Exchange crap (not my choice)... CPU lives around 40-60% at any given point and the RAM is damn near maxed, but hey, it holds fine for throwaway hardware....
 
Well, reviving this thread to follow-up and ask a new question. . .

The move to a virtualized server environment was put on hold due to budget cuts. But, at some point, this is no longer going to be elective. When I can no longer hold things together with "spit and wire". . . we'll have to bite the bullet and move to a server like the one I outlined above.

Considering the duties specified in the first post. . . does it sound reasonable to shave approximately $400-600 off the budget and go with Intel ICH10-R for the RAID array? I realize overall performance numbers would suffer compared to a discrete RAID controller (I was considering the HighPoint RocketRAID 3520), but would the difference actually be noticeable to the end-users? If a discrete RAID controller is judged to be mandatory, I'd love to hear suggestions in the sub-$400 range that would perform adequately.

Thank you for all the replies and advice so far!
 
I am not sure that is a good option. I believe the ICH10R (motherboard RAID) is still a "fakeraid". This is not as robust as going with a separate hardware RAID controller. Also, if you are considering going ESXi, I know update 4 (released 3/30) now supports the ICH10, but I don't think it supports ICH10R RAID. In the past it has only supported independent disks on the Intel controllers.

I bit the bullet on my home virtual server and bought an Adaptec 3405 controller to run ESXi and a second NAS server (OpenFiler guest). It was not cheap, but knowing that I won't lose any data because of a dying disk in my basement was worth it.

If you are going to be consolidating that many servers I would definitely use hardware based RAID from a good vendor.
 
Heh, I guess I was just engaging in wishful thinking. I'll be sticking with "real" RAID. Thanks.

At this point, I'm still almost certainly going to be in a Windows environment with Hyper-V. But ESXi compatibility would be nice in case I change my mind.

So, I'm torn between the Highpoint RocketRAID 3520 and the Adaptec 3405 (kit, so I don't have to buy cables).

The RocketRAID has native support for 8 drives without the need for expanders. It also has a 256MB cache vs. 128MB on the Adaptec. But with four 1TB drives in RAID 5, I don't see us needing to expand past four ports anytime soon.

The difference between them is $90. So, with only that much difference, I guess I'd rather go with the better one. I'm just not sure which that is.
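
For the capacity math (RAID 5 gives you n-1 drives' worth of usable space, ignoring formatting overhead), roughly:

def raid5_usable_tb(drive_count, drive_size_tb):
    """Usable capacity of a RAID 5 array: one drive's worth goes to parity."""
    if drive_count < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (drive_count - 1) * drive_size_tb

print(raid5_usable_tb(4, 1.0))  # four 1TB drives -> 3.0 TB usable
print(raid5_usable_tb(8, 1.0))  # all eight RocketRAID ports -> 7.0 TB usable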
 
Never mind, I'm probably going with an Adaptec 5405. It has a much faster Intel processor than the 3405 for not much more money ($409).
 
Almost any RAID controller will work with ESXi/ESX 3.5. Don't go with something that expensive unless you want to waste your money on a local storage solution. It's a shame that so many people are *STILL* building their servers around the old ESX HCL.:rolleyes:

ESX 4 will be out, along with its full SATA support, if you want to stick with local storage.
 
I have a copy of ESX 3.5 and it did not see my Adaptec 3405. ESXi 3.5 loads the driver and sees my logical drives, but it was a no-go with full ESX, even though Adaptec claims the driver is in there. (I did not search for it, since ESXi will do what I need it to.) However, the OP is going to be using this server for what seems like a production environment. What brands/models of controller would you suggest, and trust, for running departmental servers?
 
Ah yes, the Dell PERC 5/i.

Great little controller for the money. It can really run and is super stable.

The Adaptec 5405 is a good controller also.
 
I have a copy of ESX 3.5 and it did not see my Adaptec 3405.....
Not sure why it didn't recognize your card, but I've built tons of whiteboxes with parts from numerous different vendors and they worked great. If you can, get yourself a more recent build of ESX 3.5. Many people have gotten that card working with 3.0.x; check the VMTN.

As for your question about a production environment: none. If I were building out a VI (and when I built out the then-2nd-largest in the world, 1500+ running virtual servers), I would select a server from the HCL and stick with that for support purposes. I would also steer very clear of local storage. I need my LUNs to be shared (whether it be NFS/iSCSI or FC), period. VMotion/HA/DRS is standard, and using these with local storage is a kludge hack at best.
 
OK, but the OP said he is in an academic environment with his funds limited. So let's say he doesn't have the money to spend on a SAN, which I think is actually the case.

Your other post said you don't know why people are wasting money on local storage controller cards on the old HCLs anymore and that almost any RAID controller will work with ESX(i). I am just curious as to what you would recommend for this user's problem if a SAN is not an option.

Almost any RAID controller will work with ESXi/ESX 3.5. Don't go with something that expensive unless you want to waste your money on a local storage solution. It's a shame that so many people are *STILL* building their servers around the old ESX HCL.


I personally think he (or she) could stick with a high-quality RAID controller for what he is trying to do, which he is going to have to spend some cash on. I am just trying to figure out where you thought that was a waste of money.
 
Yeah, I gotta say, 90% of this thread reminds me of how it always goes when a "basic use" friend or client asks me to spec out a computer for them. They'll just be using it for web browsing and Microsoft Word. But I constantly have to remind myself that they don't need low-timings RAM, the motherboard with the 8-phase power, or the video card that will give them decent gaming capability. They won't ever notice such things and would have been just as happy with a Dell "off the shelf."

So, just to simplify things:

  • I'm going to be using Windows Hyper-V.
  • I'm going to be using local storage on a RAID controller.
  • My budget is about $2000.
  • I intend to use standard (non-server) components in order to save money. I've done this with all five of my servers in the past and have not had any trouble.

Here's the build so far:

  • i7 920 (Core i7 will make it easier to squeeze in more RAM at lower cost since so many boards have 6 slots)
  • Asus P6T Deluxe (Deluxe = two integrated NICs)
  • 12GB RAM (6x2GB)
  • 4 WD RE3 Hard Drives (1TB each, 7200RPM)
  • Adaptec 5405

Again, I'm not looking to build the second largest server farm in the world here. I'm just looking to move a file server, a web server, a SQL server, and some other basic stuff onto one server via Hyper-V. I think we've covered a lot of useful information here and some of it has helped me think things through a bit more and prepare. But telling me that I should get fancier and spend more money isn't going to get us anywhere so long as Hyper-V will work with acceptable performance. Other things might be nice, but if it's not necessary for my needs, I can't justify the expense. :D
 
Hurin,

What's your plan to take them from physical to virtual? I have been using VMware Converter and it has worked great turning 2 physical machines into virtual ones.

Microsoft had a process years ago that I used to turn 2 physical Windows 2000 servers into Virtual Server 2005 virtual machines. But that process was LONG. I don't remember what product they used, but it involved me building and installing two other machines and making sure things could boot via PXE. A serious pain in the ass, but it worked. Hopefully they have a cool tool like VMware's now.
 
OK, but the OP said he is in an academic environment with his funds limited..... I am just trying to figure out where you thought that was a waste of money.

hmm.... maybe you haven't built very many whiteboxes recently :confused:. If you decide to go with local storage, you can get away with most dinky ~$50 SATA cards. Like I said, most onboard chipsets are also supported (not officially) in the recent builds of ESX 3.x. I don't know how else to explain it to you...... maybe lab it?

Hurin,
All those components will be fine for ESXi. You're making a mistake going with Hyper-V; it's an inferior product, and I really don't want to go into the reasons why, and frankly don't have the time. Like I said before, people have gotten that card working before; you didn't check VMTN, obviously. Here's just one link:
http://communities.vmware.com/message/1011057

Your budget is more than enough. If you feel the need to not go with something free and better... that's your decision.
 
I have only built 2 whiteboxes this year and was looking for the cheapest storage solution that was PCIe and could do RAID5. I thought the eBay'd PERC 5/i was the standard, looking through the forums and other sources.

So for whiteboxes, what onboard controllers are supported that can do RAID? I thought the ICH series was only supported as independent disks, even with ESXi update 4?

And these "dinky ~$50 SATA cards" , are they RAID capable? Point me to a model in that price range if it can do RAID (esp. RAID5 and run ESXi). I know both of my whiteboxes running right now work fine with the motherboard disk controllers for ESXi, but damn, I don't want to come home one night and find my servers and storage crashed because of a single disk failure.

I was looking for that el-cheapo RAID card that worked with ESXi. Gimme a link, I want to buy a couple instead of wasting more money!
 
I totally had a post of comments flaming you, but I figured I'd delete them. I get the feeling that you're being a bit condescending, and all I have to say to that is that you should do research on who you are talking to before you start talking. I've also been in the virtualization game since the early GSX days (actually, come to think of it... I used it in its release year quite a bit).

Also, the ESX SATA/RAID battle has been going on for YEARS. It's basically trial and error, but if you want concrete proof of dirt-cheap cards working... I guess I need to spoon-feed it to you ;)

- http://www.overstock.com/Electronic...d=123620&fp=F&ci_src=14110944&ci_sku=11096466 - Sub-$100
Tried and true card that works great with all ESX flavors, for the most part. Again, check out the VMTN.

- http://www.buy.com/prod/promise-sat...-7-pin-serial-ata-300/q/loc/101/10411959.html
Again, tried and true. It's even on the unofficial SATA/RAID supported-card pages. Not RAID, but DIIIIIIIIIRT cheap.


http://www.vm-help.com/esx/esx3.5/Whiteboxes_SATA_Controllers_for_ESX_3.5_3i.htm

I'm not going to continue listing out cards; I'm sure you will be able to find many $50-100 RAID-compatible cards. Don't give me this crap about "fakeraid" or BIOS RAID either.

OP, sorry for the threadjack. It always seems someone has to get the last word in. Hopefully this will be the last of this "informational request" by drake. Lastly, CHECK out that unsupported HCL and start building a whitebox from there.
 
xphil - that is really all I wanted to know. I honestly wanted to know which cards you thought would work, or could show me, that were actually that cheap. I am in no way wanting to get into a virtual-server-experience pissing match. I am sure you have far more experience with virtualization than I have.

Look, the problem started when I didn't understand why you were banging on people for "wasting money" on storage controllers listed on older HCLs, as you said.

I thought this was either in reference to my post about buying my Adaptec 3405 or Hurin's post about buying an Adaptec controller.

So, I really didn't know who you were saying was wasting their money building servers from old HCLs and really still do not know.

Yeah, you showed me a link to a PCI-X Adaptec RAID controller for under $100. That doesn't fit my needs because I needed PCIe RAID. I think I bought the best controller that fit my requirements for my job. I don't think I wasted money, and I don't think Hurin did either. I was looking for you to expand on your comment about who was wasting their money on what.

I could be wrong, but I tried not to insult or provoke you, yet you had to come back and let us know you built the 2nd-largest virtual infrastructure... and, yeah, the good old days of GSX. Thanks for your help - sorry to have gotten you upset.

And Hurin, yeah, sorry if I threadjacked.
 
Again, I deleted my flaming comments for a reason... I didn't want to start any wars. I've been extremely frustrated recently (CCIE approaching) and haven't had the time to get back into what I really love doing, virtualization. I lurk on this subforum quite a bit and see crap and misinformation all the time... I guess I just got fed up. Again, what I was trying to convey was that if you're going to build a whitebox, you can start at the bottom (motherboard) and build it very cheaply without touching the HCL. If you want solid hardware RAID, you can find many server boards that tout that for well under 400 bucks, which is less than many stand-alone RAID cards. To each their own, I guess.

One last thing, since you're new here:
I'm pretty sure I'm considered a troll here by now, though it's mostly from jealousy. I'm extremely young and have more experience than most. I deal with this on a professional level daily, but on these forums I can handle it a bit differently :p So don't take too much offense at what I spew.
 
The only thing that got me torqued is that I was trying to help the guy by telling him what controller I used for my home whitebox. You followed up with the "wasting money/HCL" bit, but did not explain it at all even after I tried to get you to do so. So, I see your point now, even though I think it was misdirected at this thread if you read the entire thing and what the OP is trying to accomplish.

I did not spend a long time looking for a similar solution to the OP's issue for my home system, but I surely wanted to know why you may have thought our ideas could be considered wasting money.

I am not looking for the last word here either. I understand what stress can do to people. Good luck on your CCIE test. Maybe you will get to be number 21000 and have that embroidered on your CCIE shirt.
 
Like I said before, people have gotten that card working before; you didn't check VMTN, obviously. Here's just one link:
http://communities.vmware.com/message/1011057
Forgive me, but I don't know where you got the idea that I didn't think "the card" would work. Second, I'm not sure where it's "obvious" what I checked and didn't check. All I said was that I was looking for a good RAID controller. And to keep things simple, because I got the sense people were trying to get somewhat exotic, I mentioned that I'm leaning towards Hyper-V.

If you feel the need to not go with something free and better... that's your decision.
Gee, thanks! This thread feels like it should be in the video card forums at this point.

For the record, our Windows Server licenses are dirt cheap. And since we will be buying Windows Server licenses regardless, Hyper-V is essentially free as well (whereas ESXi will be extra in licensing if we make use of HA, DRS, etc.). As for Hyper-V being inferior. . . I believe you that it is. . . but if it does what I need it to do and there is no detrimental effect on performance apparent to my users, for ease-of-use and learning curve purposes, it may just make more sense to go with the Microsoft solution in what is already a very Windows-centric environment.

I'm pretty sure I'm considered a troll here by now, though it's mostly from jealousy.
I assure you that I don't know enough about you to be jealous of you, since I don't know anything about you at all. But I can tell you that, based on the tone of your posts, I don't think jealousy is one of the primary reasons some might consider you a "troll." But I am grateful for your input, even if it was delivered a bit combatively. Thanks for participating.

Drake3, thanks to you as well! Regarding migrating the physical servers. . . I may take this opportunity to "clean up" the servers and just move the services to "fresh" installs of Windows Server 2008, though that's problematic for domain controllers. I once spent 72 hours over a long weekend rebuilding our Windows 2003 domain from scratch just to get a fresh, pristine foundation. We had been on a domain that had been upgraded from NT 4.0, to Win 2000, to Win 2003. . . and I just wanted to start over clean. I may do something like that again, though with virtualization I can do so a bit more gracefully. :D

Edit: For the record, I do intend to goof around on the server hardware and try ESXi after the hardware arrives (it looks like we'll be ordering it sooner than I thought). Who knows, maybe it will grab me and I'll go with it.
 
Heh, you make me giggle, Hurin. :p The entire dialog was between myself and Drake, and I'm pretty certain that we also apologized for the threadjack. If you don't know who I am, visit the Networking and Security forum (this forum was a spin-off of that). ;)

If you think I'm combative, surely you haven't worked in the IT industry long; I feel as though I was very diplomatic, in fact! LOL :p I didn't even flame. I swear, some people need to lighten up; arguing is good when it's backed by facts... learn this.

But anyway, to reiterate, because you clearly got the wrong impression from my posts... I was not aiming any of the comments at you except for MY opinion of Hyper-V. Glad we got that cleared up.

Drake: Again, the "wasting your money" was a general comment for this entire subforum as a whole. It seems as there there are a few people who give inaccurate information, that i have noticed from a few months of lurking.

And yes, I am extremely proud that I built out that environment and was part of the engineering group (lead for the entire network)... tell me, how many 23-year-olds get an opportunity like that..... :eek:
 