Need Advice / Review of my first ESXi Build

bluesdoggy

I'll try and keep this as concise and organized as possible; however, I want to present a full picture in order to make my situation clear for you. Just know up front that any advice / tips / critiques are welcome.

Project Statement: I'm the one-guy IT staff for a small logistics company. I am in the process of reorganizing our computing infrastructure and trying to get positioned for the next 3 to 5 years of progress. I feel that we can benefit from virtualizing some of our existing servers / applications and grow this as we move forward. I am working towards this goal on a somewhat limited budget due to business size / industry conditions.

Current Setup / Infrastructure: Our main business application runs off of an IBM Power6 server (AS/400 / iSeries / System i, depending on your age). We run < 5 servers total right now of other varieties (file, AV, firewall, applications, etc.). My userbase (20-25 users distributed across 3 locations in a 10/10/5 split) works on Windows workstations in a workgroup environment. Offices are connected to each other by an MPLS cloud, with varying pipe sizes from each office to said cloud. The office I am designating as "the data center" will have a 10 meg pipe out. Traffic on this cloud includes VoIP. Aside from a fairly robust service agreement with IBM for hardware and software support on the Power6 box, the only managed equipment we maintain is the set of edge routers for the MPLS cloud; I handle everything else.

Why Virtualize: I deal with fairly limited physical space for computer equipment and am in the process of upgrading what little space I do have to offer better temperature control, so now is a good time to introduce / revise our hardware setup. Currently our "servers" are, for the most part, white boxes that I've constructed. From a shared applications standpoint, we currently run multiple applications off of the same boxes in the same OS instances, meaning errors / maintenance on one app take down the others as well. Because we continue to add apps at a rate of one every 12-18 months, resources on some of these boxes are becoming an issue.

I feel that moving to a small-scale virtualized scenario would accomplish several goals I have: it would mean less physical hardware to maintain, it would increase the availability of apps by giving each its own sandbox to play in, and it would help me better utilize my limited physical space.

My Proposed Strategy: I am not a complete newb when it comes to virtualization; however, I've never attempted to run ESXi before. With this in mind, I am going to begin by putting together either a purchased (IBM / HP / Dell) server or a whitebox server with the purpose of running vSphere Essentials on it. I will initially use this as a test lab to familiarize myself with the technology and to get a feel for how things scale and need to be configured in a virtual environment. After this initial phase I will move into a small-scale production environment, virtualizing our apps one at a time until all of them are operating in a virtualized environment. I will then begin to transition services such as firewall, AV, backups, etc. over. The next phase will come sometime in the next year / year and a half, when I add a second physical box into the equation and upgrade to Essentials Plus. At that point I will begin to utilize some of the more advanced features like high availability.

My proposed hardware: I am working with my CDW rep to obtain spec'd-out boxes from a few OEMs. I'll ignore those for the moment and focus on the whitebox configuration I've come up with.

CPU: Xeon X5650 (2.66 GHz, LGA 1366, 6 cores)
Motherboard: Supermicro X8DAH+-F-O
RAM: 3 x 8GB or 6 x 4GB of supported, registered ECC DDR3
LAN: Intel EXPI9404PTL 10/100/1000 quad-port PCI Express PT low-profile server adapter
CPU Cooler: Supermicro SNK-p00e8P
Case: Supermicro CSE-825TQ-R700LPB
Power: Redundant 700W PSU included with the case.

Thoughts / Questions:

1) You'll notice I left out storage. I plan on running ESXi 4.1 off of a USB drive. As for storage for the VMs, I am unsure how to proceed. I know that if I plan to go to a two-box setup down the line, I'll need some sort of shared storage to get what I want out of it; as of right now, though, that setup is a bit wasteful (in my eyes). I think my best option would be to throw a RAID card in and simply use 4 or 5 disks in a RAID 5/6 array for the first host (rough command sketch at the end of this list). I've also thought of using hardware passthrough to virtualize an Openfiler / OpenIndiana instance and use that as a "one box" approach for now, one that could still be used even after I add more hardware later.

2) Am I missing anything glaring? Am I going down the path of stupidity? Trying to cost-justify investment in virtualization at this small-biz level is difficult for me, simply because many times we have settled for "well enough" rather than "done well".

3) What do I need to do about cooling the above setup? Are the fans that are included with the case enough, or do I need to upgrade? Am I gonna melt the face off of that proc if it is just shrouded without a fan attached?

4) Comments on my hardware setup are appreciated. I put this together based on a lot of reading in this forum and others. I don't mind over-building a bit; I want this solution to be viable for several years to come. What I'm deathly afraid of is under-speccing. I've left myself room to add a second processor and have put in (what I think is) way more RAM than I probably need.
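For reference, here is roughly what carving a VMFS datastore out of that local RAID array would look like from the ESXi 4.1 console if I go that route (the vSphere Client's Add Storage wizard does the same thing and handles the partitioning for you); the device name below is just a placeholder:

esxcfg-scsidevs -c                                                              # list the disks / RAID volumes ESXi can see
vmkfstools -C vmfs3 -b 8m -S localvm1 /vmfs/devices/disks/naa.XXXXXXXXXXXX:1    # format partition 1 as a VMFS3 datastore named "localvm1"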

Again, thank you to everyone who contributes here, especially those that take the time to read through this and offer some advice.
 
I don't have a lot of comments on your choice of hardware. It seems to be in line, but what I might suggest is checking with the vendors you mentioned to see if they have a solution that will meet your needs on a budget. When my company went down the virtualization route we went through Dell, and they provided us a package deal consisting of 2 servers loaded up with a fair amount of RAM and their MD3000i SAN with a few disks to get our feet wet. Depending on how much they want your business and how much you have in the budget for all of this, the vendor you decide to go with might play ball and undercut the next one if they know you are seeking the same solution from other vendors.

With that said, I think you're going down the right path with putting ESXi 4.1 on a USB stick. All of our ESXi servers past and present have been set up with a 1 GB SD card where ESXi is installed, and we've never had any trouble.

The one thing that always makes me nervous is using local storage for VMs. Something could happen to the server (failure, a RAID card failure, or something else I'm not thinking of) and there go the VMs you have on that box. I've always personally believed in utilizing some sort of SAN or shared storage for the host servers, for various reasons. Let's say you have a 2-ESXi-box setup: if you have all of the vmdk / vmx files on shared storage, it's just a matter of re-registering the VMs on the other box, turning them on, and away you go. Minimal downtime.
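From the surviving host's console it's roughly something like this (paths are just examples; the same thing can be done by browsing the datastore in the vSphere Client and adding the .vmx to inventory):

vim-cmd solo/registervm /vmfs/volumes/shared01/fileserver/fileserver.vmx    # register the VM; prints its new VM id
vim-cmd vmsvc/power.on <vmid>                                               # power it on using that id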

In your strategy you mention only one box to start out with. My opinion would be to get 2 to start with, for redundancy purposes. If you move all of this stuff onto one box and that one box takes a dump, then what are you to do? Sit on your thumbs and hope the vendor pulls through on the 4-hour turnaround you have for support?

Just my 2 cents
 
Thanks for the feedback.

I understand your reservations about using local storage; that is the one part of this that leaves a funny taste in my mouth as well. I really don't think I can swing a SAN appliance from one of the big OEMs, though, in terms of budget.

What are the community's thoughts on using something like a NAS-like device from QNAP, Synology, or Drobo? I know Drobo is typically frowned upon as overpriced, but I've seen a Synology unit that was VMware certified and could act as an iSCSI target.

There's also the option of using Openfiler on a separate box built out with just storage in mind: say a Norco unit with a quad-port Intel NIC, a PCI-X SAS controller that could handle a bunch of disks (no RAID), and whatever micro-ATX proc/mobo combo would handle the load.

Speaking to your concern about hardware failure in a single-box scenario: if I went whitebox, I think I'd have enough budgeted to basically just build 2 of whatever I choose, so I could have another box ready to step in if a piece of hardware goes bad. With an OEM solution, I don't know that I'll be able to, due to budget.
 
Have you thought about a ZFS NAS/SAN appliance? You could install some flavor of OpenSolaris, plus napp-it to manage it. It works very well with ESXi.
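The basic setup is only a handful of commands; roughly something like this (disk names, pool layout, and the IP are just examples):

zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0    # 6-disk raidz2 pool
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore                                      # export it over NFS
# then, on the ESXi host:
esxcfg-nas -a -o 192.168.10.20 -s /tank/vmstore vmstore               # mount it as an NFS datastore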
 
I had thought of doing an OpenIndiana box and using it as a SAN. I've also looked at buying something like Dell's MD3200i (apparently the successor to the popular MD3000i), but that is going to be cost prohibitive, I think.
 
You don't need a super high-end box to serve as the SAN...
 
Yes, what is the $$$ constraint? Depending on the answer and your other constraints, an all-in-one might be a decent way to go.
 
It's not well defined, but speaking generally I've got around $6k to work with in total. I could probably stretch that to $7k or so, but that's about the max.
 
You should still take a look at the big OEMs before dismissing them because of cost. Depending on space requirements you might be pleasantly surprised.

I've heard of a couple of people using the Synology / Drobo units for backup-to-disk type things, but not for shared storage. I can't imagine that it wouldn't work the same. I remember reading that QNAP has VMware-ready appliances to use for shared storage; check them out too.

Like everyone else is asking: what does the budget look like for this project?
 
Some of Synology's stuff is VMware Ready and supports iSCSI. Call them up; their pre-sales guys are very knowledgeable. I bought a DS1511+ (business class) a couple of months ago for home use myself.
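For what it's worth, hooking ESXi up to an iSCSI target from the console is roughly the following (4.x-era commands, and the IP is just an example; the vSphere Client does the same thing under Configuration > Storage Adapters):

esxcfg-swiscsi -e                            # enable the software iSCSI initiator
vmkiscsi-tool -D -a 192.168.10.30 vmhba33    # add the NAS as a dynamic discovery (send targets) address
esxcfg-rescan vmhba33                        # rescan so the new LUN shows up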
 
If you go with one ESXi server to start out, check to see if any of your current servers could serve as a backup second ESX box for now, in case your one host goes down. ;)
 
Well, let me first start out by saying that I come from a rather large enterprise, so my mindset is a bit different; however, I still think my points apply to a business of any size:

1. Do not go whitebox. You will be left holding the stick when sh!t goes down. Get a decent server with a great support package from a reputable company such as Dell, HP, etc. It would be very bad if you migrated critical apps and then had to go through the RMA process.

2. Do not build the SAN inside the virtualized environment, for the same reason as above: if your virtualized environment goes down, so does the SAN. Go with a reputable NAS; Synology was mentioned, but you need to consider the IOPS your apps require.

I see this too many times: recommendations for "lab"-type equipment in a production environment. That way of thinking is flawed, as you can see from my points above. Yeah, it may work; yeah, you may hear what other people are doing today with "no problems," but you have to look at the risk. Another thing I see is no mention of the network backend. This is one of the most critical pieces; everyone worries about compute (CPU/memory), but rarely do I see a networking strategy for VMware deployments. Do your research, ask questions.
 
Addressing each point:

1) I agree with you 100% in terms of having robust, immediate support available for business-critical apps. Our "life blood" is the software running on our IBM Power6 box, and on that one I maintain not only round-the-clock support on hardware and software but also proactive hardware monitoring. For everything that would be virtualized, at least at first, a small amount of downtime wouldn't be a killer. I generally keep a few parts here as immediate backups (some RAM, hard disks, etc.), and until we get to a situation where we have multiple boxes in a high-availability scenario, I'll keep our existing hardware available as a backup. I'm well aware this wouldn't fly in a larger, more corporate environment, but it's one of the concessions I'm used to making because of our size and the constraints on budget.

2) The more I've read on virtualizing the SAN, the less I like the idea. It seems like, especially starting out with a one-box approach, I'm just overcomplicating things and introducing failure points; virtualizing the SAN would make less sense than just using local storage until I need to migrate to a SAN for multiple boxes. I will probably play around with this some in my home lab environment, but that's it. This leaves me looking at hardware appliances (the Synology unit mentioned previously in this thread is quite attractive), a dedicated machine as a SAN, an OEM SAN, or just using local storage for now. If I'm missing an affordable OEM SAN solution (for me that is a SAN < $3-4k), someone please let me know; I just haven't been able to find it.

3) I have glossed over network infrastructure. My general plan is to have each physical host tied to a gigabit managed switch. I'll trunk ports together to offer increased bandwidth. A separate switch will be used for management ports. The connection out to our main network will go through another trunked link.

Thank you for the advice and words of caution.
 
Use local storage; don't run a virtualized SAN for what you're trying to do. Once you get a shared storage solution later, you can migrate to it with Storage vMotion (if you have the license) or cold migrations. If you were building a lab, a single-box solution might make sense for learning purposes, but for what you're doing I think local storage makes the most sense, and it appears you've already come to that conclusion yourself.
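For reference, a cold migration to shared storage later on is just a powered-off copy and a re-register; roughly (paths are just examples):

mkdir /vmfs/volumes/shared01/app1
vmkfstools -i /vmfs/volumes/local1/app1/app1.vmdk /vmfs/volumes/shared01/app1/app1.vmdk    # clone the disk to the shared datastore
cp /vmfs/volumes/local1/app1/app1.vmx /vmfs/volumes/shared01/app1/                         # copy the config
# then unregister the old VM and register the copied .vmx (vim-cmd solo/registervm ...)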

As far as the networking goes, I wouldn't isolate the virtual machines to one switch and management to another. I would have both types of traffic on both switches, as that provides redundancy. Your post above says a quad-port NIC, but I'd really recommend 2 dual-port NICs over the single quad. That way you can have one interface from each card teamed for the management networking, touching both switches, and the same for the virtual machines. Having the two dual-port NICs lets you avoid another single point of failure. Ultimately the server itself could just die, but it is smart to be as redundant as possible where affordable.
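On the console that works out to something like this, assuming vmnic0/vmnic1 are the ports on the first dual-port card and vmnic2/vmnic3 are on the second (the same thing can be done from the vSphere Client's networking configuration):

esxcfg-vswitch -L vmnic2 vSwitch0           # add a second-card uplink to the management vSwitch
esxcfg-vswitch -a vSwitch1                  # vSwitch for VM traffic
esxcfg-vswitch -L vmnic1 vSwitch1           # one uplink from each card
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1     # port group for the guests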

I cannot recommend you use whitebox hardware, because when you call VMware for support they will demand you use servers on the HCL. Obviously that could be a real problem. I'd be interested to see what you would save by using non-HCL hardware; I imagine it could be several hundred dollars, but to me that just doesn't justify it.
 
I have to agree with defuseme2k on the support issue. Virtualization can be frustrating if you don't have support.

Two things:

Consider an HP ProLiant ML330 box for your VM host.

Consider either a QNAP NAS or building a FreeNAS box using version 2.0 and implementing ZFS.

Either way, connect them to your VM server via iSCSI.

Personally I would go with the QNAP box, a 459 Pro II or larger, mostly because it's mechanically less complicated and there is less that can go wrong. Also, your worst-case fallback in a jam would be buying a second unit, but that would be a rare case.
 
Just a quick update,

Due to the responses advocating an OEM solution, I've redoubled my efforts to work on pricing with IBM, HP, et al. to get to something that works. Once I've got some solid quotes and parts lists, I'll post here for everyone to see.
 