Help building cheap ESXi host.

Hello,

I'm planning out a low-cost ESXi 5.5 host for a production environment. I won't be running a TON of busy VMs or anything, but I'd like the datastore to be somewhat protected. I don't have access to any shared storage, and the whole infrastructure at this location is soon to be overhauled. Right now I just need a decent box to run a few things on. I have a similar system to what I'm about to spec out that runs various VMs in a production environment (FreePBX, Samba4 AD DC, Windows, 5 VMs in total). So here is what I'm thinking:

Code:
$240 Supermicro CSE-813MTQ-350CB 1U rackmount case
$200 Supermicro MBD-X10SLM+LN4F-O motherboard
$250 Intel E3-1230v3
$100 8GB of Crucial ECC RAM (CT102472BD160B)
$21  Supermicro heatsink
?    Hardware Raid Controller
?    Hard Drives

Basically the Supermicro parts make up the 1U SuperServer 5018D-MTLN4F. It is just cheaper for me to buy the parts separately.

I like the motherboard for the quad Intel i210 NICs. I think that will serve me better than tons of SATA/SAS ports in this case, since I'm not building a storage box.

I've used the E3-1230v3 before because of the Hyper-Threading, and it has worked well for me. It seems there is a refreshed version called the E3-1231v3, but I don't really know what the difference is.

I previously used Kingston RAM, but I've recently read that there are issues with it on Supermicro X10 boards and that Kingston removed all X10 motherboard compatibility listings from their website. People have said the Crucial model I listed works.

I hit a roadblock at the RAID controller. The VMs will have to run from local storage, and since this is production I don't want to be without redundancy at the disk level, so I think I need a real hardware RAID card. I've also read that if I want decent performance in ESXi I'll need write caching and a BBU. I don't know if there's anything to consider for compatibility with a Supermicro motherboard. The case only holds 4 hard drives, and I figure I'd run RAID 10. I don't want to spend a ton of money, and I know very little about what's available for RAID controllers (PERC/LSI/IBM/etc.). I've read good things about LSI, though, and I know eBay has good deals on this kind of gear.

I was considering SSDs, since I/O tends to be the biggest bottleneck with virtualization. It might end up being too much money, in which case I'll go back to regular drives. If I do go SSD, what should I look for, and do you have any recommendations?

Lastly, do you think this is a good build? Is there something else I should consider that would provide the same reliability, performance, compatibility, and efficiency at a lower cost?

Thanks.
 
Buy a used R710, C1100, or C6100 if you can. It might be cheaper and easier if everything is being overhauled soon.
 
Yeah, that's something I've been looking at. I just suck at buying used on eBay.
Looking at the R710, they seem to come with either dual Xeon 56XX or 55XX CPUs, and dual 55XXs benchmark pretty poorly. Would either of those be better than a single E3-1230v3? Obviously it won't run as efficiently. I love Supermicro IPMI but have never used Dell's equivalent (iDRAC, right?). Is it good?

At a quick glance it's probably going to be around $400 cheaper? But obviously it's used equipment vs. new. The only other issue is space. I have to move everything and don't know if I'll have room for even a 2U server. :(
 

Just some thoughts:

- with ESXi you mostly use VLANs, so there's no real need for a quad NIC
- if you need performance, think about 10 GbE

- if you need some data security you need RAID, but ESXi is lousy with local storage:
slow access and backup/clone/move, no caching, no checksums, limited snapshots, etc.

- you need a backup option to another pool, ideally in another location

Think about shared ZFS SAN/NFS storage instead of plain local storage.
You can virtualize the SAN; you only need a dedicated HBA controller and some more RAM.

- use an SSD mirror for VMs (120 GB up to 1 TB) and a RAID 6/Z2 of spindles for backup and general-use storage.
Among cheaper desktop SSDs, I currently use the SanDisk Extreme Pro (up to 960 GB) for my napp-in-ones.
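
As a rough sketch of that pool layout on OmniOS (the pool names, disk counts, and device names below are only placeholders, adjust to your hardware):

Code:
# SSD mirror for the VM datastore (placeholder device names)
zpool create vmpool mirror c1t0d0 c1t1d0
zfs set compression=lz4 vmpool

# RAID-Z2 of six spinning disks for backup and general storage
zpool create backup raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
zfs set compression=lz4 backup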

My current suggestion:
- Mainboard: http://www.supermicro.com/products/motherboard/Xeon/C220/X10SL7-F.cfm
(includes an LSI HBA; with ZFS this is better than any hardware RAID with cache/BBU)
- any Xeon
- buy 16-32 GB ECC RAM and give 1/3 to 1/2 of it to a ZFS OmniOS storage VM (RAM above, say, 2 GB is used as fast read cache)
- use ESXi 5.5U2
- add a virtualized SAN (you can use my free, downloadable, ready-to-use napp-it/OmniOS ZFS SAN VM/web appliance)
and pass through the LSI HBA disk controller to OmniOS
- use the (local) SAN storage as a shared NFS datastore (a sketch of this step follows below)

Read my mini-HowTo:
http://www.napp-it.org/doc/downloads/napp-in-one.pdf
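
For that last step, once the HBA is passed through to the OmniOS VM (DirectPath I/O in the vSphere Client) and a pool exists, sharing it over NFS and mounting it on the ESXi host is only a couple of commands; the filesystem name, IP, and datastore name below are just examples:

Code:
# on OmniOS: share a ZFS filesystem over NFS (napp-it can also do this from the web GUI)
zfs create vmpool/nfs
zfs set sharenfs=on vmpool/nfs

# on the ESXi host: mount the NFS export as a datastore (IP/paths are examples)
esxcli storage nfs add --host=192.168.1.10 --share=/vmpool/nfs --volume-name=zfs-datastore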
 
I didn't see what apps/VMs you will be running. If I knew what was going to run and how you plan to size them, I could make a better recommendation about storage. If they are typical systems that are up all of the time, then you just need a supported RAID controller that lets you do at least RAID 1 for redundancy. Startup might not be fast, but that's not really an issue except for the rare reboot.

I can easily use 4 NICs: one for management of the host, one for the management VLAN for the VMs, one for the internal VLAN for the VMs, and one for the external VLAN for the VMs. If you do use external storage later, you need a NIC or HBA for that.
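
For what it's worth, whether you split roles across physical NICs or tag VLANs on fewer uplinks, the port groups themselves are quick to set up from the ESXi shell. A rough sketch with made-up vSwitch/port group names and VLAN IDs:

Code:
# create a vSwitch with one uplink and two VLAN-tagged port groups (names/IDs are examples)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VM-Internal
esxcli network vswitch standard portgroup set --portgroup-name=VM-Internal --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VM-External
esxcli network vswitch standard portgroup set --portgroup-name=VM-External --vlan-id=20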
 
Thanks for the replies. Let me start with the important part directly related to my post.

To be honest, I'm not sure yet what will all run on this machine. It basically started because I need a place to install software for inventory management, alerting, and helpdesk. I use Spiceworks (I'm sure some of you know it) and really like it. I might also need to install Nagios in another VM. There aren't a lot of things that need monitoring either; I bet I could run this on a decent desktop if I had to. I'll probably also use the Windows VM for Spiceworks for management tools like I do at our main facility (access control system software, camera software, UniFi WiFi controller software, Windows RSAT tools if necessary, things like that). But then, because the other systems down there are so taxed (thus why we need to do an overhaul) and others are so OLD, I might look at whether it has enough resources to move some other things onto it. If not, no big deal, because again I'll be overhauling everything in a few months.

My goal right now is to get as powerful a system as I can for under $1k. After doing some reading I think I might be better off getting a used Dell server. I like the looks of the R610. It's a bit nicer than the C1100 since you can more easily get dual PSUs and another slot. Here is what I'm looking at now:

R610
Dual Xeon X5650
32GB RAM (4x8GB PC3-10600R)
Quad NIC (Two Dual Broadcom NetXtreme II 5709c)
PERC H700 512MB BBWC
Dell iDRAC6 Express
Dual 717W Redundant Power Supplies
2 x 240GB SSD (something "consumer grade")
---------------
Total: ~$950

I'll run RAID 1 unless I need more storage later. I think the SSDs are worth the money over 10k or 15k SAS. I don't think I need the H700, but I found a server that includes it and I'd only save about $150 without it. I could always sell it off later. And once I overhaul everything, if we move to a shared storage server, I can sell off the H700 and move the SSDs into the ZFS storage server, turning the box into a dumb VM host. Thoughts?


To the other points brought up in the thread:
I know all about the all-in-one setups running a storage VM. I won't be doing that for these people, but I appreciate the recommendation. I personally run a separate ZFS box here at our main facility. When I overhaul things at this location I will probably end up using shared ZFS storage (using the exact Supermicro board you linked). Also, I find the NICs very important when you start getting into larger deployments; I like having NICs for management and vMotion when necessary.

I would like to understand why VMware sucks with local storage, because I was considering going this route. I've always understood local storage to be faster: a local RAID 10 should be faster than a NAS/SAN at the same RAID level simply because you remove layers like NFS/iSCSI and the disks are attached directly to the host. The issue I've always had with VMware is the additional cost of a RAID controller to actually do that, and for a long time there was an inherent disadvantage because you weren't using shared storage (no vMotion or HA). But it seems that now, with VSAN and the other tech that has spun off from it, you can create a cluster of hosts, virtualize all the local storage into a virtual SAN, and get those abilities back.

I should also mention I'm not even 100% sold on using VMware for the overhaul. They use a LOT of stuff that requires Windows Server, so I might end up using Hyper-V, or if I'm feeling adventurous, perhaps KVM with something like oVirt.

This facility doesn't have much virtualization right now, and what they have done was done wrong. They run the typical stuff like DNS, AD, and a print server, but the biggest resource hog is Terminal Services. They have another box that runs the company's primary software, which uses an Oracle 11g database. What was done is actually pretty bad: the people who set them up used SBS 2008 FE with Hyper-V installed to run 3 VMs (AD, Terminal Server, and a web server). The AD and web server VMs (which need very few resources or features) have 2008 R2 installed, while the Terminal Server (which needs a lot of resources and features) is on 2008 Standard, so they don't even have RDS because they didn't use 2008 R2. And the fact that it is all virtualized on an SBS 2008 FE host means they are memory-limited at the host level, which is now a big issue for the Terminal Server (it needs more RAM and I can't give it any). So before I go spending a bunch of money on new servers, I have to see whether we can configure things correctly to get the most out of the hardware we have (primarily an R510 that is not virtualized and two R710s, only one of which is virtualized in the manner mentioned above).

Down the line, once we overhaul everything, they will end up with at least 3 hosts, some form of high availability (depending on MTTR needs, probably Veeam or full-blown VMware HA), a large ZFS storage server, and offsite disaster recovery to our main facility here.
 