Critique my ESX/ESXi build.

My objective was to build a quiet ESX sandbox that can sit in my apartment without sounding like a jet turbine (HP/Dell). It also needs to be able to run 15-20 low-intensity VMs. These parts are based on whitebox lists and the information I found by scouring the internet. I'm fairly certain they should all be compatible with ESX 4.0 U1. My budget was about $2000, but I obviously went over that. Here are the specs:

[spec screenshots: build1.jpg, build2q.jpg]


My only real concern is the SATA drives and their performance. I plan on using the SSD to host the OS and the SATA Samsungs to host the VMFS datastore. ESX obviously won't work with the onboard software RAID. If I went with the PIKE 1078 card, it would be about $400 to get RAID 5. By sticking with RAID 0 or 1, I can use the 1068E card for only $80. I'm fine with RAID 0 since this box is only for testing purposes. Also, I noticed that most of this technology is over a year old now. Should I wait longer, or do you think this is overkill to begin with? Would I be better off building a cheaper i7 960 workstation instead?
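Assuming ESX sees the 1068E array as one plain device, my rough plan for carving out the datastore from the service console is something like the following (the device name is just a placeholder; I'd pull the real one from esxcfg-scsidevs, and I know the vSphere Client's Add Storage wizard does the same job with less typing):

    # list the disks/LUNs the host can see
    esxcfg-scsidevs -c
    # create the VMFS3 volume on the array's partition (placeholder device name)
    vmkfstools -C vmfs3 -b 1m -S vmstore1 /vmfs/devices/disks/naa.600605b0xxxxxxxx:1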

Any input is appreciated.
 
Did you find specific info that indicated the PIKE 1068 is compatible with ESX 4? Just curious, because I'm currently running the Z8NA myself with ESX 4 and will need to add more hard drives in the future, but my preliminary research didn't yield any concrete info that the PIKE 1068 was compatible. Not trying to discourage you, just wondering if you found something I didn't. I haven't done an exhaustive search; I've just started poking around the past few days. Sorry if I threadjacked.
 
Yes, the 1068 was actually the one thing that was halting my entire build. I couldn't find anyone that had used it. Luckily, after searching and searching, I found a guy that not only used it, but built a rig very similar to the one that I spec'd out: http://invgroup.com/Virtualization/ESXiWhiteboxServer/tabid/105/Default.aspx

I wish I had found this page earlier. It would have saved me a lot of time on research, pricing, etc.

Keep in mind that the 1068 is limited to hosting 2 arrays.

How has your experience with the Z8NA been so far? Any trouble with the onboard NICs and ESX 4?
 
The only issue I've had with the board wasn't even related to ESX: I've been unable to get the ASMB4-iKVM module to work. I suspect it's an issue on ASUS's end, since they usually pair these with three-NIC boards. Reading the BIOS update notes, it sounded like it should have been fixed about two updates ago, but I can't seem to get it working.

I don't recall any issues with the onboard NICs and ESX 4, though I'll say that I've only recently started running them at gigabit speeds. I do seem to have a bottleneck transferring files to my WHS VM, but I can't rule out my cabling or my desktop's onboard NIC.

On another note, keep in mind that many of the 12GB memory kits (at least the reasonably priced ones) tend to be quad rank, which will limit you to a memory speed of 800MHz. It's a Nehalem limitation. Kingston is the only vendor I've found that offers a compatible 12GB kit that specifically states it's dual rank; they offer a quad-rank kit as well. I'm waiting for RAM prices to come back down so I can pick up one of the dual-rank kits.

I'm only running one processor right now because I decided to go with the higher-priced L5520 for the power savings. Kinda wishing I had just said screw it and gone with the E5520 so I could have afforded a second processor sooner.
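If you want to double-check what speed the memory actually trains to once the slots are populated, dmidecode from the service console should show it, assuming your dmidecode build reports those fields:

    dmidecode -t memory | grep -i -E 'speed|rank'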
 
My biggest question is: why go with Intel? Why not go quite a bit cheaper with AMD and still have VT available?

Another thing you have me scratching my head about is... why the sound card?
 
Sound card is pointless, none of them are supported with ESXi. Drop it and save some cash.

Your controller is supported, but I'd be surprised if it isn't a fakeraid controller that ESXi simply sees as a JBOD controller.
 
Forget the sound card. It won't do you any good.

That PIKE is going to be SLOW, especially on SATA without a BBWC. Buy a used PERC 5 card instead, get a cache/battery module, and you'll do a lot better, especially since you could use SAS with cache later on if you'd like to. It's an LSI card like the PIKE, but a lot more capable.
 
Go with an AMD build and save some money.

As you have mentioned, the VMs you plan on running aren't that resource intensive.
Get enough cores and spindles and you will be all set.

Nice build otherwise. :)
 
Homebrew, I created this thread for you and the others that are having issues with the ASMB4-iKVM module: http://hardforum.com/showthread.php?p=1035550222

I managed to finally get mine working, but it was quite a pain in the rear until I blindly performed the exact sequence of events required to get the thing working correctly. If you have any questions, let me know.

Also, I ordered the Kingston RAM that you recommended. It's been great. This system has been flying so far. I'll update again in a week or so when I have a better idea of the overall performance.
 
Glad to hear it. I'll give your method a try next weekend and report back. How's the pike working out for you?
 
I'm surprised no one has mentioned this but... why the SSD for VMWare? It doesn't access the drive that much so you're not gaining anything by having one, save a few bucks and just throw in a cheap SATA drive.
 
I'm surprised no one has mentioned this but... why the SSD for VMWare? It doesn't access the drive that much so you're not gaining anything by having one, save a few bucks and just throw in a cheap SATA drive.

You are joking. Right? :eek:

An SSD will make a world of difference.
VMware generates lots and lots of small random I/Os.
I currently have close to a dozen VMs running on a single Intel 160GB G2 with better performance than when I had them spread across multiple HDs.
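Rough math: even assuming each of his 15-20 light VMs only averages 20-30 random IOPS, that's 300-600 IOPS in aggregate. A single 7200rpm SATA disk sustains maybe 80-100 random IOPS; the G2 does thousands. That's why the SSD stops being a luxury once you stack VMs on one spindle.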
 
I'm surprised no one has mentioned this but... why the SSD for VMWare? It doesn't access the drive that much so you're not gaining anything by having one, save a few bucks and just throw in a cheap SATA drive.

You have ~got~ to be smoking something.

A SATA disk is slow as ~snot~. We're talking about a max sustained 100 IOPS or so, and that's before you even factor in RAID overhead.
 
You have ~got~ to be smoking something.

A SATA disk is slow as ~snot~. We're talking about a max sustained 100 IOPS or so, and that's before you even factor in RAID overhead.

You're missing what he asked. The guy is buying an SSD just to boot ESX from... not to hold VMs. That's overkill... though a small SSD isn't too much.
 
You're missing what he asked. The guy is buying a SSD just to boot ESX from..not to hold VMs. That's overkill...though a small SSD isn't too much.

eh. depends on what you're doing. That is true, but it does make for a fast-ass boot :D
 
You're missing what he asked. The guy is buying a SSD just to boot ESX from..not to hold VMs. That's overkill...though a small SSD isn't too much.

Ah, that changes things then.
A good USB stick would be a better choice and then double the money to get an 80GB Intel SSD to host some of the VMs.
 
You're missing what he asked. The guy is buying a SSD just to boot ESX from..not to hold VMs. That's overkill...though a small SSD isn't too much.

I boot ESXi from an SD card in my HP box. It seems to work fine and boots very quickly.
 
Ah, that changes things then.
A good USB stick would be a better choice and then double the money to get an 80GB Intel SSD to host some of the VMs.

note that you need a good one, or you'll overload the bulk transfer capabilities of the USB bus on the stick and cause a host crash :)
 
note that you need a good one, or you'll overload the bulk transfer capabilities of the USB bus on the stick and cause a host crash :)

True.
SLC-based USB sticks are also recommended (good luck finding one, though).
Otherwise, disable all forms of logging, as the constant writes would wear out the drive in a matter of months.

I am using an old 1GB USB stick for my ESXi installation, and it is working great.
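If you'd rather keep the logs than kill them, you can also redirect them to a datastore instead of the stick. Going from memory here, so double-check the exact setting name, but it's an advanced setting in the vSphere Client along the lines of:

    Syslog.Local.DatastorePath = [datastore1] logs/esxi-host.log

(the datastore name and file path are just placeholders). That keeps the constant small writes off the flash.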
 