ESXi 5 Home Lab Build Questions

Sorry for the mess; I hope it's readable. What I have here is a bunch of notes and plans about what I want to do.

I currently have:
Q6600
P35-DS3L
6GB RAM

While the CPU performance is fine, I feel like I need more RAM, and I don't want to spend money on "old" DDR2. Power consumption is also an issue. I've recently set up an OpenIndiana ZFS server on this machine, and I think I'm going to keep it, so I want better support for it in this new build.

With that being said, I've been doing a lot of research into what to buy. Budget is the biggest constraint by far. There are three basic routes I see (I'm leaning towards option 2):

Option 1: Supermicro server motherboard (would need help picking)
Xeon E3-1220
ECC memory
Benefits: remote management, dual Intel NICs, VT-d (I'm assuming), ECC

Option 2: Intel DQ67SWB3
i5-2400
non-ECC memory
Benefits: remote management, cheaper memory, VT-d

Option 3: Asus M5A97
Phenom II X6 1055T (this might also not be much of a reduction in power consumption; I haven't found anything directly comparing it to the Q6600)
non-ECC memory
Benefits: not sure about VT-d (AMD-Vi/IOMMU) here. The chipset seems to support it, but I'm not sure about this particular motherboard, and not sure about the processor either. Asus' website says this board can also support ECC. The onboard NIC seems to be automatically recognized in ESXi 5.0.

For RAID cards, I've been looking at the IBM M1015 and the Intel SASUC8I. I've never used a RAID card or anything SAS before; I'll be attaching it to some normal SATA drives. I've referred to this thread, which comes up on Google, but I'm still a bit confused. Is the cable I need called an SFF-8087 (Mini-SAS) to SATA forward breakout?

Any help would be greatly appreciated. Thanks!
 
I went with option 2 recently due to the cheaper memory and the lack of widely available 8GB ECC UDIMMs. IPMI would be a very nice feature to have, but I wouldn't call it necessary in a home lab.

I went with a Core i5-2500 and 32GB of memory (4x8GB). I was replacing a failed host, so I reused some parts (NICs, case). I went with almost exactly the same motherboard (BOXDQ67OWB3). The onboard NIC will work using Chilly's driver, btw.

The total for the motherboard, CPU, and RAM was about $580.
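
In case it helps anyone else with this board, this is roughly how a community NIC driver gets onto an ESXi 5 host (the bundle filename below is just a placeholder for whatever the actual package is called, and SSH has to be enabled on the host first):

    # Copy the offline bundle to the host, then install it. Unsigned
    # community VIBs usually need --no-sig-check. Reboot afterwards.
    scp net-e1001e-offline-bundle.zip root@esxi-host:/tmp/
    ssh root@esxi-host
    esxcli software vib install -d /tmp/net-e1001e-offline-bundle.zip --no-sig-check
    reboot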
 
I've been looking at Intel Pro NIC cards pulled from Dell servers online. Are these compatible with desktop mobos? Particularly the one in my sig. Not too sure if it matters or not.
 
I have 5+ of those Dell-pulled Intel NICs; no problems with any of them. The IBM versions work fine too.
 
^ Nice to know.

I went ahead and ordered one pulled from a Dell server.
 
Does the DQ67OW board support VT-d (Directed I/O)? I'm hoping to use it for ESXi 5 and to pass through a SATA card for unRAID.
 
I would go with a Supermicro 1155 or 1156 motherboard (you can find them open box on Newegg for <$100) and a Core i3 CPU. 1155 is faster due to Sandy Bridge and supports more instructions, but with 1156 you can have 32GB of relatively cheap memory. I went with the 32GB 1156 quad-core build for my first box, and aside from the virtual NAS, after using it for six months I don't really have a use for 8 logical cores in the home lab. So for the second box (which may become a dedicated NAS) I just got a Supermicro 1155 board with 16GB on Newegg, and the Core i3-2100 is $99 at Microcenter.
 
Can't use VT-d on an i3. Would not recommend it for a virtualization lab machine.
 
Got me there. I already have a Xeon X3440 for that, and I will be moving to a dedicated NAS ASAP anyway.

Get a Xeon E3-1230 instead; just keep in mind that Sandy Bridge socket 1155 Xeons DO NOT support registered memory.
 
Something wrong with your i5 2500K | P8P67 Deluxe | 16GB 1600? Why not put ESXi on there? I used to run my lab on a 2600K (double the logical cores with hyperthreading, but not necessary) and 16GB DDR3. DDR3 is dirt cheap right now.

If VT-d is necessary, sell the 2500K and get a 2600 (the non-K chips support VT-d; the K versions don't).
 
Generally speaking, RAM is paramount. The more RAM you have, the more useful your rig will be. The free version of ESXi 4 will take up to 256 GB of RAM; ESXi 5's free license caps you at 32 GB.

Your processor should support VT-d, so you can pass PCI devices (disk controllers, network cards, etc.) directly through to a VM.
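
If you want to sanity-check what the host could pass through, something like this from the ESXi shell will list every PCI device the host sees (the actual passthrough toggle lives in the vSphere Client under Configuration > Advanced Settings, followed by a host reboot):

    # List all PCI devices ESXi has enumerated; anything you want to hand
    # to a VM has to show up here first.
    esxcli hardware pci list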

ESXi tends to be picky about network cards. $40 on eBay gets you a PCIe Intel Pro/1000 (get the server model; not sure if the desktop models work) and ESXi loves it. ESXi 5 is less picky than ESXi 4.
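
A quick way to see which NICs ESXi actually picked up, and with which driver:

    # Lists recognized NICs, their drivers, and link state.
    esxcli network nic list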

I think hardware RAID is highly overrated for ESXi hosts. For one, ESXi doesn't do motherboard-based RAID (which isn't really hardware RAID anyway), so you'd have to buy an expensive dedicated RAID card.

And what would you use it for? Data integrity? Remember, RAID isn't backup. Performance? Spend the money from the expensive RAID card on a decent SSD instead. There's no RAID card out there that can compete with a decent SSD in terms of performance; you'd need an array of 300+ drives to equal the IOPS of a single consumer SSD.

Separate your bulk storage from your OS storage, so you can put the stuff that needs to be fast on the SSD.
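
Rough sketch of how that looks from the ESXi shell (the device identifier and datastore name below are placeholders; in practice the Add Storage wizard in the vSphere Client does the same thing, including the partitioning):

    # Find the SSD's device identifier.
    esxcli storage core device list
    # Create a VMFS5 datastore on an existing partition of the SSD.
    vmkfstools -C vmfs5 -S FastSSD /vmfs/devices/disks/naa.XXXXXXXX:1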
 
I'd say hardware RAID does have some benefits in a home lab if the card has cache and a BBU and you can find a deal on eBay. Or just get both; that's what I do: important or 24/7 VMs go on the RAID 1 array for redundancy, test/dev VMs go on the 128GB SSD for speed.
 