Your home ESX server lab hardware specs?

Tempting..could use another kit. :) These virtual appliances I'm playing with want 8GB...
 
Upgrading my home lab with a better file server running ZFS and replacing my Broadcom network cards with Intel. :)

Now my file server will be:

AMD Athlon II X2 250u
2x4GB DDR3
8x500GB WD5000AAKX drives in one zPool serving iSCSI and NFS datastores (rough layout sketched below)
4x1.5TB zPool for CIFS shares
Two Intel Dual port PCI-E Gb NICs (2 ports for iSCSI, 2 for NFS)
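For anyone curious, the pool side of that is roughly the following. Just a minimal sketch, not my exact commands: device and dataset names are placeholders, and I'm assuming raidz1 for both pools.

  # 8x500GB in one raidz1 pool for the VM datastores
  zpool create tank raidz1 da0 da1 da2 da3 da4 da5 da6 da7
  # a zvol for the iSCSI datastore (exported through whatever iSCSI target the OS provides)
  zfs create -V 1T tank/iscsi
  # a filesystem for the NFS datastore
  zfs create tank/nfs
  zfs set sharenfs=on tank/nfs
  # 4x1.5TB in a second raidz1 pool for the CIFS shares
  zpool create media raidz1 da8 da9 da10 da11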
 
Did you get in on those dual PTs I showed you? I just bought two myself.
 
AMD_Gamer grabbed that open box X8SIL. He'll be back tomorrow.

About to order 2GB upgrades for my Synology boxes (additional caching).
 
Just noticed I only have 3 free ports on my 1810G-24. Have to figure something out. I have two shiny 2960S switches here, but they are too damn loud for my office.
 
Move your IPMIs to a smaller dumb switch? If you have them going through the 1810, that is. They run at 100Mb anyway; waste of a gig port.
 
Yeah..that's the plan. I can probably free up 5 ports or so. I want to add a dual-port card to each lab box so that should be enough...for now.
 
I bought two of those dual-port Intel 1000 PTs from savemyserver on eBay. They ran me ~$50 apiece. Not too bad.
 
Have you ever posted a pic of your lab in the Networking forum?
 
My first ESXi machine. After reviewing the new v5 licensing, I'm converting this to XenServer tonight.
I may even put it in a case :)
 
You need a server motherboard with IPMI. Neither the ESXi machines in my home lab nor the machines I have at work have ever had a monitor or keyboard attached to them, or needed physical media to install anything.
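For anyone who hasn't used it, here's roughly what IPMI gives you from a CLI with ipmitool (the IP and credentials below are made up for the example; the web KVM and virtual media cover the install side):

  # query and control power over the LAN interface
  ipmitool -I lanplus -H 192.168.1.60 -U ADMIN -P ADMIN chassis power status
  ipmitool -I lanplus -H 192.168.1.60 -U ADMIN -P ADMIN chassis power cycle
  # serial-over-LAN console if you'd rather watch boot/install output in a terminal
  ipmitool -I lanplus -H 192.168.1.60 -U ADMIN -P ADMIN sol activate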
 
I'll bet yours cost more than $200. ESXi 4.1 running like a champ.
 
Well, I wanted to see what ESXi was about, so I grabbed the gear out of the mineral oil rig.

ASUS M4N68T-M
AMD Phenom™ II X4 840 $97 (Micro Center bundle)
4GB DDR3 Crucial, I think $40
DVD Burner $20
500GB WD $40
400W PSU $15
Intel Pro J1679 (found this in a 1U server I had lying around; a quick Google search values it at $15)



I've been looking into IPMI but don't have any experience with it. All my customers are off site anyway, so it wouldn't do me much good.
 
Just put a 2GB SODIMM in my DS1511. Waiting for the datastore on my DS1010 to go into maintenance mode using SDRS, and then I'll upgrade that one with NO DOWNTIME.
 
Just ordered 3 Intel PT Dual-port NICs for the lab. Might as well go ahead and fill up that switch... $50/each w/ free shipping on Ebay.
 
I wanted to play with some Intel SR-IOV NICs but it appears that still isn't supported in vSphere 5. Oh well..those cost a lot more. :)
 
Wouldn't you be able to connect to their network over VPN, then hit the IPMI IP address?
 
Sure :) If they had VPN connections available and were virtualizing enough machines to merit hardware capable of IPMI. I live in a very small town and service only slightly larger towns. Currently I've only got 2 VMs, and I'm planning a total of 8 more across 3 businesses.
 
Thought I would let everyone know that I got the T110 I recently bought up and running, and as far as I can see (contrary to what I read in many places), the onboard SATA and onboard NIC are both working great and available in the most recent version of ESXi 4.1 for Dells.

I read in many places that a RAID controller would be necessary because ESXi couldn't see the onboard controller in these machines. I do have a PERC 6, but the onboard is also working.
 
Put the new Intel NICs in..no problems except my HP 1810G-24 is completely full now. Time to get another 24-port switch or a quiet 48-port if I can find one...

Also replaced one of the OEM Intel coolers with a spare I had. Was starting to squeak.

(attached screenshot: vcenter5.png)
 
I asked before, but you should post some pics of your lab setup. A full 24-port switch is awesome!
 
My lab is boring..I don't have much gear. Have to remember, each of my 3 hosts uses 5 ports, so that's 15 right there. The two Synology boxes take 4 total. A few other things and you're full.

My goal has always been to make a small, powerful, efficient, and quiet vSphere lab. It's worked out well.
 
This is the reason I'm going with a single beefy server and nested VMs. Hopefully it won't have as many of the limitations I've had with 4.x.
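From what I've read, nesting 64-bit guests under ESXi 5.0 still means enabling VHV on the physical host first, something along these lines (unsupported, so double-check before relying on it):

  # on the physical ESXi 5.0 host; may need a reboot to take effect
  echo 'vhv.allow = "TRUE"' >> /etc/vmware/config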

If you decide to sell that 24-port, hit me up. I need a switch that supports VLANs, etc.
 
I hate nested VMs. It just makes things too complicated when doing some stuff, like vCloud Director. Each of my servers idles at 46W, and usually at least one is powered down via DPM. They make no noise and just hum along.
 
My 1800-24G is also maxed out. Not that all 24 ports are connected, but the 21 remaining functional ports are all in use. Three are no longer working thanks to a massive power surge 2 weekends ago. :p
 
2 hosts and a FreeNAS box all connected through an HP Procurve 1800-24G

AMD Phenom II X4 925
16GB RAM
Intel Pro/1000 PT dual port PCI-E NIC (iSCSI)
Intel Pro/1000 CT single port PCI-E NIC (NFS)
Intel Pro/1000 MT dual port PCI NIC (Mgmt and Virtual Machines)

FreeNAS box
AMD Athlon II X2 240
8GB RAM
8x500GB RAIDz1 pool for NFS and iSCSI datastores
4x1.5TB RAIDz1 pool for CIFS shares
2x Intel Pro/1000 PT dual port PCI-E NICs (2 ports for iSCSI multipathing, 2 ports for NFS LAGG; rough sketch below)
Intel Pro/1000 GT PCI NIC
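Nothing fancy on the multipathing/LAGG side. Roughly this, with interface names made up for the example; FreeNAS builds the lagg through its GUI, but it's plain FreeBSD lagg underneath, and the ESXi lines assume the 5.x software iSCSI initiator (4.1 used esxcli swiscsi instead):

  # FreeNAS/FreeBSD side: 2-port lagg for NFS (LACP, or failover if the switch side can't do LACP)
  ifconfig lagg0 create
  ifconfig lagg0 laggproto lacp laggport em2 laggport em3 192.168.20.10 netmask 255.255.255.0
  # ESXi side: bind two vmkernel ports to the software iSCSI adapter so it gets two paths
  esxcli iscsi networkportal add -n vmk1 -A vmhba33
  esxcli iscsi networkportal add -n vmk2 -A vmhba33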
 