Your home ESX server lab hardware specs?

I'm using an HP DL320 4th gen and it makes an OK small server:

Pentium D 2.8GHz
8GB DDR2
1x 80GB for OS/ISOs
1x 500GB for VMs

No backups or anything; a gigabit D-Link switch for the backbone. Works well, pretty quiet, but it gets warm.

I have this server:

http://www.newegg.ca/Product/Product.aspx?Item=N82E16816110047

and it's DEAD quiet with the fans set to quiet managed mode.

Installed is a quad-core Xeon, 16GB of DDR3 RAM, and 2x 500GB drives in RAID 0. It's FAST and quiet.


Dash
 
Ended up securing another Dell T110 with 16GB of RAM.

Now it's time to put some shared storage into my home lab.
 
The Asus looks nice. What kind of support does it have for SAS RAID? My wife just gave me notice that our bedroom closet is not going to work out for my computer lab. I may have to drywall part of the attic and bring in AC.

Also, what kind of shared storage are you guys using? I plan on doing internal RAID 0 or 10 for now, as I need fast storage with a decent amount of IOPS.
 

I'm not sure about the SAS details, as the product page is down, but I know it's supported.
 
Like many here, I use a Solaris VM with ZFS as a NAS. It's cheap and works well when you don't have to tinker with it. However, I'm keeping my eye open for deals to build a dedicated box; you can have a very beefy storage system for less than a VMware-compatible retail unit.
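For anyone curious, the Solaris side of that setup is only a couple of commands. This is just a rough sketch; the pool/dataset names and the hostname are made up, and your ZFS layout will differ:

# on the Solaris VM: carve out a dataset and export it over NFS
zfs create tank/vmstore
zfs set sharenfs=rw tank/vmstore

# on the ESXi host: mount it as an NFS datastore
esxcfg-nas -a -o solaris-vm.lab.local -s /tank/vmstore zfs-datastore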
 
8-core Opteron with 16GB of RAM, 1TB local datastore for VMDKs.
[screenshot: 6575e0.png]
 
How does a Solaris NAS compare to FreeNAS in terms of file I/O performance?
 
I'm planning my first home lab environment: it should be able to host at least 4-6 Server 2008 R2 VMs, but low power consumption is also important (even though it won't be running 24/7).

Which processor do you recommend: the i7-2600, i7-2600S, or an E3-12xx?
 
I bought my first Asus because it was on sale and it was perfect for my budget. It worked out 100% perfectly because it's silent, and that was a key factor for me; every once in a while, if I'm in manland doing some work, I can hear the fan lightly speed up, but it's very soft.

So then I sold the other 4U server that I built and bought another Asus 1U server. It's older and only has a Core 2 in it, but I didn't need speed since I'm only building a storage server. I was VERY happy with FreeNAS, and will be installing either it or Storage Server 2008 on the new box when I move in two weeks.

The Asus rack servers are awesome; very happy with mine.
 
Never did get my Cisco SG300 ordered (haven't done a big enough Cisco lab order yet) but I did find an HP 1800G-24 in our other lab yesterday. That'll give me enough ports for now.
 

You really don't need that sort of processor power for 4-6 VMs. I'm poking around at creating another box myself. You can do two AMD X4 boxes for the price of one Xeon E3 box. They even have the new 32nm low-power processors.
 

Also bear in mind that ESXi can use AMD PowerNow! to clock down your CPU to save power if you tell it to. So don't spend an extra $50 on a low-wattage CPU when, for less money, you can still have the horsepower available if you need it.
 
[screenshot: syZVw.png]

Gigabyte finally hooked it up with VT-d on rev 1 of my board!
Quite happy with what I have learned from this box...
 
Just a WinXP install that runs a few Hammerdins...
A friend and I picked D2 back up to get back in the swing for Diablo 3.
 
Some upgrades since #241.
ESXi 4.0 vanilla host:
2x X5570
Asus Z8PE-D12X
12GB DDR3
2GB Kingston DataTraveler
QLogic QLE2460 4Gbps FC initiator

ZFS server:
Athlon II X2 245
ASRock 790GX Pro
8GB DDR3 ECC
4x 1.5TB in RAIDZ1
QLogic QLE2462 4Gbps FC target

Some bits about setting up those FC cards can be read here:
http://hardforum.com/showthread.php?t=1631559&page=2
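If anyone wants the short version of the target side, it's roughly this on the ZFS box (the zvol name and size are just examples; getting the QLogic HBA into target mode is the part covered in that thread):

# enable the COMSTAR framework and expose a zvol as a LUN
svcadm enable stmf
zfs create -V 500G tank/esxlun
sbdadm create-lu /dev/zvol/rdsk/tank/esxlun
stmfadm add-view <GUID printed by sbdadm>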
 
I have two Dell PE 2900 towers:

Server 1
2x dual-core Xeon CPUs
24GB of RAM
3x 1TB drives in RAID 5
iDRAC

Server 2
2x dual-core Xeon CPUs
12GB of RAM
3x 2TB drives in RAID 5
internal Ultrium 3 tape drive
iDRAC
 
[screenshot: vmwarey.jpg]


My personal test box. I haven't mapped a datastore to it yet, but I'll do that later this week and start using it more. I may put in an order for a few more 4GB sticks to get it up to 24GB or higher, but I'm sure my manager will tell me to use one of the other units with 16GB if I need a bit extra.
 
[screenshot: esxi.jpg]


Asus P6X58D-E, i7-930, 24GB RAM
I am moving my mdadm storage over to ZFS soon. I am also moving my pfSense box to a VM. Almost all of the CPU is being taken by the F@H VM, which I suspend if I need to run something more CPU-intensive on another VM.
 
[screenshot: 2vd0eie.jpg]


That is mine .. :)


By the way, you can suppress the SSH / shell warning:

[screenshot: vya4jk.jpg]


Bottom option = set to 1
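If you'd rather not click through the GUI, the same thing can be done from the host's shell on 5.x; I believe this is the matching advanced option, so treat it as a best guess:

esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1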
 
It's too bad you can't control an ESXi server / box from a Mac :( stupid Windows platform
 
Sure you can! Set up a small Windows VM for managing vSphere and RDP into it. Or be hardcore and do everything through the terminal (SSH)!
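If you do go the SSH route, most day-to-day stuff is doable with vim-cmd from the host shell. A quick sketch (the VM ID 12 is just an example; get the real one from getallvms):

vim-cmd vmsvc/getallvms          # list VMs and their IDs
vim-cmd vmsvc/power.on 12        # power on the VM with ID 12
vim-cmd vmsvc/power.shutdown 12  # clean guest shutdown (needs VMware Tools)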
 
Well.... The good news is that my SG300-28 arrived. The bad news is that I screwed up and ordered the -AR model and not the -NA. Same switch, different power cable included, but I don't need it. The problem is that the rebate is for the -NA only, so I'm not sure I'll get the rebate. But this was an NFR switch, so I doubt they'd process the rebate anyway. If they did, it would make the switch cost me like $35. :)
 
Standing up a new lab with vCloud Director. Not quite finished, but working on it.

Free ESXi 5.0 with a config change to allow nested 64-bit VMs.
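For reference, the config change people usually describe for nested 64-bit guests on ESXi 5.0 is a one-liner on the physical host (unsupported, so the usual caveats apply):

# append to /etc/vmware/config on the physical ESXi 5.0 host, then power-cycle the nested ESXi VMs
echo 'vhv.allow = "TRUE"' >> /etc/vmware/config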




vCenter Instance of Virtualized ESXi hosts




vCloud Director

 
Here comes a little update.. ~60GHz and 84GB of memory.
If you want to know more, simply click the images and you'll end up at my blog :D

#1


#2


Enjoy IT

MAFRI
 
Well, now I need to add drives...

Got it going... had to add an Intel PRO/1000 GT NIC.


ESXi 5

[screenshot: esxi5.jpg]
 
Those of you that run multiple VM networks, can I see the networking part of the configuration tab? I'm wanting to do some VLAN trunking and put some VMs on specific VLANs, and I'm confused about how to do it.
 
I just add another vSwitch and do each NIC as a different thing, for instance: NIC 0 = management network and such, NIC 1 = VM network access to the internet/each other.
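From the shell that's basically the following (the vSwitch/port group names and vmnic numbers are just examples; the vSphere Client does the same thing):

esxcfg-vswitch -a vSwitch1                # create a second vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1         # attach the second NIC to it
esxcfg-vswitch -A "VM Network" vSwitch1   # add a port group for the VMs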
 

In our development environment we'd need eight ports per host = 24 switch ports for VM use alone. :eek: Never mind out-of-band management, OOB backups, and iSCSI ports...

Easier to set up a trunk and VLANs.

I'll post a picture of our ESX lab config tomorrow for you, moose.
 
Not unusual to see 12 NICs on a vSphere box, depending on storage type and such. That's why 10Gb is so popular.
 
Here's an example from one of our lab ESX hosts. It has two pairs of NICs: one for live traffic and one for iSCSI storage. VMs live in one of three VLANs (30, 70 & 102):

[screenshot: esxnetconfig.PNG]
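For moose: once the physical switch port is a trunk, the ESX side is just tagged port groups on the vSwitch. Something like this (the port group names are made up; the VLAN IDs match the screenshot):

esxcfg-vswitch -A "VLAN30" vSwitch0
esxcfg-vswitch -v 30 -p "VLAN30" vSwitch0
esxcfg-vswitch -A "VLAN70" vSwitch0
esxcfg-vswitch -v 70 -p "VLAN70" vSwitch0
esxcfg-vswitch -A "VLAN102" vSwitch0
esxcfg-vswitch -v 102 -p "VLAN102" vSwitch0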
 
Thanks, that helped. I have since grouped all 3 of the NICs on my server into one team, trunked the VLANs I use to my ESXi server, and finally set up the different restrictions for each VLAN properly... I think; I'll find out when I start applying ACLs on my router.

And so that I'm contributing something useful, here is my server :D. There are actually 4 NICs, but one is passed through to my Minecraft VM. I also need to think about getting more RAM, as I don't have everything deployed that I want and I'm pushing close to capacity LOL.
[screenshot: ESXi.PNG]
 
Four VMs using 24GB of RAM? Would love to see what they are actually using and how far back you can scale those.
 