Your home ESX server lab hardware specs?

Like everything else, it depends.

If you are running a sleepy little ESX host for development, it probably isn't important. If you plan on running your hosts flat out and need to eke every last bit of performance out of them, then yeah, go for the faster RAM.

What's the cost difference? If it's a hundred bucks, I say go for it, if only to know that you have the fastest RAM available and that it won't be a bottleneck.

If it's ten thousand bucks, certainly go with the slower RAM.
 
Can someone PM me how they set up a firewall like pfSense inside ESXi?

I can't get pfSense inside ESXi to forward ports.
 
Finally have my machines up!

ESXi 5.1: BOX1
Gigabyte 990XA-UD3 Motherboard
AMD FX-8120 Stock
32GB RAM - 4 x 8GB
Internal Storage:
1 x 128GB Crucial M4 SSD
LSI SAS3041E
4 x 250GB SATA2 7200 RPM RAID0
-- Windows VMs for Studying/Lab work

ESXi 5.1: BOX2
Tyan S7002G2NR-LE (dual cpu board)
1x Xeon E5504 2.0GHz Quad-Core
16GB ECC RAM (4 x 4GB)
Internal Storage:
1 X OCZ Vertex 4 128GB
2 x 160GB SATA2 7200RPM (JBOD)
-- Runs mostly Linux VMs (pfSense, CentOS, Linux Mint, Zenoss)

FreeNAS 8.3 Storage Box
ASUS M2N-E
AMD Athlon 64 X2 4000+
8GB - 2 x 4GB DDR2
CIFS - ZFS1: [6TB] 8 x 1TB (media sharing; 2 x external 4-bay eSATA enclosures)
SIL3132 eSATA PCIe x1 controller
iSCSI - ZFS2: [2.2TB] 6 x 500GB (VM storage), connected to internal SATA2 ports
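For anyone wanting to copy the pool layout, a rough sketch from the FreeNAS shell. The raidz levels are inferred from the usable sizes (8 x 1TB raidz2 ≈ 6TB, 6 x 500GB raidz1 ≈ 2.2TB) and the device names are assumptions:

```
# media pool: 8 x 1TB in one raidz2 vdev (~6TB usable after double parity)
zpool create media raidz2 ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7
zfs create media/share          # dataset exported over CIFS via the FreeNAS GUI

# VM pool: 6 x 500GB in one raidz vdev (~2.2TB usable)
zpool create vmpool raidz ada8 ada9 ada10 ada11 ada12 ada13
zfs create -V 2T vmpool/esxi    # zvol to hang off the iSCSI target
```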
 
Bumped one of my hosts to 32GB. Needed a bit more room now that I'm testing more apps.
 
Can someone PM me how they set up a firewall like pfSense inside ESXi?

I can't get pfSense inside ESXi to forward ports.

Weird, I had zero issues with pfSense on ESXi 4.1. My box has 3 NICs. I dedicated two of them to pfSense: one goes to the modem (DHCP), the other goes to the LAN (static IP addy). After you install and boot the VM, it works just like it was on a bare-metal box.
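If the port-forward problem is networking rather than pfSense itself, the ESXi side of that layout is just two vSwitches with one dedicated uplink each. A rough 5.x sketch (vmnic numbers and the vSwitch/port-group names will differ on your box):

```
# WAN: its own vSwitch + port group, uplinked to the NIC cabled to the modem
esxcli network vswitch standard add -v vSwitchWAN
esxcli network vswitch standard uplink add -v vSwitchWAN -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitchWAN -p WAN

# LAN: same idea on the NIC cabled to the internal switch
esxcli network vswitch standard add -v vSwitchLAN
esxcli network vswitch standard uplink add -v vSwitchLAN -u vmnic2
esxcli network vswitch standard portgroup add -v vSwitchLAN -p LAN

# then give the pfSense VM two vNICs, one on each port group
```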
 
Yeah, it just worked out of the box for me too. Didn't have to do anything special...
 
ASUS P8B-X, E3-1230 V2, with an LSI 9260-4i & multiple Intel CT cards; works like a champ with passthrough and RAID status monitoring (ESXi 5.0 originally, upgraded to 5.1).
 
Can I post what I'm planning?

FX-8350 (hoping to get to 4.6GHz+)
32GB DDR3
990FX mobo
dual IBM M1015s in IT mode - already purchased
Norco 4020 - already purchased
1000W PSU - included with the Norco, bought here at [H]
ATI 5450 for passthrough to the HTPC
currently have 5 x 3TB HDDs for ZFS storage
2 x 1.5TB for ESX in RAID-1
2 x 1.5TB drives unallocated
 
Not my home box, but my colo'd at work ESXi lab. Starting simple.

[image: EiZi8.jpg]


2x Xeon X5650
192GB DDR3
5x 256GB M4
5x 512GB M4
LSI 9200-8e
2x LSI 9211-8i

Connected to a Supermicro 24-bay JBOD with 1.5TB drives. Running OpenIndiana feeding NFS and iSCSI back to ESXi. Backed up by a Synology NAS, connected to an HP 1910-24G and an ASA 5505. Next up: independent SAN, 40Gb InfiniBand, and two more servers for some HA love.
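The OI plumbing for that is pretty compact if anyone wants to copy it; a rough sketch, where the pool/dataset names, zvol size, and IP are all made up:

```
# OpenIndiana: NFS is one property on a dataset...
zfs set sharenfs=on tank/vmstore

# ...and iSCSI goes through COMSTAR: carve a zvol, register it as a LUN
zfs create -V 500G tank/lun0
stmfadm create-lu /dev/zvol/rdsk/tank/lun0
stmfadm add-view <GUID printed by create-lu>
itadm create-target

# ESXi side: mount the NFS export as a datastore
esxcli storage nfs add -H 10.0.0.5 -s /tank/vmstore -v oi-nfs
```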
 
I read on VMware's forums that the free license of ESXi only supported something like 8 cores and 32GB of RAM. Can you use all of those cores and that RAM?
 
We just switched from Essentials to Essentials Plus and I decided to keep the old licensing for ~2 test boxes, both with more than 32GB of RAM.
 
Holiday project...

ESXi:
5.1.0 (VMKernel Release Build 799733)

Motherboard:
Supermicro X9SCM-IIF-O
BIOS version: R 2.0a (board shipped with the current version)
(superbiiz# MB-X9SMII)
NOTE - IPMI works very well on this board. Nice feature.

CPU:
Xeon E3-1270 V2 3.5GHz
(newegg# N82E16819117283)

RAM:
4 x Samsung DDR3-1600 8GB/1Gx72 ECC M391B1G73BH0-CK0
(superbiiz# D38GE1600S)

RAID:
LSI 9260-8i
(newegg# N82E16816118104)

Hard Drives:
2 x WD VelociRaptor WD6000HLHX 600GB (newegg# N82E16822136555)
2 x WD VelociRaptor WD5000HHTZ 500GB (newegg# N82E16822236244)

Case:
LIAN LI PC-7HX Black Aluminum ATX Mid Tower
(newegg# N82E16811112389)

Application/Use:
Linux hosts for WebSphere application development
WebSphere, DB2, Tivoli, FileNet, etc.
 
My home box. I have another 16GB (2 x 8GB) of RAM and another PCIe 2x1Gb NIC to stick in when I can be bothered, plus another 2TB green disk to add.

[image: VMWare-New.png]


Just migrated the VMs from my old lab (1 x Dell R200 and 1 x Dell 860); no issues so far.
 
Found the following on eBay for about $200 and bought three of them:

Supermicro 1U half-size 512F-280B chassis with X7DVL-E motherboard
Onboard dual Gig NICs
2x Intel L5320 processors (1.86GHz, 50W, quad-core, no HT)
24GB DDR2 ECC Registered RAM
160GB 7200 RPM HDD

I added Intel PCI-X dual Gig network cards and got a copy of the vSphere Essentials Kit.

Next is to buy a Synology DS1512+ NAS with 5x 600GB Raptors (already got three of them when Amazon was selling them at $110 apiece).

Here's a pic of what my lab will look like:

[image: personalt.jpg]


BTW: geeks.com has my server with 16GB of RAM for $194, if anyone's interested.
 
Single ESXi 5.1.0a box

Intel Xeon E5-2620
Supermicro X9SRI-3F
64GB (4 x 16GB) Kingston KVR13LR9D4/16HM

128GB Samsung 830 (Server 2012 VMs)
256GB Samsung 830 (everything else)
4GB USB stick (ESXi)
1TB Seagate (random testing VM store)
16TB of random sized disk file storage

IBM M1015 (LSI 9211-8i)
Chenbro 32 port expander

Intel 1000 PT Dual port

Norco 4020
Seasonic x750 Rev3 Gold
 
Because I put together a box with 64GB of RAM, and while I'm looking at Xen and the like, I'd rather go ESXi. Just curious if he bought a license or if there's a way to trick it into using it all. I dunno.
 
Thanks man, navigating the pricing tiers of VMware and the like was a pain in the butt.
 
Thanks for the info. I just finished my build and ended up getting a license. $$

Pics here, but here are the specs:

https://www.dropbox.com/sh/5fmkstrxyqp7h6w/vQ-gWx5i06

3930K @ 4.4GHz
64GB DDR3-1600 9-10-9-1T
120GB Samsung 840
2x Intel PCIe CT NICs
LSI 2008-based HBA for connecting to the DAS
Noctua NH-D14
Lian Li PC-7HX - love this case, got it here at [H]

Silent, powerful, adding VMs soon.

DAS

16 x 3TB HDDs, four RAID-Z volumes of four disks each (sketch below)
HP SAS expander
PICMG PCI Express backplane
1kW PSU
Norco 4020 case bought here at [H]
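For that RAID-Z layout, my guess is the four 4-disk sets live in one pool as separate vdevs, something like this (device names assumed):

```
# 16 x 3TB as four raidz vdevs of four disks each; ZFS stripes across vdevs
zpool create tank \
  raidz da0  da1  da2  da3  \
  raidz da4  da5  da6  da7  \
  raidz da8  da9  da10 da11 \
  raidz da12 da13 da14 da15
```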

benchmarks coming soon
 
Lab is expanding...

2x HP V1910-24G network switches

3x VMware 5.1 hosts

AMD Phenom II X6 1045T CPU
32GB RAM
Intel PRO/1000 VT quad port NIC

2x Hyper-V 2012 hosts

AMD Phenom II X4 925
16GB RAM
Intel PRO/1000 CT and Broadcom 5708 GbE NICs

Microsoft Server 2012 File server

AMD Athlon II X2 240 CPU
32GB RAM
Intel PRO/1000 VT quad port NIC
2 x 80GB RAID 1 for the OS
2 x 240GB RAID 1 Intel 330 SSDs for level-2 caching
8 x 2TB Samsung 7200RPM SATA drives in a Storage Pool for file shares and iSCSI
StarWind for iSCSI
FancyCache 0.8 beta for RAM and SSD caching on the iSCSI shares
 
I know 2008/2012 has an iSCSI target that works with ESXi, but I'm not sure what StarWind does?
 
StarWind makes his file server an iSCSI target so the Hyper-V hosts and VMware hosts can mount one of its volumes as an iSCSI volume.
 
But you don't need StarWind to make it an iSCSI server, since 2008/2012 can have it installed through the iSCSI Software Target for Server.
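Either way, once a target is up (StarWind or the MS one), pointing ESXi at it is the same few commands; a 5.x sketch, where the vmhba number and IP are just examples:

```
# enable the software iSCSI initiator, aim it at the file server, rescan
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.10:3260
esxcli storage core adapter rescan -A vmhba33
```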
 
Hello, I would like to ask about the following idea. I am planning to build a test rig that is as cheap as possible (maybe more than one). I've seen on ark.intel.com that the Celeron G540 and G550 have EPT/SLAT capabilities, which seems OK for my needs (Hyper-V & ESXi inside an ESXi server). Does anyone have any information about this?

My plan is to install such a CPU (I don't want anything powerful, as it will just be a test machine) and use as much RAM as possible (probably the full 32GB that these CPUs support). For storage, a RAID 10 of 4 low-power 2.5" SATA III disks is also fine; it will give me enough speed and storage with low energy consumption.
 
Can you give more of an idea of what you want to do?
How many VMs, usage, AIO machine, etc.?
 
My understanding is you'd need at least VT-x.

As far as I know this CPU definitely has VT-x, but on Intel's site I can also see "Intel® VT-x with Extended Page Tables (EPT)" listed. I just don't know whether that is a typo or whether it really does have EPT support.
 
Can you give more of an idea of what you want to do?
How many VMs, usage, AIO machine, etc.?


I want to run a hypervisor inside another hypervisor (ESXi or VMware Workstation) and be able to run a VM under the virtualized hypervisor. I want this as a test rig in order to test Hyper-V/ESXi clusters, etc. I don't care about passthrough capabilities, because the only thing I want is to create "private cloud" environments for testing purposes.
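For running ESXi (or Hyper-V) as a guest under ESXi, the piece people usually trip over is the vhv flag; a minimal sketch, where the datastore path and VM name are made up, and 5.1 also wants virtual hardware version 9:

```
# ESXi 5.0: host-wide toggle, appended to /etc/vmware/config
echo 'vhv.allow = "TRUE"' >> /etc/vmware/config

# ESXi 5.1: per-VM instead -- add to the nested hypervisor's .vmx
echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/datastore1/nested-esxi/nested-esxi.vmx
```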
 
But you don't need StarWind to make it an iSCSI server, since 2008/2012 can have it installed through the iSCSI Software Target for Server.

StarWind makes sense if you want better performance: because of caching the MS target does not do, because of deduplication Windows Server 2012 cannot do for live VHDs, and because of fault tolerance the MS target cannot do without shared storage like FC or SAS or, again, iSCSI :)

With Hyper-V 3.0 and test & development (no production) you can skip using iSCSI at all and put the VHDs on an SMB 3.0 share.
 