Your home ESX server lab hardware specs?

Finally up and running:


AMD 880GX
Phenom II 1055T
8GB Ripjaws
2 x Intel 1Gb dual-port NICs
3 x 500GB RAID 5
Windows Server 2008 R2 Hyper-V
Roles: Hyper-V
Active Directory/DNS
VMs: 2 x SQL/DFS cluster (SQL 2008)
Windows Home Server Vail
 
Installed SCVMM and Vail and everything works very well. I'm really liking the SCVMM Library now; I just need to get some rapid deployment set up in PowerShell.

 
PowerEdge 840
Quad-core Xeon 3220 2.4GHz
8GB PC2-6400 RAM
4 x 500GB SATA HDD
Intel gigabit NIC

vSphere 4.1
 
Platform - leaning towards OpenNebula

CPU: Intel i7-950
Case: Norco-4220 4U - 20 HD Bay
Mobo: GIGABYTE GA-X58A-UD5
RAM: 24GB total of Mushkin Enhanced Silverline DDR3 1333 4GB sticks, CAS 9 (9-9-9-24)
Video: MSI N8400GS-D512H GeForce 8400 GS 512MB 64-bit DDR2 PCI Express 2.0 x16
Power: SeaSonic X650 Gold 650W ATX12V V2.3/EPS 12V V2.91 SLI Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply

I really don't care about the video card; I might look for something less power-hungry if I can find one.

I plan to have the root OS run off a 16GB USB key, though I'm not sure yet. Maybe another one for swap.

For hard drives, I have a couple of Western Digital Caviar Green WD20EARS 2TB 64MB cache drives and intend to add a lot more. To add many more, though, I'll need to buy RAID cards with enough ports and possibly upgrade my power supply.

This will be running in the basement, serving media, acting as our primary machines, recording OTA broadcasts, running Asterisk, among several other things. The build (minus the hard drives) cost me about $1500.

I don't plan to do much overclocking, but you never know.

I don't need an optical drive for now since I'll just feed it ISOs. I plan to back up to hard drives. Maybe when the price of writable Blu-ray media starts to come down I'll think about adding one.

I'll use OpenNebula since I'm pretty active on Amazon's AWS now, and I like the concept.
 
Updated the hardware in my lab. Now the total power consumption for both ESXi boxes and the file server is less than 200W. :)


ESXi box 1
AMD Athlon II X4 600e 45w CPU
16GB RAM DDR3 1.5V
3x Gb NIC (SC/VMotion, iSCSI, VM port groups)
80GB 2.5" HDD

ESXi box 2
AMD Athlon II X3 405e 45w CPU
16GB RAM DDR3 1.5V
3x Gb NIC (SC/VMotion, iSCSI, VM port groups)
80GB 2.5" HDD

File Server (Debian Linux)
AMD Athlon II X2 250u 25w CPU
2GB RAM
2x 120GB 2.5" HDDs mirrored (OS)
4x 250GB HDDs RAID 10 (VMs, iSCSI)
3x 1.5TB HDDs RAID 5 (file share)
2x Gb NICs (one for iSCSI, one for file share)
 
Looking to build a home ESXi lab. I'm going to get an Iomega IX4-200D 4TB for my NAS storage. Any recommendations on servers? I want low power...quiet...small... I don't need any internal disk; I'd like to boot ESXi from USB. I do want at least 3 Gb ports. Suggestions?
 
Those machines that Child of Wonder built look pretty cheap and offer what you're looking for power-wise, unless you want to go Intel.
 
True. CoW, what case are you using for those ESX boxes?

Nothing special, just a pair of basic computer cases.

Each has a MSI 760GM-E51 motherboard and an 80GB 2.5" 5400RPM SATA drive with ESXi installed. Paired with 4 sticks of 1.5V DDR3 RAM, each box idles around 45W according to my Kill-a-Watt.

You could probably knock 1-2W off that by using a flash drive, but I like to have a little bit of local storage so that if I need to perform maintenance on my file server I can Storage VMotion my vCenter server and DC there temporarily.

Just make sure you set your SATA ports to AHCI in the BIOS and get a motherboard with the SB710 southbridge, and single drives will work with ESX/ESXi. Grab an Intel dual-port PCI-E NIC and you've got a test box with a quad-core CPU, 16GB RAM, and 4 Gb NICs for around $600. Best part is, it won't cost more than a few dollars a month in electricity to keep running 24/7.
 
I believe ECC memory will simply run in non-ECC mode if installed in a desktop board.

Fully buffered DIMMs, on the other hand, will not work in a desktop board.
 
Finally got the parts and am starting to build and configure :) This is for my test/dev lab at home.

Primary Host
CPU: Intel Xeon X5640
Case: Norco-4220 4U - 20 HD Bay
Mobo: Supermicro X8ST3-F
RAM: 12GB Kingston ECC DDR3 1333
Power: Corsair 520HX
OS: ESXi 4.1 (on a 4GB USB drive)
Drives: 160GB 2.5" drive (ISOs and local storage), 400GB, 1.5TB (testing RDM and/or PCI passthrough for the LSI 1068E SAS controller)

Secondary Host (temporary; built a couple of VMs in preparation and used it to test VMotion)
CPU: Intel E6400
Case: Aerocool Masstige
Mobo: ASUS P5B Deluxe
RAM: 8 GB DDR2 6400
Power: Corsair 520HX
OS: ESXi 4.1 (on a 4GB USB drive)
Drives: 80GB drive (ISOs and local storage), 160 GB drive (local storage)

*This system will probably be converted to OpenFiler iSCSI storage at some point

iSCSI storage (Temporary, used to test VMotion)
Dell Optiplex 760
CPU: Intel E7500
RAM: 4 GB DDR2
OS: OpenFiler
Drives: 250GB

If I can scrape up enough cash, I'm hoping to build an i5 or i7 box to replace the secondary and let it take over the iSCSI role (since the 760 is a small form factor case with room for only 1 hard drive). The i5 or i7 should be more compatible with the Xeon for VMotion purposes also (just can't afford another 56xx box ;) ).
 
My setup at home

GENESX1
CPU: Intel i5 750 @ 3.2Ghz
Case: Fractal Design Define Blackpearl R3
Mobo: Asus P7H55-V
RAM: 12 GB Gskill DDR3
Power: Corsair 450w
OS: ESXi 4.1 (on a 1GB USB drive)
NICs : 3x Intel Pro 1000 MT Adapters
Drives: 2x 1TB Samsung F3
1TB iSCSI-presented volume

GENESX2
CPU: Intel i5 750 @ 3.0Ghz
Case: Fractal Design Define Blackpearl R3
Mobo: Asus P7H55-V
RAM: 12 GB Gskill DDR3
Power: Corsair 450w
OS: ESXi 4.1 (on a 1GB USB drive)
NICs : 3x Intel Pro 1000 MT Adapters
Drives: 2x 1TB Samsung F3
1TB iSCSI-presented volume

vCenter 4.1 Enterprise Plus is running as a VM on GENESX2 so I can play around with HA, DRS, FT, and vMotion. I work from home 80% of the time, so I needed a decent test lab for work and my own use.
I bought the two cases earlier this week; they have lots of drive space and they are nice and silent.

I have a separate 6TB Windows Home Server running StarWind iSCSI target software, presenting a shared iSCSI LUN to the two ESX systems to allow me to vMotion and Storage vMotion the systems around. It's also handy for doing test 2008 cluster builds.
 
Starting to get my ESXi 4.1 lab built. I got my Synology DS1010+ and 5 WD 1TB Black drives in yesterday. So far I'm loving the Synology. Great hardware and a really good/simple/fast management interface.

Just ordered the first parts for my server a minute ago. Getting one to make sure my build is good, then I'll add one more. Went with an Intel X3450 CPU on a SuperMicro X8SIL-F board in a Lian V351B case. I'll be booting off USB so no internal drive. Starting with 8GB of RAM.

Also got a managed switch... HP ProCurve 1810-24G. Fanless and can trunk multiple VLANs which is required for some of my lab work.

This is going to cost me a good bit more than I originally budgeted but I think I'll get a much better lab for it.
 
NetJunkie,

Are you doing everything through both those onboards or are you going to pick up some others? I like that case, looks extremely small and simple and compact.
 
I'll add more. Two gets me started, but to do some other fun things I'm going to want at least 3 NICs total...maybe 4. But one good benefit of the X8SIL board is that the two onboard NICs are Intel chipset...so just install ESXi and go. Nothing fancy or manual about it.
 
Updated the hardware in my lab. Now the total power consumption for both ESXi boxes and the file server is less than 200W. :)


ESXi box 1
AMD Athlon II X4 600e 45w CPU
16GB RAM DDR3 1.5V
3x Gb NIC (SC/VMotion, iSCSI, VM port groups)
80GB 2.5" HDD

ESXi box 2
AMD Athlon II X3 405e 45w CPU
12GB RAM DDR3 1.5V
3x Gb NIC (SC/VMotion, iSCSI, VM port groups)
80GB 2.5" HDD

File Server (Debian Linux)
AMD Athlon II X2 250u 25w CPU
2GB RAM
2x 120GB 2.5" HDDs mirrored (OS)
4x 250GB HDDs RAID 10 (VMs, iSCSI)
3x 1.5TB HDDs RAID 5 (file share)
2x Gb NICs (one for iSCSI, one for file share)

Mind commenting on your disk performance? I'm building a similar setup with a shared iSCSI box; I have 4x 500GB SATA 7200RPM drives that I'm planning to put in RAID 10 as well.

I'm not expecting blistering speed, but I'd like to be able to comfortably run a couple of low-use "production" guests, as well as a handful of test boxes and whatever. Planning to use FreeNAS as the iSCSI host with a Sempron 140 and onboard SATA.
 
I've been very pleased with the speed. I'm able to comfortably run 10+ VMs on the array.

Right now, unless things have changed in the last 6 months, I wouldn't recommend using FreeNAS as an iSCSI SAN for vSphere. Openfiler is also not a good option because of the older version of IETD they use which causes SCSI_RESETs with vSphere. In my setup, I installed and configured IETD myself.

Using either one of those as an NFS datastore, on the other hand, will work fine but you'll want to research how to tweak them for best performance.

A few settings to use if you're going to set up IETD manually:

Lun 1 Path=/storage/iscsi1.img,Type=fileio,IOMode=wb
Lun 2 Path=/dev/md0,Type=fileio,IOMode=wb
Alias FS_iSCSI
MaxConnections 6
InitialR2T No
ImmediateData Yes
MaxRecvDataSegmentLength 262144
MaxBurstLength 1048576
MaxXmitDataSegmentLength 262144
FirstBurstLength 262144
DefaultTime2Wait 10
DefaultTime2Retain 20

HeaderDigest None
DataDigest None

A Google search will confirm that these are recommended settings for a vSphere environment. Enabling writeback cache (IOMode=wb) makes a big speed difference, BUT you need to make sure your file server is at least connected to a reliable UPS and is configured to gracefully shut down in the event of a power outage. If not, your VMs can and will become corrupt, possibly leaving them unstable if they boot at all.
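To show how those hang together, here's a bare-bones sketch of an /etc/ietd.conf target section; the IQN, LUN number, and backing path are just placeholders, so point them at your own array:

# hypothetical example -- adjust the IQN and Path for your environment
Target iqn.2010-11.lan.fileserver:vmstore
    Lun 0 Path=/dev/md0,Type=fileio,IOMode=wb
    Alias FS_iSCSI
    MaxConnections 6
    InitialR2T No
    ImmediateData Yes
    MaxRecvDataSegmentLength 262144
    MaxXmitDataSegmentLength 262144
    MaxBurstLength 1048576
    FirstBurstLength 262144
    DefaultTime2Wait 10
    DefaultTime2Retain 20
    HeaderDigest None
    DataDigest None
# restart the ietd service after editing so the changes take effect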

I've also configured my RAID 10 array with mdadm and a 512KB chunk size.
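For reference, building that kind of array boils down to a single mdadm command; the device names below are just placeholders for whatever disks you're using:

# --chunk is in KB, so 512 gives the 512KB chunk size
mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=512 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1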
 
Home: Core i7-920, 24GB of RAM, 2x 1TB drives in a mirror for boot and local storage. Access to 7.5TB of storage off a QNAP TS-659 Pro via iSCSI. ESXi 4.1.

Work: Cluster of 9 servers, each an HP BL680c with 2 x 6-core E7450, 48GB RAM, 2x 147GB 6G SAS for boot, and 3 quad-port NC325m NICs for connectivity. Backed by a 4Gb Fibre Channel SAN to an EVA6400 and EVA6100 with a combined 175TB of storage. ESX 4.1 Enterprise & vSphere 4.1. DRS, FT, vMotion, Storage vMotion, etc.

Work Development: Mix of ESXi and Hyper-V for testing/evaluation.
 
I used a similar setup to the one Child of Wonder did:

MSI 760GM-E51
AMD Athlon II X4 605e Propus 2.3GHz Socket AM3 45W Quad-Core
12GB Corsair 1.5V RAM
160GB for my main drive.

Still working on the iSCSI solution. I really know very little about it all. Time to learn more. :)

Mad props to Child of Wonder for answering my questions.
 
I tried posting in the network and security forum for help in building a server. Maybe you fine gents could be of assistance.

I want to build a couple of rack-mount servers (probably two or three). One is going to be a home server/GNS3 box (unless I can learn Linux and run Dynamips on it).

I want two ESX or ESXi boxes.

I currently have a desktop, which I plan to gut for some parts.

I currently have:
Intel Q6600
6 GB DDR2 RAM (I don't mind buying a pair of 4 GB modules to up it to 8 GB)
Gigabyte motherboard (no integrated graphics)
750 GB hard drive (but I plan on using my Synology DS209 as an iSCSI target)
and 3 Adaptec quad-port 10/100 RJ-45 NICs

Do you think I can build a decent ESXi host with the above parts if I add:

an iStar 3U rack-mount case
http://www.newegg.com/Product/Product.aspx?Item=N82E16811165083

low-end graphics card
http://www.newegg.com/Product/Product.aspx?Item=N82E16814130579

and a 400 to 500w power supply
http://www.newegg.com/Product/Product.aspx?Item=N82E16817139001

I plan on building the second ESXi host when the Sandy Bridge-based Xeons drop.
 
I've been very pleased with the speed. I'm able to comfortably run 10+ VMs on the array.

Right now, unless things have changed in the last 6 months, I wouldn't recommend using FreeNAS as an iSCSI SAN for vSphere. Openfiler is also not a good option because of the older version of IETD they use which causes SCSI_RESETs with vSphere. In my setup, I installed and configured IETD myself.

Using either one of those as an NFS datastore, on the other hand, will work fine but you'll want to research how to tweak them for best performance.

A few settings to use if you're going to set up IETD manually:

Lun 1 Path=/storage/iscsi1.img,Type=fileio,IOMode=wb
Lun 2 Path=/dev/md0,Type=fileio,IOMode=wb
Alias FS_iSCSI
MaxConnections 6
InitialR2T No
ImmediateData Yes
MaxRecvDataSegmentLength 262144
MaxBurstLength 1048576
MaxXmitDataSegmentLength 262144
FirstBurstLength 262144
DefaultTime2Wait 10
DefaultTime2Retain 20

HeaderDigest None
DataDigest None

A Google search will confirm that these are recommended settings for a vSphere environment. Enabling writeback cache (IOMode=wb) makes a big speed difference, BUT you need to make sure your file server is at least connected to a reliable UPS and is configured to gracefully shut down in the event of a power outage. If not, your VMs can and will become corrupt, possibly leaving them unstable if they boot at all.

I've also configured my RAID 10 array with mdadm and a 512KB chunk size.


Thanks for the info, I'll definitely look into that. I did get the system up and running. The ESX box is on an i7 currently with 6GB, running diskless with only a USB flash drive for the install. It runs alongside the file server, which is a Sempron 140 with 1GB running 4x 500GB in RAID 10 plus a pair of 1TB in RAID 1, serving up the 1TB RAID 10 volume over iSCSI and an ISO share on the 1TB RAID 1 over NFS. This is currently on Openfiler, as I found FreeNAS to be completely unstable; if I start to have problems with Openfiler I'll have to look into the alternative options you mentioned, but so far it's been rock solid. Benchmarks were quite slow, something like 80MB/s sequential, but that was using the iSCSI initiator for Windows Vista on my primary LAN. Each box has a quad-port Intel card, so I'll be bonding the NICs and placing them on a dedicated VLAN with jumbo frames; hopefully that, along with some other tweaks, will help with performance.
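On the ESX/ESXi side, the jumbo frame piece should boil down to roughly the following from the console (or the equivalent vicfg-* commands via the vCLI); the vSwitch name, port group name, and IP below are just placeholders for my setup:

esxcfg-vswitch -m 9000 vSwitch1     # raise the vSwitch MTU to 9000
esxcfg-vswitch -A iSCSI vSwitch1    # add a port group for the iSCSI traffic
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 -m 9000 iSCSI    # vmkernel port created with jumbo frames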
 
I tried posting in the network and security forum for help in building a server. Maybe you fine gents could be of assistance.

I want to build a couple of rack-mount servers (probably two or three). One is going to be a home server/GNS3 box (unless I can learn Linux and run Dynamips on it).

I want two ESX or ESXi boxes.

I currently have a desktop, which I plan to gut for some parts.

I currently have:
Intel Q6600
6 GB DDR2 RAM (I don't mind buying a pair of 4 GB modules to up it to 8 GB)
Gigabyte motherboard (no integrated graphics)
750 GB hard drive (but I plan on using my Synology DS209 as an iSCSI target)
and 3 Adaptec quad-port 10/100 RJ-45 NICs

Do you think I can build a decent ESXi host with the above parts if I add:

an iStar 3U rack-mount case
http://www.newegg.com/Product/Product.aspx?Item=N82E16811165083

low-end graphics card
http://www.newegg.com/Product/Product.aspx?Item=N82E16814130579

and a 400 to 500w power supply
http://www.newegg.com/Product/Product.aspx?Item=N82E16817139001

I plan on building the second ESXi host when the Sandy Bridge-based Xeons drop.

I should be sleeping right now, but I'm trying to solve my own ESXi issue with a mix-and-match of two quad-core Xeons (supposed to work on paper, but ESXi decided to puke). Alas, I digress.

Your processor will be fine for a handful of VMs. What Gigabyte board are you using? Best to check the ESXi whitelist to see if it's on there (most likely it is, but you'll need to add an Intel or Broadcom NIC). A rackmount case isn't needed, but it's nice to have. You will, however, need more Intel or Broadcom NICs if you plan on using Dynamips (another solution is to have one or two physical NICs, then vSwitch everything through ESXi and use some Cisco Catalysts as breakouts from there).

If I ever get this dual rig going it'll look like this (OS testing, Dynamips, folding):

Supermicro X8DTN+
Intel Xeon X5560
Intel Xeon X5570
8 GB DDR3 RAM (will probably switch over to ECC once I can afford it)
2x36GB Raptors for booting
1x500GB WD for VM storage
1x500GB Seagate for VM storage
3xIntel Quad Port Gigabit NIC PCI-X
1xSilicom Quad Port Gigabit NIC PCI-e
Supermicro Chassis (damn it's loud!)

My other ESXi box looks like this right now (pfSense, Untangle, Ubuntu download box, BackTrack, and soon to be a PVR once I get some more hard drives):

Intel i5 650
Intel BOXDQ57TM
8 GB DDR3 RAM
80 GB HDD (ESXi boot drive)
500 GB HDD VM storage
2xIntel 1000PT NIC (PCI-e)
1xIntel 1000GT NIC (PCI)

Retired (used to be the dynamips lab):

Intel Q6600
Supermicro PDSME+
8 GB DDR2 RAM
160 GB HDD
250 GB HDD
Antec 300 case
Coolermaster 500W 80 Plus PSU
 
The board should work (if not, oh well). It has an integrated Broadcom NIC on there. I have three Adaptec quad-port 10/100 RJ-45 NICs that I wanted to throw in there. I want to do the rack mount for the lab I plan on building. I was thinking of getting a Dell 6224 GbE switch as the breakout (or an HP 1810, since people are crazy about it).

Are you saying I can only run a few (like 3) if I use the Q6600 with 6GB of RAM?

So what did you do with the retired PC?

And the specs for your other two ESXi hosts are ridiculous. I want to build two "real" hosts but want to wait until Bulldozer or Sandy Bridge drop, so I can get those processors, since they're somewhat close to launching. Hopefully they support virtualization.

What I'm thinking of doing is taking this course:

http://courses.ucsc-extension.edu/u...reaId=3785130&selectedProgramStreamId=1535344

And using my home lab. You get 70 percent off the VCP voucher, and I can get tuition assistance to pay for the class.

I appreciate the help.
 
Is that the only class necessary for VCP410? I just started looking into it, but I thought you had to take the fast track, or the three separate courses. The FT is like $5k but I can convince work to pay for the C&M if it's only $1k.

The Q6600 and 6GB will handle more than 3-4 VMs. It depends what they're doing, but for a lab setup you can squeeze quite a few VMs into 6GB, so you'll probably be more limited by disk performance than by CPU or RAM.
 
Thanks. I guess I'll be on my way. The only thing I need to research is whether an Arctic Cooling Freezer 7 Pro can fit in a 3U or 4U rack-mount chassis (or if I'll have to go back to the stock cooler...boo).

VMware recently started the VMware Academy (similar to the Cisco Networking Academy), so community colleges, technical schools, and universities can teach vSphere classes. They get access to slides, books, etc.

http://www.vmware.com/partners/programs/vap/

So, it's definitely legit. Some people have gone to community colleges, where they've gotten to take the class for $400.

I'd definitely do this versus the multi-thousand-dollar route. You have three to four months to read and do labs versus the one-week course. I took the VCP3 class and it was booty; I didn't learn anything. So if I can have the time and get Mastering vSphere along with the official courseware (plus CBT Nuggets), I'm golden.
 
Finally...my lab is built. Only one problem: a bad DIMM slot in one of my motherboards, but Supermicro is cross-shipping me a new one.

Two vSphere 4.1 Hosts Running ESXi

  • Lian Li v351B Case
  • Supermicro X8SIL-F Motherboard
  • Intel Xeon X3450 CPU
  • Kingston ValueRAM (2x4GB each)
  • Rosewill Green 430w PS
  • 4GB USB thumb drive plugged in to internal USB port

Networking:

  • HP 1810G-24 Switch (Primary)
  • Cisco 2960S w/ 10Gb Uplinks (Secondary for testing)

Storage:

  • Synology DS1010+
  • 5 x 1TB WD Caviar Black in a RAID5 Set

Software:

  • ESXi 4.1 booting from internal USB thumb drive
  • Enterprise Plus licensing


This is a great setup. It's fast and everything works...all features of Enterprise Plus are supported. The motherboard has two onboard Intel NICs and a third NIC dedicated to IPMI. I expect this will be a supported platform for a while. The servers idle at 37W and I have DPM shutting one down when possible. This whole setup is also very quiet. I have it in my home office sitting about 8' from me and it's not distracting at all. The only noise I even hear is the very low hum from the DS1010+.
 
I like your setup, NetJunkie. I'll probably build something similar except for the chassis; I want rackmounts. Are you just placing the USB drive at the top of the boot order? Is there anything special you have to do to ESXi to run it from there?

And I have a Synology DS209 and love it (plus it supports iSCSI). I might look at the model you have (as I'm limited to RAID 0 and 1 because of the two drive bays).
 
Yeah, I'm going to be looking into something similar, though probably not all at once, for some Citrix certifications coming up. Going with QNAP instead, though, since it's on the XenServer HCL. Unfortunately only the RS411 from Synology is on it, but QNAP builds a quality product as well. Looking forward to the next blog post, NetJunkie. Keep up the good work.
 
The board should work (if not, oh well). It has an integrated Broadcom NIC on there. I have three Adaptec quad-port 10/100 RJ-45 NICs that I wanted to throw in there. I want to do the rack mount for the lab I plan on building. I was thinking of getting a Dell 6224 GbE switch as the breakout (or an HP 1810, since people are crazy about it).

Are you saying I can only run a few (like 3) if I use the Q6600 with 6GB of RAM?

So what did you do with the retired PC?

And the specs for your other two ESXi hosts are ridiculous. I want to build two "real" hosts but want to wait until Bulldozer or Sandy Bridge drop, so I can get those processors, since they're somewhat close to launching. Hopefully they support virtualization.

What I'm thinking of doing is taking this course:

http://courses.ucsc-extension.edu/u...reaId=3785130&selectedProgramStreamId=1535344

And using my home lab. You get 70 percent off the VCP voucher, and I can get tuition assistance to pay for the class.

I appreciate the help.

I thought Gigabyte consumer boards pretty much all ran Realteks for the onboard LAN. I stand corrected. :)

If the onboard is a Broadcom then yes, that will work, but the 10/100 cards unfortunately won't, since VMware dropped support for all 10/100 NICs; gigabit is the minimum now.

I'd love to put my i5 setup into a 2U iStarUSA rackmount I've been eyeing on Newegg but I can't spend any more money on my lab until I pay off some bills lol.

As for the retired ESXi system, it's pretty much just sitting there. I posted it for sale locally (can't post it here yet since I don't have enough posts).
 
You're right, it is a Realtek on the board. And Gigabit Ethernet is the minimum. Gah.

I guess I'll use the NICs for GNS3 (I paid $60 each off of eBay).

And will the i5 fit in a 2U? Won't you have issues with cooling? And will the NICs fit in also?

And I guess you're one of those people who doesn't believe in having a million computers (I'm the same way. I just sold a netbook and notebook).
 
I used to have a million computers, downsized, and now it's starting to grow again, lol. The i5 should fit into the 2U since it's an mATX board and the stock coolers that come with the i5s sit quite low compared to the newer ones (I'll probably send an e-mail to Newegg/iStarUSA to confirm, but it should fit). As for the NICs, I jumped on a Buy It Now auction for 4 Intel 1000PT gigabit PCIe adapters (HP stamped) for $80 at the time. They came with low-profile brackets, and I'll probably have to hunt one down for the 1000GT.
 
Alright, NetJunkie has me hyped/motivated. I went on Newegg; here are the parts I have so far:

Intel Xeon X3450 Lynnfield 2.66GHz 8MB L3 Cache LGA 1156

Kingston ValueRAM 4GB (2 x 2GB) 240-pin DDR3 1333 ECC unbuffered SDRAM, x2 for a total of 8GB

SUPERMICRO CSE-813MTQ-520CB Black 1U Rackmount Server Case w/ 520W Power Supply

SUPERMICRO MBD-X8SIL-F-O Xeon X3400 / L3400 / Core i3 series Dual LAN Micro ATX Server Board w/ Remote Management

Does this look good to everyone? Will I run into any problems with any ESX features? I might throw in another 1x or 2x NIC.

And is there anything I have to do to load ESXi off of a thumb drive? I'm going to use my current Synology DS209 as an iSCSI target.
 
You're good, Spider. The onboard NICs on the X8SIL are Intel and work just fine. If you want more you can add more...but that depends on what you're doing. You can use any feature in the Enterprise Plus suite with this setup. Just boot off the ESXi disc (via virtual storage over IPMI) and it'll see the USB thumb drive as a valid install destination. I'm using 4GB thumb drives.
 
Netjunkie,

I haven't messed around with IPMI. Is there an installation guide for doing this? Or is it as simple as going into the BIOS?

And I want to use the setup when I take that vSphere class.

And did you buy the license for Enterprise+?

Also, do the Intel NICs support 802.1Q? If I were to use them with Dynamips/GNS3 and real switches, would that support inter-VLAN routing?

Sorry for the questions, but you've been gracious enough to answer my questions.
 
IPMI is simple...basically the X8SIL-F has a third NIC dedicated to remote management. You set the IP for that in the BIOS (or check the DHCP address it picks up). You web in to that, and from there you can remotely control the server and remotely mount things like ISO files for installs. Very useful.

I get vSphere licenses for free...so I didn't buy them.

Yes, the NICs do 802.1Q, though it's a feature of the driver really. But yes, they do it. I'm using VLAN tagging in my lab right now.
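If anyone wants the CLI version, tagging a port group comes down to a couple of commands; the port group name and VLAN ID here are made up, and the same thing can be done in the vSphere Client:

esxcfg-vswitch -A "VM VLAN 100" vSwitch0         # create the port group
esxcfg-vswitch -p "VM VLAN 100" -v 100 vSwitch0  # set its VLAN ID (virtual switch tagging)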

Happy to help.
 
NetJunkie,

I'm in the process of setting up something similar to yours. From the whitepapers, VMware seems to want you to dedicate NICs to a lot of different functions, requiring a large number of NICs. I will have a single onboard NIC for management and a dual-port Intel NIC. I was thinking one to attach to my HP VSA iSCSI SAN and one for heartbeat/vMotion, etc.?

Does that match your thoughts on using 3 NICs?

Thanks!
 