Home ESXi Lab: Buy old Dual Xeon Server or build New AMD FX

I am torn between looking on eBay for an old server with dual Xeons, or building my own FX-8350 platform. I am going to be running five or so VMs: one storage, one DC, one Sophos home firewall, and a few test servers. I have about $1,000 to spend, and when I design my own build and price it out it comes to about $950 for an FX-8320E, (2) 250GB Samsung 850 EVOs, (2) 4TB Toshiba HDDs, a GIGABYTE GA-990FXA-UD5 motherboard, case, and power supply. (I already have 24GB of RAM and a 4-port ESXi-compatible NIC.)

This will be up and running in my office 24/7, and I am looking for something quiet compared to an old server chassis. My only concern is: will the performance of the FX-8320E be as good as or better than, say, a pair of Xeon X5650s? I don't want to sacrifice performance for a "fancy" build, especially because I can probably get an old server with dual Xeons for less than my build.

So I am trying to decide, do I spend the money on an old server and add hard drives, or do I go with a new build? And would a new build be capable of running my VMs and playing games if I add a video card? I have seen mention of this but am not sure how well this works.

Any input is greatly appreciated. I have been doing my research but am still 50/50 on this decision, and I am getting anxious to make a purchase.
 
Don't plan to play video games on the ESXi box.

How are you doing the "storage VM"? Passing the 4TB drives through to that VM?

An older server will most likely be loud for an office. The CPU won't improve the VMs as much as faster disk and memory will.
 
Yeah, I saw someone run VMware Workstation and virtualize his hosts, and then still use the original machine for gaming... thought it was interesting.

The plan is to utilize the 4TB for file storage mostly, and for non critical VMs, then utilize the SSDs for VM OS drives. I would think with SSDs I would be fine for the important stuff.

Thanks for the reply.
 
Buy an old Dell or HP workstation with dual Xeons; they seem to go under the radar and are more moderately priced, despite being virtually the same thing.
 
That is exactly what I am leaning towards... a Z800 or similar. I just want to make sure a new FX-8350 won't outperform one of these while also being quieter and more energy efficient.

Does anyone have any sort of comparison between the Xeons and an FX-8350? To me this seems like a no-brainer to go with the Xeons, but some people are absolutely in love with these new FX procs for virtualization... I just want to make sure.
 
http://cpuboss.com/cpus/Intel-Xeon-X5650-vs-AMD-FX-8350

Looks like the X5650 and the FX-8350 are pretty similar, but when you have two X5650s... you'll blow the FX-8350 out of the water.

I've been using two L5640s with ESXi, and absolutely love the performance. 24 threads? Win.
You can also use more memory with the Xeons. Most of the dual-socket LGA 1366 boards have 6 memory slots per socket, and eBay has some great deals on ECC DDR3.
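As a back-of-the-envelope sketch of the capacity gap: the thread counts come from the CPU specs, while the 16GB-per-DIMM figure is an assumption about the kind of cheap DDR3 ECC RDIMMs being discussed, not something from the thread.

```python
# Rough capacity comparison: dual Xeon X5650 vs. a single FX-8350.
# Thread counts are per the CPU specs; the DIMM size is an assumed
# 16GB DDR3 ECC RDIMM, a common inexpensive part for LGA 1366 boards.

def total_threads(sockets, cores_per_socket, threads_per_core):
    return sockets * cores_per_socket * threads_per_core

# Xeon X5650: 6 cores with Hyper-Threading (2 threads/core), two sockets
xeon_threads = total_threads(sockets=2, cores_per_socket=6, threads_per_core=2)

# FX-8350: 8 integer cores, no SMT, single socket
fx_threads = total_threads(sockets=1, cores_per_socket=8, threads_per_core=1)

# Memory ceiling: 6 slots per socket x 2 sockets x 16GB DIMMs
xeon_max_ram_gb = 6 * 2 * 16

print(xeon_threads)     # 24
print(fx_threads)       # 8
print(xeon_max_ram_gb)  # 192
```

Even before clock speeds enter the picture, the dual-socket route triples the thread count and leaves far more headroom for RAM, which is usually the first wall a lab host hits.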
 
I actually have the exact motherboard you are suggesting in my ESXi 6.0 rig, paired with an FX-8350 CPU.

I currently run pfSense, FreeNAS 9.3, Ubuntu 14.04 LTS server, and a Windows VM (plus the vCenter appliance) and have no performance issues with it.

The only annoyances are:

- the onboard Realtek NIC, which ESXi doesn't like much, but you already covered that with the 4-port NIC
- the SATA ports do not work in passthrough (the ports can be seen but the drives attached to them cannot), so if you build a VM NAS you should buy a controller like an M1015 and pass it through to the NAS VM

I can't judge the performance against the dual Xeons, but so far I have been reasonably happy. Then again, I'm not running anything crazy either; the Windows VM is there to record HD video from my cable box, and that's the biggest CPU draw. YMMV.
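For reference, finding the HBA and enabling passthrough on an ESXi 6.0-era host goes roughly like this. This is a sketch, not verified on this exact board; the M1015 typically presents as an LSI SAS2008 controller, and the exact UI path may differ slightly between client versions.

```shell
# From the ESXi shell (SSH), list PCI devices and look for the HBA.
# An M1015 typically appears as an LSI SAS2008-based controller.
esxcli hardware pci list | grep -i "LSI"

# Passthrough itself is enabled in the vSphere Client UI:
#   Host > Configuration > Advanced Settings (DirectPath I/O)
#   > Configure Passthrough > check the SAS2008 device > reboot the host.
# After rebooting, edit the NAS VM's settings and add the controller
# as a PCI device; the drives on it then belong to that VM directly.
```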
 
You can also use more memory with the Xeons. Most of the dual socket boards for 1366 have 6 memory slots per socket, and ebay has some great deals on ECC DDR3.

Some of them, like the Dell R710, have 9 slots per socket and can take up to 288GB of RAM... :-D

Dual Xeons will blow away the AMD setup.
 
Wait for the new line of Xeon D-1500s to come out. I've got the Xeon D-1540 and I run 8-10 VMs off it, including some CPU-intensive ones (Plex with lots of transcodes, for example). The D-1540 SoC boards are expensive right now ($500+), but a bunch more are coming out shortly that will undoubtedly have lower prices.

This is the board I've got.
 
Wait for the new line of Xeon D-1500's to come out. [...] This is the board I've got.

That is a sweet board! It definitely looks like a home virt lab dream, but my budget is $900-1,000. I will have to look at something like that in a couple of years, as there is no way I can fit that system plus my storage and accessories (I would need a case and power supply as well) into my budget.
 
absolutely love the performance. 24 threads? win.

I don't understand all the love for the number of cores in a virtualized lab; I have a setup similar to what the OP aims for, and the main concern is memory, not the CPU (and of course power use in 24/7 operation).

i7-4765T (due to its 35W TDP; at that time an E3-1230L v3 was impossible to find), 32GB, 2x Samsung SSDs and a 4TB hybrid drive, and a PSU with a Gold cert.

DC, RDS, Exchange, storage, web server, Sophos UTM, MS SQL, and also Plex Media Server, and the CPU doesn't even get warm. Plex adds about 20% when transcoding for one device, and the IPS is also a hog when my 225/10 line is at full use, but the number of cores does not mean much; it's a GHz thing. I might also go for passive cooling, but those Dynatrons are rather expensive.

If I was buying now I would also get the new Xeon D, though.
 
Depends on how much you are going to virtualize and how much usage it gets. On a Pentium G620 (dual-core Sandy Bridge) w/24GB RAM I run:
  • 2x VPN Servers
  • A FTP Server
  • A Killing Floor Dedicated Server
  • Ubiquiti UniFi Controller Server
  • FreePBX
  • 2x Bind DNS Servers
  • NewzNab Usenet indexing w/LAMP stack
  • ZoneMinder Video Recording
  • Squid Caching Proxy
  • Sickbeard/SABNZBD/CouchPotato/Network Shares

I have a spare Windows Server 2012 licence, and I've been thinking about adding in a domain controller as well as a Plex server. Since this is in a home environment, even with all of this stuff running I rarely see CPU usage top 50%.

I went this route over a "real" server for power reasons. It pulls ~33 watts idle and ~58 watts fully loaded. A real server pulling 200W (low side) would cost me $21.60/month to run or $259/year. My low power build costs me about $4.32/month or $51.84/year. The difference is ~$208/year in power savings. Considering I don't need that much processing power, this route was more economical power and equipment wise.
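The arithmetic above checks out; here is a minimal sketch of it, assuming the ~$0.15/kWh rate implied by the quoted figures (the rate itself isn't stated in the post), and treating the low-power build as a ~40W average between its 33W idle and 58W loaded draw.

```python
# Monthly and yearly electricity cost for a host running 24/7.
# The $0.15/kWh rate is an assumption that reproduces the poster's
# numbers; plug in your own utility rate for a real comparison.

HOURS_PER_MONTH = 24 * 30
RATE_PER_KWH = 0.15  # assumed USD per kWh

def monthly_cost(watts):
    kwh = watts * HOURS_PER_MONTH / 1000
    return kwh * RATE_PER_KWH

server = monthly_cost(200)    # "real" server, low side
low_power = monthly_cost(40)  # ~33W idle to ~58W loaded, call it 40W average

print(round(server, 2))                     # 21.6
print(round(low_power, 2))                  # 4.32
print(round((server - low_power) * 12, 2))  # 207.36 per year saved
```

At that rate, every sustained watt costs a little over a dollar a year, which is why shaving 160W off the idle draw dominates the economics of a 24/7 box.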

 
This is turning into a great thread for people to realize the potential and power that a budget low-power system can provide. I had no idea people were able to do so much with such efficient systems!

I am still favoring a workstation build, but it is cool to see people running what I am looking for on systems less powerful than this.

If anyone has more input I would love to hear it!
 
I know more than a couple of people using i7 or i5 NUCs for their VM labs. The only real downside is the 16GB* RAM maximum. The bigger ones can take an mSATA or M.2 SSD (depending on the model) and a regular 2.5" drive, so plenty of storage for a lab. They only use a few watts at idle, they're tiny, and they make very little noise, so they're perfect for just throwing on your desk. They're also great for client demos, since you're not having to rely on your own laptop to run all the VMs.

* Note that you can supposedly run 32GB in the newer Broadwell NUCs using the 16GB DIMMs from these guys ($325 each, ouch!):
https://squareup.com/market/MemphisElectronicRetail
It's not officially supported by Intel, but I've run across a few articles showing they work just fine.
http://www.pcworld.com/article/2894...your-laptop-or-nuc-you-can-finally-do-it.html
http://www.anandtech.com/show/7742/im-intelligent-memory-to-release-16gb-unregistered-ddr3-modules


Also, it's depressing to see people talking about dual X5650s for their home labs, when that's what we're still running in our production ESXi host at work (an old R510). :) It's been a great workhorse, and those chips are still massive overkill even today with the 14 VMs we have running. The only time the CPU usage gets above about 5% is when our nightly compiles run and bump it up to 40%. I hit RAM and local-storage limitations well before the CPUs even get warm.
 
NUCs are far too expensive and hardware-limited for my taste, although running them in a cluster is an interesting idea.
 
This is turning into a great thread for people to realize the potential and power that a budget low power system can provide. [...]

Just remember that all of those things listed require very little in the way of resources. Zoneminder is probably the most taxing out of the bunch. The type of workload dictates a lot.
 