Anyone run AMD APUs in ESX hosts?

lopoetve

Extremely [H]
Joined
Oct 11, 2001
Messages
33,890
Expanding the home lab - got a tight budget for this load up, and I'm being tempted by AMD APU setups. I don't need a ~ton~ of CPU power for anything (I'm not running production - some web servers, some VDI desktops, a couple of small databases), but I don't have any hands on experience with the APU (specifically, the latest A10 4-core models) to know how much horsepower they actually have.

I can build nodes with the A10-7850 and 16G of ram for about $300 a pop (including 3 compatible network interfaces), which is about $200 cheaper than any Intel setup I've come up with so far (especially since I don't need storage), but I don't want to buy into something that is going to be dog slow. Anyone tried these so far? I also looked at the ThinkServer everyone is doing, but when you add in 16G of ECC to those, they're a good bit pricier than the APU setup would be.
 
Not sure what the Intel ARK equivalent is for AMD, but I couldn't find anything that says this has AMD's version of the VT extensions.
 
The latest do have AMD-V, although not AMD-Vi, their equivalent of VT-d (but I don't need that - I've got storage ;))
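If you want to verify AMD-V before committing to a board, booting any Linux live image and checking `/proc/cpuinfo` for the `svm` flag is a quick sanity test. A minimal sketch (the helper name and sample parsing are mine, not from any vendor tool):

```python
# Quick pre-purchase check: look for hardware virtualization flags in
# /proc/cpuinfo text ("svm" = AMD-V, "vmx" = Intel VT-x).

def virt_flags(cpuinfo_text: str) -> set:
    """Return the hardware-virtualization flags found in cpuinfo text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags & {"svm", "vmx"}

# On a live host you'd feed it the real file:
#   virt_flags(open("/proc/cpuinfo").read())
```

An empty result means ESXi won't run 64-bit guests on that chip, so it's worth the thirty seconds before ordering.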
 
I would double-check whether the Kaveri architecture supports ECC memory, if that's what you're planning to run; I'm pretty sure only the earlier APU architectures support it. Other than that, I'd assume these are on par with at least first-gen i7s, if not better.
 
I'm kind of in the same boat - looking to revamp my home lab. I've been running these Dell C6100/C6005 setups for awhile and while they're neat, they're just not what I need at all.
I've looked at getting some UCS stuff in, but talk about spendy, youch! Even second hand, no way I can swing it.
So I'm looking at white box and other options. I too just need compute - no VT-d or local storage. I want at least 3 hosts. I have boxes of ECC DDR3 (something like 60 x 8GB sticks) that I can use, so I don't mind going with a box that needs ECC. From the little I've read on the ThinkServer, they are interesting but they'll take up a ton of room.
 

Don't need ECC; that was for the ThinkServers that require it.
 

Yeah, that's the other thing - the APU setups are mATX boards and tiny.
 
Still a huge fan of the Supermicro mATX boards due to low cost, small footprint, 4 DIMM slots, IPMI (if the board has it), low power usage, and a wide array of processor support (from Celeron to Xeon, sans i3, i5, i7). The slight downside is that the boards require unbuffered ECC RAM.
The ASRock mITX server boards can use regular RAM.

(I have 3x X9SCM hosts I'm revamping soon. ;) )
 

Yeah, but I can build 2 APU servers for each SM Intel box I build. While IPMI certainly has value, that's a huge difference in cost, and $300 a server buys one hell of an IPKVM system.

Hell, the board and CPU alone cost more than the entire APU setup, and that's before you add PSU/RAM/case/etc.
 
Yeah. IPMI is nice to have, but I don't need it for my home lab. Heck, it doesn't even work on one of my nodes in my C6100 for some reason :D.
I may just whitebox it with these APUs as well after all.

I may try to sell all of this RAM I have... anyone need/want registered ECC DDR3? :D (/hijack)
 
I don't disagree. For me it was about the RAM slot count and small profile at the time.
 
True, but with mine you get expansion slots and VT-d, which you say you don't need now but might down the road, and the cores would be stronger, methinks... Basically, you'd have to upgrade with a new board; mine you can upgrade with a new proc.
 

Not ever gonna be a VT-D guy - I've got other solutions for that, and they work very well ;)

I honestly don't need that much power - I've been looking at the Avoton Atoms as well, but same issue there with ECC RAM ($600 a node in the end, although they save a TON on power). I'm not doing anything that needs much CPU (all my transcoding is run on a physical FreeNAS box), just running web servers and small databases.

As long as the APU doesn't perform like total junk (sounds like they're on par with the first-gen Nehalems) it'll do the job, and do it cheaply. RAM is far more critical than CPU cycles, as my current PE840s are barely sweating and they're the old Core2 quads, but capped at 8G a server (and they put out heat like fiends, plus suck power).

All I really need is nodes with 16G+ of RAM, a single expansion slot for a dual-port NIC, and a USB port for a boot device :) That'll get me by for now.
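Since RAM, not CPU, is the constraint here, rough per-node VM density is just arithmetic. A back-of-the-envelope sketch (the 2 GB-per-VM size and the hypervisor reserve are my own illustrative assumptions, not figures from the thread):

```python
def vms_per_node(node_ram_gb: float, vm_ram_gb: float,
                 hypervisor_reserve_gb: float = 2.0) -> int:
    """How many VMs fit in a node's RAM, ignoring CPU and memory overcommit."""
    usable = node_ram_gb - hypervisor_reserve_gb
    return max(0, int(usable // vm_ram_gb))

# A 16 GB APU node vs. an 8 GB-capped PE840, at 2 GB per VM:
# vms_per_node(16, 2) -> 7, vms_per_node(8, 2) -> 3
```

Even with conservative numbers, doubling RAM per node roughly doubles VM count, which is why the 16G builds pencil out regardless of CPU class.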
 
You could go Intel NUC, but you would have to mod it to use the Mini-PCI-E slot for an extra NIC like Nicolas Farmer did. Use an mSATA and a 2.5" drive and go VSAN...
 

I know too much about VSAN - I'm not running it on AHCI (see my posts in his thread). :) I did think about the NUCs though - his solution is creative for certain.

Like I said, these don't need storage. In fact, if I could save money by getting boards that don't even have SATA connectors, I'd do so. All they need is CPU, memory, and a bit of PCI-E bus space for a dual-port NIC, and that's it :)

I have an external storage device that these will be talking to.
 
Honestly, I would just get one APU board; if it doesn't perform well, just return it and work out the next idea... Avoton or Xeon depending on what you need...

I have done that in the past; usually you'll find out by day 2 or 3 if it's going to be good enough... The memory should carry over across boards, so no need to return/rebuy... Just make sure they have a good return policy... like AMZ or whatever.

If it becomes the next best thing, blog it; if not, blog it!! Save people the headache down the road... be the guinea pig for testing!!! I do it all the time, just never have the time to blog about it, which I really need to do.
 
You know, that's a really good point :p

At the very least, it'd do a good job running VC and the supporting software :)
 
Just go to Microcenter and get an AMD FX-6300 and Gigabyte GA-78LMT-USB3 combo for $89.99 or a FX-8320E and Gigabyte GA-78LMT-USB3 combo for $99.99.

http://www.microcenter.com/site/products/amd_bundles.aspx

Motherboard is mATX, supports 32GB of RAM (4x8GB), and has PCI-E x16 and x1 slots. I've used it in my lab for years. Just make sure you disable USB3 in the BIOS. You may also see a CPU error on the host because ESXi can't read a sensor; just ignore it.

$100 for CPU and mobo
$200 for 32GB RAM

My systems would idle around 70W with 6-10 VMs running on each. Power management set to balanced.
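That 70 W idle figure translates directly into running cost, which matters when comparing these builds against the Avoton's power savings. A quick sketch (the $0.12/kWh electricity rate is an assumed example, not from the post):

```python
def annual_power_cost(idle_watts: float, rate_per_kwh: float = 0.12) -> tuple:
    """Annual kWh consumed and cost for a host idling 24/7 at the given draw."""
    kwh = idle_watts * 24 * 365 / 1000.0
    return kwh, kwh * rate_per_kwh

# annual_power_cost(70) -> (613.2 kWh, about $73.58/year at $0.12/kWh)
```

Multiply by the node count and a few years of runtime, and the power delta between platforms can rival the up-front hardware savings.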
 
Bloody hell.
/me salutes.

That'll solve my problem even better. :)
 
The microcenter deal with the 8320e and mobo for ~$100 is pretty sweet. I got one for a VM target.
 
The only thing you won't get is ECC, but then the question becomes whether it matters, and that's a question you need to ask yourself coming from VMware ;) ...

Scratch that, it does take unbuffered ECC, but still awesome!! Good catch, C-o-W!
 
General question on ECC RAM. Obviously, the vast majority of consumer boards do not support ECC or registered RAM. What would happen if you dropped a registered ECC DIMM in, though? Would it not work at all, or would you just not get the ECC benefits?
I'd like to get a cheaper and quieter setup, using as much "stuff" as I have laying around.

Sorry for the hijack, lopoetve.
 
Depends on the board. I've had a couple that would take either without really complaining, and a couple that promptly refused to POST. Most boards were in the "no post" category, the ones that did work were high-end consumer boards with somewhat unique chipsets (the early AMD 760 DDR chipset or the like).
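A low-risk way to see what a board actually reports before swapping DIMMs is running `dmidecode --type memory` on a live Linux install; its "Error Correction Type" field shows whether ECC is in effect. A small parser sketch (the helper name is mine; the sample follows dmidecode's usual `Key: Value` layout):

```python
def error_correction_type(dmidecode_output: str) -> str:
    """Extract the 'Error Correction Type' field from `dmidecode --type memory` output."""
    for line in dmidecode_output.splitlines():
        line = line.strip()
        if line.startswith("Error Correction Type:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

# On a live host (needs root for dmidecode):
#   error_correction_type(subprocess.run(["dmidecode", "--type", "memory"],
#                         capture_output=True, text=True).stdout)
```

"None" there means the board is running the DIMMs without ECC even if they physically fit, which answers the "would you just not get the ECC benefits" half of the question.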
 

Wouldn't work. System won't boot. Tried it. :D
 

Amen.
I love reading up on what everyone else uses in their builds, and the caveats.

I tried to come up with something to meet the requirements but found nothing better than what's already been mentioned.

Hard to beat the AMD cost.
 
I haven't tried ESXi on my Athlon 5350s yet. They do support nested KVM virtualization, though, which made me pretty happy. At some point I'll actually get around to building up the cluster I have planned: 3 nodes + shared storage for a Pacemaker cluster. Still need a quad-port NIC and some sort of network-controlled PDU.
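On a KVM host, nested support shows up as the `nested` parameter of the `kvm_amd` module (`/sys/module/kvm_amd/parameters/nested`), which reads `1` or `Y` when enabled. A tiny sketch for interpreting that flag (the helper name is mine):

```python
def nested_enabled(param_value: str) -> bool:
    """Interpret /sys/module/kvm_amd/parameters/nested ('1' or 'Y' means enabled)."""
    return param_value.strip() in {"1", "Y", "y"}

# Usage on a live AMD host:
#   nested_enabled(open("/sys/module/kvm_amd/parameters/nested").read())
```

The Intel equivalent lives at `/sys/module/kvm_intel/parameters/nested`, so the same check works either way.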
 
Because we know 5.0, and study material is widely available: forums, blogs, documentation.
I'd rather stick with what I know works, and only upgrade to something new (which will probably break the one thing that I use most).

That said..... that Xeon looks sweet. *drool*
 