CPU Mining Builds (My adventure with Epycs)

Andrew_Carr

I thought I'd share my results from exploring sketchy eBay & Alibaba listings in search of higher CPU hashrates in RandomX at a low cost. (These results are basically the same for Raptoreum, except you can save money by using slower memory / fewer channels.) I basically narrowed things down to a few viable routes:

1.) Single CPU Consumer Builds (AMD Ryzen 3xxx series) - Generally 3900x CPUs are the best in this area. I buy cheap RAM between 3200MHz - 3600MHz in bulk whenever possible ($20-40 / stick).
3950x CPUs are usually too expensive to justify, since you're getting a roughly 30% performance increase for something like a 50% price increase ($300 vs $500ish). Additionally, you'll need to buy an aftermarket cooler for the 3950x instead of just using the stock Wraith Prism that works fine on the 3900x. If you go lower than the 3900x, the fixed costs of the motherboard, case, PSU, etc. eat up too much of the budget.
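To put rough numbers on that, here's a quick back-of-the-envelope sketch. The prices and the ~30% scaling are just this post's ballpark figures, not benchmark data:

```python
# Rough $-per-hashrate comparison using this post's ballpark numbers:
# ~$300 for a 3900x vs ~$500 for a 3950x, with the 3950x roughly 30% faster.
cpus = {
    "3900x": {"price": 300, "relative_hashrate": 1.0},
    "3950x": {"price": 500, "relative_hashrate": 1.3},
}

for name, cpu in cpus.items():
    dollars_per_unit = cpu["price"] / cpu["relative_hashrate"]
    print(f"{name}: ${dollars_per_unit:.0f} per unit of hashrate")

# 3900x: $300/unit, 3950x: ~$385/unit. The 3900x wins on CPU cost alone,
# before even counting the aftermarket cooler the 3950x needs.
```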

On all builds you'll need to tweak memory settings to tighten your timings, and this can be time-consuming. Because of this, I would standardize on a single type of memory and motherboard so you can save time and (hopefully) reuse timings as much as possible.

----Server Builds----

2.) Single CPU Server Builds - The CPUs for these are generally the P-suffix Epyc 7001/7002 series; because they can only be used in a single-socket configuration, they're usually cheaper for the same core count than a dual-socket capable CPU. I've had good luck with the 7551P. The lesson learned here, though, is that despite the cheaper motherboards and cheaper CPUs, you actually end up spending more for the same hashrate because you're buying double the number of PSUs, cases, motherboards, etc. It's not a massive difference, so if you find really good prices on P CPUs it can still be worthwhile.

3.) Dual CPU Server Builds - These generally aren't much more complicated to set up than a single CPU build and are a better investment. I jump to 4U cases at this point though, because of the larger coolers and motherboards, and sometimes put in an oversized ATX PSU so I have the option of running a few GPUs as well. I haven't had good luck running the 1U Dynatron coolers on my 7601 CPUs for whatever reason (could be user error), but temps with their larger A26 or A38 heatsinks are great.

Another thing I didn't expect is that 64 core options are also generally the most cost-effective. Despite costing 4x as much per CPU, you're again saving a lot of money on fewer motherboards and DIMMs. Power efficiency should also be better here, and the space savings add up. I've had a hard time finding good deals on 64 core CPUs though; while the 32 core options are plentiful and keep dropping in price, the 64 core CPUs have held steady for a while. So if the 64 core Epycs drop to around $1k each as I hope they will, I think these would be the best option by far (this is for QS/ES samples that will require tweaking; retail versions are much more expensive).
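Here's a minimal sketch of the amortization math behind that. The fixed-cost figure and all CPU prices are placeholder assumptions for illustration; the ~15 kh/s per 32 core chip is from my builds below, while the 64 core hashrate is a guess for Zen 2, so check xmrig's benchmark tables before buying anything:

```python
def cost_per_khs(cpu_price, num_cpus, khs_per_cpu, fixed_cost):
    """Total build cost divided by total hashrate (in $ per kh/s)."""
    return (cpu_price * num_cpus + fixed_cost) / (khs_per_cpu * num_cpus)

FIXED = 400  # assumed: motherboard + case + PSU + cheap DIMMs, per build

# ~15 kh/s per 32 core Naples is from this thread; 40 kh/s for a 64 core
# Zen 2 chip is a hypothetical placeholder.
print(cost_per_khs(cpu_price=250, num_cpus=1, khs_per_cpu=15, fixed_cost=FIXED))   # 1 x 32c P-type
print(cost_per_khs(cpu_price=300, num_cpus=2, khs_per_cpu=15, fixed_cost=FIXED))   # 2 x 32c
print(cost_per_khs(cpu_price=1000, num_cpus=2, khs_per_cpu=40, fixed_cost=FIXED))  # 2 x 64c @ $1k
```

The exact numbers matter less than the mechanism: the fixed per-build costs get split across more hashrate as the CPUs get denser.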


General server notes:
While you don't have to mess with memory timings here, server parts are more finicky in my experience. As an example, I installed some of my CPUs with too little mounting pressure and had instability that took a while to troubleshoot. Memory compatibility is also something to be more aware of, although I haven't had any issues yet (even when mixing different speeds, sizes, etc., it all clocked down to the slowest stick as expected). I've generally targeted older Epyc CPUs and QS/ES samples. With ZenStates and the Rome overclocking tool from the ServeTheHome forums you can overclock these. Make sure you buy CPUs that aren't vendor locked, or if you come across a cheap Dell motherboard, snatch it up and buy the cheap vendor-locked CPUs. The newer Epyc CPUs blow fuses when installed in certain vendors' motherboards, so there's no reverting that.

I've tried 2133MHz through 2666MHz memory in my Epyc builds and the performance loss is minimal, unlike with the Ryzen machines (2-3kh/s loss on a 31kh/s machine). So you can probably save money and buy 2133MHz memory @ $8/stick instead of the $20 or so per stick for 2666MHz sticks. And for a lot of other algorithms, that high memory speed matters even less. Generally the number of memory channels seriously impacts performance though, so you want enough DIMMs to populate every slot; go for low density 4-8GB DIMMs to save money. While Ryzen CPUs are limited to 2 memory channels, Epyc CPUs have 8 per socket, and this is a major difference. Also, while I spend about 50% more on memory for Epyc builds, it's very easy to spend a ton more on memory if you buy higher density sticks, so be cautious of that. You also want to go for 32 core CPUs and higher for the most part; the high cost of motherboards and memory makes it cost-prohibitive to use the lower end parts even though those CPU prices are dirt cheap.
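A quick sanity check on the cheap-memory trade, straight from the numbers above (16 slots on a dual socket board, $8 vs ~$20 per stick, 2-3kh/s lost on a ~31kh/s machine):

```python
SLOTS = 16                       # dual socket Epyc, all 8 channels per CPU populated
savings = SLOTS * (20 - 8)       # $192 saved per build with 2133MHz sticks
hashrate_loss = 2.5 / 31 * 100   # midpoint of the 2-3 kh/s loss, roughly 8%

print(f"Save ~${savings} per build, give up ~{hashrate_loss:.0f}% hashrate")
# Whether that trade wins depends on your payback window: ~60% less spent on
# memory for ~8% less revenue usually favors the cheap sticks on RandomX.
```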

Another major benefit to server motherboards is that you get IPMI for remote management. This lets you hard-reboot the machine, view POST errors, etc. via a separate on-board BMC processor. This alone makes me want to switch everything over to server motherboards, and it should also save you ~$20/computer versus buying smart plugs or something.
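As a minimal sketch of what that looks like in practice, assuming ipmitool is installed and the BMC is reachable on your LAN (the IP address and the ADMIN/ADMIN credentials below are placeholders for whatever your board is set to):

```python
import subprocess

def ipmi(host, user, password, *args):
    """Run an ipmitool command against a remote BMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Hard-reboot a hung miner without walking over to it or buying a smart plug:
# print(ipmi("192.168.1.50", "ADMIN", "ADMIN", "chassis", "power", "cycle"))
# Or just check whether it's powered on:
# print(ipmi("192.168.1.50", "ADMIN", "ADMIN", "chassis", "power", "status"))
```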


TLDR Numbers
The hashrate figures are actual hashrates for the 32 core builds (31khash on 2 x 32 core 7601 QS when using 2666MHz memory, about 28khash with slow memory). I get between 14-16khash/s per 32 core Epyc (7551P and 7601 ES perform very similarly). I still need to tweak my 3900x builds, but I put in 15khash for now as a conservative figure for an ideal build that doesn't require expensive memory. Once I start overclocking the Epycs I expect to get closer to 18-19khash per CPU (https://xmrig.com/benchmark?cpu=AMD+EPYC+7601+32-Core+Processor), which puts the dual CPU 32 core Epyc builds very close to the Ryzens in terms of performance per dollar. On performance per watt, I don't think they'd pull ahead until I start using Zen 2 64 core CPUs. Also, I did end up with two duds among the 7601 QS samples that I bought. They're locked to 1.9GHz and only hash at half what the others do (the others are 1.4GHz QS samples and boost to 2.5GHz without any tweaking), so these will require overclocking to fix.

 
Nice! I like the cheap Supermicro PSUs. I did most of my mining on PWS-561-1H20s (560W) and they have been spectacularly reliable.

I really should get the 3900x in my desktop mining. I was looking to go with an SP3 setup but couldn't find a cheap motherboard.

Just for comparison: my Phi 7210s are getting 40MH/s each on VRSC at ~200W ($34 a month before electric). I think that still has most Monero profits beat, although with the emphasis on shitcoin mining.

Honestly with the slim margins on CPU mining my gear would be running on a piece of cardboard on a wire rack. No case bs lol
 
Great info. Any power measurements on them at the wall? The only Ryzens even worth mining with (at residential electric rates) are the 7nm ones, and I have a feeling those Epycs aren't nearly that efficient. With Zen/Zen+ you really need to scout for shitcoins, as they're rarely profitable without cheap/free electric.
[Update: Varies between 206W - 213W at the wall on my 850W Gold PSU at 120V on the 7551P build at 16khash/s. Since I'm only loading the PSU at ~20% and this isn't on 220V, I think once I swap in a 500W PSU and put it back on 220V it'll drop by 10W or so. Still terrible profitability though, lol]
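For reference, the efficiency math on that measurement (the electricity rate is an assumed placeholder; plug in your own):

```python
WATTS = 210                  # measured at the wall, from the update above
HASHRATE_HS = 16_000         # ~16 kh/s on the 7551P build
RATE_USD_PER_KWH = 0.12      # assumed residential rate

print(f"{HASHRATE_HS / WATTS:.0f} h/s per watt")                       # ~76 h/s/W
print(f"${WATTS / 1000 * 24 * RATE_USD_PER_KWH:.2f}/day electricity")  # ~$0.60/day
```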

Sometimes I come across deals like this: https://www.ebay.com/itm/294590896959?hash=item4496fc5b3f:g:9UMAAOSwNFthpPVF
(The cheapest compatible PSU I can find for these is $300 used though, which kinda kills the deal, and I'm not sure if it'll work with a CPU only in the 2nd socket or if it has to go in the 1st, damaged socket.)
Zedicus sold me a similar motherboard with bent pins; after fixing some of them, I'm able to get 14.16kh/s out of it even with only about half of the DIMM slots filled, which is only 2kh/s lower than my fully functional board with the same CPU.

And yeah, most of my CPU builds are running straight on the wire racks in a giant enclosure. But I'm trying to improve cooling efficiency in my setup and I'm going to move eventually, so I'd like to get everything into server cases for airflow and portability.
 
Ok, finished my first 1U test system with a 3900x and it was relatively painless. I used the Dynatron A18, which is only rated for 95W TDP CPUs, so I set the PBO limit to 95W to lower temps (I typically set low PBO limits for better mining efficiency anyway). Temps in a room with 85F ambient were 83C on a bare metal rack and pegged at 95C in the 1U case, so the cooler is definitely marginal (that's 12C higher than an adjacent build with a Wraith Prism, even prior to installing it in the 1U case). Airflow in the case is pretty awful and routing the wires is weird due to some metal dividers, so I had to jury-rig it with a narrow PSU. The single fan blows straight into the memory sticks, and the CPU cooler fan is directed at the side of the case instead of out the back. Unfortunately, unless I try a much more expensive motherboard like the ASRock X470D4U, I don't think I'll find something with a different orientation of the memory and CPU socket. I might drill some holes into the metal divider so I can mount small case fans, and maybe that'll help.

Also, memory clearance isn't great: even with relatively short heat spreaders, the sticks still contact the lid of the case. The lid still goes on and everything, but it's not ideal.
 

Attachment: DSC_0671.JPG
Yeah, I need to adjust the PBO limits on the CPU. I run a lot of them more around 80-90W for that reason, but this one is currently running a bit higher than normal; I wanted to see how capable the cooler was. Just picked up a couple of full server racks for $75 each, about the same price as a single 1U case, lol. The cheapest I've found 1U cases for is about $40 locally, but it looks like in bulk I'd spend about $70 each new. I might try the ones that include PSUs next time, since that would save me time and ends up costing about the same as buying both separately. I just don't like using off-brand PSUs in anything though, so I was kinda leery.
 
Andrew_Carr, nice setup (y). I find Naples much easier to overclock than Rome ES/QS (via the EDC bug), but Rome certainly is more efficient. I can rock 3.2 to 3.3GHz on all cores all day on DC, with a 32 core system pulling 220W to 240W at the wall depending on the projects. Naples chips are so cheap nowadays, but the motherboard itself probably costs more than the CPU.

Speaking of DC, you might want to consider supporting the [H] team in the upcoming annual BOINC Pentathlon race? ;)
 
Ok, I'll take a look into it.
 
It seems like a nice setup to me, but only you can judge how it works in practice. I'd love to hear about your experience with it.
 