F@H 3770k with 3 or 6 GPUs?

Flybye

Limp Gawd
Joined
Jun 29, 2006
Messages
374
Hi all. How CPU-bound is folding these days? I've been thinking of using my EVGA Z68 FTW 1155 board, since it has 6 PCIe x16 slots, and putting maybe 6 slim GPUs on it, or perhaps just 3 regular-sized GPUs. But the fastest CPU the board takes is my 3770k. Will the 3770k bog down 3 or 6 GPUs folding?
 
Yeah, that I'm aware of. You made me think about it a bit more this morning, and I came across this nifty thread here:

https://hardforum.com/threads/pcie-speed-folding-performance.1902126/

Seems like there isn't too much of a difference between PCIe 2.0 and PCIe 3.0. One of my slots will run at PCIe 2.0 x16, but with 5 populated they will run at PCIe 2.0 x8. Of course, it really wouldn't be worth it to run this board again with only 1 card in it. And that thread did mention running a PCIe 3.0 x16 card on a PCIe 2.0 x8 slot, I believe.
 
It will probably depend on the cards you put in it as well. Newer gens probably won't be fed well enough by the CPU or PCIe 2.0 lane speeds.
 
Yeah, that I'm aware of. You made me think about it a bit more this morning, and I came across this nifty thread here:

https://hardforum.com/threads/pcie-speed-folding-performance.1902126/

Seems like there isn't too much of a difference between PCIe 2.0 and PCIe 3.0. One of my slots will run at PCIe 2.0 x16, but with 5 populated they will run at PCIe 2.0 x8. Of course, it really wouldn't be worth it to run this board again with only 1 card in it. And that thread did mention running a PCIe 3.0 x16 card on a PCIe 2.0 x8 slot, I believe.
Also, what board is it? I had a 3930k die on me a year ago that was being used for my Plex server. If you decide to not fire it back up, I might be interested...
 
Oh, it's just a socket 1155 with a 3770k. I'm sure I'll get it up and running. I used to have a few rigs folding, and I really believe in the cause.
 
Oh, it's just a socket 1155 with a 3770k. I'm sure I'll get it up and running. I used to have a few rigs folding, and I really believe in the cause.
Hmm... not sure why, but I was thinking that was a socket 2011 board; you are right, it's 1155. I would need a 2011 v1/v2 board to get my 3930k back up. Oh well... back to looking for can't-pass-up high-core-count deals... lol.
 
3770K has 16 PCI-e lanes.

I would think 2 GPUs (x8) is the maximum number of GPUs you can have before the system bogs down.
 
Last edited:
3770K has 16 PCI-e lanes.

I would think 2 GPUs (x8) is the maximum number of GPUs you can have before the system bogs down.
Right, but the MB he said he was going to use also has an NF200 chip, which is why it has extra PCIe lanes.
 
Originally I was thinking of starting out with a bunch of 1050s or 1060s. Then I came across single-slot 1660s but discovered they are too rare. Then I kept comparing PPD vs. watts with all cards up to the 4000 series, and I felt like I was back at square one, lol. A 4060 will eat up about 110W pumping out about 3.27M PPD, vs. 3x 1660s totaling about 360W with 3.6M PPD. And then I came across new mining motherboards with 5+ PCIe 3.0 slots that are practically being given away because of the crash, but I don't know anything about them, and you can't fit those in a case. Ah, rabbit holes.
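If anyone wants to redo the PPD-per-watt math with their own cards, here's a quick Python sketch using the rough numbers above. The PPD and wattage figures are forum estimates, not benchmarks:

```python
# Rough PPD-per-watt comparison using the numbers quoted above.
# Figures are forum estimates, not measured benchmarks.

configs = {
    "1x RTX 4060": {"watts": 110, "ppd": 3.27e6},
    "3x GTX 1660": {"watts": 360, "ppd": 3.60e6},
}

for name, c in configs.items():
    eff = c["ppd"] / c["watts"]  # points per day, per watt drawn
    print(f"{name}: {c['ppd'] / 1e6:.2f}M PPD / {c['watts']} W "
          f"= {eff:,.0f} PPD per watt")
```

By this measure the single 4060 is roughly 3x as efficient as the trio of 1660s, even though its raw PPD is slightly lower.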
 
Originally I was thinking of starting out with a bunch of 1050s or 1060s. Then I came across single-slot 1660s but discovered they are too rare. Then I kept comparing PPD vs. watts with all cards up to the 4000 series, and I felt like I was back at square one, lol. A 4060 will eat up about 110W pumping out about 3.27M PPD, vs. 3x 1660s totaling about 360W with 3.6M PPD. And then I came across new mining motherboards with 5+ PCIe 3.0 slots that are practically being given away because of the crash, but I don't know anything about them, and you can't fit those in a case. Ah, rabbit holes.
Definitely a rabbit hole. Yeah... a single 4060 would be a much better solution in general, but the question is: what do you need to buy, and what do you already have? If you have the 10XX series cards already but are looking to replace them for performance and efficiency, then by all means do that. However, on older hardware the 4xxx cards might be bottlenecked by CPU and RAM performance, so it will really depend on what you pair it with. I don't have a lot of experience with 3xxx+ cards because I mostly operate on hand-me-downs and cheap opportunities. But I have read a lot of people mention this issue when gamers talk about taking old systems like Optiplex 7010s and making cheap gaming boxes out of them with high-end GPUs. It used to be a great idea, but old legacy hardware just bottlenecks newer cards in certain situations, if not most.
 
3770K has 16 PCI-e lanes.

I would think 2 GPUs (x8) is the maximum number of GPUs you can have before the system bogs down.
Most F@H jobs will use around 75% of the bandwidth of a PCIe 2.0 x4 slot, so it should run well on each GPU as long as each has two PCIe 3.0 lanes or four PCIe 2.0 lanes. (y)
More than 4 GPUs would most likely overload the CPU during the jobs, though, so it's probably best not to go beyond 4 tasks/GPUs on a quad-core CPU.
 
Last edited:
I participated in the F@H challenge last year during the Eth mining craze (had multiple GPUs mining Eth 🙃) and switched them over to F@H: a 6700K (held up OK), an Athlon XP 435 (could not keep up; it couldn't feed two GPUs), and my main PC, which fed the 3080 Ti with a Ryzen 3800X on a full 3.0 x16 slot. I was definitely limited by the PCIe x1 bandwidth on the two Eth mining machines. On a CPU that old you might have trouble feeding newer GPUs, let alone multiple of them. You might be able to mostly feed a pair of GTX 1080s or RTX 2060s with that CPU, as long as both get PCIe 2.0 x8.

Edit: I would go with a single 4060 or something like that, as the 4000 series can crunch WUs way better than 4 GTX 1060 cards (assuming 2 PCs with 2 cards each). Newer cards are much more capable and use less energy doing it.
 
Last edited:
Originally I was thinking of starting out with a bunch of 1050s or 1060s. Then I came across single-slot 1660s but discovered they are too rare. Then I kept comparing PPD vs. watts with all cards up to the 4000 series, and I felt like I was back at square one, lol. A 4060 will eat up about 110W pumping out about 3.27M PPD, vs. 3x 1660s totaling about 360W with 3.6M PPD. And then I came across new mining motherboards with 5+ PCIe 3.0 slots that are practically being given away because of the crash, but I don't know anything about them, and you can't fit those in a case. Ah, rabbit holes.

Mining mobos are limited on PCIe bandwidth; ETH didn't require more than x1 electrical for data movement. F@H needs more bandwidth, and those mining boards will likely be only x1 electrical per slot, except possibly the top x16-size slot. Research this before buying such a mobo. Also, they may not like being paired with higher-end processors, since they were designed specifically for mining ETH.
 
Last edited:
I've decided to stay away from mining boards altogether. The power thing you mentioned is now just one more reason to stay away.
 
Your best bet is a 4070 Ti for Windows, and a 4090 for Linux. A 4070 Ti can net you around 12-15 million PPD; a 4090 on Linux can nearly double that.
 
I've decided to stay away from mining boards altogether. The power thing you mentioned is now just one more reason to stay away.
Good call.

To add to the power commentary, the boards themselves are all foreign-made with F-tier garbage power delivery traces to not just the CPU but the GPUs. Looking back, literally the worst power delivery subsystems I've ever seen in my life.

I'm truly surprised I only lost a few boards and no cards at the time.

These days I've rebased my mining rigs from those x8-slot types to consumer-grade AM5 Ryzen units with no more than 4 cards per board. More to manage, but absolutely rock solid, with enough PCIe lanes for even Vast.AI jobs.

These types would do well for F@H jobs as well... now, if only my power was 5c/kWh like it was back pre-pandemic...
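Since power cost keeps coming up in this thread, here's a tiny Python sketch for working out what a rig costs to run 24/7. The 360W figure and the two rates are just examples pulled from the thread; plug in your own:

```python
# Electricity cost of a rig running continuously.
# Wattage and rates below are illustrative examples from the thread.

def energy_cost(watts: float, rate_per_kwh: float, hours: float = 24.0) -> float:
    """Cost in dollars to run a load of `watts` for `hours`."""
    return watts / 1000.0 * hours * rate_per_kwh

rig_watts = 360  # e.g. three GTX 1660s, per the earlier comparison
for rate in (0.05, 0.43):  # pre-pandemic 5c vs. 43c per kWh
    daily = energy_cost(rig_watts, rate)
    print(f"${rate:.2f}/kWh: ${daily:.2f}/day, ${daily * 30:.2f}/30 days")
```

At 43c/kWh that hypothetical 360W rig costs roughly $3.72 a day, nearly nine times what it would at the old 5c rate.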
 
if only my power was 5c/kWh like it was back pre-pandemic...
My rate is like 43c/kWh in the fall/winter seasons and I'm still DCing... lol

Here is a picture of 7 water-cooled Titan Vs (from another DC team) running DC projects on an Epyc system for maximum PCIe bandwidth per slot ;)

 
My rate is like 43c/kWh in the fall/winter seasons and I'm still DCing... lol

Here is a picture of 7 water-cooled Titan Vs (from another DC team) running DC projects on an Epyc system for maximum PCIe bandwidth per slot ;)

View attachment 614694

Yowza. Reminds me of the time I almost did a full 8-card rig of Titan Blacks for that one project, which had full FP64, IIRC.

That being said, um...

nice
 
Yowza. Reminds me of the time I almost did a full 8-card rig of Titan Blacks for that one project, which had full FP64, IIRC.

That being said, um...

nice
That project recently went CPU work only.
 