Dual GPU2 vs single GPU3 client CPU usage

APOLLO

[H]ard|DCer of the Month - March 2009
I'm thinking of revising the GPU components in my farm due to aging architectures, and looking to optimize CPU usage for SMP performance. I'm less interested in the highest-performing video card setup than in obtaining maximum efficiency in terms of CPU usage for total system PPD. What I have running now is predominantly a combination of Core2 dual-socket systems with dual G92 cards in each. Some run -bigadv clients while others process regular SMP.

What I want to determine is whether it is better for me to ditch my dual G92 cards running GPU2 for one Fermi running GPU3 instead. Even one GTS 450 card will net me slightly better production than two G92 cards with most GPU3 WUs. Running one Fermi card *should* also consume less power and create less heat if I stick to a GTX 460 or GTS 450. What I want to know is whether the CPU utilization of the new GPU3 WUs will be less than that of two typical GPU2 WUs. If not, I won't go through the trouble just yet, and will wait on revamping my farm until better options are available or I'm forced to upgrade.

In a nutshell, I'm hoping to maintain the current production of my systems or even improve it, while reducing the number of GPU clients and their CPU utilization, to improve overall system efficiency and increase SMP performance if possible, especially -bigadv.

TIA
 
Just for reference, I run two GTX 260s on an i7-875K @ 4.0 GHz running -bigadv. The two cards get 16,000 PPD on average, but my net gain is only 12,000 PPD. That's on a 2685, so I'm losing 4,000 PPD on the -bigadv.
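The net-gain arithmetic in that post can be sketched as a quick check (a minimal sketch using only the figures quoted above; not a folding tool):

```python
# Net PPD check for running two GPU2 clients alongside a -bigadv SMP client.
# All figures are the ones quoted in the post above.
gpu_ppd = 16_000      # average combined PPD of the two GTX 260s
bigadv_loss = 4_000   # PPD lost on the 2685 -bigadv unit while the GPUs run

net_gain = gpu_ppd - bigadv_loss
print(f"Net system-wide gain from the GPUs: {net_gain} PPD")  # 12000
```

The point of the check: the cards' raw PPD overstates their value, because their CPU overhead is paid for out of the SMP client's bonus.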
 
Would running a dedicated GPU box (say one that has 4 GPUs in it) and then leaving the other hardware to work on SMP/-bigadv make sense?

Edit:

You could move 4 GPUs into this with a cheap X2 and let the SMP systems do what they do best.
Either run what GPUs you currently have, or upgrade and have no concern over GPU3 taking more CPU.
 
Would running a dedicated GPU box (say one that has 4 GPUs in it) and then leaving the other hardware to work on SMP/-bigadv make sense?
Yes, but that would necessitate building another system, since none of my current systems have more than two PCI-E slots except the Skulltrail. I already shut down a system I had running before the summer with one ATI client and no CPU client because it was a total waste of resources. Growing my farm by one system again would need to be worth the total system expense, and I just don't see it for the express purpose of moving half my cards over when I could be going the single-GPU route on my existing boxen and downsizing the dual-card setups. Does this make sense, or am I just trying to convince myself of a specific course of action, LOL.
 
From the bit I understand, the GPU3 client eats up a fair amount of CPU, and on an 8+ thread boxen, it hurts.... bad.

It seems segregating the two, if possible, is what makes sense right now.

How many GPUs are you running?

If you can get that down to 3 or 4 of the newer flavor, keep the ppd the same, and run it on a low end base, you should do well.

But if you're going that far with it..... why not just replace a current SMP system with updated hardware, using the money you get from selling the outdated system and the GPUs?

This is what got me to my first SR-2. Sold off most of my GPUs, upped my PPD, and lowered my power by 50%.
 
From the bit I understand, the GPU3 client eats up a fair amount of CPU, and on an 8+ thread boxen, it hurts.... bad.
Yeah, that's what I also understand and want to quantify the difference between one GPU3 client compared to two GPU2 clients.

It seems segregating the two, if possible, is what makes sense right now.
Core2 architecture is too slow at any form of SMP to be worth segregating client types. The bonus for either SMP client type, running with or without a GPU, is too negligible to be worth removing the GPU clients altogether; the loss in overall production would be prohibitive. I tried running both regular and -bigadv SMP without GPU. The only way separating the two client types would yield positive performance results would be to build enough systems to house all the cards I run, and that leads to another problem....

How many GPUs are you running?
More than a dozen clients but less than that number of cards because two of them are dual-GPU cards. Figure around 10 cards. Nearly all are G92 architecture.

If you can get that down to 3 or 4 of the newer flavor, keep the ppd the same, and run it on a low end base, you should do well.
I am thinking along those lines: replacing as many of the old cards with Fermi as I can. I just need to determine the CPU utilization trade-offs to calculate the advantage, if there is any.

But if you're going that far with it..... why not just replace a current SMP system with updated hardware, using the money you get from selling the outdated system and the GPUs?
Tons of obstacles along this route. For one thing, I'm in Canada. The second-hand market simply does not move on this type of hardware. I know people who tried with exact or similar hardware - no interest. Another problem is very low returns even for my G92 cards ($50-$80 max). Basically, my equipment is nearly worthless and/or no one wants it.

This is what got me to my first SR-2. Sold off most of my GPUs, upped my PPD, and lowered my power by 50%.
In the GWN, an SR-2 system, even with 'lowly' quads, is something that would cost ~$2000; no way around it. Ask 10e or 404. It's not feasible for those on a budget or just dirt poor. :(
 
Two GPU2 clients are about equal to a single GPU3 client, but it depends on the WU. Non-Fermi GPU3 WUs aren't bad; their CPU usage is actually very similar to GPU2 WUs. Fermi GPU3 WUs, on the other hand, are CPU hogs.

The Fermi WUs use between 3-5% of the CPU,
non-Fermi GPU3 WUs use 1-3%,
and GPU2 fluctuates between 0-2% depending on the WU.
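Those per-client figures can be turned into a rough range comparison of the two setups under discussion (a sketch using only the percentages quoted above; actual usage varies by WU, driver, and CPU):

```python
# Rough CPU-overhead ranges per client type, per the percentages quoted above.
# Each tuple is (low %, high %) of total CPU time per client.
per_client = {
    "GPU2": (0, 2),
    "GPU3 non-Fermi": (1, 3),
    "GPU3 Fermi": (3, 5),
}

def total_range(client, count):
    """Combined (low, high) CPU overhead for `count` clients of one type."""
    lo, hi = per_client[client]
    return (lo * count, hi * count)

# The swap being considered: two GPU2 clients vs. one Fermi GPU3 client.
print("2x GPU2:      ", total_range("GPU2", 2))        # (0, 4)
print("1x GPU3 Fermi:", total_range("GPU3 Fermi", 1))  # (3, 5)
# The ranges overlap, so a single Fermi is not clearly cheaper on CPU.
```

Which is why the swap buys little for the SMP clients: the Fermi's worst case exceeds the two GPU2 clients' best case.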
 
Two GPU2 clients are about equal to a single GPU3 client, but it depends on the WU. Non-Fermi GPU3 WUs aren't bad; their CPU usage is actually very similar to GPU2 WUs. Fermi GPU3 WUs, on the other hand, are CPU hogs.

The Fermi WUs use between 3-5% of the CPU,
non-Fermi GPU3 WUs use 1-3%,
and GPU2 fluctuates between 0-2% depending on the WU.
Ah, I see. So in effect the best I could hope for is slightly better total GPU production going from a dual G92 config up to a 450 or 460, with less power consumption and heat output, but no real benefit for the SMP clients. Hmm, I'm going to need to think this through some more.
 
I lost 2k PPD on my 3.4 GHz X6 when I swapped a GTS 250 for a GTX 460. The card gets between 8.5k and 12.5k depending on projects, so at worst it's a wash, but the gain can be up to 4k over my old setup with the right projects. I do, however, have less heat and less power draw than before.
 
I lost 2k PPD on my 3.4 GHz X6 when I swapped a GTS 250 for a GTX 460. The card gets between 8.5k and 12.5k depending on projects, so at worst it's a wash, but the gain can be up to 4k over my old setup with the right projects. I do, however, have less heat and less power draw than before.
In my case it would be moving from a dual 8800GT setup to a single 450 or 460, so it's even less of a difference. The bottom line: it will cost a significant amount and the biggest advantage is less power draw and less heat production for the few warm months we have here. It's a disadvantage for the long Canadian winters, though. It's looking less and less like a viable option unless I'm guaranteed the best GPU3 WUs that net a minimum of 12k PPD. Two 8800GT cards never produce under ~8400 PPD, and more often than not I see over 9000 PPD.
 
Apollo,

I just averaged the PPD for 97 work units on my 460.

48 WUs were for 611 points, 23 were for 912, and 26 for 925. That averages out to 11.5k PPD. This average is on the low side, as I've significantly overclocked the card since I started. I don't know if 97 WUs is enough for a decent-sized sample or not, but it looks like it gets pretty close to the 12k PPD you'd want.
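The weighted average of that sample works out as follows (a sketch; the WUs/day rate needed to turn points-per-WU into PPD is an assumed figure chosen to be consistent with the ~11.5k number, not a measured one):

```python
# Weighted average points per WU over the 97-unit sample quoted above.
sample = [(48, 611), (23, 912), (26, 925)]  # (WU count, points per WU)

total_wus = sum(n for n, _ in sample)
total_points = sum(n * pts for n, pts in sample)
avg_points = total_points / total_wus
print(f"{total_wus} WUs, {avg_points:.1f} points/WU on average")  # 766.5

# Turning points/WU into PPD requires a completion rate. ~15 WUs/day is an
# assumed rate that reproduces the poster's ~11.5k PPD figure.
wus_per_day = 15
print(f"~{avg_points * wus_per_day:.0f} PPD")  # ~11498
```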

Take anything I say with a grain of salt, however; I'm still a n00b, and not a statistician either.
 
I vote keep your G92 GPUs, for now. That's what I'm doing (8 GPU cores), at least until the end of winter. Unless there's something that consumes a good margin fewer watts, produces more points, and doesn't cost too much, I won't upgrade. Right now I don't see that.... GTX 460, 200 W, ~11k PPD average? That's exactly what I'm getting with my old 9800GX2s!!
8800 GTs produce a bit less but are still decent performers; I'd still wait for an additional price drop after winter.

Btw, my SR-2 parts cost me $1898 total. But that's with the erotic sexy chips. Very near the prices you'd find in the US when you search around; the ability to price match at nc!x is really great.
But since there are no more of them around, a 970 X58 hex core @ 4.3 GHz is a very good option, I think. Your dual-processor rigs might be worth something decent if you sell each system as a whole, and that can finance a hex build..
 
I vote keep your G92 GPUs, for now. That's what I'm doing (8 GPU cores), at least until the end of winter. Unless there's something that consumes a good margin fewer watts, produces more points, and doesn't cost too much, I won't upgrade. Right now I don't see that.... GTX 460, 200 W, ~11k PPD average? That's exactly what I'm getting with my old 9800GX2s!!
8800 GTs produce a bit less but are still decent performers; I'd still wait for an additional price drop after winter.

Mine doesn't use 200 W, but it isn't heavily overclocked; mine's only running at 700/1400.
 
I was running my GTX 460 and noticed a drop of several thousand PPD when running -bigadv units along with the higher-PPD GPU3 units. Due to some issues, I moved the 460 into my Q9550 machine and my GTX 260 back into my main rig. In the Q9550 machine, I don't notice any significant drop in CPU PPD regardless of the unit. So I would say it depends on what computers you are dealing with. C2Qs shouldn't take too much of a hit, but PCs running -bigadv units might notice one. As for replacing two GPU clients with a single GPU3 one, I think with the higher-PPD units the GPU3 client might still require more CPU time.
 