New GPU3 Projects

jfb9301

So yesterday I got home to all my GPU3 clients hung for the next 24 hours, because Stanford released new WUs without updating the core to support them. Seems the new core was available a couple of hours after the WUs went out (Stanford, this is an example of Piss Poor Planning). Good to see you're consistent in making poor decisions.......


</SOAPBOX>

These new WUs are:

Projects P7620 and P7621.
5187 points per WU

Results?

HOT as F#$K!!!!! My GTX570s are at 85% fan speed and 85C on stock clocks........
Hungry..... My sig rig is now drawing 1100W from the wall, up from 890W.

But more interesting is how they affect the SMP bigadv WU on the 970.....
Settings: -smp 10 -bigadv
Frame times down to 25:30 from 27:30

Yes... a 2-minute-per-frame improvement when running these new WUs. In fact, the SMP frame time seems to be independent of the GPU WU: with all GPU clients hung, it was almost identical to the frame time with all 3 GPUs folding the new WUs.

Now I don't have time to run experiments as I have to go to work and do other stuff, but could anyone with GPU3 hardware please test whether we can stop gimping our i7s with -smp 6 and -smp 10 to make room for our GPUs? The PPD increase could be worth it.
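
If anyone wants to put rough numbers on that tradeoff before testing, here's a minimal sketch of the bonus math, assuming the standard quick-return formula for SMP/bigadv (credit scales with the square root of how fast the WU is returned, so PPD goes roughly as TPF^-1.5). The base credit, k-factor and deadline below are made-up placeholders, not real project values; the relative gain doesn't depend on them as long as the bonus is active.

Code:
import math

def bigadv_ppd(tpf_seconds, base_credit, k_factor, deadline_days, frames=100):
    # Standard F@H quick-return bonus:
    #   credit = base_credit * max(1, sqrt(k * deadline_days / elapsed_days))
    # PPD = credit per WU * WUs completed per day.
    elapsed_days = tpf_seconds * frames / 86400.0
    credit = base_credit * max(1.0, math.sqrt(k_factor * deadline_days / elapsed_days))
    return credit / elapsed_days

# TPFs from this post: 27:30 with the old GPU WUs running, 25:30 with the new ones.
# base_credit / k_factor / deadline_days are placeholder numbers only.
old = bigadv_ppd(27 * 60 + 30, base_credit=8000, k_factor=25, deadline_days=4)
new = bigadv_ppd(25 * 60 + 30, base_credit=8000, k_factor=25, deadline_days=4)
print(f"relative bigadv PPD gain: {new / old - 1:.1%}")  # about +12%, since PPD scales as (27.5/25.5)**1.5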

Benchmarks for the record

17K PPD on a GTX570, about 4:26 a frame (and even my "slow" GTX570 gets the full PPD instead of 2K PPD less than the "fast" one; thank god, Stanford, it seems you fixed this problem)

8850 PPD on a GTX460. These WUs kill 460s.... 8:26 a frame at best, with major slowdowns up to 30 min a frame.... It might not be worth folding on 460s any more based on PPD per watt. My 460 has yet to complete its first WU, 11 hours later.....

I'm not sure why these numbers work out the way they do, and I don't have the time to do extensive research, but maybe we can collectively figure out the best way to make the most of them.
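
For reference, since GPU WUs carry no quick-return bonus (as noted further down the thread), GPU PPD is just points per WU times WUs completed per day, and that reproduces the numbers above almost exactly. A quick sketch:

Code:
def gpu_ppd(points_per_wu, tpf_seconds, frames=100):
    # No bonus on GPU work: PPD = credit per WU * WUs finished per day.
    wu_seconds = tpf_seconds * frames
    return points_per_wu * 86400.0 / wu_seconds

# 5187 points per WU for P7620/P7621 (from this post).
print(round(gpu_ppd(5187, 4 * 60 + 26)))  # GTX570 at 4:26 TPF -> ~16850 PPD ("17K")
print(round(gpu_ppd(5187, 8 * 60 + 26)))  # GTX460 at 8:26 TPF -> ~8860 PPD ("8850")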

 
Yeah, my GTX570 and GTS450 got them last night. It was gonna take like 12 hours on the GTS450, and the 570's temps went up to 83C.
 
I like the smell of burnt GPUs in the morning. Not from our team of course. ;)
 
I noticed these WUs before heading to bed last night. My 2x460's TPF was 6:45 if I remember correctly. The v7 client was reporting 10.8k PPD on them. This was in a rig with an E8200, which makes an embarrassing 2k PPD folding SMP.
 
just wanted to subscribe to this thread

Word around is that the 280 drivers are much more stable.
This client uses less CPU and gives better PPD on both clients.
Heat is the enemy, as will be the power draw... thank god it's winter in Australia.
 
Did some research on the other team's forum (EVGA).

These are advmethods WUs.....

If you don't want to risk them, don't run -advmethods.

Just know that if they're advmethods, that typically means they're the wave of the future, so everyone get ready to cook your GPUs this fall......

On a good note, EVGA does not like them; the guys running them are having to downclock their GPUs to stock and don't like the heat.

<SOAPBOX>
Since EVGA has the ear of Stanford, maybe they will convince their lapdog to up the points because of how ridiculously power hungry these WUs are.
</SOAPBOX>
 
So yesterday I got home to all my GPU3 clients hung for the next 24 hours, because Stanford released new WUs without updating the core to support them. Seems the new core was available a couple of hours after the WUs went out (Stanford, this is an example of Piss Poor Planning). Good to see you're consistent in making poor decisions.......
facepalm
Frame times down to 25:30 from 27:30

polite applause
 
Wait till they bring in a quick-return bonus for GPU clients.

I'm betting it won't be long till that happens.
 
Wait till they bring in a quick-return bonus for GPU clients.

I'm betting it won't be long till that happens.
Hopefully right about the time I restock my GPU supply to heat my house this fall......
 
Was browsing over at FF last night; apparently they did upload the new core, but there was a glitch that meant it wouldn't download properly, hence the hung clients.

What they need to do is send out the new core a couple of days before the new WUs are released. Maybe K can mention it to PG.
 
Was browsing over at FF last night; apparently they did upload the new core, but there was a glitch that meant it wouldn't download properly, hence the hung clients.

What they need to do is send out the new core a couple of days before the new WUs are released. Maybe K can mention it to PG.

I just made a post to this effect....

However, since it was -advmethods, "glitches should be expected" or something to that effect. It was resolved quickly, to their credit.

I can also say that I haven't seen anything in the DAB about the new GPU WUs and such.
Also, nothing that I have seen in the DAB recently about a QRB on GPU WUs.

The idea of a QRB on GPU WUs seems silly since most of the WUs are sent back in under a day as it is (if not a few hours).
 
New Cores

Postby kendrak » Fri Aug 19, 2011 4:25 pm
A simple suggestion was made on the [H]ard forum:

Is it possible to put the new cores on the servers and ready for download a day or so in advance of the new WUs needing them? That way we can avoid the hiccup that happened last night with the GPU clients.

Or was there a different issue(s) that would not have been fixed with this?

Re: New Cores

New postby tjlane » Fri Aug 19, 2011 5:09 pm
kendrak,

The process you mention is common practice here at Stanford. What happened yesterday was the result of a miscommunication between me (the project manager) and the software engineer who built the new core. Basically I released the projects thinking the core had been uploaded, when it had not. I do apologize for any lost work that occurred - obviously it is in all of our interests to ensure that this kind of thing doesn't happen! Maybe you could communicate this message back to the guys at [H] along with my apologies - we'll do better in the future!

Well, it seems it was a one-off mistake.
 
Ah, this explains my 'sleeping' GPUs. Interesting post about the 280 drivers being better. [Anyone else notice an improvement over the 275s?]

Edit - never mind the bracketed stuff; I don't want to hijack the thread, so I'll post a new one.
 
Only problem I'm seeing with the 280s is once I stop them they won't kick back into 3D mode when I restart the GPUs.
 
Well, tried -smp 12 instead of -smp 10

25:13

So that shaves another 17 seconds per frame off (as opposed to losing several minutes).
Good news for dedicated folders, but at a savings of only 17 seconds per frame, I suspect that if you use the machine for much more than folding, it will probably be at a loss. I don't know the exact bigadv frame times for this rig with no GPU folding, but as I recall it's around 25 minutes......

edited to make sense.....
 
Just did some math as I started to ponder what my power supply was feeling.......

1140W from the wall.....
1200W Power supply......
Power supply at 95% full load....

I don't think it is as easy as that...

It's probably safe to assume an efficiency of ~85%:
0.85 x 1140 = 969W
969W / 1200W = ~80% of full load

But to be conservative, it's safer to assume your power supply is 100% efficient rather than guess.
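
If anyone wants to plug in their own numbers, here's that estimate as a tiny sketch; the 85% efficiency is an assumption, not a measured figure for this particular unit.

Code:
def psu_load(wall_watts, rated_watts, efficiency=0.85):
    # The PSU only has to deliver (wall draw * efficiency) watts of DC power;
    # the rest of the wall draw is lost as heat in the supply itself.
    return wall_watts * efficiency / rated_watts

print(f"{psu_load(1140, 1200):.1%} of rating at ~85% efficiency")              # ~80%, as above
print(f"{psu_load(1140, 1200, efficiency=1.0):.1%} if 100% efficient (worst case)")  # 95.0%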


I really hope BFG power supplies are solid, as I cannot afford to go shopping for an AX1200 right now.
 
BFG's higher-end units are solid. I killed an 800W one drawing 820W from the wall though :)
 
After reading this I went and turned on -advmethods. The next unit I got was one of the two new units. The first thing I noticed is that these units make ordinary desktop applications run really choppy. The next thing I noticed is that my GPU temp quickly climbed over 93C (and was still rising), so I quickly downclocked it; now it is sitting stable at 92C. On normal units my GPU sits at 74 to 76C, so this is quite a temp increase. Fans are at 100%.

My 560 Ti, sitting at 870/1740 right now, is reading an estimated 13.7k PPD with unit 7620. I will be removing the flag after this work unit completes and bringing the clock back up.
 
Wow, my GTX580s are running hot. Normally I sit in the 80C range; now I'm about 98C with the fans at full speed.
 
I also have to report that these units are degrading desktop performance and running hot.

Also, I noticed (never saw this before, I think it's a new issue) that GPU usage drops to nearly 0 and then spikes when clicking a link and rendering a page in Firefox. I assume this is hardware acceleration causing the F@H CUDA priority to drop to low.

I have SLI'd cards, and this only happens on GPU2 according to EVGA Precision.
 
I also have to report that these units are degrading desktop performance and running hot.
Consider using different drivers. I had to revert to older drivers to get rid of severe 'desktop' lag. I can't speak to the implications for gaming, as I don't do any. Hot? Absolutely; these new units engage the GPUs much more fully than most of the previous projects.
Also, I noticed (never saw this before, I think it's a new issue) that GPU usage drops to nearly 0 and then spikes when clicking a link and rendering a page in Firefox.
That shouldn't be happening. I just tested it several times to try to reproduce it. With MSI Afterburner open, starting Firefox and opening links in Firefox, all 3 GPUs in the machine stayed unflinching at 99% usage.
I have SLI'd cards, and this only happens on GPU2 according to EVGA Precision.
Sorry, I don't quite understand what you mean. 7620 & 7621 are GPU3 projects.
 
It's killing my 460 with the 768MB RAM. It's at 8k PPD, way down from the normal units. Going to switch back after this unit is done.
 
It's killing my 460 with the 768MB RAM. It's at 8k PPD, way down from the normal units. Going to switch back after this unit is done.

Wow, this would be the first time RAM was a factor, would it not?
 
I'm confused why my GTX580s are using 912MB of VRAM and the 460 is using 312MB of VRAM on the same WU.
Are they at the same progress level? It's possible that the RAM usage changes as you complete more of the work unit.
 
Are they at the same progress level? It's possible that the RAM usage changes as you complete more of the work unit.

No, they're not, but that would still explain why the 768MB version of the 460 is dog slow on these new units.


Edit: I've rebooted the computer and it seems the 460 is now back at 11k.
 
Looks like they've finally released WUs that can keep a GF100/110 fully taxed. While Afterburner and others have always reported the GPUs were fully utilized, I believe the bottleneck was somewhere outside of primary execution on the smaller WUs. This also explains why the 460s don't like it: they are likely having to do excess polling from cache/memory because the WUs don't fit neatly within available resources.

MCW80s can be had for about 40 bucks and modified to sit over the stock NVIDIA VRM/memory plate. I highly recommend the watercooling route; letting your GPUs sit folding in the 90-100C range is asking for trouble.
 
Looks like they've finally released WUs that can keep a GF100/110 fully taxed. While Afterburner and others have always reported the GPUs were fully utilized, I believe the bottleneck was somewhere outside of primary execution on the smaller WUs. This also explains why the 460s don't like it: they are likely having to do excess polling from cache/memory because the WUs don't fit neatly within available resources.

MCW80s can be had for about 40 bucks and modified to sit over the stock NVIDIA VRM/memory plate. I highly recommend the watercooling route; letting your GPUs sit folding in the 90-100C range is asking for trouble.

I've been looking to watercool my 460s (768MB) and never stumbled across this information. Thank you very much, R-Type!

It's been a while... is FrozenCPU still the place to go for WC stuff?

On topic, I've noticed the desktop sluggishness as well on these new WUs. I am using the new 280 drivers, and my 460s are SLI'd. Those with sluggishness, what is your setup? Those without, as well!
 
What are the temps like on stock-clocked cards?

If it's not answered by 5pm EST, I'll post my temps. Maybe I'll check at lunch...

Edit: A quick peek at lunch shows my 460s hovering around 70C at 80% fan. They're clocked at 800/1600/1925 and ambient is 24C (75F).
 
Restarted my sig rig after reading that your 460 went up in PPD..... nothing doing.... I'm still getting the same numbers, but the benefits to a 2684 are even more pronounced than the benefits to a 6900. I've gone down almost 3 minutes per frame on those; 35,000 PPD on a 2684 is a real improvement for this rig.

Maybe if I were to upgrade to the 280 drivers.... I'll hold out for more numbers before upgrading to them.
 
I've been looking to watercool my 460s (768MB) and never stumbled across this information. Thank you very much, R-Type!

It's been a while... is FrozenCPU still the place to go for WC stuff?

On topic, I've noticed the desktop sluggishness as well on these new WUs. I am using the new 280 drivers, and my 460s are SLI'd. Those with sluggishness, what is your setup? Those without, as well!

No problem. Do note, however, that only the GF100 and GF110 cards have a true VRM/memory backplate that will keep them cool with the waterblock mounted. For the 460 you will need some sinks for the VRMs and memory.

The MCW80s seem to be mostly out of stock (though you may still find some out there); their replacement is the functionally identical MCW82.
http://jab-tech.com/Swiftech-MCW-82-VGA-Cooler-pr-4853.html

For the memory you will need something like this; you may have to Dremel them a bit to make them fit under the barbs.
http://jab-tech.com/Enzotech-Forged-Copper-VGA-Memory-Heatsink-BMR-C1-pr-3724.html

Depending on when your 460 was made, it may or may not already have a VRM sink on it. In this picture, it is the black rectangular heatsink to the right of the card.
[image: EKhSv.jpg]


If yours doesn't have that, you will need VRM sinks.
http://jab-tech.com/Enzontech-Mosfet-Heatsink-MOS-C1-pr-4112.html

That should get you up and running with the card's cooling, and the beautiful thing is that all the parts are reusable for when you upgrade. If you don't already have a loop, I recommend the XSPC kits, as they have everything you need and include a waterblock for your CPU too.
(The RS360 is the thinner radiator, but it should handle your heat load fine.)
http://www.sidewindercomputers.com/xsra750rswak1.html

(The RX360 is the thicker radiator and can handle a couple of cards in SLI along with your CPU.)
http://jab-tech.com/XSPC-Rasa-750-RX360-CPU-watercooling-kit-pr-4780.html


Jab-tech and Sidewinder have the best prices for this stuff, and Jab-tech has a 5% coupon you can use by entering 'facebook' when checking out.
 
You've been far too helpful. The last time I ordered parts from Jab-tech, I was WCing an Athlon XP. Time to get back in the game, I think!

Of note, after a restart, a clean install of the 280.26 drivers, and a few rounds of these WUs, my desktop is normal again and my temps are about average for 100% load (70C). Also, my GPU memory usage floats around 700MB (768MB cards) no matter the WU percentage complete. Perhaps I'm just lucky (first time for everything), but these WUs seem to have no negative effects for me.
 
You've been far too helpful. The last time I ordered parts from Jab-tech, I was WCing an Athlon XP. Time to get back in the game, I think!

Of note, after a restart, a clean install of the 280.26 drivers, and a few rounds of these WUs, my desktop is normal again and my temps are about average for 100% load (70C). Also, my GPU memory usage floats around 700MB (768MB cards) no matter the WU percentage complete. Perhaps I'm just lucky (first time for everything), but these WUs seem to have no negative effects for me.

Very nice. To be honest, I would hold off on watercooling those cards then, as you won't see much benefit in going from 70C to 40C other than noise. With 28nm a few months away, I would plan to refresh and watercool the new cards. Everything I listed should still be applicable, although you may need a mounting adapter for the MCW82s.

Has anyone else seen their temps normalize with restarts or driver changes? Between the heat/power draw and the greatly decreased SMP impact, it seems the WU fits well enough on the GF100/110s to keep the card busy without hitting up the CPU as often. It's entirely possible that the GF104 in your GTX460 can't keep itself as busy due to bottlenecks elsewhere in the chip, hence the lower temps.
 