112XX GPU3 WU's are a HUGE improvement

R-Type

Anyone else playing with these new GPU3 units? With my 470's clocked at 800/1600/1800, my PPD has increased by over 500 PPD per card compared to the old WU's, for a total of 15.9-16.1k PPD per card. What's more impressive is that power draw as measured at the outlet has dropped by about 40 watts a card, to a total of 105 watts per card over idle. Pretty damn good efficiency in GPU folding terms! :D
 
+juan, my PPD went up 4k on my GTX460

WUs from the same pool as 10953 as well.
 
I am seeing basically the same thing with 10956 and 10975 WU's. PPD went up from 11.6k to 15.3-15.9k.
Also, my wattage for the machine has dropped 55 watts on dual 460's. At first I started searching to see what was wrong; after not finding anything, and then seeing this post, I realized it is just the efficiency of these WU's.
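To put a rough number on the efficiency claims above, here's a quick back-of-the-envelope comparison in Python. The figures are the OP's GTX 470 numbers, and the "old WU" values are just his stated deltas (+500 PPD, -40 W) worked backwards, so treat the result as a ballpark only.

Code:
# Rough points-per-watt comparison for a single GTX 470, using the numbers
# quoted in this thread. The old-WU figures are inferred from the stated
# deltas, so these are approximations, not measurements.
new_ppd, new_watts = 16000, 105          # ~15.9-16.1k PPD at ~105 W over idle
old_ppd, old_watts = new_ppd - 500, new_watts + 40

print(f"old WUs: {old_ppd / old_watts:.0f} PPD per watt")
print(f"new WUs: {new_ppd / new_watts:.0f} PPD per watt")

That works out to roughly 107 versus 152 PPD per watt over idle, which matches the "more points for less power" impression.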
 
Anyone else playing with these new GPU3 units? With my 470's clocked at 800/1600/1800, my PPD has increased by over 500 PPD per card compared to the old WU's, for a total of 15.9-16.1k PPD per card. What's more impressive is that power draw as measured at the outlet has dropped by about 40 watts a card, to a total of 105 watts per card over idle. Pretty damn good efficiency in GPU folding terms! :D

Sadly these WU's are not new and shouldn't even be on release any more. They were originally designed as the test simulations for the then-new Fermi Core15. As Core15 went into actual production, these were switched off. If you are getting them, it is most likely an error on the part of the servers. Whilst they are out, though, enjoy :D

Oh, and the reason they run faster on lower-end cards is that they were small WU's that just nicely "fitted" into the GTS 450's core. The more powerful cards have more shaders, and for some reason the WU's seem to bounce around and not calculate efficiently. This perfect fit for the 450 is why many people bought them at the time, and then got upset when bigger production WU's were released and PPD dropped.
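A crude way to picture that "perfect fit" argument is to compare how much parallel work a WU can keep in flight against each card's shader count. The sketch below is only a toy illustration of the idea (the shader counts are the stock GTS 450/GTX 460/GTX 470 figures, and the WU "parallelism" number is made up); the real core's scheduling is far more involved.

Code:
# Toy model only: if a small WU can only keep roughly N shaders busy,
# the surplus shaders on bigger cards sit idle. Real GPU3 scheduling is
# much more complicated than this.
cards = {"GTS 450": 192, "GTX 460": 336, "GTX 470": 448}
wu_parallelism = 192  # hypothetical: enough work to roughly fill a GTS 450

for name, shaders in cards.items():
    utilisation = min(wu_parallelism, shaders) / shaders
    print(f"{name}: ~{utilisation:.0%} of shaders kept busy")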
 
Anyone else playing with these new GPU3 units? With my 470's clocked at 800/1600/1800, my PPD has increased by over 500 PPD per card compared to the old WU's, for a total of 15.9-16.1k PPD per card. What's more impressive is that power draw as measured at the outlet has dropped by about 40 watts a card, to a total of 105 watts per card over idle. Pretty damn good efficiency in GPU folding terms! :D


they aren't efficient. they are smaller WU's. it means it's using fewer of the shaders.

but like Nathan_P said... these are just test WU's that were used at the very beginning of Fermi GPU3. they don't actually do any science. they just test the algorithm code used for the real WU's. which means this is probably a sign that there will be a series of new WU's running a different algorithm in the coming months.
 
Sadly these WU's are not new and shouldn't even be on release any more. They were originally designed as the test simulations for the then-new Fermi Core15. As Core15 went into actual production, these were switched off. If you are getting them, it is most likely an error on the part of the servers. Whilst they are out, though, enjoy :D

they aren't efficient. they are smaller WU's. it means it's using fewer of the shaders.

but like Nathan_P said... these are just test WU's that were used at the very beginning of Fermi GPU3. they don't actually do any science. they just test the algorithm code used for the real WU's.
You guys are correct. I thought there was something familiar about them. They're definitely not efficient. One thing I don't like about them is their high CPU usage. It's not the first time these test WUs have resurfaced; it also happened for a day or so a couple of months ago. Maybe Stanford ran out of regular WUs.
 
Well, I guess I look like the noob here; I figured higher numbers equaled newer... :rolleyes:

At the end of the day, though, I would just as soon NOT be getting the same points as a 450 while running 470's :p
 
Do these WUs have any scientific value? The points are nice, but it would suck if that is all they are good for.
 
Do these WUs have any scientific value? The points are nice, but it would suck if that is all they are good for.


they have no direct scientific value, but they do serve a purpose, since you are testing the code that will be used with future WU's.
 
I currently have:

GTX460: P10927 (R0, C73, G1)
GTX460: P10966 (R2, C68, G12)
GTX460: P10956 (R1, C71, G9)

GTX470: P10958 (R0, C67, G17)
GTX470: P10939 (R2, C61, G27)

And, I know I had more of these 10xxx units earlier today. The 6800s appear completely gone.
 
On my only Fermi client, I'm still seeing P112s and they're killing my -bigadv client. Moreover, the core15 WUs have all but disappeared from non-Fermi clients. I'm seeing only core11 WUs, which run much hotter and are less productive. The past several days have been very bad for GPU folders.
 
And I have another 470 arriving on my doorstep tomorrow. Just in time for summer.

I don't know why I did it either.
 
grr, getting normal WU's now. LOL, loved getting 15k PPD on a 460
They're great if you don't mind there's no science being worked on and don't have any CPU clients running on the same machine. These WUs hit my -bigadv client so hard that I had no choice but to shut down the GPU client for a couple of days, otherwise the SMP WU may not have made the bonus deadline. For me it was a lose/lose situation with these WUs, unfortunately.
 
They're great if you don't mind there's no science being worked on and don't have any CPU clients running on the same machine. These WUs hit my -bigadv client so hard that I had no choice but to shut down the GPU client for a couple of days, otherwise the SMP WU may not have made the bonus deadline. For me it was a lose/lose situation with these WUs, unfortunately.

they didn't make a dent in my -bigadv folding.

didn't realize the "no science" part, as i didn't read the whole thread.
 
I currently have:

GTX460: P10927 (R0, C73, G1)
GTX460: P10966 (R2, C68, G12)
GTX460: P10956 (R1, C71, G9)

GTX470: P10958 (R0, C67, G17)
GTX470: P10939 (R2, C61, G27)

And, I know I had more of these 10xxx units earlier today. The 6800s appear completely gone.
I'm folding a 6801 right now with my GTX 465. FYI: my temp is 43°C at stock speed. The Arctic Cooling cooler rocks, but it sure is expensive!
 
They're great if you don't mind there's no science being worked on and don't have any CPU clients running on the same machine. These WUs hit my -bigadv client so hard that I had no choice but to shut down the GPU client for a couple of days, otherwise the SMP WU may not have made the bonus deadline. For me it was a lose/lose situation with these WUs, unfortunately.

could be a difference in the architecture of the processors, since you're running primarily LGA771 dual-processor systems, right?
 
They affected mine. Had to change to -smp 7.

Gah, overnight I picked up a 2684 bigadv and two 6801's on the gpu's. Power draw is up to 595W from 470W before and total ppd from the box is down to 58k from 71k. Seems Stanford is making me pay for those several days of nice work units.
 
Gah, overnight I picked up a 2684 bigadv and two 6801's on the gpu's. Power draw is up to 595W from 470W before and total ppd from the box is down to 58k from 71k. Seems Stanford is making me pay for those several days of nice work units.

You are not alone. I have a 2684 running right now as well as a 6801 on the GPU.

They affected mine. Had to change to -smp 7.

Did the a5 core let you do this? It was my understanding that the number of threads would be changed if the client did not like it. I'm curious if your FAH log has "Mapping NT from 7 to 8" in it.
 
You are not alone. I have a 2684 running right now as well as a 6801 on the GPU.



Did the a5 core let you do this? It was my understanding that the number of threads would be changed if the client did not like it. I'm curious if your FAH log has "Mapping NT from 7 to 8" in it.

I'll probably jinx myself, but for some reason, I have not downloaded a 2684 for weeks.

Hmm, found a small problem with my flags LOL!

"Arguments: -smp 8 -bigadv -smp 7 -verbosity 9 "

Edit: " Mapping NT from 7 to 7" on mine too.
 
Did the a5 core let you do this? It was my understanding that the number of threads would be changed if the client did not like it. I'm curious if your FAH log has "Mapping NT from 7 to 8" in it.
-smp 7 still works OK

Code:
--- Opening Log file [March 10 05:10:07 UTC] 


# Windows SMP Console Edition #################################################
###############################################################################

                       Folding@Home Client Version 6.34

                          http://folding.stanford.edu

###############################################################################
###############################################################################

Launch directory: C:\Fold\SMP
Executable: C:\Fold\SMP\FAH6.34-win32-SMP.exe
Arguments: -smp 7 -bigadv 

[05:10:07] Configuring Folding@Home...


[05:11:44] - Ask before connecting: No
[05:11:44] - User name: jebo_4jc (Team 33)
[05:11:44] - User ID not found locally
[05:11:44] + Requesting User ID from server
[05:11:45] - Machine ID: 1
[05:11:45] 
[05:11:45] Work directory not found. Creating...
[05:11:45] Could not open work queue, generating new queue...
[05:11:45] - Preparing to get new work unit...
[05:11:45] Cleaning up work directory
[05:11:45] + Attempting to get work packet
[05:11:45] Passkey found
[05:11:45] - Connecting to assignment server
[05:11:46] - Successful: assigned to (171.67.108.22).
[05:11:46] + News From Folding@Home: Welcome to Folding@Home
[05:11:46] Loaded queue successfully.
[05:13:00] + Closed connections
[05:13:00] 
[05:13:00] + Processing work unit
[05:13:00] Core required: FahCore_a5.exe
[05:13:00] Core not found.
[05:13:00] - Core is not present or corrupted.
[05:13:00] - Attempting to download new core...
[05:13:00] + Downloading new core: FahCore_a5.exe
[05:13:01] + 10240 bytes downloaded
[05:13:01] + 20480 bytes downloaded
[05:13:01] + 30720 bytes downloaded

(snip)

[05:13:17] + 2711739 bytes downloaded
[05:13:17] Verifying core Core_a5.fah...
[05:13:17] Signature is VALID
[05:13:17] 
[05:13:17] Trying to unzip core FahCore_a5.exe
[05:13:17] Decompressed FahCore_a5.exe (9326080 bytes) successfully
[05:13:22] + Core successfully engaged
[05:13:28] 
[05:13:28] + Processing work unit
[05:13:28] Core required: FahCore_a5.exe
[05:13:28] Core found.
[05:13:28] Working on queue slot 01 [March 10 05:13:28 UTC]
[05:13:28] + Working ...
[05:13:28] 
[05:13:28] *------------------------------*
[05:13:28] Folding@Home Gromacs SMP Core
[05:13:28] Version 2.27 (Mar 12, 2010)
[05:13:28] 
[05:13:28] Preparing to commence simulation
[05:13:28] - Looking at optimizations...
[05:13:28] - Created dyn
[05:13:28] - Files status OK
[05:13:33] - Expanded 25467667 -> 31941441 (decompressed 125.4 percent)
[05:13:33] Called DecompressByteArray: compressed_data_size=25467667 data_size=31941441, decompressed_data_size=31941441 diff=0
[05:13:33] - Digital signature verified
[05:13:33] 
[05:13:33] Project: 2686 (Run 0, Clone 12, Gen 79)
[05:13:33] 
[05:13:33] Assembly optimizations on if available.
[05:13:33] Entering M.D.
[05:13:39] Mapping NT from 7 to 7 
[05:13:42] Completed 0 out of 250000 steps  (0%)
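If you just want to see whether the core remapped your -smp setting without digging through the whole log, a few lines of Python will pull the relevant entries out. This assumes the v6 console client's default FAHlog.txt in the client directory (C:\Fold\SMP in the log above); adjust the path to match your own setup.

Code:
# Print every "Mapping NT" line from the client log, so you can see at a
# glance whether FahCore_a5 honoured the -smp thread count or remapped it.
log_path = r"C:\Fold\SMP\FAHlog.txt"  # assumed default name/location

with open(log_path, errors="ignore") as log:
    for line in log:
        if "Mapping NT" in line:
            print(line.rstrip())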
 
could be a difference in the architecture of the processors, since you're running primarily LGA771 dual-processor systems, right?
Yes, and you could be right; I just don't have the foggiest. If it's indeed the architecture, it speaks of something very bizarre going on. I only have one Fermi GPU client. Task Manager was indicating approximately 4-5% CPU usage, yet the -bigadv WU this system was processing jumped by over 15 minutes a frame! From a single GPU client?? That's freaking retarded...! Glad the P680x WUs are back... :eek:

They affected mine. Had to change to -smp 7.
Can't do that on my systems. Even without any GPU clients running whatsoever, using the -smp 7 flag is extremely risky and would probably yield very similar PPD to regular SMP, that is IF the client manages to make the preferred deadline at all, which is highly doubtful.
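For what it's worth, the reason dropping to -smp 7 (or letting a GPU client steal cycles) hurts so badly is the quick-return bonus: as I understand it, bonus points scale roughly with sqrt(k * deadline / time_taken), and only if the WU beats the preferred deadline. A rough sketch of that curve is below; the base points, k-factor and deadlines are placeholder numbers, not the real P2684 values.

Code:
import math

def approx_qrb_points(base, k, final_deadline_days, preferred_deadline_days, days_taken):
    """Approximate quick-return bonus: base points only if the preferred
    deadline is missed, otherwise base * sqrt(k * final_deadline / time)."""
    if days_taken > preferred_deadline_days:
        return base
    return base * max(1.0, math.sqrt(k * final_deadline_days / days_taken))

# Placeholder numbers, purely to show how fast the bonus falls off:
base, k, final_dl, preferred_dl = 9000, 25.0, 6.0, 4.0
for days in (2.0, 3.0, 3.9, 4.5):
    pts = approx_qrb_points(base, k, final_dl, preferred_dl, days)
    print(f"{days:>4} days -> {pts:,.0f} points")

So every extra day spent on the same WU costs a large chunk of the bonus, and missing the preferred deadline drops you back to base credit, which is why babysitting TPF on -bigadv matters so much.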
 
Yes, and you could be right; I just don't have the foggiest. If it's indeed the architecture, it speaks of something very bizarre going on. I only have one Fermi GPU client. Task Manager was indicating approximately 4-5% CPU usage, yet the -bigadv WU this system was processing jumped by over 15 minutes a frame! From a single GPU client?? That's freaking retarded...! Glad the P680x WUs are back... :eek:

Do you happen to be running a GTX 460? There is a well-documented issue with DPC latency spikes when using these GF104 GPUs on Intel FSB-based systems. People normally complain about it in games, where it creates stuttering, but this WU could be creating the same conditions, spiking DPC latency and ruining your TPF.
 
You are not alone. I have a 2684 running right now as well as a 6801 on the GPU.



Did the a5 core let you do this? It was my understanding that the number of threads would be changed if the client did not like it. I'm curious if your FAH log has "Mapping NT from 7 to 8" in it.

They said that, but I'm not convinced that it works. I updated all my clients on the day they were released, but had a dual quad-core machine still set for -smp 7. I didn't really check, but it ran through the newer 101xx SMP units and failed on about 20 of them before I realized it. I changed the config to -smp 6 and it was fine after that.
 
Yes, and you could be right; I just don't have the foggiest. If it's indeed the architecture, it speaks of something very bizarre going on. I only have one Fermi GPU client. Task Manager was indicating approximately 4-5% CPU usage, yet the -bigadv WU this system was processing jumped by over 15 minutes a frame! From a single GPU client?? That's freaking retarded...! Glad the P680x WUs are back... :eek:

Can't do that on my systems. Even without any GPU clients running whatsoever, using the -smp 7 flag is extremely risky and would probably yield very similar PPD to regular SMP, that is IF the client manages to make the preferred deadline at all, which is highly doubtful.


probably need to play with the priority levels on the client. but not really a big deal. the non-Fermi GPU3 client used between 1-3% of the CPU and was killing my SMP client as well. if i set the client above idle, within the client or within Windows, it would jump up to 4-6% CPU usage per client.
 
You all are lucky getting -smp 7. I tried -smp 11 and got knocked down to 10 threads.

Code:
Launch directory: C:\FAHSMP
Executable: C:\FAHSMP\FAH6.exe
Arguments: -smp 11 -bigadv -verbosity 9 

[01:34:58] - Ask before connecting: No
[01:34:58] - User name: tjmagneto (Team 33)
[01:34:58] - User ID: nunyabiznuss
[01:34:58] - Machine ID: 1
[01:34:58] 
[01:34:58] Loaded queue successfully.
[01:34:58] 
[01:34:58] - Autosending finished units... [March 23 01:34:58 UTC]
[01:34:58] + Processing work unit
[01:34:58] Trying to send all finished work units
[01:34:58] Core required: FahCore_a5.exe
[01:34:58] + No unsent completed units remaining.
[01:34:58] Core found.
[01:34:58] - Autosend completed
[01:34:58] Working on queue slot 01 [March 23 01:34:58 UTC]
[01:34:58] + Working ...
[01:34:58] - Calling '.\FahCore_a5.exe -dir work/ -nice 19 -suffix 01 -np 11 -checkpoint 15 -verbose -lifeline 6572 -version 634'

[01:34:58] 
[01:34:58] *------------------------------*
[01:34:58] Folding@Home Gromacs SMP Core
[01:34:58] Version 2.27 (Mar 12, 2010)
[01:34:58] 
[01:34:58] Preparing to commence simulation
[01:34:58] - Ensuring status. Please wait.
[01:35:08] - Looking at optimizations...
[01:35:08] - Working with standard loops on this execution.
[01:35:08] - Previous termination of core was improper.
[01:35:08] - Files status OK
[01:35:12] - Expanded 24827899 -> 30791309 (decompressed 124.0 percent)
[01:35:12] Called DecompressByteArray: compressed_data_size=24827899 data_size=30791309, decompressed_data_size=30791309 diff=0
[01:35:12] - Digital signature verified
[01:35:12] 
[01:35:12] Project: 2684 (Run 7, Clone 24, Gen 58)
[01:35:12] 
[01:35:12] Entering M.D.
[01:35:18] Using Gromacs checkpoints
[01:35:19] Mapping NT from 11 to 10
[01:35:24] Resuming from checkpoint
 