GPUGrid

Gilthanis

It appears that they will be adding VirtualBox apps

http://www.gpugrid.net/forum_thread.php?id=3838#37524

Hi,

Eagle eyes!

Yes, we've been running vbox apps in-house for a while now, and may start to run them on GPUGRID soon. Right now, it'll only work with Linux hosts, and probably only with VirtualBox 4.3. I'm not planning to send any work units just yet; I will post on the news when I do.

Matt



I am adding this app_config info to the first post so that anyone who comes here will have easy access. It allows two tasks to run on your GPU at a time. You can change the numbers to run more, but it is up to you to decide what maximizes your points on your hardware:

<app_config>
  <app>
    <name>acemdshort</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.3</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Many people don't see an increase in efficiency from running more than one work unit at a time; however, it is up to you to test and decide based on your specific hardware.
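For example (my own illustration, not an official recommendation): gpu_usage is the fraction of a GPU each task reserves, so dropping it to 0.33 would let three tasks share each GPU:

<app_config>
  <app>
    <name>acemdshort</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>0.3</cpu_usage>
    </gpu_versions>
  </app>
</app_config>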

Edit: I created an account called HardOCPtest so that anyone who wanted to test or contribute anonymously could do so.

You can attach a computer to your account without using the BOINC Manager.
To do so, install BOINC, create a file named account_www.gpugrid.net.xml in the BOINC data directory, and set its contents to:
<account>
  <master_url>http://www.gpugrid.net/</master_url>
  <authenticator>111740_8a218e5a7cf06d5f3ffb3fb12418e67c</authenticator>
</account>
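If you aren't sure where the data directory lives, the usual defaults (your install may differ) are /var/lib/boinc-client on Linux and C:\ProgramData\BOINC on Windows. Restart the client after creating the file and the host should attach to the HardOCPtest account.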

 
Yep - annoying. I went back to testing ocores for a little while (come back proteneer!).
 
I was able to get mine to upload (and to get new work) a few minutes ago.
 
I went back to testing ocores for a little while (come back proteneer!).

That was also my plan, but due to Maxwell I had to install the CUDA 6 SDK... Which version of GCC did you use when executing the .run from NVIDIA's site (the one for 13.04)? 4.7.3, or 4.8.2 (the 14.04 default) with the -override compiler flag? (I posted more info on IRC)
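(For anyone else trying this: if memory serves, the .run installers accept an override switch to skip the GCC version check, e.g. something like sudo sh cuda_6.0.37_linux_64.run -override; the exact filename depends on your download, so treat that command as a sketch.)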
 
Got my first badge: Gly
(I know, I know, that was easy) :rolleyes:

But it seems smooth overall on the GTX 780.
 
Wow! The numbers for my 780 Ti haven't been all that great as it is throttling itself due to heat. I need to fix that problem. :(
 
My dual-GPU cards outperform by doing two at once... but I will never make it onto those charts.
 
To my great surprise, my fastest GTX 770 has been awarded #15 for GIANNI_trypbenContacts1MC2, #9 for NOELIA_TRP188 :)
 
Back to the comparison chart (yes, it took me some time to collect data), here are my numbers:
- GTX 770 - average 425,000 PPD - maximum 475,000 PPD - average is better than the chart
- GTX 750 Ti - average 250,000 PPD - maximum 275,000 PPD - average is lower than the chart*
- GT 640 - average 60,000 PPD - maximum 73,000 PPD

This is mostly for August work units running 24/7; September is slightly higher with the new batches.

*I used the original GPUGRID forum to decide whether I should buy a 750 Ti, and found the value reported in the forum post was "the best case" and not representative of average long-term performance. Slight disappointment :)
 
However, 750 Tis are so light on power that I still love them. You can shove them into just about any computer (any PSU) and they will run cool and quiet with good points.

Edit: is your 750Ti OC'ed (or factory OC'ed)?
 
Yes, factory OC'ed @ 1306 MHz.
And I like this card for sure. Just wanted to state a more realistic performance level than the gpugrid forum chart, so potential buyers are correctly advised :)

Changing topic but... anybody ordered a 980/970 ? :)
 
Changes have been made to the scheduling policy to better match the CUDA version to driver capability.

Hi all,

In an attempt to rationalise the rules for assigning WUs to crunchers, I've made some changes to the underlying scheduler program. Here are the new rules:



* If you have driver >= 343.00 and sm >= 2.0, you will get a CUDA 6.5 app
* If you have driver >= 334.21 and < 343.00 and sm >= 2.0, you will get a CUDA 6.0 app
* If you have driver >= 295.30 and < 334.21 and sm >= 2.0 and < 5.0, you'll get a CUDA 4.2 app
* If you have driver >= 295.30 and sm == 1.3, you'll only get CUDA 4.2


Matt

Please note that if you have a driver >= 343.00 on Linux, you will only get acemd short version 8.46, since the CUDA 6.5 version of acemd long has not yet been released for Linux. That will decrease your PPD, though there is some excitement in being on the cutting edge :)
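To see the rules all in one place, here is a rough sketch of the assignment logic as I read it (my own Python illustration, not GPUGRID's actual scheduler code, which compares real driver version strings rather than floats):

# Sketch of the new GPUGRID scheduling rules as posted by Matt.
# Illustration only -- the real server-side scheduler compares
# proper version strings, not floats.
def cuda_app_version(driver, sm):
    """Return the CUDA app version a host should get, given its
    NVIDIA driver version and compute capability (sm)."""
    if driver >= 343.00 and sm >= 2.0:
        return "CUDA 6.5"
    if 334.21 <= driver < 343.00 and sm >= 2.0:
        return "CUDA 6.0"
    if 295.30 <= driver < 334.21 and 2.0 <= sm < 5.0:
        return "CUDA 4.2"
    if driver >= 295.30 and sm == 1.3:
        return "CUDA 4.2"
    return None  # host gets no work

# Example: a GTX 980 (sm 5.2) on driver 343.22 gets the CUDA 6.5 app
print(cuda_app_version(343.22, 5.2))  # -> CUDA 6.5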

Original post: http://www.gpugrid.net/forum_thread.php?id=3874

The current CUDA 6.5 app is more or less a recompile; an optimized version with performance improvements over the CUDA 6.0 app should be released this winter.

I'm doing a bit of work to improve the performance of the code for Maxwell hardware - expect an update before the end of the year.
 
I know this is a bit late for the chart, but I've been averaging 430K-435K on a stock 680 - far above the 380K shown... :)
 
#2 with the NOELIA_SH2 scores @ 2.65 hours :)

This is even with a bunch of rigs with GTX 980s in my rear view mirror. :D
 
Has anyone else noticed heavy I/O while running a CPU MT work unit? My 2P with 32 threads is seeing a ton. Even if I suspend work units, I am still seeing high usage. This could be caused by the work unit not adhering to the BOINC settings, which does pop up from time to time with some projects. Since these are still test work units, that could very well be it. I just want to make sure I am not the only one seeing it.

I can't tell if it is these work units or possibly the RNA work that I had suspended. However, one of them was still using threads and generating heavy I/O.
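(If you want to pin down the culprit yourself, and assuming a Linux host, running iotop -o shows only the processes that are actually doing I/O at that moment.)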
 
Well... it turned out to be the RNA work units causing all of my chaos. I aborted both of them and everything is running smoothly again. First time that has happened.
 
I got this in my Notices in the BOINC Manager on the 19th.

GPUGRID: Important: No new work for pre-Fermi GPUs
Hello,

From today no new work will be scheduled to old pre-Fermi GPUs. This includes Geforce 8800, GTX200 series and GTX9800.


Matt
 
Not too many people must be participating; I've placed in the top 30 on every WU I've completed. My best was #7 in GERARD_CXCL12_adap_LIG1.

Single Asus STRIX GTX 980 @ stock clocks. Might turn it up a bit and see how it does.
 
I think what it is.... is that there aren't a lot of people with 980's.... lol There were a lot that jumped on the 780 band wagon when it was released.

Anyways, congrats. That isn't an easy task.
 
I think what it is.... is that there aren't a lot of people with 980's.... lol There were a lot that jumped on the 780 band wagon when it was released.

Anyways, congrats. That isn't an easy task.


I suppose that makes sense.




I bumped it up about 20 MHz on both GPU/mem, but I don't think I'll catch the multi-card systems.
 
We just launched 400 very long WUs (they will take about 24h on a GTX 780) named VERYLONG_CXCL12_confAna whose results we need as soon as possible (we are in a hurry). They come with a credit+bonus of 400K. Please, if you don't have a good graphics card, reject them. For the brave ones, take it as a challenge and see you on the performance tab ;)
http://www.gpugrid.net/forum_thread.php?id=3988#39575
 
There is an issue with these WUs that will yield an error when results are sent, and a complete loss of credit. The workaround is a manual modification for each WU of the first 400-WU batch. Newer WUs should already be fixed. More details in the original link provided by Gilthanis.
 
There is an issue with these WUs that will yield an error when results are sent, and a complete loss of credit. The workaround is a manual modification for each WU of the first 400-WU batch. Newer WUs should already be fixed. More details in the original link provided by Gilthanis.

Thanks for the heads up! I was able to modify my client_state.xml file before my WU finished, so this one should pass and grant me credit.
 
That very long WU finished and validated, so the manual fix worked. It took just under 29 hours to complete.
 
Tempting; I have a 280X here I could give it a go on, but those stupid things are hot and consume twice as much power.
 
Not so sweet... all the WUs failed immediately after they started. This is obviously an alpha test.
 