World Community Grid

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Anyone wanting to sign up can do so through the BOINC client, through BOINCStats.com using their account manager, or directly at their site: World Community Grid. That link will also sign you up for our team during the setup process, which should help limit the accidental wrong-team issue we have suffered.

Apparently two BETA tests went out last night.

Testing a new project app. : World Community Grid - View Thread - New BETA test - Sept 10, 2014 [ Issues Thread ]
I got 4 of these thus far.

Testing FAAH on Android : World Community Grid - View Thread - FightAIDS@Home - VINA Android Beta Test - Sept 10, 2014 [ Issues Thread ]
I got 5 of these thus far.


Edit: I created an account called HardOCPtest so that anyone who wanted to test or contribute anonymously could do so.

To attach a computer to your account without using the BOINC Manager, install BOINC, create a file named account_www.worldcommunitygrid.org.xml in the BOINC data directory, and set its contents to:
<account>
<master_url>www.worldcommunitygrid.org</master_url>
<authenticator>943687_dd7504d11eb9dd987306ba050e0dfa35</authenticator>
</account>




 
Last edited:

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Another beta went out today for the new project. Hopefully we find out what it is soon.
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Here are a couple handy charts for points and run times at WCG.


 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
I too am interested in whether this is different from SIMAP. I know SIMAP is closing its doors to BOINC at the end of the year, but I would certainly hope this isn't repeat research...
 

AgrFan

[H]ard DCOTM October 2012
Joined
Sep 29, 2007
Messages
531
Pushing for 20 years of runtime on Mapping Cancer Markers before it ends in August 2015. Going to keep a few cores on the new project to get it to Sapphire. I hope it isn't repeat research either.
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Any idea how it is scoring yet compared to the others? The charts above haven't been updated with it yet...
 

AgrFan

[H]ard DCOTM October 2012
Joined
Sep 29, 2007
Messages
531
I'm seeing 4 hour runtimes on my Pentium E5200 boxes (XP/Ubuntu). Estimating 20-22 credits/hour. Should know in a couple days after work gets validated.
 
Last edited:

AgrFan

[H]ard DCOTM October 2012
Joined
Sep 29, 2007
Messages
531
Had a couple of the new units validate this morning. They completed in 4 hours and were granted 88 credits. 22 credits/hour.
 
Last edited:

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Thanks, but anyone I may have lured in the past doesn't count, and neither does anyone who signs up without using the referral link. So, no badge for me. :(
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
I got this in an email today.

To celebrate our first decade of discovery and power the next one, we invite you to take part in a community-wide competition to recruit as many new volunteers as possible to World Community Grid.

There are some great prizes up for grabs. The 20 people who recruit the highest number of new volunteers by November 16 will win a limited-edition World Community Grid prize. The top 3 will also receive special additional awards.

You can find my link above if you want to help me earn the badges that also come with it.
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Thanks for the heads up. I hadn't noticed anything earlier. Getting the VM's up and going now. ;p Looks like I have 14 at the moment.
 

EXT64

DCOTM x4
Joined
Mar 27, 2013
Messages
598
Nice - yeah, this was the first I saw of them (and I had just done a forced update), so I think they are fairly 'fresh'.

Edit: do you use VMs because there is a per client limit? Or for some other reason?
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
There is a per-core/thread limit, but I use them for BETA hunting:
1. Because of the per-core/thread limit, VMs allow me to pull more work units at a time per system. This can also be worked around by telling BOINC you have more cores than you really do with a cc_config flag.

2. Multiple VMs mean my system is requesting work more often without having to wait as long for server backoff. WCG requires you to wait a set amount of time before forcing another update. By having the VMs running with the maximum available CPU cores, I am more likely to get work, increasing my chances of requesting work while there is work to pull.

If by chance a VM gets its maximum fill of work units, I will set it back to NNW (no new work) or reduce the core count so the system isn't as bogged down. However, while requesting work, I have them hammering away. I also tend to have a mix of OSes trying to pull work using the VMs, because sometimes there is work for one OS but not another due to how the feeder assigns work.
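For reference, the cc_config trick mentioned in point 1 is BOINC's <ncpus> override. A minimal cc_config.xml sketch (placed in the BOINC data directory and picked up via "Read config files" or a client restart; the core count here is just an example):

```xml
<cc_config>
  <options>
    <!-- Report 16 logical CPUs to project schedulers even if the host has fewer -->
    <ncpus>16</ncpus>
  </options>
</cc_config>
```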
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
WCG does it again. Gotta love real world results rather than just another paper being published or data being collected and maintained.

https://secure.worldcommunitygrid.org/about_us/viewNewsArticle.do?articleId=397

Decade of Discovery: A new drug lead to combat dengue fever
By: Dr. Stan Watowich, PhD
University of Texas Medical Branch (UTMB) in Galveston, Texas
10 Nov 2014

Summary
For week five of our decade of discovery celebrations we're looking back at the Discovering Dengue Drugs - Together project, which helped researchers at the University of Texas Medical Branch at Galveston search for drugs to help combat dengue - a debilitating tropical disease that threatens 40% of the world's population. Thanks to World Community Grid volunteers, researchers have identified a drug lead that has the potential to stop the virus in its tracks.
 

metallicafan

[H]ard|DCer of the Month - May 2010
Joined
Mar 30, 2005
Messages
2,201
Also, we are signed up for the Christmas Race, which runs from 12/4 - 12/24. Looks like we are still ramped up good from the last challenge so keep it rolling!
:)
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
LAIM stands for "leave applications in memory." It is a setting in the BOINC client: in version 7+, click Tools > Computing preferences > Disk and memory usage tab > check the box "Leave applications in memory while suspended".

If you don't have LAIM turned on, every time the client suspends, it loses any work done since the last checkpoint. If the computer is used often, you are essentially wasting processing time. However, if the system is heavily in need of memory, leaving applications in memory means the system has to page everything out to virtual memory, which can take longer... so it really comes down to preference. I typically don't have any issues with leaving them in memory.
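On a headless box, the same switch can be set in the local preferences override file, global_prefs_override.xml, in the BOINC data directory. A sketch (the client re-reads it via "Read local prefs file" or a restart):

```xml
<global_preferences>
  <!-- Keep suspended tasks resident so no work since the last checkpoint is lost -->
  <leave_apps_in_memory>1</leave_apps_in_memory>
</global_preferences>
```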
 

fastgeek

[H]ard|DCOTM x4 aka "That Company"
Joined
Jun 6, 2000
Messages
6,508
Ah, OK, thanks for the explanation. :) They all have that setting enabled; but for the most part they don't do anything else and pretty much all have 256GB RAM each. :p

Am a bit bummed to not have more Outsmarting Ebola Together WU's. Per my results pages I've had a grand total of six WUs... not sure I can keep up with that heavy load! ;) Oh well, they'll come when they come.
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Yeah... I didn't keep up with the politics, but I heard they had a technical issue and may be launching some more BETAs to try to address it. So, keep your eyes peeled. In the meantime, there are plenty of other causes to support. :D
 

Jathanis

[H]ard|DCer of the Month - Feb. 2013
Joined
Apr 22, 2008
Messages
985
Am a bit bummed to not have more Outsmarting Ebola Together WU's. Per my results pages I've had a grand total of six WUs... not sure I can keep up with that heavy load! ;) Oh well, they'll come when they come.

Glad to see I'm not the only one... Came here to see what was up with not getting a single WU in over 36 hours. Then, following the link above to the WCG forums and looking some more, I found this:

Please note, this project is going to be running slower than our normal applications. So there will be periods during the day that no work can be downloaded for the application. So badge hunting in the early stages of this project will be difficult.

Thanks,
-Uplinger
So I guess patience is going to be important on this one! :D

edit: updates and whining about no work here if interested: http://www.worldcommunitygrid.org/forums/wcg/viewthread_thread,37507_offset,0
 
Last edited:

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200

RFGuy_KCCO

DCOTM x4, [H]ard|DCer of the Year 2019
Joined
Sep 23, 2006
Messages
908
I figured out that running CEP tasks slows down Collatz processing on my AMD GPUs considerably (anywhere from 50%-100% slower!), while it only marginally slows down my Nvidia GPUs (maybe 5%-10% slower) running the same Collatz Large WUs. I had recently started focusing on CEP and was running anywhere from 1-7 WUs at a time on my machines. All the CPU tasks ran fine and I didn't notice any slowdowns in other projects on these same AMD GPUs, so it is some interaction between Collatz, CEP, and the AMD drivers. Since Collatz performance is more important to me, I have stopped CEP work for now and am accepting work from all the other WCG projects instead.
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Are you reserving a full CPU core for each GPU? How much RAM is in the systems experiencing this behavior? I would also like to know whether the BOINC data directory is stored on an SSD, a traditional spinner (assuming it's not RAID), a RAMdisk, or some other medium.

Also, have you tried just running one CEP2 work unit to see if it has the same behavior...
 

RFGuy_KCCO

DCOTM x4, [H]ard|DCer of the Year 2019
Joined
Sep 23, 2006
Messages
908
Are you reserving a full CPU core for each GPU? How much RAM is in the systems experiencing this behavior? I would also like to know whether the BOINC data directory is stored on an SSD, a traditional spinner (assuming it's not RAID), a RAMdisk, or some other medium.

Also, have you tried just running one CEP2 work unit to see if it has the same behavior...

Answers to your questions, in order:

1. Yes, always. I use an app_config.xml to reserve one CPU per GPU for Collatz.
2. Most of my machines have 8GB of RAM, with one having 16GB (see my signature for details).
3. All but one of my machines uses an SSD. The one machine utilizing a spinner (Orthanc) has an Nvidia GPU which isn't really impacted when CEP runs. That spinner is a 450GB 10K WD Raptor drive, so it is a fast drive and shouldn't be a limiting factor.
4. Yes, I have tried running one at a time and that seems to work okay. The slowdown in Collatz seems to be proportionate to the number of CEP tasks running; the more tasks running, the slower Collatz runs on my AMD GPU's. I know CEP stresses the I/O's, so I am guessing this is the cause of the slowdown.
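For anyone following along, the one-full-core-per-GPU reservation from answer 1 is typically an app_config.xml in the Collatz project directory. A sketch; the app name below is an assumption and should be checked against the <app> names in client_state.xml:

```xml
<app_config>
  <app>
    <name>collatz_large</name>  <!-- assumed name; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>  <!-- one task per GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- reserve a full CPU core per task -->
    </gpu_versions>
  </app>
</app_config>
```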
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
I'm guessing it is the I/Os as well...but I wouldn't think that the SSD's would have trouble with it. I would also try limiting the number of CPU cores available to BOINC by changing it in the client. I honestly am not sure if doing it via the app_config will actually prevent other CPU work from trying to access them. I would start by making the change in the client and seeing if that makes a difference. How many total work units are running on CPU's when your AMDs are churning out Collatz work? Does the client actually reflect a reduced number of CPU work units? And if you limit the number of CPU cores within the client, does your app_config reduce the number even more?

The only time I use the app_config like that is if I'm trying to run multiple work units on each card. That way I can tell it how much to designate to each work unit essentially. If I am running stock settings, I typically just reduce the number of cores BOINC is allowed to use and it still feeds the GPU's like they had reserved cores...
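Running multiple work units per card, as described above, is usually done with a fractional gpu_usage. A sketch, with the app name again an assumption to verify against client_state.xml:

```xml
<app_config>
  <app>
    <name>collatz_large</name>  <!-- assumed name; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- two tasks share each GPU -->
      <cpu_usage>0.5</cpu_usage>  <!-- half a core budgeted per task -->
    </gpu_versions>
  </app>
</app_config>
```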

I'm just curious is all. :)
 

RFGuy_KCCO

DCOTM x4, [H]ard|DCer of the Year 2019
Joined
Sep 23, 2006
Messages
908
I'm guessing it is the I/Os as well...but I wouldn't think that the SSD's would have trouble with it. I would also try limiting the number of CPU cores available to BOINC by changing it in the client. I honestly am not sure if doing it via the app_config will actually prevent other CPU work from trying to access them. I would start by making the change in the client and seeing if that makes a difference. How many total work units are running on CPU's when your AMDs are churning out Collatz work? Does the client actually reflect a reduced number of CPU work units? And if you limit the number of CPU cores within the client, does your app_config reduce the number even more?

The only time I use the app_config like that is if I'm trying to run multiple work units on each card. That way I can tell it how much to designate to each work unit essentially. If I am running stock settings, I typically just reduce the number of cores BOINC is allowed to use and it still feeds the GPU's like they had reserved cores...

I'm just curious is all. :)

I have done lots of experimenting to find the right combo of CPU/GPU work which produces the most PPD on my machines, and I do mean LOTS of experimenting. I am an RF Engineer by training and trade, so tweaking and experimenting to produce the best performance is in my blood. :D

That being said, I use a combo of limiting processor usage in Preferences, as well as app_config.xml files for all of the GPU projects I run. On all of my Hyper-Threading-capable machines (all but two of my crunchers), I set processor usage in Prefs to 99%. This frees one "core" for GPU usage. I then use app_config.xml to restrict core usage further, based on the needs of the individual projects.

Collatz on AMD GPUs is a processor hog and needs a whole core per WU/GPU (I only run one Collatz Large WU per GPU), so I set CPU usage to "1" per WU using app_config. This does work and does reduce the number of running CPU WUs on all of my machines by the expected amounts. I do it a little differently on my 4-core machines, but the end result is the same.

I do all of this hoop jumping because I run many different projects on all of my machines at any given time, instead of optimizing and running only one or two at a time. I try to achieve 100% processor usage across all cores at all times with the combined CPU and GPU work. The app_config files help me achieve this goal and ensure I am not wasting any CPU cycles sitting idle. It has taken a long time of observing and tweaking to get the right combination, but it works well for me.

Also note that this slowdown in Collatz occurred with no changes to either the software or hardware configs on any of my machines, so something else changed - my switch to running lots of CEP tasks is the only change I have made. Once I stopped running them (though I see I still got sent some CEP tasks, despite deselecting that project in my WCG settings), my Collatz speeds have gone back to normal.

To answer your specific questions:

1) Depends on the machine and GPU count, but on Dol-Gulder for example, there would be 5 CPU tasks running along with 3 GPU tasks.

2) Yes, the client shows reduced numbers of CPU WU's running that match how I have set up app_config.

3) Yes, that is exactly how it works.

I use the excellent program "BoincTasks" by eFMer to monitor all of my machines remotely. Here is a screen shot of the currently running tasks on my cruncher "Moria." Keep in mind I set processor usage in the client to 99%, so that limits this machine to only 7 CPU tasks at one time. I then use app_config to reserve 1 CPU "core" per Collatz WU. With two Collatz tasks running, that leaves 5 cores for CPU tasks and that is exactly what is running.

 
Last edited:

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
As for getting CEP2 work units after you deselected it: did you deselect it for a specific profile or for all of them? (I don't know if you have set any others up...) Also, do you have the option to send work from other WCG projects when no work is available from your selected apps checked? That might explain it, depending on how many other projects you are attached to. Also, if your client requested work before it had checked in since the change, it may have requested work before getting the new preferences. Keep an eye on it and see if you continue to get more.

Also, if you want to keep a cache of CEP2 but only run one (or some other designated number) at a time without using all of your cores, you can restrict that in app_config as well. WCG's options only restrict how many work units you pull at one time, whereas app_config can tell your client how many to run at any given time. It is handy in some cases.
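The run-at-a-time cap mentioned here is app_config's <max_concurrent> option. A sketch, assuming WCG's CEP2 app is named "cep2" in client_state.xml (verify before use):

```xml
<app_config>
  <app>
    <name>cep2</name>  <!-- assumed WCG app name; check client_state.xml -->
    <!-- Cache as many tasks as WCG will send, but run only one at a time -->
    <max_concurrent>1</max_concurrent>
  </app>
</app_config>
```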

But the above info you give is nice to know. Glad you have done the homework. :D
 