BOINC configurations and How To's

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,718
Since there aren't many BOINC-related threads and how-to discussions, I figured I would start one.

To run more than one GPU at a time in BOINC:

1. You need to create an .xml file using Notepad or another text editor.
2. Name the new file cc_config.xml
3. In the file you need to add:
<cc_config>
<options>
<use_all_gpus>1</use_all_gpus>
</options>
</cc_config>
4. Save this file to your BOINC data directory. In Vista/7 it is in a hidden folder named C:\ProgramData\BOINC\ In Windows XP it is normally in C:\Documents and Settings\All Users\Application Data\BOINC\
5. Restart the BOINC client so that it reads the file if it was already running.

To run GPU work in BOINC

1. Make sure BOINC is NOT installed to run as a service. BOINC can't use GPUs if it is.
2. Make sure you have up-to-date drivers installed.
- In the case of Nvidia GPUs, you'll need an up-to-date driver.

- In the case of ATI/AMD GPUs, you'll need an up-to-date APP driver.
In the case that your GPU cannot support the APP driver, for whatever reason, you'll need to install the ATI Stream SDK for Windows and Linux, since it contains specific libraries that are needed. You will also require Catalyst 10.4 or above. The SDK page will tell you which Catalyst version is required for the latest SDK.

So far:
- SDK v2.1 needed 10.4
- SDK v2.2 needed 10.7
- SDK v2.3 needed 10.9
- SDK v2.4 needed 11.3/11.4
- SDK v2.5 needed 11.7

You will need to install this SDK yourself, as AMD does not yet allow projects to distribute the needed libraries on their own.

Macintosh OS X 10.6 has these libraries built in.
 
If you want all users on a PC to be able to control/access to BOINC:

1. During the installation of the BOINC software, there is an option for Protected Application Mode. This will install BOINC as a service and will start automatically when the computer starts regardless of who is logged in. However, this option will not let you do any GPU work if that is an option you want as well.

2. Otherwise, you will need to choose All users can control BOINC option during installation.

3. If you did not choose option 2, then you would need to add each user to the boinc_users group within Windows.

Both options 2 and 3 allow GPU work if you have the option.
Also, both options 2 and 3 mean a user has to be logged in for BOINC to run. So, if everyone logs off...the PC will sit idle.
 
If BOINC has issues getting through your firewall, then add this http_1_0 line to your cc_config.xml and restart BOINC.

<cc_config>
<options>
<use_all_gpus>1</use_all_gpus>
<http_1_0>1</http_1_0>
<report_results_immediately>1</report_results_immediately>
</options>
</cc_config>
 
How to pause BOINC if CPU usage is too high:
Version 7.x.x clients
Under Advance View => Tools => Computing Preferences
Processor Usage tab => "While processor usage is less than" box (default is 25)
Change this number to the percentage of CPU usage by non-BOINC apps at which BOINC should pause. Setting it to 0 disables the check and allows BOINC to run regardless of other apps' usage.
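If you would rather not click through the GUI on every machine, the same preference can also be set by hand. To the best of my knowledge it lives in global_prefs_override.xml in the BOINC data directory; the tag name below is my understanding of the preference, so double-check it against your client's documentation:

```xml
<!-- global_prefs_override.xml, placed in the BOINC data directory -->
<!-- Suspend computing while non-BOINC CPU usage is above 25% -->
<global_preferences>
<suspend_cpu_usage>25</suspend_cpu_usage>
</global_preferences>
```

Restart the client (or tell it to re-read local prefs) for the change to take effect.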
 
How to pause BOINC if certain programs are running. For example... World of Warcraft
Version 7 clients
Advanced view => Tools => Computing Preferences
Exclusive Applications tab => Add => select the executable file for the program you want to exclude.
Repeat the above steps for each app you want to exclude.

Keep in mind to close out of the app when done or BOINC will stay paused.
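For those managing headless boxes, my understanding is that the same exclusion can be done without the GUI by adding an <exclusive_app> option to cc_config.xml (the executable name below is just an example):

```xml
<!-- cc_config.xml: pause all BOINC work while Wow.exe is running -->
<cc_config>
<options>
<exclusive_app>Wow.exe</exclusive_app>
</options>
</cc_config>
```

As with the other cc_config changes in this thread, restart the BOINC client so it reads the file.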
 
TIP: If you have a rather large cache size (say 9 days) and the deadline is approaching for a lot of the tasks, your system will start designating tasks as "high priority". A couple problems can arise (but not always) from the high priority situation.

1. You risk not completing work on time because some WU's will pause so others can try to finish. The end result is that you may have several "panicked" work units that don't complete by the deadline.

2. If you have LAIM (leave application in memory) checked, each of the paused work units will sit in memory (whether it be RAM or Virtual Memory) until a system restart or the work unit is finished. If there are a lot of work units pausing like this due to a large cache size, your system can become sluggish.

The solution is to not overfill your cache. Keep it small to begin with and then ramp it up. It takes a few completed tasks for BOINC to learn how quickly it can complete work. At projects like WCG that have multiple apps, it is easy to confuse BOINC because it has to keep re-adjusting for the different sizes.

A suggestion: If you are filling a cache with several days of work, try to focus on one type of app at a project. At WCG for example, run HCC tasks but not FAH or one of the many others. BOINC will then adjust its estimates based on the times for HCC and the cache will be better managed. BOINC is designed to manage based on projects, not sub-projects: WCG is considered a project, whereas DSFL is a sub-project at WCG.
 
Here is an example of a cc_config.xml for a system that has 2 nVidia GPUs but wants to exclude device 1, let device 0 run, and report results immediately. The file tells BOINC to use all GPUs (because BOINC by default only uses the best GPU), ignore device 1, and report results immediately.

<cc_config>
<options>
<use_all_gpus>1</use_all_gpus>
<ignore_nvidia_dev>1</ignore_nvidia_dev>
<report_results_immediately>1</report_results_immediately>
</options>
</cc_config>
 
If you need to temporarily pause BOINC while you are working and don't want to forget to resume it, simply right click on the BOINC icon in the taskbar and select "snooze". If you just want to snooze GPU processing and you have GPU processing enabled, then choose "snooze GPU" and this will let CPU work continue crunching while GPU is paused. This snooze feature only lasts for an hour before BOINC will start it up again. When BOINC starts again, you would have to re-snooze if you wanted it paused again. I recommend using the above mentioned option for pausing BOINC when certain apps are open if you use the system for streaming videos.
 
Many people ask if they can run AMD/ATI cards alongside nVidia cards in the same machine. This is possible, but strongly discouraged. It is just too much of a pain to set up and keep happy, especially if you update drivers. However, here is how I was told to do it. I did not do it myself because I don't see the need when I can just use two AMD or two nVidia cards. Perhaps when I get a capable Intel GPU (yes, they are now getting into the game) I will try mixing it.

AMD card needs to be in slot 0 (from what I read, this is a must). You must also use it as your primary display and should have the monitor attached to it. Install the Catalyst package of your choice; I'm not sure of the exact versions that are confirmed working.

nVidia card needs to have ONLY the drivers installed and nothing else. Do NOT attach a dummy plug or a monitor to it. This way it is seen as a co-processor and not as a display. (That is how I understood it)

Make sure you have a cc_config.xml file setup as instructed above telling BOINC to use all GPU's.
 
Many people have had problems with Aero in Vista/7 causing issues with GPU work units. It is best practice to disable Aero in these cases. If you experience failed work units, try this first.
 
If you are running Windows 8, you will need version 7.0.28 or newer. I recommend the latest build. You will also need to install it under a local user account (not a MS/Live account) in order to use the GPU. You can also install it as a service if you don't want to use the GPU.
 
Since Intel GPUs are still early on in the OpenCL DC world, I figured I would give them a mention here. From what I have read in various posts, if you have an Intel GPU in the same system as other GPUs, you will need to have the monitor attached to the Intel GPU for BOINC to detect it. I do not have any Intel GPUs to test this with, but figured it may save some people a few headaches. I have not heard of anyone trying to run Intel, AMD, and nVidia all in one system yet. I would assume that to be a total nightmare, but would be interested to hear the results if someone actually does attempt it.

I also believe they need 7.0.40 and above, but my memory is flaky right now. Either way, I recommend using the latest build if you are going to use Intel GPU's.

Also, the only projects I am aware of with Intel GPU capable apps are Collatz, SETI (possibly BETA), and Einstein (possibly Albert).

You may also need to download their OpenCL SDK. I believe it can be found here: http://software.intel.com/en-us/vcsource/tools/opencl-sdk
 
Helpful tip for GPU users: Don't mix single precision and double precision cards in the same system unless you take the time to set up a cc_config.xml or an app_config.xml, or unless you are only contributing to a project that uses only single precision. If you have both types in the same system without special configuration at a project that has both single and double precision apps, you may download double precision work which then tries and fails to run on the single precision card.
 
Use a cc_config.xml file like this:

<cc_config>
<options>
<use_all_gpus>1</use_all_gpus>
<exclude_gpu>
<url>http://boinc.fzk.de/poem/</url>
<device_num>0</device_num>
</exclude_gpu>
<exclude_gpu>
<url>http://moowrap.net//</url>
<device_num>1</device_num>
</exclude_gpu>
</options>
</cc_config>

The file above excludes GPU zero from POEM and GPU one from Moo. You can use any project's URL, which you can get from its home page in case it isn't shown in the list of projects.
 
app_config.xml files are similar to cc_config.xml files only they apply to specific projects. app_config files are also meant to replace app_info.xml files. They are much more versatile and don't require as much overhead and updating. Some projects support both and some projects are eliminating app_info.xml altogether. WCG will be eliminating the use of app_info files.

Here is an example of one for POEM that runs 2 work units on the GPU and reserves one CPU core to feed them.

<app_config>
<app>
<name>poemcl</name>
<gpu_versions>
<gpu_usage>0.50</gpu_usage>
<cpu_usage>1.0</cpu_usage>
</gpu_versions>
</app>
</app_config>

Here is an example for SETI (and I believe it will work for SETI BETA too). It allows 2 work units to run on the GPU. However, one app needs only 0.06 of a CPU per task while the other app needs a full core to feed it:

<app_config>
<app>
<name>setiathome_v7</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.06</cpu_usage>
</gpu_versions>
</app>
<app>
<name>astropulse_v6</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>1.00</cpu_usage>
</gpu_versions>
</app>
</app_config>

app_config.xml files are placed in the individual project's data directory.
 
Here is an app_config example for RNA to run just one of the VM app work units at a time.

<app_config>
<app>
<name>cmsearch3</name>
<max_concurrent>1</max_concurrent>
</app>
</app_config>
 
This cc_config is an example for when you have two or more GPUs in a system but you don't want one of your cards running a specific app. I used PrimeGrid as the example using AMD GPUs. N should be changed to the device number you want excluded.


<cc_config>
<options>
...
<exclude_gpu>
<url>http://www.primegrid.com/</url>
<device_num>N</device_num>
<type>ATI</type>
<app>genefer</app>
</exclude_gpu>
<exclude_gpu>
<url>http://www.primegrid.com/</url>
<device_num>N</device_num>
<type>ATI</type>
<app>genefer_wr</app>
</exclude_gpu>
...
</options>
...
</cc_config>


The general formula is below:
<exclude_gpu>
<url>project_URL</url>
[<device_num>N</device_num>]
[<type>NVIDIA|ATI|intel_gpu</type>]
[<app>appname</app>]
</exclude_gpu>
 
For the heavy hitters that are slowly moving over to BOINC: I have recently read about people with systems that have more than 32 cores/threads having issues getting more than 32 work units. There are two things to try if you run into this issue.

1. Make sure you are running the 64bit client instead of the 32bit one. Then verify your preferences at the project itself in case there is an option there to limit the number of work units.

2. You can try adding the following to a cc_config.xml file.

<cc_config>
<options>
<ncpus>48</ncpus>
</options>
</cc_config>

48 is just an example for a 48 core machine. Change this number to however many cores you have.
 
If you want to limit the number of work units that run at a time from a specific project, you can create an app_config with the following parameters:

<app_config>
<app>
<name>name</name>
<max_concurrent>N</max_concurrent>
</app>
</app_config>

name is obviously the app's name and N is the number of work units of that specific app allowed to run at a given time.

An example of why to use this: say you want to cache up CEP2 work units, but your system lacks the resources or can't handle the stress of more than one work unit at a time. You can use this app_config to limit the number that run while still having more than one work unit in your cache. WCG's settings on their website let you limit the number of work units, but that also limits how many you can download at a given time. CEP2 is known for heavy disk I/O. An example for running 1 RNA work unit was posted above.
 
Sometimes the latest and greatest cards are not yet supported by the BOINC code. In these cases, you will probably have to go to the project forums and see if there is a fix or tips on how to get it working.

For example: The R9 290 sometimes won't pull work from Milkyway at the moment. However, by following the info in this thread: http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=3517#61479 you can work out the workaround using an app_info file and a direct software download. Other projects may require the same workaround. It may even be exclusive to certain versions of the BOINC client.
 
http://boinc.thesonntags.com/collatz/forum_thread.php?id=1009#16503

Here is some info on how to optimize the Collatz GPU apps.

Each Collatz 4.07 application is distributed with an empty config file. The config file has the same name as the executable but with the extension ".config".

There are a number of parameters that can be altered to improve speed or video response or to aid in solving issues. They are:

verbose=[0|1]
A value of 1 causes more information about the GPU, OpenCL version, etc. to be written to the log file. If enabled, this should be the first line of the config file so that it will report the other settings in the log file.

items_per_kernel=[10..22]
The number is the power of two 256-bit numbers (e.g. 2^N) that will be calculated per kernel call. Setting this number higher places a larger load on the GPU. Setting the number too high WILL cause the driver to crash and the application to hang. The default is 14, or 2^14, or 16384 items.

kernels_per_reduction=[2..9]
The number (2^N once again) of kernels to run before doing a reduction. The default is 8, or 2^8 = 256. A lower number can improve video response. A larger number may result in a higher GPU load. Too high a number will result in CPU as well as GPU utilization.

threads=[5..10]
This contains the number of work groups to run in parallel. Higher is not necessarily faster. This number is device dependent. If set too high, the application will automatically reduce it to a value compatible with the device.
Most AMD GPUs allow up to 256 (a setting of 8). NVidia GPUs may allow 512 or even 1024 (a setting of 9 or 10). OpenCL requires a minimum of 32 (a setting of 5) according to the Khronos specifications.

build_options=[string containing any optional OpenCL build options]
This was added strictly for debugging in order to be able to use "-cl-opt-disable -Werror". If the OpenCL application crashes within 1-2 seconds of starting, you may want to use "build_options=-cl-opt-disable -Werror" and see if that fixes the problem.

sleep=[1..1000]
This controls the number of milliseconds that the application goes into a sleep state while waiting for the asynchronous kernel calls to complete. The default is 1. Setting this higher (e.g. 2-5) will result in better video response but will slow down the application considerably.

The config file will be renamed to collatz.config when it is copied to the BOINC slot folder when an application starts running. Exiting BOINC and editing the version in the project folder will not change the settings of the applications in progress as their config is taken from the slot folder.

A sample collatz.config file looks like:

verbose=1
items_per_kernel=20
kernels_per_reduction=9
threads=8
sleep=1
build_options=-Werror


Since the work units vary somewhat in the number of total steps they produce, I would suggest that you run several and take the average runtime to determine whether one set of values in the config works better than another set.

Note: The values in the sample above work quite well on my HD 6970 and HD 7970 without making either too sluggish.
 
There will be a new option for app_config files in BOINC v7.4.9 and newer:

<app_config>
<project_max_concurrent>1</project_max_concurrent>
</app_config>

This will basically set the project to only run 1 work unit at a time from that project. So, if you only wanted one work unit from SETI while dedicating the rest of your resources elsewhere, you could do so without having to tweak the project preferences. This is especially handy if you need more than the traditional 3 (4 if you count the default) profile settings that most projects have. People with farms or borgs will probably appreciate this the most.
 
Changes made to the cc_config.xml after you update to BOINC 7.4.36

<cc_config>
<options>
<coproc>
<type>miner_asic</type>
<count>#</count>
<non_gpu/>
</coproc>
</options>
</cc_config>

One nice thing I found: before 7.4.36, suspending BOINC with this <non_gpu/> setting would reset the task and start it over.
But with 7.4.36 and <non_gpu/> in the cc_config.xml, the task continues from where it left off.
http://www.bitcoinutopia.net/bitcoinutopia/forum_thread.php?id=710#7578

# refers to the number of work units you want to run at a time, not how many are cached.
This config change is nice so that your ASICs will continue running in the event that you suspend your GPUs.
 
More and more users are having GPU issues under Linux. One of the causes is that BOINC initializes the GPU work units before Linux has gotten around to loading the drivers. One way to get around this is to add a line to your cc_config.xml file (assuming you have created one):

<start_delay>nseconds</start_delay>
Specify a number of seconds to delay running applications after client startup.

Example of a 30 second delay


<cc_config>
<options>
<start_delay>30</start_delay>
</options>
</cc_config>
 
Since some projects have both VirtualBox and non-VirtualBox tasks, people may not want to pull VirtualBox work even on a system that has it installed. If that is the case, you need to add this line to your cc_config.xml file:

<dont_use_vbox>0|1</dont_use_vbox>

As usual, you only put the 0 or the 1 depending on whether you want it turned on or off.
0 = off, meaning the flag is not used, thus letting vbox work units come through.
1 = on, meaning vbox work units are blocked from coming through.

Note, this was new as of BOINC v. 7.5 and thus older clients cannot use it.
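Put together, a minimal cc_config.xml that blocks vbox work units would look like the sketch below (it can sit alongside the other options shown earlier in this thread):

```xml
<!-- cc_config.xml: refuse VirtualBox work units -->
<cc_config>
<options>
<dont_use_vbox>1</dont_use_vbox>
</options>
</cc_config>
```

Restart the BOINC client after saving so it picks up the change.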
 
Found here: How can the BOINC client allocate customized CPU/Memory resources to specific applications

Some users might only want to give a certain ratio of their CPU/Memory to a specific application. Take the ATLAS@home multiple core application ATLAS_MCORE, for example: if you want this app to use less than the CPU/Memory that you allow the BOINC client to use overall, you can do it by creating a configuration file app_config.xml in your project directory:


You can limit the MultiCore-App by using an app_config.xml.

Below is an example to limit ATLAS_MCORE to use only 4 Cores:


<app_config>
<app_version>
<app_name>ATLAS_MCORE</app_name>
<avg_ncpus>4.000000</avg_ncpus>
<plan_class>vbox_64_mt_mcore</plan_class>
<cmdline>--memory_size_mb 5300</cmdline>
</app_version>
</app_config>


You should change these two lines to your needs:

<avg_ncpus>4.000000</avg_ncpus>
<cmdline>--memory_size_mb 5300</cmdline>


Memory usage calculated by the ATLAS_MCORE app follows this formula:

memory = 1300 + (1000 * NumberOfCores)

so it is 5300 MB for 4 cores.


Thanks to Yeti for giving this recipe.
 