BOINC

Probably not. Can barely play Minesweeper!!!
 
I have been talking with Travis Desell (admin of multiple projects), and he has decided to take my advice and merge the projects (or possibly just the forums) into one, so that it will be much easier for him to communicate with donors and to keep the projects from appearing dead. He is asking the volunteers for suggestions on how best to do this. In our discussions, I pointed out that people who have goals based on points and such may have issues with the sites being merged. Personally, I think merging them all under one roof is the best route. You can give your input here:

http://volunteer.cs.und.edu/dna/forum_thread.php?id=185#1480
http://volunteer.cs.und.edu/subset_sum/forum_thread.php?id=89#601
 
My rig at the minute has a 5850 and onboard 4250 graphics. Is there a way for me to get both crunching at the same time?
I found this thread which has some info on a similar situation, but the guy there has an issue with units that won't run on the IGP being sent to it anyway, not sure if that is fixed now or what.
It would seem to be worth it as otherwise the IGP is sitting doing nothing, it's basically some "free" RAC for me if I can get it crunching. Just wondering if anyone else has run across a similar situation and if you got it working.
 
The scheduler for each project should be able to send the proper unit to the proper GPU; if there are errors, then it's most likely an application or project server-side problem. So far, Einstein works well, as does SETI. Other than those, I don't know. Sorry I couldn't really answer your question!
 
Captain, I would highly recommend NOT doing that, because the headache more than likely won't be worth it. Now, what I would recommend is possibly running two BOINC clients, each excluding a different card. The problem I see happening with the situations in that thread you linked is that AMD probably doesn't have their software written well enough, or possibly the BOINC software doesn't understand the difference. Have you tried running this setup on the latest builds? That thread was from a year ago.
 
I haven't tried it yet, no, but what you said, that either the drivers or software doesn't understand the difference, makes sense. Running another BOINC client sounds like a better solution, I hadn't thought of that - didn't know it could be done. It does seem worth running if running two clients does work (I see no reason why it shouldn't) as it can run 24/7 and as the IGP doesn't have a display to render, it'll be completely free of any interruptions.
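If you do try the two-client route, a rough sketch of how it's usually done (the data directory path and port number here are made-up examples, and the exact flags may vary by BOINC version, so check your client's --help):

```shell
# Start the normal client as usual, then launch a second instance
# pointed at its own data directory and its own GUI RPC port
# (hypothetical path and port shown):
boinc --allow_multiple_clients --dir /var/lib/boinc2 --gui_rpc_port 31417
```

Each instance keeps its own cc_config.xml, so one can exclude the 5850 and the other the IGP.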
 
I have tried to avoid AMD when possible. :) I have a feeling I *cough Kai cough* will be using some AMD cards in the very near future. Running two clients has been done by some, but I haven't ever tried it. You will most likely have to use a cc_config file to get it to exclude the right GPU.
 
Heh, I usually run Nvidia too, but this is what I have for now. I have a GTX 460 I could swap out, but I've been gaming more with this setup and the 5850 is faster for gaming and most DC too. *Touch wood* it's also been nice and stable so far.
 
Lol Gil, hopefully they run, that's all I'm saying!

I've had no problems with three different AMD cards in one system, nor with two different AMD cards in a 2010 Mac Pro. However, I was only running Einstein with them at the time. And I borked the BIOS of the PC with three in it. :( But it should work!!!! The only time I had problems with multiple, differing types of cards was when I tried both nVidia and AMD in Linux. That was a horrible experience.
 
I'm waiting for someone to make a serious attempt at having all three players in one rig. That person would really be a glutton for punishment, let alone with multiple generations of cards. I ran into issues with DP cards mixed in with SP-only cards before the option of app_config files came out. It is just much easier to keep similar cards together and then shelf the others or put them in their own rigs. If I knew they would run BOINC even part time, I would put some of the older GPUs in donation PCs. But as of now, it is not worth the free upgrade.
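For anyone who hasn't used one, an app_config.xml is a small per-project file dropped into that project's folder under the BOINC data directory. A minimal sketch (the app name "milkyway" and the usage fractions are just example values; the real app name comes from the project's client_state entries):

```xml
<app_config>
   <app>
      <name>milkyway</name>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>0.05</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
```

This lets you tune GPU/CPU usage per app, which is what made mixing DP and SP cards less painful.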
 
This is exactly why I chose to run my 680 in my dedicated old folding rig along side my 4850 - easy to set a project to only Nvidia or ATI without worrying why the other card is erroring out... And yes, it was a pain to set up, and GPUGrid STILL won't run on it due to driver issues, but other than that, it runs great!
 
GPUGrid won't work on the 680? Bummer... I took 4 cards off of GPUGrid lately because of BSOD issues with the SANTI work units. They are telling everyone to downclock their cards to fix it. They claim the manufacturers may have overclocked these cards, or the cards may not be up to the demand the work units are putting on them. Worked great until a few weeks ago. They work great everywhere else too. Since one of the boxes is a borg machine, there won't be any tweaking of that card. And Dagaroth is a complete dick everywhere he posts, so it's hard to get much discussion done without him trying to insult you.
 

I was having the problem a week or so ago. I fixed it by dropping the OC from the card (it still has the factory OC, though, GTX 680 at 1175) and also giving the CPU more voltage. The CPU had previously been stable for over a year.

Anyway guys, I just wanted to say I am pulling the 4P's and putting them back on F@H. My 3 GTX 680's will remain on GPUGrid, along with 2 980X's and 1 2700K running Rosetta@Home :cool:
 
Grandpa, we are glad you are donating in any way you can. Thanks for the HUGE push.
 

Thanks for the help!
 
Thanks for the contributions Grandpa! We always appreciate what you bring to the team!
 
Seems there is an issue with Milkyway, and it's not just me having problems. There are several reports on their forums where people with 5000- and 6000-series AMD cards are getting computation errors within 10 seconds or so on the Milkyway@Home 1.02 app, whereas the Separation units run just fine.
Reports go back 4-6 weeks or so, which would explain why I had major issues running it the other day.
My 5850 is running through Separation tasks quite happily now so I've just excluded the 1.02 app from my list.
 
Asteroids now has a CUDA (nVidia) app! http://asteroidsathome.net/boinc/forum_thread.php?id=233#2227

According to this thread, the GPUs aren't much faster than some CPUs.

At first I was all :D

Then I read that thread and was all :(

Then I took a second look at the developer's replies and now I'm all
:rolleyes:

Seems like they didn't really think about the extra capabilities that GPUs have to offer, and just gave us the equivalent of porting a console game to the PC... Hopefully this will lead to more optimized GPU apps in the future, but for now, it doesn't seem like it will be very good on the ppd/W front.

Gilthanis, any idea if WUProp gives a different category to these? I'm guessing no...
 
I am doubting that they will. skgiven already announced on their page that they had the app. However, it is pretty much the same app from what I gathered from that thread, so not sure.

I will let you know if I see it show up.

And my response was pretty much the same. They were trying to say they were just giving people options. I haven't looked to see what my boxes are doing and such as far as CPU/GPU usage, but I'm thinking my GPU's are probably better off elsewhere.
 
I resurrected a 5770 that I got for £3, it was sold to me as "totally faulty, PC doesn't POST with it in" but in reality it just had a fan that makes a grinding noise, so I've fixed that.

I want to use it to run Collatz while my 5850 runs Milkyway. I set up two lines in my cc_config, as follows:

<exclude_gpu>
    <url>http://milkyway.cs.rpi.edu/milkyway/</url>
    <device_num>1</device_num>
    <type>ATI</type>
</exclude_gpu>

<exclude_gpu>
    <url>http://boinc.thesonntags.com/collatz/</url>
    <device_num>2</device_num>
    <type>ATI</type>
</exclude_gpu>

But, all I'm getting is GPU1 (5850) switching between both and GPU2 (5770) idle. I'm assuming I did this the wrong way, but I can't find any other way to do it.

Edit: I've removed both lines and in the BOINC log, I see:
03/01/2014 17:54:10 | | CAL: ATI GPU 0: ATI Radeon HD 5800/5900 series (Cypress/Hemlock) (CAL version 1.4.1848, 1024MB, 991MB available, 5184 GFLOPS peak)
03/01/2014 17:54:10 | | CAL: ATI GPU 1: (not used) ATI Radeon HD 5700/6750/6770 series (Juniper) (CAL version 1.4.1848, 1024MB, 991MB available, 2880 GFLOPS peak)
03/01/2014 17:54:10 | | OpenCL: AMD/ATI GPU 0: ATI Radeon HD 5800/5900 series (Cypress/Hemlock) (driver version 1348.5 (VM), device version OpenCL 1.2 AMD-APP (1348.5), 1024MB, 991MB available, 5184 GFLOPS peak)
03/01/2014 17:54:10 | | OpenCL: AMD/ATI GPU 1 (ignored by config): ATI Radeon HD 5700/6750/6770 series (Juniper) (driver version 1348.5 (VM), device version OpenCL 1.2 AMD-APP (1348.5), 1024MB, 991MB available, 2880 GFLOPS peak)
03/01/2014 17:54:10 | | OpenCL CPU: AMD Phenom(tm) II X4 850 Processor (OpenCL driver vendor: Advanced Micro Devices, Inc., driver version 1348.5 (sse2), device version OpenCL 1.2 AMD-APP (1348.5))

Seems to be ignoring the 5770. Do I need a dummy plug or anything in it to make it work, by any chance?
 
Did you have this in there somewhere?

<use_all_gpus>1</use_all_gpus>

If you don't tell BOINC to use all gpus, it will try and use the "most capable" only. Please refer to my post here for an example:

http://hardforum.com/showthread.php?t=1729016

Edit: also make sure you put it under the <options> like how my cc_config in that example has it. I have heard that it works best that way.
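Putting those two pieces together, a complete cc_config.xml would look roughly like this (URLs taken from the config posted above; note that BOINC device numbers are 0-based, matching the "GPU 0"/"GPU 1" indices in the event log, so adjust the device_num values to match your own log):

```xml
<cc_config>
   <options>
      <use_all_gpus>1</use_all_gpus>
      <exclude_gpu>
         <url>http://milkyway.cs.rpi.edu/milkyway/</url>
         <device_num>1</device_num>
         <type>ATI</type>
      </exclude_gpu>
      <exclude_gpu>
         <url>http://boinc.thesonntags.com/collatz/</url>
         <device_num>0</device_num>
         <type>ATI</type>
      </exclude_gpu>
   </options>
</cc_config>
```

After editing, use "Read config files" in the BOINC Manager (or restart the client) for the changes to take effect.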
 
I've been doing some digging and I've added that line after seeing it recommended somewhere. Still doing the same thing though, very odd that it says it's ignored by config.

Ah hah...I blew away my cc_config.xml, despite the fact that it said absolutely *nothing* about excluding GPUs there, created a new one with just that line in, and both cards are now crunching Collatz, which is a start. Now to try getting them to crunch different things...well, using the same config I posted above is now working fine, both cards are under load and crunching.

How utterly bizarre!
 
Glad it is working. If you would, post your cc_config in that other thread I linked to so that others may gain knowledge from it. :)

And to answer the above question about a dummy plug, you should NOT need one with your setup. If you had an Intel GPU and wanted to use it along with those, then I have read that the Intel would need either a dummy plug or monitor plugged in. I have also read that if you mix nVidia with AMD, that the AMD's would need to be the primary display so that the nVidia's appear to be coprocessors.
 
Will do :)

I didn't think I would need a dummy plug, it was just that I found it odd that the GPU was showing as disabled and that was all I could think of, being as my config file was definitely not excluding it.

Edit: just checked and your last post in that thread is now exactly how my cc_config file looks, obviously with different URLs for the projects.
 
So I just noticed that my Galaxy S3, which has been running OGR on Yoyo for the past few months, just picked up a Harmonious Trees Android WU... As I had thought this project for Android had ended, I was a bit surprised, so went to check the forum - and found nothing... I'll post updates if I see more pop up!
 
They were talking about it ending back when I was doing my push there. That was months ago; the buzz kinda died and work units kept coming. So, I was a bit confused too and just kinda wrote it off as an "it will end when it ends" thing. Let me know what you find out. :)

I also found out a few other things in regards to the ARM client from Berkeley.
1. If you want to use an account manager, run a development build. The stable version doesn't have the option, but builds since 7.2.17 do.
2. If you are using an Android device with no battery, you need to use nativeBOINC because the current versions from Berkeley don't currently work unless there is a battery.
 
800K at Climate Prediction. Only 200k more to go before I can write that project off entirely. woo hoo :D
 
I still have a couple hundred hours left on a few work units of Climate Prediction, it's great that it computes so much, but I'm trying to get all Cosmology on my CPU threads! Dang it!
 
I typically add CPDN on systems that either are super slow and need that 1 year deadline to get credits or are laptops that are turned off and on quite a bit and may take a while to actually complete anything. The beauty with that project is that it will trickle up partial work as you are crunching it and you will get credit before the entire work unit is finished. Also, the entire work unit doesn't have to finish to be useful.
 
Are there any projects out there specifically targeting MS or at least doing work on something related a la WCG's recently completed HCMD?
 
Honestly, I am not aware of any, but I will keep my eyes open for any.

I think currently your only options for that field are the protein-based sciences, in hopes that they lead to breakthroughs down the road...
 