WCG Christmas Race 2012

I just remembered WCG points are 7x BOINC points. I guess I can stop waiting for that massive update to show up. :(
 
Yeah... I'm not sure what the huge problem is. It's not that hard to run a script to convert the UD clients' points into WCG/BOINC points and then convert everyone's points to standard BOINC. Adding the UD totals to the current BOINC totals should be a simple task. But at WCG, everything seems to be a time-consuming process.
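The conversion itself really is trivial; assuming a straight 7:1 ratio (as mentioned above), the core of such a script is one line each way:

```python
def wcg_to_boinc(wcg_points):
    # WCG awards roughly 7 points per BOINC cobblestone,
    # so standard BOINC credit is WCG points divided by 7.
    return wcg_points / 7.0

def boinc_to_wcg(boinc_credit):
    # Reverse direction: BOINC credit back to WCG points.
    return boinc_credit * 7.0
```

The hard part for WCG is presumably the database migration around it, not the math.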
 
I forgot to mention that yesterday WCG started BETA testing HCC work on Linux.
 
So are you trying to tell us to start running the Mac client before Apple sues us? Because now that you can fold on a Mac, they will prove that they patented the folding concept first.
 
I just remembered WCG points are 7x BOINC points. I guess I can stop waiting for that massive update to show up. :(
Seems like some pretty massive updates to me! :)

Great work everyone! Maybe with a few more crunchers we could even take 10th! I'll have to see if I can find a couple more cores for the weekend.
 
My home just went down by 2 cores. I had to help a family member with a temporary PC, and I know she won't leave it turned on crunching when she isn't playing on it. However, she plans on having me build a $2k desktop during tax return season that will be left on full-time crunching when not gaming. So, I see this as a long-term investment...lol
 
My home just went down by 2 cores. I had to help a family member with a temporary PC, and I know she won't leave it turned on crunching when she isn't playing on it. However, she plans on having me build a $2k desktop during tax return season that will be left on full-time crunching when not gaming. So, I see this as a long-term investment...lol

Haha, yeah, I'm down four cores for pretty much that same reason, although I'm kinda glad at this point. It's been a very warm December so far and I don't think my room can take much more heat.
 
Thought you could slip under the radar, didn't you Grandpa? Thanks for the help.
 
I have the fan in my AC unit still cycling air from the outside into my office. If it weren't for my wife our heat would never be turned on.
 
I added both of my daughters' computers to the cause. They both won't be back from college until the 21st.
 
Thought you could slip under the radar, didn't you Grandpa? Thanks for the help.
Welcome Grandpa! Thanks for joining the fun!

I added both of my daughters' computers to the cause. They both won't be back from college until the 21st.
Excellent. Maybe these new additions can push us over that 3M PPD mark! I see we moved up to 11th place in the Race. Great work everyone!
 
I am a little curious here: are there actually 2 [H]ardOCP / HardOCP teams on the BOINC projects?
 
There are actually 3 of them:
[H]ardOCP 4 members
[H]ard|OCP 5 members
HardOCP 268 members
 
LOL, so when I thought I was helping the HardOCP team out in the past, I really was not. Woooooppsss :eek:
 
Well...I just borged my brother's Athlon XP 3200. So, that is another core for the cause.
 
Only 14 more days to go and we are in 11th Place. We are behind 10th by 2 Million WCG points. If any more of our team has systems (or partial systems) they can contribute, it would be a great help.

I also just borged my brother's dual-core laptop so that I can stress test it after a few repairs and updates. It won't do a whole lot but is putting up a few points while I have it.
 
Thanks for recruiting a couple more machines, Gil! Wish I could round up a few more, but I'm about at my limit.

Three straight days at 3.22M points is quite an achievement! Thanks for contributing, everyone! In 10 days we have almost matched what we produced during the entire challenge last year! Keep crunching!
 
Also, we are approaching the 1 Billion WCG Points milestone. If we keep the current momentum, we will be there in about a month (~28 days). With 13 more days left in this challenge, we would have to roughly double our output to hit it in time. Too bad; that would have been a great X-mas present this year.
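(Quick back-of-the-envelope math on that, assuming roughly 910M points banked so far, which is my rough guess implied by the ~28 day estimate, at our 3.22M/day rate:)

```python
def days_to_milestone(current_points, milestone, points_per_day):
    # Simple linear projection: days remaining at the current rate.
    return (milestone - current_points) / points_per_day

# Illustrative numbers; current total is an assumption.
days = days_to_milestone(910e6, 1e9, 3.22e6)
print(round(days))  # ~28 days at the current pace
```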
 
My boss just tossed me like 4 or 5 Dells that had the hard drives and RAM pulled from them. They are each dual-core machines. I guess I will be upgrading some family members in the very near future. There are still a lot of Pentium Ds and Celerons floating around out there. If it weren't the holidays, I would have them done over the weekend. So, those borged devices will be extra cores added to the cause. Gotta love the IT world.

The sad part is that these systems were just going to go to the e-cycling place here in town. I scolded him, letting him know there are a lot of people who could use these free upgrades. Keep in mind we do the donation thing for non-profits. However, some systems are still tossed because of cosmetic concerns. Hopefully from now on, I will oversee all discards...lol.

The three systems at my feet right now are an Optiplex GX520 with a 2.8GHz Pentium D dual core, a PowerEdge T100 with a 3GHz Xeon, and an Optiplex 330 with a 2.13GHz Core 2 Duo. The others are already sitting in my truck...lol
 
My boss just tossed me like 4 or 5 Dells that had the hard drives and RAM pulled from them. They are each dual-core machines. I guess I will be upgrading some family members in the very near future. There are still a lot of Pentium Ds and Celerons floating around out there. If it weren't the holidays, I would have them done over the weekend. So, those borged devices will be extra cores added to the cause. Gotta love the IT world.

The sad part is that these systems were just going to go to the e-cycling place here in town. I scolded him, letting him know there are a lot of people who could use these free upgrades. Keep in mind we do the donation thing for non-profits. However, some systems are still tossed because of cosmetic concerns. Hopefully from now on, I will oversee all discards...lol.

The three systems at my feet right now are an Optiplex GX520 with a 2.8GHz Pentium D dual core, a PowerEdge T100 with a 3GHz Xeon, and an Optiplex 330 with a 2.13GHz Core 2 Duo. The others are already sitting in my truck...lol

Haha, nice find, although I question why the RAM was pulled. Was it to upgrade other systems at work?
 
Yeah...most organizations do that on non-donated PCs. I used to pick some up from a local community college, and they did the same thing. The hard drives in this case also held important data that couldn't risk leaving the company.

The other two systems were Optiplex 330's with 2.66GHz dual cores.
 
Well, I have yet to figure out how to track the actual points turned in each day, but I have made it to 1.5 million points, and most of them have gone to your Christmas contest. I only had my main rig running a couple days before the contest. Since the contest started, I added my HTPC and both of my daughters' computers.

edit... This is just a question that went through my head: why is F@H more popular than BOINC? Seems like BOINC has a broader spectrum of projects that you can crunch for. I am not sure what all F@H folds for; I just ran their program.
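(For the daily-points part of my question, I figure a day-over-day diff of the cumulative totals would do it; a rough sketch of the idea:)

```python
def daily_points(snapshots):
    """Given (date, cumulative_total) pairs sorted by date, return
    (date, points_earned_that_day) for each day after the first."""
    return [(d2, p2 - p1)
            for (_, p1), (d2, p2) in zip(snapshots, snapshots[1:])]

# Example with made-up totals:
snaps = [("12/10", 1000), ("12/11", 1500), ("12/12", 2300)]
print(daily_points(snaps))  # [("12/11", 500), ("12/12", 800)]
```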
 
This is just a question that went through my head: why is F@H more popular than BOINC? Seems like BOINC has a broader spectrum of projects that you can crunch for. I am not sure what all F@H folds for; I just ran their program.

Do you mean here at [H] or in general?

Stanford actually tested the BOINC client years ago but decided it wasn't for them. I prefer the BOINC client because it requires less overhead. I can borg systems of non-techies (with permission) and don't have to worry about them. Pretty much set and forget. Use an account manager and you can change most of what you need from one place. Not to mention that there is a wide variety of projects, as you noted. If I decide to support something else tomorrow, there's no real software change. Just attach to the other project and I'm ready to go. (However, that doesn't mean it's perfect. There are still problems at any project that sometimes require tweaks.)
 
Question: can I use the official BOINC client, or do I have to use the IBM one to join the team? I ask because I have no problems getting the official one to run but have some trouble installing the one linked here in Windows 8. Thanks!
 
macaw, that is a great question. You actually can't use WCG's version. (I don't recommend it anyway.) You need to download and run Berkeley's version to run properly on Windows 8. Version 7.0.31 or newer is required. I think the latest test version now is 7.0.42. 7.0.40 added a new feature that WCG is pushing for. However, very few people will know about it unless they do a lot of reading in the forums. The new feature lets you limit how many instances of each app run at a time, regardless of what is in the cache. This is helpful for WCG because CEP2 doesn't play well with some machines.

https://secure.worldcommunitygrid.org/forums/wcg/viewthread_thread,33527
https://secure.worldcommunitygrid.org/forums/wcg/viewthread_thread,34395_offset,0
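For reference, that limiting feature is driven by an app_config.xml file placed in the project's folder under the BOINC data directory. A minimal sketch; the short app name below (cep2) is my assumption, so verify the real one in client_state.xml:

```xml
<!-- app_config.xml: cap how many CEP2 tasks run at once,
     regardless of how many are cached.
     NOTE: "cep2" is an assumed short name; check client_state.xml. -->
<app_config>
  <app>
    <name>cep2</name>
    <max_concurrent>2</max_concurrent>
  </app>
</app_config>
```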
 
Thanks Gilthanis. I have the Berkeley one up and running. How do I join the HardOCP team though? It seems to have a whole different set of teams.
 
Hi macaw. Welcome to the team! Our team name is HardOCP and the ID# is 1411. If you log into the WCG site and click on the MyGrid tab, then the My Team link on the left-hand side, I think there is a search team or join team button. (I think. It's been a while since I've done this!) :)

Glad to have you crunching with us!
 
macaw, I would also recommend joining WUProp http://wuprop.boinc-af.org/ - team name [H]ard|OCP.
That project is a non-CPU-intensive project and will run an extra work unit on top of the WCG ones.

Other projects with NCI apps (you have to log into their accounts and select the non-CPU-intensive ones): FreeHAL https://www.freehal.net/freehal_at_home/ , OProject (ALX app) http://oproject.info/

If you run Macs or an old IBM, or are willing to buy one of their accelerometers: QCN http://qcn.stanford.edu/sensor/

If you are willing to buy one of their detectors: Radioactive@Home http://radioactiveathome.org/boinc/

My i7 runs 8 WCG work units, 1 WUProp, 8 FreeHAL, and 1 OProject ALX app, and that is while my GPUs are running WCG HCC work. If they support other projects, I also run 2 or more GPU work units.
 
macaw (if you are using the rig in your sig), be aware that BOINC doesn't like SLI too much and you will see no gain from using it. If you have trouble running on the GTX 680s, you may want to try without SLI. However, that is up to you, since the rig in your sig mentions Eyefinity. If you do want to maximize those two 680's for the cause, make sure to delve into the world of using an app_info file so you can run multiple work units on each card. You could put out some serious points with those.
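As an aside, besides hand-rolling a full app_info.xml, the newer 7.x clients also accept an app_config.xml in the WCG project folder that can pack multiple work units onto each GPU. A hedged sketch; the hcc1 app name and the 0.5 usage values are assumptions to illustrate two HCC tasks per card, so check client_state.xml for the real short name:

```xml
<!-- app_config.xml: run 2 HCC tasks per GPU by telling BOINC
     each task uses half a GPU and half a CPU core.
     NOTE: "hcc1" is an assumed short name; check client_state.xml. -->
<app_config>
  <app>
    <name>hcc1</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```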
 
Do you mean here at [H] or in general?

Stanford actually tested the BOINC client years ago but decided it wasn't for them. I prefer the BOINC client because it requires less overhead. I can borg systems of non-techies (with permission) and don't have to worry about them. Pretty much set and forget. Use an account manager and you can change most of what you need from one place. Not to mention that there is a wide variety of projects, as you noted. If I decide to support something else tomorrow, there's no real software change. Just attach to the other project and I'm ready to go. (However, that doesn't mean it's perfect. There are still problems at any project that sometimes require tweaks.)

I was meaning here at [H]. I really don't know the number of people using one vs the other worldwide. But if you look at the F@H stats, [H] has 3549 contributors in the last 30 days, whereas for WCG we have 271 current members.
I know I initially signed up for both but was folding for F@H because more people were folding for that one. I figured that they must have had the better projects and support, which is why people were choosing them more. Then, when I really looked into WCG this year to help you out for the Christmas competition, I was amazed at all you could fold for.

I really like the idea of being able to help create better alternative energy and stuff like that. Currently all my computers are crunching for schistosoma, malaria, leishmaniasis, clean energy 2, and clean water, and the GPUs are crunching for conquer cancer.
 
Yeah, you would have to ask each person why they chose the route they did. I think that BOINC as a whole has a lot of great options as well as a few less worthy ones. But choice is good and I'm glad we have people who are willing to represent.

And I would say that WCG has one of the best forums out of all of them. They are a bit slow to adopt the latest and greatest, but they are about being reliable, secure, and keeping systems minimally obtrusive to the users.
 
macaw (if you are using the rig in your sig), be aware that BOINC doesn't like SLI too much and you will see no gain from using it. If you have trouble running on the GTX 680s, you may want to try without SLI. However, that is up to you, since the rig in your sig mentions Eyefinity. If you do want to maximize those two 680's for the cause, make sure to delve into the world of using an app_info file so you can run multiple work units on each card. You could put out some serious points with those.

If I look at the tasks in BOINC manager, it appears both GPUs are being used. In the task list I see two HCC tasks each using a GPU.

12/13/2012 4:01:45 PM | | NVIDIA GPU 0: GeForce GTX 680 (driver version 310.70, CUDA version 5.0, compute capability 3.0, 4096MB, 8378940MB available, 3252 GFLOPS peak)
12/13/2012 4:01:45 PM | | NVIDIA GPU 1: GeForce GTX 680 (driver version 310.70, CUDA version 5.0, compute capability 3.0, 4096MB, 2620MB available, 3252 GFLOPS peak)

In the NVidia control panel, I have SLI set to "Span displays with Surround" and I put "<use_all_gpus>1</use_all_gpus>" in my config file. So, I think it is all being used.

My work machine is pretty loaded, so I'd like to figure out how to use it for BOINC. I haven't done the research to figure out how well BOINC plays with HPC, but it's quite a beefy box with all the blades maxed.
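(For the record, that flag lives in cc_config.xml in the BOINC data directory, wrapped in an options block, and tells the client to use every GPU rather than just the most capable one:)

```xml
<!-- cc_config.xml in the BOINC data directory -->
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```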
 
If I look at the tasks in BOINC manager, it appears both GPUs are being used. In the task list I see two HCC tasks each using a GPU.

12/13/2012 4:01:45 PM | | NVIDIA GPU 0: GeForce GTX 680 (driver version 310.70, CUDA version 5.0, compute capability 3.0, 4096MB, 8378940MB available, 3252 GFLOPS peak)
12/13/2012 4:01:45 PM | | NVIDIA GPU 1: GeForce GTX 680 (driver version 310.70, CUDA version 5.0, compute capability 3.0, 4096MB, 2620MB available, 3252 GFLOPS peak)

In the NVidia control panel, I have SLI set to "Span displays with Surround" and I put "<use_all_gpus>1</use_all_gpus>" in my config file. So, I think it is all being used.

My work machine is pretty loaded, so I'd like to figure out how to use it for BOINC. I haven't done the research to figure out how well BOINC plays with HPC, but it's quite a beefy box with all the blades maxed.

To get your work machine going, make sure you get permission first. There are too many stories of people getting fired for using company resources like this. If you have permission, post some more details and I'm sure we can get you going.

I gotta ask: are you using an app_info file to run multiple work units at a time on each card, and if so, how many are you running?
 
I remember about a decade ago someone getting canned for using a cluster of servers to run distributed.net during off hours. Reading accounts of the incident, I thought the employer was tremendously unfair and ignorant. It's likely there were other circumstances for booting this guy, but it gave me pause: "if I'm ever in charge..."

Well, that day has come, and I have 140 outstanding employees. I've been purposeful in hiring curious people who have a passion for computing that extends beyond a paycheck. Technical stars who put in a hard week's work and delight our clients, yet have a little in reserve to learn a new API or configure something interesting.

Creating that culture even involved an investment in carving out time for curiosity. Google made this practice famous on a bigger scale. I'd hope that everyone who works for me would feel free to try something like this without any finger-wagging. I also know my line of business and clientele work well with this culture, but it wouldn't translate everywhere.

Anyway I'll step off my tiny soap box here :)

It's a Cray CX1 running Windows Server 2012 with the HPC Pack 2012. 8 blades, each with two 4-core Xeons, so 64 cores total.

re: app_info - I'm just reading up on that. I hadn't customized that file and was assuming BOINC would handle it smartly, but it sounds like I need to configure it manually?
 