World Community Grid

Though WCG is my preferred project, if you are using it to stress test the build you may want to run one of the math projects like PrimeGrid or SRBase instead. They take advantage of the AVX extensions, which pushes the system a bit harder, so if you're OC'ing that would give you the most reliable testing. IMO anyway. Glad to have you back.
 
I'm getting back into things. I'm going to be building a new system soon and will likely be using WCG to bake it in. In the meantime I set up time-of-use scheduling and everything on my old system to get back in the hang of things. I'm in Cali, so I'm only doing this during off-peak time, but it's still something. I used to Fold for the [H]orde, and I figure I'll start with this and see where things go.
Is there anything "cheap" in Cali after adding sales tax and income tax? :D

Glad I installed solar (a small system, though) a few years back, and with PG&E going bankrupt I expect to pay more in the future :(

I love WCG, but I'm currently chasing personal goals and helping our team in Formula BOINC (FB).

Welcome back!
 
Is there anything "cheap" in Cali

No... not at all... I wish I could get some, but unfortunately I'm renting in the Bay Area, unless someone knows how to do temporary solar.

I've installed through BOINC, so I might jump on some of the FB challenges. I saw WCG listed there, so running it should count either way.

Regarding stressing the system, my current rig is a poor 2600K, so I don't want to beat it up too much. The new system will be water-cooled as well, so I'd feel more comfortable pushing that one.

Just to confirm, our WCG team is HardOCP, right? The one I see on FB has the brackets, but that's the one I found when searching for teams.
 
That's correct. The team on WCG is HardOCP; all the others use the brackets.
 
A new beta project was introduced yesterday. It's a climate/weather-related project.

We are starting a beta test for a new research project. Here are the basic details:

  • 2,000 workunits available initially
  • The project will use redundant copies so there will be at least 4,000 total results available in this initial phase
  • Much longer than normal runtime - I would expect the runtime for this initial beta to average 20+ CPU hours
  • Much larger than usual input and output data - The input data sizes for this initial test are about 29 MB compressed and the output data sizes are about 128 MB compressed.

To register for beta testing please log into the website and navigate to My Contribution -> Beta Testing and verify that the "Participate in Beta Testing" checkboxes are checked for the profiles you want participating.

IMPORTANT NOTE - PLEASE READ
I want to set everyone’s expectations appropriately for this beta test. Due to some issues and complexities with the software for this project, our internal development and testing time has been much longer than normal. Due to that, I anticipate a longer beta testing time than usual as well. We will learn a lot from this initial beta test.
Once I have enough data to determine which way it is headed, I will update this thread.

THE FINE PRINT
Since everyone has been looking forward to a new project for a while, I will include some more details for those who are interested.
Many may guess that this is one of the climate projects. I cannot confirm nor deny that, but I will say the research application being used is the Weather Research & Forecasting (WRF) model.
WRF is a very large, mostly Fortran application, and this has been one of the factors contributing to the longer than usual development time. For this project, the only method available to validate results is to run redundant copies and check for binary equivalence. While the WRF application does include restart capabilities, we have run into bugs that are causing slight variations in output after a restart, which means that the results are not binary equivalent. This is one of the issues that has extended the development time.
Additionally, WRF is typically run on large compute clusters, and restarts are not typically used at the granularity we need. For some of the bugs we have found, we have been able to track down the cause and fix it in the code. However, there are still a bug or two that have proven very difficult to squash. We are currently testing a work-around internally that is behaving nicely in our environment. However, the real test will be in our beta environment, to prove whether or not the work-around is sufficient.
For those not familiar with the WRF application, it is used to simulate weather conditions over a region over a defined time period. The work for this project will be broken into small geographical regions, and in the end each region will be simulated for one calendar year. Each individual work unit represents 48 hours of calendar time for this simulation. Once a result has been validated for the 48 hours, the output will be used to build the input for the next 48 hours of runtime. This is similar to some of our other projects, but the good news here is that the work units for the next piece of the simulation will be generated solely by us, so there will be no delay from sending the work units back to the researchers for generation.
For this first beta test, all 2,000 workunits will cover only the initial 48-hour simulation period.

As always, thanks so much to all of the beta testers.
armstrdj
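
Side note on the "binary equivalence" validation mentioned in the fine print: it just means the redundant copies of a work unit have to produce byte-identical output files. A rough sketch of the idea in Python is below; the file names are made up, and this is obviously an illustration, not WCG's actual validator.

Code:
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large WRF outputs don't sit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def results_match(copy_a: Path, copy_b: Path) -> bool:
    """Two redundant results validate only if their output is binary-equivalent."""
    return file_digest(copy_a) == file_digest(copy_b)

# Hypothetical usage:
# results_match(Path("result_0/output.zip"), Path("result_1/output.zip"))

This is also why the restart bug they describe matters: if a restart changes even a few bytes of the output, the copies no longer match and neither result validates.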
 
Cool! I set one of my boxes to beta test, so hopefully it can pull some. It just said there aren't any available, but let's wait and see what happens.
 
Guess I had beta testing on already and one of the rigs pulled down 8 beta WUs. Takes a long time on a 2P lol
 
Looks like I got 7 BETA work units on my G34 box. Good thing it didn't get shut down yet...lol
 
Well guys, I had it cranked to 11 for the Pentathlon, but it's too damned hot now...

I'm out till fall, fuck cancer, see you in October-ish
 
OpenZika project is temporarily paused. New tasks are hard to come by, so I'm taking a break.

Current project stats for OZ.
 
According to the WCG admin here, there will be some OZ work soon, and it will be the last batch of work.

For those badge hunting, this will be a great opportunity.

We just finished our monthly call with the research team.

1. They are getting new batches ready for us--we should have them in the next two weeks if not sooner.
2. The researchers at UCSD are finishing up a round of testing on compounds with the potential to treat Zika. Their next step is to try to replicate their results.
3. The big news: they've reviewed their progress over the life of the project and realized that not only do they have a huge amount of data to analyze, but they've also almost completed all the work on World Community Grid that they set out to do.

So the next round of batches will be the last. We don't have an exact ending date, since it will depend on how quickly the work is processed, but we think that the project's work on World Community Grid will end in late September or early October.

We have a project update from them that we're reviewing and will publish tomorrow with more details.


Edit: today's update. This project is ending after the release of approx 20,000 batches of work units.
 
Just a heads up to the badge hunters. WCG's Open Zika is about to end. There was an announcement of another batch added that "might" extend it to the end of the month.
 
When is the next Pentathlon or whatever? It's getting colder; I actually fired up some SETI to warm up the house the other day.
 
Just a heads up to the badge hunters. WCG's Open Zika is about to end. There was an announcement of another batch added that "might" extend it to the end of the month.

Pretty sure OZ is done. Haven't had a task for it since early yesterday.
 
I will have to check as I was loading up my cache on a couple systems.

Edit: Yup still getting work.

ZIKA_000459424_x5wz3_ZIKV_NS5pol_s3_0861_0-- Cooter-Lenovo In Progress 10/7/19 02:46:59 10/17/19 02:46:59 0.00 / 0.00 0.0 / 0.0
ZIKA_000459439_x5wz3_ZIKV_NS5pol_s3_0754_0-- Coleslaw In Progress 10/7/19 02:27:52 10/17/19 02:27:52 0.00 / 0.00 0.0 / 0.0
ZIKA_000459442_x5wz3_ZIKV_NS5pol_s3_0060_0-- Coleslaw In Progress 10/7/19 02:27:52 10/17/19 02:27:52 0.00 / 0.00 0.0 / 0.0
ZIKA_000459439_x5wz3_ZIKV_NS5pol_s3_0619_0-- Coleslaw In Progress 10/7/19 02:27:52 10/17/19 02:27:52 0.00 / 0.00 0.0 / 0.0
ZIKA_000459442_x5wz3_ZIKV_NS5pol_s3_0093_0-- Coleslaw In Progress 10/7/19 02:27:52 10/17/19 02:27:52 0.00 / 0.00 0.0 / 0.0
ZIKA_000459439_x5wz3_ZIKV_NS5pol_s3_0807_0-- Coleslaw In Progress 10/7/19 02:27:52 10/17/19 02:27:52 0.00 / 0.00 0.0 / 0.0
 
Yeah, at this point I generally only visit threads I've posted in before, so if you really want participation, posting it here is probably a good idea.

I agree, but at the same time we typically open a new thread for these challenges, as that is the easiest way to search for historical data. That, and I don't want to have to remember to update several posts with the same info when we have had the challenges page stickied each year for several years now. Also, when people are posting in multiple threads, it is hard to keep the conversations in one organized place. For example, the Pentathlon is multiple projects. I don't want to talk about it in the WCG thread, because then the WCG thread starts to get off topic... Talking about just the WCG portion would be fine, but then others might miss conversation important to the challenge. It is much easier to just check the general forum for anything new.
 
I am still getting some OZ work, but I agree it's drying up :( Still hoping I can get my 10y badge yet.
 
The latest word is that the final batch number is 465016. Currently I'm seeing work from batches 459XXX (e.g. ZIKA_000459707_x5kqr_NS5met_s3_0140_0). They are not distributed completely linearly, so keep trying even if you see batches 465XXX.

I imagine we'll see temporary outages between now and the end for a variety of reasons. Just keep an eye on the batch number and keep cranking away.
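
If you want to check where your cache sits relative to that final batch, the batch number is right there in the result name. A quick, throwaway way to pull it out (the name format is just assumed from the examples in this thread):

Code:
import re

def batch_number(result_name: str) -> int:
    """Extract the batch number, e.g. ZIKA_000459707_x5kqr_NS5met_s3_0140_0 -> 459707."""
    m = re.match(r"ZIKA_(\d+)_", result_name)
    if not m:
        raise ValueError(f"unexpected result name: {result_name}")
    return int(m.group(1))

print(batch_number("ZIKA_000459707_x5kqr_NS5met_s3_0140_0"))  # 459707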

Best of luck hitting your goals!
 
Not to mention that they only load so many into the queue at a time. So one user loading up could drain that queue, and then if you aren't hammering the server, you may not request again when the queue refills. So just keep at it to get your fill.
 
https://www.worldcommunitygrid.org/forums/wcg/viewthread_thread,41865

We are conducting a beta test of the application last tested on September 25, 2019. This test includes a very minor application update related to the compression used to reduce the output size. We have made some minor modifications to increase the compression and decrease the output file size for some of the output files. The application version with this change is 7.26.

I will initially load a single batch of 100 work units and will add 4 more batches later.

Thanks,
armstrdj
 
World Community Grid: Planned Maintenance on Thursday, October 10, 2019
We are updating the operating system on our servers on Thursday, October 10, beginning at 15:00 UTC.
10/9/2019 7:41:39 PM
 
Posted yesterday

There is an issue with results not validating for this new beta test. We have stopped sending out new work while we investigate. The issue will require a new application build so it will take a bit to get things running again.

Thanks,
armstrdj
 
Posted on the 14th about OpenZika

We had our monthly call with the research team today.

1. The researchers confirmed that they've sent the last of the work units to us. Based on our tech team's estimates, there's about two weeks' worth of work to be processed.
2. One of the researchers will be speaking at a conference in the US in early December.
3. They have several papers in the works--mostly in the writing stage.
4. They're making plans for analyzing their data--the project produced a huge amount of information.


Work unit status (as of October 14):

Available for Download: 0
In Progress: 6,807 batches (7,753,614 work units)
Completed: 457,567 batches - 6,155 batches in the last 30 days - average of 205 per day
 
Posted today

The application has been updated and beta has started again. The new application version is 7.27. The issue was that some of the output archives included a file that should not have been included. Unfortunately this means that results run under version 7.26 will not validate and will all end up invalid, but since this is beta everyone will get credit for those.
Thanks,
armstrdj
 
For Current Volunteers: Advance Information on Our Newest Project
Hello Everyone,

Very, very soon we'll be officially launching our newest project, and you can start seeing work units today. It's been a long wait and has taken a lot of behind-the-scenes work to get this one ready, so we're even more excited than usual. Before the official launch, we wanted to give you all a heads-up about the technical background and requirements for the project.

Background

The Africa Rainfall Project utilizes the Weather Research & Forecasting (WRF) model for the research application. WRF is a very large, mostly Fortran application and the simulations being run require more resources than are typically used for a World Community Grid project. For this reason, volunteers will not be automatically opted into this project.

Two Important Points (Please Read):

More Info on WRF, Work Units, and Checkpoints

For those not familiar with the WRF application, it is used to simulate weather conditions over a region over a defined time period. The work for this project will be broken into small geographical regions, and in the end each region will be simulated for one calendar year.

Each individual work unit represents 48 hours of calendar time for this simulation. Once a result has been validated for the 48 hours, the output will be used to build the input for the next 48 hours of runtime. The average CPU runtime for an individual workunit in beta testing was about 24 hours, which is also much longer than for a typical project.

Due to limitations in the application, we are only able to take 8 checkpoints during each workunit. Depending on the speed of the host, this could result in long times between checkpoints. For this reason, volunteers who choose to run this project may want to consider using the 'Leave applications in memory while suspended' option in their device profile.

Please ask any questions in this thread, and as always be sure to read previous questions and answers before posting. Thanks to everyone for your support.

Thanks,
armstrdj

Research Project: Africa Rainfall Project
Memory: 1 GB
Available Disk Space: 1.5 GB
Operating Systems: Windows, Mac, Linux
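
One practical note on the "Leave applications in memory while suspended" option mentioned above: the normal place to set it is your device profile on the WCG website, but if you manage clients locally you can also put it in BOINC's global_prefs_override.xml. A rough sketch of doing that from Python is below; the data-directory path is just an assumption for a typical Linux install, and this overwrites any existing override file, so adjust as needed.

Code:
from pathlib import Path

# Assumed BOINC data directory for a typical Linux install; on Windows it is
# usually C:\ProgramData\BOINC, on macOS /Library/Application Support/BOINC Data.
BOINC_DATA_DIR = Path("/var/lib/boinc-client")

# Keep suspended tasks in memory so ARP work units don't lose progress between
# the (at most 8) checkpoints. NOTE: this replaces any existing override file.
OVERRIDE = """<global_preferences>
    <leave_apps_in_memory>1</leave_apps_in_memory>
</global_preferences>
"""

(BOINC_DATA_DIR / "global_prefs_override.xml").write_text(OVERRIDE)
print("Wrote override; have the client re-read it (e.g. boinccmd --read_global_prefs_override).")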
 
I completed an ARP work unit, yay!

It ran for 1 day and 5 hours, resulting in 3,560 points (WCG points).
However, points will be a roller coaster for a while as it levels out.
It ran on an Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz [Family 6 Model 37 Stepping 5].
 
The ARP run time is nuts for me, but the points seem to balance out at least.
 
We are running a small, Windows-only beta test for HSTB. There are no code changes in the new version. The issue being addressed is that currently the 32-bit Windows version is being distributed as both the 32-bit and 64-bit application. For this beta, the 64-bit version as well as the 32-bit version of the Windows application should be distributed properly. The updated app version is 7.30.
Thanks,
armstrdj
 
Yeah, possibly lol

Spread the word if you can. I don't have accounts on many other forums like you do :)
 