World Community Grid

fastgeek, not purposefully, but I think I used a different port out of convenience because I had routed the new cable differently. The drive remains ineffectual through soft reboots (reset switch) but revives after a hard reboot.

The software is GSmartControl (or something like that); it should be available in most Linux distros. I'd use a Windows tool, but neither the drive nor the PC it's in has Windows installed.
 
Swapped the Corsair for a Samsung 250GB 970 EVO Plus NVMe drive, and I'm back in business.

Edit: Here's the SMART data for the NVMe drive... it looks like anything more detailed than this isn't supported on this drive under Linux.
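(If anyone wants to pull the same info from a terminal instead of GSmartControl, smartmontools can do it; the device node below is just an example and may differ on your system.)
Code:
sudo smartctl -a /dev/nvme0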
Code:
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.8.5-arch1-1] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       Samsung SSD 970 EVO Plus 250GB
Serial Number:                      [REDACTED]
Firmware Version:                   2B2QEXM7
PCI Vendor/Subsystem ID:            0x144d
IEEE OUI Identifier:                0x002538
Total NVM Capacity:                 250,059,350,016 [250 GB]
Unallocated NVM Capacity:           0
Controller ID:                      4
Number of Namespaces:               1
Namespace 1 Size/Capacity:          250,059,350,016 [250 GB]
Namespace 1 Utilization:            38,080,303,104 [38.0 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            002538 55019d5782
Local Time is:                      Sun Sep  6 16:06:41 2020 CDT
Firmware Updates (0x16):            3 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Maximum Data Transfer Size:         512 Pages
Warning  Comp. Temp. Threshold:     85 Celsius
Critical Comp. Temp. Threshold:     85 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
0 +     7.80W       -        -    0  0  0  0        0       0
1 +     6.00W       -        -    1  1  1  1        0       0
2 +     3.40W       -        -    2  2  2  2        0       0
3 -   0.0700W       -        -    3  3  3  3      210    1200
4 -   0.0100W       -        -    4  4  4  4     2000    8000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED


Has anyone set processor affinity on Linux, or do you just let it be? I tried pinning processes and their threads to cores 0,4; 1,5; 2,6; 3,7... but any effect is non-obvious in the short time since I did it. Well, other than processor usage looking more uniform across the cores.
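In case anyone else wants to play with this, here's roughly how I'd pin things from the shell with taskset; the PID and core pairs below are placeholders, so adjust to your own processes:
Code:
# show the current affinity of a process
taskset -cp 12345
# pin PID 12345 to cores 0 and 4
taskset -cp 0,4 12345
# pin each of its threads individually
ps -L -o tid= -p 12345 | xargs -n1 taskset -cp 0,4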
 
The only SSD I have ever had die was an OG 64GB OCZ Vertex. I think the controller died; it dropped out a few times here and there, and then one day it went dark.
Similar story. The only SSD of mine to die was a 120GB OCZ, but it was the previous model, and they sent me a Vertex as a warranty replacement. It was the controller that went kaput. The Vertex became the boot drive in my old main PC for over 6 years before I swapped it for a 250GB Sammy. It still works fine with very little wear but is relegated to the spares pile now.
 
Still waiting and waiting and waiting for GPU apps for OP :whistle:

Someone with a 3080 might be itching to test it out, lol.

17 Sep 2020
Summary
The researchers and the World Community Grid tech team are continuing their work to get the project working on GPU.

Background
OpenPandemics - COVID-19 was created to help accelerate the search for COVID-19 treatments. You can learn more about the details of the work on the research team's website.
GPU version of OpenPandemics
Both the research team and World Community Grid tech team are continuing to make progress on porting the software that powers OpenPandemics to GPU.
The researchers are working on performance improvements for an OpenCL version. Meanwhile, World Community Grid has submitted the code for IBM's Open Source review and a security review. We don't currently know exactly when the IBM reviews will be done.
AutoDock Suite at 30
The research team recently published a paper on the history of AutoDock, which is the software that powers OpenPandemics, FightAIDS@Home, and other projects that have searched for potential treatments against various diseases. You can read the paper here.
Current status of work units
  • Available for download: 3,452 batches
  • In progress: 2,259 batches (18,949,527 work units)
  • Completed: 9,479 batches
    2,991 batches in the last 30 days
    (Average of 99.7 batches per day)
  • Estimated backlog: 34.6 days
 
Got a Dell Optiplex 990 with an i7-2600 running four SCC threads that is hitting 70C on a stock cooler. I can't run 8 threads without the fans kicking up. The CPU paste was reapplied when I got the machine off eBay several years ago. It's always had high temps. AutoDock Vina projects (SCC, FAH2) typically generate more heat than the other projects.

I'm running four SCC threads on a Dell Inspiron 660 with an i5-3330 and only seeing 55-60C temps.

I'm guessing this is normal - Ivy Bridge runs cooler than Sandy Bridge due to the die shrink from 32nm to 22nm.

What's everyone seeing for CPU temps running four SCC threads?

Is the i7-2600 EOL at this point for WCG and should I consider upgrading?

Not interested in GPU crunching ...
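For the Linux boxes in this thread, lm-sensors is a quick way to compare (rough sketch; the package and sensor names vary by distro and board):
Code:
sudo sensors-detect    # answer the prompts once
watch -n 2 sensors     # core/package temps, refreshed every 2 seconds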
 
Unless you are looking to increase core/thread count, I wouldn't worry too much about upgrading. You would be spending a lot of money to save pennies a day in efficiency. I recently picked up a few dual-socket 2011 boxes off of EXT64 for a steal of a deal. Each box has 32 threads. These days, I am phasing out boxes with fewer than 12 threads.
 
Unless you are looking to increase core/thread count, I wouldn't worry too much about upgrading.

I'd like to run all 8 threads without all the fan noise. Only running 6 cores on WCG now. I've been getting rid of old hardware and my production is way down. I just retired a C2D box last week. SandyBridge is almost 10 years old. Might be time to put it up for sale on CL. Just need to decide on a replacement. Waiting to see what Zen 3 looks like in a couple weeks.
 
Unless you are looking to increase core/thread count, I wouldn't worry too much about upgrading. You would be spending a lot of money to save pennies a day in efficiency. I recently picked up a few dual-socket 2011 boxes off of EXT64 for a steal of a deal. Each box has 32 threads. These days, I am phasing out boxes with fewer than 12 threads.
Well, you don't have to spend lots of money, as you could replace all the really old gear with Raspberry Pis to dramatically cut power usage. Depending on the project, I've found 2x Pi 4s to be roughly equal to a Core 2 Quad 9650 at roughly 140 watts less power draw.

The downside is having to manage extra Linux hosts - not as much of an issue if you like to stick with the same few projects, but a PITA if you like to swap between projects.
 
As someone who has used several cell phones for crunching in the past, along with a lot of other hardware, I can say managing a bunch of micro devices is not a fun time. I love what you can do with ARM, but until a player comes along and drops an affordable server with hundreds of cores in it, I will pass. Even AMD's A-series was a joke. Hopefully, Nvidia buys out ARM and does something cool with it. However, I'm very doubtful of them doing anything but lining their pockets. DC'ing is at a point where, if you want to contribute big, you need big rigs. The days of "every little bit counts" are about gone. Most projects cannot wait weeks or months to get work units back. Even WCG sub-projects all come in batches that the researchers often either analyze while new batches are running or analyze before releasing new batches. They don't necessarily need them within a few days like GPUGrid likes to do, but they also don't want work units out there for weeks on end.

SETI had that luxury where they didn't need results back any time soon. PrimeGrid could be that way, but they understand users would be pissed if they had to wait months for their work unit to be validated against a super slow yet highly efficient host. That is why they cater to the highest-grade systems out there for speed. There really does come a time when perfectly good hardware just isn't worth using anymore. The Pis have come a long way, but even the Threadrippers now can be efficient enough to match what ARM can do. And they have a lot more uses with a lot less management.
 
I'd like to run all 8 threads without all the fan noise. Only running 6 cores on WCG now. I've been getting rid of old hardware and my production is way down. I just retired a C2D box last week. SandyBridge is almost 10 years old. Might be time to put it up for sale on CL. Just need to decide on a replacement. Waiting to see what Zen 3 looks like in a couple weeks.
I've sold a few 990s during this pandemic that had i5s in them. They typically sell for between $75 and $100 in my area without a discrete video card. However, I feel that is mostly because people still really don't know a lot about computers. With everyone dealing with remote learning, there was a rush on laptops as well, until the schools finally started offering them up. I probably sold a dozen refurbed laptops this year easy, and they weren't all laptops I would normally have sold to anyone, due to age. But people are desperate for something "cheap" for their kids to destroy.

I don't know what kind of budget and life expectancy you have in mind but there are still a lot of beasts out there to be had for really good prices. Efficiency is great when one has the luxury to afford it. I would certainly avoid all pre-Ryzen AMD offerings without hesitation unless it was "free". Even the G34 setups are really becoming inefficient enough to not power on these days. Mine sits cold waiting for either a local buyer or a big challenge. I think long term, the Ryzen 3's will certainly be the better investment for DC needs. But that will come at a premium up front cost. I only have one rig that uses DDR4. I can honestly say that I have no plans to buy any other DDR4 rigs at this time. There is literally no "need". I have DDR3 coming out my ears in 2 and 4GB sticks. The rigs I have compute Facebook just as fast...lol. I will probably skip anything with DDR4 and go with something that has DDR5 when that is available or just keep buying "outdated" as it truly is "good enough"... for me.
 
As someone who has used several cell phones for crunching in the past, along with a lot of other hardware, I can say managing a bunch of micro devices is not a fun time. I love what you can do with ARM, but until a player comes along and drops an affordable server with hundreds of cores in it, I will pass. Even AMD's A-series was a joke. Hopefully, Nvidia buys out ARM and does something cool with it. However, I'm very doubtful of them doing anything but lining their pockets. DC'ing is at a point where, if you want to contribute big, you need big rigs. The days of "every little bit counts" are about gone. Most projects cannot wait weeks or months to get work units back. Even WCG sub-projects all come in batches that the researchers often either analyze while new batches are running or analyze before releasing new batches. They don't necessarily need them within a few days like GPUGrid likes to do, but they also don't want work units out there for weeks on end.
How much management is there really, though? These Pis are just little Linux boxes. The uptime on my first Pi 3, which I set up for WCG OP, is 66 days. The uptime on my Pi 2 running Pi-hole is 691 days, though it hasn't been running BOINC that whole time. They really are pretty much set it and forget it unless you regularly want to change projects - and if you really want to change projects, set up Kubernetes and run containers specific to each project, redeploying as desired.

As for performance, if a project is OK with Core 2 Quads or even Sandy Bridge, it should be OK with Pis. Each of my Pi 4s is averaging around 10 WCG OP WUs per day, so the WUs aren't taking days to process.
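If anyone wants to try the container route mentioned above, the official boinc/client Docker image is one way to go; the RPC password and account key below are placeholders, and it's worth checking that the published image covers your architecture, so treat this as a sketch:
Code:
docker run -d --name boinc --network=host \
  -e BOINC_GUI_RPC_PASSWORD="changeme" \
  -e BOINC_CMD_LINE_OPTIONS="--allow_remote_gui_rpc" \
  boinc/client
# attach it to WCG with your own account key
docker exec boinc boinccmd --passwd changeme \
  --project_attach https://www.worldcommunitygrid.org/ YOUR_ACCOUNT_KEY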
 
Well... WCG welcomes everything because it does no optimizations and wants all the free resources it can get. It isn't whether the project is OK with it but whether it is worth the effort of the end user to invest in that path. They (WCG) only recently got Pi support. Mobile devices need more management because of how often they throttle and their poor thermal design for things like this. I know Pis have a better cooling design, but when you are talking 4 cores per rig and the extremely low output, you really do have a mess on your hands. I mean, to get a reasonable cluster for all of this you will have wires galore to manage. How many Pis are you willing to pick up and manage to be equivalent? Even with different x86 rigs, you get to a point where you start to miss a few on the list when there are problems.

This is why I'm interested in something more along these lines - https://www.techradar.com/news/arm-wants-to-obliterate-intel-and-amd-with-gigantic-192-core-cpu . If they can make that work on BOINC and make it a reasonable price, then maybe we will have something. However, I have a feeling these are more like GPU cores rather than general purpose ones. And that would take a while to get projects willing to support them.
 
So, I realized today I'm only running two SCC threads on the Ivy Bridge box. I enabled 4 SCC threads and the temps are pretty close to the Sandy Bridge box. The Inspiron 660 is running 5 degrees cooler than the Optiplex 990 according to CPU Temp. Probably due to the die shrink.

I'm leaning toward selling the Optiplex 990 and replacing it with a Skylake or newer Dell machine. Need to do a clean install of Windows 10 Pro before I can put it up for sale.

Thanks for the responses!
 
Let me know what model you go with. Dell is my preferred vendor and I like getting feedback on their systems.
 
Sold the Optiplex 990 today on CL. My production is going to be way down for awhile. Looking at a cheap Dell Inspiron or a Dell Optiplex refurb. Budget is $400. Any suggestions? Would be used for basic Windows stuff, network storage and WCG.
 
Sold the Optiplex 990 today on CL. My production is going to be way down for awhile. Looking at a cheap Dell Inspiron or a Dell Optiplex refurb. Budget is $400. Any suggestions? Would be used for basic Windows stuff, network storage and WCG.
I have an Optiplex 5060 with an i5 8500 and it has worked great.
 
Let me know what you think of it after a few months of use. I'm always interested in quirks and such of the Dells.
 
Got the new Dell Inspiron running last night. Just a few quirks with it.
  1. The new Inspirons are really small now. The only fan is the CPU fan, vented out the back. Temps go sky high at 100% load with turbo boost enabled. I had to disable turbo boost in the power settings to get reasonable temps (see the powercfg sketch below).
  2. Single 8GB stick of DDR4 memory so it's running single channel.
  3. Optical drive is a flimsy laptop drive.
  4. PSU is proprietary and can't be replaced with a standard unit.
  5. Low-pitched oscillating hum coming from the front of the case. I think it's the Seagate Barracuda 1TB drive. It's mounted vertically to the front panel underneath the optical drive. I'll need to troubleshoot that soon. It's next to my WFH desk and the hum is pretty noticeable.
Overall, it's a decent machine for really basic stuff. Looking back, I'd probably get the i3-10100 instead. Dell is already sending catalogs in the mail. They have an Inspiron machine (i3/4GB/1TB) doorbuster for $299 starting 11/26 at 6pm ET. They also have an XPS machine (i3/8GB/1TB) doorbuster for $399 starting 11/27 at 11am ET. You can't build them this cheap.
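For what it's worth, the turbo boost tweak from point 1 can also be done from an elevated command prompt rather than the power plan GUI; capping the maximum processor state at 99% is the usual way to keep the CPU out of turbo, and these powercfg aliases should work on Windows 10 (treat it as a sketch):
Code:
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 99
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 99
powercfg /setactive SCHEME_CURRENT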

It's dedicated to SCC right now. Elapsed times are similar to my i5-3330 Inspiron. Temps are much better. Running 50% load on both boxes (6+2 threads).
 
Yeah... I've seen that chip in various Dell models on sale right now. Pretty nice prices on those setups. I would certainly get another 8GB stick to run dual channel on that thing.
 
I just wanted to add this in here in case it helps others later.

If anyone wants to run multiple GPU work units on a single card to increase utilization, you can create an app_config.xml file with the following parameters. Just adjust to your liking.


<app_config>
  <app>
    <name>opng</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.20</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
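For anyone wondering what those numbers actually control (this is standard BOINC app_config.xml behavior): <gpu_usage> is the fraction of a GPU each opng task reserves and <cpu_usage> is the fraction of a CPU core budgeted per GPU task, so for example

<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.25</cpu_usage>

lets the scheduler run two tasks per card. The file lives in the World Community Grid project folder under your BOINC data directory, and you apply it with Options > Read config files in the advanced view (or by restarting the client).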
 
huh

I still haven't picked up any GPU work...and there still isn't any advanced view in linux sooooooo...they get what they get until they support my OS
 
Check the log - does it say it is trying to get GPU work? I find BOINC preferences can be a confusing web at times when trying to track down which level is blocking your work. WCG doesn't help by having a non-standard website.
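If the manager won't show you the event log, boinccmd from a terminal will dump the same messages (assuming the stock client; you may need to run it from the BOINC data directory so it can read the RPC password file):
Code:
boinccmd --get_messages | tail -n 50
boinccmd --get_project_status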
 
That's just the thing, there are no BOINC preferences in the linux manager. The view/advanced view dialog is simply absent.
 
That's just the thing, there are no BOINC preferences in the linux manager. The view/advanced view dialog is simply absent.
Have you verified your settings at WCG are set to accept GPU work? My contribution (top right) > My Projects (on the left) > Click Device Profiles > Click Default (or whatever profile you set up)



Also, what version of Linux are you using? I ask because it sounds like an issue with your version of Linux, or maybe with the version of the client you have running. You can see from my installation videos that there certainly is a view/advanced view in Linux distros. I have videos for Mint and Ubuntu that show it.



 
Yeah, it is an older Ubuntu. 16.04 LTS, and it has been nagging me for some time, but apart from this minor annoyance it just works so I have been reluctant to open that can of worms.

I always click on the BOINC updates when they come by, I figured that would suffice, I guess not.

Also...finding my WCG login creds...hmmmm, that's been a few years, lol
 
Yeah, it is an older Ubuntu. 16.04 LTS, and it has been nagging me for some time, but apart from this minor annoyance it just works so I have been reluctant to open that can of worms.

I always click on the BOINC updates when they come by, I figured that would suffice, I guess not.

Also...finding my WCG login creds...hmmmm, that's been a few years, lol
I have had VM's for Ubuntu going back to version 10 and they had the advanced view as well. Did you get yours from the Ubuntu repository?

WCG login now goes by email address like the other projects. It used to go by user name. If you recall the email, it should be easy.
 
hmmm, when I go to the software repo and search BOINC the only thing that comes up is "BoincTasks Js"
 
That is odd because I have always just pulled it from there. Maybe it is a good time for an upgrade....lol
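If the Software Center search keeps coming up empty, the packages are also in the regular Ubuntu archive, so installing from a terminal should work (assuming the standard package names):
Code:
sudo apt update
sudo apt install boinc-client boinc-manager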
 
I just wanted to add this in here in case it helps others later.

If anyone wants to run multiple GPU work units on a single card to increase utilization, you can create an app_config.xml file with the following parameters. Just adjust to your liking.


<app_config>
  <app>
    <name>opng</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.20</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
Created app_config.xml in C:\ProgramData\BOINC\projects\www.worldcommunitygrid.org with .33 GPU usage and .2 CPU usage, but GPU tasks still show in the task list as .8 CPU and 1 GPU. Telling it to read local configs/prefs has no effect. Any tips on getting this to apply?
 
Did you have the work units downloaded before you created the app_config.xml? Wait until the ones you have complete and then see what the new work units show. You may need to close the BOINC client entirely and launch it again, too.
Created app_config.xml in C:\ProgramData\BOINC\projects\www.worldcommunitygrid.org with .33 GPU usage and .2 CPU usage, but GPU tasks still show in the task list as .8 CPU and 1 GPU. Telling it to read local configs/prefs has no effect. Any tips on getting this to apply?
 
Also, Endgame (in addition to what Gil says above): when you tell your BOINC client to "read config files", it writes log entries that tell you how that went, and if it has any issues with your new config it will mention specifics.
Additionally, the most common reason users get stuck is that they don't save app_config.xml with "All files" selected as the file type, so it gets saved as a text file instead; make sure you didn't do that.
 
I'm going to have to figure out what is wrong on my end. When I was running the few beta work units I got (on a different OS installation and mobo/CPU/RAM), my RX 570 was running through work units in about 10-15 minutes. When I finally pulled in some of the new GPU work units last night (I forgot to install OpenCL when I reinstalled the OS), I had some nasty lockup issues trying to run the work units while gaming, which didn't happen previously. I left it disabled until I went to bed. When I got up and turned the monitors back on, my system was basically locked up, although it was showing the GPU work unit having run for over three hours. When I rebooted, that unit was gone and another work unit started. That work unit has now been processing for more than three hours, showing 100% completed but still going and still running something on the GPU. There is no way in hell this is working right for me, as it's taking waaaaaaaaaaay too long to crunch work units, along with the other problems I'm having which I didn't have previously.

I may have to set up a different device profile, boot into Windows, and set up BOINC there with only GPU work units to see which, if any, of the problems disappear.
 
I'm going to have to figure out what is wrong on my end. When I was running the few beta work units I got (on a different OS installation and mobo/CPU/RAM), my RX 570 was running through work units in about 10-15 minutes. When I finally pulled in some of the new GPU work units last night (I forgot to install OpenCL when I reinstalled the OS), I had some nasty lockup issues trying to run the work units while gaming, which didn't happen previously. I left it disabled until I went to bed. When I got up and turned the monitors back on, my system was basically locked up, although it was showing the GPU work unit having run for over three hours. When I rebooted, that unit was gone and another work unit started. That work unit has now been processing for more than three hours, showing 100% completed but still going and still running something on the GPU. There is no way in hell this is working right for me, as it's taking waaaaaaaaaaay too long to crunch work units, along with the other problems I'm having which I didn't have previously.

I may have to set up a different device profile, boot into Windows, and set up BOINC there with only GPU work units to see which, if any, of the problems disappear.

Yeah, run a second instance and assign it a different device profile with GPU work only. I am currently running WCG on all my rigs as "GPU ONLY" and then crunching various other projects on the CPU (but under the same BOINC).

EDIT: Also, maybe try lowering the power target on your GPU some? These really don't seem to stress the GPU at all, but I still always run ~60% power targets with BOINC mostly.
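If you haven't run two clients side by side before, the second instance needs its own data directory and RPC port or the two will fight; on Linux it looks roughly like this (the path and port are just examples, and the Windows client accepts the same switches):
Code:
mkdir -p ~/boinc2
boinc --allow_multiple_clients --dir ~/boinc2 --gui_rpc_port 31418 &
# point boinccmd (or a manager) at the second instance explicitly
boinccmd --host localhost:31418 --get_state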
 
Yeah, run a second instance and assign it a different device profile with GPU work only. I am currently running WCG on all my rigs as "GPU ONLY" and then crunching various other projects on the CPU (but under the same BOINC).

EDIT: Also, maybe try lowering the power target on your GPU some? These really don't seem to stress the GPU at all, but I still always run ~60% power targets with BOINC mostly.
Power wasn't the issue. It was only using 75W while showing 100% usage, and the stock max power is 120W on the RX 570. If I'd been in Windows the power target would be closer to 140-150W, since I can overclock and make changes with MSI Afterburner in Windows. Under Manjaro I have no choice but to run the card at stock.

I did hop into Windows and set up a WCG client with GPU-only work, and there was no trouble at all running GPU units. I left everything alone at first with a single GPU WU running and had no issues. I made the changes to run more than one and had two going with no trouble again. Power usage was 105W when it was actually crunching on the GPU, which is fine. Another difference is that under Linux it was never showing any CPU usage at all for the GPU WU and never went below 100% GPU usage.

One other issue is that when I stopped all GPU processing it still didn't free up my GPU, nor was I able to restart to boot into Windows. I had to do a hard reset to do anything. I'll have to look into a couple more things and try running it on Manjaro again. I really hate to miss out on the work and the points. I suspect it has something to do with the OpenCL libraries. I remember a while back, when I was running F@H GPU tasks for a challenge the team was in, I had to do some extra steps or use specific libraries to get it running properly. I don't remember exactly what it was I did, but I'm going to look into it. I suspect what I had set up then was good for OpenPandemics GPU, because I did get some work during the beta and had no trouble with it.
 
Lol, a bird in the hand is worth two in the bush. Also, have you tried running more than two in Windows? I would keep increasing the number of tasks until you reach 100% GPU usage in Afterburner; you should get into that 150W range you're talking about and get some good points. I mean, I assume good points, since I don't have an RX 570, lol. Best utilization, maybe.
 
I'm back in Manjaro again and did some messing around. I uninstalled the official OpenCL-mesa libs and installed a different set that will run alongside the open-source driver on the Polaris architecture, and everything seems to be running fine now. The first WU it started crunching after I did this just finished with no problem and is showing up as valid. It definitely took longer since I don't have an overclock anymore, but I can live with that, and it's using about 20W less power. I'll definitely need to set it up to run a minimum of two or three WUs simultaneously; it's spending about a quarter to a third of the time running CPU calculations and not running on the GPU. Two units concurrently seemed fine under Windows, but it's definitely looking like I'll need three under Manjaro.
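If anyone else on an Arch-based distro wants to try a similar swap, one common route is the opencl-amd package from the AUR; the package names here are a guess at what was used, not a record of it:
Code:
sudo pacman -Rs opencl-mesa
yay -S opencl-amd                # AUR package; any AUR helper works
clinfo | grep -i 'device name'   # confirm the card shows up as an OpenCL device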

I do wish I had my overclock, though. 1284 core 1750 mem is stock but I had a stable 1404/2050 overclock in Windows. If I wanted to push it unstable I could do 1450/2200.
 