World Community Grid

Nobu

Supreme [H]ardness
Joined
Jun 7, 2007
Messages
4,764
fastgeek Not purposefully, but I think I used a different port out of convenience because I had routed the new cable differently. The drive stays unresponsive through soft reboots (reset switch) but revives after a hard reboot.

The software is GSmartControl (or something like that); it should be available in most Linux distros. I'd use a Windows tool, but neither the drive nor the PC it's in has Windows installed.
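GSmartControl is just a front-end for smartmontools, so the smartctl command-line tool should produce the same report if you'd rather skip the GUI. A quick sketch; the device paths are examples, so adjust for your system:
Code:
# full SMART report for a SATA drive
smartctl -a /dev/sda

# recent smartmontools versions can query NVMe drives too
smartctl -a /dev/nvme0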
 

Nobu

Supreme [H]ardness
Joined
Jun 7, 2007
Messages
4,764
Swapped the Corsair for a Samsung 250GB 970 EVO Plus NVMe drive, and I'm back in business.

Edit: Here's the SMART data for the NVMe... it looks like anything more detailed than this isn't supported on this drive under Linux.
Code:
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.8.5-arch1-1] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       Samsung SSD 970 EVO Plus 250GB
Serial Number:                      [REDACTED]
Firmware Version:                   2B2QEXM7
PCI Vendor/Subsystem ID:            0x144d
IEEE OUI Identifier:                0x002538
Total NVM Capacity:                 250,059,350,016 [250 GB]
Unallocated NVM Capacity:           0
Controller ID:                      4
Number of Namespaces:               1
Namespace 1 Size/Capacity:          250,059,350,016 [250 GB]
Namespace 1 Utilization:            38,080,303,104 [38.0 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            002538 55019d5782
Local Time is:                      Sun Sep  6 16:06:41 2020 CDT
Firmware Updates (0x16):            3 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Maximum Data Transfer Size:         512 Pages
Warning  Comp. Temp. Threshold:     85 Celsius
Critical Comp. Temp. Threshold:     85 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     7.80W       -        -    0  0  0  0        0       0
 1 +     6.00W       -        -    1  1  1  1        0       0
 2 +     3.40W       -        -    2  2  2  2        0       0
 3 -   0.0700W       -        -    3  3  3  3      210    1200
 4 -   0.0100W       -        -    4  4  4  4     2000    8000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED


Has anyone set processor affinity on Linux, or do you just let it be? I tried pinning processes and their threads to cores 0,4; 1,5; 2,6; 3,7... but any effect is non-obvious in the short time since I did it. Well, other than processor usage looking more uniform across the cores.
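(For anyone who wants to try the pinning: taskset from util-linux can set the affinity of a running process and all of its threads. The PID below is a placeholder.)
Code:
# pin PID 1234 and all of its threads to cores 0 and 4
taskset -acp 0,4 1234

# print the current affinity list to verify
taskset -cp 1234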
 
Last edited:

Toconator

Gawd
Joined
Jul 8, 2005
Messages
709
The only SSD I have ever had die was an OG 64GB OCZ Vertex. I think the controller died; it also dropped out a few times here and there, and then one day it went dark.
Similar story. The only SSD to die on me was a 120GB OCZ, but it was the previous model, and they sent me a Vertex as a warranty replacement. It was the controller that went kaput. The Vertex became the boot drive in my old main PC for over six years before I swapped it for a 250GB Sammy. It still works fine with very little wear, but it's relegated to the spares pile now.
 

pututu

[H]ard DCOTM x2
Joined
Dec 27, 2015
Messages
1,952
Still waiting and waiting and waiting for GPU apps for OP :whistle:

Someone with a 3080 might be itching to test it out, lol.

17 Sep 2020
Summary
The researchers and the World Community Grid tech team are continuing their work to get the project working on GPU.

Background
OpenPandemics - COVID-19 was created to help accelerate the search for COVID-19 treatments. You can learn more about the details of the work on the research team's website.
GPU version of OpenPandemics
Both the research team and World Community Grid tech team are continuing to make progress on porting the software that powers OpenPandemics to GPU.
The researchers are working on performance improvements for an OpenCL version. Meanwhile, World Community Grid has submitted the code for IBM's Open Source review and a security review. We don't currently know exactly when the IBM reviews will be done.
AutoDock Suite at 30
The research team recently published a paper on the history of AutoDock, which is the software that powers OpenPandemics, FightAIDS@Home, and other projects that have searched for potential treatments against various diseases. You can read the paper here.
Current status of work units
  • Available for download: 3,452 batches
  • In progress: 2,259 batches (18,949,527 work units)
  • Completed: 9,479 batches
    2,991 batches in the last 30 days
    (an average of 99.7 batches per day)
  • Estimated backlog: 34.6 days
 

AgrFan

[H]ard DCOTM October 2012
Joined
Sep 29, 2007
Messages
531
Got a Dell Optiplex 990 with an i7-2600 running four SCC threads that is hitting 70C on the stock cooler. I can't run 8 threads without the fans kicking up. The CPU paste was reapplied when I got the machine off eBay several years ago; it's always had high temps. AutoDock Vina projects (SCC, FAH2) typically generate more heat than the other projects.

I'm running four SCC threads on a Dell Inspiron 660 with an i5-3330 and only seeing 55-60C temps.

I'm guessing this is normal - Ivy Bridge runs cooler than Sandy Bridge due to the die shrink from 32nm to 22nm.

What's everyone seeing for CPU temps running four SCC threads?

Is the i7-2600 EOL at this point for WCG, and should I consider upgrading?

Not interested in GPU crunching ...
 
Last edited:

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Unless you are looking to increase core/thread count, I wouldn't worry too much about upgrading. You would be spending a lot of money to save pennies a day in efficiency. I recently picked up a few dual-socket 2011 boxes off of EXT64 for a steal of a deal; each box has 32 threads. These days, I am phasing out boxes with fewer than 12 threads.
 

AgrFan

[H]ard DCOTM October 2012
Joined
Sep 29, 2007
Messages
531
Unless you are looking to increase core/thread count, I wouldn't worry too much about upgrading.

I'd like to run all 8 threads without all the fan noise. Only running 6 cores on WCG now. I've been getting rid of old hardware, and my production is way down. I just retired a C2D box last week. Sandy Bridge is almost 10 years old; might be time to put it up for sale on CL. Just need to decide on a replacement. Waiting to see what Zen 3 looks like in a couple of weeks.
 
Last edited:

Endgame

Limp Gawd
Joined
Jan 10, 2007
Messages
340
Unless you are looking to increase core/thread count, I wouldn't worry too much about upgrading. You would be spending a lot of money to save pennies a day in efficiency. I recently picked up a few dual-socket 2011 boxes off of EXT64 for a steal of a deal; each box has 32 threads. These days, I am phasing out boxes with fewer than 12 threads.
Well, you don't have to spend lots of money: you could replace all the really old gear with Raspberry Pis to dramatically cut power usage. Depending on the project, I've found two Pi 4s to be roughly equal to a Core 2 Quad Q9650 at roughly 140 watts less power draw.

The downside is having to manage extra Linux hosts: not as much of an issue if you like to stick with the same few projects, but a PITA if you like to swap between projects.
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
As someone who has used several cell phones in the past, along with a lot of other hardware, managing a bunch of micro devices is not a fun time. I love what you can do with ARM, but until a player comes along and drops an affordable server with hundreds of cores in it, I will pass. Even AMD's A-series was a joke. Hopefully, nVidia buys out ARM and does something cool with it. However, I'm very doubtful they'll do anything but line their pockets. DC'ing is at a point where, if you want to contribute big, you need big rigs. The days of "every little bit counts" are about gone. Most projects cannot wait weeks or months to get work units back. Even WCG sub-projects come in batches, and much of the time the researchers either analyze finished batches while new ones run or hold off on releasing new batches until they've analyzed the old ones. They don't necessarily need results within a few days like GPUGrid does, but they also don't want work units out there for weeks on end.

SETI had that luxury; they didn't need results back any time soon. PrimeGrid could be that way, but they understand users would be pissed if they had to wait months for their work unit to be validated against a super slow yet highly efficient host. That is why they cater to the highest-grade systems out there for speed. There really does come a time when perfectly good hardware just isn't worth using anymore. The Pis have come a long way, but even the Threadrippers now can be efficient enough to match what ARM can do, and they have a lot more uses with a lot less management.
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
I'd like to run all 8 threads without all the fan noise. Only running 6 cores on WCG now. I've been getting rid of old hardware, and my production is way down. I just retired a C2D box last week. Sandy Bridge is almost 10 years old; might be time to put it up for sale on CL. Just need to decide on a replacement. Waiting to see what Zen 3 looks like in a couple of weeks.
I've sold a few 990s during this pandemic that had i5s in them. They typically sell for between $75 and $100 in my area without a discrete video card. However, I feel that is mostly because people still really don't know a lot about computers. With everyone dealing with remote learning, there was a rush on laptops as well, until the schools finally started offering them up. I probably sold a dozen refurbed laptops this year, easy, and they weren't all laptops I would normally have sold to anyone, due to age. But people are desperate for something "cheap" for their kids to destroy.

I don't know what kind of budget and life expectancy you have in mind, but there are still a lot of beasts out there to be had for really good prices. Efficiency is great when one has the luxury to afford it. I would certainly avoid all pre-Ryzen AMD offerings unless they were "free". Even the G34 setups are becoming inefficient enough to not be worth powering on these days; mine sits cold waiting for either a local buyer or a big challenge. I think that long term, the Ryzen 3's will certainly be the better investment for DC needs, but that will come at a premium up-front cost. I only have one rig that uses DDR4, and I can honestly say that I have no plans to buy any other DDR4 rigs at this time. There is literally no "need": I have DDR3 coming out my ears in 2 and 4GB sticks, and the rigs I have compute Facebook just as fast...lol. I will probably skip anything with DDR4 and go with something that has DDR5 when that is available, or just keep buying "outdated" since it truly is "good enough"... for me.
 

Endgame

Limp Gawd
Joined
Jan 10, 2007
Messages
340
As someone who has used several cell phones in the past, along with a lot of other hardware, managing a bunch of micro devices is not a fun time. I love what you can do with ARM, but until a player comes along and drops an affordable server with hundreds of cores in it, I will pass. Even AMD's A-series was a joke. Hopefully, nVidia buys out ARM and does something cool with it. However, I'm very doubtful they'll do anything but line their pockets. DC'ing is at a point where, if you want to contribute big, you need big rigs. The days of "every little bit counts" are about gone. Most projects cannot wait weeks or months to get work units back. Even WCG sub-projects come in batches, and much of the time the researchers either analyze finished batches while new ones run or hold off on releasing new batches until they've analyzed the old ones. They don't necessarily need results within a few days like GPUGrid does, but they also don't want work units out there for weeks on end.
How much management is there, really, though? These Pis are just little Linux boxes. The uptime on the first Pi 3 I set up for WCG OP is 66 days. The uptime on my Pi 2 running Pi-hole is 691 days, though it hasn't been running BOINC that whole time. They really are pretty much set-it-and-forget-it unless you regularly want to change projects; and if you really do, set up Kubernetes and run containers specific to each project, redeploying as desired (see the sketch below).
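A Kubernetes Deployment would just be wrapping a containerized BOINC client, so the heart of it is something like this sketch. The boinc/client image name and the account key are assumptions/placeholders; check Docker Hub and the BOINC docs before leaning on it (boinccmd may also need the GUI RPC password, depending on the image):
Code:
# one container per project; image name is an assumption, key is a placeholder
docker run -d --name wcg boinc/client
docker exec wcg boinccmd --project_attach http://www.worldcommunitygrid.org/ YOUR_ACCOUNT_KEY

# switching projects is then just a teardown and redeploy
docker rm -f wcg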

As for performance, if a project is OK with Core 2 Quads or even Sandy Bridge, it should be OK with Pis. Each of my Pi 4s is averaging around 10 WCG OP WUs per day, so the WUs aren't taking days to process.
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Well... WCG welcomes everything, since they don't optimize and are willing to take all the free resources they can get. It isn't whether the project is OK with it but whether it is worth the effort for the end user to invest in that path. They (WCG) only recently got Pi support. Mobile devices need more management because of how often they throttle and their poor thermal design for things like this. I know Pis have a better cooling design, but when you are talking 4 cores per rig and extremely low output, you really do have a mess on your hands. I mean, to get a reasonable cluster out of all of this, you will have wires galore to manage. How many Pis are you willing to pick up and manage to be equivalent? Even with different x86 rigs, you get to a point where you start to miss a few on the list when there are problems.

This is why I'm interested in something more along these lines - https://www.techradar.com/news/arm-wants-to-obliterate-intel-and-amd-with-gigantic-192-core-cpu . If they can make that work with BOINC at a reasonable price, then maybe we will have something. However, I have a feeling these are more like GPU cores than general-purpose ones, and it would take a while to get projects willing to support them.
 

AgrFan

[H]ard DCOTM October 2012
Joined
Sep 29, 2007
Messages
531
So, I realized today that I'm only running two SCC threads on the Ivy Bridge box. I enabled 4 SCC threads, and the temps are pretty close to the Sandy Bridge box's. The Inspiron 660 is running 5 degrees cooler than the Optiplex 990 according to CPU Temp, probably due to the die shrink.
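(For anyone else juggling per-sub-project thread counts: one way to cap concurrent tasks is an app_config.xml in the WCG project folder, then re-reading config files from the BOINC Manager. The app name below is my guess for SCC; check client_state.xml for the real one.)
Code:
<app_config>
  <app>
    <name>scc1</name>            <!-- guessed SCC app name; verify in client_state.xml -->
    <max_concurrent>4</max_concurrent>
  </app>
</app_config>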

I'm leaning toward selling the Optiplex 990 and replacing it with a Skylake or newer Dell machine. Need to do a clean install of Windows 10 Pro before I can put it up for sale.

Thanks for the responses!
 

Gilthanis

[H]ard|DCer of the Year - 2014
Joined
Jan 29, 2006
Messages
8,200
Let me know what model you go with. Dell is my preferred vendor and I like getting feedback on their systems.
 

AgrFan

[H]ard DCOTM October 2012
Joined
Sep 29, 2007
Messages
531
Sold the Optiplex 990 today on CL. My production is going to be way down for a while. Looking at a cheap Dell Inspiron or a Dell Optiplex refurb; budget is $400. Any suggestions? It would be used for basic Windows stuff, network storage, and WCG.
 
Last edited:

Icecold

n00b
Joined
Jul 21, 2013
Messages
45
Sold the Optiplex 990 today on CL. My production is going to be way down for a while. Looking at a cheap Dell Inspiron or a Dell Optiplex refurb; budget is $400. Any suggestions? It would be used for basic Windows stuff, network storage, and WCG.
I have an Optiplex 5060 with an i5-8500, and it has worked great.
 