DC Vault 2

Thanks for the nods chaps, even if I don't deserve any more credit than the rest of you who have spent a considerable amount of time, money and effort laying the foundation on which we're all building. It's a great team and I'm really enjoying being part of [H] and the close-knit Commando community. The team has reinvigorated my interest in DC after losing heart following an ill-advised visit to ff.org.

To remove any mystery the farm consists of 2 x SR2s, a 4P Octocore, a 980X and a 2600K. The latter two with 2 x 560Ti and 2 x 570 GPUs. Muon is being crunched on the SR2s and a bit of the 980X.

Next to some of the multi 4P guys folding for [H] I’m a comparative minnow.

My rear view mirror is now slowly (quickly?! Only 15 days left!) filling with Phoenicis on Muon1...

Just trying to inspire you to keep firing up those servers each weekend ;) Assume we’re pressing on to 9th place? I’m game if everybody else is.

I want to know what Phoenicis has crunching on PSP PRPnet now.

Half of the 4P rig that just happened to land its units together. The intent is to speed up the release of your and Sazan's GPUs for use on GIMPS or whatever else takes your fancy. I'd like to move over the other 50% at the weekend, but getting the first set up and running was a pain as I had to launch each of the 16 clients individually.

I ran into the same issue as mmmmmdonuts with regard to the batch file causing some white flashing and then nothing. I'm not a Linux aficionado, so after prefixing the command with sudo (in an attempt to run as a super-user) failed, I was stumped :eek: Any help on this would be appreciated.
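
Until someone beats that start script into shape, a rough hand-rolled launcher should do the same job. A minimal sketch, assuming the Linux client binary is named prpclient and sits in prpclient-1 through prpclient-16 under ~/PRPNet (adjust the name and count to match your install):
Code:
#!/bin/bash
# Minimal sketch, not the stock start script: launch one client per
# directory in the background. Assumes ~/PRPNet/prpclient-1 ... -16
# each contain an executable named "prpclient"; adjust if yours differ.
BASE="$HOME/PRPNet"
for i in $(seq 1 16); do
    cd "$BASE/prpclient-$i" || continue
    # nohup keeps the client alive after the terminal closes; output
    # goes to a per-client log instead of flashing up on screen.
    nohup ./prpclient > client.log 2>&1 &
done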

It's nice to have a big hitter like phoenicis helping out, even if I do get mowed here and there. :D

Sorry about the whole mowing thing mate :D It is completely ridiculous that I can overtake your lifetime GIMPS points within one week using GPUs.

Phoenicis, I have done some testing with PSP PRP First Pass (8100) & Second Pass (8101) work, and I've found that over time I produce more points with First Pass tests.
Please let me know if you experience something different.

From initial observations I think I’m seeing the same thing. I didn’t know which server to put 100% against. I do now, thanks :)
 
I ran into the same issue as mmmmmdonuts with regard to the batch file causing some white flashing and then nothing. I'm not a Linux aficionado, so after prefixing the command with sudo (in an attempt to run as a super-user) failed, I was stumped :eek: Any help on this would be appreciated.
I do a little customizing for my PSP PRP in Windows.
I create a directory tree like this for a quad-core -

C:\PRPNet\ (contains all folders and .bat & .ini files)
C:\PRPNet\programs
C:\PRPNet\prpclient-1
C:\PRPNet\prpclient-2
C:\PRPNet\prpclient-3
C:\PRPNet\prpclient-4
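
A one-off copy step can seed those folders. A sketch, assuming the extracted client files live in C:\PRPNet\programs as above (change the 4 to your number of client folders):
Code:
@ECHO OFF
REM Sketch: seed each prpclient-N folder from the extracted files in
REM C:\PRPNet\programs. Change the 4 to your number of client folders.
FOR /L %%i IN (1,1,4) DO (
    xcopy /Y C:\PRPNet\programs\*.* C:\PRPNet\prpclient-%%i\
)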

I create and use this file instead of using the "4-quad-start-prpclient.bat"
PRPNet.bat
Code:
@ECHO OFF
cd C:\PRPNet\prpclient-1
start "prpclient-1" /MIN prpclient.exe
cd C:\PRPNet\prpclient-2
start "prpclient-2" /MIN prpclient.exe
cd C:\PRPNet\prpclient-3
start "prpclient-3" /MIN prpclient.exe
cd C:\PRPNet\prpclient-4
start "prpclient-4" /MIN prpclient.exe
EXIT
I can then run the PRPNet.bat from anywhere and all 4 prpclients start.
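
If you have more cores than you want to type out, a loop version of the same idea works too. A sketch, assuming the same C:\PRPNet\prpclient-N layout (change the 4 to however many client folders you made):
Code:
@ECHO OFF
REM Loop-based variant of PRPNet.bat; assumes the C:\PRPNet\prpclient-N
REM layout above. Change the 4 to match your number of client folders.
FOR /L %%i IN (1,1,4) DO (
    start "prpclient-%%i" /D "C:\PRPNet\prpclient-%%i" /MIN prpclient.exe
)
EXIT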
 
If you're trying to use the messed-up "master_prpclient.ini" that came with your download, the one that has all the fields run together -
Here is a copy of my Windows x64 PRPNet 4.3.5 "master_prpclient.ini" that is used to create the "prpclient.ini" file in each of the C:\PRPNet\prpclient-x folders. (email= userid= clientid= fields will need to be changed)
master_prpclient.ini
Code:
// email= is a REQUIRED field.  The server will use this address
// to send you an e-mail when your client discovers a prime.
email=[email protected]
 
// userid= is a REQUIRED field that will be used by the server
// to report on stats, etc. without having to reveal the user's
// e-mail address.  DO NOT USE spaces.  Instead use underscore _.
userid=PG_username
 
// This value differentiates clients using the same e-mail ID
// DO NOT USE spaces.  Instead use underscore _.
clientid=clientID
 
// Tests completed by this "team" will be rolled-up as part of team stats.  This
// will be recorded on tests that are pending and then updated when tests are
// completed.  Since it is stored on the server per test, it is possible for a
// single user to be a member of multiple teams.  If no value is specified for
// this field, then completed tests and primes will not be awarded to any teams.
// DO NOT USE spaces.  Instead use underscore _.
teamid=[H]ard|OCP
 
// server= configures the mix of work to perform across one or more
// servers.  It is parsed as follows:
//   <suffix>:<pct>:<workunits>:<server IP>:<port>
//
// <suffix>     - a unique suffix for the server.  This is used to distinguish
//                file names that are created for each configured server.
// <pct>        - the percentage of PRP tests to do from the server.
// <workunits>  - the number of PRP tests to get from the server.  The 
//                server also has a limit, so the server will never return
//                more than its limit.
// <server IP>  - the IP address or name for the server
// <port>       - the port of the PRPNet server, normally 7101
//
// Setting pct to 0 means that the client will only get work from the
// server if it cannot connect to one of the other configured servers.
// Please read the prpnet_servers.txt in this directory for information
// on the latest PRPNet servers.
 
// The following servers are from the Prime Sierpinski Project
// These servers are external to PrimeGrid.
server=PSPfp:100:1:www.psp-project.de:8100
server=PSPdc:0:1:www.psp-project.de:8101
 
// This is the name of LLR executable.  On Windows, this needs to be
// the LLR console application, not the GUI application.  The GUI 
// application does not terminate when the PRP test is done.
// On some systems you will need to put a "./" in front of the executable
// name so that it looks in the current directory for it rather than 
// in the system path.
// LLR can be downloaded from http://jpenne.free.fr/index2.html
llrexe=llr.exe
 
// This is the name of the PFGW executable.  On Windows, this needs to
// be the PFGW console application, not the GUI application.
// PFGW can be downloaded from http://tech.groups.yahoo.com/group/openpfgw/
// If you are running a 64 bit OS, comment out the pfgw32 line
// and uncomment the pfgw64 line.
//pfgwexe=pfgw32.exe
pfgwexe=pfgw64.exe
 
// This is the name of the genefer executables used for GFN searches.  Up
// to four different Genefer programs can be specified.  The client will
// attempt a test with genefercuda first if available...otherwise, genefx64
// will be first.  If a round off error occurs in either, it will try genefer.
// If a round off occurs in genefer, it will try genefer80.  If
// genefer80 fails, then the number cannot be tested with the Genefers.  It will
// then be tested with pfgw if available.  The order they are specified here 
// is not important. (NOTE:  Linux and MacIntel only have genefer available for CPU)
// Uncomment the line (genefx64) if you are running on a 64 bit machine.
//geneferexe=genefercuda.exe
geneferexe=genefx64.exe
geneferexe=genefer.exe
geneferexe=genefer80.exe
 
// This sets the CPU affinity for LLR on multi-CPU machines.  It defaults to
// -1, which means that LLR can run on any CPU.
cpuaffinity=
 
// This sets the GPU affinity for CUDA apps on multi-GPU machines.  It defaults to
// -1, which means that the CUDA app can run on any GPU.
gpuaffinity=
 
// Set to 1 to tell PFGW to run in NORMAL priority.  It defaults to 0, which means
// that PFGW will run in IDLE priority, the same priority used by LLR, phrot,
// and genefer.
normalpriority=0
 
// This option is used to default the startup option if the PREVIOUS
// SHUTDOWN LEFT UNCOMPLETED WORKUNITS.  If no previous work was left
// this will act like option 9.
//    0 - prompt 
//    1 - Return completed work units, abandon the rest, then get more work 
//    2 - Return completed work units, abandon the rest, then shut down
//    3 - Return completed, then continue
//    4 - Complete in-progress work units, abandon the rest, then get more work 
//    5 - Complete in-progress work units, abandon the rest, then shut down 
//    6 - Complete all work units, report them, then shut down 
//    9 - Continue from where client left off when it was shut down 
startoption=3
 
// stopoption= tells the client what to do when it is stopped with CTRL-C and there is
// work that has not been completed and returned to the server.  Options 2, 5, and 6 will
// return all workunits.  This will override stopasapoption.  The accepted values are:
//    0 - prompt
//    2 - Return completed work units, abandon the rest, then shut down
//    3 - Return completed work units (keep the rest), then shut down
//    5 - Complete in-progress work units, abandon the rest, report them, then shut down
//    6 - Complete all work units, report them, then shut down 
//    9 - Do nothing and shut down (presumes you will restart with startoption=9)
stopoption=3
 
// stopasapoption= tells the client that it needs to be shutdown automatically, i.e. without
// a CTRL-C.  It is evaluated after each test is completed.  It should be 0 upon startup.
// The accepted values are:
//    0 - Continue processing work units
//    2 - Return completed work units and abandon the rest
//    3 - Return completed work units (keep the rest)
//    6 - Complete all work units and return them
stopasapoption=0
 
// Timeout on communications errors
// (default is 60 minutes, minimum is 1 minute if not specified here...)
// Note that the actual timeout used in the client is anywhere from 90% to 110% of this value
errortimeout=3
 
// Size limit in megabytes for the prpclient.log file...
// 0 means no limit.
// -1 means no log.
loglimit=1
 
// Set the debug level for the client
//    0 - no debug messages
//    1 - all debug messages
//    2 - output debug messages from socket communication 
debuglevel=0
 
// Whether or not to echo "INFO" messages from server to console for accepted tests
//    0 - no echo
//    1 - echo (default)
echotest=1
 
Razor, have you tried running PSP in Linux before? I got it to work initially, but then not after a restart of the system. I managed to get it working briefly in Windows (as a test case) quite easily, without the flashing problems that Linux had. Maybe it's a bug? I had it running in Ubuntu 11.04 x64. Good luck with it phoenicis, and let me know if you do get it to work on Linux and what you did. I got frustrated trying to get it to work after a few days.

phoenicis (or anyone else) - How fast is your 4P relative to your SR-2 and 980X in BOINC and such, given its much slower clock speed? I am looking to do a more efficient overhaul of all my systems in the next few months and I am trying to get a range of where things perform, especially because of electricity costs. I know SB-E/BD is right around the corner, so I am trying to figure out a sort of direction to go in. I also have no problem with overclocking at all. (I don't know if your systems are overclocked)

Right now I have a Xeon E3-1230 as an ESXi server, an [email protected] as a gaming/media encoding rig (but I am gaming less and less, so I may switch that over to Ubuntu soon anyway) and an E6750 as an HTPC, which I will be getting rid of soon after I make my PC upgrade in the next few months. I have about a grand but I am flexible if I can kill many birds with one stone. All this is drawing around 500-700W depending on GPU stressing or not. My initial thoughts were SB-E, but I don't mind going G34 and adding CPUs as I get money. Any ideas?
 
Sorry mmmmmdonuts, haven't tried it in Linux. Does it put an error in the log file?

For Commandos running PSP PRPNet 4.3.5 in Windows there is a new version of PFGW that can be copied from the extracted files into your prpclient directories.
Copy files : pfgw32.exe & pfgw64.exe
PFGW 3.5.0 download for Windows PRPNet client.
 
Even though I may have seemed to leave forever, I still browse the forum and check in to see how you guys are doing. :) It seems after I left there has been an increase in population in the Commandos, so I say hello to everyone I have not met yet! :D

The Commandos have come so far!

School has been killer and I wish I could browse the forum more often and post but I'm procrastinating writing up my lab report for TCP as we speak :p
 
Thanks for the PSP info guys.

@ mmmmmdonuts - If you struggled to defeat the Ubuntu start file, I'll probably not have any more success, so I will leave it alone as it already consumed an hour of my time last weekend. It's irritating not to beat it, but setting the individual clients off only took 5-10 mins.

@ Razor - I'll likely give PSP on Windows a go when a rig comes free after we finish the Muon assault. The Linux master_prpclient.ini client actually functioned OK once I established which servers were for PSP, although the team ID didn't work. Fortunately, Lars came to the rescue after I followed the traditional approach and PMed him.

phoenicis (or anyone else) - How fast is your 4P relative to your SR-2 and 980X in BOINC and such, given its much slower clock speed? I am looking to do a more efficient overhaul of all my systems in the next few months and I am trying to get a range of where things perform, especially because of electricity costs. I know SB-E/BD is right around the corner, so I am trying to figure out a sort of direction to go in. I also have no problem with overclocking at all. (I don't know if your systems are overclocked)

The 4P is comprised of Opteron 6128HEs at stock clocks and, on BOINC, is almost exactly as fast as the slower of my SR2s (2 x 5660s) when it was running at its winter OC of 4.2GHz. The memory in neither is exactly cutting edge so it's probably a fair comparison. Speaking again just for BOINC, it is over twice as fast as the 980X @ 3.8GHz.

Other than being Linux-only (some issues with both DPAD and PSP) I prefer the 4P for both initial cost and running costs although, to be fair, you can get much cheaper SR2 CPUs than the ones I purchased. The 4P pulls over 100W less power (430W) than the SR2 at the clocks previously mentioned. It also has the advantage of being upgradable to 12-core chips when funds allow, or even Interlagos (with a BIOS upgrade) when it's released.

Either approach will currently set you back more than $1K or at least it will at UK prices. As you mentioned, you could start with a single chip and build from that or wait until the market floods with Opterons from people with upgradeitis, like me :D, when they move on to the next best thing.

I haven't looked much at BD or SB-E, I'm afraid.

It seems after I left there has been an increase in population in the Commandos, so I say hello to everyone I have not met yet! :D

Hello back at ya. Thanks again for your guide on how to join difficult projects. It's come in quite handy lately :)
 
The 4P is comprised of Opteron 6128HEs at stock clocks and, on BOINC, is almost exactly as fast as the slower of my SR2s (2 x 5660s) when it was running at its winter OC of 4.2GHz. The memory in neither is exactly cutting edge so it's probably a fair comparison. Speaking again just for BOINC, it is over twice as fast as the 980X @ 3.8GHz.

Other than being Linux-only (some issues with both DPAD and PSP) I prefer the 4P for both initial cost and running costs although, to be fair, you can get much cheaper SR2 CPUs than the ones I purchased. The 4P pulls over 100W less power (430W) than the SR2 at the clocks previously mentioned. It also has the advantage of being upgradable to 12-core chips when funds allow, or even Interlagos (with a BIOS upgrade) when it's released.

Either approach will currently set you back more than $1K or at least it will at UK prices. As you mentioned, you could start with a single chip and build from that or wait until the market floods with Opterons from people with upgradeitis, like me :D, when they move on to the next best thing.

I haven't looked much at BD or SB-E, I'm afraid.

Thanks for the information. I guess I will keep my eyes peeled on the FS thread or maybe set up a WTB for a G34 possibly. Not sure what I am going to do yet but I will figure something out.

I decided to join you guys in PSP as well so we can hopefully take over the top 5 fairly soon.

Nice to see you swing by Eric. I can definitely relate to school papers unfortunately.
 
My desktop SR-2 is done with RNA, but I still need to do some work on it this weekend before I start something new. I'd like to fire up some GPUs for GIMPS but my power bill is still around $350, down from the heat of summer but still pretty high. I'm looking forward to truly cold weather when I can turn off the A/C, open the windows, and fire up everything to heat the condo.
 
We are now up to 7th place in the Vault! Excellent job everyone!
 
OMG! - phoenicis just pounded 2814.297 GHz-days into GIMPS today!:eek:
/me tucks tail between legs and goes pee in the corner.

And that's nothing compared to what he just dropped: 3596.785 GHz-days. It moved us up 8 spots and 212 DC Vault points in GIMPS at once. Congrats on making 7th place everyone. Free-DC, prepare to be terrified.
 
Great work Commandos! Great progress is being made on Yoyo, GIMPS, Muon, and others. Onward and upward!
 
Looks like we've run them out of good TF tasks on GIMPS; all I've got in the last couple of days are 68-to-69 factoring runs. They only take 15 mins though, just got to get a bunch of them to keep the GPUs busy. Strange that I don't see many other teams doing GPU GIMPS.
 
Looks like we've run them out of good TF tasks on GIMPS; all I've got in the last couple of days are 68-to-69 factoring runs. They only take 15 mins though, just got to get a bunch of them to keep the GPUs busy. Strange that I don't see many other teams doing GPU GIMPS.

If you want, you can change the numbers from 68-69 to 72-73 or higher. I don't know if this is encouraged or not, but your run times will increase exponentially.
 
I received a mixed bag of ranges on Friday too. As mmmmmdonuts suggests you can easily change the range.

It takes a few minutes but I now change them all to end at 72. So for instance a TF ending in 69,70 is amended to 69,72, which will not only run the range requested (i.e. 69,70) but also 70,71 and 71,72, earning a total of about 14+ GHz-days across the 3 results.

From what I've read it's perfectly OK to add a couple of bit levels to a TF assignment.
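
For anyone trying this, the change is just to the last number on the Factor= line in the GPU client's worktodo file. A hypothetical example (the exponent below is made up, and some clients also prefix an assignment key on the line):
Code:
Factor=332192831,69,70    <-- as assigned (one bit level, 69 to 70)
Factor=332192831,69,72    <-- extended to also run 70-71 and 71-72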

Not sure why many other teams aren't using GPUs for GIMPS. Probably because it's not obvious how to set it up, and the client is hosted on the official forum rather than the site itself. Once others get curious about our recent output, they may pop by to see what we're up to. Time for us to engage our cloaking device methinks :D

The GIMPS gravy train will unfortunately slow down soon with the gaps between teams getting much larger.
 
I see Majestic12 Boost just pulled the servers off NFS after keeping them on for 6 days more than the 24 hours normally donated to a team.:D
We gained 2 spots and are very close to a 3rd.
I think Alex decided to put a little extra in since we put a lot extra into Majestic-12.:cool:
 
I see an old friend popping up in familiar places.:)
Good to see you're still with us tr0ach.
 
I see Majestic12 Boost just pulled the servers off NFS after keeping them on for 6 days more than the 24 hours normally donated to a team.:D
We gained 2 spots and are very close to a 3rd.
I think Alex decided to put a little extra in since we put a lot extra into Majestic-12.:cool:
A big thanks to Alex and the Majestic12 project for the big boost in NFS! You guys rock.

I see an old friend popping up in familiar places.:)
Good to see you're still with us tr0ach.
x2 :)
 
I haven't had much time to look into this, but in PRP-PSP the results are being sent and then it locks up in an endless loop, with the program attempting to send results that have already been sent. Basically I go into the folder, delete the save file (because the work was sent) and start it up manually. I have the exact code Razor posted up top, with the exception of user information changes on the first 3 lines. It's no big deal but I do lose some time after the unit is completed. Any ideas?
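
In the meantime the manual restart can at least be scripted per client. A rough sketch for one folder, assuming the layout from Razor's post (the save-file name is a guess - check what the client actually writes):
Code:
REM Sketch only: kill the stuck client, clear its save file, relaunch.
REM The window title matches the start command used earlier; the save
REM file name is an assumption - check the folder for the real one.
taskkill /F /FI "WINDOWTITLE eq prpclient-1*"
del C:\PRPNet\prpclient-1\prpclient.save
start "prpclient-1" /D "C:\PRPNet\prpclient-1" /MIN prpclient.exe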
 
I haven't had much time to look into this, but in PRP-PSP the results are being sent and then it locks up in an endless loop, with the program attempting to send results that have already been sent. Basically I go into the folder, delete the save file (because the work was sent) and start it up manually. I have the exact code Razor posted up top, with the exception of user information changes on the first 3 lines. It's no big deal but I do lose some time after the unit is completed. Any ideas?
The server may have been down this morning. To add a backup server to get work if the primary is down (the client still won't report work for the primary server until it's back up and the current work unit completes), change the 0 in the PSPdc line below to a 1 in your "master_prpclient.ini" & "prpclient.ini" files, then restart the prpclient clients.

server=PSPfp:100:1:www.psp-project.de:8100
server=PSPdc:0:1:www.psp-project.de:8101
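
For clarity, with that change the pair would read as follows (assuming the 0 being changed is the <pct> field on the PSPdc line):

server=PSPfp:100:1:www.psp-project.de:8100
server=PSPdc:1:1:www.psp-project.de:8101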

PS: Thanks for the HD4850 - it works great and has a super quiet fan that's perfect for my HTPC. I have it working on GIMPS and all runs perfectly.

Edit: I changed it now in my original code post.
 
After reviewing my code post I have found that some line breaks were lost when I copied and pasted my "master_prpclient.ini". I'll fix those and make sure it matches mine perfectly. I don't think those line breaks should make any difference, but they may.
 
Glad to hear the 4850 is up and running perfectly.

Thanks for fixing the code. I wish I had time to look at it but I have been swamped at work lately.
 
Here's my latest unofficial priority list:

1. Prime Sierpinski Problem - PRP
2. Wieferich@Home
3. Muon1 DPAD
4. GIMPS
5. Seventeen Or Bust
6. OGR-27
7. Majestic-12
8. yoyo@home
9. Docking@Home
9. Enigma@Home
11. Cosmology@Home
12. ABC@Home
13. RNA World @ Home
14. The Lattice Project
15. SZTAKI Desktop Grid
16. GPUGRID
17. NFS@Home
18. POEM@Home
18. SIMAP
20. Climate Prediction
21. Collatz Conjecture
22. Malariacontrol.net
23. QMC@Home
24. Leiden Classic
25. PrimeGrid
26. Rosetta@Home
27. Dimes
28. MilkyWay@Home
29. Einstein@Home
30. RC5-72
31. Spinhenge@Home
32. Seti (BOINC)
33. World Community Grid
34. Folding@Home

I also ran it with a different weighting, and the top 12 were the same, in a slightly different order. The top 12 are also the same as the last list, with the exception of Aqua which was dropped from DC Vault. The big difference is that scores have gone up in most of the top projects: last time there were five projects below 9K and now there are only three. We're close to getting above 9K in Muon1, but we'll have to move up three spots to get there and the third will take a while.
 
/Salute Phoenicis, at least you were gentle with the mowing :D

Well done sir! This has been a bad couple of weeks for me in getting the workstations running overnight/weekends :(
 
Cheers Briliu :) Sorry to hear about the workstation issues – I’ve missed seeing those occasional mega boosts. Not much longer before we reach our targets now.

@SazanEyes - Many thanks for the updated priority list. Diverting resources (post DPAD) to chase the DPCs on PSP looks promising, provided they don't just pick up the pace and run away from us. Another possibility high on the list is Wieferich@Home, but you've previously associated the words weird, slow, painful and silly with that project, so I'm not exactly brimming with enthusiasm :D
 
W@H has been stable for me over the last couple months, so it's not that bad. The weirdness is that the "multicore" client just runs a copy of the graphical client for each core, and sometimes the clients can behave oddly, so I avoid touching them after startup.

The other weird/silly thing is that there's some negative weighting to the scoring, so that the more people running the project, the fewer points each person gets. That's the general idea; there's a detailed explanation on their forum somewhere. What it means is that it's very difficult to do a "push" in the project, because after you run it a bit the weighting is adjusted and your PPD drops.

Anyway, we're about 2 months away from overtaking the next W@H team with just my hex, so if somebody else wants to jump in, be my guest. I just hit 500K yesterday so we've got a nice cushion if I stop and do something else. The one good thing about W@H is that you can stop running it for a while and it's very unlikely the team will drop in position. I stopped running it for almost 2 years and I think we only dropped one spot.
 
The more I read about W@H the more I wonder why it's in the DC Vault at all. It almost seems like a "Bitcoin" scoring system, which to me is stupid and discourages growth by giving initial adopters a lot more credit. It is what it is though.

Just upgraded my HTPC to a i7 920. Hopefully will get the OC stable and I will start crunching some 17 or bust soon.
 
I think the idea behind the W@H point system is to have constant point output over time. For example, if the average PC five years ago was a Core 2 Duo, it would do 500 PPD (to pick a number). Today the average PC is a quad-core Sandy Bridge, but it still does 500 PPD. This means the guys that racked up points years ago won't get mowed by newbies with modern hardware. It's different than Bitcoin but has the same result, that latecomers with high-end hardware do not have an advantage over the original crunchers. Teams have asked the W@H admins to turn off the weighting for contests, but they didn't seem that interested.

I haven't read about how exactly the point system works for a while, but I assume there is some way to throw enough hardware at the project to overcome any point balancing and make some real progress.
 
Nice job everyone. Just moved up to 6th place in PRP (and netted 357 points as well for it) and should be moving up to 5th place in another day or two.

Also, should we start looking into maybe doing a holiday race like we did last year for WCG with its Christmas Challenge? I think it's a great way to get new recruits and increase our presence on the DC map around here. Plus it gives us a new challenge to look at. Thoughts anyone?
 
Looks like we will take over 6th overall in the DC Vault soon as well, with the next big jump in PSP; at the same time those points leave Free-DC, plus Muon1 is coming due soon.

I'm for a Christmas challenge, but I'm going to keep my CPUs on Muon1 long enough to pad our lead so we don't lose the spots. We need something easy enough for new Commandos to set up; the ones with the biggest gains are the biggest pains.
 
Wow. Great work everyone. With a couple extra spots in PRP it looks like we are only a couple hundred points out of sixth place! :eek: And in a couple more days I think a couple spots in Muon will take care of those couple hundred points!

I had a couple days of pretty low/no production due to some reconfiguring of systems, OS reinstalls, etc. I have a couple machines back crunching now with one more to come yet later this week.

As far as a holiday challenge, I would really like to continue with the WCG Christmas Challenge in Dec if that's OK with the Commandos. We could try to organize another project for November - perhaps a BOINC project?

Crunch on!
 
The latest experiment is some Dimes work alongside tr0ach. I know that there’s not a lot of DC Vault points in it but CPU and network usage seems to be minimal and so it doesn’t appear to impact other projects. Is there a reason why there’s not much team activity on this?

I haven't read about how exactly the point system works for a while, but I assume there is some way to throw enough hardware at the project to overcome any point balancing and make some real progress.

Thanks for expanding on the W@H quirks. I’m willing to give it a go unless there’s a preference to chase the DPCs on PSP.

Nice job everyone. Just moved up to 6th place in PRP (and netted 357 points as well for it) and should be moving up to 5th place in another day or two.

Very nice indeed mate. A big congrats to Sazan and Razor for doing most of the heavy lifting on this one.

I'm for a Christmas challenge, but I'm going to keep my CPUs on Muon1 long enough to pad our lead so we don't lose the spots.

I'll try to help out with a bit of padding. Great teamwork all round on DPAD.

As far as a holiday challenge, I would really like to continue with the WCG Christmas Challenge in Dec if that's OK with the Commandos. We could try to organize another project for November - perhaps a BOINC project?

Team challenge, yay! Happy to follow the lead of whoever normally expends the time and effort to organise these events … oh wait … that's you. So yep, sounds fine to me.
 
Outstanding work guys!!! We're up another spot to sixth in DC-Vault!!!!
 
Even more impressive is that we moved from 10th on July 29th to 6th place in only 77 days. That's a great team effort. Great job everyone.
 
Congrats, everyone!

Razor (or anyone), is something up with PSP? The stats aren't updating on Free-DC, but my local clients still seem to be crunching. I noticed I've been pulling double-check work recently, so I'm wondering if the servers are moving or if I need to update the IPs.

I've started running some Climate Prediction and Docking. Climate Prediction was having server trouble for a while but the issues have been resolved.
 
Razor (or anyone), is something up with PSP? The stats aren't updating on Free-DC, but my local clients still seem to be crunching. I noticed I've been pulling double-check work recently, so I'm wondering if the servers are moving or if I need to update the IPs.

First pass work dried up about a week ago, then reappeared a couple of days later. As 2nd pass work takes much less time, this reappearance seems to be having the same effect as when I first started PSP, i.e. nothing for a few days then a whole bunch of units due to land together.
 
Congrats, everyone!

Razor (or anyone), is something up with PSP? The stats aren't updating on Free-DC, but my local clients still seem to be crunching. I noticed I've been pulling double-check work recently, so I'm wondering if the servers are moving or if I need to update the IPs.

All last week I was getting server errors with PSP but it kept crunching until the unit was done. I then had to delete the save file, and when I manually restarted the client I got some more work. I still got points though. I don't know if that's what's happening to you or not?

This week it gave me more first pass stuff after my last two batches of double check and I haven't had to restart anything manually.
 
Razor (or anyone), is something up with PSP? The stats aren't updating on Free-DC, but my local clients still seem to be crunching. I noticed I've been pulling double-check work recently, so I'm wondering if the servers are moving or if I need to update the IPs.
We all currently have First Pass work assigned to us and mine have been reporting fine.
Found that a member from a different team was having some problems and restarting the client fixed his issues.
 
I restarted both PSP machines yesterday when I installed Windows updates. It says I have 36 tests assigned and that's how many threads are running. When I shut them down yesterday they all seemed to be crunching, so maybe they were idle before but recently got work. I'll let it run another day or so and investigate more if I still don't see points. Maybe I just have bad luck and haven't found anything recently?

edit: Just did a spot check, and it looks like all my tests are between 50% and 80% complete. That makes me think they were all idle until recently getting work at about the same time. I should see some points tomorrow.

I was having trouble getting Climate Prediction work earlier, but it looks like they repopulated the work queue earlier today.
 