DC Vault 2

Noticed that my PSP output is still low so I checked the clients. Something seems to have gone very wrong during the power cut, as I'm returning results for tests that aren't expected and therefore they're being ignored. I haven't a scooby why it's happening, but there's almost 100K that's gone down the swanny so far.

I think I'll take a break from the project and come back to it later. Credit to SazanEyes and mmmmmdonuts for keeping up the pace for so long on this one.

Good news everybody! (Dr. Farnsworth voice)

I shall be returning this Christmas with some crunching power! I will be moving to San Jose for 8 months to do a co-op at Cisco! :D

The apartment will not be cheap but, since it will have power included in the price, I will be able to bring some of my beasts back online!

I've missed you guys and how much you have grown in my absence!

(p.s. It's not July anymore :p)

Half the time I'm not sure I know what year it is.

Great to have you back :)
 
Something seems to have gone very wrong during the power cut, as I'm returning results for tests that aren't expected and therefore they're being ignored.
Be sure that the "clientid=xxxxx" in the prpclient.ini is the same clientid= that downloaded the work unit.
I moved work from one computer to another and changed its clientid name, and it wouldn't give credit for the work that was downloaded under the previous computer's clientid name.
Another thing that has caused this for me: downloading work in one version of the app and upgrading the client before returning that work unit. It also said the result wasn't expected and I lost its credit.
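For anyone hunting for the setting, here's a hypothetical excerpt of what to check (only the clientid= key comes from the posts above; the comment syntax and hostname are just illustrative):

```ini
; prpclient.ini (illustrative excerpt)
; clientid must match the ID that originally downloaded the work unit,
; or the server will treat the returned result as "not expected" and ignore it.
clientid=myoldbox
```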
 
Good news everybody! (Dr. Farnsworth voice)

I shall be returning this Christmas with some crunching power! I will be moving to San Jose for 8 months to do a co-op at Cisco! :D

The apartment will not be cheap but, since it will have power included in the price, I will be able to bring some of my beasts back online!

I've missed you guys and how much you have grown in my absence!

(p.s. It's not July anymore :p)
Glad to have you back Eric!
The team has been growing in your absence with really nice people that bring a lot to the team effort.

It's not July? - Is it still 2010?:D
Good point, maybe we should start a new thread.
 
Hey Eric. Good to hear from you. What does your co-op at Cisco consist of? San Jose should be sweet during the winter months! :)
 
(p.s. It's not July anymore :p)
I thought July was a really kicking month.

I'm not trying to step on Sazan's toes - besides, this information probably won't be as useful - but here's a quick look at some Vault stats.

Ten Projects to make up the most Vault ground:
Code:
                                         Available
Project             Score      Place      Points     Points/Place
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Wieferich        8,448.28        10     1,551.72          172.41
PSP - PRP        8,571.43         5     1,428.57          357.14
Muon1            8,888.89         9     1,111.11          138.89
Seventeen        9,218.47        45       781.53           17.76
Cosmology        9,221.79       121       778.21            6.49
OGR-27           9,313.85        55       686.15           12.71
ABC@Home         9,324.74       111       675.26            6.14
Climate Pred     9,350.86       485       649.14            1.34
Docking@Home     9,371.67        60       628.33           10.65
Enigma@Home      9,419.35        55       580.65           10.75

Ten Projects with the highest gain per place taken
Code:
                                         Available
Project             Score      Place      Points     Points/Place
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PSP - PRP        8,571.43         5     1,428.57          357.14
Wieferich        8,448.28        10     1,551.72          172.41
Muon1            8,888.89         9     1,111.11          138.89
Majestic-12      9,550.56         9       449.44           56.18
RNA World        9,600.00        16       400.00           26.67
GIMPS            9,484.54        21       515.46           25.77
NFS@Home         9,736.84        14       263.16           20.24
Seventeen        9,218.47        45       781.53           17.76
yoyo@home        9,593.61        29       406.39           14.51
OGR-27           9,313.85        55       686.15           12.71
The bad news is that there are a lot of non-BOINC projects listed. The good news is that there are plenty of points to be had for those who are up for a c[h]allenge.
 
Very nice, Dr P. Now what we need is something that combines them to see which projects can be a quick takeover, or how many days each would take based on a current crunching rate.

For instance:

17 or Bust has 45 spots to gain, but based on our current days-to-overtake (from Free-DC stats), 11 teams can be taken down in the next 90 days for 195.36 points. If you look at PSP-PRP, we can overtake 1 team in that same time period for 357.14 points.

I guess what might be most beneficial would be some way of determining the ease of passing teams and how long it would take. Maybe take some reference PPD, using an i7 920 or 2600K or something like that for each project, and then scale it based on how many of those systems you would need to pass teams.

I don't know if you already do something like this in your ranking, Sazan, or if I am starting to get too complicated. I can possibly make some sort of spreadsheet if you guys are interested in trying something like that.
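Something like this little sketch is what I mean. The points/place column in the Vault tables looks like (10,000 minus score) divided by the number of teams ahead (place minus 1); that's reverse-engineered from the numbers, not official Vault math, so treat it as a guess:

```python
# Points-per-place sketch: available points are 10,000 minus the current
# Vault score, spread over the (place - 1) teams we'd need to pass to hit #1.
projects = {
    # name: (vault_score, current_place) -- rows taken from the table above
    "Wieferich": (8448.28, 10),
    "PSP - PRP": (8571.43, 5),
    "Muon1":     (8888.89, 9),
    "Seventeen": (9218.47, 45),
}

def points_per_place(score, place):
    """Average Vault points gained per team passed on the way to first place."""
    available = 10000.0 - score
    return available / (place - 1)

# Rank projects by how much each overtake is worth:
for name in sorted(projects, key=lambda p: points_per_place(*projects[p]), reverse=True):
    score, place = projects[name]
    print(f"{name:10s} {points_per_place(score, place):7.2f} pts per team passed")
```

The missing piece would then be multiplying by how many teams a reference rig (that i7 920 or 2600K) could actually pass per month in each project.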
 
Be sure that the "clientid=xxxxx" in the prpclient.ini is the same clientid= that downloaded the work unit.
I moved work from one computer to another and changed its clientid name, and it wouldn't give credit for the work that was downloaded under the previous computer's clientid name.
Another thing that has caused this for me: downloading work in one version of the app and upgrading the client before returning that work unit. It also said the result wasn't expected and I lost its credit.

Same computer, same client ID and no upgrades. The ignored tests are not listed here, whilst most of the ones that are listed are no longer being worked on. I can only think that somehow an earlier checkpoint was picked up after the power loss and in fact I've been returning the same units twice. I'm really just guessing, and think a reinstall and fresh start is required when I come back to PSP.

In the interim I'm crunching through some of this month's remaining SIMAP work and will probably throw the proverbial kitchen sink at yoyo by the 15th. I've always really been a sucker for team pushes/events - the valiant F@H battle with EVGA to 4 bil over a year ago is what drew me to the [H]orde in the first place :)
 
I don't know if you already do something like this in your ranking, Sazan, or if I am starting to get too complicated. I can possibly make some sort of spreadsheet if you guys are interested in trying something like that.

I'd like to do something like this, but I currently just look at the Vault stats similar to Dr P. You're right that the missing element is how easy/hard it is to move up in a project. The reason we dropped Wieferich for a while is that it takes forever to advance, so usually the CPU time is best spent elsewhere. At some point, though, to compete with the very top Vault teams you need to have a high score in every project. We're not quite there yet.
 
Priority List:

1. Wieferich@Home
2. Prime Sierpinski Problem - PRP
3. Muon1 DPAD
4. Seventeen Or Bust
5. OGR-27
7. Cosmology@Home
9. Docking@Home
7. Enigma@Home
11. ABC@Home
6. GIMPS
9. Majestic-12
14. SZTAKI Desktop Grid
12. The Lattice Project
20. Climate Prediction
12. RNA World @ Home
16. GPUGRID
19. SIMAP
14. yoyo@home
17. POEM@Home
21. Collatz Conjecture
17. NFS@Home
22. Malariacontrol.net
23. QMC@Home
24. Leiden Classic
25. PrimeGrid
26. Rosetta@Home
26. Dimes
28. MilkyWay@Home
29. Einstein@Home
30. RC5-72
32. Seti (BOINC)
30. Spinhenge@Home
33. World Community Grid
34. Folding@Home

Good news: the top 3 are the only projects below 9K on the DC Vault. Of the rest, there are only 9 that are below 9500, and 6 of those are BOINC projects (Cosmology, Docking, Enigma, ABC, Climate Prediction, and SIMAP). The bottom 8 are over 9800, but work on any project is helpful because it helps us maintain our position. There are only a handful of projects (F@H, WCG, SETI, maybe Einstein) where there are so many teams that it would take a while for our Vault points to drop.

To get to 9K in all projects, we need to move up one spot in Muon, two in Prime Sierpinski, and four in Wieferich. At our current output, that will take 15 months, 9 months, and over 12 years, respectively. :eek:
 
We're real close to Top 5 in DC-Vault!! The last few months have been outstanding for the Commandos!!!
 
I think I'll start work on cosmology@home after the yoyo@home giveaway.
 
Hey Eric. Good to hear from you. What does your co-op at Cisco consist of? San Jose should be sweet during the winter months! :)

It will be a CALO position doing mostly lab work, rebuilding and troubleshooting customer topologies. Then again, this could change. But this means I get to play with lots and lots of Cisco gear :)

I really need a new rig to dedicate to DC again. It's been a while, but what are people using for dual-CPU-socket setups? My goal would be to get it at around $1000, and used is an option.
 
I see that we're now #5 in the DC Vault. Great job guys!
 
I really need a new rig to dedicate to DC again. It's been a while, but what are people using for dual-CPU-socket setups? My goal would be to get it at around $1000, and used is an option.

The popular option is the Asus Z8NA populated with the fastest i7 Xeons that you can afford. There are alternative boards from Supermicro & Tyan available as well. Another option could be dual G34 Opteron, but that may be above your budget.
 
I really need a new rig to dedicate to DC again. It's been a while, but what are people using for dual-CPU-socket setups? My goal would be to get it at around $1000, and used is an option.
I just built a dual hex (12-cores total) system for $435 using ebay CPUs and an open box ASUS motherboard. Check out my thread for more details.
 
Congrats to all of the Commandos on getting to 5th place in the DC Vault. It's going to be a long battle to 4th, but this is a far cry from where we were 2-3 years ago when we started this! :) Keep crunching!
 
Wow Razor, way to bring the heat with 17 or Bust. 44T in one day is impressive. I am getting my clippers ready, Eric. I see you're not going to go down easy. Watch out:)

Congrats everyone on 5th place.
 
I'm honestly surprised to see Muon1 still way up on the priority list.

I think what I'm going to do is switch my work computer (and the bigger firepower whenever I get access to it) over to Muon after the yoyo giveaway. Muon1 works well on those, as our proxy server hates quite a few projects and the intermittent access I get tends to fudge with bigger, longer work units.

That'll leave my 2600k to roam around wherever, or hop on muon also.
 
Wow Razor way to bring the heat with 17 or bust. 44T in one day is impressive.
Thanks. I finally have the 2700K up and running and am working on its overclock.
I have had it up over 5 GHz, but that's too much heat for my Zalman Reserator XT & Koolance CPU-340 when running SoB, so I'm running it a little slower until I can add more rad to the loop or buy a mini-fridge to put it into.:D

5th place in the DC-Vault!
We have definitely shown that Team [H]ard|OCP is to be taken seriously.
My thanks to all that have contributed, every little bit makes a big difference.
 
Razor,

Any idea why your points got chopped in 17 or Bust? Also, which version of Prime 95 are you running for it? I updated mine to 26.6 and I am only getting 9.5T with 12 cores running @ 2.53 GHz (2x L5649), which seems awfully low if you're getting 20+ on just a 4-core 2700K.
 
Razor,

Any idea why your points got chopped in 17 or Bust? Also, which version of Prime 95 are you running for it? I updated mine to 26.6 and I am only getting 9.5T with 12 cores running @ 2.53 GHz (2x L5649), which seems awfully low if you're getting 20+ on just a 4-core 2700K.
That's my question too. SoB Forum post.
I'm running Windows 64-bit, Prime95 v26.6, build 3.
What is your per-iteration time?
On my QuadCores with all 4 cores running SoB I get on average 0.035 sec, with no other CPU/GPU projects running.
If I run just 3 SoB threads on the QuadCores, the average drops to 0.032 sec, with no other CPU/GPU projects running. (I run 4 on all my QuadCores except my HTPC, due to cooling limitations.)
With the 2700K and 8 SoB threads running, the average is 0.032 sec, with no other CPU/GPU projects running.
 
I'm honestly surprised to see Muon1 still way up on the priority list.

I think what I'm going to do is switch my work computer (and the bigger firepower whenever I get access to it) over to Muon after the yoyo giveaway. Muon1 works well on those, as our proxy server hates quite a few projects and the intermittent access I get tends to fudge with bigger, longer work units.

That'll leave my 2600k to roam around wherever, or hop on muon also.
It will take some work to make up ground here. We trail the next spot by about 168,000,000. Our best month on record was September of this year, with about 75,000,000. Of the eight teams ahead, we are only outpacing two, neither of which will be caught within a year at current rates. On the plus side, nobody behind is on pace to catch us, and serious progress has been made to climb this high.
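As a rough sanity check, catch-up time is just the gap divided by the net monthly rate difference. A small sketch of that arithmetic (the 168M gap and 75M best month are the real figures above; the rival's monthly rate is hypothetical):

```python
# Catch-up estimate: months to close a Vault points gap, given our monthly
# production and the team ahead's. If we aren't out-producing them, we
# never catch up at current rates.
def months_to_catch(gap, our_monthly, their_monthly):
    """Months needed to close `gap`, or None if the net rate isn't positive."""
    net = our_monthly - their_monthly
    if net <= 0:
        return None
    return gap / net

# ~168M Muon1 gap, our best month ~75M, a hypothetical rival doing 60M/month:
print(months_to_catch(168_000_000, 75_000_000, 60_000_000))  # 11.2
# A rival producing more than we do is uncatchable at current rates:
print(months_to_catch(168_000_000, 75_000_000, 80_000_000))  # None
```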
 
That's my question too. SoB Forum post.
I'm running Windows 64-bit, Prime95 v26.6, build 3.
What is your per-iteration time?
On my QuadCores with all 4 cores running SoB I get on average 0.035 sec, with no other CPU/GPU projects running.
If I run just 3 SoB threads on the QuadCores, the average drops to 0.032 sec, with no other CPU/GPU projects running. (I run 4 on all my QuadCores except my HTPC, due to cooling limitations.)
With the 2700K and 8 SoB threads running, the average is 0.032 sec, with no other CPU/GPU projects running.

I am getting the same per-iteration times as you, 0.032 sec I believe, although I will double check when I get home. I've got 12 workers running, because it says 2 logical cores per thread. I wonder if that's holding me down, although it shouldn't?

I know you have a much faster processor, but the results are scaling fairly well between my two systems, so it must be my error somehow. I was getting around 4.4T with an i7 920 @ 3.3 GHz (4 workers) and 9.5T with 12 workers @ 2.53.

Unless Prime 95 uses AVX or some sort of bonus system, I believe I should be hitting at least what you're getting for T per day, if not higher, with the 12 cores.

Are you running Quadcores along with the 2700k?
 
I am getting the same per-iteration times as you, 0.032 sec I believe, although I will double check when I get home. I've got 12 workers running, because it says 2 logical cores per thread. I wonder if that's holding me down, although it shouldn't?

I know you have a much faster processor, but the results are scaling fairly well between my two systems, so it must be my error somehow. I was getting around 4.4T with an i7 920 @ 3.3 GHz (4 workers) and 9.5T with 12 workers @ 2.53.

Unless Prime 95 uses AVX or some sort of bonus system, I believe I should be hitting at least what you're getting for T per day, if not higher, with the 12 cores.

Are you running Quadcores along with the 2700k?
I get about 4-5T/day/[email protected] with 4 workers on smart assignment affinity, and I run 4x QuadCores and have now added the 2700K.
For stage 2 to run quickly I set, under menu item "Options"/"CPU...", the Daytime & Nighttime Stage 2 Memory setting to total computer memory minus 2GB for the OS and what not.
So on the QuadCores with 4GB memory I set the Stage 2 memory setting to 2048MB, and on the 2700K with 16GB memory I set it to 14336MB.
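To spell the rule out as a snippet (my own sketch of the formula above, values in MB):

```python
# Stage 2 memory rule of thumb: hand Prime95 everything except ~2 GB
# held back for the OS "and what not".
OS_RESERVE_MB = 2048

def stage2_memory_mb(total_ram_mb):
    """Prime95 Stage 2 Memory setting: total RAM minus the OS reserve."""
    return max(total_ram_mb - OS_RESERVE_MB, 0)

print(stage2_memory_mb(4096))   # 2048  (QuadCore boxes with 4GB)
print(stage2_memory_mb(16384))  # 14336 (the 2700K with 16GB)
```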
 
I get about 4-5T/day/[email protected] with 4 workers on smart assignment affinity, and I run 4x QuadCores and have now added the 2700K.
For stage 2 to run quickly I set, under menu item "Options"/"CPU...", the Daytime & Nighttime Stage 2 Memory setting to total computer memory minus 2GB for the OS and what not.
So on the QuadCores with 4GB memory I set the Stage 2 memory setting to 2048MB, and on the 2700K with 16GB memory I set it to 14336MB.

Okay, thanks. That answers my questions more thoroughly, because I thought you only had one 2700K doing all that work and I thought I had something drastically wrong.
 
Okay, thanks. That answers my questions more thoroughly, because I thought you only had one 2700K doing all that work and I thought I had something drastically wrong.
So after doing some reading, it may be best on an i7 with HyperThreading to run only 4 workers instead of 8.
Some testing shows me that each of the 4 workers runs at half the iteration time compared to using 8 workers, but with lower CPU temps.
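In other words, if dropping from 8 workers to 4 also halves each worker's per-iteration time, total throughput is a wash and you only save heat. A quick sketch (the iteration times are illustrative, in the ballpark of the numbers quoted in this thread):

```python
# Aggregate throughput: iterations per second summed across all workers.
def aggregate_iters_per_sec(workers, sec_per_iter):
    return workers / sec_per_iter

eight = aggregate_iters_per_sec(8, 0.032)  # 8 workers, slower per iteration
four = aggregate_iters_per_sec(4, 0.016)   # 4 workers, each twice as fast
print(abs(eight - four) < 1e-9)  # True: same total output, less heat
```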
 
So after doing some reading, it may be best on an i7 with HyperThreading to run only 4 workers instead of 8.
Some testing shows me that each of the 4 workers runs at half the iteration time compared to using 8 workers, but with lower CPU temps.

This is what I am currently doing: two logical cores for each worker. Yesterday I tried the old version 26.3 and got iteration times of 0.053. Switching back to 26.6, I get 0.038 on some workers and 0.043 on others, which I can't figure out yet. (Clock speed is 2.534 GHz)

It might have to do with it being a 2P system and NUMA settings, I am thinking. I think for a Windows system I want this disabled, especially with BOINC/Prime 95, because they are individual threads anyway. I don't remember what settings I currently have in the BIOS and will check on my next reset.

The CPU monitor shows 50% load across the cores which makes sense. This seems to be the recommended way with the latest version of Prime 95.

Thanks for your help Razor. Here comes 10T a day hopefully.
 
For yoyo@home a bunch of my ECM tasks failed overnight and it didn't go and get new work to fill the buffer. :(

First time I've seen that. Not running OC'ed. i7 920, 12 gigs of RAM. Will have to see if it continues.

I don't have much of a buffer on the 100k challenge, so this is no good.
 
For yoyo@home a bunch of my ECM tasks failed overnight and it didn't go and get new work to fill the buffer. :(

First time I've seen that. Not running OC'ed. i7 920, 12 gigs of RAM. Will have to see if it continues.

I don't have much of a buffer on the 100k challenge, so this is no good.

That sucks. I run Muon and Evo and have never had any failures. I'd drop ECM and concentrate on the other projects. If you have additional problems, let me know and I can change one of my computers over to you if you are going to have trouble making the 100K.
 
I had to drop ECM projects because they kept eating up all of my memory and eventually failing on multiple PCs. Not sure what the issue is.
 
I had to drop ECM projects because they kept eating up all of my memory and eventually failing on multiple PCs. Not sure what the issue is.

I dropped ECM from my list. The last ECM task completed alright, but I won't be getting any more. Now I have a bunch of Muon, Harmonious Trees, and Evolution@home tasks. Will have to see how it goes.
 
OK Commandos - I just signed us up for the WCG Christmas Race 2011. :)
http://www.worldcommunitygrid.org/team/challenge/viewTeamChallenge.do?challengeId=4408

I want to wait a few more days to make a dedicated post, to try and avoid (as much as possible) affecting DooKey's giveaway that has skyrocketed our yoyo@home production! Nice work, DooKey!

Hopefully we can get some good support for the Christmas Race. It's a points race this year and we are the 60th team to register. There are some fairly big-name teams participating, so hopefully we can make a good showing. I see our friends over at DPC will be polishing up their CPUs for the event!
 
So after doing some reading, it may be best on an i7 with HyperThreading to run only 4 workers instead of 8.
Some testing shows me that each of the 4 workers runs at half the iteration time compared to using 8 workers, but with lower CPU temps.

Is this only regarding SoB or does it include other BOINC projects?
 
I've finally given up on Linux for non-F@H DCing after realising that you can only run yoyo's Muon and Evolution under Wine. Windows Server 2008 has been installed on the 4P and seems to be behaving fine.

I don't have much of a buffer on the 100k challenge, so this is no good.

From observation across multiple rigs, Muon seems to return the highest points per unit of CPU time. You should always crunch what interests you most but, if neutral, this may be the best choice to hit the 100K target.

Thanks to you and many others for giving yoyo a bash :cool:
 
So after doing some reading, it may be best on an i7 with HyperThreading to run only 4 workers instead of 8.
Some testing shows me that each of the 4 workers runs at half the iteration time compared to using 8 workers, but with lower CPU temps.
Is this only regarding SoB or does it include other BOINC projects?
I haven't tested HyperThreading in BOINC yet, but my understanding is that most BOINC projects would benefit from using all 8 logical cores.
Intense CPU applications like searching for primes do not benefit from using all 8 logical cores, because there is no idle time on the physical cores for HyperThreading to utilise.

A good way to know may be this: if a work unit type produces more CPU heat (like prime searching), it most likely leaves no free physical core cycles for HyperThreading to take advantage of, and would run better with only 4 work units on the 4 physical cores. But as far as I know, BOINC doesn't have options to set affinity when it is set to use 50% of the CPU.
With this theory the opposite would also apply: if work units produce less heat, HyperThreading 8 work units on all 8 logical cores may be beneficial.

edit: Perhaps other members with HyperThreading CPUs can post their experiences...
 
Perhaps other members with HyperThreading CPUs can post their experiences...

Started testing with 5 instances across 10 threads on PSP yesterday, and so far it seems to be achieving the same output (i.e. a halving of iteration time) whilst using less CPU and running at lower temps.

I haven't had a chance to play with any BOINC projects. We'd probably need to test on a project with consistent, short work unit lengths, like POEM or SIMAP perhaps?
 
NFS@Home, I believe, has super short run times. They are memory hogs, but only take an hour or two, if I remember correctly, so they'd be good to test on. I honestly believe the HyperThreading benefit in BOINC is going to vary by project, depending on whether the project is FP- or integer-intensive. It does not surprise me that you see the same with PSP run times, because that is almost like Prime 95.
 