DC Vault 2

If anybody is interested, all of my rigs will be running in the PrimeGrid Shakespeare's Birthday Challenge for the 3 days of the challenge. We did pretty good in last month's PSP challenge: we took 5th and beat EVGA, who took 6th. :) I am pretty sure we could take the #1 spot if we tried. ;)
http://www.primegrid.com/forum_thread.php?id=5617

The challenge will begin 20 April 2014 16:16 UTC and end 23 April 2014 16:16 UTC http://www.primegrid.com/
 

Grandpa, can you give an overview of how you're handling this switchover? Do you update your PrimeGrid preferences directly and only allow the PPS project? Do you then modify BAM! to only provide resource share for PrimeGrid and set everything else to 0? I'm trying to figure out the steps required to change all my clients to PrimeGrid/PPS for the purpose of this challenge. Thanks.
 
brilong, in BAM!, I would just set No New Tasks to all projects except PrimeGrid and possibly one other project. That one other project, I would then give a priority 0 just so that if for some reason PGrid went down or ran out of work, your clients would stay busy.

At PrimeGrid, yes you would just disable all of the other apps unless you wanted GPU work to run on your rigs as well.
 
What about the LLR beta app? Do we need to manually download the latest version and use app_info.xml to enable it?

http://jpenne.free.fr/index2.html

I just signed up for PrimeGrid and only selected PPS. My clients downloaded the following:
16-Apr-2014 09:35:48 [PrimeGrid] Started download of primegrid_llr_wrapper_6.17_x86_64-pc-linux-gnu
16-Apr-2014 09:35:48 [PrimeGrid] Started download of primegrid_sllr_3.8.9_x86_64-pc-linux-gnu

It sounds like 3.8.9 is old and 3.8.13 is the latest one with AVX and AVX2 patches.
EDIT: Reference to the new build: http://www.primegrid.com/forum_thread.php?id=5557#74062
 
brilong said:
Do I need to do anything special to run 3.8.13?

http://www.primegrid.com/forum_thread.php?id=5557#74062
If you don't have a Haswell CPU there's no need to switch to this version of LLR.

To use this, you will need to use app_info. Remember -- when you turn app_info on or off you lose any in-progress tasks, so wait until everything is finished!

I will also point out that these work units may run just fine with HT turned on. Do some testing ahead of time to maximize production.
 

Okay, where's the guide to using app_info.xml to replace 3.8.9 with 3.8.13? Also, I'm confused about the relationship between GIMPS and PrimeGrid: PrimeGrid is using LLR 3.8.13 software developed and released by Jean on the GIMPS Software forum. So far I've only used GIMPS GPU72 mfaktc, not Prime95, but I'm wondering how all this relates when it comes to DC-Vault points, etc. Are there projects in which we can participate where we get double credit (i.e. PrimeGrid & GIMPS)? :confused:
 
No, we don't get double points. I checked into it to make sure. The contributions through the GIMPS client and the BOINC client are kept separate. So you would need to run the GIMPS client(s) to improve our position in the Vault for GIMPS specifically. You can run any of the sub-projects to improve our PrimeGrid position in the Vault.

As far as where they get their apps, that is above my head.

app_info.xml can be a bear to write and I'm not real good at them. You could possibly use the one NickOfTime wrote for the POGS optimized app as a base. Most projects are pushing the use of app_config.xml files, but you can't do everything with app_config.xml that you can with app_info.xml, and vice versa.

<app_info>
    <app>
        <name>magphys_wrapper</name>
        <user_friendly_name>fitsedwrapper 3.40 v2</user_friendly_name>
    </app>
    <file_info>
        <name>wrapper_x86_64-pc-linux-gnu_340</name>
        <executable/>
    </file_info>
    <file_info>
        <name>fit_sed_x86_64-pc-linux-gnu_340</name>
        <executable/>
    </file_info>
    <file_info>
        <name>concat_x86_64-pc-linux-gnu_340</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>magphys_wrapper</app_name>
        <version_num>340</version_num>
        <file_ref>
            <file_name>wrapper_x86_64-pc-linux-gnu_340</file_name>
            <open_name>wrapper</open_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>fit_sed_x86_64-pc-linux-gnu_340</file_name>
            <open_name>fit_sed</open_name>
        </file_ref>
        <file_ref>
            <file_name>concat_x86_64-pc-linux-gnu_340</file_name>
            <open_name>concat</open_name>
        </file_ref>
    </app_version>
</app_info>

You create a text file and name it app_info.xml and save it to the PrimeGrid project folder in the BOINC data directory.

XtremeSystems has a good how-to: http://www.xtremesystems.org/forums/showthread.php?283510-Customizing-BOINC-app_info-xml
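For what it's worth, here is a minimal sketch of what a PrimeGrid PPS app_info.xml might look like, modeled on the POGS one above. Everything project-specific in it is an assumption: the internal app name (shown as llrPPS), the version number, and the exact file names all need to be copied from your own client_state.xml and PrimeGrid project folder, and the hand-built 3.8.13 binary would have to be renamed to match.

```xml
<app_info>
    <app>
        <!-- internal app name; llrPPS is an assumption, copy the real one from client_state.xml -->
        <name>llrPPS</name>
    </app>
    <file_info>
        <name>primegrid_llr_wrapper_6.17_x86_64-pc-linux-gnu</name>
        <executable/>
    </file_info>
    <file_info>
        <!-- the 3.8.13 binary, renamed to follow the stock naming pattern (assumed) -->
        <name>primegrid_sllr_3.8.13_x86_64-pc-linux-gnu</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>llrPPS</app_name>
        <version_num>617</version_num>
        <file_ref>
            <file_name>primegrid_llr_wrapper_6.17_x86_64-pc-linux-gnu</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>primegrid_sllr_3.8.13_x86_64-pc-linux-gnu</file_name>
        </file_ref>
    </app_version>
</app_info>
```

And remember the warning above: turning app_info on or off discards in-progress tasks, so drain the cache first.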
 
Brilong, HT off is about 4.5% greater PPD. I ran for a day at 64 cores and a day at 32 cores and took a random 100-WU sample from both days, on a 4P 4650L at 3038MHz, so there is only a slight advantage to HT off. It may be different with the new core, but that will not be known until tomorrow. ;)

32 cores = 2201.5608 sec per WU = 1255.8363 WU's per day
64 cores = 4594.0443 sec per WU = 1203.6453 WU's per day
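Those per-day figures follow from the sampled run times; as a quick sanity check (run times taken from the post above), the throughput and the HT-off advantage can be recomputed like this:

```shell
# WUs/day = threads * 86400 seconds / (seconds per WU)
awk 'BEGIN {
    ht_off = 32 * 86400 / 2201.5608;   # 32 threads, HT off
    ht_on  = 64 * 86400 / 4594.0443;   # 64 threads, HT on
    printf "HT off: %.1f WU/day\n", ht_off;
    printf "HT on:  %.1f WU/day\n", ht_on;
    printf "HT off advantage: %.1f%%\n", (ht_off / ht_on - 1) * 100;
}'
```

These two samples put the HT-off advantage at roughly 4.3%, in line with the ~4.5% figure quoted.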

I have already attached all of the rigs to PrimeGrid in BAM! and have set my preferences in PrimeGrid to only accept PPS, and most of the 4P's are set to No New Work for PrimeGrid.

On the evening before the challenge I will set the 4P's that are running on other projects to No New Work on those projects. When the start time comes on the day of the challenge, I will remove the No New Work flag from the PrimeGrid project in BAM!. Once that is done I will have to go to each computer and have them communicate with BAM!, but that will not take long. I have found that most of the rigs will usually sync with BAM! within a few minutes if they are out of work, but occasionally it takes a while, which is the reason for manually updating.

https://www.dropbox.com/s/m2xk8rassv3wwto/projects%20%28Case%20Conflict%29.png
 
Grandpa, if you are manually going to each computer to sync with BAM! so that they get the flag to accept new work, it would be easier to manually tell each one to accept new work right from the clients. You wait less time that way. BAM! will not override that; it will actually sync with your manual setting. That saves you the hassle of waiting for the syncing to finish, and means fewer clicks.
 
Brilong HT off is about 4.5% greater ppd.
Thanks for the details. This is very interesting, but I wonder how it affects other BOINC projects. I cannot easily enable/disable HT across my 39 clients in an automated fashion. Do you have similar benchmarks running 32 non-HT tasks on other BOINC projects like SIMAP, Rosetta, Poem, etc.?

Grandpa_01 said:
On the evening before the challenge I will set the 4P's that are running on other projects to No New Work ... on the day of the challenge I will remove the No New Work from the PrimeGrid project in BAM!.
This is an interesting technique I had not thought of. I've got all my clients attached to Primegrid with a resource share of 0. Most of the clients (non-GPU) have SIMAP and Rosetta running with resource share of 200. I was planning on adjusting the resource share to set everything but PrimeGrid to 0 and then update all the hosts. I like your idea better.

The problem I have is that I'm out of town Sunday until late in the evening and I will not have remote access to my clients. I was wondering if I could use boinccmd or another command in cron Sunday around 12:20 US/Eastern to switch everything to Primegrid-only.

On another note, I just ordered 3 more GTX 580 GPUs for additional GIMPS horsepower. Two of them are the EVGA Superclocked ones (about 440 GHZ-d/day) and one Zotac AMP! edition (should be even faster). I'm using the stock/factory overclock on my current Linux-based GPUs, but a couple of the new 580's might go in my home Windoze PCs and I can play / tweak them easily.
 
You can adjust how often BAM! contacts your rigs in the BAM! settings. I have mine set for every hour, but you can manually enter a shorter duration (.025, etc.). I do not know if it works as expected or not with fractional hours. I have never tried it, but I can give it a shot after work today and see what it does.
 
Here's a proof-of-concept script I plan to run at 16:16 UTC (12:16 US/Eastern) to suspend all projects and allow more work on PrimeGrid. Does this look sufficient to you guys?

Code:
PRIMEGRID="http://www.primegrid.com/"
for url in $(boinccmd --get_project_status | sed -n 's/\s*master URL: //p' | egrep -v 'primegrid|wuprop'); do
    boinccmd --project ${url} suspend
done
boinccmd --project $PRIMEGRID allowmorework
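One way to check the URL-extraction pipeline in that script without touching boinccmd is to feed it a canned `--get_project_status` listing; the project names and URLs below are made up for illustration:

```shell
# Simulated `boinccmd --get_project_status` output
sample='== Projects ==
1) -----------
   name: PrimeGrid
   master URL: http://www.primegrid.com/
2) -----------
   name: SIMAP
   master URL: http://boinc.bio.wzw.tum.de/boincsimap/
3) -----------
   name: WUProp@Home
   master URL: http://wuprop.boinc-af.org/'

# Same filter as the real script: pull out master URLs,
# then drop PrimeGrid and WUProp from the suspend list
echo "$sample" | sed -n 's/\s*master URL: //p' | egrep -v 'primegrid|wuprop'
# prints: http://boinc.bio.wzw.tum.de/boincsimap/
```

Only the surviving URLs would get `boinccmd --project <url> suspend`, so a dry run like this shows exactly which projects the cron job would pause.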
 
LOL

There is a reason I do things the way I do, and that may be because that is totally Greek to me. :eek:
 
Hmm, PrimeGrid Shakespeare challenge: looks like about 9000 WUs were assigned in the first half hour. I wonder how many went to Grandpa and brilong :)
 
More went to brilong than went to me :D it looks like his script worked well for him :)
 
Most of my initial results won't finish for another 50 minutes. CPU temps are up by 12°C over any other processing I have done...:cool:
 
brilong, your computer power just amazes me :) I'm assuming that some of the 32-processor machines you are running are 2P 4650's, judging by your WU run times, since the time is about what a 4P 4650 with HT on would be.

GenuineIntel
Genuine Intel(R) CPU @ 2.70GHz [Family 6 Model 45 Stepping 5]
(32 processors)

I assume that is a 2P 4650 rig ?
 
Looks like you guys are kicking ass now! Great job! I added a quad to the mix. There is so much other work in my queue I'm not sure how many WUs I'll complete in 3 days, but I'll kick in a few more. :)

We've been making great upward progress in the Vault. Let's keep it going!
 
Well, our GIMPS production has caught up to a nice team cluster, which should gain us a few positions :)
 
Hmm, for Prime Sierpinski Problem - PRP, I wonder how many complaints there would be if we reserved a range and crunched one unit for each of the Vault teams, causing the points per position to drop to double digits ;)
 
Hmm, for Prime Sierpinski Problem - PRP, I wonder how many complaints there would be if we reserved a range and crunched one unit for each of the Vault teams, causing the points per position to drop to double digits ;)

Looking for trouble, eh? :D
 
Looking for trouble, eh? :D

Well, we would be testing the Vault requirement to "accept new members and teams immediately upon registration".

Or we just write a guide on how to set it up and run it, and advertise it to the other teams: "how to get some easy points in the Vault" :D

Even just getting 3 new teams joining would give us 150 pts, and they would each get either 300, 600, or 900 pts ...
 
Yeah...it's funny how easy it is to manipulate things. But I would hold off a bit before doing that...
 
Grandpa_01 said:
Genuine Intel(R) CPU @ 2.70GHz [Family 6 Model 45 Stepping 5] (32 processors) ... I assume that is a 2P 4650 rig?

Similar. It's a 2P E5-2680 extra spicy setup. :) For some reason, the ES CPUs don't identify properly in /proc/cpuinfo so they have that generic string mentioned above. My script was successful in switching everything to PrimeGrid and my smart PDU started complaining that I've hit new power levels.

Also, unfortunately, everything will be offline (power outage) for about four hours this evening. It's out of my control. Glad I grabbed so many WUs right after the floodgates opened. I verified all my clients are running 6.24 (which has the newer subclient).
 
I do not think the 4 hr outage is going to hurt us, since we are currently 1.25M out in front of #2 after 1 full day of the challenge. We are out-producing them by quite a bit, close to 100% at this point in time. :cool:

By the way I did get a PPS LLR prime that made it into the top 5000 record books today :)
 
I do not think the 4 hr outage is going to hurt us, since we are currently 1.25M out in front of #2 after 1 full day of the challenge. We are out-producing them by quite a bit, close to 100% at this point in time. :cool:

My BOINC clients are all shut down while electricians install 5 new 30A 3-phase outlets. :D Hope to have everything back online this evening assuming all goes well.
 
[Images: Clark Griswold's over-lit house from Christmas Vacation]
 
I'm in the process of loading up another very part time borg for donation. It is an Optiplex 330. They haven't fully decided whether to keep the DC software on it yet. Waiting to get approval. That would be 2 more cores (probably 6pm-6am) and another DIMES client. :)
 
We gained one position in GIMPS over the last few days and we should gain three more positions in the next week. Each position is 19.12 DC-Vault points.

I also got two more GTX 580's I plan to bring online ASAP which should increase our throughput by another 800-860 GHz-d/day.

I'm very happy Free-DC has started importing GIMPS stats so it's really easy to track Opportunities / Days to Overtake. :D
 
I do not think the 4 hr outage is going to hurt us, since we are currently 1.25M out in front of #2 after 1 full day of the challenge. We are out-producing them by quite a bit, close to 100% at this point in time. :cool:

It turns out most of my hosts were down for at least 7 hours and they were without network connectivity for about 15 hours. I still have 7 systems offline. I hope to bring them back online tomorrow morning.

Grandpa_01 said:
By the way I did get a PPS LLR prime that made it into the top 5000 record books today :)

Very nice!
 
I'm very happy Free-DC has started importing GIMPS stats so it's really easy to track Opportunities / Days to Overtake. :D

You're welcome...lol

Great work by the way. My GIMPS contribution is two cores on an i5 at work and two part time cores on a slow laptop. So, as you could imagine they are chugging along pretty slow. :rolleyes:
 
GIMPS! Crap, I have to sign up for that one... Chrome keeps blocking the sign-up page. And congrats, brilong, for the awesome contributions and the finds. I think you may be close to a record for most primes found by one person. Then again, your cluster of compute is on the Top500 supercomputer list, right?
 