SETI & SETI BETA

I added info to the first post about the app_config changes needed now that v7 work units are being phased out in favor of v8.
 
Astropulse has also been updated and the current version is now v7. Might want to add that one to your first post, as well.
 
Special Fundraiser for Parkes Data Store and GPU development system
Thanks to our collaboration with Breakthrough Listen and their international colleagues, we will soon have the ability to access data recorded at the Parkes Radio Telescope in Australia. For the first time, this will give us full-sky coverage, including the southern hemisphere, in our search for ET. However, due to the enormous amount of data that the 13-beam receiver at Parkes can generate, we will require extra hardware to store and distribute it. We need your help to purchase this.

Thanks to some special offers, one from the Hanson family in memory of Robert W. & Mary P. Hanson, one from Richard Haselgrove in memory of his mother, Jenifer Leech, and one from Mr. Kevvy, we are able to match donations, which will allow us to purchase the Parkes Data Store server (~$34K) and the GPU development systems (~$10K) we will need for the coming year. For every dollar you donate in this special fundraiser, Mr. Kevvy will match it, the Hanson Family will match it with two, and Richard Haselgrove will match it with two more. That means for every dollar you donate we'll get six dollars towards the purchase of these servers!

Because this is a special fundraiser we'll have special notation on your account page and on the forums. The 12TB disk drives in the Parkes Data Store cost $450. One sixth of that is $75. So, a donation of $75 or more gets you a disk drive icon. The GPUs in the GPU development machine that we will be using for our recording systems at future telescopes are $1500 each, so a donation of $250 or more will get you a GPU icon. And as always, any donation of $10 or more will get you a green star.

But in the end, it's not about the icons: it's about getting the data and making a discovery for the ages. Your help, whether by crunching data or by donating, is always appreciated.
7 Dec 2017, 19:15:11 UTC
 
I've switched over to the WCG Christmas race until the end of the month. It is loaded with worthy bioscience projects, and [H]ardOCP is in a tight race with team China. I will be back in the new year.
 
Anyone have any idea what’s going on with the SETI Beta stats exporting? I recently crunched a few hundred thousand points there, but none of the stats sites reflect this. I know this was a problem with them a few years ago... Is it still?
 
I found a thread on the SETI Beta boards that said there hasn’t been a stats export since 7/18. No response from the admins yet.
 
New optimized SETI apps:
New CUDA 10.2 v0.99 Mutex Special App
Both petri33 and Oddbjornik have given the OK to release this more publicly, so here it is!

This is the work sync mutex version of the famed Linux "special sauce" application, named v0.99. petri33 is the author of the bulk of the source code, with the work sync mutex function added by Oddbjornik; I only compiled it :). This app cuts out the load time of the next WU by pre-loading it while the first WU is still running, which saves 1-5 seconds, sometimes more, per WU. As such, it is slightly more productive overall than the regular v0.98 application.
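To make the mechanism concrete, here is a minimal shell sketch of the work-sync idea (an illustration only, not the actual app code; load_workunit and crunch_on_gpu are hypothetical stand-ins). With two tasks running, both load their data in parallel, but an exclusive lock ensures only one computes on the GPU at a time, so the GPU never sits idle waiting for a load:

#!/bin/bash
# Illustrative sketch of the work-sync mutex idea -- NOT the real app code.
# BOINC runs two of these at once (gpu_usage 0.5): each task loads its WU
# on the CPU in parallel, but takes an exclusive lock around the GPU phase,
# so one task is always preloaded and ready the instant the other finishes.
LOCKFILE=/tmp/gpu_worksync.lock            # assumed lock location
WU="$1"                                    # work unit file passed by the client

load_workunit "$WU"                        # hypothetical CPU-side load/setup
flock "$LOCKFILE" crunch_on_gpu "$WU"      # hypothetical GPU compute, serialized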

Download here:

Requirements/notes:
1. You must be running Linux; I built/tested this on Ubuntu 18.04.
2. You must have Nvidia driver version 440.xx+ installed; 440.36 is the latest driver at this time. This is a CUDA 10.2 requirement, not a Special app requirement. If you don't have a 440-series driver, the app will not run.
3. As always, make sure the executable is set to allow execution, or you will generate compute errors due to lack of permissions (a shell sketch covering steps 3-5 follows the note below).
4. This is the app ONLY. You need to either add this to the AIO package distributed by Tbar and edit the app_info.xml file appropriately, or add this to your BOINC directory for a repository install (if you've added the special app to your repo install previously, then you should already know where and how to put this).
5. For this to function as intended, you need to configure BOINC to run 2 WUs at a time on your GPU(s). You do this with an app_config.xml file like so:
<app_config>
    <app>
        <name>astropulse_v7</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>setiathome_v8</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

*Note: if you run AP tasks on your GPU and want them processed in FIFO fashion, you also need to configure 2x WUs for AP tasks (the astropulse_v7 entry above does this); otherwise they will never run, or will only run when they get close to timing out or when you have no MB/v8 tasks left to work on.
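For reference, here is a minimal shell sketch of steps 3-5 on a repository install (the data directory path and the app filename are assumptions; adjust them to match your own install):

# Assumed BOINC data directory for an Ubuntu repository install
cd /var/lib/boinc-client/projects/setiathome.berkeley.edu

# Step 3: give the downloaded app execute permission (filename is illustrative)
chmod +x setiathome_x41p_V0.99_x86_64-pc-linux-gnu_cuda102

# Step 5: after placing app_config.xml in this project directory,
# have the running client re-read its config files (same effect as
# "Options > Read config files" in the BOINC Manager)
boinccmd --read_cc_config

If boinccmd complains about authorization, try running it from the BOINC data directory so it can find the GUI RPC password file.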

The one downside:
Since you are running double the number of WUs, you will use roughly double the memory, on both the CPU and the GPU.

With two RTX 2080s running this, the system uses ~4GB of system RAM and ~3GB of memory per GPU, as the nvidia-smi output below shows:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.36       Driver Version: 440.36       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 2080    On   | 00000000:01:00.0  On |                  N/A |
| 65%   70C    P2   193W / 200W |   3254MiB /  7979MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 2080    On   | 00000000:03:00.0 Off |                  N/A |
| 50%   64C    P2   198W / 200W |   2832MiB /  7982MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       970      G   /usr/lib/xorg/Xorg                           240MiB |
|    0      1109      G   /usr/bin/gnome-shell                         195MiB |
|    0      1704      C   ./keepP2                                     111MiB |
|    0     19733      C   ...p_V0.99b1p3_x86_64-pc-linux-gnu_cuda102  1351MiB |
|    0     19793      C   ...p_V0.99b1p3_x86_64-pc-linux-gnu_cuda102  1351MiB |
|    1       970      G   /usr/lib/xorg/Xorg                             6MiB |
|    1      1705      C   ./keepP2                                     111MiB |
|    1     19692      C   ...p_V0.99b1p3_x86_64-pc-linux-gnu_cuda102  1351MiB |
|    1     19759      C   ...p_V0.99b1p3_x86_64-pc-linux-gnu_cuda102  1351MiB |
+-----------------------------------------------------------------------------+
 