SETI & SETI BETA

Gilthanis

[H]ard|DCer of the Year - 2014
Joined Jan 29, 2006 · Messages 8,718
For those looking to run optimized apps, you can find them here: Seti@Home optimized science apps and information

We also have a challenge on 8/15/14-8/29/14

I am adding this app_config info to the first post so that anyone who comes here has easy access. It allows two tasks to run on your GPU at a time. You can change the numbers to run more, but it is up to you to decide what maximizes your points on your hardware:

<app_config>
<app>
<name>astropulse_v7</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.3</cpu_usage>
</gpu_versions>
</app>
<app>
<name>setiathome_v7</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.3</cpu_usage>
</gpu_versions>
</app>
<app>
<name>setiathome_v8</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.5</cpu_usage>
</gpu_versions>
</app>
</app_config>
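A quick way to sanity-check an app_config.xml before dropping it into the project directory is to parse it and confirm the numbers mean what you intend (gpu_usage of 0.5 means two tasks share one GPU). This is just a hedged sketch; the `tasks_per_gpu` helper is a made-up name for illustration, not part of BOINC:

```python
# Hedged sketch: parse an app_config.xml fragment and report how many tasks
# will run per GPU (1 / gpu_usage). tasks_per_gpu is a made-up helper name.
import xml.etree.ElementTree as ET

APP_CONFIG = """<app_config>
<app>
<name>setiathome_v8</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.5</cpu_usage>
</gpu_versions>
</app>
</app_config>"""

def tasks_per_gpu(xml_text):
    root = ET.fromstring(xml_text)  # raises ParseError if the XML is malformed
    return {app.findtext("name"): int(1 / float(app.findtext("gpu_versions/gpu_usage")))
            for app in root.findall("app")}

print(tasks_per_gpu(APP_CONFIG))  # → {'setiathome_v8': 2}
```

If the file has a stray second root or an unclosed tag, the parse fails immediately, which is cheaper than waiting for the client to silently ignore the file.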

Edit: I created an account called HardOCPtest so that anyone who wanted to test or contribute anonymously could do so.
SETI
• To attach a computer to your account without using the BOINC Manager: install BOINC, create a file named account_setiathome.berkeley.edu.xml in the BOINC data directory, and set its contents to:
<account>
<master_url>http://setiathome.berkeley.edu/</master_url>
<authenticator>10047042_37571956872286d7bb338dd0ca7a693b</authenticator>
</account>

SETI BETA
• To attach a computer to your account without using the BOINC Manager: install BOINC, create a file named account_setiweb.ssl.berkeley.edu_beta.xml in the BOINC data directory, and set its contents to:
<account>
<master_url>http://setiweb.ssl.berkeley.edu/beta/</master_url>
<authenticator>26285_3b01b7aa8efe59344accd5f76d6e589b</authenticator>
</account>

SETI
teamdaily.php


SETI BETA
teamdaily.php


Edit again: I edited the app_config above to reflect the move from v7 to v8. Once v7 is completely finished, you can remove it from the app_config.
 
Signed up for the WOW event and it worked for the first night.
Since this morning no new assignments have come my way. I added GPUGrid as a cross-check, and that works OK.

Any idea why SETI doesn't give me work continuously?
Ubuntu 14.04, CUDA 6, two GTX 780s (one in use for something else)

Since I'm rather new to BOINC (did a few CPU tasks last year and at the beginning of this year), any help and guidance is appreciated!
 
Can you take a look at the event log and post the first 30 or so lines? I would recommend doing it from a fresh client start and would also do a manual update of SETI with all other projects suspended so that SETI is the only project requesting work. This will give us clues as to what is going on and your system setup.
 
This is the latest try... it identifies my GPUs, and one is excluded by config, but it's still not giving any work to the other nV card anymore. Funny enough, it was OK the first day.

Code:
Sun Aug 17 15:17:21 2014 |  | Starting BOINC client version 7.2.42 for x86_64-pc-linux-gnu
Sun Aug 17 15:17:21 2014 |  | log flags: file_xfer, sched_ops, task
Sun Aug 17 15:17:21 2014 |  | Libraries: libcurl/7.35.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
Sun Aug 17 15:17:21 2014 |  | Data directory: /var/lib/boinc-client
Sun Aug 17 15:17:21 2014 |  | CUDA: NVIDIA GPU 0: GeForce GTX 780 (driver version unknown, CUDA version 6.0, compute capability 3.5, 3072MB, 2746MB available, 4154 GFLOPS peak)
Sun Aug 17 15:17:21 2014 |  | CUDA: NVIDIA GPU 1 (ignored by config): GeForce GTX 780 (driver version unknown, CUDA version 6.0, compute capability 3.5, 3072MB, 2910MB available, 4154 GFLOPS peak)
Sun Aug 17 15:17:21 2014 |  | OpenCL: NVIDIA GPU 0: GeForce GTX 780 (driver version 331.62, device version OpenCL 1.1 CUDA, 3072MB, 2746MB available, 4154 GFLOPS peak)
Sun Aug 17 15:17:21 2014 |  | OpenCL: NVIDIA GPU 1 (ignored by config): GeForce GTX 780 (driver version 331.62, device version OpenCL 1.1 CUDA, 3072MB, 2910MB available, 4154 GFLOPS peak)
Sun Aug 17 15:17:21 2014 |  | Host name: linuxpowered
Sun Aug 17 15:17:21 2014 |  | Processor: 8 GenuineIntel Intel(R) Core(TM) i7-2600S CPU @ 2.80GHz [Family 6 Model 42 Stepping 7]
Sun Aug 17 15:17:21 2014 |  | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
Sun Aug 17 15:17:21 2014 |  | OS: Linux: 3.13.0-29-generic
Sun Aug 17 15:17:21 2014 |  | Memory: 7.74 GB physical, 27.00 GB virtual
Sun Aug 17 15:17:21 2014 |  | Disk: 82.38 GB total, 71.10 GB free
Sun Aug 17 15:17:21 2014 |  | Local time is UTC +9 hours
Sun Aug 17 15:17:21 2014 |  | Config: ignoring NVIDIA GPU 1
Sun Aug 17 15:17:21 2014 |  | Config: GUI RPCs allowed from:
Sun Aug 17 15:17:21 2014 |  | 192.xx.xx.xx
Sun Aug 17 15:17:21 2014 | SETI@home | URL http://setiathome.berkeley.edu/; Computer ID 7361513; resource share 400
Sun Aug 17 15:17:21 2014 | SETI@home | General prefs: from SETI@home (last modified 17-Aug-2014 00:10:06)
Sun Aug 17 15:17:21 2014 | SETI@home | Host location: none
Sun Aug 17 15:17:21 2014 | SETI@home | General prefs: using your defaults
Sun Aug 17 15:17:21 2014 |  | Preferences:
Sun Aug 17 15:17:21 2014 |  | max memory usage when active: 3964.22MB
Sun Aug 17 15:17:21 2014 |  | max memory usage when idle: 7135.60MB
Sun Aug 17 15:17:21 2014 |  | max disk usage: 41.19GB
Sun Aug 17 15:17:21 2014 |  | suspend work if non-BOINC CPU load exceeds 88%
Sun Aug 17 15:17:21 2014 |  | (to change preferences, visit a project web site or select Preferences in the Manager)
Sun Aug 17 15:17:21 2014 |  | gui_rpc_auth.cfg is empty - no GUI RPC password protection
Sun Aug 17 15:17:21 2014 |  | Not using a proxy
Sun Aug 17 15:20:55 2014 | SETI@home | update requested by user
Sun Aug 17 15:20:59 2014 | SETI@home | Sending scheduler request: Requested by user.
Sun Aug 17 15:20:59 2014 | SETI@home | Not requesting tasks: don't need
Sun Aug 17 15:21:01 2014 | SETI@home | Scheduler request completed
Sun Aug 17 15:26:07 2014 | SETI@home | Sending scheduler request: To fetch work.
Sun Aug 17 15:26:07 2014 | SETI@home | Requesting new tasks for NVIDIA
Sun Aug 17 15:26:09 2014 | SETI@home | Scheduler request completed: got 0 new tasks

Are you guys crunching with CPU only, or also GPU (AMD or nV)?
 
Once I allowed CPU usage too, it gave me a bunch of new assignments... 100 or so... but still nothing for the nV cards.
 
Try detaching from all other GPU capable projects. Then see if it will pull work. It could be a resource share issue where it owes time to other projects. Also, double check your preferences on the SETI website to make sure nothing got changed there.
 
Thanks; I dropped in there; let's see if I get some info.

I stopped all other folding processes; both GTX 780s would be fully available for SETI, but I still get only CPU work.

Code:
Mon Aug 18 20:22:28 2014 |  | Starting BOINC client version 7.2.42 for x86_64-pc-linux-gnu
Mon Aug 18 20:22:28 2014 |  | log flags: file_xfer, sched_ops, task
Mon Aug 18 20:22:28 2014 |  | Libraries: libcurl/7.35.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
Mon Aug 18 20:22:28 2014 |  | Data directory: /var/lib/boinc-client
Mon Aug 18 20:22:28 2014 |  | CUDA: NVIDIA GPU 0: GeForce GTX 780 (driver version unknown, CUDA version 6.0, compute capability 3.5, 3072MB, 2746MB available, 4154 GFLOPS peak)
Mon Aug 18 20:22:28 2014 |  | CUDA: NVIDIA GPU 1: GeForce GTX 780 (driver version unknown, CUDA version 6.0, compute capability 3.5, 3072MB, 2986MB available, 4154 GFLOPS peak)
Mon Aug 18 20:22:28 2014 |  | OpenCL: NVIDIA GPU 0: GeForce GTX 780 (driver version 331.62, device version OpenCL 1.1 CUDA, 3072MB, 2746MB available, 4154 GFLOPS peak)
Mon Aug 18 20:22:28 2014 |  | OpenCL: NVIDIA GPU 1: GeForce GTX 780 (driver version 331.62, device version OpenCL 1.1 CUDA, 3072MB, 2986MB available, 4154 GFLOPS peak)
Mon Aug 18 20:22:28 2014 |  | Host name: linuxpowered
Mon Aug 18 20:22:28 2014 |  | Processor: 8 GenuineIntel Intel(R) Core(TM) i7-2600S CPU @ 2.80GHz [Family 6 Model 42 Stepping 7]
Mon Aug 18 20:22:28 2014 |  | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
Mon Aug 18 20:22:28 2014 |  | OS: Linux: 3.13.0-29-generic
Mon Aug 18 20:22:28 2014 |  | Memory: 7.74 GB physical, 27.00 GB virtual
Mon Aug 18 20:22:28 2014 |  | Disk: 82.38 GB total, 71.17 GB free
Mon Aug 18 20:22:28 2014 |  | Local time is UTC +9 hours
Mon Aug 18 20:22:28 2014 |  | Config: GUI RPCs allowed from:
Mon Aug 18 20:22:28 2014 |  | 192.xxx.xxx.xxx
Mon Aug 18 20:22:28 2014 | SETI@home | URL http://setiathome.berkeley.edu/; Computer ID 7361513; resource share 400
Mon Aug 18 20:22:28 2014 | SETI@home | General prefs: from SETI@home (last modified 17-Aug-2014 00:10:06)
Mon Aug 18 20:22:28 2014 | SETI@home | Host location: none
Mon Aug 18 20:22:28 2014 | SETI@home | General prefs: using your defaults
Mon Aug 18 20:22:28 2014 |  | Reading preferences override file
Mon Aug 18 20:22:28 2014 |  | Preferences:
Mon Aug 18 20:22:28 2014 |  | max memory usage when active: 3964.22MB
Mon Aug 18 20:22:28 2014 |  | max memory usage when idle: 7135.60MB
Mon Aug 18 20:22:28 2014 |  | max disk usage: 41.19GB
Mon Aug 18 20:22:28 2014 |  | suspend work if non-BOINC CPU load exceeds 88%
Mon Aug 18 20:22:28 2014 |  | (to change preferences, visit a project web site or select Preferences in the Manager)
Mon Aug 18 20:22:28 2014 |  | gui_rpc_auth.cfg is empty - no GUI RPC password protection
Mon Aug 18 20:22:28 2014 |  | Not using a proxy
Mon Aug 18 20:23:56 2014 | SETI@home | update requested by user
Mon Aug 18 20:24:00 2014 | SETI@home | Sending scheduler request: Requested by user.
Mon Aug 18 20:24:00 2014 | SETI@home | Not requesting tasks: "no new tasks" requested via Manager
Mon Aug 18 20:24:04 2014 | SETI@home | Scheduler request completed
Mon Aug 18 20:24:12 2014 | SETI@home | work fetch resumed by user
Mon Aug 18 20:24:17 2014 | SETI@home | update requested by user
Mon Aug 18 20:24:19 2014 | SETI@home | Sending scheduler request: Requested by user.
Mon Aug 18 20:24:19 2014 | SETI@home | Requesting new tasks for CPU and NVIDIA
Mon Aug 18 20:24:21 2014 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Aug 18 20:24:21 2014 | SETI@home | Not sending work - last request too recent: 17 sec
Mon Aug 18 20:29:28 2014 | SETI@home | Sending scheduler request: To fetch work.
Mon Aug 18 20:29:28 2014 | SETI@home | Requesting new tasks for CPU and NVIDIA
Mon Aug 18 20:29:30 2014 | SETI@home | Scheduler request completed: got 45 new tasks
Mon Aug 18 20:29:32 2014 | SETI@home | Started download of 15se08aa.10825.23385.438086664197.12.189.vlar
Mon Aug 18 20:29:32 2014 | SETI@home | Started download of 15no08ab.13917.19295.438086664201.12.220
Mon Aug 18 20:29:35 2014 | SETI@home | Finished download of 15se08aa.10825.23385.438086664197.12.189.vlar
...
Mon Aug 18 20:30:19 2014 | SETI@home | Finished download of 15se08aa.10825.23385.438086664197.12.253.vlar
Mon Aug 18 20:30:24 2014 | SETI@home | Computation for task 03fe09ae.2433.111158.438086664207.12.122_0 finished
Mon Aug 18 20:30:24 2014 | SETI@home | Starting task 03fe09ae.2433.111158.438086664207.12.92_1
Mon Aug 18 20:30:26 2014 | SETI@home | Started upload of 03fe09ae.2433.111158.438086664207.12.122_0_0
Mon Aug 18 20:30:30 2014 | SETI@home | Finished upload of 03fe09ae.2433.111158.438086664207.12.122_0_0
 
Looks like SETI does not have a CUDA app for Linux for MB WU's, only for AP WU's. The problem is there are no AP WU's available at this time. If you were on Windows, you would have both MB and AP WU's available to you, so you could crunch MB WU's when there are no AP WU's available. Looks like this is one of the drawbacks of running Linux, unfortunately.
 
Seems my combo is not a good fit with SETI... I did get a small number of OpenCL WUs, though. But yeah, nothing I can do; I'll try to get as much as possible done during the WOW event. It won't bring us to the podium. :(

Works great for FAH or GPUGrid... Back to proteins in 10 days.
 
Poem is another good medical project you could try. They have an OpenCL app for Nvidia GPUs on Linux. You could easily run 2 WUs at a time on each of your 780s if you use an app_config.xml file, as well. They don't always have GPU work, though.

If you are interested in space projects, there are also Einstein and Milkyway. I believe both should have Linux apps. Asteroids is another one with Nvidia work.

If you're interested in cryptography, there is Moo! Wrapper, which is trying to crack an RC5-72 encrypted message by brute-forcing every key combination.

Finally, if you have any interest in mathematics, there is PrimeGrid. Their PPS Sieve project runs like gangbusters on Nvidia GPUs.

Probably more info than you were looking for, but I wanted to let you know about all the GPU projects you can try out with your Nvidia GPUs.
 
You could also always run SETI BETA or Albert :p Then of course if you don't care about efficiencies, there is BitcoinUtopia.

However, what I would suggest is to also attach to GPUGrid, because they have good bio/medical research and score pretty well. They will occasionally run out of work, but if you are attached to a few projects you should be fine with staying busy.
 
Thanks, guys, for the suggestions; the original goal was to participate in the WOW! event... but I'm really running dry on WUs for a sufficient contribution. I don't have Windows, as I prefer Linux for DC projects. So I'll just keep my CPU running on whatever it gets. Hopefully there will be a similar challenge on GPUGrid; that worked well for me in the breaks. I'll try Poem again too...
 
Hurrah, finally SETI sent me some ap* packages for opencl_nvidia_100.

[H]appy camper here. OK, now I just have to finish some WUs from another DC project, but then I'll try to catch up as long as the WUs keep coming in.
 
Yes...pretty much every Tuesday is maintenance day. So, if you are wanting to guarantee the work units, make sure to raise your cache...
 
One thing I like about SETI: the temps are much lower compared to other DC projects:



Very summer friendly; fan fixed at 80%, GTX 780s for both GPUs (red line).
Blue lines are i7-2600S cores on AIO water cooling, folding some proteins.
 
How's the core usage though?
 
CPU: 6 cores for FAH, the rest for SETI and Ubuntu.



For the last few minutes, since 20:50, GPUGrid has been running and temps are higher again.
 
Yeah... GPUGrid tends to stress cards in ways other projects don't, and the demand on the card differs with each batch too. For a while there I had to underclock a few GPUs because they would error out if I didn't; since then, they are back to stock settings. Some projects deliberately avoid fully utilizing the hardware to lessen the chance of degrading the user experience.
 
Neither Seti nor Seti Beta has updated any stats since 11/3. I tried poking around the Seti site and message boards but didn't find anything about this issue. Does anyone know what's going on?
 
they lost Bruno...one of their main servers, last weekend
 
I'm guessing it has something to do with the post below made by Eric Korpela from Seti, but there is no mention of stats exports:

Here's a long overdue status update that will hopefully answer some of the questions you may have about the last week.

    1. Bruno started hanging with a "stuck cpu" Linux kernel message. I don't know what causes this sort of thing. Moving all the services except uploads to other machines seems to have solved the problem, so far. Next week we're planning to replace Bruno with a Sun X4540 that the Lab was removing from service.

    2. Around the same time, the Astropulse assimilators started failing with the message "-603 Cannot close TEXT or BYTE value." Turns out we had run up against another Informix limit. I'm resolving that, but it looks like we are only 1/8th of the way done after 24 hours. Until it's done we won't be able to generate Astropulse work.

    3. New data. Finally, in the last few weeks, we've gotten some data from 2014 split and out the door. Not a lot, though. There are a few reasons why it took so long. First, Arecibo itself and the ALFA receiver that we use for SETI@home were offline for much of 2014 (mostly January to June), so we don't have that much data. Second, because of funding, astronomy isn't top dog at Arecibo anymore, so astronomers get a smaller fraction of the observing time. Compounding that, disks are bigger and we don't send a box until it's full (to save on shipping costs), so it takes longer to fill a box of disks. Which brings us to...

    4. Old data. We have been working on old data. It's not a "make work" thing. Most of the data we've been sending had a problem the first time around: either part of the data was left unprocessed, or the results were questionable. In addition, none of the old data being sent had been processed with S@H v7, so no autocorrelation analysis had been done on it. We've still got big chunks of data that have never been processed with Astropulse to send out. We don't believe in making work for the sake of making work.

    5. Will we run out of data? It depends upon what you mean by that. We may run out of SETI@home data taken by the current data recorder, although there is still plenty of Astropulse data to process. Jeff is prioritizing the GBT data splitter, so we hope to have that online before too long. It will also be the starting point for the next thing, which will be to use SERENDIP VI as a data recorder. It should be capable of much higher data rates (GBps) than the current recorder, and therefore much higher bandwidths. It should also give us our first taste of the 327 MHz Sky Survey data.
Hope that answers some of your questions.
 
They failed to provide any work for about 30 hours; at least the heaters are working again.
 
aaaaaaaaand they're gone

SETI@home: Running out of workunits
We are likely to (temporarily) run out of workunits to process as early as this weekend...We are currently solving various database issues, developing a new splitter so we can start processing out new Green Bank Telescope data, and hunting for tapes that haven't fully been processed to get out of the current drought.

Had to switch over to Rosetta to avoid freezing to death.
 
Yeah... SETI has lots of server issues. So, make sure to keep a backup project or two if you plan on running them. Also, make sure to run a larger cache. That is what most dedicated SETI DC'ers do.

However, we are part of a WCG challenge right now. Why not jump over there and help with a few work units? Just make sure you attach to HardOCP and not [H]ard|OCP. ;)
 
The new data from the Arecibo observatory has been processed, and SETI started busting out new units about an hour ago, approximately 06:40 UT Saturday.
 
SETI Beta still hasn't exported any stats since 11/3 and a thread about this on their forum hasn't received any response from the admins. I have now removed this project from all of my machines until they can get their act together. I increased my work share at SETI to make up the difference.

Patience has never been one of my virtues. :rolleyes:
 
Pretty nice achievement for SETI. Those work units can sometimes take months to validate...lol
 
Pretty nice achievement for SETI. Those work units can sometimes take months to validate...lol

No kidding. I have been running SETI for about a day or two to load up the queue and then switch over to WCG for 3 or 4 days while the SETI queue drains down. Then I repeat the cycle.
 
How do you have your cache settings and your priority settings for the projects? Are you just running SETI on GPU or CPU as well?
 
How do you have your cache settings and your priority settings for the projects? Are you just running SETI on GPU or CPU as well?

Just running CPUs, as there do not appear to be any GPU tasks for Linux. I have the cache set up for 2 days + 2 days reserve. My two 2Ps can burn through that work, about 500 tasks, in about 1.5 days; my 4P burns through its share in about the same time frame. After about a day of processing, I set the project not to allow new tasks. When the work queue roughly matches the number of CPUs/threads, I update another project to accept new tasks. There is about 6 to 8 hours of overlap before the other project has all resources. Most likely not the most elegant approach, but it works :)
 
Well...not knowing why you have your settings the way you do... here are a few suggestions.

If SETI is your primary project, you can count on their servers being down every week on Monday (I think, or it is Tuesday; either way it is weekly, same bat time, same bat channel). If you are trying to cache enough work, just keep running a 2-3 day cache. But if you want WCG to be only a backup project for when you run out of work, change its resource share from the default 100 to 0. What that does is tell the client to pull work from WCG only when no work is available from any other project. Then when SETI has work again, you will start pulling work from them again; it will of course finish the work in progress according to deadlines and such. You can imagine the time it will save you from having to manually cache up each week and tweak settings.

However, if you are wanting to run WCG more, it really just depends on how you want it to play out. You can set the resource shares so that work balances accordingly. For example, if you set SETI to 300 and WCG to 100, SETI will run three times as much work (based on wall clock) as WCG. It won't guarantee 3 SETI and 1 WCG tasks running at all times; it just balances the amount of time given to the projects. If a project is down when it is time for BOINC to call for more work, the client will keep chugging away at the other, accumulating a "debt". Then when work is available again, it will make up the difference accordingly.
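The share arithmetic above can be sketched in a few lines. This is a hedged illustration only: `share_fractions` is a made-up helper name, and the real BOINC scheduler also tracks accumulated debt over time, which this ignores.

```python
# Hedged sketch: how BOINC-style resource shares map to wall-clock fractions.
# share_fractions is a made-up name; real BOINC also tracks long-term debt.
def share_fractions(shares):
    total = sum(shares.values())
    return {project: share / total for project, share in shares.items()}

print(share_fractions({"SETI": 300, "WCG": 100}))  # → {'SETI': 0.75, 'WCG': 0.25}
```

So with 300 vs. 100, SETI gets roughly three quarters of the wall-clock time over the long run, not a fixed 3:1 count of running tasks at any instant.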

Let me know if this helps, if you need assistance, or if you had something completely different in mind. :)
 
Team name has been changed to [H]ard|OCP. You should not have to do anything if you were previously part of HardOCP.
 
Don't seem to have picked up any actives with the change, still only 5 of us turning in work.

My output will be much lower for Northern hemisphere summer
 
I don't believe there were any actives crunching SETI at the other team. However, this will at least get us a unified team name almost across the board. Only 3 projects left. One of which (WCG) will not be changed for sure as they have refused to assist in this ordeal. The only option we would have there is to abandon the one we use and switch to one we don't. Getting the masses to manually change over is highly unlikely.

Edit: it also means that if there is a challenge through BOINCStats, I can sign us up for it there...
 
I updated the link to the Lunatics optimized apps in the first post. Apparently KWSN lost their team founder and owner of the site, so they had to switch domains. There is also a newer release of the optimized apps package than when this thread was started, so people should check whether they have the latest. This is a good time to start testing if you have any plans of joining the Wow! event coming up in a month.
 