Sztaki Desktop Grid

  • Thread starter Deleted member 88227
I guess we'll call it a draw. I spun up 40 EC2 single-core free-tier instances last night. I'd better shut them down before they start costing me money!

Great call to arms on this motqalden.

This was actually a lot of fun and I learned a bunch of stuff, so good on them for getting us more mobilized. I bet our enemies have a shit ton of VMs going. Can you picture how many VMs you could run on something like Fastgeek's monster 192-core machine?

This has also taught me that I need more RAM!!!
 
BTW I saw your score yesterday. You stole my half of the half of the pie! LOL.
 
Well... then it comes down to IO and overhead for having that many VMs. Definitely need RAID, SSDs, ramdisks, etc.
 
My one system could use 2 sticks of DDR3, but I also think I want another 16GB of DDR4 in my Ryzen. And here I thought 16 was plenty, since I was never using it. I will wait for a sale on that...
 
If this does continue beyond tomorrow, everyone should use up their 750 EC2 free tier hours. I'm hoping it resets for me on June 1st.

For my part, I need to figure out loading VM images in Linux. VirtualBox recognised the USB prior to boot but then wouldn't upon booting. Dunno why.
 
What size? Gilthanis, how big are your sticks? 2GB? I think I might have an 8GB pair in a system I retired. I'll check tomorrow.
 
My Sztaki task count was terrible today. They are all getting marked as valid and with credit, but even with all my VMs and a few systems I just can't get the tasks.

Whatever XtremeSystems is doing to pull down all the tasks and put up the huge numbers is impressive. I wish I knew what they were doing. They have some next level skills.
 
Just 2. It's a super cheap board I got after splitting with the ex. Tided me over for a year before I got a real computer again.
 
Well... I could provide a matching pair of 2GB sticks, more than likely. Just let me know via PM.
 
Yeah, I was at 3.5k two days ago and then 2.5k yesterday, and it seems I will be lucky to get 1.5k today... I think there are just more people hitting the servers?
 
Guess they are serious about shutting it down today. This is posted on their home page now in big bright red lettering.


"SZTAKI Desktop Grid is going to close down officially on May 31. We will try to keep the server open as long as possible afterwards if needed, so all in-progress work units can be finished. Next, the project home page will be replaced by a static site containing the description and results for the applications of the project, and will be updated when we get new results from the scientists. We would like to thank you for your support and dedication for SZDG throughout the years!"
 
That has been there for at least a week already. It was there when i joined the project.
 
Oh haha, goes to show how often I visit the main site.
 
So - we got this guy - WEP, CAS.

What's the priority on these 3?
 
This first.
CAS second.
WEP third.

Based on task availability.
 
We will waste too many resources trying to hold onto 1st in Rosetta. Some teams focus on it too hard. Let sicit have it. XS isn't even in the top 10 on that project anyway.
 
And now I have 23 "in progress" for SZTAKI ... why not earlier ... :mad: ... seems some people might have left the project, not expecting new assignments ... no complaints as long as we get the credits.
 
Seems like MN Scout has been killing it the last 2 days on this project.
I have 13 in progress right now.

Edit 2: Looks like we actually beat Ars for points last hour. lol, guessing we are not gonna catch up at this rate.
 
I was using the graphs in BOINC Manager and getting depressed because my Sztaki was only going up 100-200 a day. I just figured out it was displaying the graphs as "Host Total" and not "User Total". Oops.

I'm number 13 for RAC, but #3 in reality based on the total in the last day. I guess the 15 VMs paid off. Didn't get us 2nd though.

If we take the rate I'm producing and extrapolate out, we'd need another 30 "hosts" to reach the same production as Ars T. Then significantly more to overtake them.

My computer is at its limit. VirtualBox doesn't always boot the VM instances up properly. I have to try a few times to get them past the Ubuntu loading screen.
 
I'm not interested in spinning up more graphical Ubuntu instances. I would like to try to get a text-based Linux image that boots up, auto-runs BOINC, and automatically starts any scripts needed.
 
No need for it ... especially if you run only CPU-based projects.
For GPU-based projects, it seems enough to install a VNC server, which brings in enough of X to get the driver loaded. I still have one CentOS box running this way for FAH. Console only.
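For the CPU-only case, a console setup can be this simple. The sketch below assumes the Debian/Ubuntu `boinc-client` package and its `boinccmd` tool; `YOUR_KEY` is a placeholder for your own account key, and the helper function is just a hypothetical convenience so the URL and key live in one place.

```shell
#!/bin/sh
# Rough sketch of a console-only BOINC setup on Debian/Ubuntu.
# One-time install (commented out so this file stays side-effect free):
#   sudo apt-get install boinc-client
#
# Hypothetical helper that builds the project-attach command.
attach_cmd() {
  # $1 = project URL, $2 = account key (YOUR_KEY is a placeholder)
  printf 'boinccmd --project_attach %s %s' "$1" "$2"
}

attach_cmd 'http://szdg.lpds.sztaki.hu/szdg/' 'YOUR_KEY'
```

Running the printed command on the box attaches the headless client, with no X, Gnome, or KDE involved.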
 
Wouldn't it be freaking funny as hell if this project is left running long enough to catch XS's ~600k points?
 
They would see it and would need to leave Ars alone to defend themselves ... and we could pass the roadblock ...

But I’m running dry ...
 
So for Sztaki are you saying I don't need VMs for a CPU project like this? I'd certainly like to know a better way to get tons of "hosts".

I just configured 16 more Debian installs.
 
[Attached screenshot: runningVMS.png]
 
Ah, no ... a VM will be fine. Since you mentioned "graphical Ubuntu", I meant you could install just a text-based Linux and skip all that UI stuff like X, Gnome or KDE ... just plain and simple text.

In that case, installing BOINC from source might be easier, to avoid the standard install pulling back in a bunch of unwanted X/GUI stuff.
 
Not needing all the space for the GUI stuff should save HDD space as well.
 
Edit: The below will get you tasks. BUT if the project bases its awarded points on your CPU host's benchmark score, then this is a terrible thing to implement. BOINC appears to default to 1,000 million ops/sec, and Sztaki pays out based on the time taken and this ops number. If your CPU actually does 5,000 or 20,000 million ops/sec, then you'll be awarded a fraction of the points you deserve for the time taken to complete the task. For Sztaki, I see that whichever host reported the lowest requested point score for a WU is what gets awarded to you.
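To make the penalty concrete, here is the arithmetic only: assuming claimed credit scales with the client's benchmarked ops/sec times run time, a host stuck on the default benchmark claims a fixed fraction of what its real speed would earn. The numbers are hypothetical examples from the paragraph above, not measured Sztaki payouts.

```shell
#!/bin/sh
# Illustrative only: fraction of deserved credit claimed when the client
# reports a default benchmark instead of the CPU's real speed.
claimed_fraction() {
  # $1 = ops/sec the client reports, $2 = ops/sec the CPU really does
  awk -v reported="$1" -v real="$2" 'BEGIN { printf "%.2f\n", reported / real }'
}

claimed_fraction 1000 5000    # 1000-Mops default on a 5000-Mops CPU
claimed_fraction 1000 20000   # 1000-Mops default on a 20000-Mops CPU
```

So a 20,000-Mops machine on default benchmarks would claim a twentieth of its fair credit for the same wall-clock time.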

I just set up 50 Docker instances of the BOINC client. The computer is still really responsive because, even with 50, the tasks are coming in only 1 or 2 at a time, so maybe I should have done 100 or more.

I'm still perfecting this, and the whole setup has only been running for 2 hours. Need to work out how to persist this across shutdowns. This Docker stuff appears very powerful as a weapon for these challenges. I want to become a master.

So far I have the below steps that have to be done once:

My setup was on an Ubuntu install running on a dedicated computer with an Intel i5 2500K, not in a virtual machine. On Ubuntu, run each of these 3 commands in the terminal to set up Docker. These are the commands for installing Docker using the "convenience script". Instructions modified from here: https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-using-the-convenience-script
Code:
sudo apt-get install curl
curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh


Use the text editor in Ubuntu to make these script files in your home directory:
setupswarm.sh with the below:
Code:
#!/bin/bash
docker swarm init
docker network create -d overlay --attachable boinc

launch50clients.sh with the below:
Code:
#!/bin/bash
docker service create \
  --replicas 50 \
  --name boinc \
  --network=boinc \
  -p 31416 \
  -e BOINC_GUI_RPC_PASSWORD="123" \
  -e BOINC_CMD_LINE_OPTIONS="--allow_remote_gui_rpc" \
  boinc/client
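If 50 replicas turn out to be too few (or too many), the service can be resized in place rather than recreated; `docker service scale` is the real subcommand for this. The helper below just builds the command string so it can be shown safely here; the count of 100 is an example, not a recommendation.

```shell
#!/bin/sh
# Hypothetical helper: build the command that resizes the running
# "boinc" swarm service to a new replica count.
scale_boinc() {
  printf 'docker service scale boinc=%s\n' "$1"
}

# Print the command; run it on the swarm manager to apply it.
scale_boinc 100
```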

Find your Account key at the SztakiWebsite: http://szdg.lpds.sztaki.hu/szdg/home.php and replace the #### in the below addsztakitoswarm.sh file with your key:
Code:
#!/bin/bash
docker run --rm --network boinc boinc/client boinccmd_swarm --passwd 123 --project_attach http://szdg.lpds.sztaki.hu/szdg/ ################

Now in the terminal we are going to make each of those scripts executable. Type each line and press Enter.
Code:
chmod +x setupswarm.sh
chmod +x launch50clients.sh
chmod +x addsztakitoswarm.sh

That is the end of the steps that only need to be done once. Now we need to run the scripts. The below works, but it needs to be perfected and persistence set up. As far as I know right now, the below steps would need to be repeated each time your computer starts up:

In the terminal, type each of these lines one at a time and press Enter:
Code:
./setupswarm.sh
./launch50clients.sh      <-- This will print a progress count up to 50 and report that the service converged.
./addsztakitoswarm.sh

Adding the project is quick, but it takes a bit for the project to download on all of them and get set up under your user account.

Run the below Docker command to see the projects that are currently attached to all 50 instances at once. This should print text for each of the 50 instances, and your username and team should be populated for each one:
Code:
docker run --rm --network boinc boinc/client boinccmd_swarm --passwd 123 --get_project_status
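A quick way to tally that output: `boinccmd` prints one "master URL:" line per attached project, so counting those lines tells you how many instances are attached. Sample text is piped in below just to demonstrate the counting; in practice you would pipe the `boinccmd_swarm` output from the command above instead.

```shell
#!/bin/sh
# Count attached clients by counting "master URL" lines in the
# combined project-status output.
count_attached() {
  grep -c 'master URL'
}

printf '   master URL: http://szdg.lpds.sztaki.hu/szdg/\n   master URL: http://szdg.lpds.sztaki.hu/szdg/\n' | count_attached
```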

We can check on the currently running tasks for all these instances using:
Code:
docker run --rm --network boinc boinc/client boinccmd_swarm --passwd 123 --get_tasks

For Sztaki we need to ping the project servers every 140 seconds, so I've modified the watch command that has been posted elsewhere. In a new terminal window, run this command:
Code:
watch -n 140 docker run --rm --network boinc boinc/client boinccmd_swarm --passwd 123 --project http://szdg.lpds.sztaki.hu/szdg/ update
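For systems without watch(1), the same periodic update can be written as a plain loop. The docker command is copied from the one-liner above, and 140 seconds matches the interval used there; this is a sketch, wrapped in a function so nothing runs until you call it.

```shell
#!/bin/sh
# Plain-loop equivalent of the watch one-liner above: poke all swarm
# clients to update the Sztaki project, then sleep 140 seconds, forever.
update_loop() {
  while true; do
    docker run --rm --network boinc boinc/client boinccmd_swarm \
      --passwd 123 --project http://szdg.lpds.sztaki.hu/szdg/ update
    sleep 140
  done
}
```

Call `update_loop` in a spare terminal (or under nohup) to start polling.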

You can run the --get_project_status command from above to see how many successful tasks each of the instances has had.
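On the persistence question raised above: as I understand it, swarm state (including the service definition) is kept under /var/lib/docker/swarm, and the swarm orchestrator restarts service tasks when the daemon comes back up. So on a systemd-based Ubuntu, the one boot-time step may simply be enabling the Docker daemon itself. This is a sketch I have not verified on this exact setup; the command is printed rather than executed here, since it changes system state and needs root.

```shell
#!/bin/sh
# Print the single systemd step that may be all the persistence needed,
# assuming swarm services survive daemon restarts on their own.
persist_step() {
  printf 'sudo systemctl enable docker\n'
}

persist_step
```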



Helpful websites where I found the above general information and tried to make it work for me:
The BOINC Docker we are using:
https://hub.docker.com/r/boinc/client/
The Docker page on installing it in Ubuntu:
https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-using-the-convenience-script
The Boinccmd documentation wiki:
https://boinc.berkeley.edu/wiki/Boinccmd_tool
GitHub page for what I believe is the user who made the Docker image that is on Docker Hub:
https://github.com/marius311/boinc-client-docker
Blog post about how to make your own Docker image for BOINC:
https://rsmitty.github.io/Containerizing-The-Grid/
 
Let me know when you are done testing and have perfected your setup. Then you should put up a full how-to guide in the guides section for future reference. Docker is something I would love to learn more about, but I have had less than zero time to dedicate to learning it.
 