Distributed Computing on Raspberry Pi

I've long thought about getting a Pi to tinker with but have never done so. I'm curious about your experience with the aforementioned FLIRC Pi 4 case. Does it provide enough passive cooling to run all four cores on WCG (or other DC projects) 24/7? As I started searching for Pi 4 cases and reading reviews, I saw a lot of mixed opinions about the amount of heat these boards generate and the cooling these cases provide.
The Flirc case easily passively cools a stock Pi 4. If you want to push over_voltage=5 and 2075 MHz, you'll want to pop off the plastic top of the Flirc case, as that nets you a sizable drop in temps.
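
For reference, those settings live in /boot/config.txt - a rough sketch of what I mean (these are the values I push on my boards and aren't guaranteed stable on every Pi):

# /boot/config.txt overclock sketch for a Pi 4 (stock is 1500 MHz)
over_voltage=5      # bump core voltage to hold the higher clock
arm_freq=2075       # target CPU clock in MHz
# after a reboot, keep an eye on temps and throttling:
#   vcgencmd measure_temp
#   vcgencmd get_throttled    # 0x0 means no throttling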

Do note that I run 10 Pis in my basement, split between Rosetta and Open Pandemics. The basement rarely gets above 68F, so YMMV on passive cooling in warmer ambient temps.
 
A short update on working with Pis - I've been spending some time trying to figure out network boot with iSCSI targets for remote storage instead of NFS. It's been slow going, as I can usually only work on it at night when I'm already dead tired, but I seem to be making progress on that front (now my test Pi boots, then hangs).

Today I had a little extra time and decided to work on something different - I cut down everything running on Pi OS Lite and knocked out a number of services to free up both memory and CPU time. I removed (I think) around 50 MB of RAM usage and reduced CPU overhead with the following changes (roughly the commands sketched after the list):

1) Set a static IP and disable dhcpcd
2) Disable wpa_supplicant (used for wifi, which I had already disabled)
3) Disable audio and remove the drivers and associated software
4) Disable triggerhappy (the hotkey daemon that handles things like keyboard volume keys)
5) Disable avahi-daemon (I don't need to advertise this host to the network)
6) Migrate cron jobs to systemd timers and disable the cron service
7) Disable the VC4 DRM driver and reduce framebuffers to 0
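
Roughly, the commands behind that list look like this - a sketch from memory, so double-check the exact service names on your Pi OS release (systemctl list-unit-files will show what's actually enabled):

# stop and disable services a headless cruncher doesn't need
sudo systemctl disable --now dhcpcd          # after giving the Pi a static IP another way
sudo systemctl disable --now wpa_supplicant  # wifi is already disabled
sudo systemctl disable --now triggerhappy    # hotkey daemon
sudo systemctl disable --now avahi-daemon    # mDNS / network advertising
sudo systemctl disable --now cron            # after moving jobs to systemd timers

# /boot/config.txt changes for audio, the VC4 DRM driver and framebuffers
#   dtparam=audio=off
#   max_framebuffers=0
#   (comment out any dtoverlay=vc4-kms-v3d / vc4-fkms-v3d line)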

I still have a fair amount of RAM in use at idle (126 MB) - some of it tied to NFS, which should go away once I switch to iSCSI, but I'll take any improvement I can get. If anyone is interested in helping trim down the Pi OS Lite install, or in doing further research on disabling hardware to reduce Pi power usage, I'd love to have some help!
 
I also got myself a Pi 4 with 8GB RAM. Nice case too. Running WCG 24/7 it hardly ever goes over 40°C.

And it connected easily to my 5GHz WiFi. Nice little box - I like what I see. Nine hours is a bit long, but why not. One of those fire-and-forget boxen.

Next time I will reinstall with the server OS only and skip the GUI. Waste of CPU cycles.

Or maybe one Pi with a UI and three Pis with CLI only ... something like that.
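
In the meantime, if you just want the current install to boot to a console instead of the desktop, something like this should do it (a quick sketch - the boot option in raspi-config does the same thing interactively):

sudo systemctl set-default multi-user.target   # text console on boot, no desktop
sudo reboot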

 
I really don’t know what the Pi-Cores are doing while I’m sleeping. And I don’t want to know. But there is a second one ... mysterious ...

The one with the Ethernet cable will get ESXi when I find a bit of time and a bigger USB stick or SSD. Until then it's a headless BOINC Pi (once I remove the HDMI cable).
 
I’ve found Pis just keep replicating if you aren’t careful. Somehow there are more than 16 in my basement now...
 
I hear that putting rabbit stickers on them helps...
 
It's happened again ... I was out shooting my archery competitions over the weekend, and when I came home there were more of them ...

This time only the 2GB version, as that looks like enough RAM for a dedicated WCG cruncher right now; 4GB might have been better, but I saved a few thousand yen this way.

What is the best way to stack these, and still have active cooling via a fan?

Any experience with booting these from USB / SSD? Or booting over the network?
I envision one Pi with an SSD serving as the bootstrap server for the rest of the pack.
 
The 2GB version is plenty for several projects - WCG Open Pandemics, Einstein, and Asteroids for sure, and there are probably several others it would be enough for as well. I'm currently about half and half between 2GB and 4GB Pis.

For stacking, I started with FLIRC passive aluminum cases, just stacking them one on top of another with a 120mm fan blowing at the whole stack. I've since switched to the same stacking case The Heretic used earlier in the thread - see this post: https://hardforum.com/threads/distributed-computing-on-raspberry-pi.1997998/post-1044647824. I went with the 8-stack myself.

SSD booting the Pi 4 was still in beta when I checked a few months ago. I would check the official raspberry pi forum if you want to go that direction.

For booting over the network, I initially set up an NFS share on my FreeNAS and set the Pis to boot over NFS, with the root filesystem on NFS. That was mediocre - if a Pi swaps and tries to write data to the NFS share, it tends to crash after a few hundred MB. I've since switched to setting up my FreeNAS with iSCSI mounts and booting the Pis over iSCSI - check this guide: https://shawnwilsher.com/2020/05/network-booting-a-raspberry-pi-4-with-an-iscsi-root-via-freenas/ The guide is slightly out of date regarding the firmware options you have to set for network boot, so you'll have to read this page to get the correct values for boot order: https://www.raspberrypi.org/documentation/hardware/raspberrypi/bcm2711_bootloader_config.md
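
For reference, the setting in question is BOOT_ORDER in the Pi 4's bootloader EEPROM config. This is only a sketch from memory - the nibble meanings have changed between firmware releases, so verify against the bcm2711 bootloader page linked above:

sudo rpi-eeprom-config --edit      # opens the current EEPROM config in an editor
# then set something like (digits are tried right to left):
#   BOOT_ORDER=0xf21   # SD card (1) first, then network (2), then restart (f) and retry
#   BOOT_ORDER=0xf12   # network first, SD card as fallback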

Power-wise, for multiple Pis, I started with 6-port Anker USB chargers. Now that I'm at, uhh, 17 Pis, I think I'm going to switch to Power over Ethernet if I add any more. The ease of wiring, plus the higher-efficiency power supplies available in PoE switches, likely makes it the right solution for a very large deployment.
 
That case looks great. Every time I see it I'm tempted to build a small pi cluster.
 
I'm thinking about going 'outside of the pi box' and trying a similar cluster with Rock Pi X's. Although building one with Raspberry Pi 4's and another with Rock Pi X's and comparing PPD could be pretty interesting.
 
The big deterrent I've had to using non-Pi SBCs is the lack of Raspbian. The software ecosystem is nicely defined for the Raspberry Pi, and I was able to pick up power-saving and Linux optimizations that may not have been available on other platforms.

That said, if I could find an ARM Pi lookalike built on 10nm or 7nm instead of 28nm, I would be all over that.
 
The Rock Pi X is a very low-power x86 quad core in a similar form factor to a Raspberry Pi, so it should be able to run any Linux distro (or even Windows, but who would want to run that :ROFLMAO: ). I'm just not sure how the performance, and performance per watt, would stack up against the ARM chip in the Raspberry Pi 4.
 
Just looked it up - the Rock Pi X uses a Cherry Trail Atom from 2016 on 14nm. There is a thread on the Raspberry Pi forums discussing the comparison here, but I only skimmed it and didn't see any actual results:
https://www.raspberrypi.org/forums/viewtopic.php?t=252390

Efficiency on the Rock Pi X is going to depend, in part, on how much of the integrated hardware you can turn off - you don't need HDMI, audio, hardware encryption, etc. The more you can turn off, the better efficiency you'll get.
 
It seems like the Atom in the Rock Pi X is pretty slow. The good thing with these devices (both the Rock Pi X and the Pi 4) is that they're inexpensive and low-power enough that there's not a huge downside to getting one of each and just letting them run BOINC projects, even if the Rock Pi X ends up not being as efficient as the Pi 4.
 
Depending on cost, the Turing Pi v2 could be a great way to do clustered Pi 4s.

https://www.tomshardware.com/news/turing-pi-2

The advantage is that each Pi 4 compute module costs $5-$10 less than a Pi 4 of the same capacity, has less integrated hardware you don't need (no wireless), has a better z-height on the board (better for heatsinks), and is slightly more power efficient at baseline.

If the Turing Pi v2 costs $85 or less, it would break even with building a bunch of standalone Pis in a 4-stack case with a combo power brick and cables, and it would be more power efficient thanks to using the compute modules, condensing Ethernet into 2 shared ports, and hopefully being able to use a 90%+ efficient power supply.

I’ll be keeping my eye on this and if it looks cost effective I’ll pick one up and give it a shot.
 
Wow, that does look really neat. Will have to wait to see the details in 2021, but definitely looks interesting.
 
Yeah, a while back I 3D printed a case for my 4 Rock64 4GB boards, a network switch, and their power supply (as well as a single 120mm fan for cooling) so it would all be a single "box". It works great but was a lot more work than this is, lol.
 
That sounds great. Do you happen to have any pictures of it? Having the switch and power supply integrated, so it's just a network cable and one or two power cables coming out, sounds really clean.

I have a Rock Pi X coming that I should have on Saturday, if anybody is interested in any BOINC performance info - unless it's too far off topic for a Raspberry Pi thread.
 
By all means, share your experiences with the Rock Pi X - We can call this the Single Board Computer thread if we need to. :)
 
The weekend project: Pimp a Pi.
One Pi gets ESXi this weekend, together with a 256GB M.2 SSD.
Back to the roots of the second part of my username ... then I can keep running WCG in one VM, try FAH in another VM, and hopefully use a third VM as a network boot server for the other Pis.
 

Wow - I didn't realize that there was an F@H client for the Raspberry Pi. I may have to try that too, based on your results.

What are you planning for network boot? dnsmasq / TFTP? How are you going to manage filesystems for the net-booted Pis?
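
For reference, the dnsmasq side of that is pretty small - a rough sketch, with the subnet and paths here as placeholders (your existing router keeps handing out addresses):

# /etc/dnsmasq.conf sketch for serving Pi 4 boot files in proxy/TFTP mode
port=0                          # no DNS, just TFTP / proxy-DHCP
dhcp-range=192.168.1.0,proxy    # placeholder - your subnet here
log-dhcp
enable-tftp
tftp-root=/srv/tftpboot         # per-Pi folders (named by serial) with start4.elf, kernel, cmdline.txt, ...
pxe-service=0,"Raspberry Pi Boot"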
 
I have the Rock Pi X set up. What would be the best thing to run to compare against the Raspberry Pis everybody else here is running? WCG Open Pandemics? I have the 4GB version. With the large heatsink they also sell that goes on the back of the board where the CPU is (no fan), the CPU cores never hit over 60 degrees.
 
I have both WCG OP and Rosetta running on Pi 4s. Rosetta is a little easier for comparable RAC scores, but I think it takes longer than WCG to stabilize. If I get some time tonight, I'll give you some numbers for:

Optimized Pi 4 on WCG OP
Overclocked Pi 4 on WCG OP
Optimized Pi 4 on Rosetta

I don't think I have any OC'd Pi 4s on Rosetta right now, but I'll check. If not, I'll set one up.
 
My ESXi plans got stopped by a banality: the 5V adapter has something like a DC5525 plug, and the dimensions of that pin don't fit the board. Back to Amazon to order a converter.

But the second part worked: installing FAHClient 7.6.
It works, but it required an arm64 OS; I used Ubuntu 20.10 for this try, and it installed and is working. The a8 core is doing the magic.
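
Rough install steps for anyone who wants to repeat it (the .deb name below is a placeholder - grab whatever the current arm64 build from foldingathome.org is called):

uname -m                                  # must say aarch64 - the a8 core needs a 64-bit OS
sudo dpkg -i fahclient_7.6.x_arm64.deb    # placeholder filename for the downloaded arm64 package
tail -f /var/lib/fahclient/log.txt        # watch the client pick up a work unit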

As for PPD, it's a bit too early to tell; let's wait a few frames. Temps with a fan above the heatsink are around 43°C.

06:00:16:WU00:FS00:0xa8: Core: Gromacs
06:00:16:WU00:FS00:0xa8: Type: 0xa8
06:00:16:WU00:FS00:0xa8: Version: 0.0.9
06:00:16:WU00:FS00:0xa8: Author: Joseph Coffland <[email protected]>
06:00:16:WU00:FS00:0xa8: Copyright: 2020 foldingathome.org
06:00:16:WU00:FS00:0xa8: Homepage: https://foldingathome.org/
06:00:16:WU00:FS00:0xa8: Date: Oct 28 2020
06:00:16:WU00:FS00:0xa8: Time: 22:19:53
06:00:16:WU00:FS00:0xa8: Compiler: GNU 8.3.0
06:00:16:WU00:FS00:0xa8: Options: -faligned-new -std=c++14 -fsigned-char -ffunction-sections
06:00:16:WU00:FS00:0xa8: -fdata-sections -O3 -funroll-loops -fno-pie
06:00:16:WU00:FS00:0xa8: Platform: linux2 4.15.0-108-generic
06:00:16:WU00:FS00:0xa8: Bits: 64
06:00:16:WU00:FS00:0xa8: Mode: Release
06:00:16:WU00:FS00:0xa8: SIMD: arm_neon_asimd
06:00:16:WU00:FS00:0xa8: OpenMP: ON
06:00:16:WU00:FS00:0xa8: CUDA: OFF
06:00:16:WU00:FS00:0xa8: Args: -dir 00 -suffix 01 -version 706 -lifeline 30345 -checkpoint 15 -np
06:00:16:WU00:FS00:0xa8: 3
<snip>
06:00:16:WU00:FS00:0xa8:************************************ System ************************************
06:00:16:WU00:FS00:0xa8: CPU: Cortex-A
06:00:16:WU00:FS00:0xa8: CPU ID: Arm Family 8 Model 72 Stepping 3
06:00:16:WU00:FS00:0xa8: CPUs: 4
06:00:16:WU00:FS00:0xa8: Memory: 7.63GiB
06:00:16:WU00:FS00:0xa8:Free Memory: 5.86GiB
06:00:16:WU00:FS00:0xa8: Threads: POSIX_THREADS
06:00:16:WU00:FS00:0xa8: OS Version: 5.8
06:00:16:WU00:FS00:0xa8:Has Battery: false
06:00:16:WU00:FS00:0xa8: On Battery: false
06:00:16:WU00:FS00:0xa8: UTC Offset: 0
06:00:16:WU00:FS00:0xa8: PID: 30349
06:00:16:WU00:FS00:0xa8: CWD: /var/lib/fahclient/work
06:00:16:WU00:FS00:0xa8:********************************************************************************
 
PPD improved a bit ... but I guess until my 20-year OP badge I will stay on WCG (after this WU).

Is this with 1 CPU or all 4? Thinking about it, the best case is that you're only using one CPU and could get 4x the points with 4 CPUs, which is something like 12k PPD. Even a 1080 Ti would end up 3x more efficient in points per watt in that case, so it's probably better to keep the Pis on WCG or Rosetta.

You would also likely be able to use the 64-bit flag in Raspbian instead of Ubuntu - that might net a very small perf improvement due to less OS overhead.
 
It's 3 cores ... I started the WU on "medium" and the AS didn't like me increasing it to 4 ("full"). It dialed back to 3.

06:00:16:WARNING:Changed SMP threads from 3 to 4 this can cause some work units to fail
06:00:16:WARNING:AS lowered CPUs from 4 to 3
06:00:16:Running FahCore: /usr/bin/FAHCoreWrapper /var/lib/fahclient/cores/cores.foldingathome.org/lin/64bit-aarch64/a8-0.0.9/Core_a8.fah/FahCore_a8 -dir 00 -suffix 01 -version 706 -lifeline 29578 -checkpoint 15 -np 3
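
If I want to pin it myself rather than let the AS dial it back, the thread count can apparently be set per slot in the client config - a sketch, assuming the package put its config in the usual place:

# /etc/fahclient/config.xml - force the CPU slot to 3 threads, then restart FAHClient
#   <config>
#     <slot id='0' type='CPU'>
#       <cpus v='3'/>
#     </slot>
#   </config>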
 
Pi vs Ryzen 3600 (6 cores / 12 threads)
Same project; the Ryzen got 10 cores assigned, the Pi is running on 3 cores.

Oh, and I ran another one to see if 4 cores are OK. They are, but PPD isn't getting any better.
Given the concept of the quick return bonus, that's not really a surprise either. To be competitive, FAH might want to dedicate projects to the ARM boards with adjusted base credits. But as a proof of concept, it's nice that it's working.
 
I'm getting far too excited about ESXi running on ARM.
The whole thing is now:
a Pi 4 with 8GB RAM,
a 256GB SSD (M.2, NGFF),
and a power board with a cooler.

The process followed a walkthrough video from YouTube.
 
I'm probably being dense, but what would be the use case for a Raspberry Pi running ESXi? Spinning up Android VMs to do development tests without using an emulator? Running a Pi-hole + RetroPie + LAMP server machine in one, without each affecting the others, instead of using Docker?
 
I'm old. I don't do this Docker thing. For me, setting up multiple services is easier with ESXi: e.g. a mini web server, or multiple different web servers, a dedicated TFTP server, an FAH or BOINC server, multiple FAH or BOINC clients with different versions, etc. It might all be possible with Docker, but I simply don't have the background for it.
Mostly, right now I'm just fascinated that it works on a little box like a Pi.
 
That makes sense, thank you for the explanation. With the Pi 4, I can see how it might finally have enough resources to do that, especially the 8GB RAM version, so it's pretty cool to see ESXi on it. I was just trying to wrap my head around what somebody would use a non-x86 VM hypervisor for, but that makes sense.
 