LHC projects


[H]ard|DCer of the Year - 2014
Jan 29, 2006
LHC SixTrack

It seems that they are trying to merge all of the LHC projects into one (most likely two) projects: one production site and one development site. So, if anyone notices any odd behavior with a project in the near future, please check that project's forums to see what's up.

Server Consolidation

They are discussing tips, concerns, ideas, and suggestions now if anyone wants to add their 2 cents. So far, it seems that they plan to eventually merge using the user accounts from LHC SixTrack, as it is the oldest and has the largest user base. They also say they plan on migrating BOINC points over to the final project.


DCOTM x4, [H]ard|DCer of the Year 2019
Sep 23, 2006
What I read: hit these individual projects hard now if you have project-point or WUProp goals, as most will be going away. Noted. :D


Nov 7, 2011
LHC requires CernVM-FS for their "native" applications, and the various guides on the internet are often out of date.
Following an old guide will probably lead to a lot of extra work compared to what is necessary to get up and running today.

Please note that every LHC project is what I would call in development, and may change requirements with the only warning being a release note on the LHC message board.

On a standard Ubuntu 18.04 machine (20.04 is not supported yet), you "only" have to add a repository, install CernVM-FS and squashfs-tools, and add a pair of configuration files to get started with native tasks.

In order to add the repository and install CernVM-FS, open a console and run the following commands:
sudo apt-get install lsb-release
wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
sudo dpkg -i cvmfs-release-latest_all.deb
rm -f cvmfs-release-latest_all.deb
sudo apt-get update
sudo apt-get install cvmfs squashfs-tools

Now the two configuration files have to be created.

Edit or create the file /etc/cvmfs/default.local, for example with the nano editor in a console (ctrl-x, y, [enter] to quit and save):
sudo nano /etc/cvmfs/default.local

Insert the configuration below:

CVMFS_QUOTA_LIMIT is the space in megabytes on your machine used for a local cache of files; official guides use 4 gigabytes.
A machine running atlas native here is currently using about 6 gigabytes, so I set it to 12 gigabytes; adjust as you see fit.
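The configuration snippet itself did not survive in this post. As a sketch of what /etc/cvmfs/default.local could contain (the repository list below is an assumption based on the three repositories probed later in this guide; DIRECT means no proxy):

```shell
# Sketch of /etc/cvmfs/default.local -- repository list assumed from the probe step below.
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
# Local cache size in megabytes; official guides use 4 gigabytes.
CVMFS_QUOTA_LIMIT=4096
# DIRECT means no proxy; fine for a machine or two.
CVMFS_HTTP_PROXY=DIRECT
```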

The configuration above is only recommended for users with a few machines.
If you have many machines, they recommend you set up a Squid proxy and point CernVM-FS at it, as a local proxy can save a lot of bandwidth for you and for the people at LHC.
Setting up a Squid proxy is beyond the scope of this post; you can try reading this page for help on setting up a proxy:
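For the many-machine case, the proxy comes into play via the CVMFS_HTTP_PROXY setting in /etc/cvmfs/default.local. A hedged sketch ("my-squid" and port 3128 are placeholders for your own proxy host; the trailing DIRECT is a fallback in case the proxy is down):

```shell
# Placeholder proxy host -- replace my-squid:3128 with your own Squid instance.
CVMFS_HTTP_PROXY="http://my-squid:3128;DIRECT"
```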

Make a directory for the autofs config:
sudo mkdir -p /etc/auto.master.d

Add the default cvmfs directory to an autofs config using nano (ctrl-x, y, [enter] to quit and save):
sudo nano /etc/auto.master.d/cvmfs.autofs

Insert the configuration below:
/cvmfs /etc/auto.cvmfs

Now restart autofs
sudo systemctl restart autofs

Test the configuration
cvmfs_config probe

You should get the following result:
Probing /cvmfs/atlas.cern.ch... OK
Probing /cvmfs/atlas-condb.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK

Now the computer should be ready to run native LHC projects.
Check the computing preferences for the machine in your LHC@Home account and ensure "Run native if available?" is checked.


Nov 7, 2011
Running Ubuntu 18.04 is getting a bit old, so I have spent some time on getting this to work with 20.04.
I do not have a fresh 20.04 install to test this on at the moment, so I do not know if this works out of the box.

My previous post on getting CernVM-FS set up is still valid and should be completed first.
With CernVM-FS installed, Theory native will just work.

Atlas native requires a local install of singularity, as the one included in the work units does not work with 20.04;
I got errors complaining about not being able to remount /var.

There is a prebuilt singularity package available, but it is an old version and it would not work for me, so I installed
version 3.7.0, which works with the current atlas native 2.85 tasks.

This guide assumes you do not have go or singularity installed on the system!

Please note that installing singularity this way bypasses the package manager;
you will have to remove this version of singularity manually if you ever want to use another version!

I set a prefix of /opt/singularity and symlink as necessary in this guide, as this makes it easier to maintain a clean Linux install.
If you do not care about this, just remove the --prefix= part from one of the later steps and do not symlink anything.

The information in this post is not something "new"; I have just repackaged and tested what is available in the official guide,
which is available here: https://sylabs.io/guides/3.0/user-guide/installation.html

You will have to download a couple of files during the process.
This guide will put the temporary stuff in a folder called singbuild in the home folder, the folder can be deleted after the installation.

Open a console and start copy pasting.
mkdir singbuild
cd singbuild
To install singularity, you will have to download go and add it to the path of the current console first.
Please note that go is not installed on the system; the PATH change is temporary and only applies to the console you execute the
export command in.
wget https://golang.org/dl/go1.15.6.linux-amd64.tar.gz
tar -xzf go1.15.6.linux-amd64.tar.gz
cd go/bin/
export PATH=$PATH:$(pwd)
cd ../../
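To see why the PATH step is temporary: export only changes the environment of the current shell and its children, so closing the console undoes it. A small self-contained demonstration (the /tmp/demo-bin directory and demo script are made up for illustration):

```shell
# Create a throwaway directory containing a dummy executable.
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello from demo\n' > /tmp/demo-bin/demo
chmod +x /tmp/demo-bin/demo

# Append it to PATH for this shell session only, then run it by name.
export PATH=$PATH:/tmp/demo-bin
demo    # prints "hello from demo"
```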
Now download singularity.
wget https://github.com/hpcng/singularity/releases/download/v3.7.0/singularity-3.7.0.tar.gz
tar -xzf singularity-3.7.0.tar.gz
cd singularity/
Build and install singularity. Note that you can skip the ln -s line if you do not use the --prefix= part, as singularity will then be installed to /usr/local/bin/.
./mconfig --prefix=/opt/singularity
make -C ./builddir
sudo make -C ./builddir install
sudo ln -s /opt/singularity/bin/singularity /usr/local/bin/singularity

The list below shows the install actions, which is handy when manually removing the install; you will have different locations if you did not use the prefix.
INSTALL /opt/singularity/bin/singularity
INSTALL /opt/singularity/etc/bash_completion.d/singularity
INSTALL /opt/singularity/etc/singularity/singularity.conf
INSTALL /opt/singularity/etc/singularity/remote.yaml
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/dhcp
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/host-local
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/static
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/bridge
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/host-device
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/ipvlan
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/loopback
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/macvlan
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/ptp
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/vlan
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/bandwidth
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/firewall
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/flannel
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/portmap
INSTALL CNI PLUGIN /opt/singularity/libexec/singularity/cni/tuning
INSTALL /opt/singularity/libexec/singularity/bin/starter
INSTALL /opt/singularity/var/singularity/mnt/session
INSTALL /opt/singularity/bin/run-singularity
INSTALL /opt/singularity/etc/singularity/capability.json
INSTALL /opt/singularity/etc/singularity/ecl.toml
INSTALL /opt/singularity/etc/singularity/seccomp-profiles/default.json
INSTALL /opt/singularity/etc/singularity/nvliblist.conf
INSTALL /opt/singularity/etc/singularity/rocmliblist.conf
INSTALL /opt/singularity/etc/singularity/cgroups/cgroups.toml
INSTALL /opt/singularity/etc/singularity/global-pgp-public
INSTALL SUID /opt/singularity/libexec/singularity/bin/starter-suid
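Since this install bypassed the package manager, removal is a manual job. A dry-run sketch based on the list above (it only prints what would be deleted; the commented rm lines are the actual removal and assume the /opt/singularity prefix from this guide):

```shell
#!/bin/sh
# Dry run: print what a manual removal of the prefix install would delete.
PREFIX=${PREFIX:-/opt/singularity}
echo "would remove symlink: /usr/local/bin/singularity"
echo "would remove prefix tree: $PREFIX"
# Uncomment to actually delete (needs root):
# sudo rm /usr/local/bin/singularity
# sudo rm -rf "$PREFIX"
```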