About Singularity

Singularity is a container solution created by necessity for scientific and application driven workloads. Singularity containers are designed to be as portable as possible, spanning many flavors and vintages of Linux. Within the container, there are almost no limitations aside from basic binary compatibility. Each Singularity image includes all of the application’s necessary run-time libraries and can even include the required data and files for a particular application to run. This encapsulation of the entire user-space environment facilitates not only portability but also reproducibility. You can bring with you a stack of software, libraries, and a Linux operating system that is independent of the host computer you run the container on.

We will introduce some basic knowledge about how to use Singularity in the following sections. You may also refer to the official Singularity User Guide.
We also recommend the YouTube videos created by the official Singularity channel.

Singularity Container Image

By default, Singularity containers are read-only image files that usually end in .simg. You can copy a pre-built Singularity image file directly from others with any file copy command such as cp or scp. You can also pull containers from the Docker Hub or Singularity Hub repositories. Based on the containers in these repositories, you can customize your own container to suit the nature of your job.
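For example, a pre-built image can be copied onto the cluster with scp or cp; the host name and paths below are hypothetical:
#copy a pre-built image from another machine into your home folder
scp user@otherhost:/path/to/prebuilt_container.simg ~/
#or copy an image already shared on the cluster filesystem
cp /project/shared/prebuilt_container.simg ~/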

Create Singularity Container

To make your own container, you need a Linux machine with Singularity installed, which requires administrative privileges. On the Central Cluster, chpc-sandbox is the only node where users are allowed to run singularity pull and singularity build. To access chpc-sandbox, log in to the Central Cluster and run the following command; you may be required to enter your Central Cluster password:

ssh chpc-sandbox
You may first get your container from the Docker Hub or Singularity Hub repositories using the following commands:
#From Docker Hub
singularity pull docker://ubuntu:latest
singularity build centos7_python35.simg docker://centos/python-35-centos7
#From Singularity Hub
singularity pull shub://singularityhub/centos
singularity build tensorflow.simg shub://opensciencegrid/osgvo-tensorflow
Some 3rd party Docker repositories may require a login when pulling the image. If you want to access those repositories, please refer to our FAQ: How to pull image from 3rd party Docker repositories require login using Singularity?.

Building a Singularity container requires advanced knowledge of Linux commands and shell scripting. If a container built by others already fulfills your needs, we do not recommend building your own container. To start building a Singularity container, we usually use a recipe file to define the container specification. If you name the recipe file custom_container.recipes, you can build your container using the following command:

singularity build my_container.simg custom_container.recipes
There are many options you may define inside the recipe file; here is the documentation from the Singularity site.
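As an illustration, a minimal recipe file might look like the following sketch (the bootstrap source, package, and runscript here are assumptions for illustration only, not a recommendation):

Bootstrap: docker
From: centos:7

%post
    #commands run inside the container at build time
    yum install -y python3

%environment
    export LC_ALL=C

%runscript
    #default command executed by singularity run or ./my_container.simg
    exec python3 "$@"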

If you have any questions or problems building the container, please contact us by email at centralcluster@cuhk.edu.hk

Access Data Inside Container

By default, Singularity will bind several directories into the container. Resources located in the following directories are accessible from inside the container:

$HOME
/project
/scratch/s1
/tmp
If the resources are located in other directories, or you want to bind the directories to a different path, please refer to our FAQ: How to access my file contents when using Singularity?.
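For reference, additional host directories can usually be bound at run time with the --bind (or -B) option; the host path and container path below are hypothetical:
#bind a host directory to /data inside the container (hypothetical paths)
singularity exec --bind /research/mydata:/data singularity_container.simg ls /data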

Running Singularity Container

There are several ways to run your container to suit different usage. If your container is already built with runscript support, you may run the container with the following commands:

#assume singularity_container.simg is your container image
./singularity_container.simg pythonscript.py #just run the container as a script with input file or parameter
singularity run singularity_container.simg pythonscript.py
If there are several commands supported by the container, you need to specify the command to run with singularity exec as follows:
singularity exec singularity_container.simg python3 pythonscript.py
You may also use singularity shell to run any command interactively inside the container.
singularity shell singularity_container.simg
Singularity: Invoking an interactive shell within container...
Singularity singularity_container.simg:~>
For any variable you want to use inside the container, define it outside the container with the SINGULARITYENV_ prefix. The defined variables can be used as follows:
export SINGULARITYENV_INSIDEVAR="Passing Variable to Singularity"
singularity shell singularity_container.simg

Singularity: Invoking an interactive shell within container...
Singularity singularity_container.simg:~>
echo $INSIDEVAR
Passing Variable to Singularity
Singularity singularity_container.simg:~>
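The same mechanism works with singularity exec; for example (assuming printenv is available inside the container image):
singularity exec singularity_container.simg printenv INSIDEVAR
Passing Variable to Singularity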

Using with SLURM

To submit a job with SLURM using Singularity, you should first upload your container to the Central Cluster. The container can be placed in your home folder, project folder, or scratch folder, which are accessible from the other compute nodes. If you run your job using sbatch with a job script, you may place the singularity command in the script as follows:

#!/bin/bash
#SBATCH -J JobName
#SBATCH -N 2 -c 16
~/singularity_container.simg ~/pythonscript.py #run container using runscript
singularity exec ~/singularity_container.simg python3 ~/pythonscript.py #execute command with singularity exec
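Assuming the script above is saved as singularity_job.sh (a hypothetical file name), submit it as usual with:
sbatch singularity_job.sh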
You may also run the container interactively with srun and singularity shell as in the following example (tip: you may use echo $SINGULARITY_NAME to check whether your shell is inside the container):
srun --pty singularity shell ~/singularity_container.simg
srun: job 123 queued and waiting for resources
srun: error: Lookup failed: Unknown host
srun: job 123 has been allocated resources
Singularity: Invoking an interactive shell within container...
Singularity singularity_container.simg:~>
echo $SINGULARITY_NAME
singularity_container.simg
Singularity singularity_container.simg:~>
Any variables you want to use inside the container can be defined as usual on the login node. By default, SLURM will pass all environment variables to the allocated compute node.
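For example, a variable exported on the login node with the SINGULARITYENV_ prefix is therefore also available inside containers started by the job (the variable name below is hypothetical):
export SINGULARITYENV_MYSETTING="some value"
sbatch singularity_job.sh #MYSETTING will be visible inside containers run by this job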