Deprecated content: Example serial job scripts on the Linux-Cluster

Under Construction

The Linux Cluster documentation is work in progress and will be updated incrementally!

The content of this page will be moved and this page will be deleted soon!


Introductory remarks

The job scripts for the SLURM partitions are provided as templates which you can adapt to your own needs. In particular, please take the following points into account:

  • Some entries are placeholders, which you must replace with correct, user-specific settings. In particular, path specifications must be adapted. Always specify the appropriate directories instead of the placeholder names indicated by three periods in the following examples!

  • If you need to use the environment modules package in your batch script, you also have to source the file /etc/profile.d/modules.sh first, as shown in the sketch below.
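For illustration, the relevant lines in a batch script could look like the following sketch; the module name is only a placeholder that you must replace with the module(s) your job actually needs:

source /etc/profile.d/modules.sh    # make the module command available in the batch shell
module load ...                     # replace ... with the required module(s)
module list                         # optional: record the loaded modules in the job output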

Time and Memory Requirements

Try to estimate the run time and memory requirements as closely as possible to your actual needs. This helps to prevent idling nodes or CPUs.
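One way to obtain realistic estimates is to check what a previous, comparable job actually consumed. The following sketch uses the Slurm accounting tool sacct; the job ID is a placeholder for one of your own finished jobs:

sacct --clusters=serial -j 1234567 --format=JobID,Elapsed,MaxRSS,ReqMem,State

The Elapsed and MaxRSS columns report the wall-clock time and the peak memory usage of the job and can serve as a basis for the --time and --mem requests of subsequent runs.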

Serial jobs 

This job type normally uses a single core on a shared-memory node of the designated SLURM partition. The remaining cores of the node are shared with jobs of other users.

Serial job
#!/bin/bash
# Job name
#SBATCH -J job_name
# File for standard output (%x = job name, %j = job ID, %N = node name)
#SBATCH -o ./%x.%j.%N.out
# Working directory of the job
#SBATCH -D ./
# Start with the user's login environment
#SBATCH --get-user-env
# Target cluster and partition
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
# Requested memory and number of cores
#SBATCH --mem=800M
#SBATCH --cpus-per-task=1
# Do not export the environment of the submitting shell
#SBATCH --export=NONE
# Maximum run time (here: 24 hours)
#SBATCH --time=24:00:00

./myprog.exe

The above example requests 1 core of a node and assumes the binary is located in the submission directory (-D).
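Assuming the script above has been saved under a file name of your choice, for example myjob.cmd (the name is only an illustration), it can be submitted and monitored as follows; the target cluster is taken from the #SBATCH --clusters directive in the script:

sbatch myjob.cmd                         # submit the job
squeue --clusters=serial --user=$USER    # list your jobs on the serial cluster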

Serial job (long running)

#!/bin/bash
#SBATCH -J job_name
#SBATCH -o ./%x.%j.%N.out
#SBATCH -D ./
#SBATCH --get-user-env
#SBATCH --clusters=serial
# Partition for long-running serial jobs
#SBATCH --partition=serial_long
#SBATCH --mem=800M
#SBATCH --cpus-per-task=1
#SBATCH --export=NONE
# Maximum run time in the format days-hours:minutes:seconds (here: 8 days)
#SBATCH --time=8-00:00:00

./myprog.exe

The above example requests 1 core of a node and assumes the binary is located in the submission directory (-D).

Shared Memory jobs

For jobs with very large memory requirements (more than 8 GByte and up to 240 GByte, or even beyond 1 TByte), the teramem_inter partition in the interactive segment should be used.

Shared memory job (up to 28 cores with up to 50 GByte memory)
#!/bin/bash
#SBATCH -J job_name
#SBATCH -o ./%x.%j.%N.out
#SBATCH -D ./
#SBATCH --get-user-env
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --mem=10G
#SBATCH --cpus-per-task=7
#SBATCH --export=NONE
#SBATCH --time=8:00:00

# Set the number of OpenMP threads to match the requested cores (--cpus-per-task)
export OMP_NUM_THREADS=7
./myprog.exe

The above example requests 7 cores of a node and assumes the binary

  • is located in the submission directory (-D)
  • can make use of OpenMP for multi-threading across the 7 cores

If you do not specify --mem, then 1.7 GByte per requested core (i.e. 1.7 GByte * cpus_per_task) will be used, which is also a good balance.

Shared memory job on Lenovo ThinkSystem SR850 V2 "teramem2", with shared-memory multithreading across up to 96 cores and 6 TByte of memory

#!/bin/bash
#SBATCH -J job_name
#SBATCH -o ./%x.%j.%N.out
#SBATCH -D ./
#SBATCH --get-user-env
# Target the interactive cluster segment and its large-memory partition
#SBATCH --clusters=inter
#SBATCH --partition=teramem_inter
#SBATCH --mem=2000G
#SBATCH --cpus-per-task=32
#SBATCH --export=NONE
#SBATCH --time=8:00:00

# Set the number of OpenMP threads to match the requested cores (--cpus-per-task)
export OMP_NUM_THREADS=32
./myprog.exe



To avoid wasting resources on the teramem node, the number of cores requested should be commensurate with the memory requirement, i.e. around 60 GByte per core. This may imply that you need to parallelize your code if that has not been done yet.
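Instead of hard-coding the thread count, the final lines of such a script can also derive it from the Slurm environment so that it always matches the --cpus-per-task setting. A minimal sketch:

# SLURM_CPUS_PER_TASK is set by SLURM when --cpus-per-task is specified
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./myprog.exe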

TSM Archiving

TSM archiving is only supported on the login nodes of the cluster, not on the SLURM-controlled batch nodes. Please consult the document describing tape archiving on our HPC systems for more details on TSM usage.