
Introductory remarks

The job scripts for the SLURM partitions are provided as templates which you can adapt to your own settings. In particular, you should take the following points into account:

  • Some entries are placeholders which you must replace with correct, user-specific settings. In particular, path specifications and e-mail addresses must be adapted. Always specify the appropriate directories instead of the names with the three periods in the examples below!

  • If you need to use the environment modules package in your batch script, you must also source the file /etc/profile.d/modules.sh.
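A minimal sketch of what this looks like at the top of a job script (the module name "intel" is only an illustrative placeholder; use whatever modules your program needs):

```shell
#!/bin/bash
# Make the "module" command available in the batch environment
# (needed because the job script does not run a full login shell).
source /etc/profile.d/modules.sh
# Load the environment modules your program needs (placeholder name).
module load intel
```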

Time and Memory Requirements

Try to estimate the time and memory requirements as closely as possible to your actual needs. This helps to prevent idling nodes or CPUs.
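To see how close your estimates were, you can inspect a finished job's actual run time and peak memory with sacct and tighten future requests accordingly (the job id 123456 is a placeholder):

```shell
# Show elapsed time, peak resident memory and requested memory
# for a completed job (replace 123456 with your own job id).
sacct -j 123456 --format=JobID,Elapsed,MaxRSS,ReqMem
```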

Serial jobs 

This job type normally uses a single core on a shared-memory node of the designated SLURM partition. The remaining cores of the node are shared with other users.

Serial job
#!/bin/bash
#SBATCH -J job_name
#SBATCH -o ./%x.%j.%N.out
#SBATCH -D ./
#SBATCH --get-user-env
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --mail-type=end
#SBATCH --mem=800mb
#SBATCH --cpus-per-task=1
#SBATCH --mail-user=insert_your_email_here
#SBATCH --export=NONE
#SBATCH --time=24:00:00

./myprog.exe

The above example requests 1 core of a node and assumes the binary is located in the submission directory (-D).

Serial job (long running)
#!/bin/bash
#SBATCH -J job_name
#SBATCH -o ./%x.%j.%N.out
#SBATCH -D ./
#SBATCH --get-user-env
#SBATCH --clusters=serial
#SBATCH --partition=serial_long
#SBATCH --mail-type=end
#SBATCH --mem=800mb
#SBATCH --cpus-per-task=1
#SBATCH --mail-user=insert_your_email_here
#SBATCH --export=NONE
#SBATCH --time=8-00:00:00

./myprog.exe

The above example requests 1 core of a node and assumes the binary is located in the submission directory (-D).

Shared Memory jobs

For very large memory jobs (more than 8 GByte and up to 240 GByte, or even going beyond 1 TByte), the teramem_inter partition in the interactive segment should be used.

Shared memory job
(up to 28 cores with up to 50 GByte of memory)

#!/bin/bash
#SBATCH -J job_name
#SBATCH -o ./%x.%j.%N.out
#SBATCH -D ./
#SBATCH --get-user-env
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --mem=10000mb
#SBATCH --cpus-per-task=7
#SBATCH --mail-type=end
#SBATCH --mail-user=insert_your_email_here
#SBATCH --export=NONE
#SBATCH --time=8:00:00

export OMP_NUM_THREADS=7
./myprog.exe

The above example requests 7 cores of a node and assumes the binary

  • is located in the submission directory (-D)
  • can make use of OpenMP for multi-threading across the 7 cores

If you do not specify --mem, 1.7 GByte per requested core (cpus-per-task) will be allocated, which is also a good balance.

Shared memory job on HP DL580 "teramem1"
(shared memory multithreading with up to 96 cores and 6 TByte of memory)

#!/bin/bash
#SBATCH -J job_name
#SBATCH -o ./%x.%j.%N.out
#SBATCH -D ./
#SBATCH --get-user-env
#SBATCH --clusters=inter
#SBATCH --partition=teramem_inter
#SBATCH --mem=2000gb
#SBATCH --cpus-per-task=32
#SBATCH --mail-type=end
#SBATCH --mail-user=insert_your_email_here
#SBATCH --export=NONE
#SBATCH --time=8:00:00

export OMP_NUM_THREADS=32
./myprog.exe

To avoid wasting resources, the number of cores requested should be commensurate with the memory requirement, i.e. around 60 GBytes per core. This may imply that you need to parallelize your code if you have not done so yet.
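The per-core rule of thumb can be turned into a quick calculation when preparing a large-memory job; this is only a sketch in plain bash, using the 60 GByte per core figure from the text above:

```shell
# Given a memory requirement in GByte, compute a commensurate
# --cpus-per-task value at roughly 60 GByte per core (ceiling division).
mem_gb=2000          # required memory in GByte
gb_per_core=60       # approximate memory available per core
cores=$(( (mem_gb + gb_per_core - 1) / gb_per_core ))
echo "--mem=${mem_gb}gb --cpus-per-task=${cores}"
```

For a 2000 GByte requirement this yields --cpus-per-task=34, in line with the 32 cores requested in the template above.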

TSM Archiving

TSM archiving is only supported on the login nodes of the cluster, not on the SLURM-controlled batch nodes. Please consult the document describing tape archiving on our HPC systems for more details on TSM usage.
