Slurm
- Slurm 101
- Slurm Queues
- Submitting Serial Jobs
- Submitting Interactive Jobs
- Submitting Parallel Jobs (MPI/OpenMP)
- Submitting GPU Jobs
- Submitting Array Jobs and Chain Jobs
- Handling Jobs running into TIMEOUT
- Accessing Web Interfaces (e.g. JupyterLab, Ray) via SSH Tunnels
- Exclusive jobs for benchmarking
- Controlling the environment of a Job
FAQ and Troubleshooting
- How do I register myself to use the HPC resources?
- How do I get access to the LiCCA or ALCC resources?
- What kind of resources are available on LiCCA?
- What kind of resources are available on ALCC?
- How do I acknowledge the usage of HPC resources on LiCCA in publications?
- How do I acknowledge the usage of HPC resources on ALCC in publications?
- What Slurm Partitions (Queues) are available on LiCCA?
- What Slurm Partitions (Queues) are available on ALCC?
- What is Slurm?
- How do I use the Slurm batch system?
- How do I submit serial calculations?
- How do I run multithreaded calculations?
- How do I run parallel calculations on several nodes?
- How do I run GPU-based calculations?
- How do I check the current Slurm schedule and queue?
- Is there some kind of Remote Desktop for the cluster?
- What if I have a question that is not listed here?
- What if I want to report a problem?
- Which version of Python can be used?
- Which should I use: Anaconda, Miniconda, Miniforge, or Micromamba?
- How do I monitor live CPU/GPU/memory/disk utilization?
- How do I check my GPFS filesystem usage and quota situation?
- Why does the Slurm squeue command show (PartitionTimeLimit) next to the submitted job?
- Why does the Slurm squeue command show (MaxCpuPerAccount), (MaxJobsPerAccount) or (MaxGRESPerAccount) next to the submitted job?
- Why does the Slurm squeue command show (QOSMaxJobsPerUserLimit) next to the submitted job?
- Why does the Slurm squeue command show (Nodes required for job are DOWN, DRAINED or reserved for jobs in higher priority partitions) next to the submitted job?
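The reason codes in these questions all appear in the REASON column of squeue output for pending jobs. A quick way to inspect them for your own jobs, using standard squeue format specifiers:

```shell
# Show your own pending jobs together with the scheduler's reason code.
# %i = job ID, %j = job name, %T = state, %R = reason why the job is pending.
squeue --me --states=PENDING --format="%.10i %.20j %.10T %R"
```

Codes like (MaxGRESPerAccount) or (QOSMaxJobsPerUserLimit) indicate the job is held by an account or QOS limit and will start once other jobs of your account finish, not that anything is wrong with the job itself.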
- Resources
- Status
- Access
- Data Transfer
- File Systems
- Environment Modules (Lmod)
- Interactive (Debug) Runs (not Slurm)
- Submitting Jobs (Slurm Batch System)
- Slurm
- HPC Software and Libraries
- HPC Tuning Guide
- Service and Support
- Origin of the name
- FAQ and Troubleshooting - LiCCA
13.5., 9:00: The final migration steps with data synchronization have started;
no login is possible until the migration is complete.
The December module updates and deprecations have been rolled out today. Please look out for deprecation warnings in your Slurm output.
Please be aware of the following module changes (if "(default)" appears at the end of an entry, that version is the new default):
New/updated scientific Modules:
====================
elk/10.7.8-impi2021.10-intel2023.2 (default)
gromacs/2025.4-ompi5.0-gcc13.2-mkl2023.2-cuda12.9
nwchem/7.3.1-ompi5.0-cf (default)
octave/10.3.0-cf (default)
orca/6.1.1 (default)
qchem/6.3.1
qchem/6.4.0 (default)
siesta/5.4.1-ompi5.0-cf (default)
New/updated common Modules:
====================
cmake/3.31.10
cmake/4.2.1 (default)
ffmpeg/8.0.1 (default)
meson/1.10.0 (default)
meson/1.9.2
ninja/1.13.2 (default)
anaconda/2025.12 (default)
apptainer/1.4.5 (default)
cudnn/cu11x/9.10.2.21 (default)
cudnn/cu12x/9.17.0.29 (default)
cuquantum/cu11x/25.06.0.10 (default)
cuquantum/cu12x/25.11.1.11 (default)
cutensor/cu11x/2.2.0.0 (default)
cutensor/cu12x/2.4.1.4 (default)
gdrcopy/2.5.1
go/1.24.11
go/1.25.5 (default)
julia/1.10.10
julia/1.12.2 (default)
micromamba/2.4.0 (default)
miniforge/25.11.0 (default)
nccl/cu12.9/2.27.7
nccl/cu12.9/2.28.9 (default)
openjdk/11.0.29+7
openjdk/17.0.17+10
openjdk/21.0.9+10
openjdk/25.0.1+8 (default)
openjdk/8.u472-b08
Deprecated Modules (to be hidden on 15 January 2026 and removed on 30 January 2026):
=================================================
cmake/4.0.3: Please use cmake/4.2.1 or higher!
anaconda/2024.06: Please use anaconda/2024.10 or higher!
apptainer/1.3.5: Please use apptainer/1.3.6 or higher!
julia/1.10.8: Please use julia/1.10.10 or higher!
julia/1.11.3: Please use julia/1.12.2 or higher!
elk/10.5.16-impi2021.10-intel2023.2: Please use elk/10.7.8-impi2021.10-intel2023.2 or higher!
elk/10.6.2-impi2021.10-intel2023.2: Please use elk/10.7.8-impi2021.10-intel2023.2 or higher!
ffmpeg/6.1: Please use ffmpeg/7.0.1 or higher!
meson/1.4.2: Please use meson/1.8.4 or higher!
meson/1.5.2: Please use meson/1.8.4 or higher!
meson/1.7.2: Please use meson/1.8.4 or higher!
micromamba/2.0.5: Please use micromamba/2.4.0 or higher!
micromamba/2.2.0: Please use micromamba/2.4.0 or higher!
micromamba/2.3.0: Please use micromamba/2.4.0 or higher!
qchem/6.3.0: Please use qchem/6.3.1 or higher!
octave/8.4.0-cf: Please use octave/9.1.0-cf or higher!
siesta/5.2.0-ompi4.1-cf: Please use siesta/5.4.1-ompi5.0-cf or higher!
siesta/5.4.0-ompi5.0-cf: Please use siesta/5.4.1-ompi5.0-cf or higher!
comsol/6.1: Please use comsol/6.2 or higher!
cuda/11.6.2: Please use cuda/11.8.0 or higher!
cuda/12.1.1: Please use cuda/12.5.1 or higher!
cuda/12.2.2: Please use cuda/12.5.1 or higher!
cuda/12.3.2: Please use cuda/12.5.1 or higher!
cuda/12.4.1: Please use cuda/12.5.1 or higher!
cuda/12.6.2: Please use cuda/12.6.3 or higher!
cuda/12.8.0: Please use cuda/12.8.1 or higher!
cuda-compat/12.9.1: The current cuda driver is already newer!
nccl/cu12.2/2.21.5: Please use nccl/cu12.5/2.21.5 or higher!
nccl/cu12.4/2.21.5: Please use nccl/cu12.5/2.21.5 or higher!
cudnn/cu11x/8.9.7.29: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.0.0.312: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.1.1.17: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.2.1.18: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.3.0.75: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.4.0.58: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.5.1.17: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu12x/8.9.7.29: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.0.0.312: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.1.1.17: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.2.1.18: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.3.0.75: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.4.0.58: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.5.1.17: Please use cudnn/cu12x/9.17.0.29 or higher!
cutensor/cu11x/2.0.1.2: Please use cutensor/cu11x/2.2.0.0 or higher!
cutensor/cu11x/2.0.2.5: Please use cutensor/cu11x/2.2.0.0 or higher!
cutensor/cu12x/2.0.1.2: Please use cutensor/cu12x/2.4.1.4 or higher!
cutensor/cu12x/2.0.2.5: Please use cutensor/cu12x/2.4.1.4 or higher!
cuquantum/cu11x/24.03.0.4: Please use cuquantum/cu11x/25.06.0.10 or higher!
cuquantum/cu11x/24.08.0.5: Please use cuquantum/cu11x/25.06.0.10 or higher!
cuquantum/cu11x/24.11.0.21: Please use cuquantum/cu11x/25.06.0.10 or higher!
cuquantum/cu12x/24.03.0.4: Please use cuquantum/cu12x/25.11.1.11 or higher!
cuquantum/cu12x/24.08.0.5: Please use cuquantum/cu12x/25.11.1.11 or higher!
cuquantum/cu12x/24.11.0.21: Please use cuquantum/cu12x/25.11.1.11 or higher!
If you experience problems with any module, please let us know!
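Because defaults move with each rollout (for example, cmake/4.2.1 is now the default), job scripts that rely on a bare "module load name" can silently pick up a new version on the next update. A minimal sketch of checking and pinning versions with the standard Lmod commands, using module names from the list above:

```shell
# List installed versions of a module; the default is flagged (D) or (default).
module avail cmake

# Load an explicit version instead of relying on the default,
# so the job still builds the same way after the next rollout.
module load cmake/4.2.1

# Verify what is actually loaded before building.
module list
```

Pinning versions in batch scripts also makes it obvious which jobs still reference a deprecated module before it is hidden in January.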
Both clusters ALCC and LiCCA are back online.
We announced a maintenance window for both clusters
ALCC and LiCCA to update the Slurm version to 25.11.
One of the main reasons is a set of improvements to
GPU allocation for Slurm jobs,
which is broken in the current version 25.05.
We might still have to adjust the Slurm configuration
for GPU job handling in the days following the update,
which may mean draining and restarting the Slurm
daemons again.
We will, at least temporarily, lower the TimeLimit
in the GPU partitions from 3 to 2 days.
This might cause some inconvenience for users with long-running jobs,
but it is a good alternative to cancelling/killing jobs
due to required restarts of the system.
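With the GPU TimeLimit temporarily reduced to 2 days, jobs that may need longer can ask Slurm for a warning signal shortly before timeout, checkpoint, and requeue themselves. A minimal job-script sketch using standard sbatch options; the partition name and the application are placeholders, and the application must support resuming from its own checkpoint files:

```shell
#!/bin/bash
#SBATCH --partition=gpu          # placeholder partition name; check sinfo for the real one
#SBATCH --gres=gpu:1
#SBATCH --time=2-00:00:00        # stay within the (temporary) 2-day GPU TimeLimit
#SBATCH --signal=B:SIGUSR1@600   # send SIGUSR1 to the batch shell 10 minutes before timeout
#SBATCH --requeue

# On the warning signal, requeue this job so it restarts from its checkpoint.
trap 'scontrol requeue "$SLURM_JOB_ID"; exit 0' SIGUSR1

# my_simulation is a placeholder; run it in the background so the
# shell can receive and handle the signal while the job is running.
srun ./my_simulation --resume-if-checkpoint &
wait
```

The signal must be trapped in the batch shell (the B: prefix) and the payload run in the background; otherwise the shell only sees the signal after srun returns.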
Since the last major upgrade of both clusters ALCC
and LiCCA in July, we have observed problems with
Slurm jobs allocating GPUs and with our Slurm accounting
database. The recent Slurm update (version 25.11) should
fix these problems.
Maintenance schedule:
- Friday, 28 November, 9:00: set all partitions to drain
- Monday, 1 December, 9:00: start of the Slurm update
-- GPU partitions drained
-- CPU partitions draining; running jobs continue,
but job survival is not guaranteed
- Monday, 1 December: we plan to resume all partitions by 18:00
- Login nodes will not be available to users until
the maintenance is finished.