Overview of clusters, limits and job processing
For details, please also read the Linux-Cluster subchapters!
Slurm cluster | Slurm partition | Nodes | Node range per job (min - max) | Maximum runtime (hours) | Maximum running jobs per user | Maximum submitted jobs per user | Memory limit (GByte) | Cluster- / partition-specific Slurm option | Typical job type |
---|---|---|---|---|---|---|---|---|---|
cm2 | cm2_large | 404 (overlapping partitions) | 25 - 64 | 48 | 2 | 30 | per node | --clusters=cm2 | |
cm2 | cm2_std | 404 (overlapping partitions) | 3 - 24 | 72 | 4 | 50 | | --clusters=cm2 | |
cm2_tiny | cm2_tiny | 300 | 1 - 4 | 72 | 10 | 50 | | --clusters=cm2_tiny | |
serial | serial_std | 96 (overlapping partitions) | 1 - 1 | 96 | dynamically | 250 | | --clusters=serial | Shared use of compute nodes among users! |
serial | serial_long | 96 (overlapping partitions) | 1 - 1 | > 72 (currently 480) | | 250 | | --clusters=serial | |
inter | cm2_inter | 12 | 1 - 4 | 2 | 1 | 2 | | --clusters=inter | Do not run production jobs! |
inter | teramem_inter | 1 | 1 - 1 | 240 | 1 | 2 | approx. 60 | --clusters=inter | |
inter | mpp3_inter | 3 | 1 - 3 | 2 | 1 | 2 | | --clusters=inter | Do not run production jobs! |
mpp3 | mpp3_batch | 145 | 1 - 32 | 48 | 50 | dynamically | | --clusters=mpp3 | |
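As an illustration of how these limits translate into a job script, here is a minimal sketch of a batch job for the cm2_std partition. It only uses values from the table above; the job name, node count, output pattern, and module/program names are placeholders, and your project may require further site-specific options (for example a QoS setting) that are described in the linked subchapters.

```bash
#!/bin/bash
#SBATCH --job-name=my_cm2_job        # placeholder job name
#SBATCH --output=%x.%j.out           # stdout/stderr file: <jobname>.<jobid>.out
#SBATCH --clusters=cm2               # cluster-specific option from the table
#SBATCH --partition=cm2_std          # partition of the cm2 cluster
#SBATCH --nodes=4                    # must lie within the 3 - 24 node range of cm2_std
#SBATCH --time=24:00:00              # must stay below the 72 h runtime limit

# Placeholder application; load whatever modules your program actually needs.
module load my_application
srun ./my_parallel_program
```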
Submit hosts
Submit hosts are usually login nodes that allow you to submit and manage batch jobs (see the example below the table).
Cluster segment | Submit hosts |
---|---|
CooLMUC-2 | lxlogin1, lxlogin2, lxlogin3, lxlogin4 |
CooLMUC-3 | lxlogin8, lxlogin9 |
Teramem | lxlogin8 |
IvyMUC | lxlogin10 |
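For example, a CooLMUC-2 job would typically be submitted from lxlogin1 - lxlogin4. A minimal sketch follows; the user name, domain suffix, and script name are assumptions, so take the exact login addresses from the LRZ access documentation.

```bash
# Log in to a CooLMUC-2 submit host (domain suffix is an assumption).
ssh my_user@lxlogin1.lrz.de

# Submit a batch script and check its status on the cm2 cluster.
sbatch my_job_script.slurm
squeue --clusters=cm2 --user=$USER
```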
However, note that cross-submission of jobs to other cluster segments is also possible. The only thing to take care of is that different cluster segments support different instruction sets, so make sure your software build produces a binary that can execute on the targeted cluster segment.
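As a sketch of such cross-submission (the script name is a placeholder): the --clusters option selects the target cluster independently of the login node you are on, and the same option is needed again when querying or cancelling the job.

```bash
# From a CooLMUC-3 login node (e.g. lxlogin8), submit to the cm2_tiny cluster;
# the binary inside the script must be built for the CooLMUC-2 instruction set.
sbatch --clusters=cm2_tiny my_tiny_job.slurm

# Query and cancel the job on that cluster.
squeue --clusters=cm2_tiny --user=$USER
scancel --clusters=cm2_tiny <jobid>
```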
Documentation of SLURM
- SLURM Workload Manager (commands and links to examples).
- Available SLURM clusters and features
- Guidelines for resource selection
- Running parallel jobs on the Linux-Cluster
- Running serial jobs on the Linux-Cluster