
Available SLURM clusters


Parallel resources (policies)

mpp2: MPI or shared memory programs on the MPP FDR14 Infiniband cluster "CooLMUC-2"
mpp3: MPI or shared memory programs on the KNL Omnipath cluster "CooLMUC-3". Note that interactive testing is dispatched to a subset of 8 nodes in the same cluster.
ivymuc: MPI or shared memory programs on the MPP FDR14 Ivy Bridge based Infiniband cluster
myri: MPI or shared memory jobs on the 10G Myrinet cluster. For this cluster, it makes sense to specify the partition. Available partitions are:
    matum_u: 32-way systems with 64 GB per node (short-time unprioritized execution; dedicated to users from MA-TUM)
    matum_p: 32-way systems with 64 GB per node (long-time prioritized execution; dedicated to users from MA-TUM)
inter: Interactive parallel jobs. Which of the clusters is used depends on the login node.
tum_chem: Parallel job processing dedicated to TUM chemistry users
hm_mech: Parallel job processing dedicated to Hochschule München / Mechatronik users
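As a sketch, a batch job for one of the parallel clusters above could be submitted with a script like the following (the cluster name is taken from the table; the job name, node counts, tasks per node, wall clock limit, and program name are placeholder assumptions to be adapted to your project):

```shell
#!/bin/bash
#SBATCH -J mpi_example           # job name (placeholder)
#SBATCH --clusters=mpp2          # SLURM cluster from the table above
#SBATCH --nodes=2                # assumed node count for this example
#SBATCH --ntasks-per-node=28     # assumed MPI tasks per node
#SBATCH --time=01:00:00          # assumed wall clock limit

# my_mpi_program is a placeholder for your own executable
mpiexec -n $SLURM_NTASKS ./my_mpi_program
```

The script would be submitted with `sbatch`, e.g. `sbatch job.sh`; `--clusters` (short form `-M`) selects which of the SLURM clusters listed above receives the job.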
Serial resources (policies)

serial: For serial job processing. Available partitions are:
    serial_mpp2: Standard serial jobs
    serial_long: Long-running serial jobs
inter: Interactive or batch shared memory jobs with high memory requirements (beyond 1 TByte), executed on teramem1.
bsbslurm: Serial job processing dedicated to users of BSB
tum_geodesy: Serial job processing dedicated to users of TUM / geodesy
lmu_asc: Serial job processing dedicated to users of the Arnold-Sommerfeld-Centre
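A minimal serial job script along the same lines (cluster and partition names from the table above; job name, time limit, and program name are placeholders):

```shell
#!/bin/bash
#SBATCH -J serial_example          # job name (placeholder)
#SBATCH --clusters=serial          # serial cluster from the table above
#SBATCH --partition=serial_mpp2    # standard serial partition
#SBATCH --time=00:30:00            # assumed wall clock limit

# my_serial_program is a placeholder for your own executable
./my_serial_program
```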



Available Features

This section describes additional features that can be requested via the -C (or --constraint=) option of a job. Note that only the values listed below can be specified; requesting contradictory values (for example, two different cluster modes) may result in undesired behaviour.

CooLMUC3 features

Select cluster mode:
    quad: "quadrant"; affinity between cache management and memory. Recommended for everyday use. In shared memory workloads where the application can use all the cores in a single process via a threading library such as OpenMP or TBB, this mode can also provide better performance than Sub-NUMA clustering mode.
    snc4: "Sub-NUMA clustering"; affinity between tiles, cache management and memory. NUMA-optimized software can profit from this mode. It is suitable for distributed memory programming models using MPI or hybrid MPI-OpenMP. Proper pinning of tasks and threads is essential.
    a2a: "all-to-all"; no affinity between tiles, cache management and memory. Not recommended, because performance is degraded.

Select memory mode:
    flat: High bandwidth memory is operated as regular memory mapped into the address space. Note: due to SLURM limitations, the maximum available memory per node for a job is still only 96 GBytes. The use of "numactl -p" or the Memkind library is recommended.
    cache: High bandwidth memory is operated as a cache for DDR memory.
    hybrid: High bandwidth memory is evenly split between regular memory (8 GB) and cache (8 GB).

For details see: KNL Processor Modes
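As an illustration, requesting quadrant cluster mode together with cache memory mode for a CooLMUC-3 job could look like this (the `&` AND operator is standard SLURM constraint syntax; the remaining directives are placeholder assumptions):

```shell
#!/bin/bash
#SBATCH -J knl_example               # job name (placeholder)
#SBATCH --clusters=mpp3              # CooLMUC-3 cluster from the table above
#SBATCH --constraint="quad&cache"    # quadrant cluster mode AND cache memory mode
#SBATCH --nodes=1                    # assumed node count
#SBATCH --time=00:30:00              # assumed wall clock limit

./my_knl_program                     # placeholder executable
```

Note that nodes may need to be rebooted into the requested mode before the job starts, so scheduling can take longer than for jobs without constraints.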

