Overview of clusters, limits and job processing


For details, please also read the Linux-Cluster subchapters!

The overview below is grouped by Slurm cluster. For each cluster system, Slurm cluster and Slurm partition it lists: the number of nodes in the partition, the node range per job (min - max), the maximum runtime in hours, the maximum number of running and of submitted jobs per user, the memory limit in GByte, the cluster- and partition-specific Slurm job settings, and notes on the typical job type.


CoolMUC-2

  • 28-way Haswell-EP nodes with Infiniband FDR14 interconnect and 2 hardware threads per physical core
  • Memory limit: 56 GByte per node

  Slurm cluster cm2, partition cm2_large
    • Nodes in partition: 404 (cm2_large and cm2_std are overlapping partitions)
    • Node range per job: 25 - 64
    • Maximum runtime: 48 hours
    • Maximum running jobs per user: 2
    • Maximum submitted jobs per user: 30
    • Slurm job settings: --clusters=cm2 --partition=cm2_large --qos=cm2_large

  Slurm cluster cm2, partition cm2_std
    • Node range per job: 3 - 24
    • Maximum runtime: 72 hours
    • Maximum running jobs per user: 4
    • Maximum submitted jobs per user: 50
    • Slurm job settings: --clusters=cm2 --partition=cm2_std --qos=cm2_std

  Slurm cluster cm2_tiny, partition cm2_tiny
    • Nodes in partition: 300
    • Node range per job: 1 - 4
    • Maximum runtime: 72 hours
    • Maximum running jobs per user: 10
    • Maximum submitted jobs per user: 50
    • Slurm job settings: --clusters=cm2_tiny
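To illustrate how these settings are combined in practice, here is a minimal sketch of a batch script for the cm2_std partition. Only the --clusters, --partition and --qos values are taken from the listing above; the job name, node count, task count, walltime and executable are hypothetical placeholders that you must adapt to your own application.

    #!/bin/bash
    #SBATCH --job-name=cm2_example       # hypothetical job name
    #SBATCH --clusters=cm2               # Slurm cluster, as listed above
    #SBATCH --partition=cm2_std          # partition, as listed above
    #SBATCH --qos=cm2_std                # QoS required for this partition
    #SBATCH --nodes=4                    # must lie within the 3 - 24 node range of cm2_std
    #SBATCH --ntasks-per-node=28         # one task per physical core of a 28-way Haswell-EP node
    #SBATCH --time=24:00:00              # must stay below the 72-hour limit of cm2_std

    srun ./my_mpi_program                # placeholder executable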



  Slurm cluster serial, partition serial_std
    • Nodes in partition: 96 (serial_std and serial_long are overlapping partitions)
    • Node range per job: 1 - 1
    • Maximum runtime: 96 hours
    • Maximum running jobs per user: dynamically adjusted depending on workload
    • Maximum submitted jobs per user: 250
    • Memory limit: shared use of compute nodes among users! Default memory = memory of the node / number of cores of the node
    • Slurm job settings: --clusters=serial --partition=serial_std --mem=<memory_per_node>MB

  Slurm cluster serial, partition serial_long
    • Node range per job: 1 - 1
    • Maximum runtime: > 72 hours (currently 480)
    • Maximum running jobs per user: dynamically adjusted depending on workload
    • Maximum submitted jobs per user: 250
    • Slurm job settings: --clusters=serial --partition=serial_long --mem=<memory_per_node>MB
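As a sketch of a single-node job on the shared serial cluster, the following batch script requests one core and an explicit amount of memory via --mem, as the settings above suggest. The job name, walltime, memory value and executable are hypothetical placeholders.

    #!/bin/bash
    #SBATCH --job-name=serial_example    # hypothetical job name
    #SBATCH --clusters=serial            # Slurm cluster, as listed above
    #SBATCH --partition=serial_std       # partition, as listed above
    #SBATCH --nodes=1                    # serial jobs run on exactly one node
    #SBATCH --ntasks=1                   # single task; the node is shared with other users
    #SBATCH --time=48:00:00              # must stay below the 96-hour limit of serial_std
    #SBATCH --mem=2000MB                 # explicit memory request (placeholder value)

    ./my_serial_program                  # placeholder executable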







  Slurm cluster inter, partition cm2_inter (CoolMUC-2 hardware)
    • Nodes in partition: 12
    • Node range per job: 1 - 4
    • Maximum runtime: 2 hours
    • Maximum running jobs per user: 1
    • Maximum submitted jobs per user: 2
    • Slurm job settings: --clusters=inter --partition=cm2_inter
    • Do not run production jobs!

  Slurm cluster inter, partition teramem_inter
    • Teramem: HP DL580 shared-memory system with 96 physical cores in total, each physical core has 2 hyperthreads
    • Nodes in partition: 1
    • Node range per job: 1 - 1 (up to 64 logical cores)
    • Maximum runtime: 240 hours
    • Maximum running jobs per user: 1
    • Maximum submitted jobs per user: 2
    • Memory limit: approx. 60 GByte per physical core
    • Slurm job settings: --clusters=inter --partition=teramem_inter

  Slurm cluster inter, partition mpp3_inter
    • CoolMUC-3 hardware: 64-way Knights Landing 7210F nodes with Intel Omni-Path 100 interconnect and 4 hardware threads per physical core
    • Nodes in partition: 3
    • Node range per job: 1 - 3
    • Maximum runtime: 2 hours
    • Maximum running jobs per user: 1
    • Maximum submitted jobs per user: 2
    • Memory limit: approx. 90 GByte DDR plus 16 GByte HBM per node
    • Slurm job settings: --clusters=inter --partition=mpp3_inter
    • Do not run production jobs!
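For interactive test runs, a resource allocation on one of the inter partitions can be requested directly from the command line of a login node. This is only a sketch: the node count and time limit are placeholders, and the program started inside the allocation is hypothetical.

    # request one CoolMUC-2 node for interactive use (at most 2 hours on cm2_inter)
    salloc --clusters=inter --partition=cm2_inter --nodes=1 --time=01:00:00

    # inside the allocation, start the (placeholder) program on the allocated node
    srun ./my_test_program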
  Slurm cluster mpp3, partition mpp3_batch
    • CoolMUC-3 hardware (see mpp3_inter above)
    • Nodes in partition: 145
    • Node range per job: 1 - 32
    • Maximum runtime: 48 hours
    • Maximum running jobs per user: 50
    • Maximum submitted jobs per user: dynamically adjusted depending on workload
    • Slurm job settings: --clusters=mpp3 --partition=mpp3_batch
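A batch job on CoolMUC-3 follows the same pattern; note that no QoS needs to be specified for mpp3_batch. Again, everything except the --clusters and --partition values is a placeholder in this sketch.

    #!/bin/bash
    #SBATCH --job-name=mpp3_example      # hypothetical job name
    #SBATCH --clusters=mpp3              # Slurm cluster, as listed above
    #SBATCH --partition=mpp3_batch       # partition, as listed above
    #SBATCH --nodes=8                    # must lie within the 1 - 32 node range of mpp3_batch
    #SBATCH --ntasks-per-node=64         # one task per physical core of a 64-way KNL node
    #SBATCH --time=12:00:00              # must stay below the 48-hour limit of mpp3_batch

    srun ./my_knl_program                # placeholder executable built for Knights Landing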

Submit hosts

Submit hosts are usually login nodes that allow users to submit and manage batch jobs. The submit hosts for each cluster segment are:

  • CoolMUC-2: lxlogin1, lxlogin2, lxlogin3, lxlogin4
  • CoolMUC-3: lxlogin8, lxlogin9
  • Teramem: lxlogin8
  • IvyMUC: lxlogin10

Note, however, that cross-submission of jobs to other cluster segments is also possible. The only thing you need to take care of is that different cluster segments support different instruction sets, so make sure that your software build produces a binary that can execute on the targeted cluster segment (see the sketch below).
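As a sketch, the following commands show a cross-submission from a CoolMUC-2 login node to the CoolMUC-3 batch cluster. The job script name is a placeholder, and the binary it starts must have been built for the Knights Landing instruction set rather than for the Haswell nodes of CoolMUC-2.

    # on lxlogin1 (a CoolMUC-2 login node): submit to the CoolMUC-3 cluster
    sbatch --clusters=mpp3 --partition=mpp3_batch my_knl_job.sh   # placeholder job script

    # check the job in the queue of the remote cluster
    squeue --clusters=mpp3 --user=$USER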
