CoolMUC-4

Coming soon!

This page contains preliminary information which might be subject to change!

Estimated Start of User Operation

CoolMUC-4 is currently being installed and tested. It replaces its predecessors CoolMUC-2 and CoolMUC-3. We expect user operation to start at the beginning of December 2024.

Hardware Architecture

Login Nodes

Name                 | Architecture                                  | Number of physical cores | Memory | Operating system | Remarks
cm4login1 (lxlogin5) | Intel(R) Xeon(R) Platinum 8380 CPU (Ice Lake) | 80                       | 1 TB   | SLES 15 SP6      | already accessible (see below)
cm4login2 (lxlogin6) | Intel(R) Xeon(R) Platinum 8380 CPU (Ice Lake) | 80                       | 1 TB   | SLES 15 SP6      | Not yet available!

Compute Nodes

Architecture                                       | Number of nodes | Cores per node | Total cores | Memory per node | Operating system | Local temporary file system (attached to the node) | Temporary file system (across all nodes) | Remarks
Intel(R) Xeon(R) Platinum 8360HL CPU (Cooper Lake) | 1               | 96             | 96          | 6 TB            | SLES 15 SP6      | 1.7 TB via "/tmp" (SSD)                             | $SCRATCH_DSS                             | Large Memory Teramem System. Usage: Job Processing on the Linux-Cluster
Intel(R) Xeon(R) Platinum 8380 CPU (Ice Lake)      | 6               | 80             | 480         | 1 TB            | SLES 15 SP6      | 1.7 TB via "/tmp" (SSD)                             | $SCRATCH_DSS                             | Already available as partition "cm4_inter_large_mem" in cluster segment "inter". Usage: Job Processing on the Linux-Cluster
Intel(R) Xeon(R) Platinum 8480+ (Sapphire Rapids)  | 106             | 112            | 11872       | 512 GB          | SLES 15 SP6      | 1.7 TB via "/tmp" (SSD)                             | $SCRATCH_DSS                             | Not yet available!

Access

Access the currently available CoolMUC-4 login node via:

ssh myuserid@lxlogin5.lrz.de
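
For convenience, the connection details can be stored in the local SSH client configuration. This is only a minimal sketch assuming a standard OpenSSH client; the host alias cm4 is arbitrary and myuserid is a placeholder for your LRZ user ID.

# Append a host alias for CoolMUC-4 to the local OpenSSH client configuration.
# "cm4" is an arbitrary alias, "myuserid" a placeholder for your LRZ user ID.
cat >> ~/.ssh/config <<'EOF'
Host cm4
    HostName lxlogin5.lrz.de
    User myuserid
EOF

# Afterwards the login command shortens to:
ssh cm4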

Default File Systems

On login nodes and compute nodes, users have access to:

  • DSS HOME ($HOME): Users of CoolMUC-2/-3 keep their home directory. There is no need to transfer data to the new system.
  • Temporary file system ($SCRATCH_DSS): This is the same temporary file system as currently used on CoolMUC-2/-3 (see the usage sketch below).
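
As a minimal sketch of the typical workflow, assuming that $SCRATCH_DSS points to a writable scratch directory for your account: keep large temporary job data on scratch and copy only results worth keeping back to $HOME. The directory names myrun and results and the file name output.dat are arbitrary placeholders.

# Keep large temporary job data on the shared scratch file system.
# "myrun" and "results" are arbitrary placeholder directory names.
mkdir -p "$SCRATCH_DSS/myrun"
cd "$SCRATCH_DSS/myrun"

# ... run the application here, producing e.g. output.dat ...

# Copy results worth keeping back to the permanent home directory.
mkdir -p "$HOME/results"
cp output.dat "$HOME/results/"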

Queues and Job Processing

Documentation pages on job processing will be updated soon.

As on the previous CoolMUC clusters, Slurm partitions for different purposes will be set up:

  • a multi-node partition, e.g. for MPI-parallel jobs,
  • a single-node partition for shared-memory jobs,
  • a shared partition (called "serial" on CoolMUC-2),
  • a partition for interactive jobs.

Details coming soon.
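
Until the partition names are published, the following is only an illustrative sketch of a multi-node MPI batch script on a Slurm cluster. The partition name cm4_std, the module name intel-mpi, and the executable name are hypothetical placeholders, not announced CoolMUC-4 settings.

#!/bin/bash
#SBATCH --job-name=mpi_test
#SBATCH --partition=cm4_std        # hypothetical partition name, replace once the real names are published
#SBATCH --nodes=2                  # multi-node partition, MPI-parallel job
#SBATCH --ntasks-per-node=112      # assumes the 112-core Sapphire Rapids compute nodes
#SBATCH --time=01:00:00

module load intel-mpi              # hypothetical module name; see the module environment section
srun ./my_mpi_program              # start one MPI rank per Slurm task

Such a script would be submitted with sbatch and monitored with squeue, as on the previous CoolMUC clusters.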

Software and Module Environment

Details coming soon.
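
For orientation only: on the previous CoolMUC clusters the software stack is provided through an environment module system, and CoolMUC-4 is expected to follow the same scheme. The module name intel below is a placeholder for illustration.

module avail          # list the software provided on the cluster
module load intel     # load a package into the environment ("intel" is a placeholder name)
module list           # show the currently loaded modules
module unload intel   # remove a package from the environment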