CoolMUC-4
Access
Two login nodes are available. They are reached through a load balancer named cool.hpc.lrz.de, which automatically assigns each session to the least loaded Linux Cluster login node.
Access to the CoolMUC-4 login nodes is granted via:
    ssh -Y cool.hpc.lrz.de -l xxyyyzz
For details, please refer to Access and Login to the Linux-Cluster
New System, New (Old) Rules
For details, please refer to Policies on the Linux Cluster
Hardware Architecture
Login Nodes
Architecture | Number of login nodes | Number of cores per node | Memory | Operating system |
---|---|---|---|---|
Intel(R) Xeon(R) Platinum 8380 CPU (Ice Lake) | 2 | 80 | 1 TB | SLES 15 SP6 |
Compute Nodes
Architecture | Number of nodes | Number of cores per node | Total number of cores | Memory per node | Operating system | Local temporary file system (attached to the node) | Temporary file system (shared across all nodes) | Remarks |
---|---|---|---|---|---|---|---|---|
Intel(R) Xeon(R) Platinum 8380 CPU (Ice Lake) | 12 | 80 | 960 | 1 TB | SLES 15 SP6 | 1.7 TB via "/tmp" (SSD) | $SCRATCH_DSS | New CoolMUC-4 |
Intel(R) Xeon(R) Platinum 8480+ (Sapphire Rapids) | 106 | 112 | 11872 | 512 GB | SLES 15 SP6 | | $SCRATCH_DSS | New CoolMUC-4 |
Intel(R) Xeon(R) Platinum 8360HL CPU (Cooper Lake) | 1 | 96 | 96 | 6 TB | | | $SCRATCH_DSS | Already existing Large Memory Teramem System |
Default File Systems
On login nodes and compute nodes, users have access to:
- DSS HOME ($HOME): Users of CoolMUC-2/-3 keep their home directory. There is no need to transfer data to the new system!
- Temporary file system ($SCRATCH_DSS): This is the same temporary file system as previously used on CoolMUC-2/-3; a short usage sketch follows below.
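As a quick illustration, the sketch below stages input data from the home directory to the scratch file system; the directory and file names are hypothetical:

    # Both variables are set automatically in the login environment.
    echo "$HOME"         # permanent home directory (DSS HOME)
    echo "$SCRATCH_DSS"  # temporary scratch space, available on all nodes

    # Hypothetical example: stage a large input file to scratch before a job.
    mkdir -p "$SCRATCH_DSS/my_run"
    cp "$HOME/inputs/big_input.dat" "$SCRATCH_DSS/my_run/"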
Queues and Job Processing
Please refer to Job Processing on the Linux-Cluster
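For orientation, a minimal CoolMUC-4 batch script might look like the sketch below; the partition name, resource numbers and program name are placeholder assumptions, so please check the page linked above for the actual queue names and limits:

    #!/bin/bash
    #SBATCH --job-name=example        # hypothetical job name
    #SBATCH --partition=cm4_std       # placeholder partition name, see the linked page
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=112     # one task per core on a Sapphire Rapids node
    #SBATCH --time=01:00:00

    ./my_program                      # placeholder executable

Such a script would be submitted with sbatch and monitored with squeue.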
Software and Module Environment
The current default software stack is named spack/23.1.0. Users can switch to other available software stacks as required, e.g. spack/22.2.1.
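Switching is done with the usual module commands, for example (a sketch assuming the standard module tooling on the cluster):

    module avail spack          # list the available software stacks
    module switch spack/22.2.1  # swap the default stack for an older release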
Note that on CoolMUC-4, the Intel compiler, MPI and MKL modules are no longer loaded by default. Users can activate the Intel environment by inserting the following commands in their corresponding SLURM scripts:
    module load intel
    module load intel-mpi
    module load intel-mkl
Other versions of the Intel software are available in the spack stack and can be loaded by users depending on their requirements.
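Putting this together, a job script for an MPI program built with the Intel toolchain might look like the following sketch; again, the partition name, task counts and executable name are placeholder assumptions:

    #!/bin/bash
    #SBATCH --job-name=mpi_example    # hypothetical job name
    #SBATCH --partition=cm4_std       # placeholder partition name
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=112     # one MPI rank per core on a Sapphire Rapids node
    #SBATCH --time=02:00:00

    # The Intel environment is no longer loaded by default on CoolMUC-4:
    module load intel
    module load intel-mpi
    module load intel-mkl

    # Launch the (hypothetical) executable with Intel MPI.
    mpiexec -n $SLURM_NTASKS ./my_mpi_program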
Further details are coming soon.