CoolMUC-4
Coming soon!
This page contains preliminary information which might be subject to change!
Estimated Start of User Operation
CoolMUC-4 is currently being installed and tested. It replaces its predecessors CoolMUC-2 and CoolMUC-3. We expect the start of user operation at the beginning of December 2024.
Hardware Architecture
Login Nodes
Name | Architecture | Number of cores | Memory | Operating system | Remarks |
---|---|---|---|---|---|
cm4login1 (lxlogin5) | Intel(R) Xeon(R) Platinum 8380 CPU (Ice Lake) | 80 | 1 TB | SLES 15 SP6 | already accessible (see below) |
cm4login2 (lxlogin6) | Intel(R) Xeon(R) Platinum 8380 CPU (Ice Lake) | 80 | 1 TB | SLES 15 SP6 | Not yet available! |
Compute Nodes
Architecture | Number of nodes | Number of cores per node | Total number of cores | Memory per node | Operating system | Local temporary file system (node-local) | Temporary file system (shared across all nodes) | Remarks |
---|---|---|---|---|---|---|---|---|
Intel(R) Xeon(R) Platinum 8360HL CPU (Cooper Lake) | 1 | 96 | 96 | 6 TB | SLES 15 SP6 | 1.7 TB via "/tmp" (SSD) | $SCRATCH_DSS | |
Intel(R) Xeon(R) Platinum 8380 CPU (Ice Lake) | 6 | 80 | 480 | 1 TB | SLES 15 SP6 | 1.7 TB via "/tmp" (SSD) | $SCRATCH_DSS | already available as partition "cm4_inter_large_mem" in cluster segment "inter" |
Intel(R) Xeon(R) Platinum 8480+ (Sapphire Rapids) | 106 | 112 | 11872 | 512 GB | SLES 15 SP6 | 1.7 TB via "/tmp" (SSD) | $SCRATCH_DSS | Not yet available! |
Access
Access the CoolMUC-4 login nodes via:
ssh myuserid@lxlogin5.lrz.de
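Optionally, a host alias can be added to the SSH configuration on your local machine so that the full host name does not have to be typed each time. This is only a convenience sketch; the alias name cm4 is a placeholder and myuserid stands for your own LRZ user ID:

```bash
# ~/.ssh/config on your local machine (alias name "cm4" is a placeholder)
Host cm4
    HostName lxlogin5.lrz.de
    User myuserid
```

Afterwards, "ssh cm4" is equivalent to the command above.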
Default File Systems
On login nodes and compute nodes, users have access to:
- DSS HOME ($HOME): Users of CoolMUC-2/-3 keep their Home directory; there is no need to transfer data to the new system.
- Temporary file system ($SCRATCH_DSS): This is the same temporary file system as currently used on CoolMUC-2/-3.
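Both environment variables are set automatically at login. A minimal check, for example to see which directories they point to and how much space is left on the temporary file system, could look like this (standard shell commands, no CoolMUC-4 specifics assumed):

```bash
# Show the directories behind the default file-system variables
echo "$HOME"
echo "$SCRATCH_DSS"

# Check the available space on the temporary file system
df -h "$SCRATCH_DSS"
```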
Queues and Job Processing
Documentation pages on job processing will be updated soon.
As on the previous CoolMUC clusters, Slurm partitions for different purposes will be set up:
- a multi-node partition, e.g. for MPI-parallel jobs,
- a single-node partition for shared-memory jobs,
- a shared partition (comparable to the "serial" partition on CoolMUC-2),
- a partition for interactive jobs.
Details coming soon.
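Until the final partition layout is published, the following batch script is only a sketch of what an MPI job on CoolMUC-4 might look like; the partition name cm4_std, the core count, and the program name are placeholders, not confirmed CoolMUC-4 settings:

```bash
#!/bin/bash
#SBATCH --job-name=mpi_test
#SBATCH --partition=cm4_std        # placeholder -- actual partition names will be announced
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=112      # placeholder -- matches the 112 cores of a Sapphire Rapids node
#SBATCH --time=01:00:00

# Launch the MPI program (binary name is a placeholder)
srun ./my_mpi_program
```

Once the partitions are available, sinfo lists their names together with node counts and time limits.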
Software and Module Environment
Details coming soon.
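Until the CoolMUC-4 software stack is announced, only the usual module commands can be sketched here; the module names in the example are placeholders and not confirmed CoolMUC-4 modules:

```bash
# List the software provided through the module system
module avail

# Load a compiler and an MPI library (placeholder module names)
module load intel intel-mpi

# Show which modules are currently loaded
module list
```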