System Status (see also: Access and Overview of HPC Systems)
Green = fully operational
Yellow = operational with restrictions (see messages below)
Red = not available
Block operation and Scheduled Maintenance 1.10.-8.10.
|login nodes: skx.supermuc.lrz.de|
|archive nodes: skx-arch.supermuc.lrz.de|
|partitions: general, large|
|Globus Online File Transfer:|
|Detailed node status|
|serial partitions: serial|
|parallel partitions: cm2_std, cm2_large|
|interactive partition: cm2_inter|
|parallel partition: mpp3_batch|
|interactive partition: mpp3_inter|
|teramem, ivymuc, kcslxlogin10.lrz.de|
Compute Cloud: https://cc.lrz.de
detailed status and free slots: https://cc.lrz.de/lrz
|LRZ AI Systems|
|Messages for SuperMUC-NG|
Between 18:00 and 8:00 there will be a block operation, during which a selected set of large-scale jobs will be scheduled for execution. Access to the login nodes will remain enabled during this time, but other queued jobs will not start.
The block operation will be followed by necessary maintenance of the cooling infrastructure and further tuning measures. Login to the system will be suspended starting at 8:00.
We will update this document as further information becomes available.
The registration for Linux Cluster Introduction and CFD courses is now open.
Archive nodes update
The new ANSYS Software Release, Version 2022.R1, has been installed and provided on SuperMUC-NG. For details please refer to the corresponding announcement.
The Energy Aware Runtime (EAR) has been reactivated. Please be aware that this may have an impact on job processing times.
Please note that WORK/SCRATCH on SuperMUC-NG may currently exhibit performance degradation under heavy I/O load. Take this into account when planning your job runtimes.
The new hpcreport tool is now available for checking job performance and accounting on SuperMUC-NG.
|Messages for Linux Clusters|
SCRATCH is now fully online again. While we expect older data that were temporarily inaccessible to be fully available again, data created in the last few days before the problems started might be corrupt and will need to be restored from a tape archive (if one exists) or recreated.
The new ANSYS Software Release, Version 2022.R1, has been installed and provided on the LRZ Linux Cluster systems (CM2, CM3 and RVS systems). For details please refer to the corresponding announcement.
The new release of Abaqus, Version 2022 (Dassault Systèmes software), has been installed on both Linux Clusters (CoolMUC-2 and CoolMUC-3) as well as on the RVS systems. The Abaqus documentation has been updated.
The new release of SimCenter StarCCM+, Version 2021.3.1 (Siemens PLM Software), has been installed and provided on the LRZ HPC systems (CM2, CM3, SNG and RVS systems). For details please see the corresponding announcement.
|Messages for Cloud and other HPC Systems|
The LRZ AI and MCML Systems are back in operation, as the maintenance planned from January 7th to January 11th has been completed. The RStudio Server service at LRZ has been decommissioned. For a replacement offering, please see Interactive Web Servers on the LRZ AI Systems and, more generally, LRZ AI Systems.