System Status (see also: Access and Overview of HPC Systems)
Green = fully operational
Yellow = operational with restrictions (see messages below)
Red = not available
Maintenance Feb 1-5 on SuperMUC-NG and Feb 3-5 on the Linux Cluster
|login nodes: skx.supermuc.lrz.de|
|archive nodes: skx-arch.supermuc.lrz.de|
|Globus Online File Transfer:|
|Detailed node status|
|serial partitions: serial|
|parallel partitions: cm2_(std,large)|
|interactive partition: cm2_inter|
|parallel partition: mpp3_batch|
|interactive partition: mpp3_inter|
|teramem, ivymuc, kcs|
|Compute Cloud and other HPC Systems|
|Compute Cloud: https://cc.lrz.de|
|detailed status and free slots: https://cc.lrz.de/lrz|
|LRZ AI Systems|
The Compute Cloud will be shut down due to a planned power outage on Wednesday, February 3rd, at 17:00 (5 pm). Operation is expected to resume on Friday, February 5th, 2021. For further details, please see here. This downtime will also affect the RStudio Server service.
|Messages for SuperMUC-NG|
Due to the job-processing backlog and the current job profile, no reservations or prioritizations can be granted.
Maintenance has been scheduled for February 1-5. See details.
Please see the upcoming Online Courses and Workshops provided by LRZ.
The new version 2021 of Abaqus (FEM software by Dassault Systèmes) has been installed on the Linux Clusters running the SLES15 operating system (i.e. CM2 and CM2_tiny). New corresponding module files are provided, and abaqus/2021 has been made the new default Abaqus module.
Due to the very limited use of MSC products and MSC licenses on the high-performance systems at LRZ, MSC software support, including installations and the license server, will be discontinued by September 2021. If you wish to discuss this matter, please open a ticket with the subject "MSC" via https://servicedesk.lrz.de/.
The announced maintenance interval has been shifted to the time frame February 3-5. Please read the details on the impact for the different cluster segments.
Archive nodes update
The new ANSYS software release, version 2022.R1, has been installed and provided on SuperMUC-NG. For details, please refer to the corresponding announcement.
The Energy Aware Runtime (EAR) has been reactivated. Please be aware that this may have an impact on job processing times.
Please note that WORK/SCRATCH on SuperMUC-NG may currently exhibit performance degradation under heavy I/O load. Take this into account when planning your job runtimes.
The new hpcreport tool is now available for checking job performance and accounting on SuperMUC-NG. Please check it out.
|Messages for Linux Clusters|
SCRATCH is now fully online again. While we expect older data that were temporarily inaccessible to be fully available again, data created in the last few days before the problems started might be corrupt and will need to be restored from the tape archive (if one exists) or recreated.
The new ANSYS software release, version 2022.R1, has been installed and provided on the LRZ Linux Cluster systems (CM2, CM3 and RVS systems). For details, please refer to the corresponding announcement.
The new release of Abaqus, version 2022 (Dassault Systèmes software), has been installed on the Linux Clusters CoolMUC-2 and CoolMUC-3 as well as on the RVS systems. The Abaqus documentation has been updated.
The new release of Simcenter STAR-CCM+, version 2021.3.1 (Siemens PLM software), has been installed and provided on the LRZ HPC systems (CM2, CM3, SNG and RVS systems). For details, please refer to the corresponding announcement.
|Messages for Cloud and other HPC Systems|
The LRZ AI and MCML systems are back in operation, as the maintenance planned from January 7th to January 11th has been completed. The RStudio Server service at LRZ has been decommissioned; for a replacement offering, please see Interactive Web Servers on the LRZ AI Systems and, more generally, LRZ AI Systems.