High Performance Computing

Forgot your password? Click here.
Add a new user (only for SuperMUC-NG)? Click here.
Add a new IP address (only for SuperMUC-NG)? Click here.
How to write good LRZ Service Requests? Click here.
How to set up two-factor authentication (2FA) on HPC systems? Click here.
There are still open seats in the FORTRAN course!
New: Virtual "HPC Lounge" to ask questions and get advice, every Wednesday, 2:00pm - 3:00pm.
For details and the Zoom link see: HPC Lounge
System Status (see also: Access and Overview of HPC Systems)
GREEN = fully operational, YELLOW = operational with restrictions (see messages below), RED = not available (see messages below)

Höchstleistungsrechner (SuperMUC-NG)
login nodes: skx.supermuc.lrz.de (LOGIN)
archive nodes: skx-arch.supermuc.lrz.de (ARCHIVE)
File Systems
Partitions/Queues: FAT, TEST
Detailed node status
Details: Submit an Incident Ticket for the SuperMUC-NG. Add new user? Click here. Add new IP? Click here. Questions about 2FA on SuperMUC-NG? Click here.
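For quick reference, a minimal login sketch for the nodes listed above; the user ID xxyyyzz is a placeholder, and access additionally requires a registered IP address and 2FA (see the links at the top of this page):

    # Interactive login to SuperMUC-NG (replace xxyyyzz with your own user ID)
    ssh xxyyyzz@skx.supermuc.lrz.de
    # Archive nodes, e.g. for moving data to/from the tape archive
    ssh xxyyyzz@skx-arch.supermuc.lrz.de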
Linux Cluster

CoolMUC-4
login nodes: cool.hpc.lrz.de | UP
serial partition serial_std | UP
serial partition serial_long | UP
parallel partitions cm4_(tiny|std) | UP
interactive partition cm4_inter | UP
teramem_inter | UP

LXC Housing Clusters (access only by the specific owners/users of these systems)
kcs | PARTIALLY UP
biohpc | MOSTLY UP
hpda | UP

File Systems
HOME | UP

Details:
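For quick reference, a minimal sketch of reaching CoolMUC-4 and the interactive partition listed above; the user ID, task count, and wall time are placeholders, and the exact Slurm options required (e.g. account or cluster flags) may differ, so please consult the Job Processing on the Linux-Cluster documentation:

    # Log in to the CoolMUC-4 login nodes (xxyyyzz is a placeholder user ID)
    ssh xxyyyzz@cool.hpc.lrz.de
    # Request a short interactive allocation on the cm4_inter partition
    salloc --partition=cm4_inter --ntasks=4 --time=00:30:00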
Compute Cloud and other HPC Systems
Compute Cloud (https://cc.lrz.de), detailed status: Status | UP
LRZ AI Systems | UP
Details:

DSS Storage Systems
For the status overview of the Data Science Storage please go to https://doku.lrz.de/display/PUBLIC/Data+Science+Storage+Statuspage
Messages
see also: Aktuelle LRZ-Informationen / News from LRZ
Messages for all HPC Systems
New Version of ANSYS 2025.R1 available
Today the new version of the ANSYS software, version 2025.R1, was installed, tested, and made available on CoolMUC-4 and SuperMUC-NG under the operating system SLES 15. The new version is now the default for all major ANSYS software components.
Messages for SuperMUC-NG
Maintenance SuperMUC-NG Phase 2
We have started a planned maintenance period for the Phase 2 system. The maintenance of SuperMUC-NG Phase 2 will continue for the whole week.
Messages for Linux Clusters
Maintenance: Change in Slurm configuration on CoolMUC-4
Please take note of the following announcement! Although CoolMUC-4 has more powerful CPUs, the system has far fewer CPU cores and nodes than its predecessor. Resources are limited and demand is increasing. We are therefore forced to make changes to the Slurm configuration, which will mainly affect job processing on the cluster segments "serial" and "cm4". Please refer to Job Processing on the Linux-Cluster for details on the updated cluster configuration. A maintenance was carried out on Friday 21 to apply the changes. The most important changes are:
What users need to do after the maintenance:
We strongly recommend that users of the "serial" cluster check whether their workflows can be parallelized, in order to benefit from the parallel cluster segment "cm4". If even "cm4" is insufficient, you may also consider applying for a (test) project on SuperMUC-NG. Do you need further consulting? Don't hesitate to contact us via the Servicedesk or via a virtual Zoom meeting in our HPC Lounge.
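As an illustration of moving a serial workflow to the parallel segment, here is a minimal Slurm batch sketch for the cm4 partitions named above; the partition, node and task counts, and wall time are example values only, the slurm_setup module is an assumption about the LRZ environment, and the authoritative options are documented on the Job Processing on the Linux-Cluster page:

    #!/bin/bash
    #SBATCH --job-name=example_job        # placeholder job name
    #SBATCH --partition=cm4_tiny          # parallel partition from the announcement
    #SBATCH --nodes=1                     # example: one CoolMUC-4 node
    #SBATCH --ntasks-per-node=16          # example task count, not a recommendation
    #SBATCH --time=01:00:00               # example wall time
    module load slurm_setup               # assumed LRZ setup module, verify locally
    srun ./my_parallel_program            # placeholder executable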
Access to the new CoolMUC-4 has been opened
In December 2024, access to the new CoolMUC-4 (CM4) Linux Cluster was opened. The CM4 cluster comprises roughly 12,000 cores based on Intel® Xeon® Platinum 8480+ (Sapphire Rapids) processors, interconnected by an InfiniBand network. Please have a look at the updated documentation before filing a ticket with the LRZ Service Desk. Please mind the changed layout of the module system and the 112 CPU cores and 512 GB RAM per compute node on CM4 hardware.
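To check the per-node resources mentioned above on the running system, a small sketch using standard Slurm commands; the partition name cm4_std is taken from the status table, and the exact output formatting may differ:

    # Show node count, cores per node and memory per node for a CM4 partition
    sinfo --partition=cm4_std --format="%P %D %c %m"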
Remarks on Spack software stack availability and Intel-related modules
Please note: Since the last maintenance at the end of November 2024, the latest LRZ software stack spack/23.1.0 is set as the default on the CoolMUC-4 partitions! The old software stack spack/22.2.1 is still available via the corresponding module.
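A minimal sketch of switching between the software stacks mentioned above; the module names are taken from the announcement, and the commands assume the usual environment-modules/Lmod interface available on the cluster:

    # Show which software stack is currently loaded
    module list
    # Fall back from the default stack to the previous one, if needed
    module switch spack/23.1.0 spack/22.2.1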
Messages for Compute Cloud and other HPC Systems
The AI Systems (including the MCML system segment) will undergo a maintenance procedure between February 17th and 19th, 2025. On these days, the system will not be available to users. Normal user operation is expected to resume during the course of Wednesday, February 19th.
We are currently observing and investigating connection issues to https://login.ai.lrz.de. UPDATE: The issue has been resolved.
The AI Systems will be affected by an infrastructure power cut scheduled in November 2024. The following system partitions will become unavailable for 3 days during the specified time frame. We apologise for the inconvenience associated with that. Calendar Week 46, 2024-11-11 - 2024-11-13
The AI Systems (including the MCML system segment) are under maintenance between September 30th and October 2nd, 2024. On these days, the system will not be available to users. Normal user operation is expected to resume during the course of Wednesday, October 2nd. The previously announced scheduled downtime between 2024-09-16 and 2024-09-27 (Calendar Weeks 38 & 39) has been postponed until further notice.
HPC Services
Attended Cloud Housing