High Performance Computing

Forgot your Password? click here
Add new user (only for SuperMUC-NG)? click here
Add new IP (only for SuperMUC-NG)? click here
How to write good LRZ Service Requests? click here
How to setup two-factor authentication (2FA) on HPC systems? click here
New: Virtual "HPC Lounge" to ask questions and get advice. Every Wednesday, 2:00pm - 3:00pm
For details and Zoom Link see: HPC Lounge
System Status (see also: Access and Overview of HPC Systems)
GREEN = fully operational
YELLOW = operational with restrictions (see messages below)
RED = not available (see messages below)
Höchstleistungsrechner (SuperMUC-NG)
login nodes: skx.supermuc.lrz.de (LOGIN)
archive nodes: skx-arch.supermuc.lrz.de (ARCHIVE)
File Systems
Partitions/Queues: FAT, TEST
Detailed node status
Details: Submit an Incident Ticket for the SuperMUC-NG. Add new user? click here. Add new IP? click here. Questions about 2FA on SuperMUC-NG? click here.
Linux Cluster
CoolMUC-4
login nodes: cool.hpc.lrz.de: UP
serial partition serial_std: UP
serial partition serial_long: UP
parallel partitions cm4_(tiny | std): UP
interactive partition cm4_inter: UP
teramem_inter: UP
Housing Clusters (access restricted to owners/users)
kcs: PARTIALLY UP
biohpc: MOSTLY UP
hpda: UP
File Systems
HOME: UP
Details:
Compute Cloud and other HPC Systems
Compute Cloud (https://cc.lrz.de), detailed status: Status: UP
AI Systems: UP
Details:
DSS Storage Systems
For the status overview of the Data Science Storage, please go to https://doku.lrz.de/display/PUBLIC/Data+Science+Storage+Statuspage
Messages
see also: Aktuelle LRZ-Informationen / News from LRZ
Messages for all HPC Systems
The new version of the CFD solver StarCCM+ by Siemens PLM (version 2502.0001 = 20.02.008 = 2025.1.1) has been installed, tested, and rolled out on CoolMUC-4 and SuperMUC-NG Phase 1.
Messages for SuperMUC-NG
Phase 1 (CPU): A maintenance of the WORK file system hardware is scheduled for Monday, 8:00-14:00. General job processing is suspended and access to the system will not be possible during this period.
Phase 2 (GPU): A maintenance is scheduled from to . Job operation will be suspended and access to the system might be limited.
Environment module: a changes roll-out is scheduled for :
Important! If you launch jobs shortly before the roll-out, you can avoid failures by making your spack module commands tolerant of errors. For example: module switch spack/22.2.1 || true
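As a sketch of the pattern the announcement suggests (the module name spack/22.2.1 is taken from the message above; the error-suppression details are an assumption, not official LRZ guidance), a job script can guard its module commands so a renamed or removed module does not abort the whole job:

```shell
#!/bin/bash
# Guarded module setup for a batch job (sketch; assumes the "module"
# command of the Environment Modules system is available on the node).
# If the exact spack version named in the announcement disappears during
# the roll-out, the error is discarded and "|| true" forces a zero exit
# status, so the rest of the job script continues to run.
module switch spack/22.2.1 2>/dev/null || true
echo "module setup finished with status $?"
```

The trade-off: `|| true` hides the failure, so a later step should verify that the tools it needs are actually on the PATH before relying on them.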
Messages for Linux Clusters
Environment module: a changes roll-out is scheduled for :
Important! If you launch jobs shortly before the roll-out, you can avoid failures by making your spack module commands tolerant of errors. For example: module switch spack/22.2.1 || true
Messages for Compute Cloud and other HPC Systems
The AI Systems (including the BayernKI and MCML system segments) will undergo a maintenance procedure between May 19th and 21st, 2025. On these days, the system will not be available to users. Normal user operation is expected to resume during the course of Wednesday, May 21st.
HPC Services
Attended Cloud Housing