High Performance Computing

Forgot your password? click here

Add a new user (only for SuperMUC-NG)? click here

Add a new IP (only for SuperMUC-NG)? click here

How to write good LRZ Service Requests? click here

How to set up two-factor authentication (2FA) on HPC systems? click here

New: Virtual "HPC Lounge" for asking questions and getting advice, every Wednesday, 2:00 pm - 3:00 pm.
For details and the Zoom link see: HPC Lounge

System Status (see also: Access and Overview of HPC Systems)

GREEN = fully operational | YELLOW = operational with restrictions (see messages below) | RED = not available (see messages below)



Höchstleistungsrechner (SuperMUC-NG)

login nodes: skx.supermuc.lrz.de (LOGIN)

archive nodes: skx-arch.supermuc.lrz.de (ARCHIVE)

File systems: HOME, WORK, SCRATCH, DSS, DSA

Partitions/queues: MICRO, GENERAL, LARGE, FAT, TEST

Detailed node status

Details:

Submit an Incident Ticket for the SuperMUC-NG

Add a new user? click here

Add a new IP? click here

Questions about 2FA on SuperMUC-NG? click here


Linux Cluster

CoolMUC-4

login nodes: cool.hpc.lrz.de: UP

serial partition serial_std: UP
serial partition serial_long: UP
parallel partitions cm4_ (tiny | std): UP
interactive partition cm4_inter: UP
teramem_inter: UP

Housing Clusters (access restricted to owners/users):

kcs: PARTIALLY UP
biohpc: MOSTLY UP
hpda: UP

File systems:

HOME: UP
SCRATCH_DSS: UP
DSS: UP
DSA: UP

Detailed node status
Detailed queue status



Details:

Submit an Incident Ticket for the Linux Cluster 
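
The interactive partition cm4_inter listed above is typically reached through Slurm's salloc. A minimal sketch; the node count and time limit are illustrative values, not LRZ defaults:

# Request an interactive allocation on the cm4_inter partition (illustrative values)
salloc --partition=cm4_inter --nodes=1 --time=00:30:00
# Once the allocation is granted, run commands on the allocated node, e.g.:
srun hostname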


DSS Storage systems

For the status overview of the Data Science Storage, please go to

https://doku.lrz.de/display/PUBLIC/Data+Science+Storage+Statuspage


Messages

see also: Aktuelle LRZ-Informationen / News from LRZ

Messages for all HPC Systems

The new version of the CFD solver StarCCM+ by Siemens PLM (version 2502.0001 = 20.02.008 = 2025.1.1) has been installed, tested, and rolled out on CoolMUC-4 and SuperMUC-NG Phase 1.
The documentation has been updated accordingly: https://doku.lrz.de/siemens-plm-on-hpc-systems-10746502.html
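
To pick up the new release, a session might look as follows. This is a sketch: the exact module name and version string are assumptions, so verify them with module avail first.

# List the installed StarCCM+ modules (module name is an assumption; check "module avail")
module avail starccm
# Load the new release and confirm the version
module load starccm/2502.0001
starccm+ -version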


Messages for SuperMUC-NG

Phase 1 (CPU): A maintenance of the WORK file system hardware is scheduled for Monday, 8:00 - 14:00. General job processing will be suspended, and access to the system will not be possible during this period.

Phase 2 (GPU): A maintenance is scheduled from [date] to [date]. Job operation will be suspended and access to the system might be limited.

Environment modules: the following changes are scheduled to roll out on [date]:

  1. Renaming spack to stack.
  2. Fixing salloc issues for the housing systems attached to CoolMUC-4.
  3. Loading stack/24.4.0 as default on CoolMUC-4 and stack/22.2.1 on SuperMUC-NG Phase 1.
  4. Hiding administrative modules.

Important! If you use module switch spack/... or module remove spack/... in your batch scripts, rename spack to stack there.

If you submit jobs shortly before the change, you can avoid failures by duplicating each command for both the spack and the stack name. For example:

module switch spack/22.2.1 || true   # old name: fails harmlessly after the rename
module switch stack/22.2.1 || true   # new name: fails harmlessly before the rename
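
In a batch script, such a guard sits with the other module commands near the top. A minimal sketch; the job name, partition, and resource values are placeholders, and the module version should match your system's (22.2.1 is taken from the example above):

#!/bin/bash
#SBATCH --job-name=example        # placeholder
#SBATCH --partition=cm4_std       # placeholder: one of the cm4_ partitions
#SBATCH --nodes=1
#SBATCH --time=00:30:00

# Try both module names during the transition; "|| true" keeps the script
# going when "module switch" fails because the other name does not exist.
module switch spack/22.2.1 || true
module switch stack/22.2.1 || true

# ... rest of the job ...

The same pattern works for module remove: issue it once for the spack name and once for the stack name.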

 

Messages for Linux Clusters
Environment modules: the module changes and the spack-to-stack migration advice listed above under "Messages for SuperMUC-NG" apply to the Linux Clusters as well.

Messages for Compute Cloud and other HPC Systems

The AI Systems (including the BayernKI and MCML system segments) will undergo a maintenance procedure between May 19th and 21st, 2025. On these days, the system will not be available to users. Normal user operation is expected to resume during the course of Wednesday, May 21st.