High Performance Computing


Forgot your password? Click here.

Add new user (only for SuperMUC-NG)? Click here.

Add new IP (only for SuperMUC-NG)? Click here.

How to write good LRZ Service Requests? Click here.

How to set up two-factor authentication (2FA) on HPC systems? Click here.


Virtual HPC Lounge for asking questions and getting advice: every Wednesday, 2:00–3:00 pm.

For users of SuperMUC-NG: New GCS Large-Scale Call open from January 12th to February 9th, 2025, 17:00 CET (strict deadline)

System Status (see also: Access and Overview of HPC Systems)

GREEN = fully operational
YELLOW = operational with restrictions (see messages below)
RED = not available (see messages below)



 

Höchstleistungsrechner (SuperMUC-NG): RUNNING

login nodes: skx.supermuc.lrz.de – UP
Partitions/Queues: TEST, MICRO, GENERAL, LARGE

login nodes: pvc.supermuc.lrz.de – UP
Partitions/Queues: TEST, GENERAL, LARGE

File systems: HOME, WORK, SCRATCH, DSS, DSA

SuperMUC-NG Phase 2 only: DAOS
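The partitions listed above are selected through Slurm at job submission time. As a minimal sketch of a batch script (the job name, project account, task count, and the LRZ-specific `slurm_setup` module are placeholders/assumptions to check against the current documentation, not prescribed values):

```shell
#!/bin/bash
#SBATCH --job-name=example          # placeholder job name
#SBATCH --partition=micro           # one of the partitions above, lowercase in Slurm
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48        # illustrative value; match the node type
#SBATCH --time=00:30:00
#SBATCH --account=pr00xx            # placeholder project ID

module load slurm_setup             # assumption: LRZ setup module, verify in the docs
srun ./my_application               # placeholder executable
```

Submitted with `sbatch job.sh`; this is a configuration sketch and only runs inside a Slurm installation.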

Further documentation

Submit an Incident Ticket for the SuperMUC-NG

Add new user? Click here.

Add new IP? Click here.

Questions about 2FA on SuperMUC-NG? Click here.


 

Linux Cluster: RUNNING

CoolMUC-4

login nodes: cool.hpc.lrz.de – UP

serial partition serial_std – PARTIALLY UP
serial partition serial_long – UP
parallel partitions cm4_(tiny | std) – UP
interactive partition cm4_inter – UP

teramem_inter – UP
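Interactive partitions such as cm4_inter are normally used through an interactive Slurm allocation rather than a batch script. A sketch, assuming standard Slurm commands (the time limit and node count are illustrative, and this only runs against a live Slurm cluster):

```shell
# Request an interactive allocation on the cm4_inter partition (illustrative values)
salloc --partition=cm4_inter --nodes=1 --time=01:00:00
# Once the allocation is granted, start a shell on the compute node:
srun --pty bash -i
```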


Housing Clusters
(Access restricted to owners/users)

biohpc – MOSTLY UP

hpda – UP

File Systems

HOME – UP
SCRATCH_DSS – UP
DSS – UP
DSA – UP


 

Detailed node status
Detailed queue status



Details:

Submit an Incident Ticket for the Linux Cluster 

 

Messages

Messages for all HPC Systems

The new ANSYS Release 2025.R2 has been installed, tested, and rolled out on SuperMUC-NG Phase 1 and CoolMUC-4. ANSYS 2025.R2 is now the default ANSYS release on those systems, and the LRZ documentation has been updated accordingly.
If you notice any related issues, please file an LRZ Service Request. The LRZ download portal for the ANSYS software will be updated with the new ANSYS installation files as soon as possible.
Note that with ANSYS Release 2025.R2, the Rocky DEM solver supports the SUSE Linux Enterprise Server operating system (SLES 15 SP4, SP5, and SP6) for the first time.

The new Siemens PLM release of StarCCM+ 2025.3.1 (= 2510.0001 = v20.06.010) has been installed, tested, and rolled out on SuperMUC-NG Phase 1 and CoolMUC-4. StarCCM+ 2025.3.1 is now the default StarCCM+ release on those systems, and the LRZ documentation has been updated accordingly. Older versions of StarCCM+ with version numbers 2024.x.1 and 2025.x.1 (x = 1, 2, (3)) remain available in parallel on the LRZ HPC systems until the first 2026.1.1 release next year (April 2026).
If you notice any related issues, please file an LRZ Service Request.
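Default releases such as these are typically exposed through the environment-modules system on the LRZ clusters. A sketch under that assumption (the exact module names are assumptions; check `module avail` on the system):

```shell
module avail starccm          # list installed StarCCM+ releases (module name is an assumption)
module load starccm           # without a version suffix, loads the current default release
# To pin one of the older concurrent releases explicitly, e.g.:
# module load starccm/2024.2.1
```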

SuperMUC-NG

Höchstleistungsrechner on LRZ Service Status
Announcements and incidents


Linux Cluster

Linux Cluster on LRZ Service Status
Announcements and incidents
[Completed] [Maintenance] Data Science Storage Online Maintenance
Fri, 28.11.2025 13:30 – Mon, 15.12.2025 11:00
Affected services: [Data Science Storage], [Linux Cluster]

Details

We are updating the storage clusters online to new versions. There is no outage of the systems, since all components are redundant and are updated sequentially, but you might observe slower performance.

All clusters/environments have now been successfully updated:

DSS02, Linux and SNG home and project FS: done (01.12.–12.12.)
File systems: dsshome1, dssfs02, dssfs03, lrzsys, sngslurm

DSS05, LMU BIO: done (13.10.–17.10.)
File systems: dsslegfs01, dsslegfs02

DSS03, terrabyte ECE & grid: done (27.10.–21.11.)
File system: tbyscratch

DSS06, MCML, LRZ AI and LRZ Scratch, tby: done (27.10.–24.11.)
File systems: dssmcmlfs01, dssfs04, dssfs05, lxclscratch, dsstbyfs03

DSS03, tby main: done (10.11.–28.11.)
File systems: dsstbyfs01, dsstbyfs02


Attended Cluster Node Housing

Attended Cluster Node Housing on LRZ Service Status
Announcements and incidents
[i] Retirement of CoolMUC-3 housing at the end of 2025
Wed, 31.12.2025 07:00 – expected to last until Thu, 01.01.2026 00:00
Affected services: [Attended Cluster Node Housing]

CoolMUC-3 housing has reached end of life, and decommissioning is scheduled for December 31st, 2025, as announced beforehand. The following partitions are affected:

  • htce_{all,para,long,short,special}
  • htfd_batch
  • htrp_batch
  • httf_{skylake,bigmem}
  • htus_batch
  • kcs_batch
  • kcs_nim_batch
  • tum_aer_huge

AI Systems

AI Systems on LRZ Service Status
Announcements and incidents
[Completed] [Maintenance] AI Systems Maintenance, December 1st–3rd, 2025
Mon, 01.12.2025 06:00 – Wed, 03.12.2025 17:00
Affected services: [AI Systems]

The AI Systems (including the BayernKI and MCML system segments) will undergo a maintenance procedure between December 1st and 3rd, 2025. On these days, the system will not be available. Normal user operation is expected to resume during the course of Wednesday, December 3rd.

All (LRZ and MCML) DGX systems had to be powered off. This message will be updated as new information becomes available.

The AI Systems (including the BayernKI and MCML system segments) will undergo a maintenance procedure between September 8th and 10th, 2025. On these days, the system will not be available to users. Normal user operation is expected to resume during the course of Wednesday, September 10th.

The LRZ AI Systems have to undergo a short maintenance early next week. For this, the system will be drained over the weekend, and start-up of new jobs will be delayed until after the maintenance. We expect the actual downtime not to exceed 10 minutes.

The AI Systems (including the BayernKI and MCML system segments) will undergo a maintenance procedure between May 19th and 21st, 2025. On these days, the system will not be available to users. Normal user operation is expected to resume during the course of Wednesday, May 21st.


Compute Cloud

Compute Cloud on LRZ Service Status
Announcements and incidents