High Performance Computing


 

Forgot your password? click here
Add a new user (SuperMUC-NG only)? click here
Add a new IP address (SuperMUC-NG only)? click here
How to write good LRZ Service Requests? click here

Important Announcement:
Two-factor authentication (2FA) becomes mandatory on all HPC systems starting September 18th! Please set up 2FA!
Other login methods will NOT be possible any more! For more info click here

System Status (see also: Access and Overview of HPC Systems)

GREEN = fully operational; YELLOW = operational with restrictions (see messages below); RED = not available



Supercomputer (SuperMUC-NG)

login nodes: skx.supermuc.lrz.de

archive nodes: skx-arch.supermuc.lrz.de

File systems: HOME, WORK, SCRATCH, DSS, DSA

Partitions/Queues:
micro, general, large: UP
fat, test: UP

Detailed node status

Details:

Submit an Incident Ticket for the SuperMUC-NG

Add new user? click here

Add new IP? click here

Questions about 2FA on SuperMUC-NG? click here
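For reference, interactive access to the login and archive nodes listed above is via SSH. This is a generic sketch, not an official template: the user ID xy12abc and the file name are placeholders, and with mandatory 2FA you will additionally be prompted for a one-time token.

```shell
# Log in to a SuperMUC-NG login node (replace xy12abc with your LRZ user ID)
ssh xy12abc@skx.supermuc.lrz.de

# Transfers to the tape archive go through the dedicated archive nodes
# (results.tar.gz is a placeholder file name)
scp results.tar.gz xy12abc@skx-arch.supermuc.lrz.de:
```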


Linux Cluster

CoolMUC-2
login nodes lxlogin(1,2,3,4).lrz.de: UP
serial partition serial: UP
parallel partitions cm2_(std,large): UP
cluster cm2_tiny: UP
interactive partition cm2_inter: UP
c2pap: UP

CoolMUC-3
login nodes lxlogin(8,9).lrz.de: UP
parallel partition mpp3_batch: UP
interactive partition mpp3_inter: UP

CoolMUC-4
login nodes lxlogin(1,2,3,4).lrz.de: UP
interactive partition cm4_inter_large_mem: MAINT

Others
teramem_inter: UP
kcs: UP
biohpc: UP
hpda: MAINT

File Systems
HOME: UP
SCRATCH (legacy): UP
SCRATCH_DSS: UP
DSS: UP
DSA: UP

Detailed node status
Detailed queue status


Details:

Submit an Incident Ticket for the Linux Cluster


DSS Storage systems

For the status overview of the Data Science Storage, please go to

https://doku.lrz.de/display/PUBLIC/Data+Science+Storage+Statuspage


Messages

see also: Aktuelle LRZ-Informationen / News from LRZ

Messages for all HPC Systems

Two-factor authentication will become mandatory on SuperMUC-NG and Linux Cluster as of September 18, 2023! Please refer to the official 2FA announcement!

Due to an observed security incident, the LRZ service of the "new" Remote Visualization 2021 (RVS, https://rv.lrz.de/) has been discontinued with immediate effect. Please see the corresponding LRZ notification.

A new software stack (spack/23.1.0) is available on CoolMUC-2 and SuperMUC-NG. Release Notes of the Spack/23.1.0 Software Stack

This software stack provides new versions of compilers, MPI libraries, and most other applications. There are also significant changes with respect to module suffixes (specifically for MPI and MKL modules) and module interactions (high-level packages now declare MPI and compiler prerequisites, so that the modules loaded in your terminal environment remain compatible). Please refer to the release notes for detailed changes.

This software stack is rolled out as a non-default on both machines. You have to explicitly swap/switch Spack modules to access it. The recommended way is to purge all loaded modules and then load spack/23.1.0:

$> module purge ; module load spack/23.1.0

Please be aware:

  • The "module purge" command unloads all previously loaded modules from your shell, including automatically loaded ones such as "intel", "intel-mpi", and "intel-mkl". This step is crucial to prevent errors caused by lingering modules.

  • Once version 23.1.0 or a later version of the Spack software stack becomes the default, we will no longer automatically load any modules (e.g., compilers, MPI, and MKL). This change gives users a clean environment to begin their work.
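The clean-environment workflow above can be sketched as a batch job script. This is a hypothetical example, not an official template: the job name, partition, walltime, and the executable my_app are placeholders, and the exact modules you need depend on your application.

```shell
#!/bin/bash
#SBATCH -J spack_test          # placeholder job name
#SBATCH --partition=micro      # one of the partitions listed above (placeholder choice)
#SBATCH --nodes=1
#SBATCH --time=00:10:00

# Start from a clean environment, as recommended above, then load the new stack
module purge
module load spack/23.1.0

# The new stack will not auto-load compilers/MPI/MKL, so load what your job needs
module load intel intel-mpi

# Launch the application (my_app is a placeholder executable)
mpiexec -n "$SLURM_NTASKS" ./my_app
```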

Please reach out to us with any suggestions or questions. Use the "Spack Software Stack" keyword when you open a ticket at https://servicedesk.lrz.de/en/ql/create/26 .


Messages for SuperMUC-NG

SuperMUC-NG: Maintenance finished. Back in operation.

Please see https://status.lrz.de/affected/h%C3%B6chstleistungsrechner/ for a scheduled maintenance starting on September 4.

Messages for Linux Clusters

See https://status.lrz.de/issues/linux-cluster/2023-09-25_maintenance/ for the current maintenance on the HPDA and CoolMUC-4 clusters.

After a failure in the LRZ cooling infrastructure (see https://status.lrz.de/issues/linux-cluster/2023-09-13-cooling-failure/ ) all Linux cluster systems are now back in normal operation.

Messages for Compute Cloud and other HPC Systems

The LRZ AI Systems (including the MCML system segment) underwent maintenance between July 24th and 25th, 2023. During this time, the systems were not available to users (until ~11:30 on 2023-07-25).