High Performance Computing


SuperMUC-NG Status and Results Workshop
9–11 May (online)
Agenda & Slides: click here


System Status (see also: Access and Overview of HPC Systems)

GREEN = fully operational
YELLOW = operational with restrictions (see messages below)
RED = not available



Höchstleistungsrechner (SuperMUC-NG)

System:

  login nodes: skx.supermuc.lrz.de           UP
  archive nodes: skx-arch.supermuc.lrz.de    UP

File Systems:

  HOME      UP
  WORK      UP
  SCRATCH   UP
  DSS       UP
  DSA       UP

Partitions/Queues:

  micro, general, large   UP
  fat, test               UP

Globus Online File Transfer:   UP

Detailed node status

Details:

Submit an Incident Ticket for the SuperMUC-NG



Linux Cluster

CoolMUC-2 (login: lxlogin(1,2,3,4).lrz.de)   UP

  serial partition: serial                   UP
  parallel partitions: cm2_(std,large)       UP
  cluster: cm2_tiny                          UP
  interactive partition: cm2_inter           UP
  c2pap                                      UP

CoolMUC-3 (login: lxlogin(8,9).lrz.de)       UP

  parallel partition: mpp3_batch             UP
  interactive partition: mpp3_inter          UP

CoolMUC-4 (login: lxlogin(1,2,3,4).lrz.de)   UP

  interactive partition: cm4_inter_large_mem   UP

others

  teramem_inter   UP
  kcs             UP
  biohpc          UP

File Systems:

  HOME      UP
  SCRATCH   UP
  DSS       UP
  DSA       UP

Detailed node status
Detailed queue status


Details:

Submit an Incident Ticket for the Linux Cluster


DSS Storage systems

For the status overview of the Data Science Storage, please go to

https://doku.lrz.de/display/PUBLIC/Data+Science+Storage+Statuspage


Messages

see also: Aktuelle LRZ-Informationen / News from LRZ

Messages for all HPC Systems

A new software stack (spack/23.1.0) is available on CoolMUC-2 and SuperMUC-NG. See: Release Notes of Spack/23.1.0 Software Stack

This software stack provides new versions of compilers, MPI libraries, and most other applications. There are also significant changes with respect to module suffixes (specifically for the MPI and MKL modules) and module interactions: high-level packages now declare their MPI and compiler modules as prerequisites, so that the modules loaded in your terminal environment remain mutually compatible. Please refer to the release notes for the detailed changes.

This software stack is rolled out as non-default on both machines, so you have to explicitly swap/switch the spack module to access it. The best way is to purge all loaded modules and then load spack/23.1.0:

$> module purge ; module load spack/23.1.0
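
After switching, the effect of the new prerequisite handling can be checked directly. A minimal interactive sketch; the "fftw" module name is only an illustrative assumption, use "module avail" to see what spack/23.1.0 actually provides:

$> module avail                 # browse the software provided by the new stack
$> module load fftw             # illustrative high-level package, not a confirmed module name
$> module list                  # the matching compiler and MPI modules are loaded as prerequisites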

Please be aware:

  • Using the "module purge" command will unload all previously loaded modules from your terminal shell, including automatically loaded ones such as "intel", "intel-mpi", and "intel-mkl". This step is crucial to prevent errors caused by lingering modules.

  • In the future, when version 23.1.0 or a later version of the Spack software stack becomes the default, we will no longer automatically load any modules (e.g., compilers, MPI, and MKL). This change will give users a clean environment to begin their work.
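
The same switch applies inside batch jobs. Below is a minimal job-script sketch for SuperMUC-NG; the partition name is taken from the status table above, while the project ID and executable are placeholders to be replaced with your own:

#!/bin/bash
#SBATCH --job-name=spack_test
#SBATCH --partition=micro          # SuperMUC-NG partition (see status table above)
#SBATCH --nodes=1
#SBATCH --time=00:10:00
#SBATCH --account=<project_id>     # placeholder: your LRZ project ID

module purge                       # start from a clean environment (see notes above)
module load spack/23.1.0           # switch to the new software stack

srun ./my_application              # placeholder: your executable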

Please reach out to us with any suggestions or questions. Use the "Spack Software Stack" keyword when you open a ticket at https://servicedesk.lrz.de/en/ql/create/26 .

Due to an incident, the four nodes of the "new" Remote Visualization service (RVS, https://rv.lrz.de/) have been deactivated and will not be available until further notice.

Messages for SuperMUC-NG
Currently there is nothing to report.
Messages for Linux Clusters

Scheduled Maintenance

Due to necessary maintenance of the cooling infrastructure, operation of CoolMUC-3 and several housing systems will have to be interrupted. The batch nodes of these systems will be taken offline on June 12 at 16:00.

Please refer to the LRZ Service Status for details.

Messages for Compute Cloud and other HPC Systems

The LRZ AI Systems (including the MCML system segment) will undergo maintenance between June 5th and 6th, 2023. On these days, the systems will not be available to users. Normal user operation is expected to resume on June 7th.