High Performance Computing

 

Forgot your password? click here
Add new user (only for SuperMUC-NG)? click here
Add new IP (only for SuperMUC-NG)? click here
How to write good LRZ Service Requests? click here

How to setup two-factor authentication (2FA) on HPC systems? click here

System Status (see also: Access and Overview of HPC Systems)

GREEN = fully operational
YELLOW = operational with restrictions (see messages below)
RED = not available (see messages below)



Supercomputer (SuperMUC-NG)

login nodes: skx.supermuc.lrz.de

archive nodes: skx-arch.supermuc.lrz.de

File Systems: HOME WORK SCRATCH DSS DSA

Partitions/Queues: MICRO GENERAL LARGE FAT TEST

Detailed node status

Details:

Submit an Incident Ticket for the SuperMUC-NG

Add new user? click here

Add new IP? click here

Questions about 2FA on SuperMUC-NG? click here


Linux Cluster 

CoolMUC-2 (see issues below)
login nodes lxlogin(1,2,3,4).lrz.de: UP
serial partition serial_std: UP
serial partition serial_long: UP
parallel partitions cm2_(std,large): UP
cluster cm2_tiny: UP
interactive partition cm2_inter: UP
c2pap: UP

 

CoolMUC-3
login nodes lxlogin(8,9).lrz.de: 2FA ISSUES
parallel partition mpp3_batch: UP
interactive partition mpp3_inter: UP

CoolMUC-4
login node lxlogin5.lrz.de: UP
interactive partition cm4_inter_large_mem: MOSTLY UP


others
teramem_inter: UP
kcs: MOSTLY UP
biohpc: UP
hpda: UP

 

File Systems
HOME: UP
SCRATCH (legacy): UP
SCRATCH_DSS: UP
DSS: UP
DSA: UP


Detailed node status
Detailed queue status



Details:

Submit an Incident Ticket for the Linux Cluster

 


DSS Storage systems

For the status overview of the Data Science Storage, please go to:

https://doku.lrz.de/display/PUBLIC/Data+Science+Storage+Statuspage


Messages

see also: Aktuelle LRZ-Informationen / News from LRZ

Messages for all HPC Systems

A new software stack (spack/23.1.0) is available on CoolMUC-2 and SuperMUC-NG. Release Notes of Spack/23.1.0 Software Stack

This software stack provides new versions of compilers, MPI libraries, and most other applications. There are also significant changes with respect to module suffixes (specifically for the MPI and MKL modules) and module interactions: high-level packages now declare their compiler and MPI prerequisites, so that the modules loaded in your terminal environment remain compatible. Please refer to the release notes for the detailed changes.

This software stack is rolled out as a non-default stack on both machines. You will have to explicitly swap/switch Spack modules to access it. The recommended way is to purge all loaded modules and then load spack/23.1.0:

$> module purge ; module load spack/23.1.0
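
To verify that only the new stack is loaded afterwards, you can list the active modules (a standard environment-modules/Lmod command):

$> module list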

Please be aware, 

  • Using the "module purge" command will unload all previously loaded modules from your terminal shell, including automatically loaded ones such as "intel", "intel-mpi", and "intel-mkl". This step is crucial to prevent errors caused by lingering modules.

  • In the future, when version 23.1.0 or later versions of the Spack software stack become the default, we will no longer automatically load any modules (e.g., compilers, MPI, and MKL). This change will provide users with a clean environment to begin their work (see the sketch below).
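
As a minimal sketch of that future workflow, assuming the module names stay as they are today (check "module avail" for the exact names on your system), you would load your toolchain explicitly:

$> module load spack/23.1.0                  # select the software stack
$> module load intel intel-mpi intel-mkl     # load compiler, MPI, and MKL yourself (names illustrative)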

Please reach out to us with any suggestions or questions. Use the "Spack Software Stack" keyword when you open a ticket at https://servicedesk.lrz.de/en/ql/create/26 .


Messages for SuperMUC-NG

StarCCM+ Vers. 2024.1.1 (alias 19.02.012) has been installed on SuperMUC-NG.

The new release of ANSYS 2024.R1 (CFX, Fluent, ANSYS Mechanical, LS-Dyna) has been installed on SuperMUC-NG Phase 1 and has been made the new default version.

Messages for Linux Clusters

StarCCM+ Vers. 2024.1.1 (alias 19.02.012) has been installed on the LRZ Linux Clusters. This version of the Siemens PLM software can only be used in the cm4_inter_large_mem queue. Due to the SLES12/SLES15 OS levels on CoolMUC-2/-3, this version of StarCCM+ is no longer compatible with these older Linux clusters, and this issue will not be fixed. The default version of the StarCCM+ software modules therefore remains 2023.3.1 on CoolMUC-2 and 2023.2.1 on CoolMUC-3.
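
For reference, a minimal interactive SLURM session targeting that queue could look like the following sketch (the resource values and the module name are illustrative assumptions; check "module avail" for the exact StarCCM+ module on your cluster):

$> salloc -p cm4_inter_large_mem -n 4 -t 02:00:00   # request 4 tasks on the cm4_inter_large_mem partition
$> module load starccm                              # module name is an assumption
$> starccm+ -batch -np 4 mysim.sim                  # run the solver in batch mode on 4 processes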

The new release of ANSYS 2024.R1 (CFX, Fluent, ANSYS Mechanical, LS-Dyna) has been installed on the Linux Clusters under SLES15 SP1/SP4. The software can no longer be provided on older Linux clusters like CoolMUC-3 / RVS due to conflicts with the installed operating systems. Currently, ANSYS 2024.R1 is known to run successfully on CoolMUC-2/-4 (cm4_inter_large_mem queue). On other Linux clusters, please use ANSYS 2023.R1.

The new release of Abaqus 2024 (Dassault Systèmes software, Simulia package) has been installed on the Linux Clusters under SLES15 SP4. The software can no longer be provided on older Linux clusters like CoolMUC-2/-3 due to conflicts with the installed operating systems. Currently, Abaqus 2024 is known to run successfully on CoolMUC-4 nodes only (cm4_inter_large_mem queue). On other Linux clusters, please continue to use Abaqus 2023.

Messages for Compute Cloud and other HPC Systems

Login to the LRZ AI Systems may currently fail. This is due to a disruption of the central load balancer; see https://status.lrz.de/ for updates.

As a follow-up to the recent maintenance, another short-term interruption of service was needed. The login nodes of the LRZ AI Systems were temporarily unavailable on Monday, March 18th, between 9:00 and 9:40 am. Jobs already running on the system were not affected.

The LRZ AI Systems maintenance has concluded. The system is back in operation.

The LRZ AI Systems maintenance has to be extended. We aim to return to operation during Thursday, March 14th and will update this announcement accordingly.

The LRZ AI Systems (including the MCML system segment) are undergoing a maintenance procedure between March 11th and 13th, 2024. On these days, the system is not available to users. Normal user operation is expected to resume during the course of March 13th.