
<< Back to the documentation start page

High Performance Computing

 

Forgot your password? click here
Add new user (only for SuperMUC-NG)? click here
Add new IP (only for SuperMUC-NG)? click here
How to write good LRZ Service Requests? click here


System Status (see also: Access and Overview of HPC Systems)

GREEN = fully operational, YELLOW = operational with restrictions (see messages below), RED = not available



Supercomputer (SuperMUC-NG)

System: UP
login nodes skx.supermuc.lrz.de: UP
archive nodes skx-arch.supermuc.lrz.de: UP

File Systems
    HOME: UP
    WORK: UP
    SCRATCH: UP
    DSS: UP
    DSA: UP

Partitions/Queues (see the sample job script below)
    micro, general, large: UP
    fat, test: UP

Globus Online File Transfer: UP (see the transfer example below)

Detailed node status
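The partitions listed above are addressed through the SLURM scheduler. As a rough orientation, a minimal job script might look like the following sketch; the account name, job geometry, and application name are placeholders, and the authoritative settings are given in the SuperMUC-NG job processing documentation.

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --partition=micro          # one of: micro, general, large, fat, test
    #SBATCH --nodes=1                  # placeholder job geometry
    #SBATCH --ntasks-per-node=48       # placeholder; adjust to the node type
    #SBATCH --time=00:30:00
    #SBATCH --account=pr00xy           # placeholder project account
    #SBATCH --export=NONE
    module load slurm_setup            # LRZ setup module (see documentation)
    mpiexec -n $SLURM_NTASKS ./my_application   # placeholder executable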


Details:

Submit an Incident Ticket for SuperMUC-NG

Add new user? click here

Add new IP? click here
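For the Globus Online File Transfer service listed in the status table above, transfers can also be initiated from the command line where the Globus CLI is installed. A minimal sketch, assuming a completed globus login; the endpoint UUIDs and paths are placeholders, not real LRZ endpoint values:

    globus login                                   # authenticate once
    SRC=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee       # placeholder source endpoint UUID
    DST=11111111-2222-3333-4444-555555555555       # placeholder destination endpoint UUID
    globus transfer "$SRC:/path/to/data" "$DST:/path/to/target" \
        --recursive --label "example transfer"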


Linux Cluster

CoolMUC-2
    lxlogin(1,2,3,4).lrz.de: UP
    serial partition serial: UP
    parallel partitions cm2_(std,large): UP
    cluster cm2_tiny: UP
    interactive partition cm2_inter: UP (see the interactive job example below)
    c2pap: UP

CoolMUC-3
    lxlogin(8,9).lrz.de: UP
    parallel partition mpp3_batch: UP
    interactive partition mpp3_inter: UP

teramem, kcs
    teramem_inter: UP
    kcs: UP

File Systems
    HOME: UP
    SCRATCH: UP
    DSS: UP
    DSA: UP

Detailed node status
Detailed queue status
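Like the batch partitions, the interactive partitions above (cm2_inter, mpp3_inter) are reached through SLURM. A minimal sketch of an interactive run; the resource values are placeholders, and the actual limits are documented on the Linux Cluster pages:

    # run from a login node, e.g. lxlogin1.lrz.de for CoolMUC-2
    salloc --partition=cm2_inter --nodes=1 --time=00:30:00   # request an allocation
    srun --ntasks=4 ./my_application                         # placeholder executable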


Details:

Submit an Incident Ticket for the Linux Cluster

Messages for SuperMUC-NG

The Energy Aware Runtime has been reactivated. Please be aware that this may have an impact on job processing times.

Please note that WORK/SCRATCH on SuperMUC-NG may currently exhibit performance degradation under heavy I/O load. Take this into account when planning your job runtimes.

The new hpcreport tool is now available for checking job performance and accounting on SuperMUC-NG. For details, see:

https://doku.lrz.de/display/PUBLIC/HPC+Report

https://www.lrz.de/aktuell/ali00923.html

Messages for Linux Clusters

SCRATCH is now fully online again. While we expect older data that were temporarily inaccessible to be fully available again, data created in the last few days before the problems started may be corrupt and need to be restored from a tape archive (if one exists) or recreated.
One server will be rebooted tomorrow; however, this should not impact overall system operation.

The new release of Abaqus, Version 2022 (Dassault Systèmes software), has been installed on both Linux Clusters, CoolMUC-2 and CoolMUC-3, as well as on the RVS systems. The Abaqus documentation has been updated.

The new release of SimCenter StarCCM+, Version 2021.3.1 (Siemens PLM Software), has been installed and provided on the LRZ HPC systems (CM2, CM3, SNG, and RVS systems). For details, please see the corresponding announcement:
https://www.lrz.de/aktuell/ali00927.html

There are four new Remote Visualization (RVS_2021) nodes available during a friendly-user testing period. The nodes run Ubuntu and NoMachine. For more details, please refer to the documentation.

Messages for Cloud and other HPC Systems

The LRZ AI and MCML Systems are back in operation now that the maintenance planned from January 7th to January 11th has been completed.

The RStudio Server service at LRZ has been decommissioned. For a replacement offering, please see Interactive Web Servers on the LRZ AI Systems and, more generally, LRZ AI Systems.

RStudio Server access is currently limited, due to a Linux Cluster failure. See https://www.lrz.de/aktuell/ali00922.html for details.
