
<< Back to the documentation start page

High Performance Computing

Forgot your password? click here
Add new user (only for SuperMUC-NG)? click here
Add new IP (only for SuperMUC-NG)? click here
How to write good LRZ Service Requests? click here


System Status (see also: Access and Overview of HPC Systems)

Green = fully operational
Yellow = operational, but with some problems or restrictions (see messages below)
Red = not available



Höchstleistungsrechner (SuperMUC-NG)

System: UP (green)

Login nodes (skx.supermuc.lrz.de): UP (green)

Archive nodes (skx-arch.supermuc.lrz.de): UP (yellow, restrictions; see messages below)

File systems:
HOME: UP (green)
WORK: UP (yellow, restrictions; see messages below)
SCRATCH: UP (yellow, restrictions; see messages below)
DSS: UP (green)
DSA: UP (green)

Partitions/Queues (micro, general, large, fat, test): UP (green)

Globus Online File Transfer: UP (green)

Detailed node status

Details:

Submit an Incident Ticket for SuperMUC-NG


Linux Cluster

CoolMUC-2

Login nodes (lxlogin(1,2,3,4).lrz.de): UP (green)

Serial partition (serial): UP (green)

Parallel partitions (cm2_std, cm2_large): UP (green)

Cluster cm2_tiny: UP (green)

Interactive partition (cm2_inter): UP (green)

c2pap: UP (green)

CoolMUC-3

Login nodes (lxlogin(8,9).lrz.de): UP (green)

Parallel partition (mpp3_batch): UP (green)

Interactive partition (mpp3_inter): UP (green)

Other Cluster Systems

Login node (lxlogin10.lrz.de): UP (green)

teramem_inter: UP (green)

kcs: UP (green)

File systems:
HOME: UP (green)
SCRATCH: UP (green)
DSS: UP (green)
DSA: UP (green)

Detailed node status: click here

Detailed queue status

Details:

Submit an Incident Ticket for the Linux Cluster
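For a quick look at the queue situation from a login node, a minimal sketch using standard SLURM commands (assuming sinfo and squeue are available on the login nodes; partition names are those listed above):

    # Overview of all partitions visible from this login node:
    sinfo
    # State of a specific partition, e.g. the CoolMUC-2 standard partition:
    sinfo -p cm2_std
    # List your own pending and running jobs:
    squeue -u "$USER"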



Compute Cloud and other HPC Systems

Compute Cloud (https://cc.lrz.de): UP (green)

LRZ AI Systems (detailed status and free slots: https://datalab.srv.lrz.de): UP (green)

RStudio Server (https://www.rstudio.lrz.de): End of Life (red)

Details:

Documentation
RStudio Server (LRZ Service)
Consulting for HPC and BigData Services at LRZ

Submit an Incident Ticket for the Compute Cloud

Submit an Incident Ticket for RStudio Server



Messages for SuperMUC-NG

See https://www.lrz.de/aktuell/ali00823.html for the scheduled maintenance on January 29.

 See https://www.lrz.de/aktuell/ali00821.html for news on access to the LRZ license server for the ANSYS software (licansys.lrz.de) from the SNG compute nodes in SLURM jobs.

Archive nodes update
The hardware maintenance has been rescheduled. Please expect a short downtime of the archive and backup servers (IBM Spectrum Protect) from 09:00 to 11:00 on Monday, 24.01.2022.

The new ANSYS Software Release, Version 2022.R1, has been installed and provided on SuperMUC-NG. For details and for some minor pending issues with this new software release, please refer to the corresponding announcement on https://www.lrz.de/aktuell/.

See https://www.lrz.de/aktuell/ali00820 for news on the development environment.

23rd Gauss Call for Large-Scale Projects

Users with very large computing time requirements must submit their
proposals via the Gauss Calls. The current 23rd Gauss Call is open from

January 13th to February 10th, 2020, 17:00 CET (strict deadline).

The call covers the period from 1 May 2020 to 30 April 2021.

  • Projects that need more than 45 million core-hours per year on
    SuperMUC-NG must apply through this call.
  • LRZ provides a total of 600 million core-hours for this call.

Further information:

  • GaussCall23.pdf
  • FactSheet-GCS-SUPERMUC-NG.pdf

    The Energy Aware Runtime (EAR) has been reactivated. Please be aware that this may have an impact on job processing times.
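For illustration, a minimal SLURM job script sketch for the partitions listed in the status box above. The project name and executable are placeholders, and the --ear flag is an assumption based on the EAR SLURM plugin; check the SuperMUC-NG documentation before relying on it:

    #!/bin/bash
    #SBATCH --job-name=demo
    #SBATCH --partition=micro     # one of: micro, general, large, fat, test
    #SBATCH --nodes=1
    #SBATCH --time=00:30:00
    #SBATCH --account=pr12ab      # placeholder project name
    #SBATCH --ear=off             # assumed EAR plugin flag; omit to leave EAR active
    srun ./my_application         # placeholder executable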

Please note that WORK/SCRATCH on SuperMUC-NG may currently exhibit performance degradation under heavy I/O load. Take this into account when planning your job runtimes.

The new hpcreport tool is now available to check job performance and accounting on SuperMUC-NG. Please check out

https://doku.lrz.de/display/PUBLIC/GCS+large+scale+project+on+SuperMUC-NG

Change of Access Policy for the tape archive

Due to changed technical specifications for the IBM Spectrum Protect software, we have to change the access policy for the tape archive on SuperMUC-NG.
This also affects data from SuperMUC which have already been put into the tape archive.

  • Permissions to access the data will now be granted to all users of a project, i.e. all users in a project group can retrieve data from other users in this project group.
  • The previous policy was that only the user who wrote the data into the archive could access it.
  • If your project is "pr12ab", you can see the members of this group with:
    getent group pr12ab-d
  • You have to add the project to the dsmc commands, e.g.:
    dsmc q ar "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab
  • Please note the difference between the project ("pr12ab") and the permission group for data ("pr12ab-d"). A combined sketch is shown below.
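A minimal sketch putting these commands together (project "pr12ab" and user "us12ab5" are the placeholder names from the example above; dsmc is the IBM Spectrum Protect client):

    # List the members of the data permission group for project pr12ab:
    getent group pr12ab-d
    # Query the archive for files under the project's WORK area,
    # addressing the project's archive server with -se=<project>:
    dsmc query archive "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab
    # Retrieve an archived file back to its original location:
    dsmc retrieve "/gpfs/work/pr12ab/us12ab5/results.tar" -se=pr12ab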

See also: Backup and Archive on SuperMUC-NG

SuperMUC Phase 2 has finally been shut down.

skx-arch.supermuc.lrz.de (node for archiving) will not be available before January 2020.

Messages for Linux Cluster

See https://www.lrz.de/aktuell/ali00923.html.

SCRATCH is now fully online again. While we expect older data that were temporarily inaccessible to be fully available again, data created in the last few days before the problems started might be corrupt and need to be restored from the tape archive (if a copy exists) or recreated.
There will be a reboot of one server tomorrow, which however should not impact overall system operation.

The new ANSYS Software Release, Version 2022.R1, has been installed and provided on the LRZ Linux Cluster systems (CM2, CM3 and RVS systems). For details and for some minor pending issues with this new software release, please refer to the corresponding announcement on https://www.lrz.de/aktuell/.

See https://www.lrz.de/aktuell/ali00822.html for the release notes on the installation of the ANSYS software release 2020.R1 on all LRZ Linux Clusters, SuperMUC-NG and RVS.

The new release of Abaqus, Version 2022 (Dassault Systèmes software), has been installed on both Linux Clusters CoolMUC-2 and CoolMUC-3 as well as on the RVS systems. The Abaqus documentation has been updated.

The new release of Simcenter StarCCM+, Version 2021.3.1 (Siemens PLM software), has been installed and provided on the LRZ HPC systems (CM2, CM3, SNG and RVS systems). For details please see the corresponding announcement on https://www.lrz.de/aktuell/.

CoolMUC-3 scheduled maintenance (concluded).
This also impacts housing customers with systems integrated into CoolMUC-3.

Details are published separately.

HOME directory path has changed

You will find all HOME data in the new DSS HOME area. Data migration was performed by LRZ (unless you were specifically notified that you need to perform the HOME data migration yourself). For emergency recoveries, the legacy NAS area (pointed to by the HOME_LEGACY environment variable) will remain available in read-only mode until the end of 2019.
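As an illustration, recovering data from the read-only legacy area could look like the following sketch (assuming HOME_LEGACY is set in your login environment; directory and file names are placeholders):

    # Check where the legacy NAS area is mounted:
    echo "$HOME_LEGACY"
    # Copy a single file back into the new DSS HOME area:
    cp "$HOME_LEGACY/myproject/input.dat" "$HOME/myproject/"
    # Or mirror a whole directory, preserving timestamps and permissions:
    rsync -a "$HOME_LEGACY/myproject/" "$HOME/myproject/"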

End of service for NAS systems

The NAS paths (former HOME and PROJECT areas) were taken offline at the beginning of January 2020. Please contact the Service Desk if you have outstanding data migration issues.

Messages for Cloud and other HPC Systems

Limited availability of RStudio Server: the limitations affecting RStudio Server have been resolved.

There are 4 new Remote Visualization (RVS_2021) nodes available in a friendly user testing period. The nodes are operated under Ubuntu with NoMachine. For more details please refer to the documentation.




The LRZ AI and MCML Systems are back in operation, as the maintenance planned from January 7th to January 11th has been completed.

The RStudio Server service at LRZ has been decommissioned. For a replacement offering, please see Interactive Web Servers on the LRZ AI Systems and, more generally, LRZ AI Systems.



More Links