
<< Back to the documentation start page

High Performance Computing

System Status (see also: Access and Overview of HPC Systems)

  • Green = fully operational
  • Yellow = operational but experiencing problems (see messages below)
  • Red = not available


SuperMUC Phase 2 (final shutdown; all data will be deleted): END OF LIFE

  • login hw.supermuc.lrz.de: UP
  • File Systems: END OF LIFE
  • Queues (micro, general, test, big): UP
  • Detailed node status:

SuperMUC-NG: UP

  • login skx.supermuc.lrz.de: UP
  • File Systems:
      HOME: UP
      WORK: UP
      SCRATCH: UP
      DSS: UP
  • Partitions/Queues (micro, fat, general, large): UP
  • Globus Online File Transfer: UP
  • Detailed node status:
  • Submit an Incident Ticket for SuperMUC-NG



Linux Cluster

CoolMUC-2

  • login lxlogin(5, 6, 7).lrz.de: UP
  • SCRATCH: UP
  • Partitions/Queues (mpp2_batch, mpp2_inter, serial): UP
CoolMUC-3

  • login lxlogin8.lrz.de: UP
  • Partitions/Queues:
      mpp3_batch: partially up
      mpp3_inter: UP
Other Cluster Systems

  • login lxlogin10.lrz.de: UP
  • Partitions/Queues (ivymuc, teramem_inter, kcs): UP
File Systems

  • HOME: UP
  • DSS: UP
  • SCRATCH (mpp2): UP
  • SCRATCH (mpp3): UP

Detailed node status:

Details:

Submit an Incident Ticket for the Linux Cluster



Compute Cloud and other HPC Systems

  • Compute Cloud (https://cc.lrz.de): UP
  • GPU Cloud (https://datalab.srv.lrz.de): UP
  • DGX-1: UP
  • DGX-1v: UP
  • RStudio Server (https://www.rstudio.lrz.de): UP

Details:

Submit an Incident Ticket for the Compute Cloud



Messages for SuperMUC

Scheduled Maintenance
System is available again.

 Change of Access Policy for the tape archive

Due to changed technical specifications for the IBM Spectrum Protect software, we have to change the access policy for the tape archive on SuperMUC-NG.
This also affects data from SuperMUC which has already been put into the tape archive.

  • Permissions to access the data are now granted to all users of a project, i.e. all users in a project group can retrieve data from other users in this project group.
  • The previous policy was that only the user who wrote the data into the archive could access it.
  • If your project is "pr12ab", you can see the members of this group with
    getent group pr12ab-d
  • You have to add the project in the dsmc commands, e.g.
    dsmc q ar "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab
  • Please note the difference between the project ("pr12ab") and the permission group for the data ("pr12ab-d").

See also: Backup and Archive on SuperMUC-NG
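
A minimal sketch of the new access pattern, using the illustrative project pr12ab and user us12ab5 from the bullets above (paths, file names, and the server stanza are placeholders; adjust them to your own project):

    # list the members of the permission group for the project data
    getent group pr12ab-d
    # query another project member's archived files, passing the project via -se
    dsmc query archive "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab
    # retrieve one of the listed files into your own WORK directory
    dsmc retrieve "/gpfs/work/pr12ab/us12ab5/results.tar" $WORK/ -se=pr12ab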

 Deletion of data on SuperMUC Phase 2.

Required on your part: Data Migration from SuperMUC to SuperMUC-NG before this date!

 skx-arch.supermuc.de (the archiving node) will not be available before January 2020.



Messages for Linux Cluster

 Maintenance
System is available again.

HOME directory path will change

You will find all HOME data in the new DSS HOME area; data migration will be performed by LRZ (unless you are specifically notified that you need to perform the HOME data migration yourself). For emergency recoveries, the legacy NAS area (pointed to by the HOME_LEGACY variable) will remain available in read-only mode until the end of the year.

The following action is required on your part: make the necessary adjustments to (job) scripts and configuration data to account for the changes in path names. LRZ strongly recommends using relative path names, because this minimizes the required work.

Examples: 
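
A minimal sketch, assuming a job script that still hard-codes the old NAS location (all paths below are illustrative; the actual old and new HOME paths differ per account):

    # before: absolute path into the legacy NAS HOME area
    cp /nas/home/pr12ab/us12ab5/input.dat .
    # after: relative reference via the HOME variable, which points to the current HOME area
    cp $HOME/input.dat .
    # emergency recovery of a file from the read-only legacy area
    cp $HOME_LEGACY/input.dat $HOME/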

 End of service for NAS systems

NAS paths will be taken offline at the beginning of January 2020. Please contact the Service Desk for outstanding data migration issues.

NAS PROJECT path is mounted read-only


DSS PROJECT now available on HPC systems:

The following action is required on your part:

Migrate data from the legacy NAS area (pointed to by the PROJECT_LEGACY variable) to the new DSS area. LRZ strongly advises removing unneeded data sets and/or archiving data sets to tape.

Step-by-step procedure for migration (a command sketch follows the list):

  1. On any cluster login node, issue the command
    dssusrinfo all
    This will list paths to accessible containers, as well as quota information etc.
  2. Edit your shell profile and set the PROJECT and/or WORK variable to a
    suitable path value based on the above output, typically one of the DSS
    paths with your account name appended to it.
  3. Use the cp or rsync or tar command to migrate your data from
    PROJECT_LEGACY to the new storage area.
  4. If your scripts use absolute path names instead of the PROJECT or
    WORK variable, they need appropriate updates.
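
A minimal sketch of these four steps on a cluster login node (the DSS container path is illustrative; use the paths printed by dssusrinfo for your own account):

    # 1. list accessible DSS containers and quota information
    dssusrinfo all
    # 2. set PROJECT to a suitable DSS path in your shell profile (placeholder path)
    echo 'export PROJECT=/dss/dssfs01/pr12ab-dss-0000/us12ab5' >> ~/.bashrc
    source ~/.bashrc
    # 3. copy data from the legacy NAS area to the new DSS area
    rsync -av "$PROJECT_LEGACY/" "$PROJECT/"
    # 4. finally, update any scripts that still use absolute legacy path names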

https://www.lrz.de/aktuell/ali00788.html

User e-Mail notification of DSS PROJECT link

The following actions are required on your part:

  1. Confirm the e-Mail invitation to validate your access,
  2. After the Linux Cluster maintenance (see below), store path information in an environment variable on the Cluster (e.g. by setting the PROJECT variable in ~/.bashrc).

Once this is done, the migration of data from NAS PROJECT to DSS PROJECT can start.

Please read the change description on how to handle the significant changes to the Linux Cluster configuration performed at the end of September 2019.



Messages for Cloud and other HPC Systems

More Links