
<< Back to the documentation start page

High Performance Computing

 

System Status (see also: Access and Overview of HPC Systems)

GREEN = fully operational
YELLOW = operational but experiencing problems (see messages below)
RED = not available


SuperMUC Phase 2

(only available until end of 2019)

END OF LIFE

login: hw.supermuc.lrz.de

UP

File Systems:
HOME: END OF LIFE
WORK: END OF LIFE
SCRATCH: END OF LIFE

Important: Please migrate data to SuperMUC-NG as soon as possible!

Queues: micro, general, test, big

UP

Detailed node status: 

UP

SuperMUC-NG

login: skx.supermuc.lrz.de

UP

File Systems:
HOME: UP
WORK: UP
SCRATCH: UP
DSS: UP

Partitions/Queues: micro, fat, general, large


UP

Globus Online File Transfer: 

UP

Detailed node status: 

Details:


Submit an Incident Ticket for SuperMUC-NG 


Linux Cluster

 

login: lxlogin(5, 6, 7).lrz.de

UP

lxlogin(8, 10).lrz.de

UP



Partitions/Queues:

mpp2_batch: UP
mpp2_inter: UP
serial: UP
mpp3_batch: W/O SCRATCH
kcs: W/O SCRATCH
mpp3_inter: W/O SCRATCH
teramem_inter: UP
ivymuc: UP



Detailed node status: 

Details:

 


Submit an Incident Ticket for the Linux Cluster


Messages of the Day

see also: Aktuelle LRZ-Informationen / News from LRZ

Messages for SuperMUC

 Scheduled Maintenance

Important: Please migrate your data to SuperMUC-NG as soon as possible!
skx-arch.supermuc.de will not be available before early January 2020.
Messages for Linux Cluster

 SCRATCH/GPFS on CoolMUC-2

Today (from morning until early afternoon), the SCRATCH file system was unavailable. Jobs running within that time frame may have crashed. The SCRATCH file system has since been made available again.

Short-lived outages of SCRATCH have also been observed today on CoolMUC-2 and hosted cluster systems.

 SCRATCH unavailable on CoolMUC-3

To avoid file system crashes, we have decided to unmount SCRATCH from all systems associated with CoolMUC-3 for now. We expect to revert this measure at the next scheduled maintenance.

 End of service for NAS systems

NAS paths will be taken offline at the beginning of January 2020. Please contact the Service Desk for outstanding data migration issues.

 Maintenance on November 27, 2019, at 8:00 am: HOME directory path will change

https://www.lrz.de/aktuell/ali00807.html

You will find all HOME data in the new DSS HOME area; data migration will be performed by LRZ (unless you are specifically notified that you need to perform the HOME data migration yourself). For emergency recoveries, the legacy NAS area (pointed to by the HOME_LEGACY variable) will remain available in read-only mode until the end of the year.

The following action is required on your part: make the necessary adjustments to (job) scripts and configuration data to account for the changes in path names. LRZ strongly recommends using relative path names, because this minimizes the required work.

Examples: 
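A minimal sketch of such an adjustment (the path names below are placeholders, not actual LRZ mount points): a job script that hard-codes an absolute path into the legacy NAS area is changed to reference $HOME instead, so it keeps working after the path change.

    # Before: absolute path into the legacy NAS HOME area (placeholder path)
    INPUT=/nas/users/ab12cde/mydata/input.dat

    # After: relative to $HOME, which now points at the DSS HOME area
    INPUT=$HOME/mydata/input.dat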

NAS PROJECT path is mounted read-only

DSS PROJECT is now available on the HPC systems:

The following action is required on your part:

Migrate data from the legacy NAS area (pointed to by the PROJECT_LEGACY variable) to the new DSS area. LRZ strongly advises deleting unneeded data sets and/or archiving data sets to tape.

Step-by-step procedure for migration (a consolidated sketch follows the list):

  1. On any cluster login node, issue the command
    dssusrinfo all
    This will list paths to accessible containers, as well as quota information etc.
  2. Edit your shell profile and set the PROJECT and/or WORK variable to a
    suitable path value based on the above output, typically one of the DSS
    paths with your account name appended to it.
  3. Use cp, rsync, or tar to migrate your data from PROJECT_LEGACY
    to the new storage area.
  4. If your scripts use absolute path names instead of the PROJECT or
    WORK variable, they need appropriate updates.
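The steps above can be condensed into the following sketch, assuming a bash login shell; the DSS container path shown is a placeholder, so substitute the path that dssusrinfo reports for your account.

    # 1. List accessible DSS containers and quota information
    dssusrinfo all

    # 2. In your shell profile (e.g. ~/.bashrc), point PROJECT at the
    #    DSS path reported above (placeholder path shown here)
    export PROJECT=/dss/dssfs01/pr12ab/pr12ab-dss-0000/ab12cde

    # 3. Migrate the data; rsync can safely be re-run if interrupted
    rsync -av "$PROJECT_LEGACY/" "$PROJECT/"

    # 4. Find scripts that still hard-code the legacy absolute path
    grep -rl --include='*.sh' "$PROJECT_LEGACY" ~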

https://www.lrz.de/aktuell/ali00788.html

User e-Mail notification of DSS PROJECT link

The following actions are required on your part:

  1. Confirm the e-Mail invitation to validate your access,
  2. After the Linux Cluster maintenance (see below), store the path information in an environment variable on the cluster (e.g. by setting the PROJECT variable in ~/.bashrc; a sketch follows below).
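For item 2, a minimal sketch of the ~/.bashrc entry (the DSS path is a placeholder; use the path communicated in the invitation e-Mail or reported by dssusrinfo):

    # ~/.bashrc: record the DSS PROJECT location once access is validated
    export PROJECT=/dss/dssfs02/pr34xy/pr34xy-dss-0001/ab12cde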

Once this is done, migrating data from NAS PROJECT to DSS PROJECT can start.

Please read the change description on how to handle the significant changes to the Linux Cluster configuration performed at the end of September 2019.

Messages for Cloud and other HPC systems

The OpenNebula Compute Cloud has been decommissioned.

                                        