
<< Back to the documentation start page

High Performance Computing

 

System Status (see also: Access and Overview of HPC Systems)

GREEN  = fully operational
YELLOW = operational but experiencing problems (see messages below)
RED    = not available


Linux Cluster

CoolMUC-2
  lxlogin(5, 6, 7).lrz.de   UP
  mpp2_batch                UP
  mpp2_inter                UP
  serial                    UP

CoolMUC-3
  lxlogin8.lrz.de           UP
  mpp3_batch                PARTIALLY UP
  mpp3_inter                UP

Other Cluster Systems
  lxlogin10.lrz.de          UP
  ivymuc                    UP
  teramem_inter             UP
  kcs                       UP

File Systems
  HOME                      UP
  DSS                       UP
  SCRATCH (mpp2)            UP
  SCRATCH (mpp3)            UP

Detailed node status: 


Submit an Incident Ticket for the Linux Cluster

Messages for SuperMUC

Next Gauss Call for Large-Scale Projects

We would also like to draw your attention to the next Gauss Call, which provides compute time for large-scale projects requiring more than 45 million core-hours per year.

The next call will cover the period 1 May 2020 to 30 April 2021. Projects that need more than 45 million core-hours on SuperMUC-NG must apply through this call. LRZ provides a total of 600 million core-hours for this call.

The call will open: 
The deadline will be: 17:00 CET (strict!).

Please make your arrangements to participate in this call. Details will be announced separately.

 Deletion of data on SuperMUC Phase 2.

Required on your part: Data Migration from SuperMUC to SuperMUC-NG before this date!
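
A hedged sketch only, not an official LRZ procedure: one way to migrate data is rsync over ssh from a SuperMUC login node to your SuperMUC-NG work directory. The user name, host name and target path below are placeholders, not official values; please follow the migration instructions announced by LRZ for the recommended method.

  # Sketch: copy a work directory to SuperMUC-NG via rsync over ssh
  # <user>, <ng-login-node> and the target path are placeholders, not official values
  rsync -av --progress $WORK/my_results/ <user>@<ng-login-node>:/path/on/ng/work/my_results/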

Change of Access Policy for the tape archive

Due to changed technical specifications for the IBM Spectrum Protect software, we have to change the access policy for the tape archive on SuperMUC-NG.
This also affects data from SuperMUC that has already been put into the tape archive.

  • Permissions to access the data will now be granted to all users of a project, i.e. all users in a project group can retrieve data from other users in this project group.
  • The previous policy was that only the user who wrote the data into the archive could access it.
  • If your project is 'pr12ab', you can see the members of its data permission group with
    getent group pr12ab-d
  • You have to specify the project in the dsmc commands, e.g. (see also the sketch after this list)
    dsmc q ar "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab
  • Please note the difference between the project ("pr12ab") and the permission group for data ("pr12ab-d").
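
For illustration, a hedged sketch of how a project member might list and retrieve archived data under the new policy. User names, file names and the destination path are example values; the options follow the standard IBM Spectrum Protect client syntax (dsmc query archive / dsmc retrieve).

  # List archived files under an example project path, using the project's server stanza
  dsmc q ar "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab
  # Retrieve an archived file written by another member of the same project group
  # into your own work directory (all names are example values)
  dsmc retrieve "/gpfs/work/pr12ab/us12ab5/results.tar" "/gpfs/work/pr12ab/di12xyz/" -se=pr12ab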

See also: Backup and Archive on SuperMUC-NG

skx-arch.supermuc.de (node for archiving) will not be available before January 2020.

Messages for Linux Cluster

CoolMUC-3 scheduled maintenance.
This also impacts housing customers with systems integrated into CoolMUC-3.

Details are published separately.

HOME directory path has changed

You will find all HOME data in the new DSS HOME area. Data migration was performed by LRZ (unless you have been specifically notified that you need to perform HOME data migration yourself). For emergency recoveries, the legacy NAS area (pointed to by the HOME_LEGACY variable) will remain available in read-only mode until the end of 2019.
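
A non-authoritative sketch, assuming the HOME_LEGACY variable is set in your environment as described above: a single directory could be restored from the read-only legacy area like this (the directory name is an example).

  # Copy an example directory from the read-only legacy NAS HOME to the new DSS HOME
  cp -a "$HOME_LEGACY/important_project" "$HOME/"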

 End of service for NAS systems

NAS paths (former HOME and PROJECT areas) will be taken offline at the beginning of January 2020. Please contact the Service Desk if you have outstanding data migration issues.

Messages for Cloud and other HPC Systems

Limited availability of RStudio Server
The limitations affecting RStudio Server have been resolved.

