High Performance Computing

System Status (see also: Access and Overview of HPC Systems)

GREEN = fully operational | YELLOW = operational with restrictions (see messages below) | RED = not available
| SuperMUC-NG (Upcoming: maintenance May 5th to May 8th) | Status |
|---|---|
| System | UP |
| Login nodes: skx.supermuc.lrz.de | UP |
| File systems | UP |
| Partitions/queues | |
| Globus Online file transfer | UP |
| Linux Cluster (Attention: CM2 still in test operation after the OS update) | Status |
|---|---|
| CoolMUC-2 login nodes: lxlogin(1,2,3,4).lrz.de | UP |
| CoolMUC-2 serial partition: serial | UP |
| CoolMUC-2 parallel partitions: cm2_(std,large) | |
| CoolMUC-2 interactive partition: cm2_inter | UP |
| CoolMUC-3 login nodes: lxlogin(8,9).lrz.de | UP |
| CoolMUC-3 parallel partition: mpp3_batch | UP |
| CoolMUC-3 interactive partition: mpp3_inter | UP |
| teramem, ivymuc, kcs login node: lxlogin10.lrz.de | UP |
| ivymuc | UP |
| teramem_inter | UNAV |
| kcs | UP |
| File systems: HOME, DSS | UP |
| File system: SCRATCH | UP |
| Compute Cloud and other HPC Systems | Status |
|---|---|
| Compute Cloud (https://cc.lrz.de); detailed status and free slots: https://cc.lrz.de/lrz | UP |
| GPU Cloud (https://datalab.srv.lrz.de) | UP |
| DGX-1 | UP |
| DGX-1v | UP |
| RStudio Server (https://www.rstudio.lrz.de) | UP |
Messages for SuperMUC-NG

See https://www.lrz.de/aktuell/ali00848.html for the announcement of the next SuperMUC-NG maintenance on May 5-6, 2020. A block operation for very large jobs will be conducted during the 48 hours following the maintenance.
Messages for Linux Cluster

The command `dssusrinfo all` is currently not available.
CoolMUC-2: Update of the Spack environment. The Spack-driven software environment (20.1.1) contains a number of important bug fixes. While we expect no substantial impact on existing programs, the update also affects Intel MPI, which will be upgraded to 2019 Update 7. If this causes difficulties, please switch back to the old module as follows: `module unload intel-mpi spack`. Alternatively, we recommend recompiling your application in the new environment to be future-proof.
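As a sketch, the module switch above might look like the following interactive session on a CoolMUC-2 login node. These are environment-configuration commands, not a standalone script, and the names of the previous module versions are not stated in the announcement, so they must be looked up with `module avail` rather than guessed:

```shell
# Unload the updated Spack environment and the Intel MPI it pulls in
# (this command is taken verbatim from the announcement above)
module unload intel-mpi spack

# List the Spack and Intel MPI modules actually installed on the cluster,
# then load the desired older versions by their full name/version strings.
# The announcement does not name those versions, so check the listing first.
module avail spack intel-mpi
```

Recompiling against the new environment, as the announcement suggests, avoids having to pin old modules in job scripts going forward.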
CoolMUC-2 has been back in operation since approximately 19:45, with the exception of cm2_tiny, which will require reconfiguration work next week. The teramem_inter queue can be accessed via the lxlogin8 login node. If you encounter problems, please see: CoolMUC-2: Open issues after the Cluster Hardware and Software Upgrade.
Messages for Cloud and other HPC Systems