

High Performance Computing


System Status (see also: Access and Overview of HPC Systems)

GREEN = fully operational
YELLOW = operational with restrictions (see messages below)
RED = not available

Linux Cluster

login nodes: lxlogin5, lxlogin6, lxlogin7
SLURM: mpp2_batch, mpp2_inter, serial


login node:
SLURM: mpp3_batch, mpp3_inter


teramem, ivymuc, kcs
login node:
SLURM: ivymuc, teramem_inter, kcs


File Systems
SCRATCH (mpp2), SCRATCH (mpp3)
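For reference, a job for one of the SLURM partitions listed above can be submitted with a batch script along these lines. This is a minimal sketch: the job name, resource limits, and the choice of the serial partition are assumptions, and whether a given name must be addressed as a SLURM cluster or as a partition depends on the LRZ configuration.

```shell
#!/bin/bash
#SBATCH -J example_job          # job name (hypothetical)
#SBATCH -o ./%x.%j.out          # stdout file: jobname.jobid.out
#SBATCH -D ./                   # working directory
#SBATCH --partition=serial      # one of the names listed above (assumed to be a partition)
#SBATCH --ntasks=1              # a single task for a serial job
#SBATCH --time=00:10:00         # wall-clock limit

./my_program                    # placeholder for the actual executable
```

The script would be submitted with `sbatch job.sh` and monitored with `squeue`. On multi-cluster SLURM installations, jobs may need to be addressed with the `-M`/`--clusters` flag instead of (or in addition to) `--partition`.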



Detailed node status: 


Submit an Incident Ticket for the Linux Cluster

Messages for SuperMUC

PRACE Summer of HPC: late-stage undergraduate and Master's students are invited to apply. Deadline: Feb 22, 2020.

Call for Participation: LRZ Extreme Scale Workshop 2020. Deadline: Feb 21, 2020.

See the linked announcement for details on the maintenance from January 29.

See the linked announcement for news on the development environment.

Messages for Linux Cluster

Job submission began failing yesterday evening. The problem was caused by a filesystem filling up on the SLURM server. Submissions should now be possible again.

See the linked Release Notes on the installation of the new ANSYS software release 2020.R1 on all LRZ Linux Clusters, SuperMUC-NG and RVS.

 End of service for NAS systems

NAS paths (former HOME and PROJECT areas) were taken offline at the beginning of January 2020. Please contact the Service Desk if you have outstanding data migration issues.

Messages for Cloud and other HPC Systems

The RStudio Server maintenance has concluded. Please read the RStudio Server (LRZ Service) documentation for the list of changes and actions required of users after this maintenance. Submit an incident ticket if you have any questions or encounter any issues.
