High Performance Computing

Forgot your password? click here
Add a new user (SuperMUC-NG only)? click here
Add a new IP (SuperMUC-NG only)? click here
How to write good LRZ Service Requests? click here
System Status (see also: Access and Overview of HPC Systems)
GREEN = fully operational · YELLOW = operational with restrictions (see messages below) · RED = not available
| Supercomputer (SuperMUC-NG) | Status |
|---|---|
| login nodes: skx.supermuc.lrz.de | UP |
| archive nodes: skx-arch.supermuc.lrz.de | UP |
| File Systems | UP |
| Partitions/Queues: fat, test | UP |
| Globus Online File Transfer | UP |
| Detailed node status | |
| Details: | |
| Submit an Incident Ticket for SuperMUC-NG | |
| Linux Cluster | Status |
|---|---|
| CoolMUC-2 | |
| lxlogin(1,2,3,4).lrz.de | UP |
| serial partition: serial | UP |
| parallel partitions: cm2_(std,large) | UP |
| cluster cm2_tiny | UP |
| interactive partition: cm2_inter | ISSUES |
| c2pap | UP |
| CoolMUC-3 | |
| lxlogin(8,9).lrz.de | UP |
| parallel partition: mpp3_batch | UP |
| interactive partition: mpp3_inter | UP |
| teramem, kcs | |
| teramem_inter | UP |
| kcs | UP |
| File Systems: HOME | UP |
| Details: | |
| Compute Cloud and other HPC Systems | Status |
|---|---|
| Compute Cloud (https://cc.lrz.de); detailed status and free slots: https://cc.lrz.de/lrz | UP |
| LRZ AI Systems | UP |
| RStudio Server | END OF LIFE |
| Details: | |
Messages for SuperMUC-NG
On Monday, April 4, 2022, a new version of the spack-based development and application software stack will be rolled out. The new spack version will be loaded as the default starting April 11, 2022. After that date, you will still be able to switch to the previous spack stack with `module switch spack spack/21.1.1`. We strongly recommend recompiling self-built applications after the roll-out. See https://doku.lrz.de/display/PUBLIC/Spack+Modules+Release+22.2.1 for details.
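A minimal shell sketch of the recommended rebuild, and of falling back to the previous stack if needed; the application path and make targets are hypothetical placeholders:

```bash
# Show the currently loaded modules, including the active spack stack
module list

# Recompile a self-built application against the new spack stack
# ($HOME/my_app and its make targets are hypothetical placeholders)
cd $HOME/my_app
make clean && make

# If the rebuild causes problems, fall back to the previous spack stack
module switch spack spack/21.1.1
```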
The base core frequency of jobs has been set to 2.3 GHz. Higher frequencies are possible using EAR.
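A hedged job-script sketch for requesting a higher frequency via EAR; the `--ear` and `--ear-cpufreq` options follow the EAR Slurm plugin interface, but the exact option names and the kHz unit are assumptions that should be verified against the LRZ EAR documentation:

```bash
#!/bin/bash
#SBATCH --job-name=freq-test
#SBATCH --nodes=1
#SBATCH --time=00:30:00
# EAR options (assumed names -- verify against the LRZ EAR documentation)
#SBATCH --ear=on
#SBATCH --ear-cpufreq=2600000   # target core frequency in kHz (assumption)

srun ./my_app                   # my_app is a placeholder binary
```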
The new hpcreport tool is now available to check job performance and accounting on SuperMUC-NG. Please check it out.
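Since the exact invocation is not shown here, a cautious first step is to query the tool's own help; the module name is an assumption:

```bash
# Make hpcreport available (module name is an assumption)
module load hpcreport

# List the supported options before querying job performance or accounting data
hpcreport --help
```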
Messages for Linux Clusters
The new version 2022.1.1 of the StarCCM+ CFD solver by Siemens PLM Software has been installed and rolled out on all Linux Clusters (cm2, mpp3, housing systems), on the "old" and "new" RVS systems, and on SuperMUC-NG. The module system and the documentation have been updated accordingly. Due to the observed issues with compute-node overcommitment in all 2020.* versions of StarCCM+ under the updated SLURM scheduler (see https://www.lrz.de/aktuell/ali00936.html), the StarCCM+ versions prior to 2021.1.1 have been decommissioned and removed from the module system. Version 2022.1.1 is the new default version of StarCCM+.
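A short sketch for picking up the new default version; the exact module name on the LRZ systems is an assumption:

```bash
# List the StarCCM+ versions provided by the module system
module avail starccm

# Load the new default version (2022.1.1)
module load starccm
```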
The Slurm configuration of the partition "cm2_inter" shows unintended behaviour: job submissions may be rejected with the error "Requested node configuration is not available". We are working on a solution. As a workaround, either limit jobs to at most 14 tasks per node (--ntasks-per-node=14) or omit the cluster and partition names (options -M and -p), as sketched below.
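A sketch of both workarounds for an interactive session; the cluster name passed to -M is an assumption, and the task limit is taken from the message above:

```bash
# Workaround 1: request at most 14 tasks per node on cm2_inter
srun -M inter -p cm2_inter --ntasks-per-node=14 --pty bash

# Workaround 2: omit the cluster (-M) and partition (-p) options entirely
srun --ntasks-per-node=14 --pty bash
```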
A scheduled maintenance of all cluster systems will start on March 28; see https://www.lrz.de/aktuell/ali00936.html for details. Update: all systems are online again.
There are four "new" Remote Visualization (RVS_2021) nodes available; the machines are in production mode. The nodes run Ubuntu and NoMachine. Usage is limited to 2 hours; if you need a longer period of time, please file an LRZ Service Request. For more details, please refer to the documentation.
Messages for Cloud and other HPC Systems
The LRZ AI and MCML systems will undergo maintenance from April 25th to April 27th (both inclusive). During this period, the systems will not be available to users. Normal user operation is expected to resume on April 28th.
HPC Services
Attended Cloud Housing