Forgot your password? click here
Add new user (only for SuperMUC-NG)? click here
Add new IP (only for SuperMUC-NG)? click here
How to write good LRZ Service Requests? click here
System Status (see also: Access and Overview of HPC Systems)
Status colour Green = fully operational
Status colour Yellow = operational with restrictions (see messages below)
Status colour Red = not available
High-Performance Computer (SuperMUC-NG)
System:
login nodes: skx.supermuc.lrz.de
archive nodes: skx-arch.supermuc.lrz.de
Partitions/Queues: general, large, fat, test
Globus Online File Transfer:
Detailed node status
Details:
Add new user? click here
Add new IP? click here
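For orientation, a minimal sketch of logging in to the nodes listed above; "myuser" is a placeholder, and access additionally requires a registered user account and a registered IP address (see the links above).

```
# Minimal sketch: log in to SuperMUC-NG. "myuser" is a placeholder;
# access requires a registered account and a registered IP address.
ssh myuser@skx.supermuc.lrz.de

# Archive nodes, e.g. for staging data to/from the archive:
ssh myuser@skx-arch.supermuc.lrz.de
```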
Linux Cluster
CoolMUC-2: lxlogin(1,2,3,4).lrz.de
SLURM: mpp2_batch, mpp2_inter, serial
serial partition: serial
parallel partitions: cm2_(std,large)
cluster: cm2_tiny
interactive partition: cm2_inter
c2pap
CoolMUC-3: lxlogin(8,9).lrz.de
parallel partition: mpp3_batch
interactive partition: mpp3_inter
teramem, kcs
SLURM: ivymuc, teramem_inter, kcs
File Systems: HOME, SCRATCH (mpp2), SCRATCH (mpp3)
Details:
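As a usage illustration for the partition scheme above, a minimal sketch of a SLURM batch script for the cm2_tiny cluster; the job name, resource values, and application binary are placeholders, and the slurm_setup module load is an assumption following LRZ's documented convention for cluster job scripts.

```
#!/bin/bash
#SBATCH --job-name=example          # placeholder job name
#SBATCH --clusters=cm2_tiny         # cm2_tiny cluster from the table above
#SBATCH --nodes=1                   # placeholder resource request
#SBATCH --time=00:30:00             # placeholder wall-clock limit
module load slurm_setup             # assumed LRZ-specific setup step
mpiexec ./my_app                    # placeholder application
```

Jobs for the cm2_(std,large) partitions would select a partition explicitly, e.g. --clusters=cm2 --partition=cm2_std (an assumption based on the cluster/partition names listed above).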
Compute Cloud and other HPC Systems
Compute Cloud: (https://cc.lrz.de)
detailed status and free slots: https://
LRZ AI Systems
RStudio Server
Details:
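Assuming the Compute Cloud exposes a standard OpenStack interface behind the https://cc.lrz.de dashboard, a minimal sketch of inspecting your resources from the command line; the openrc filename is a placeholder for the credentials file downloaded from the dashboard.

```
# Minimal sketch, assuming an OpenStack-based Compute Cloud.
# The openrc credentials file is obtained from the dashboard;
# its filename here is a placeholder.
source ./cc-openrc.sh
openstack server list     # list your running instances
openstack flavor list     # show available instance sizes
```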
Messages for SuperMUC-NG
Due to a partial disruption of I/O (WORK and SCRATCH), the machine is not fully available. Jobs may terminate with I/O error reports.
Call for Participation: LRZ Extreme Scale Workshop 2020. Deadline: Feb 21, 2020.
As part of the recent maintenance, we shifted the software stack to a new filesystem. Yesterday, during the late afternoon, a degradation occurred that led to segfaulting applications. The root cause is still under investigation. As a temporary remedy, we switched back to the software stack on the previous filesystem yesterday evening (May 19). Today, we confirmed that most applications seem to work well. We will keep you informed about progress towards a complete solution of the problem. Apologies for any inconvenience.
The maintenance has mostly concluded. Please see https://www.lrz.de/aktuell/ali00823.html and https://www.lrz.de/aktuell/ali00938.html for details on the maintenance from January 29.
23rd Gauss Call for Large-Scale Projects: users with large computing-time requirements must submit their proposals between January 13th and February 10th, 2020, 17:00 CET (strict deadline). The call will cover the period 1 May 2020 to 30 April 2021.
Further information:
See https://www.lrz.de/aktuell/ali00821.html for news on access to the LRZ license server for the ANSYS software (licansys.lrz.de) from the SNG compute nodes in SLURM jobs.
See https://www.lrz.de/aktuell/ali00820.html for news on the development environment.
On Monday, April 4, 2022, a new version of the spack-based development and application software stack will be rolled out. The new spack version will be loaded as default starting April 11, 2022. After that date, you will still be able to switch to the previous spack stack with: module switch spack spack/21.1.1. We strongly recommend recompiling self-built applications after the roll-out. See also https://doku.lrz.de/display/PUBLIC/Spack+Modules+Release+22.2.1 for details.
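For clarity, the stack switch described in the message above as shell commands; the switch back to the new default stack in the last line is an assumption about the module name, the rest is taken verbatim from the message.

```
# Inspect which spack stack is currently loaded
module list
# Revert to the previous spack stack (command from the message above)
module switch spack spack/21.1.1
# Return to the new default stack (module name is an assumption)
module switch spack/21.1.1 spack
```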
The base core frequency of jobs has been set to 2.3 GHz. Higher frequencies are possible using EAR.
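A hedged sketch of requesting a higher CPU frequency via EAR in a SLURM job script; the --ear and --ear-cpufreq flags follow the EAR SLURM plugin documentation and may differ on SuperMUC-NG, and all other values are placeholders.

```
#!/bin/bash
#SBATCH --partition=general      # partition from the table above
#SBATCH --nodes=1                # placeholder resource request
#SBATCH --time=00:30:00          # placeholder wall-clock limit
#SBATCH --ear=on                 # enable EAR for this job (assumed flag)
#SBATCH --ear-cpufreq=2700000    # request ~2.7 GHz, in kHz (assumed flag and unit)
mpiexec ./my_app                 # placeholder application
```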
The new hpcreport tool is now available to check job performance and accounting on SuperMUC-NG. Please check out the documentation.
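A minimal first-use sketch; since the message does not document the tool's interface, the help flag shown is an assumption.

```
# Run on a SuperMUC-NG login node. The help flag is an assumption;
# consult the tool's documentation for the actual interface.
hpcreport --help
```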
End of service for NAS systems
NAS paths (former HOME and PROJECT areas) were taken offline at the beginning of January 2020. Please contact the Service Desk if you have outstanding data migration issues.
Messages for Linux Clusters
There are 4 "new" Remote Visualization (RVS_2021) nodes available. The machines are in production mode. The nodes are operated under Ubuntu OS with NoMachine. Usage is limited to 2 hours; if you need a longer period of time, please file an LRZ Service Request. For more details, please refer to the documentation.
Messages for Cloud and other HPC Systems
We have observed and addressed an issue with the LRZ AI Systems that affected some running user jobs. As of now, newly started jobs should not be affected anymore.

The work on the LRZ AI Systems to address the recently observed stability issues has been concluded. All users are invited to continue their work. We closely monitor system operation and will provide additional updates if needed. Thank you for your patience and understanding.

We have identified the likely root cause of the ongoing issues with the LRZ AI and MCML Systems following the latest maintenance downtime. We continue to work towards a timely resolution and currently cannot guarantee uninterrupted and stable system availability. For further details, please see LRZ AI Systems.

The LRZ AI and MCML Systems underwent a maintenance procedure from April 25th to April 27th (both inclusive). During this period, the system was not available to users. Normal user operation resumed on 2022-04-27 16:30.