- Forgot your password? click here
- Add a new user (only for SuperMUC-NG)? click here
- Add a new IP (only for SuperMUC-NG)? click here
- How to write good LRZ Service Requests? click here
System Status (see also: Access and Overview of HPC Systems)

| Status colour | Meaning |
|---|---|
| Green | fully operational |
| Yellow | operational, but with some problems or restrictions (see messages below) |
| Red | not available |
Höchstleistungsrechner (SuperMUC-NG)

| Component | Details |
|---|---|
| System | |
| Login nodes | skx.supermuc.lrz.de |
| Archive nodes | skx-arch.supermuc.lrz.de |
| Partitions/Queues | general, large, fat, test |
| Globus Online File Transfer | |
| Detailed node status | |

Details:

Add new user? click here
Add new IP? click here
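As a quick orientation for the login nodes and partitions listed above, here is a minimal Slurm batch script (the page itself confirms jobs run under SLURM); the job name, the project account "pr12ab", and the resource limits are placeholders, not values taken from this page:

```bash
#!/bin/bash
#SBATCH --job-name=smoke-test
#SBATCH --partition=test       # one of the SuperMUC-NG partitions listed above
#SBATCH --account=pr12ab       # placeholder project name, as used elsewhere on this page
#SBATCH --nodes=1
#SBATCH --time=00:10:00

srun hostname                  # report which compute node the job ran on
```

Submit it after logging in to a login node, e.g. ssh yourid@skx.supermuc.lrz.de followed by sbatch job.sh.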
Linux Cluster

| System | Login nodes | Partitions | Status |
|---|---|---|---|
| CoolMUC-2 | lxlogin(1,2,3,4).lrz.de | serial partition: serial; parallel partitions: cm2_(std,large); cluster cm2_tiny; interactive partitions: cm2_inter, mpp2_inter | |
| C2PAP | | c2pap | |
| CoolMUC-3 | lxlogin(8,9).lrz.de | parallel partition: mpp3_batch; interactive partition: mpp3_inter | UP |
| teramem, kcs | | teramem_inter, kcs | |
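Similarly, a minimal sketch for the CoolMUC-2 systems listed above; since the page addresses cm2_tiny as its own cluster, this script selects it via Slurm's --clusters option, and the resource values are placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=tiny-test
#SBATCH --clusters=cm2_tiny    # addressed as a cluster above; cm2_(std,large) are partitions
#SBATCH --nodes=1
#SBATCH --time=00:15:00

srun hostname                  # report which compute node the job ran on
```

The matching queue can be inspected with squeue --clusters=cm2_tiny.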
File Systems

| File system | Status |
|---|---|
| HOME (DSS) | UP |
| SCRATCH (mpp2) | |
| SCRATCH (mpp3) | |

Details:
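To verify from a login node that the file systems listed above are mounted and responding, generic POSIX tools are enough; this sketch assumes the usual LRZ environment variables such as $SCRATCH are set on your cluster:

```bash
# Show where HOME and SCRATCH point and whether their file systems respond
echo "$HOME"    && df -h "$HOME"
echo "$SCRATCH" && df -h "$SCRATCH"   # skip if $SCRATCH is not defined on this system
```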
Compute Cloud and other HPC Systems

| Service | Status / Details |
|---|---|
| Compute Cloud (https://cc.lrz.de) | detailed status and free slots: https:// |
| LRZ AI Systems | |
| RStudio Server | |

Details:
Messages for SuperMUC-NG
See https://www.lrz.de/aktuell/ali00823.html for the scheduled maintenance on January 29.
See https://www.lrz.de/aktuell/ali00821.html for news on access to the LRZ license server for the ANSYS software (licansys.lrz.de) from the SNG compute nodes in SLURM jobs.
Archive nodes update

The new ANSYS software release, version 2022.R1, has been installed and provided on SuperMUC-NG. For details and some minor pending issues with this new software release, please refer to the corresponding announcement:
23rd Gauss Call for Large-Scale Projects

Users with large computing-time requirements must submit their proposals via the Gauss Calls. The current 23rd Gauss Call is open from January 13th to February 10th, 2020, 17:00 CET (strict deadline). The call covers the period 1 May 2020 to 30 April 2021.

- Projects that need more than 45 million core-hours per year on SuperMUC-NG must apply through this call.
- LRZ provides a total of 600 million core-hours for this call.

Further information:
The Energy Aware Runtime (EAR) has been reactivated. Please be aware that this may have an impact on job processing times.
Please note that WORK/SCRATCH on SuperMUC-NG may currently exhibit performance degradation under heavy I/O load. Take this into account when planning your job runtimes.
The new hpcreport tool is now available to check job performance and accounting on SuperMUC-NG. Please check it out.
Change of Access Policy for the tape archive

Due to changed technical specifications of the IBM Spectrum Protect software, we have to change the access policy for the tape archive on SuperMUC-NG. This also affects data from SuperMUC that have already been put into the tape archive.

- Permissions to access the data are now granted to all users of a project, i.e. all users in a project group can retrieve data from other users in this project group.
- The previous policy was that only the user who wrote the data into the archive could access it.
- If your project is "pr12ab", you can see the members of this group with: getent group pr12ab-d
- You have to add the project in the dsmc commands, e.g.: dsmc q ar "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab
- Please note the difference between the project ("pr12ab") and the permission group for data ("pr12ab-d").

See also: Backup and Archive on SuperMUC-NG
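Putting the commands from this message together, a minimal sketch of checking group membership, querying the archive, and retrieving a file; "pr12ab", the username "us12ab5", the path, and the file name are the placeholder examples from above:

```bash
# List the members of the data permission group for project pr12ab
getent group pr12ab-d

# Query archived files under the project's WORK area; -se selects the project's server stanza
dsmc q ar "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab

# Retrieve one archived file back to its original location (hypothetical file name)
dsmc retrieve "/gpfs/work/pr12ab/us12ab5/results.tar" -se=pr12ab
```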
SuperMUC Phase 2 has finally been shut down.

skx-arch.supermuc.lrz.de (the node for archiving) will not be available before January 2020.
Messages for Linux Clusters
SCRATCH is now fully online again. While we expect older data that were temporarily inaccessible to be fully available again, data created in the last few days before the problems started might be corrupt and need to be restored from the tape archive (if one exists) or recreated.
The new ANSYS software release, version 2022.R1, has been installed and provided on the LRZ Linux Cluster systems (CM2, CM3 and RVS systems). For details and some minor pending issues with this new software release, please refer to the corresponding announcement:

The new release of Abaqus, version 2022 (Dassault Systèmes software), has been installed on both Linux Clusters, CoolMUC-2 and CoolMUC-3, as well as on the RVS systems. The Abaqus documentation has been updated.

The new release of Simcenter StarCCM+, version 2021.3.1 (Siemens PLM software), has been installed and provided on the LRZ HPC systems (CM2, CM3, SNG and RVS systems). For details please see the corresponding announcement:
CoolMUC-3 scheduled maintenance (concluded). This also impacts housing customers with systems integrated into CoolMUC-3. Details are published separately.
HOME directory path has changed

You will find all HOME data in the new DSS HOME area. Data migration was performed by LRZ (unless you were specifically notified that you need to perform the HOME data migration yourself). For emergency recoveries, the legacy NAS area (pointed to by the HOME_LEGACY environment variable) will remain available in read-only mode until the end of 2019.
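If you still need to recover something from the legacy area while it remains readable, a plain copy into the new HOME is sufficient; the subdirectory name below is a placeholder:

```bash
# Copy a directory from the read-only legacy NAS HOME into the new DSS HOME
cp -a "$HOME_LEGACY/old_project_data" "$HOME/"
```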
End of service for NAS systems

NAS paths (the former HOME and PROJECT areas) were taken offline at the beginning of January 2020. Please contact the Service Desk if you have outstanding data migration issues.
The limitations affecting RStudio Server have been resolved.
There are 4 new Remote Visualization (RVS_2021) nodes available in a friendly-user testing period. The nodes are operated under Ubuntu OS and NoMachine. For more details, please refer to the documentation.
Messages for Cloud and other HPC Systems
The LRZ AI and MCML systems are back in operation, as the maintenance procedure planned from January 7th to January 11th has been completed. The RStudio Server service at LRZ has been decommissioned. For a replacement offering, please see Interactive Web Servers on the LRZ AI Systems and, more generally, LRZ AI Systems.
HPC Services
Attended Cloud Housing