High Performance Computing

SuperMUC Phase 2 (available until end of 2019)

Linux Cluster
- Login: lxlogin(5-7, 8, 10).lrz.de
- Systems: mpp2_batch (queues: micro, general, test, big), mpp2_inter, mpp3_inter (partitions/queues: micro, fat, general, large), teramem_inter, ivymuc

Compute Cloud and other Systems
- LRZ Compute Cloud (https://cc.lrz.de)
- GPU Cloud (https://datalab.srv.lrz.de)
- RStudio Server (https://www.rstudio.lrz.de)

File Systems
- HOME: some issues
- SCRATCH: not yet available
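As a sketch only, a minimal Slurm batch script targeting one of the partitions listed above might look like the following; the job name, resource limits, and executable are placeholders, not LRZ defaults:

```shell
#!/bin/bash
#SBATCH -J example_job        # placeholder job name
#SBATCH -p micro              # one of the listed partitions, e.g. micro or general
#SBATCH --nodes=1             # placeholder resource request
#SBATCH --time=00:30:00       # placeholder wall-clock limit
srun ./my_program             # placeholder executable
```

Such a script would typically be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`.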
Submit an Incident Ticket for SuperMUC-NG
Submit an Incident Ticket for the Linux Cluster
Submit an Incident Ticket for the Compute Cloud
Message of the Day
SuperMUC and SuperMUC-NG
Linux Cluster Segments
Cloud and other Systems
Test Operation of SuperMUC-NG:
After extended maintenance and configuration of SuperMUC-NG, we are now starting the test operation. All users with a valid SuperMUC-NG account are cordially invited to use the new machine. We have already provided you with your new UserIDs via a personal email.
Please read the documentation for SuperMUC-NG: https://doku.lrz.de/display/PUBLIC/SuperMUC-NG
Please note the following issues:
- The SCRATCH file system is not available yet. All your data will have to be stored in your project WORK directory. You must configure the variable WORK in your .profile.
- The performance of the HOME file system is currently degraded. Expect spurious hangs or longer waits for file creation.
- The EAR (Energy Aware Runtime) mechanism is currently not active.
- Please proceed with your data migration to SuperMUC-NG as soon as possible. The old SuperMUC Phase 2 and its file systems will go out of operation in December 2019. All data not archived or copied by then will be irretrievably lost!
See https://doku.lrz.de/display/PUBLIC/Data+Migration+from+SuperMUC+to+SuperMUC-NG for technical support.
- Job accounting on SuperMUC-NG is NOT active yet, so no core hours will be deducted from your budget. We will inform you in due time before accounting is activated. Please act responsibly with these free resources. Any attempt by a single user or project to monopolize the queues will be seen as an unfriendly act and may lead to exclusion from the test operation. A tight limit on concurrently submitted jobs per user is currently in place to prevent such monopolization.
- We have rebuilt the software stack provided via Spack and rolled out the new release version 19.1, which is now the default on SuperMUC-NG. All packages are built against the intel19.04/gcc7.3 compilers, Intel MKL 2019.4, and Intel MPI 2019.4 (where applicable). If you have previously built software, consider rebuilding or relinking it.
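Regarding the SCRATCH and WORK notes above: since SCRATCH is not yet available, a minimal sketch of the required WORK setting in ~/.profile could look as follows. The path shown is a placeholder; substitute your project's actual WORK directory as assigned to you:

```shell
# In ~/.profile -- point WORK at your project's WORK directory.
# The path below is a placeholder, not a real LRZ path.
export WORK=/path/to/your/project/work
```

After editing ~/.profile, the setting takes effect at the next login (or after `source ~/.profile`).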
To report problems with SuperMUC-NG, please open a ticket at the LRZ service desk (https://doku.lrz.de/display/PUBLIC/Servicedesk+for+SuperMUC-NG) and use the keyword SuperMUC-NG in the ‘short description’ field. Always provide the job ID, the approximate time of occurrence, and the location of your job script, together with any further information that may help identify the problem.
The next call for GCS large-scale computing time proposals on SuperMUC-NG, JUWELS and Hazel Hen/Hawk will cover the period November 1, 2019 to October 31, 2020.
The call will open on 8 July 2019 and close on 5 August 2019, 17:00 CEST.
See also GCS large scale projects on SuperMUC-NG.
Because of the delayed start of operation for the new system “SuperMUC-NG”, we have decided to additionally operate the old system “SuperMUC-Phase 2” until the end of 2019. This will provide additional capacity for your jobs. However, full support for the file systems on the old system is only guaranteed until end of November 2019. Please copy your data beforehand to the new system.
2019-06-14: The RStudio Server situation has been resolved.
2019-06-13: Due to a licensing issue (affecting our instances, not individual users), RStudio Server is currently unavailable. We are waiting for the software vendor to resolve the problem.