Message of the Day
SuperMUC and SuperMUC-NG
Linux Cluster Segments
Cloud and other Systems
The Aug 7-8 maintenance has concluded and the system is back in operation.
We have concluded the acceptance tests of SuperMUC-NG. We will therefore enable accounting of compute time on the new system for all jobs that complete after August 19th, 2019, 00:00.
The billing unit on both SuperMUC-NG and SuperMUC is “core-hours”. Core-hours used on either system are subtracted from your common budget. Since a SuperMUC-NG core has a higher peak performance than a SuperMUC core, this should be an incentive to transfer your data and programs to the new system.
Your compute time budget is displayed on login. You can also query it using the commands provided by the lrztools module:
$ module load lrztools
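As a sketch of such a query session (the `budget_and_quota` command name is an assumption based on the lrztools package; check the LRZ documentation if the name on your system differs):

```shell
# Load the LRZ tools module, then query used and remaining core-hours.
# budget_and_quota is an assumed command name from the lrztools module;
# consult the LRZ documentation for the authoritative command.
module load lrztools
budget_and_quota
```

This only works on the LRZ login nodes, where the module environment is available.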
The old system will be powered down on December 31st, 2019.
Please copy your data beforehand to the new system.
Test Operation of SuperMUC-NG:
After an extended maintenance and configuration period, we are now starting the test operation of SuperMUC-NG. All users with a valid SuperMUC-NG account are cordially invited to use the new machine. We have already provided you with your new UserIDs via a personal email.
Please read the documentation: https://doku.lrz.de/display/PUBLIC/SuperMUC-NG
Please note the following issues:
- The SCRATCH file system is not available yet. All your data will have to be stored in your project WORK directory. You must set the variable WORK in your .profile yourself.
- The EAR (Energy Aware Runtime) mechanism is currently not active.
- Please proceed with your data migration to SuperMUC-NG as soon as possible. The old SuperMUC Phase 2 and its file systems go out of operation in December 2019. All data not archived or copied by then will be irretrievably lost!
See https://doku.lrz.de/display/PUBLIC/Data+Migration+from+SuperMUC+to+SuperMUC-NG for technical support.
- Job accounting on SuperMUC-NG is NOT active yet, so no core-hours will be deducted from your budget. We will inform you in due time before accounting is activated. Please act responsibly with these free resources. Any attempt to monopolize the queues by a single user or project will be seen as an unfriendly act and may lead to eviction from the test operation. Currently, a tight limit on concurrently submitted jobs per user is in place to avoid such monopolization.
- We have rebuilt the software stack provided by spack and rolled out the new release version 19.1, which is now the default on SuperMUC-NG. All packages are built against the intel19.04/gcc7.3 compiler, Intel MKL 2019.4, and Intel MPI 2019.4 (where applicable). If you have previously built software, you may consider rebuilding or relinking it.
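The WORK setting mentioned in the first point above can be sketched as a .profile fragment; the path below is purely a placeholder assumption, so substitute the actual project WORK directory given in the LRZ documentation:

```shell
# ~/.profile fragment (sketch): define WORK for your project directory.
# The path is a placeholder assumption -- replace it with the real
# project WORK path from the LRZ documentation for your project.
export WORK=/path/to/your/project/work/directory
```

With this in place, job scripts can refer to $WORK instead of hard-coding the project path.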
To report problems with SuperMUC-NG, please open a ticket at the LRZ service desk (https://doku.lrz.de/display/PUBLIC/Servicedesk+for+SuperMUC-NG) and use the keyword SuperMUC-NG in the ‘short description’. In case of problems, always provide the job ID, the approximate time of occurrence, the location of your job script, and as much information as possible that may help to identify the problem.
The next call for GCS large-scale computing time proposals on SuperMUC-NG, JUWELS and Hazel Hen/Hawk will cover the period November 1, 2019 to October 31, 2020.
The call will open on 8 July 2019 and close on 5 August 2019, 17:00 CEST.
See also GCS large scale projects on SuperMUC-NG.
Because of the delayed start of operation of the new system “SuperMUC-NG”, we have decided to additionally operate the old system “SuperMUC Phase 2” until the end of 2019. This will provide additional capacity for your jobs. However, full support for the file systems on the old system is only guaranteed until the end of November 2019. Please copy your data beforehand to the new system.
Linux Cluster Segments: currently no news.
Cloud and other Systems: currently no news.