
<< Back to the documentation start page

High Performance Computing

 


System Status (see also: Access and Overview of HPC Systems)

SuperMUC Phase 2 (available until end of 2019): UP
  • login: hw.supermuc.lrz.de: UP
  • File systems: HOME: UP, WORK: UP, SCRATCH: UP
  • Queues: micro, general, test, big: UP

SuperMUC-NG: UP
  • login: skx.supermuc.lrz.de: UP
  • File systems: HOME: UP, WORK: UP, SCRATCH: NOT YET AVAILABLE, DSS: UP
  • Partitions/Queues: micro, fat, general, large: UP

Linux Cluster: UP
  • login: lxlogin(5-7, 8, 10).lrz.de: UP
  • Partitions/Queues: mpp2_batch: UP, mpp2_inter: UP, serial: UP, mpp3_batch: UP, mpp3_inter: UP, teramem_inter: UP, ivymuc: UP

Compute Cloud and other Systems
  • LRZ Compute Cloud (https://cc.lrz.de): UP
  • OpenNebula (https://www.cloud.mwn.de): UP
  • GPU Cloud (https://datalab.srv.lrz.de): UP (TEST OPERATION)
  • DGX-1: UNAVAILABLE
  • DGX-1v: UP
  • RStudio Server (https://www.rstudio.lrz.de): UP
  • Globus Online File Transfer: UP


Submit an Incident Ticket for SuperMUC-NG 

Submit an Incident Ticket for the Linux Cluster

Submit an Incident Ticket for the Compute Cloud
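Jobs on the Linux Cluster are managed by the Slurm scheduler, and the partition names listed above (e.g. mpp2_batch) are selected in the job script. A minimal sketch, assuming Slurm; the job name, node count, and time limit are placeholder values:

```shell
#!/bin/bash
# Hedged sketch of a Slurm batch script for the mpp2_batch partition
# (partition name from the status table above; all resource values
# below are placeholders to adjust for your own job).
#SBATCH -J example_job
#SBATCH -p mpp2_batch
#SBATCH --nodes=1
#SBATCH --time=00:10:00

# The actual work of the job goes here.
echo "Job running on $(hostname)"
```

The `#SBATCH` lines are shell comments, so Slurm reads them as directives while the shell ignores them; the script would be submitted with `sbatch`.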

Message of the Day


SuperMUC and SuperMUC-NG

2019-08-13

We have concluded the acceptance tests of SuperMUC-NG. We will therefore enable accounting of compute time on the new system for all jobs that complete after August 19, 2019, 00:00.

The billing unit on both SuperMUC-NG and SuperMUC is "core-hours". Core-hours used on either system are subtracted from your common budget. Since a SuperMUC-NG core has a higher peak performance than a SuperMUC core, this should be an incentive to transfer your data and programs to the new system.

Your compute time budget is displayed on login. You can also query it by using the commands

$ module load lrztools
$ budget_and_quota

The old system will be powered down on December 31st, 2019.

Please copy your data beforehand to the new system.

2019-07-30 (update)

Test Operation of SuperMUC-NG:

After an extended maintenance and configuration period, SuperMUC-NG is now entering test operation. All users with a valid SuperMUC-NG account are cordially invited to use the new machine. We have already provided you with your new UserIDs via a personal email.

Please read the documentation: https://doku.lrz.de/display/PUBLIC/SuperMUC-NG

Please note the following issues:

  • The SCRATCH file system is not yet available. For now, all data must be stored in your project WORK directory. You must configure the variable WORK in your .profile.
    See: https://doku.lrz.de/display/PUBLIC/Operational+Concept
  • The EAR (Energy Aware Runtime) mechanism is currently not active.
  • Please proceed with your data migration to SuperMUC-NG as soon as possible. The old SuperMUC Phase 2 and its file systems go out of operation in December 2019. All data not archived or copied by then will be irrevocably lost!
    See https://doku.lrz.de/display/PUBLIC/Data+Migration+from+SuperMUC+to+SuperMUC-NG for technical support.
  • Job accounting on SuperMUC-NG is NOT yet active, so no core-hours will be deducted from your budget. We will inform you in due time before accounting is activated. Please act responsibly with these free resources: any attempt to monopolize the queues by a single user or project will be seen as an unfriendly act and may lead to eviction from the test operation. A tight limit on concurrently submitted jobs per user is currently in place to prevent such monopolization.
  • We have rebuilt the software stack provided via Spack and rolled out the new release version 19.1, which is now the default on SuperMUC-NG. All packages are built against the intel19.04/gcc7.3 compilers, Intel MKL 2019.4, and Intel MPI 2019.4 (where applicable). If you have previously built your own software, consider rebuilding or relinking it.
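Setting the WORK variable mentioned above amounts to one line in ~/.profile. A minimal sketch; the path below is a placeholder, since the real project work directory depends on your project (see the Operational Concept page):

```shell
# Hedged sketch: export WORK in ~/.profile so login shells and jobs
# can refer to the project work area as $WORK.
# The path is a hypothetical placeholder; substitute your project's
# actual work directory.
export WORK="$HOME/my_project_work"
echo "WORK is set to: $WORK"
```

After adding the line, a fresh login (or `source ~/.profile`) makes `$WORK` available in your shell and in job scripts.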

To report problems with SuperMUC-NG, please open a ticket at the LRZ Service Desk (https://doku.lrz.de/display/PUBLIC/Servicedesk+for+SuperMUC-NG) and use the keyword SuperMUC-NG in the 'short description'. When reporting a problem, always provide the job ID, the approximate time of occurrence, and the location of your job script, plus any further information that may help to identify the problem.

2019-06-19

The next call for GCS large-scale computing time proposals on SuperMUC-NG, JUWELS and Hazel Hen/Hawk will cover the period November 1, 2019 to October 31, 2020.

The call will open on 8 July 2019 and close on 5 August 2019 at 17:00 CEST.

See also GCS large scale projects on SuperMUC-NG.

Linux Cluster Segments

Currently no news.

Cloud and other Systems

Currently no news.
