
System Status (see also: Access and Overview of HPC Systems)

SuperMUC Phase 2 (available until end of 2019). Status: UP

  Login: hw.supermuc.lrz.de (UP)
  File systems: HOME (UP), WORK (UP), SCRATCH (UP)
  Partitions/queues: micro, general, test, big (UP)
  Detailed node status:
  Details:

SuperMUC-NG

  Login: skx.supermuc.lrz.de (UP)
  File systems: HOME (UP), WORK (UP), SCRATCH (not yet available), DSS (UP)
  Partitions/queues: micro, fat, general, large (UP)
  Globus Online File Transfer (UP)
  Details:
  Submit an Incident Ticket for SuperMUC-NG

Linux Cluster

  Login: lxlogin(5-7, 8, 10).lrz.de (UP)
  Systems/queues: mpp2_batch (UP), mpp2_inter (UP), serial (UP), mpp3_batch (UP), mpp3_inter (UP), teramem_inter (UP), ivymuc (UP)
  Detailed node status:
  Details:
  Submit an Incident Ticket for the Linux Cluster

Compute Cloud and other Systems

  LRZ Compute Cloud (https://cc.lrz.de): UP
  OpenNebula (https://www.cloud.mwn.de): UP
  GPU Cloud (https://datalab.srv.lrz.de): UP (test operation)
  DGX-1: UNAVAILABLE
  DGX-1v: UP
  RStudio Server (https://www.rstudio.lrz.de): UP
  Details:
  Submit an Incident Ticket for the Compute Cloud

Message of the Day


SuperMUC and SuperMUC-NG

2019-08-08

SuperMUC-NG maintenance:

The Aug 7-8 maintenance has concluded and the system is back in operation.

2019-08-13

We have concluded the acceptance tests of SuperMUC-NG. Therefore, we will enable accounting of compute time on the new system for all jobs that complete after August 19, 2019, 0:00.

The billing unit on both SuperMUC-NG and SuperMUC is the core-hour. Core-hours used on either system are subtracted from your common budget. Since a SuperMUC-NG core has a higher peak performance than a SuperMUC core, this is an incentive to transfer your data and programs to the new system.

Your compute time budget is displayed at login. You can also query it with the commands:

$ module load lrztools
$ budget_and_quota

The old system will be powered down on December 31, 2019.

Please copy your data beforehand to the new system.
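
For example, one minimal way to copy a directory, assuming you can reach the old system via ssh from a SuperMUC-NG login node; the user ID and project path below are hypothetical placeholders, and the supported procedures are described on the Data Migration page referenced in the 2019-07-30 message below:

$ # Run on a SuperMUC-NG login node: pull data from SuperMUC Phase 2 via rsync.
$ # di12abc and pr12ab34 are placeholders for your user ID and project.
$ rsync -av di12abc@hw.supermuc.lrz.de:/gpfs/work/pr12ab34/di12abc/ "$WORK"/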

2019-07-30

Test Operation of SuperMUC-NG:

After an extended maintenance and configuration period for SuperMUC-NG, we are now starting test operation. All users with a valid SuperMUC-NG account are cordially invited to use the new machine. We have already provided you with your new user IDs via a personal email.

Please read the documentation: https://doku.lrz.de/display/PUBLIC/SuperMUC-NG

Please note the following issues:

  • The SCRATCH file system is not available yet. All your data will have to be stored in your project WORK directory. You must configure the variable WORK in your .profile (a minimal sketch follows after this list).
    See: https://doku.lrz.de/display/PUBLIC/Operational+Concept
  • The EAR (Energy Aware Runtime) mechanism is currently not active.
  • Please proceed with your data migration to SuperMUC-NG as soon as possible. The old SuperMUC Phase 2 and its file systems go out of operation in December 2019. All data not archived or copied by then will inevitably be lost!
    See https://doku.lrz.de/display/PUBLIC/Data+Migration+from+SuperMUC+to+SuperMUC-NG for technical support.
  • Job accounting on SuperMUC-NG is NOT active yet, so no core-hours will be deducted from your budget. We will inform you in due time before accounting is activated. Please act responsibly with these free resources. Any attempt to monopolize the queues by a single user or project will be seen as an unfriendly act and may lead to eviction from the test operation. Currently, a tight limit on concurrently submitted jobs per user is in place to avoid such monopolization.
  • We have rebuilt the software stack provided via Spack and rolled out the new release version 19.1, which is now the default on SuperMUC-NG. All packages are built against the intel19.04/gcc7.3 compilers, Intel MKL 2019.4, and Intel MPI 2019.4 (where applicable). If you have previously built software, you may consider rebuilding or relinking.
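
As mentioned in the SCRATCH item above, a minimal sketch of setting WORK in ~/.profile; the path below is a hypothetical placeholder, and the authoritative layout for your project is given on the Operational Concept page:

# Append to ~/.profile. pr12ab34 is a placeholder project ID; use the
# WORK path assigned to your project (see the Operational Concept page).
export WORK=/hppfs/work/pr12ab34/$USER
mkdir -p "$WORK"   # create the directory if it does not already exist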

To report problems with SuperMUC-NG, please open a ticket at the LRZ service desk (https://doku.lrz.de/display/PUBLIC/Servicedesk+for+SuperMUC-NG) and use the keyword SuperMUC-NG in the ‘short description’. In case of problems, always provide the job ID, the approximate time of occurrence, and the location of your job script, as well as any other information that may help to identify the problem.

2019-06-19

The next call for GCS large-scale computing time proposals on SuperMUC-NG, JUWELS and Hazel Hen/Hawk will cover the period November 1, 2019 to October 31, 2020.

The call will open on 8 July 2019 and close on 5 August 2019, 17:00 CEST.

See also GCS large scale projects on SuperMUC-NG.



2019-06-04

Because of the delayed start of operation of the new system “SuperMUC-NG”, we have decided to additionally operate the old system “SuperMUC Phase 2” until the end of 2019. This provides additional capacity for your jobs. However, full support for the file systems of the old system is only guaranteed until the end of November 2019. Please copy your data to the new system beforehand.

Linux Cluster Segments

currently no news.



Cloud and other Systems

currently no news.

