Forgot your password? click here
Add new user (only for SuperMUC-NG)? click here
Add new IP (only for SuperMUC-NG)? click here
How to write good LRZ Service Requests? click here
System Status (see also: Access and Overview of HPC Systems)
- Green = fully operational
- Yellow = operational, but experiencing problems or running with restrictions (see messages below)
- Red = not available
SuperMUC Phase 2 (will be switched off; all data will be deleted)

Supercomputer (SuperMUC-NG) | Status
---|---
login nodes: skx.supermuc.lrz.de | Green: UP
archive nodes: skx-arch.supermuc.lrz.de |
Partitions/Queues: general, large, fat, test |
Globus Online File Transfer |

Detailed node status
Details:
Add new user? click here
Add new IP? click here
Linux Cluster | Status
---|---
CoolMUC-2: lxlogin(1,2,3,4).lrz.de |
serial partition: serial |
parallel partitions: cm2_(std,large) |
cluster: cm2_tiny |
interactive partition: cm2_inter |
c2pap |
CoolMUC-3: lxlogin(8,9).lrz.de |
parallel partition: mpp3_batch |
interactive partition: mpp3_inter |
Housing clusters: teramem_inter, ivymuc, kcs |
File Systems | Status
---|---
HOME |

Details:
Compute Cloud and other HPC Systems
Compute Cloud (https://cc.lrz.de) | Status
---|---
detailed status and free slots: https:// |
LRZ AI Systems |
RStudio Server |

Details:
Messages for SuperMUC
Change of Access Policy for the tape archive
Due to changed technical specifications for the IBM Spectrum Protect software, we have to change the access policy for the tape archive on SuperMUC-NG.
This will also affect data from SuperMUC which has already been put into the tape archive.
- Permissions to access the data will now be granted to all users of a project, i.e., all users in a project group can retrieve data from other users in this project group.
- The previous policy was that only the user who wrote the data into the archive could access it.
- If your project is "pr12ab", you can see the members of this group with: getent group pr12ab-d
- You have to add the project to the dsmc commands, e.g.: dsmc q ar "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab
- Please note the difference between the project ("pr12ab") and the permission group for data ("pr12ab-d"). A short sketch of these commands follows below.
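A minimal sketch of how these commands fit together, using the example project "pr12ab" from above; the user names and file paths are hypothetical:

```bash
# List the members of the data permission group (example project "pr12ab"):
getent group pr12ab-d

# Query your own archived files; -se= selects the project's server stanza:
dsmc query archive "/gpfs/work/pr12ab/us12ab5/*" -se=pr12ab

# Under the new policy, any project member may retrieve files archived by
# another member (hypothetical source path and user):
dsmc retrieve "/gpfs/work/pr12ab/us12ab7/results.tar" \
    "$WORK/results.tar" -se=pr12ab
```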
See also: Backup and Archive on SuperMUC-NG
Deletion of data on SuperMUC Phase 2
Required on your part: data migration from SuperMUC to SuperMUC-NG!
skx-arch.supermuc.lrz.de (node for archiving) will not be available before January 2020.
SCRATCH/GPFS unavailable on CoolMUC-3
To avoid file system crashes, we have decided to unmount SCRATCH from all systems associated with CoolMUC-3 for now. We expect to revert this measure at the next scheduled maintenance.
HOME directory path will change
You will find all HOME data in the new DSS HOME area; data migration will be performed by LRZ (unless you are specifically notified that you need to perform the HOME data migration yourself). For emergency recoveries, the legacy NAS area (pointed to by the HOME_LEGACY variable) will remain available in read-only mode until the end of the year.
The following action is required on your part: adjust (job) scripts and configuration data to account for the changes in path names. LRZ strongly recommends using relative path names, because this minimizes the required work.
Examples:
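A hedged sketch of the kind of adjustment meant here; the absolute legacy path is made up:

```bash
# Before: an absolute path into the legacy NAS area (hypothetical),
# which breaks once the HOME path changes:
# INPUT=/nas/pr12ab/us12ab5/input.dat

# After: a path relative to HOME, which survives the migration:
INPUT="$HOME/input.dat"

# Emergency read-only access to the old data until the end of the year:
ls "$HOME_LEGACY"
```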
End of service for NAS systems
NAS paths will be taken offline at the beginning of January 2020. Please contact the Service Desk for outstanding data migration issues.
NAS PROJECT path is mounted read-only
between and
DSS PROJECT now available on HPC systems
The following action is required on your part:
Migrate data from the legacy NAS area (pointed to by the PROJECT_LEGACY variable) to the new DSS area. LRZ strongly advises getting rid of unneeded data sets and/or archiving data sets to tape.
Step-by-step procedure for migration (a sketch of the full sequence follows below):
- On any cluster login node, issue the command dssusrinfo all. This will list paths to accessible containers, as well as quota information etc.
- Edit your shell profile and set the PROJECT and/or WORK variable to a suitable path value based on the above output, typically one of the DSS paths with your account name appended to it.
- Use the cp, rsync, or tar command to migrate your data from PROJECT_LEGACY to the new storage area.
- If your scripts use absolute path names instead of the PROJECT or WORK variable, they need appropriate updates.
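A minimal sketch of this sequence, assuming a made-up DSS container path; take the real one from the dssusrinfo output:

```bash
# 1. List accessible DSS containers and quota information:
dssusrinfo all

# 2. Persist the new PROJECT path in the shell profile
#    (the container path below is hypothetical):
echo 'export PROJECT=/dss/dssfs02/pr12ab/pr12ab-dss-0000/us12ab5' >> ~/.bashrc
source ~/.bashrc

# 3. Migrate the data; rsync -a preserves permissions and timestamps
#    and can be re-run to resume an interrupted copy:
rsync -a "$PROJECT_LEGACY/" "$PROJECT/"
```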
Messages for SuperMUC-NG
Archive nodes update
The new ANSYS Software Release, Version 2022.R1, has been installed and provided on SuperMUC-NG. For details and for some minor pending issues with this new software release, please refer to the corresponding announcement:
The Energy Aware Runtime (EAR) has been reactivated. Please be aware that this may have an impact on job processing times.
Please note that WORK/SCRATCH on SuperMUC-NG may currently exhibit performance degradation under heavy I/O load. Take this into account when planning your job runtimes.
The new hpcreport tool is now available for checking job performance and accounting on SuperMUC-NG. Please check out
Messages for Linux Clusters
SCRATCH is now fully online again. While we expect older data that were temporarily inaccessible to be fully available again, data that were created in the last few days before the problems started might be corrupt and need to be renewed from tape archive (if one exists) or recreated.
The new ANSYS Software Release, Version 2022.R1, has been installed and provided on the LRZ Linux Cluster systems (CM2, CM3 and RVS systems). For details and for some minor pending issues with this new software release, please refer to the corresponding announcement:
The new release of Abaqus, Version 2022 (Dassault Systèmes software), has been installed on both Linux Clusters CoolMUC-2 and CoolMUC-3 as well as on the RVS systems. The Abaqus documentation has been updated.
The new release of Simcenter StarCCM+, Version 2021.3.1 (Siemens PLM Software), has been installed and provided on the LRZ HPC systems (CM2, CM3, SNG and RVS systems). For details please see the corresponding announcement:
User e-mail notification of DSS PROJECT link
The following actions are required on your part:
- Confirm the e-mail invitation to validate your access.
- After the Linux Cluster maintenance (see below), store the path information in an environment variable on the cluster (e.g. by setting the PROJECT variable in ~/.bashrc).
Once this is done, migration of data from NAS PROJECT to DSS PROJECT can start, for example as sketched below.
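A hedged example of such a one-off migration using tar, as an alternative to the rsync approach shown earlier; the DSS container path is made up, and PROJECT_LEGACY points at the old NAS area:

```bash
# Set once in ~/.bashrc (hypothetical DSS container path):
export PROJECT=/dss/dssfs02/pr12ab/pr12ab-dss-0000/us12ab5

# Stream the legacy NAS PROJECT data into the new DSS area,
# preserving the directory layout:
( cd "$PROJECT_LEGACY" && tar cf - . ) | ( cd "$PROJECT" && tar xf - )
```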
Please read the change description on how to handle the significant changes to the Linux Cluster configuration performed at the end of September 2019.
There are 4 new Remote Visualization (RVS_2021) nodes available in a friendly-user testing period. The nodes are operated under Ubuntu OS with NoMachine. For more details please refer to the documentation.
Messages for Cloud and other HPC Systems
The LRZ AI and MCML Systems are back in operation, as the maintenance planned from January 7th to January 11th is completed. The RStudio Server service at LRZ was decommissioned. For a replacement offering, please see Interactive Web Servers on the LRZ AI Systems and, more generally, LRZ AI Systems.