
How to get an account

Scientists and students from Munich universities, as well as other Bavarian universities, can obtain access to the LRZ Linux-Cluster systems.

You need to follow these steps before you can login:

If you know your master user:

  1. Contact the responsible master user located at your institution. If you don't know who your master user is, please contact the head of your department (group, institute), who will be able to point you to your master user.
  2. Your master user can create a new LRZ account for you through the LRZ Identity Management Portal.
  3. Your master user can then check the "Linux" box for this new LRZ account in the LRZ Linux-Cluster project web form.
  4. The LRZ user support will activate access to the LRZ Linux-Cluster for your new LRZ account.
  5. You will receive an email from the LRZ user support that your account has been activated.
  6. Access to the LRZ Linux-Cluster will be possible within one day.

If your institute or group has no master user, your group can apply for a new LRZ project.

  • You have to fill out this two-page PDF form (only available in German): "Antrag auf ein LRZ-Projekt", found in the article Vergabe von Kennungen über Master User.
  • Please send the filled-in and signed application form to the responsible LRZ contact person; scanning it and submitting it by e-mail is probably the fastest way.
  • Please don't forget to check the "Linux-Cluster" box!
  • After approval, your new master user can go through steps 2-6 above.

Login and Security

Only the login nodes can be accessed interactively from the outside world. Two mechanisms are provided for logging in to the system; both incorporate security features to prevent appropriation of sensitive information by a third party.

Access via Secure Shell

Details on how to configure ssh for usage with the LRZ clusters are available in the document ssh - Secure Shell on LRZ HPC Systems.

From the UNIX command line on your own workstation, log in to an LRZ account xxyyyzz via one of the commands given in the following table.

ssh -Y -l xxyyyzz lxlogin1.lrz.de     Haswell (CoolMUC-2) login node
ssh -Y -l xxyyyzz lxlogin2.lrz.de     Haswell (CoolMUC-2) login node
ssh -Y -l xxyyyzz lxlogin3.lrz.de     Haswell (CoolMUC-2) login node
ssh -Y -l xxyyyzz lxlogin4.lrz.de     Haswell (CoolMUC-2) login node
ssh -Y -l xxyyyzz lxlogin8.lrz.de     KNL segment (CoolMUC-3) login node
ssh -Y -l xxyyyzz                     Ivy Bridge (IvyMUC) login node
gsissh -Y lxgt2.lrz.de                login node for GSI-SSH

The login nodes are meant for preparing your jobs, developing your programs, and as a gateway for copying data from your own computer to the cluster and back again. Since this resource is shared among many users, LRZ requires that you do not start any long-running or memory-hogging programs on these nodes; production runs should use batch jobs that are submitted to the SLURM scheduler. Our SLURM configuration also supports semi-interactive testing. Violation of the usage restrictions on the login nodes may lead to your account being blocked from further access to the cluster, apart from your processes being forcibly removed by LRZ administrative staff!
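As a minimal illustration of the batch workflow described above, the following sketch creates a SLURM job script that can then be submitted from a login node. The job name, resource limits, and the executable ./my_program are placeholders for illustration, not LRZ defaults or policy:

```shell
# Minimal SLURM job script sketch; all values below (job name, time limit,
# node count, program name) are illustrative placeholders, not LRZ policy.
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH -J myjob              # job name (placeholder)
#SBATCH -o myjob.%j.out       # stdout file; %j expands to the job ID
#SBATCH --nodes=1             # number of nodes
#SBATCH --time=00:15:00       # wall-clock limit
srun ./my_program             # placeholder executable
EOF
# Submit from a login node with:
#   sbatch myjob.sh
```

Running the program through a script like this, rather than directly on the login node, keeps the shared login nodes free for their intended interactive use.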


  • The -Y option of ssh enables tunneling of the X11 protocol; it may be omitted if no X11 clients are required, or if you have already configured X11 tunneling in your ssh client in some other way.

  • The HOME directory on the Linux Cluster is an NFS mounted volume, which is uniformly mounted on all cluster nodes.

  • The login node for the KNL cluster is itself not a KNL system; you can develop and compile your software there, but if you optimize for KNL, you may not be able to execute the program on the login node itself and must instead use an interactive or scripted SLURM job.
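If you log in frequently, the ssh options above can be stored in a client configuration file instead of being typed each time. The following is only a sketch: the alias coolmuc2 is arbitrary, and the hostname lxlogin1.lrz.de is inferred from the host-key listing below rather than stated authoritatively here.

```
# Sketch of a ~/.ssh/config entry; alias and hostname are assumptions
Host coolmuc2
    HostName lxlogin1.lrz.de
    User xxyyyzz
    ForwardX11 yes
    ForwardX11Trusted yes   # together these correspond to the -Y option
```

With this entry in place, `ssh coolmuc2` is equivalent to the full command line from the table above.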

Secure Shell Public Keys

The Secure Shell ECDSA public keys for the interactive nodes are supplied here:

# Hosts lxlogin1,2,3,4 (CoolMUC-2) - all four hosts share the same key
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+NMRJcKKJ0tlj8BnAvPg7f5ThcPhLNEfjbVJm+tjR6RXwtSHOl2lIeJxU4bmoMEyki1QfCuzxVtzMzYGb5rH0=

# Host lxlogin8 (CoolMUC-3)
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFUhrpMPNzQr3Lll2wKb4iAcxRwqcD8QpW1fFXKNEOzYcSSuHIvpMOl7doIoS1uGRjfO1MIRuu26ADgIHs66tB0=

Please add these to ~/.ssh/known_hosts on your own workstation before logging in for the first time.
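Pre-seeding known_hosts can be scripted. The sketch below uses the published CoolMUC-2 key verbatim; the hostname lxlogin1.lrz.de is an assumption inferred from the key comment above:

```shell
# Sketch: add the published CoolMUC-2 ECDSA host key to known_hosts up
# front, then confirm ssh-keygen can find the entry. The hostname
# lxlogin1.lrz.de is an assumption, not confirmed by this page.
mkdir -p ~/.ssh
cat >> ~/.ssh/known_hosts <<'EOF'
lxlogin1.lrz.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+NMRJcKKJ0tlj8BnAvPg7f5ThcPhLNEfjbVJm+tjR6RXwtSHOl2lIeJxU4bmoMEyki1QfCuzxVtzMzYGb5rH0=
EOF
ssh-keygen -F lxlogin1.lrz.de   # prints the matching entry if present
```

Because the key is present before the first connection, ssh will not prompt you to accept an unverified host key.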

Login via Grid Services using GSI-SSH

An alternative way of accessing the cluster is to use GSI-SSH, which is a component of the Globus toolkit and provides

  • terminal access to your account
  • a single sign-on environment (no password required to access other machines)
  • easy access to a number of additional functionalities, including secure and parallel file transfer

The prerequisites for using it are

  • a Grid certificate installed on your machine and acknowledged by LRZ, as described on the LRZ Grid Portal. Please note that TUM, LMU, and LRZ members can alternatively use the DFN's new and easy short-lived credential service (SLCS): it allows you to obtain a certificate for Grid usage immediately.
  • an installation of a GSI-SSH client on your own workstation, either the command line tool gsissh or the multi platform Java tool Gsissh-Term, as described on the LRZ Grid Portal.

Changing of Password and Shell

Please always use the web interface on the LRZ server to change your login password or your login shell for the cluster systems. Cluster-local commands cannot be used for this purpose.

Passwords must be changed at least once every 6 months. We are aware that this measure imposes some overhead on users, but believe that it is necessary for security reasons; it was implemented based on guidelines of the BSI (the German federal agency for information security) and the IT security standard ISO/IEC 27001. You can determine the actual invalidation date for your password by logging in to the above-linked web interface and selecting the menu item "Person -> view" or "Account -> view authorizations". To prevent being surprised by a password becoming invalid, you will be notified of the need to change your password via e-mail. Even if you miss the deadline for the password update, this only implies a temporary suspension of your account - you will still be able to log in to the ID portal and make the password change.

Changing the password is also necessary after it has been newly issued, or reset to a starting value by a master user or LRZ staff. This assures that actual authentication is done with a password known only to the account owner.

Support via Service Desk

Questions concerning the usage of the Linux Cluster should always be directed to the LRZ Service Desk. A member of the LRZ HPC support team will then attend to your needs.

Documentation for Application Software and Packages

Please start from the HPC Software and Programming Support entries on the LRZ web server.

LRZ-specific configuration and policies on the clusters

Moving data from/to the cluster

The preferred method to move data to/from LRZ's Linux Cluster is using the Globus Research Data Management Portal. Details on the usage of Globus can be found here. Alternatively, you can also use scp (Secure Copy) or grid-ftp. FTP access to the cluster from outside (and also within the clusters) is disabled for security reasons. 
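For scp, the transfer commands look like the following sketch. The login host lxlogin1.lrz.de, the account xxyyyzz, and all file names are assumptions for illustration; the commands are wrapped in shell functions so nothing is transferred until you call them yourself:

```shell
# Sketch of scp transfers to/from the cluster; host, account, and file
# names are placeholders. Wrapped in functions for reuse; nothing runs
# until you call upload/download yourself.
upload() {
    # copy a local input file into your cluster HOME directory
    scp input.dat "xxyyyzz@lxlogin1.lrz.de:~/input.dat"
}
download() {
    # fetch a results archive back into the current local directory
    scp "xxyyyzz@lxlogin1.lrz.de:~/results.tar.gz" .
}
```

For large data sets or many files, the Globus portal mentioned above is the better choice, since it handles restarts and parallel streams for you.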

User accounts are personalized

User accounts are always assigned to a particular person. For a number of reasons, sharing of user accounts between different persons is not permitted; if noticed, it will lead to the account being deactivated by LRZ. All involved parties (including the Master User of the account's project) will be notified with information on the measures needed to rectify the situation.

Firewall, networking

The cluster is protected from certain types of external attacks by a firewall, the configuration of which may impact the functionality of certain applications as described in the following.

X11 Protocol

Direct X11 connections (via xhost or xauth) are prohibited, only ssh tunneling is supported.


None of the batch nodes in the cluster are routed to the outside world by default. Please contact LRZ HPC support if you require a particular external system to be reachable from the batch nodes.

Electronic mail

We recommend against using the Linux Cluster for mail purposes (apart from occasionally having the batch scheduler send mails to you). Please consult the LRZ documentation on how to use e-mail for how to properly use this facility.


Environment modules

Environment settings are controlled via the LRZ module system. Such settings are needed to access specific application program packages, or to properly establish a development environment.
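A typical interactive session with the module system looks like the following transcript sketch; the package name intel is only an example and may not match the software stacks actually installed:

```
$ module avail          # list all available software packages
$ module load intel     # make an (example) package available in this session
$ module list           # show which modules are currently loaded
$ module unload intel   # remove it again
```

Module commands only affect the current shell session; put `module load` lines into your SLURM job scripts to reproduce the same environment in batch jobs.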

Using the cron or at commands

This is not allowed on the LRZ cluster. Please submit SLURM batch jobs for performing computations.

General Linux System Documentation

As is typical for Linux systems, there are (at least) two formats for the system documentation:

  • man pages

  • info pages
