Linux Cluster General Security Policies
Network setup
All Linux Cluster systems are assigned addresses within a single Virtual Local Area Network (VLAN) of LRZ. This permits operation of a unified management infrastructure as well as integration of additional hardware and software components, such as the batch scheduler and the storage systems. A firewall encapsulating the VLAN prevents unauthorized access.
System Access
- The Linux Cluster login nodes can be accessed from the outside world. Access mechanisms are limited to SSH (Secure Shell, for interactive shells and small-scale data transfers) and Grid facilities (e.g. GridFTP, for large-scale data transfers).
- The Linux Cluster compute nodes can be accessed from the login nodes only by the owner of a batch job, and only during that job's runtime.
- The Linux Cluster management infrastructure cannot be accessed by regular users.
- Housed cluster segments are operated exclusively for the housing owner. Access rights for login nodes in a housed segment, as well as the right to submit jobs into a SLURM queue serving a housed segment, are granted only to accounts designated by the housing owner.
Authentication is performed either directly against the LRZ identity management system via LDAP, or using the Secure Shell public/private key mechanism. In the latter case, the user is required to protect the private key with a non-empty passphrase.
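As a sketch, a key pair with a passphrase-protected private key can be generated as follows (the key file name, passphrase, and login host are illustrative placeholders, not LRZ-prescribed values):

```shell
# Generate an Ed25519 key pair; -N sets the passphrase that
# protects the private key (choose a strong one in practice).
ssh-keygen -t ed25519 -N 'use-a-strong-passphrase' -f cluster_key -q

# The public key is then registered with the cluster account and
# used to log in (host name is a placeholder):
# ssh-copy-id -i cluster_key.pub user@<login-node>
# ssh -i cluster_key user@<login-node>
```

The private key (cluster_key) stays on the local machine; only the public key (cluster_key.pub) is transferred to the cluster.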
Batch Processing
SLURM is used as workload manager on all Linux Cluster segments. Using SLURM commands, a user can submit job scripts or start an interactive session; both inspection and management of SLURM jobs are limited to those jobs that are owned by the user. Except for the serial queue, complete compute nodes are dedicated exclusively to one user while a job of that user executes.
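A minimal job script and the corresponding SLURM commands might look as follows (the partition name and resource values are illustrative placeholders, not LRZ defaults; submission requires a running SLURM installation, so those commands are shown commented):

```shell
# Write a minimal batch script (all values are placeholders).
cat > myjob.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=serial      # placeholder partition name
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
srun hostname
EOF

# On a login node one would then submit and inspect the job:
# sbatch myjob.slurm       # submit the script
# squeue -u $USER          # list only your own jobs
# scancel <jobid>          # cancel one of your own jobs
```

Note that squeue and scancel act only on the invoking user's jobs, matching the ownership restriction described above.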
X Window System
Components of the X11 windowing system are installed on the Cluster login nodes. They permit starting graphical applications either in standalone mode or within a window-managing environment. The X11-specific security mechanism (.Xauthority) is in place to prevent unauthorized access by other users. Similarly, remote desktop environments such as VNC provide a password-based mechanism for the same purpose.
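For illustration, a graphical application is typically tunneled through SSH's X11 forwarding, which handles the .Xauthority cookie transparently (host name is a placeholder; these commands presuppose a remote cluster session and are therefore shown commented):

```shell
# Log in with trusted X11 forwarding enabled:
# ssh -X user@<login-node>

# On the login node, a graphical client then opens a window
# on the local display via the forwarded connection:
# xterm &
```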
File System Access
I/O subsystems are operated and managed separately from the Linux Cluster systems, but data access is uniform across all cluster segments. This uniformity is achieved through either NFS version 3 or GPFS file system mounts. NFS provides (limited) POSIX semantics for file system access, while GPFS supplies full POSIX semantics plus further features like Access Control Lists; the latter are mainly used in the context of DSS (see below).
All HOME, PROJECT and SCRATCH file system mounts have the general structure
/<mountpoint>/<group>/<user>
with root permissions only at the <group> level, and full access rights for the user at the <user> level. By default, no other user is granted access to the user's data, but the owner can change this using the standard UNIX permission scheme, possibly selectively for subsets of the data.
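For example, the standard permission scheme can open a single subdirectory to group members while keeping the rest private (directory names are illustrative):

```shell
# Keep one subtree private, but let group members read
# a selected subdirectory.
mkdir -p mydata/shared mydata/private
chmod 750 mydata          # group may traverse into mydata
chmod 750 mydata/shared   # group may list/read shared data
chmod 700 mydata/private  # private subtree stays owner-only
```

Files placed inside mydata/shared must themselves carry group-readable permissions (e.g. mode 640) to be accessible.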
For Data Science Storage, the DSS documentation supplies information about managing access.
For GPFS file systems, the root account is prevented from accessing the file system on all Cluster systems that execute users' processes.
Operating Environment
To ensure safe operation of the system, security updates delivered by the vendor are applied in a timely manner: within the shortest possible time frame for all nodes that are multiplexed between different users (login nodes and the serial queue), and otherwise during scheduled maintenance phases. Because additional proprietary drivers require functional integration, security-related roll-outs may be somewhat delayed; if this endangers safe operation of the system, LRZ may decide on a case-by-case basis to temporarily close access to all cluster systems.