Linux Cluster

The LRZ Linux Cluster consists of several segments with different types of interconnect and different sizes of shared memory. All systems have a (virtual) 64-bit address space:

  • CooLMUC-2 Cluster with 28-way Haswell-based nodes and an FDR14 InfiniBand interconnect, used for both serial and parallel processing
  • "Teramem": an Intel Broadwell-based HP DL580 shared-memory server with 6 TByte of memory
  • CooLMUC-3 Cluster with 64-way Intel Xeon Phi (KNL) 7210-F many-core processors and an Intel Omni-Path (OPA1) interconnect, used for parallel/vector processing

Based on these various node types, the LRZ Linux Cluster offers a wide range of capabilities:

  • mixed shared and distributed memory
  • large software portfolio
  • flexible usage due to various available memory sizes
  • parallelization by message passing (MPI)
  • shared memory parallelization with OpenMP or pthreads
  • mixed (hybrid) programming with MPI and OpenMP (a minimal sketch follows after this list)
  • secure shell (SSH) based logins and data transfer to generally accessible front-end nodes
  • development environment with compilers, tools, and libraries available on the front-end nodes; run-time environments and applications available on the batch nodes. Necessary licenses are supplied by LRZ.
  • resource assignment via SLURM scheduler
  • data management:
    • SCRATCH space for short-lifetime data (removal is enforced)
    • DSS/HOME area with a small quota for program and configuration data
    • DSS/PROJECT area (max. 10 TByte), available upon request, for long-lifetime data
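
The hybrid programming model mentioned in the capability list above can be illustrated with a minimal sketch in C. It assumes only a standard MPI library and an OpenMP-capable compiler; the build command shown in the comment (mpicc with -fopenmp) is an assumption and may differ from the compiler wrappers and modules actually provided on the cluster.

    /* Minimal hybrid MPI + OpenMP sketch: each MPI rank opens an OpenMP
       parallel region and reports its rank and thread id.
       Assumed build command (may differ on the cluster):
           mpicc -fopenmp hybrid_hello.c -o hybrid_hello */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request thread support sufficient for OpenMP regions between MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        #pragma omp parallel
        {
            printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Launched under SLURM with, for example, one MPI rank per node and one OpenMP thread per core, such a program exercises both the distributed-memory level (between nodes) and the shared-memory level (within a node).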

Details on the admission process

  • Application for admission shall be made through the Master User.
  • The LRZ project must contain the user account that will be authorized for accessing the Linux Cluster.
  • LRZ functional accounts, Campus-LMU accounts, TUM-online accounts, local university institutional accounts, external student accounts, and SuperMUC-NG accounts shall not be authorized for Linux Cluster access. The same applies to accounts assigned to a Max Planck institute or a student hostel.