Job Processing on Housed Systems
How do I prepare, run, and manage jobs?
To run batch or interactive jobs, you need to log in to the login node of your housed system. If your housed system does not have its own login node, you may use lxlogin8:
ssh -Y userID@lxlogin8.lrz.de
Please consult Access and Login to the Linux-Cluster for further details on:
- the login procedure,
- the SSH documentation (e.g., policies),
- two-factor authentication.
Please consult the Linux Cluster documentation and its subpages for
- examples of batch job scripts for serial or parallel jobs (e.g., shared-memory or MPI jobs),
- the job submission procedure,
- Slurm commands to manage jobs.
You may need to adapt the examples according to your needs and the requirements of your housed system!
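As a starting point, a minimal batch script for a housed cluster might look like the following sketch. The job name, program name, cluster, partition, task count, run time, and memory values are placeholders only and must be replaced by values that are valid for your housed system (see the resource limits below).

#!/bin/bash
#SBATCH -J example_job                 # placeholder job name
#SBATCH -o ./%x.%j.out                 # standard output file
#SBATCH --clusters=tum_chem            # placeholder: the cluster of your housed system
#SBATCH --partition=tum_chem_batch     # placeholder: a partition you are entitled to use
#SBATCH --ntasks=28                    # placeholder: adapt to your job
#SBATCH --time=02:00:00                # must not exceed the run time limit of the partition
#SBATCH --mem-per-cpu=2G               # must not exceed the memory limit of the partition
module load slurm_setup                # as in the Linux Cluster example scripts; check whether this applies to your system
srun ./my_program                      # placeholder program

The job is then submitted and managed with the usual Slurm commands; note that the cluster must be specified explicitly (--clusters or -M):

sbatch my_job.sh
squeue --clusters=tum_chem --user=$USER     # list your jobs on that cluster
scancel --clusters=tum_chem <job_id>        # cancel a job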
Please also consider our policies and rules!
Resource limits
The clusters and partitions listed in this section are only available to institutes that have a housing contract with LRZ.
Job Type | Architecture | Core counts and remarks | Run time limit (hours) | Memory limit (GByte) |
---|---|---|---|---|
Distributed memory parallel (MPI) jobs | 28-way Haswell-EP nodes with Infiniband FDR14 interconnect | Please specify the cluster --clusters=tum_chem and one of the partitions --partition=[tum_chem_batch, tum_chem_test]. Jobs with up to 392 cores are possible (56 cores in the test queue). Dedicated to TUM Chemistry. | 384 (test queue: 12) | 2 per task (in MPP mode, using 1 physical core/task) |
Distributed memory parallel (MPI) jobs | 28-way Haswell-EP nodes with Infiniband FDR14 interconnect | Please specify the cluster --clusters=hm_mech. Jobs with up to 336 cores are possible (twice that number if hyperthreading is exploited). Dedicated to Hochschule München Mechatronics. | 336 | 18 per task (in MPP mode, using 1 physical core/task) |
Serial or shared memory jobs | 28-way Haswell-EP nodes with Ethernet interconnect | Please specify the cluster --clusters=tum_geodesy. Dedicated to TUM Geodesy. | 240 | 2 per task / 60 per node |
Shared memory parallel jobs | Intel- or AMD-based shared memory systems | Please specify the cluster --clusters=myri as well as one of the partitions --partition=myri_[p,u]. Dedicated to TUM Mathematics. | 144 | 3.9 per core |
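For illustration, the hm_mech row of the table above could translate into the following job header. This is a sketch only; the node count, run time, and memory per task are example values that must respect the limits listed in the table, and the program name is a placeholder.

#!/bin/bash
#SBATCH -J mpi_example
#SBATCH --clusters=hm_mech
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28      # 28 physical cores per Haswell-EP node
#SBATCH --time=24:00:00           # run time limit: 336 hours
#SBATCH --mem-per-cpu=2G          # up to 18 GByte per task are allowed
module load slurm_setup           # as in the Linux Cluster example scripts; check whether this applies to your system
mpiexec -n $SLURM_NTASKS ./my_mpi_program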
Cluster/Partition | Architecture | Core counts and remarks | Run time limit (hours) | Memory limit (GByte) |
---|---|---|---|---|
--clusters=tum_geodesy --partition=tum_geodesy_std | 28-way Haswell-EP node | 1 core (effectively more if large memory is specified). Access is restricted to users from the TUM geodesy chairs. | 240 | 2 GByte (for 1 core) |
--clusters=lcg --partition=lcg_serial | 28-way Haswell-EP node or 40-way Cascade Lake node | 1 core (effectively more if large memory is specified). Access is restricted to users from LMU high energy physics. | 96 | 64-180 GByte (complete node) |
--clusters=htso --partition=htso_std | 80-way Ice Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 168 | 9 GByte (for 1 core) |
--clusters=hlai --partition=hlai_std | 80-way Ice Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 168 | 6 GByte (for 1 core) |
--clusters=httc --partition=httc_std | 80-way Ice Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 960 | 3 GByte (for 1 core) |
--clusters=httc --partition=httc_high_mem | 80-way Ice Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 960 | 3 GByte (for 1 core) |
--clusters=biohpc_gen --partition=biohpc_gen_highmem | 40-way Skylake node | 1 core (effectively more if large memory is specified). Access is restricted. | 504 | 4-40 GByte (for 1 core) |
--clusters=biohpc_gen --partition=biohpc_gen_production | 40-way Skylake node | 1 core (effectively more if large memory is specified). Access is restricted. | 336 | 4-40 GByte (for 1 core) |
--clusters=biohpc_gen --partition=biohpc_gen_normal | 40-way Skylake node | 1 core (effectively more if large memory is specified). Access is restricted. | 48 | 4-40 GByte (for 1 core) |
--clusters=biohpc_gen --partition=biohpc_gen_inter | 40-way Skylake node | 1 core (effectively more if large memory is specified). Access is restricted. | 12 | 4-40 GByte (for 1 core) |
--clusters=htce --partition=htce_short | 40-way Cascade Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 5 | 9 GByte (for 1 core) |
--clusters=htce --partition=htce_long | 40-way Cascade Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 336 | 9-19 GByte (for 1 core) |
--clusters=htce --partition=htce_all | 40-way Cascade Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 72 | 9-19 GByte (for 1 core) |
--clusters=htce --partition=htce_special | 40-way Cascade Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 120 | 9 GByte (for 1 core) |
--clusters=c2pap --partition=c2pap_serial | 28-way Haswell-EP node | 1 core (effectively more if large memory is specified). Access is restricted. | 48 | 2 GByte (for 1 core) |
--clusters=c2pap --partition=c2pap_preempt | 28-way Haswell-EP node | 1 core (effectively more if large memory is specified). Access is restricted. | 48 | 2 GByte (for 1 core) |
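A serial job for one of the partitions above could, for example, be specified as follows. This is a sketch only; htce_long is used as a placeholder and must be replaced by a cluster and partition you are actually entitled to use, with run time and memory values within its limits.

#!/bin/bash
#SBATCH -J serial_example
#SBATCH --clusters=htce
#SBATCH --partition=htce_long
#SBATCH --ntasks=1
#SBATCH --time=48:00:00       # run time limit of htce_long: 336 hours
#SBATCH --mem=9G              # memory limit of htce_long: 9-19 GByte per core
module load slurm_setup       # as in the Linux Cluster example scripts; check whether this applies to your system
./my_serial_program

As noted in the table, requesting more memory than the per-core limit effectively allocates additional cores to the job.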