Job Processing on Housed Systems
How do I prepare, run, and manage jobs?
In order to run batch or interactive jobs, you need to log in to the login node of your housed system. If your housed system does not have its own login node, you may use the CoolMUC-4 login node:

```
ssh -Y userID@cool.hpc.lrz.de
```
Please consult Access and Login to the Linux-Cluster for further details on:
- the login procedure,
- the SSH documentation and usage policies,
- two-factor authentication.
Please consult the Linux Cluster documentation and subpages for
- examples of batch job scripts for serial or parallel jobs (e.g., shared-memory or MPI jobs),
- job submission procedure,
- Slurm commands to manage jobs.
You may need to adapt the examples according to your needs and the requirements of your housed system!
Please also consider our policies and rules!
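As a starting point, the steps above can be sketched as a minimal serial batch script. This is only an illustration: the job name, output file, executable, and resource values are placeholders, and the cluster/partition names must match those of your housed system (see the resource limits table in the next section).

```shell
#!/bin/bash
#SBATCH -J myjob                 # job name (placeholder)
#SBATCH -o myjob.%j.out          # stdout/stderr file; %j expands to the job ID
#SBATCH --clusters=httc          # cluster of your housed system -- adapt!
#SBATCH --partition=httc_std     # partition of your housed system -- adapt!
#SBATCH --ntasks=1               # serial job: one task
#SBATCH --mem=3G                 # stay within the per-core memory limit
#SBATCH --time=01:00:00          # wall time; must not exceed the run time limit

./my_program                     # replace with your own executable
```

Submit the script with `sbatch myjob.slurm`. Because housed systems are separate Slurm clusters, management commands need the cluster flag, e.g. `squeue --clusters=httc --user=$USER` to list your jobs and `scancel --clusters=httc <jobid>` to cancel one.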
Resource limits
The clusters and partitions listed in this section are only available for institutes that have a housing contract with LRZ.
| Cluster/Partition | Architecture | Core counts and remarks | Run time limit (hours) | Memory limit (GByte) |
|---|---|---|---|---|
| --clusters=lcg --partition=lcg_serial | 40-way Cascade Lake node; 10 192-way AMD EPYC 9654 96-core nodes | 1 core (effectively more if large memory is specified). Access is restricted to users from LMU high energy physics. | 96 | 64-180 GByte (complete node) |
| --clusters=htso --partition=htso_std | 80-way Ice Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 168 | 9 GByte (for 1 core) |
| --clusters=hlai --partition=hlai_std | 80-way Ice Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 168 | 6 GByte (for 1 core) |
| --clusters=httc --partition=httc_std | 80-way Ice Lake node | 1 core (effectively more if large memory is specified). Access is limited. | 960 | 3 GByte (for 1 core) |
| --clusters=httc --partition=httc_high_mem | 80-way Ice Lake node | 1 core (effectively more if large memory is specified). Access is restricted. | 960 | 3 GByte (for 1 core) |
| --clusters=biohpc_gen --partition=biohpc_gen_highmem | 40-way Skylake node | 1 CPU (effectively more if large memory is specified). Access is limited. | 504 | 4-40 GByte (for 1 CPU) |
| --clusters=biohpc_gen --partition=biohpc_gen_production | 40-way Skylake node | 1 CPU (effectively more if large memory is specified). Access is limited. | 336 | 4-40 GByte (for 1 CPU) |
| --clusters=biohpc_gen --partition=biohpc_gen_normal | 40-way Skylake node | 1 CPU (effectively more if large memory is specified). Access is limited. | 48 | 4-40 GByte (for 1 CPU) |
| --clusters=biohpc_gen --partition=biohpc_gen_inter | 40-way Skylake node | 1 CPU (effectively more if large memory is specified). Access is restricted. | 12 | 4-40 GByte (for 1 CPU) |
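The cluster and partition options from the first column are passed to sbatch exactly as listed. A short sketch (the script name job.sh and the memory value are placeholders):

```shell
# Submit to the biohpc_gen "normal" partition (48 h run time limit):
sbatch --clusters=biohpc_gen --partition=biohpc_gen_normal job.sh

# "Effectively more cores if large memory is specified": requesting more
# memory than the per-CPU share implicitly reserves additional CPUs,
# e.g. --mem=16G on a partition with a 4 GByte per-CPU share:
sbatch --clusters=biohpc_gen --partition=biohpc_gen_normal --mem=16G job.sh
```

Note that the requested wall time (--time) must stay within the run time limit of the chosen partition, or the job will be rejected at submission.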