...
The following table gives an overview of the available OpenMPI installations on the LRZ HPC systems:
Platform | Environment module | Supported compiler module |
---|---|---|
SuperMUC-NG, CooLMUC2, CooLMUC3 | openmpi/2.1.6-intel19 | intel/19.0 |
SuperMUC-NG, CooLMUC2, CooLMUC3 | openmpi/3.1.6-intel19 | intel/19.0 |
SuperMUC-NG, CooLMUC2, CooLMUC3 | openmpi/4.0.4-intel19 | intel/19.0 |
SuperMUC-NG, CooLMUC2, CooLMUC3 | openmpi/2.1.6-gcc8 | gcc/8 |
SuperMUC-NG, CooLMUC2, CooLMUC3 | openmpi/3.1.6-gcc8 | gcc/8 |
SuperMUC-NG, CooLMUC2, CooLMUC3 | openmpi/4.0.4-gcc8 | gcc/8 |
To access one of the OpenMPI installations, unload the default MPI environment, load a suitable compiler module, and then load the appropriate OpenMPI environment module. For example, the command sequence
module unload intel-mpi
module load openmpi/4.0.4-intel19
or
module switch intel-mpi openmpi/4.0.4-intel19
can be used to select one of the OpenMPI installations from the table above. To compile and link the program, the mpicc / mpiCC / mpifort wrapper commands are available.
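For illustration, a compile and link sequence might look as follows; the source and executable file names (hello.c, hello.f90, etc.) and the optimization flag are placeholders, not part of the LRZ documentation:

module switch intel-mpi openmpi/4.0.4-intel19    # select the OpenMPI installation
mpicc   -O2 -o hello_c   hello.c      # C source (hypothetical file name)
mpiCC   -O2 -o hello_cxx hello.cpp    # C++ source (hypothetical file name)
mpifort -O2 -o hello_f   hello.f90    # Fortran source (hypothetical file name)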
...
Depending on whether EAR is enabled or disabled, add the following lines:

If EAR is enabled:
#SBATCH --ear=on
#SBATCH --ear-mpi-dist=openmpi
module switch intel-mpi openmpi/4.0.4-intel19

If EAR is disabled:
#SBATCH --ear=off
module switch intel-mpi openmpi/4.0.4-intel19
immediately after the SLURM prologue (#SBATCH block).
- Start the program with the srun or mpiexec command (currently EAR hangs with mpiexec; until an EAR fix is available, please disable EAR or use srun). See the job script sketch below.
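A minimal SuperMUC-NG job script sketch with EAR disabled; the job name, node and task counts, partition, account and the executable ./myprog.exe are illustrative placeholders, so take the exact #SBATCH settings from your existing Intel MPI script:

#!/bin/bash
#SBATCH -J openmpi_job              # illustrative job name
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
#SBATCH --nodes=2                   # placeholder node count
#SBATCH --ntasks-per-node=48        # 48 cores per SuperMUC-NG node
#SBATCH --partition=micro           # placeholder partition
#SBATCH --account=<project>         # placeholder project ID
#SBATCH --time=00:30:00
#SBATCH --ear=off                   # EAR disabled, as recommended above
module switch intel-mpi openmpi/4.0.4-intel19
srun ./myprog.exe                   # placeholder executable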
...
Please consult the Example parallel job scripts on the Linux-Cluster page; starting out from a job script for Intel MPI, it should be sufficient to make the following changes:
- Add the line
module switch intel-mpi openmpi/4.0.4-intel19
immediately after the "source /etc/profile.d/modules.sh" line.
- Start the program with the srun or mpiexec command, analogously to the command line specified for SuperMUC-NG above; a sketch is given below.
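A corresponding sketch for the Linux Cluster; the cluster and partition names, node and task counts and the executable ./myprog.exe are placeholders and should be taken from the Example parallel job scripts page:

#!/bin/bash
#SBATCH -J openmpi_job              # illustrative job name
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
#SBATCH --clusters=cm2              # placeholder cluster
#SBATCH --partition=cm2_std         # placeholder partition
#SBATCH --nodes=2                   # placeholder node count
#SBATCH --ntasks-per-node=28        # placeholder tasks per node
#SBATCH --time=00:30:00
source /etc/profile.d/modules.sh
module switch intel-mpi openmpi/4.0.4-intel19
mpiexec -n $SLURM_NTASKS ./myprog.exe   # or: srun ./myprog.exe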
...