OpenMPI

The Open MPI Project is an open source MPI-2 implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers.

OpenMPI installations on LRZ systems

On LRZ HPC systems, OpenMPI is provided for research and experimental purposes. This means that, while LRZ support staff will do their best to help you with problems, there is no commercial-level support, and hence no functional or reliability guarantees can be given for this software. Also, programs built with OpenMPI will often not scale as well as those built with the proprietary MPI implementations on the high-end systems.

The following table gives an overview of the OpenMPI installations available on the LRZ HPC systems:

Platform                         | Environment module     | Supported compiler module
SuperMUC-NG, CooLMUC2, CooLMUC3  | openmpi/2.1.6-intel19  | intel/19.0
                                 | openmpi/3.1.6-intel19  | intel/19.0
                                 | openmpi/4.0.4-intel19  | intel/19.0
SuperMUC-NG, CooLMUC2, CooLMUC3  | openmpi/2.1.6-gcc8     | gcc/8
                                 | openmpi/3.1.6-gcc8     | gcc/8
                                 | openmpi/4.0.4-gcc8     | gcc/8

To access one of the OpenMPI installations, a suitable compiler module must be loaded, the default MPI environment must be unloaded, and then the appropriate OpenMPI environment module must be loaded. For example, the command sequence

module unload intel-mpi

module load openmpi/4.0.4-intel19

or

module switch intel-mpi openmpi/4.0.4-intel19

can be used to select one of the OpenMPI installations from the table above. To compile and link programs, the mpicc / mpiCC / mpifort compiler wrappers are available.
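
For illustration, a minimal compile-and-link sequence could look as follows; the source file myprog.c and the executable name myprog.exe are placeholders (use mpiCC for C++ and mpifort for Fortran sources):

module switch intel-mpi openmpi/4.0.4-intel19   # select an OpenMPI build matching the loaded compiler
mpicc -O2 -o myprog.exe myprog.c                # the wrapper adds the MPI include and library paths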

With respect to compiler wrappers and startup commands, usage of OpenMPI follows the general conventions of the LRZ MPI installations; special usage variants are described below.

The OpenMPI builds are all done with MPI_THREAD_MULTIPLE enabled.
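
If your application depends on this, the thread support level of the currently loaded installation can be checked, for example, with the ompi_info command (the exact output format may differ between OpenMPI versions):

module switch intel-mpi openmpi/4.0.4-intel19   # any of the OpenMPI modules from the table above
ompi_info | grep -i thread                      # should report MPI_THREAD_MULTIPLE: yes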

Specific OpenMPI usage scenarios

Running OpenMPI programs on SuperMUC-NG

Please consult the Job Processing with SLURM on SuperMUC-NG page; starting out from a job script for Intel MPI, it should be sufficient to make the following changes (a sketch of a complete job script is given after the list):

  1. Switch modules

    module switch intel-mpi openmpi/4.0.4-intel19


  2. Start the program with srun. Alternatively, mpiexec can be used if you also deactivate EAR with the following sbatch option:

    #SBATCH --ear=off
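
A minimal sketch of such a job script is shown below. The job name, node and task counts, wall time, account, partition and the executable name ./myprog.exe are placeholders that must be adapted to your project; the slurm_setup module line reflects common SuperMUC-NG practice, so please consult the SLURM page for the authoritative template:

    #!/bin/bash
    #SBATCH -J openmpi_test                  # placeholder job name
    #SBATCH -o ./%x.%j.out                   # stdout/stderr file
    #SBATCH -D ./                            # working directory
    #SBATCH --nodes=2                        # placeholder node count
    #SBATCH --ntasks-per-node=48             # one task per core on a SuperMUC-NG node
    #SBATCH --time=00:30:00                  # placeholder wall time
    #SBATCH --account=<project_id>           # placeholder project account
    #SBATCH --partition=micro                # placeholder partition
    #SBATCH --ear=off                        # only required when starting with mpiexec instead of srun
    module load slurm_setup                  # SuperMUC-NG specific SLURM setup
    module switch intel-mpi openmpi/4.0.4-intel19
    srun ./myprog.exe                        # alternatively: mpiexec -n $SLURM_NTASKS ./myprog.exe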

Running OpenMPI programs on the Linux Cluster

Please consult the Example parallel job scripts on the Linux-Cluster page; starting out from a job script for Intel MPI, it should be sufficient to make the following changes (a sketch of a complete job script is given after the list):

  1. Add the line

    module switch intel-mpi openmpi/4.0.4-intel19

    immediately after the "source /etc/profile.d/modules.sh" line

  2. Start the program with the srun or mpiexec command, analogous to the command lines given for SuperMUC-NG above.
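
As a sketch, a corresponding CooLMUC-2 job script could look as follows; the cluster and partition selection, node and task counts, wall time and the executable name ./myprog.exe are placeholders that need to be adapted to the cluster segment you use:

    #!/bin/bash
    #SBATCH -J openmpi_test                  # placeholder job name
    #SBATCH -o ./%x.%j.out                   # stdout/stderr file
    #SBATCH -D ./                            # working directory
    #SBATCH --clusters=cm2                   # placeholder: CooLMUC-2; adapt for other segments
    #SBATCH --partition=cm2_std              # placeholder partition
    #SBATCH --nodes=2                        # placeholder node count
    #SBATCH --ntasks-per-node=28             # one task per core on a CooLMUC-2 node
    #SBATCH --time=00:30:00                  # placeholder wall time
    source /etc/profile.d/modules.sh         # initialize the module system
    module switch intel-mpi openmpi/4.0.4-intel19
    srun ./myprog.exe                        # alternatively: mpiexec -n $SLURM_NTASKS ./myprog.exe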

Documentation

Documentation and Frequently Asked Questions pages are available on the OpenMPI website.

Also, man pages are available for the various OpenMPI commands and the MPI API, in particular mpi(3), as well as for the SLURM startup command srun(1).