GROMACS

Description of the LRZ specific usage of GROMACS on the Linux Cluster and SuperMUC-NG HPC Systems.

Introductory Remarks

What is GROMACS?

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.

GROMACS is free software, licensed and redistributed under the GNU Lesser General Public License (LGPL).

Please consult the GROMACS web site for further information on this package.

The xdrfile library for I/O to xtc, edr and trr files is also available.

Authors

GROMACS was first developed in Herman Berendsen's group, Department of Biophysical Chemistry of Groningen University. It is a team effort, with contributions from several current and former developers all over the world.

Available Versions at LRZ

Use module avail gromacs to find the GROMACS versions installed at LRZ, including the default version.

Please consult the example batch scripts below for how to use the MPI-parallel versions. The single-precision builds typically show larger numerical instabilities than the double-precision builds. Furthermore, the GROMACS executables always have the same name.

Please note:

Starting with version 5.0, all GROMACS executables are collected in the 'gmx' utility (see http://manual.gromacs.org/programs/byname.html).
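
For example, the former stand-alone tools are now invoked as subcommands of gmx (a brief illustration only; the file names are placeholders):

    gmx grompp -f full.mdp -c after_pr.gro -p speptide.top -o full.tpr   # formerly: grompp
    gmx mdrun -deffnm full                                               # formerly: mdrun
    gmx trjconv -f full.xtc -o full.pdb                                  # formerly: trjconv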

Usage

(This documentation applies to the Spack-provided software stacks spack/release/19.2 and later; it is not applicable to spack/release/19.1 and earlier.)

Access to the binaries, libraries, and data files is provided through the gromacs module. This module sets environment variables which point to these locations and updates the required paths.

  • The simplest start is to

    > module load gromacs

    which will give you the default version. On the login nodes, it points to a serial version for running the utilities grompp, trjconv etc.; on the compute nodes, it provides the MPI-parallel version. Note that the parallel version will not work on the login nodes.
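
    To check which build the module provides (and at which precision), you can query the executable directly. This is only a small sketch; depending on the build, the executable may be called gmx, gmx_mpi or gmx_mpi_d (see below), and the exact output format varies between versions:

    > module load gromacs
    > which gmx
    > gmx --version        # reports the version, precision and MPI support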

GROMACS versions available on SuperMUC-NG Phase 1

  • The latest versions available on SuperMUC-NG Phase 1 are provided in the spack/23.1.0 stack:

    module sw spack/23.1.0

  • You can list the available GROMACS modules on SuperMUC-NG Phase 1 with the command

    module av -t gromacs
    gromacs/2022.5
    gromacs/2022.5-intel
    gromacs/2022.5-intel-r64
    gromacs/2022.5-plumed
    gromacs/2022.5-r64
    gromacs/2023.1
    gromacs/2023.1-gcc
    gromacs/2023.1-gcc-r64
    gromacs/2023.1-intel
    gromacs/2023.1-intel-r64
    gromacs/2023.1-r64
    
    
  • This list contains modules with full version information. The suffixes indicate
    • the compiler (-intel)
    • plumed support (-plumed)
    • double (-r64) precision

Thus, gromacs/2023.1-intel-r64 denotes GROMACS version 2023.1, compiled with the Intel compiler and Intel MPI at double precision. The default version is double precision.

  • These alias names are resolved differently on the login nodes, where they point to the serial version, and on the compute nodes, where the MPI-parallel variant is used.
  • Load your desired version, e.g. 'module load gromacs/2023.1-intel-r64'.
  • Note that the GROMACS installation provides automatic shell completion files (check $GMXBIN/gmx-completion*) which add completion for all GROMACS commands and file extensions if you source them into your shell. A convenient way to load them is to run 'GMXRC' (see the example after the version list below).

  • For compatibility, older versions of GROMACS are available in the default stack spack/22.2.1, following the same naming convention:
    gromacs/2020.4-plumed
    gromacs/2020.6-plumed
    gromacs/2021.4-plumed
    gromacs/2021.5
    gromacs/2021.5-gcc
    gromacs/2021.5-r64
    gromacs/2021.6
    gromacs/2021.6-gcc
    gromacs/2021.6-r64
    gromacs/2022.3
    gromacs/2022.3-gcc
    gromacs/2022.3-r64
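
A minimal sketch of enabling the shell completions in an interactive bash session (assuming the gromacs module is already loaded so that $GMXBIN is set, as in the standard GROMACS installation layout):

    source $GMXBIN/GMXRC      # sets up the GROMACS environment, including shell completions
    # afterwards, e.g. 'gmx mdr<TAB>' completes to 'gmx mdrun'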

GROMACS versions available on Linux-Cluster


  • On CoolMUC-4, GROMACS is available via Spack 24.1.1. To make the modules visible, first add the corresponding module path:
    module use /lrz/sys/spack/release/24.1.1/modules/sapphirerapids
  • The available versions, visible via module av -t gromacs, are:
    gromacs/2024.3-intel-impi-openmp-r32-parallel
    gromacs/2024.3-intel-impi-openmp-r64-parallel
    gromacs/2024.3-intel-impi-r32-parallel
    gromacs/2024.3-intel-impi-r64-parallel
  • All of them are built with Intel compilers and Intel MPI. Versions with and without OpenMP, in single (r32) or double (r64) precision, are available. The gmx utility is called gmx_mpi in the single-precision MPI builds and gmx_mpi_d in the double-precision builds. The default version that loads on CoolMUC-4 is the double-precision build without OpenMP support (OpenMP is not necessary for runs without GPU acceleration); see the short example below.
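
A minimal sketch of loading the default GROMACS on CoolMUC-4 and checking the executable (the module path is the one given above; the default module selection may change over time):

    module use /lrz/sys/spack/release/24.1.1/modules/sapphirerapids
    module load gromacs          # default: double precision, no OpenMP
    which gmx_mpi_d              # double-precision MPI executable
    gmx_mpi_d --version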

Setting up batch jobs

For long production runs, a SLURM batch job should be used to run the program. The example batch scripts provided in this section require the input files speptide.top, after_pr.gro and full.mdp, all contained in the example archive, to be placed in ~/mydir before the run.

Further notes:

  • To run in batch mode, submit the script using the sbatch command (see the submission example after this list). To run small test cases interactively, first log in to the appropriate SLURM cluster segment and reserve the needed resources.

  • For batch jobs, the nice switch is set to 0 for mdrun. Please omit this switch when running interactively; otherwise your job will be forcibly removed from the system after some time.

  • Please do not forget to replace the dummy e-mail address and the input directory 'mydir' in the example scripts with your own.
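
A brief sketch of submitting and monitoring one of the job scripts below (the script file name job.sh is a placeholder; replace the cluster name with the system you are actually using):

    sbatch job.sh                        # submit the batch script
    squeue --clusters=cm4 -u $USER       # check the job status on CoolMUC-4
    scancel --clusters=cm4 <jobid>       # cancel a job if necessary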


Linux-Cluster (CoolMUC-4) with SLURM: gromacs/2024.3

#!/bin/bash
#SBATCH -o /home/cluster/<group>/<user>/mydir/gromacs.%j.out
#SBATCH -D /home/cluster/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --get-user-env
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_std
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=112
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#SBATCH --qos=cm4_std
#SBATCH --time=24:00:00
module use /dss/lrzsys/sys/stack/release/24.4.0/modules/MPI
module load intel-mpi/2021.12.0
module use /lrz/sys/spack/release/24.1.1/modules/sapphirerapids
module load slurm_setup

# load the gromacs version you would like to use

module load gromacs/2024.3-intel-impi-openmp-r32-parallel

module list
#generate .tpr file
gmx_mpi grompp -v -f full -o full -c after_pr -p speptide
# start mdrun
mpiexec gmx_mpi mdrun -s full -e full -o full -c after_full -g flog
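
Since this script loads an OpenMP-enabled build, mdrun can also be run in hybrid MPI/OpenMP mode. This is only a sketch: the rank/thread split (28 ranks per node times 4 threads = 112 cores per node) is an assumption to be tuned for your input, and the #SBATCH settings must be adjusted accordingly (--ntasks-per-node=28, --cpus-per-task=4):

    export OMP_NUM_THREADS=4
    mpiexec -n 56 gmx_mpi mdrun -ntomp 4 -s full -e full -o full -c after_full -g flog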

SuperMUC-NG with SLURM: gromacs/2023.1

#!/bin/bash
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#SBATCH -D ./
#SBATCH --mail-type=END
#SBATCH --time=00:15:00
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --account=<project id>
#SBATCH -J <job name>

module load slurm_setup
module sw spack/23.1.0 # for Gromacs 2023
module load gromacs
module list
mpiexec gmx mdrun -v -deffnm <input filenames>
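
The -deffnm option sets a common base name for all of mdrun's input and output files. For example (a sketch; 'full' is simply the base name of the example input above):

    # reads full.tpr and writes full.log, full.edr, full.gro and the trajectory files
    mpiexec gmx mdrun -v -deffnm full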

Scaling on LRZ Systems

SuperMUC-NG

Documentation

After loading the environment module, the $GROMACS_DOC variable points to a directory containing documentation and tutorials.

For further information (including the man pages for all GROMACS subcommands), please refer to the GROMACS web site.
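
A few ways to access the documentation from the command line (a sketch; the availability of man pages depends on the module setup):

    ls $GROMACS_DOC          # browse the installed documentation and tutorials
    gmx help commands        # list all gmx subcommands
    gmx mdrun -h             # show the options of a specific subcommand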