GROMACS

Description of the LRZ-specific usage of GROMACS on the Linux Cluster and SuperMUC-NG HPC systems.

Introductory Remarks

What is GROMACS?

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

It is primarily designed for biochemical molecules like proteins and lipids that have many complicated bonded interactions. However, since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups also use it for research on non-biological systems, e.g. polymers.

GROMACS is free software, distributed under the GNU Lesser General Public License (LGPL).

Please consult the GROMACS web site for further information on this package.

The xdrfile library facility for I/O to xtc, edr and trr files is also available.

Authors

GROMACS was first developed in Herman Berendsen's group at the Department of Biophysical Chemistry of the University of Groningen. It is a team effort, with contributions from several current and former developers all over the world.

Available Versions at LRZ

Use module avail gromacs to find the available versions of GROMACS installations at LRZ including the default versions.

Please consult the example batch scripts below for how to use the MPI-parallel versions. The single-precision builds typically show larger numerical instabilities than the double-precision builds. Note that the GROMACS executables have the same name regardless of the build variant.

Please note:

Starting with version 5.0, all GROMACS tools are collected in the 'gmx' utility (see http://manual.gromacs.org/programs/byname.html).
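In practice, this means that tools which used to be separate executables are now invoked as subcommands of gmx. A minimal sketch (the file names are hypothetical placeholders):

    gmx grompp -f run.mdp -c conf.gro -p topol.top -o run.tpr    # formerly: grompp
    gmx mdrun -deffnm run                                        # formerly: mdrun
    gmx trjconv -f run.trr -o run.xtc                            # formerly: trjconv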

Usage

(This documentation applies to the Spack-provided software stacks spack/release/19.2 and later; it is not applicable to spack/release/19.1 and earlier.)

Access to the binaries, libraries, and data files is provided through the gromacs module. This module sets environment variables that point to these locations and updates the required search paths.
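To see exactly which environment variables and paths a given module sets, you can inspect it, e.g. with

    > module show gromacs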

  • The simplest start is to

    > module load gromacs

    which will give you the default version, which is always single precision. On the login nodes, this points to a serial version for running the utilities grompp, trjconv, etc.; on the compute nodes, it provides the MPI-parallel version. Note that the parallel version will not work on the login nodes.

  • The list of available gromacs modules can be obtained, e.g., with the command

    > module avail -t gromacs
    gromacs/2018(default:@)
    gromacs/2018-plumed(@)
    gromacs/2018-plumed-r64(@)
    gromacs/2018-r64(@)
    gromacs/2018.6-intel-impi-plumed-r32
    gromacs/2018.6-intel-impi-plumed-r64
    gromacs/2018.6-intel-impi-r32
    gromacs/2018.6-intel-impi-r64
    gromacs/2018.6-intel-plumed-seq-r32
    gromacs/2018.6-intel-plumed-seq-r64
    gromacs/2018.6-intel-seq-r32
    gromacs/2018.6-intel-seq-r64
    gromacs/2019(@)
    gromacs/2019-r64(@)
    gromacs/2019.3-intel-impi-r32
    gromacs/2019.3-intel-impi-r64
    gromacs/2019.3-intel-seq-r32
    gromacs/2019.3-intel-seq-r64
  • This list contains modules with full version information. The suffixes indicate
    • the compiler (-intel)
    • plumed support (-plumed)
    • parallel (-impi) or serial (-seq) version
    • single (-r32) or double (-r64) precision

Thus, gromacs/2018.6-intel-plumed-seq-r64 denotes GROMACS version 2018.6, compiled with the Intel compiler as a serial (non-MPI) build with PLUMED support in double precision.

  • Preferably, use the convenient alias names indicated by the '(@)', which point to one of these fully versioned modules. If unclear, you can query the version with e.g. 'module info gromacs/2018-r64'. These alias names are resolved differently on the login nodes, where they point to the serial version, and on the compute nodes, where the MPI-parallel variant is used.
  • Load your desired version, e.g. 'module load gromacs/2018-plumed'.
  • Note that the GROMACS installation provides shell completion files (check $GMXBIN/gmx-completion*) which add completion for all GROMACS commands and file extensions if you source them into your shell. A convenient way to load them is to source the GMXRC script (e.g. 'source $GMXBIN/GMXRC'); see the sketch below.
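Putting these steps together, a typical interactive setup might look as follows (a sketch; the exact module name depends on the versions installed on the system):

    > module load gromacs/2018-plumed
    > source $GMXBIN/GMXRC
    > gmx --version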

Setting up batch jobs

For long production runs, a SLURM batch job should be used to run the program. The example batch scripts provided in this section require the input files speptide.top, after_pr.gro and full.mdp, all contained in the example archive, to be placed in ~/mydir before the run.

Further notes:

  • to run in batch mode, submit the script using the sbatch command (see the example after these notes). To run small test cases interactively, first log in to the appropriate SLURM cluster and reserve the needed resources.

  • for batch jobs, the nice switch is set to 0 for mdrun. Please omit this switch when running interactively, otherwise your job will be forcibly removed from the system after some time.

  • please do not forget to replace the dummy e-mail address and the input directory 'mydir' in the example scripts with your own settings.
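For example, a job could be submitted and monitored as follows (the script name is a hypothetical placeholder; on the Linux Cluster you may need to specify the cluster explicitly, e.g. 'squeue -M cm2_tiny'):

    > sbatch my_gromacs_job.slurm
    > squeue -u $USER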


Linux Cluster with SLURM (gromacs/2018)

#!/bin/bash
#SBATCH -o /home/cluster/<group>/<user>/mydir/gromacs.%j.out
#SBATCH -D /home/cluster/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --get-user-env
#SBATCH --clusters=cm2_tiny
#SBATCH --ntasks=28
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#SBATCH --time=24:00:00

module load slurm_setup
module load gromacs
module list
#generate .tpr file
gmx grompp -v -f full -o full -c after_pr -p speptide
# start mdrun
mpiexec gmx mdrun -s full -e full -o full -c after_full -g flog

SuperMUC-NG with SLURM (gromacs/2018)

#!/bin/bash
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#SBATCH -D ./
#SBATCH --mail-type=END
#SBATCH --time=00:15:00
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --account=<project id>
#SBATCH -J <job name>

module load slurm_setup
module load gromacs
module list
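# start the MPI-parallel run; -deffnm sets the default base name for all input and output files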
mpiexec gmx mdrun -v -deffnm <input filenames>

Scaling on LRZ Systems

[Scaling plots: SuperMUC-NG (SNG), CoolMUC-2 (cm2), CoolMUC-3 (cm3)]


Documentation

After loading the environment module, the $GROMACS_DOC variable points to a directory containing documentation and tutorials.

For further information (including the man pages for all GROMACS subcommands), please refer to the GROMACS web site.
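For a quick look from the command line, you can, for instance, list the documentation directory and use the built-in help of the gmx wrapper:

    > module load gromacs
    > ls $GROMACS_DOC
    > gmx help mdrun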