What is PLUMED?

PLUMED (PLUgin for MolEcular Dynamics) is an open-source, community-developed library that provides a wide range of methods, including:

  • enhanced-sampling algorithms,
  • free-energy methods, and
  • tools to analyze the vast amounts of data produced by molecular dynamics (MD) simulations.

These techniques can be used in combination with a large toolbox of collective variables that describe complex processes in physics, chemistry, material science, and biology.

PLUMED works together with some of the most popular MD engines, such as ACEMD, Amber, DL_POLY, GROMACS, LAMMPS, NAMD, OpenMM, ABIN, CP2K, i-PI, PINY-MD, and Quantum Espresso. In addition, PLUMED can be used to augment the capabilities of analysis tools such as VMD, HTMD, OpenPathSampling, and as a standalone utility to analyze pre-calculated MD trajectories.

PLUMED can be interfaced with a host code through a single, well-documented API that exposes the PLUMED functionality. The API is accessible from multiple languages (C, C++, Fortran, and Python) and is thus compatible with the majority of the codes used in the community. The PLUMED license (LGPL) also allows it to be interfaced with proprietary software.
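For illustration, the sequence of calls a host code makes through the PLUMED C API looks roughly as follows. This is a schematic sketch, not a drop-in example: the engine name, atom count, and time step are placeholders, and the code must be compiled and linked against an actual PLUMED installation.

#include "Plumed.h"                              /* header shipped with PLUMED */

void setup_plumed(int natoms, double timestep) {
    plumed p = plumed_create();                  /* create a PLUMED instance */
    plumed_cmd(p, "setNatoms", &natoms);         /* number of atoms in the system */
    plumed_cmd(p, "setMDEngine", "my-engine");   /* placeholder engine name */
    plumed_cmd(p, "setTimestep", &timestep);
    plumed_cmd(p, "setPlumedDat", "plumed.dat"); /* the PLUMED input file */
    plumed_cmd(p, "init", NULL);
    /* during the MD loop: pass step number, positions, masses, forces, and
       box via further plumed_cmd() calls, then trigger "calc" */
    plumed_finalize(p);                          /* release the instance */
}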

For more details, please consult the PLUMED Home page.

Usage of PLUMED at LRZ

As a Standalone Utility

PLUMED can be used as a standalone tool, e.g. to post-process pre-calculated MD trajectories ("plumed driver") or to compute free energies from "HILLS" files ("plumed sum_hills").
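For example, assuming a trajectory file "traj.xtc" and a "HILLS" file from a previous metadynamics run (both file names are placeholders), typical invocations look like this:

# re-analyze an existing trajectory with the actions defined in plumed.dat
plumed driver --plumed plumed.dat --mf_xtc traj.xtc

# sum the Gaussians stored in the HILLS file to obtain a free-energy estimate
plumed sum_hills --hills HILLS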

The environment modules package controls access to the software. Use "module avail plumed" to find all available versions of PLUMED installed at LRZ. Please note that both serial and MPI parallel versions of PLUMED are installed on LRZ systems.

To use the default version of PLUMED, please type:

> module load plumed

This makes the "plumed" executable of the loaded PLUMED version available in your shell.

Together With an MD Engine

PLUMED is designed to work together with most MD engines, e.g. to run metadynamics simulations. On LRZ systems, PLUMED is available in conjunction with the following codes:

  • GROMACS (module with PLUMED support)
  • AMBER (sander module)

Setting up Batch Jobs

For production runs using, e.g., GROMACS patched with PLUMED support, please create a SLURM batch script and submit it via "sbatch". In addition to the input files needed for GROMACS alone, you need a file "plumed.dat" containing the settings for the bias.
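As an illustration, a minimal "plumed.dat" for a well-tempered metadynamics run could look as follows; the atom indices, temperature, and bias parameters are placeholders that must be adapted to your system:

# distance between atoms 1 and 10 serves as the collective variable
d: DISTANCE ATOMS=1,10
# well-tempered metadynamics bias on that distance (placeholder parameters)
METAD ARG=d SIGMA=0.05 HEIGHT=1.2 PACE=500 BIASFACTOR=10 TEMP=300 FILE=HILLS
# monitor the collective variable
PRINT ARG=d STRIDE=100 FILE=COLVAR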

Linux Cluster


#!/bin/bash
#SBATCH -o /dss/dsshome1/<group>/<user>/mydir/%x.%j.out
#SBATCH -e /dss/dsshome1/<group>/<user>/mydir/%x.%j.err
#SBATCH -D /dss/dsshome1/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --time=24:00:00
#SBATCH --clusters=cm2_tiny
#SBATCH --ntasks=28
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>

module load slurm_setup
module load gromacs/2019.4-intel19-impi-plumed-r32

mpiexec gmx mdrun -v -deffnm <input filenames> \
    -plumed plumed.dat
SuperMUC-NG

#!/bin/bash
#SBATCH -o /dss/dsshome1/<group>/<user>/mydir/%x.%j.out
#SBATCH -e /dss/dsshome1/<group>/<user>/mydir/%x.%j.err
#SBATCH -D /dss/dsshome1/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --time=24:00:00
#SBATCH --partition=micro
#SBATCH --ntasks=48
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-type=END
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --account=<project id>

module load slurm_setup
module load gromacs/2019.4-intel19-impi-plumed-r32

mpiexec gmx mdrun -v -deffnm <input filenames> \
-plumed plumed.dat


For documentation, please consult the PLUMED Home page, where the PLUMED manuals are also available.


If you have any questions or problems with PLUMED installed on different LRZ platforms, please don't hesitate to contact LRZ HPC support staff.