OpenMM

What is OpenMM?

OpenMM is a high-performance toolkit for molecular simulation. It can be used either as a stand-alone application for running simulations, or as a library called from your own code. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that makes it truly unique among simulation codes. In addition, extensive language bindings for Python, C, C++, and even Fortran are included. The code is open source and actively maintained on GitHub, licensed under MIT and LGPL. Moreover, it is part of the Omnia suite of tools for predictive biomolecular simulation. For more details, please consult the OpenMM home page or the home page of the open source project on GitHub.
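
As an illustration of this flexibility, the CustomIntegrator class lets you define an integration algorithm from a few algebraic expressions. The following minimal sketch reproduces the velocity Verlet example from the OpenMM user guide (on older OpenMM versions, import simtk.openmm instead of openmm):

import openmm as mm

# Velocity Verlet written as a CustomIntegrator with a 2 fs time step
integrator = mm.CustomIntegrator(0.002)
integrator.addUpdateContextState()
integrator.addComputePerDof('v', 'v + 0.5*dt*f/m')
integrator.addComputePerDof('x', 'x + dt*v')
integrator.addComputePerDof('v', 'v + 0.5*dt*f/m')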

Usage of OpenMM at LRZ

The environment modules package controls access to the software. Use "module avail openmm" to find all available versions of OpenMM installed at LRZ.

To use the default version of OpenMM, please type:

> module load openmm

To run simulations with OpenMM, you can create a Python script using the OpenMM Script Builder. You can select the force field, the platform (Reference, CPU, CUDA, OpenCL), the precision (single/mixed/double for CUDA and OpenCL), as well as various simulation parameters (e.g. temperature, pressure, integrator, number of simulation steps, data output). You can then download the generated script, named e.g. simulatePdb.py. Please note that OpenMM offers many more options than are exposed in the Script Builder. For example, you can also use ParmEd within your script to load topologies and coordinates from other molecular dynamics codes (such as AMBER and GROMACS); see the sketches below.
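
For orientation, a script generated this way typically has the following shape. This is a minimal sketch only: the input file, the force field choice, and all parameter values are placeholder assumptions, and on older OpenMM versions the modules are named simtk.openmm, simtk.openmm.app and simtk.unit instead of openmm, openmm.app and openmm.unit.

from openmm.app import (PDBFile, ForceField, Simulation, PME, HBonds,
                        StateDataReporter, DCDReporter)
from openmm import LangevinIntegrator, Platform
from openmm.unit import kelvin, picosecond, picoseconds, nanometer

# Load the input structure (placeholder file name) and build the system
pdb = PDBFile('input.pdb')
forcefield = ForceField('amber14-all.xml', 'amber14/tip3p.xml')
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1.0*nanometer, constraints=HBonds)

# Langevin dynamics at 300 K with a 2 fs time step; choose the platform
# ('CPU' here; 'CUDA' or 'OpenCL' on GPU nodes)
integrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)
platform = Platform.getPlatformByName('CPU')

simulation = Simulation(pdb.topology, system, integrator, platform)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()

# Write a trajectory and basic state data every 1000 steps, then run 10000 steps
simulation.reporters.append(DCDReporter('output.dcd', 1000))
simulation.reporters.append(StateDataReporter('log.txt', 1000, step=True,
                                              potentialEnergy=True, temperature=True))
simulation.step(10000)

The ParmEd route looks similar; a sketch for AMBER input files (file names again placeholders) is:

import parmed as pmd
from openmm import app
from openmm.unit import nanometer

# Load AMBER topology and coordinates (placeholder file names)
amber = pmd.load_file('system.prmtop', 'system.inpcrd')
system = amber.createSystem(nonbondedMethod=app.PME,
                            nonbondedCutoff=1.0*nanometer, constraints=app.HBonds)

The resulting system, together with amber.topology and amber.positions, can then be used with Simulation exactly as above.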

Having finalized the script, please create a SLURM batch script and submit it via "sbatch" (a submission example follows the batch scripts below). Inside the batch script, the file simulatePdb.py is called.

Linux Cluster

#!/bin/bash
#SBATCH -o /dss/dsshome1/<group>/<user>/mydir/%x.%j.out
#SBATCH -e /dss/dsshome1/<group>/<user>/mydir/%x.%j.err
#SBATCH -D /dss/dsshome1/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --time=24:00:00
#SBATCH --clusters=cm2_tiny
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>


module load slurm_setup
module load openmm

python simulatePdb.py

SuperMUC-NG

#!/bin/bash
#SBATCH -o /dss/dsshome1/<group>/<user>/mydir/%x.%j.out
#SBATCH -e /dss/dsshome1/<group>/<user>/mydir/%x.%j.err
#SBATCH -D /dss/dsshome1/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --time=24:00:00
#SBATCH --partition=micro
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-type=END
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --account=<project_id>

module load slurm_setup
module load openmm

python simulatePdb.py
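
Assuming the batch script has been saved as openmm_job.slurm (the file name is arbitrary and used here only for illustration), submit it with:

> sbatch openmm_job.slurm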

Documentation

For documentation, please consult the OpenMM Home page.

Support

If you have any questions or problems with OpenMM on the LRZ platforms, please don't hesitate to contact the LRZ HPC support staff.