AMBER

What is AMBER?

Assisted Model Building with Energy Refinement (AMBER) is a suite of biomolecular simulation software comprising numerous programs to set up, perform and analyze molecular dynamics simulations. The name AMBER also refers to a family of classical molecular mechanics force fields, primarily designed for the simulation of biomolecules. For more details, please consult the AMBER Home page.

AMBER Applications

AMBER is distributed by UCSF in two parts: AmberTools (which is free of charge and released under the GPL license) and Amber (which builds on AmberTools by adding pmemd and is distributed under a separate license and fee structure). Essentially, this means that you may use the software only for research and teaching purposes.

AmberTools

Application: Purpose

NAB/sff: build molecules, run MD or apply distance geometry restraints using generalized Born, Poisson-Boltzmann or 3D-RISM implicit solvent models
antechamber, MCPB: create force fields for general organic molecules and metal centers
tleap, parmed: preparatory tools for Amber simulations
sqm: semiempirical and DFTB quantum chemistry program
pbsa: numerical solutions to Poisson-Boltzmann models
3D-RISM: integral equation models for solvation
sander: molecular dynamics simulations
mdgx: pushing the boundaries of Amber MD, primarily through parameter fitting
cpptraj, pytraj: analyzing structure and dynamics in trajectories
MMPBSA.py, amberlite: energy-based analyses of MD trajectories

Note: MPI-parallel executables carry the suffix .MPI (e.g., cpptraj.MPI, mdgx.MPI, MMPBSA.py.MPI and sander.MPI).

Amber

Compared to sander in AmberTools, pmemd in Amber runs much faster molecular dynamics simulations on parallel CPU or GPU hardware. On LRZ systems, both serial (pmemd) and MPI-parallel (pmemd.MPI) versions of pmemd are available. In addition, pmemd is provided as binaries with CUDA support (pmemd.cuda) and with combined CUDA and MPI support (pmemd.cuda.MPI) for parallel GPU hardware such as the DGX-1.
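The serial and GPU binaries accept the same command-line flags as pmemd.MPI in the batch examples below. A minimal sketch, assuming the amber module has been loaded (see the next section) and, for the CUDA binary, an allocation on a GPU node; the file names are the same placeholders used in the batch scripts:

pmemd      -O -i mdin.in -o mdout.out -inf mdinfo.out \
           -p topology.prmtop -c coordinates.inpcrd \
           -r coordinates_restrt.rst -x trajectory.nc   # serial CPU run
pmemd.cuda -O -i mdin.in -o mdout.out -inf mdinfo.out \
           -p topology.prmtop -c coordinates.inpcrd \
           -r coordinates_restrt.rst -x trajectory.nc   # single-GPU run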

Usage of AMBER at LRZ

The environment modules package controls access to the software. Use "module avail amber" to find all available versions of AMBER installed at LRZ.

To use the default version of AMBER, please type:

> module load amber

This will enable you to run all available binaries of the loaded AMBER version; for example, you can then call tleap, sander or pmemd.

Note: leaprc files may have to be copied from $AMBERHOME/dat/leap/cmd to the current working directory.
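As an illustration, a short interactive session to check the environment and prepare input with tleap might look like this (a sketch; leaprc.protein.ff14SB is only one example of the leaprc files shipped with AmberTools, so pick the one matching your force field):

> module load amber
> echo $AMBERHOME                                      # installation directory of the loaded version
> cp $AMBERHOME/dat/leap/cmd/leaprc.protein.ff14SB .   # copy the leaprc you need into the working directory
> tleap -f leaprc.protein.ff14SB                       # start tleap with that force field loaded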

Setting Up Batch Jobs

For production-level molecular dynamics simulations using pmemd.MPI, a SLURM batch job should be submitted via "sbatch". The example batch scripts provided in this section require the input files mdin.in, topology.prmtop and coordinates.inpcrd, all contained in the example archive, to be placed in ~/mydir before the run.

Linux Cluster:

#!/bin/bash
#SBATCH -o /dss/dsshome1/<group>/<user>/mydir/%x.%j.out
#SBATCH -e /dss/dsshome1/<group>/<user>/mydir/%x.%j.err
#SBATCH -D /dss/dsshome1/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --time=24:00:00
#SBATCH --clusters=cm2_tiny
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-type=END
#SBATCH --mail-user=<email_address>@<domain>

module load slurm_setup
module load amber

mpiexec pmemd.MPI -O -i mdin.in -o mdout.out -inf mdinfo.out \
    -p topology.prmtop -c coordinates.inpcrd \
    -r coordinates_restrt.rst -x trajectory.nc

SuperMUC-NG:
#!/bin/bash
#SBATCH -o /dss/dsshome1/<group>/<user>/mydir/%x.%j.out
#SBATCH -e /dss/dsshome1/<group>/<user>/mydir/%x.%j.err
#SBATCH -D /dss/dsshome1/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --time=24:00:00
#SBATCH --partition=micro
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-type=END
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --account=<project_id>

module load slurm_setup
module load amber

mpiexec pmemd.MPI -O -i mdin.in -o mdout.out -inf mdinfo.out \
-p topology.prmtop -c coordinates.inpcrd \
-r coordinates_restrt.rst -x trajectory.nc

Note: Other AMBER binaries compiled with MPI support (e.g., cpptraj.MPI, mdgx.MPI, MMPBSA.py.MPI and sander.MPI) can be run by analogy.
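For instance, a parallel trajectory analysis could be launched from the same kind of batch script (a sketch; analysis.in stands for a hypothetical cpptraj input file listing the desired analysis commands):

mpiexec cpptraj.MPI -p topology.prmtop -y trajectory.nc -i analysis.in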

Using Amber with DFTB

It is possible to run QM/MM calculations with AMBER using the density-functional tight-binding (DFTB) method. To enable this, please proceed as follows.

First step (done in an interactive login shell):

> cd $HOME
> mkdir -p my_amber/dat/slko
> cd my_amber
> module load amber
> ln -s $AMBERHOME/exe           # link the central AMBER binaries into the private tree
> cp <your DFTB files> dat/slko  # place your DFTB (Slater-Koster) parameter files here

Second step (adjust your batch script):

module load amber
export AMBERHOME=$HOME/my_amber    # AMBER looks for DFTB parameters in $AMBERHOME/dat/slko
export PATH=$AMBERHOME/exe:$PATH   # put the linked binaries first in PATH

Note that various sets of DFTB files exist, and the file names partially overlap. You will need to set up multiple such installations if you want to use either different builds of Amber (e.g. cm2, SuperMUC-NG) or different DFTB file sets.
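The QM/MM input file then selects DFTB in the &qmmm namelist. A minimal sketch of such an mdin file, written here via a shell here-document (the QM mask, charge and MD settings are placeholders and must be adapted to your system):

cat > mdin.in << 'EOF'
Short QM/MM MD with DFTB (example settings only)
 &cntrl
   imin=0, nstlim=1000, dt=0.001,
   ntb=1, cut=8.0,
   ifqnt=1,
 /
 &qmmm
   qm_theory='DFTB', qmmask=':1', qmcharge=0,
 /
EOF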

Documentation

Please consult the AMBER Home page for documentation. The AMBER Reference Manuals are available either from the AMBER Home page or via the environment variable $AMBER_DOC, which points to a directory containing the PDF documentation.

Support

If you have any questions or problems with the AMBER installations on LRZ platforms, please don't hesitate to contact the LRZ HPC support staff.