AMBER
What is AMBER?
Assisted Model Building with Energy Refinement (AMBER) is a suite of biomolecular simulation programs for setting up, performing, and analyzing molecular dynamics simulations. The name AMBER also refers to a family of classical molecular mechanics force fields, primarily designed for the simulation of biomolecules. For more details, please consult the AMBER Home page.
AMBER Applications
AMBER is distributed by UCSF in two parts: AmberTools (which is free of charge and released under the GPL license) and Amber (which builds on AmberTools by adding pmemd and is distributed under a separate license and fee structure). Essentially, this means that you may use the software only for research and teaching purposes.
AmberTools
Application | Purpose |
---|---|
NAB/sff | build molecules, run MD or apply distance geometry restraints using generalized Born, Poisson-Boltzmann or 3D-RISM implicit solvent models |
antechamber, MCPB | create force fields for general organic molecules and metal centers |
tleap, parmed | preparatory tools for Amber simulations |
sqm | semiempirical and DFTB quantum chemistry program |
pbsa | numerical solutions to Poisson-Boltzmann models |
3D-RISM | integral equation models for solvation |
sander | molecular dynamics simulations |
mdgx | pushing the boundaries of Amber MD, primarily through parameter fitting |
cpptraj, pytraj | analyzing structure and dynamics in trajectories |
MMPBSA.py, amberlite | energy-based analyses of MD trajectories |
Note: MPI parallel executables have the suffix .MPI (e.g., cpptraj.MPI, mdgx.MPI, MMPBSA.py.MPI and sander.MPI).
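For example, a parallel sander run could be launched as follows (a minimal sketch; the rank count and the output file names are placeholders):

mpirun -np 4 sander.MPI -O -i mdin.in -p topology.prmtop -c coordinates.inpcrd -o md.out -r md.rst -x md.crd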
Amber
Compared to sander in AmberTools, pmemd in Amber enables much faster molecular dynamics simulations on parallel CPU or GPU hardware. On LRZ systems, both serial (pmemd) and MPI parallel (pmemd.MPI) versions of pmemd are available. In addition, pmemd can be compiled with CUDA (pmemd.cuda) and/or CUDA and MPI (pmemd.cuda.MPI) support for parallel GPU hardware such as the DGX-1.
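The CUDA build accepts the same command-line options as pmemd, so a single-GPU run could be started like this (a sketch with placeholder output file names):

pmemd.cuda -O -i mdin.in -p topology.prmtop -c coordinates.inpcrd -o md.out -r md.rst -x md.crd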
Usage of AMBER at LRZ
The environment modules package controls access to the software. Use "module avail amber" to find all available versions of AMBER installed at LRZ.
To use the default version of AMBER, please type:
> module load amber
This will enable you to run all available binaries for the loaded version of AMBER. E.g., you can then call tleap, sander or pmemd.
Note: leaprc files may have to be copied from $AMBERHOME/dat/leap/cmd to the current working directory.
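For example, to prepare a protein system, one of the leaprc files could be copied and sourced in tleap (a sketch; leaprc.protein.ff14SB is just one possible choice of force field file):

> cp $AMBERHOME/dat/leap/cmd/leaprc.protein.ff14SB .
> tleap -f leaprc.protein.ff14SB

Inside tleap, commands such as loadPdb and saveAmberParm can then be used to generate the topology (.prmtop) and coordinate (.inpcrd) files required for the simulations described below.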
Setting Up Batch Jobs
For production-level molecular dynamics simulations using pmemd.MPI, a SLURM batch job should be submitted via "sbatch". The example batch scripts provided in this section require the input files mdin.in, topology.prmtop and coordinates.inpcrd, all contained in the example archive, to be placed in ~/mydir before the run.
Linux Cluster | SuperMUC-NG |
---|---|
#!/bin/bash #SBATCH -o /dss/dsshome1/<group>/<user>/mydir/%x.%j.out | #!/bin/bash |
Note: Other AMBER binaries compiled with MPI support (e.g., cpptraj.MPI, mdgx.MPI, MMPBSA.py.MPI and sander.MPI) can be run by analogy.
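By way of illustration, a complete pmemd.MPI batch script for the Linux Cluster could look like the following sketch (cluster, partition, node count, wall time and output file names are placeholders and must be adapted to your project):

#!/bin/bash
#SBATCH -J amber_md
#SBATCH -D /dss/dsshome1/<group>/<user>/mydir
#SBATCH -o /dss/dsshome1/<group>/<user>/mydir/%x.%j.out
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --time=08:00:00

module load slurm_setup
module load amber

mpirun -n $SLURM_NTASKS pmemd.MPI -O -i mdin.in -p topology.prmtop -c coordinates.inpcrd -o md.out -r md.rst -x md.crd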
Amber with SYCL implementation of pmemd on SuperMUC-NG Phase 2
SuperMUC-NG Phase 2 is the latest, GPU-accelerated cluster at LRZ and is equipped with Intel PVC GPUs. To use Amber on these GPUs, follow the general steps below. Note that the installed Amber 20 release includes the SYCL implementation of pmemd, which enables simulations on Intel GPUs; however, this release does not include all features.
#!/bin/bash
#SBATCH -D ./
#SBATCH --mail-type=NONE
#SBATCH --time=02:00:00
#SBATCH --partition=general
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-user=..............
#SBATCH --account=............
#SBATCH -J 995334_0
#SBATCH --output=%x.%j.out
#SBATCH --error=%x.%j.err
#SBATCH --no-requeue

module load slurm_setup
module load intel-toolkit/2023.2.0
module rm intel-mkl
module add intel-mkl/2024.0.0
module load amber

mpirun -np 2 pmemd.sycl_SPFP.MPI -O -i relax.in -p 995334_atoms.parm7 -c 995334_atoms_eq_0.rst -r 995334_atoms_bench_0.rst -o 995334_atoms_bench_0.out -x 995334_atoms_bench_0.crd
Using the setup above for Intel MPI offloading may improve performance. This command starts Amber on two GPU tiles. Note that although Amber can run on more than two tiles, performance deteriorates significantly if the simulation is split across more than one GPU; therefore, for the moment, we do not recommend using more than two tiles per simulation with Amber.
Because node sharing is not supported on Phase 2, all GPUs of a node must be occupied by a job for optimal usage of node resources. As mentioned above, Amber scales poorly in multi-GPU and even multi-tile simulations, so we suggest running separate simulations on each tile using the following approach. Submit a job script starting 8 MPI ranks, one for each tile:
#!/bin/bash
#SBATCH -D ./
#SBATCH --mail-type=NONE
#SBATCH --time=04:00:00
#SBATCH --partition=general
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-user=.........
#SBATCH --account=.........
#SBATCH -J 124422
#SBATCH --output=%x.%j.out
#SBATCH --error=%x.%j.err
#SBATCH --no-requeue

module load slurm_setup
module load intel-toolkit/2023.2.0
module rm intel-mkl
module add intel-mkl/2024.0.0
module load amber

mpirun -np 8 ./amber.sh
where amber.sh is a shell script containing the amber command started with srun:
srun -n 1 pmemd.sycl_SPFP.MPI -O -i relax.in -p 124422_atoms.parm7 -c 124422_atoms_eq_${MPI_LOCALRANKID}.rst -r 124422_atoms_bench_${MPI_LOCALRANKID}.rst -o 124422_atoms_bench_${MPI_LOCALRANKID}.out -x 124422_atoms_bench_${MPI_LOCALRANKID}.crd > kk_${MPI_LOCALRANKID}
MPI_LOCALRANKID, which is set by Intel MPI for each launched rank, indexes the input and output files according to the MPI rank they belong to.
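A complete amber.sh could therefore look like the following sketch (file names as in the example above):

#!/bin/bash
# MPI_LOCALRANKID is set by Intel MPI for each of the launched ranks
srun -n 1 pmemd.sycl_SPFP.MPI -O -i relax.in -p 124422_atoms.parm7 -c 124422_atoms_eq_${MPI_LOCALRANKID}.rst -r 124422_atoms_bench_${MPI_LOCALRANKID}.rst -o 124422_atoms_bench_${MPI_LOCALRANKID}.out -x 124422_atoms_bench_${MPI_LOCALRANKID}.crd > kk_${MPI_LOCALRANKID}

Make sure the script is executable (chmod +x amber.sh) before submitting the job.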
Using Amber with DFTB
It is possible to run QM/MM AMBER calculations using the density functional tight binding (DFTB) method. To enable this, please proceed as follows.
First step (done in an interactive login shell):
> cd $HOME
> mkdir -p my_amber/dat/slko
> cd my_amber
> module load amber
> ln -s $AMBERHOME/exe
> cp <your DFTB files> dat/slko
Second step (adjust batch script):
module load amber
export AMBERHOME=$HOME/my_amber
export PATH=$AMBERHOME/exe:$PATH
Note that various sets of DFTB files exist, and their file names partially overlap. You will need to set up multiple such installations if you want to use different builds of Amber (e.g. cm2, SuperMUC-NG) or different DFTB file sets.
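The QM/MM calculation itself is controlled via the sander input file. A minimal sketch of the relevant namelists could look like this (the QM region selected by qmmask, the charge and the remaining settings are placeholders to be adapted to your system):

QM/MM MD with DFTB (sketch)
 &cntrl
   imin=0, nstlim=1000, dt=0.001,   ! short MD run; further &cntrl options omitted
   ntb=1, cut=8.0,
   ifqnt=1,                         ! switch on the QM/MM treatment
 /
 &qmmm
   qmmask=':1',                     ! residue 1 forms the QM region
   qmcharge=0,                      ! net charge of the QM region
   qm_theory='DFTB',                ! use the DFTB Hamiltonian (reads the files in dat/slko)
 /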
Documentation
Please consult the AMBER Home page for documentation. The AMBER Reference Manuals are available either at the AMBER Home page or via the environment variable $AMBER_DOC, which points to a directory containing the PDF documentation.
Support
If you have any questions or problems with AMBER installed on different LRZ platforms, please don't hesitate to contact LRZ HPC support staff.