LAMMPS

Description

LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.
LAMMPS has potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors), and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

Usage on LRZ HPC Systems

Available Versions at LRZ

In order to run a LAMMPS job, please first load the appropriate environment module. See the module page for more information on modules.

module load lammps/20210310-gcc11-impi-openmp

or

module load lammps/20210310-intel21-impi-openmp
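To check which LAMMPS modules are actually available on the system you are working on (names and versions may differ from the ones listed above), you can query the module system first:

module avail lammps                                # list all installed LAMMPS modules
module show lammps/20210310-gcc11-impi-openmp      # inspect what the module sets (paths, environment variables)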

Setup

You can execute LAMMPS on your input file in.mystuff either in serial mode,

lmp < in.mystuff

or in parallel with MPI, as sketched below and shown in the batch examples further down.
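A minimal sketch of an interactive parallel run, assuming the gcc11-impi-openmp module is loaded (i.e. a build with the OPENMP package, as the module name suggests) and that the rank and thread counts fit the resources you have been allocated:

export OMP_NUM_THREADS=2                   # OpenMP threads per MPI rank
mpiexec -n 4 lmp -sf omp -in in.mystuff    # 4 MPI ranks, enable the OpenMP-accelerated styles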

The pair potentials are available in LAMMPS_BASE/potentials; you can copy/link them from there or reference the full path name in your input file.
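For example, assuming LAMMPS_BASE is set by the lammps module as described above, you could link a potential file into your run directory (the Tersoff file name here is only an illustration; use whatever potential your input actually references):

ln -s $LAMMPS_BASE/potentials/SiC.tersoff .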

Batch Jobs

Please consult the batch documentation of the Linux Cluster and SuperMUC-NG, respectively, for how to set up a batch run. These documents also contain example scripts which you can easily adapt for running a LAMMPS batch job. Two examples follow, one for CoolMUC-4 and one for SuperMUC-NG Phase 1.

Linux Cluster CoolMUC-4 with SLURM

The script below lists the SBATCH directives for three alternative targets (serial cluster, cm4_tiny, cm4_std); keep only the block that matches the cluster and partition you want to submit to.

#!/bin/bash
#SBATCH -o ./myjob.lammps.%j.%N.out
#SBATCH -D ./
#SBATCH -J Job_name

#----------------------------------------------------Job submission to serial cluster-----------------------------
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --nodes=1 #--Maximum number of nodes is 1.
#SBATCH --cpus-per-task=10 #--Maximum number of cpus is 56.
#SBATCH --mem=50G
#----------------------------------------------------Job submission to cm4_tiny partition-----------------------------
#SBATCH --qos=cm4_tiny
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny
#SBATCH --nodes=1 #--Maximum number of nodes is 1.
#SBATCH --cpus-per-task=112 #--Number of cpus can vary between 56 and 112.
#----------------------------------------------------Job submission to cm4_std partition-----------------------------
#SBATCH --qos=cm4_std
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_std
#SBATCH --nodes=2 #--Minimum number of nodes is 1 and maximum is 4.
#SBATCH --cpus-per-task=112 #--Number of cpus can vary between 112 and 448.

# --- Request a realistic amount of memory for the task, proportional to the number of CPU cores used ---
#SBATCH --get-user-env
#SBATCH --mail-type=NONE
###       mail-type can be one of NONE, ALL, BEGIN, END, FAIL, ...
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#--- A realistic estimate of the execution time is necessary.
#SBATCH --time=0:10:00
#----------------------------------------------------

module switch spack/22.2.1

module load gcc/11 intel-mkl/2020-gcc11 intel-mpi/2021-gcc

module load lammps/20210310-gcc11-impi-openmp


mpiexec lmp -i Path/To/Your/in.your_lammps_script

SuperMUC-NG Phase 1 with SLURM

#!/bin/bash
#SBATCH -o %x.%j.out
#SBATCH -e %x.%j.err
#SBATCH -D ./
#SBATCH --mail-type=END
#SBATCH --time=08:00:00
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --account=<project id>
#SBATCH -J <job name>

module load slurm_setup
module load lammps/20210310-gcc11-impi-openmp

mpiexec -n 16 lmp -i /<pwd>/input
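Assuming you have saved one of the scripts above as, say, lammps_job.sh (the file name is illustrative), the job is submitted and monitored with the standard SLURM commands; on the Linux Cluster, pass the same --clusters value you used in the script to the query commands as well:

sbatch lammps_job.sh                     # submit the job script
squeue --clusters=cm4 --user=$USER       # check job status on CoolMUC-4
scancel --clusters=cm4 <job id>          # cancel a job if necessary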

Documentation

When the lammps environment module is loaded, the environment variable LAMMPS_DOC points to the base page of the HTML documentation.
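For example, once the module is loaded you can check where the documentation is installed:

echo $LAMMPS_DOC      # print the documentation location set by the module
ls $LAMMPS_DOC        # list the file or directory it points to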

Installation of LAMMPS using user_spack

LAMMPS 2024 can be installed on CoolMUC-4 using user_spack 24.4.0. To this end, run

module load user_spack/24.4.0

module load fftw/3.3.10-intel24-impi-6zj    (only needed if you want to build the KSPACE package)

spack install lammps@20240829.1%oneapi@2024.1.0 +kspace ^fftw%oneapi    (add any further variants or packages you need)
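Once the installation has finished, one way to make the freshly built LAMMPS available in your shell is via the usual Spack commands (a sketch, assuming user_spack behaves like a standard Spack installation):

spack find lammps                 # verify that the package was installed
spack load lammps@20240829.1      # put lmp on your PATH for the current shell
lmp -h                            # quick sanity check of the binary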

Support

If you have any questions or problems with the LAMMPS installations on LRZ platforms, please contact the LRZ support team.