VASP

Introduction

VASP is a package for performing ab-initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane wave basis set. The approach implemented in VASP is based on a finite-temperature local-density approximation and an exact evaluation of the instantaneous electronic ground state at each MD step, using efficient matrix diagonalization schemes and efficient Pulay mixing.

Please consult the VASP Site for further details and documentation.

Licensing and Versions

Leibniz Supercomputing Centre has licensed VASP under an installation and maintenance agreement; under this agreement, usage of the software is only permitted for licensed VASP users. Furthermore, a license for version 5 of VASP does not entitle its holder to use version 6 - a license upgrade must be procured.

Before using VASP on the LRZ HPC systems, the following steps must be performed:

  1. The user ensures that they are registered with their mail address at the VASP user portal.
  2. The user applies by contacting LRZ HPC support, providing the following information: name, affiliation (working group), VASP version to be used (5 or 6), license number, the user account(s) under which the LRZ systems are accessed, and the mail address referenced in item 1.

  3. LRZ checks against the VASP portal whether the license information is valid; if this succeeds, access to the software is granted.

Installed variants

VTST support and the Bader analysis tool are available for all installations. Furthermore, support for Wannier90 has been compiled into VASP.

The software is available on all HPC systems at LRZ. Note that on SuperMUC, the IBM MPI (PE) version is now used by default.

Usage

After login to the system or at the beginning of your batch script, please load the appropriate environment module via

module load vasp

By default, a recent version 5 release is loaded. Please issue module av vasp to identify other available versions, and load the desired one by specifying the full version string, e.g. module load vasp/6.1.2
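For example, the available modules can be listed and a specific version selected as follows (the version string is illustrative; use one of those actually reported by module av vasp on your system):

module av vasp           # list all installed VASP modules
module load vasp/6.1.2   # load a specific version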

Startup for version 5

Please issue the command

vasp5 [-n <tasks>] [-a] [-s <half|gamma|full>]

The meaning of the command line arguments is:

  • -n: The number of MPI tasks for parallel execution. If this is omitted, the serial version is executed.
  • -a: If this is specified, the VTST extensions are available. 
  • -s: If specified with full, no symmetry reduction is performed on the density. If half is specified, the charge density is reduced in one direction. If gamma is specified, the gamma point only is evaluated. By default, the value half is assumed.
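As an illustration (the task count is arbitrary), a parallel VASP 5 run with the VTST extensions and gamma-point-only evaluation would be started as

vasp5 -n 28 -a -s gamma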

Startup for version 6 

Version 6 uses an updated start script! Please issue the command

vasp6 [-n <tasks>] [-o] [-s <std|ncl|gam>]

The meaning of the command line arguments is:

  • -n: The number of MPI tasks for parallel execution. If this is omitted, the serial version is executed.
  • -o: If this is specified, binaries with explicit OpenMP support are used. 
  • -s: If specified with std, the standard build is used. If ncl is specified, the non-collinear build (no symmetry reduction) is used. If gam is specified, the gamma point only is evaluated. By default, the value std is assumed.

Note that the -a switch is no longer needed; VTST as well as Wannier90 and solvation support are built into the regular binaries.
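For illustration (the task count again being arbitrary), a parallel run of the gamma-point-only build with OpenMP-enabled binaries would be started as

vasp6 -n 28 -o -s gam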

Example batch scripts

All example scripts explicitly specify that jobs failing due to a system problem are not to be restarted, because the VASP I/O concept is not restart-proof.

Parallel processing on Linux Cluster (CoolMUC-4) with SLURM 
(check the linked document for further resource selection options)

VASP 5:

#!/bin/bash
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
#SBATCH -J vasp_job
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_std
#SBATCH --qos=cm4_std
#### for single-node jobs use --clusters=cm4_tiny and --qos=cm4_tiny (omit --partition)
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --time=24:00:00

module load slurm_setup
module load vasp


vasp5 -n $SLURM_NTASKS

VASP 6:

#!/bin/bash
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
#SBATCH -J vasp_job
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_std
#SBATCH --qos=cm4_std
#SBATCH --nodes=3
#### for single-node jobs use --clusters=cm4_tiny and --qos=cm4_tiny (omit --partition)
#SBATCH --ntasks-per-node=28
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --time=24:00:00

module load slurm_setup
module load vasp/6.1.2


vasp6 -n $SLURM_NTASKS

Parallel processing on SuperMUC-NG with SLURM

VASP 5:
#!/bin/bash 
#SBATCH -J vasp_job
#Output and error
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#Initial working directory
#SBATCH -D ./
# Wall clock limit:
#SBATCH --time=24:00:00
#SBATCH --no-requeue
#Setup of execution environment
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --nodes=3
#SBATCH --ntasks=128
#SBATCH --account=<your_projectID_here>
#SBATCH --partition=<test|micro|general|large|fat>
module load slurm_setup
module load vasp

vasp5 -n $SLURM_NTASKS

VASP 6:

#!/bin/bash
#SBATCH -J vasp_job
#Output and error
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#Initial working directory
#SBATCH -D ./
# Wall clock limit:
#SBATCH --time=24:00:00
#SBATCH --no-requeue
#Setup of execution environment
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --nodes=3
#SBATCH --ntasks=128
#SBATCH --account=<your_projectID_here>
#SBATCH --partition=<test|micro|general|large|fat>
module load slurm_setup
module load vasp/6.1.2

vasp6 -n $SLURM_NTASKS

Further notes

It is also possible to set OMP_NUM_THREADS to a value larger than 1, provided that the number of MPI tasks is reduced in proportion. Since it is not trivial to set this up correctly, please consult the information available from the MPI document for details on how to handle hybrid execution. VASP version 6 has explicit support for hybrid execution (see the -o switch above); a minimal sketch is given below.
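The following sketch shows such a hybrid setup in a CoolMUC-4 batch script, assuming 4 OpenMP threads per MPI task so that 7 tasks x 4 threads again fill the 28 cores per node used in the examples above; the thread count is illustrative only, and it is assumed that the vasp6 start script honours the OMP_NUM_THREADS setting. Pinning and further tuning should follow the MPI document.

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=7
#SBATCH --cpus-per-task=4

module load slurm_setup
module load vasp/6.1.2

# one OpenMP thread per core assigned to each MPI task (illustrative values)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
vasp6 -n $SLURM_NTASKS -o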

Documentation and Support

A PDF file with the documentation is available in the directory pointed to by the environment variable $VASP_DOC on the systems where VASP is installed.
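For example, after loading the vasp module (so that $VASP_DOC is set), the available documents can be listed via

ls $VASP_DOC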

In case any problems are observed, please contact HPC support via the LRZ Service Desk. Providing test cases with short run times is appreciated.