TURBOMOLE

General Information

Overview of Functionality

TURBOMOLE is a highly optimized software package for large-scale quantum chemical simulations of molecules, clusters, and periodic solids. TURBOMOLE consists of a series of modules and covers a wide range of research areas; more details can be found on the TURBOMOLE web page.

Usage conditions and Licensing

TURBOMOLE may only be used for academic and teaching purposes.

Running Turbomole at LRZ

General information for using the module environment system at LRZ can be found here. For creating and submitting batch jobs to the respective queueing system, please refer to the SLURM documentation for SuperMUC-NG and the Linux Cluster on our webpages.

Serial Version

Before running any serial TURBOMOLE program, please load the serial module via:

  > module load turbomole  


This will adjust the variables $TURBODIR and $PATH.
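
As a quick check that the serial setup is complete (dscf is used here only as an example of a serial TURBOMOLE binary), you can inspect the adjusted variables:

   > module load turbomole
   > echo $TURBODIR    # TURBOMOLE installation directory
   > which dscf        # TURBOMOLE binaries should now be found via $PATH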

Parallel Version

Recent parallel versions of TURBOMOLE allow the use of different types of network interconnects, but sometimes manual intervention is required to select the correct interconnect for SuperMUC or the Linux Cluster. Since parallel runs of TURBOMOLE also require an additional control process, all parallel runs should be performed via the batch queuing systems, and you also need to configure an SSH key according to the description you find here.
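
A minimal sketch of the SSH key setup (a key without a passphrase for node-to-node communication; please follow the linked description for the exact procedure on our systems):

   > ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
   > cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
   > chmod 600 ~/.ssh/authorized_keys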

Before running any parallel TURBOMOLE program, please load the parallel module via:

  > module load turbomole

This will adjust the variables $TURBODIR, $PATH, $PARA_ARCH and $PARNODES.

$PARNODES will be set to the number of requested cores in the jobscript. If you want to use a smaller number of cores, set $PARNODES to the number of cores you actually want to use, e.g.

   > export PARNODES=[nCores]
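
Inside a SLURM batch job, one way to keep $PARNODES consistent with the allocation is to derive it from the SLURM environment (the fallback value 4 is only an illustration):

   > export PARA_ARCH=MPI                # select the MPI parallel binaries
   > export PARNODES=${SLURM_NTASKS:-4}  # number of parallel TURBOMOLE worker processes
   > echo $PARA_ARCH $PARNODES           # verify before starting dscf/ridft/jobex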

After login to the system, your batch script for the Linux Cluster or SuperMUC-NG could look like the examples below:

Linux-Cluster (SLURM)

#!/bin/bash
#SBATCH -o /home/cluster/<group>/<user>/mydir/turbomole.%j.out
#SBATCH -D /home/cluster/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std
#SBATCH --qos=cm2_std
#SBATCH --nodes=4
#SBATCH --ntasks=10
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#SBATCH --time=24:00:00

source /etc/profile.d/modules.sh
module load slurm_setup
module load turbomole
export MYDIR=<path to your input files>
cd $MYDIR
export PARA_ARCH=MPI
export TM_MPI_ROOT=$MPI_BASE
# Build a machines file listing each node once per MPI task on that node
export HOSTS_FILE=$MYDIR/turbomole.machines
rm -f $HOSTS_FILE
for i in `scontrol show hostname $SLURM_NODELIST`; do
  for j in $(seq 1 $SLURM_TASKS_PER_NODE); do echo $i >> $HOSTS_FILE; done
done
export CORES=`wc -l < $HOSTS_FILE`
export PARNODES=$CORES
### or use: export PARNODES=$SLURM_NPROCS
## execute with, for example:
dscf > $MYDIR/dscf.out
ricc2 > $MYDIR/ricc2.out
# or, for a geometry optimization:
jobex -ri
# For parallel runs with TURBOMOLE 7.5 use:

mpirun -n 40 -f turbomole.machines ridft_mpi  > ridft.out
mpirun -n 40 -f turbomole.machines  escf_mpi  > escf.out
egrad > egrad.out 
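
As a quick sanity check (not part of the script above), the generated machines file should contain one hostname per MPI task:

   > wc -l turbomole.machines    # should equal the total number of MPI ranks
   > sort -u turbomole.machines  # should list each allocated node exactly once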

SuperMUC-NG

#!/bin/bash
#SBATCH -J jobname
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#SBATCH -D ./
#SBATCH --mail-type=END
#SBATCH --mail-user=insert_your_email_here
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --account=<insert_your_projectID_here>
#SBATCH --partition=<insert test, micro, general, large or fat>

module load slurm_setup
module load turbomole
export MYDIR=<path to your input files>
cd $MYDIR
export PARA_ARCH=MPI
export TM_MPI_ROOT=$MPI_BASE
# Build a machines file listing each node once per MPI task on that node
export HOSTS_FILE=$MYDIR/turbomole.machines
rm -f $HOSTS_FILE
for i in `scontrol show hostname $SLURM_NODELIST`; do
  for j in $(seq 1 $SLURM_TASKS_PER_NODE); do echo $i >> $HOSTS_FILE; done
done
export CORES=`wc -l < $HOSTS_FILE`
export PARNODES=$CORES
### or use: export PARNODES=$SLURM_NPROCS
## execute with, for example:
dscf > $MYDIR/dscf.out
ricc2 > $MYDIR/ricc2.out
# or, for a geometry optimization:
jobex -ri
# For parallel runs with TURBOMOLE 7.5 use:

mpirun -n 48 -f turbomole.machines ridft_mpi  > ridft.out
mpirun -n 48 -f turbomole.machines  escf_mpi  > escf.out
egrad > egrad.out 
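
Assuming one of the scripts above has been saved as, e.g., turbomole.slurm (the file name is only an example), it can be submitted and monitored with the usual SLURM commands:

   > sbatch turbomole.slurm
   > squeue -u $USER    # check the state of your jobs
   > scancel <jobid>    # cancel a job if necessary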

Several example script files for different use cases on our systems can be found here for the Linux Cluster and also for SuperMUC-NG.

More information about how to set up parallel TURBOMOLE programs can be found in the TURBOMOLE documentation (see section "Parallel Runs").

Documentation

After the TURBOMOLE module is loaded, the documentation (DOK.ps or DOK.pdf) can be found in the directory $TURBOMOLE_DOC. You may also check the TURBOMOLE Forum.
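
For example (evince is only used as an illustration; any PDF viewer available on the login nodes will do):

   > module load turbomole
   > ls $TURBOMOLE_DOC
   > evince $TURBOMOLE_DOC/DOK.pdf &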

TmoleX

TmoleX provides a graphical user interface for TURBOMOLE starting with version 6.0. Its features include:

  • Import and export of coordinates from/to different formats like xyz, cosmo, sdf, ml2, car, arc 
  • Graphical visualization of molecular structure, including movies of gradients and vibrational frequencies
  • Generation of molecular orbitals and automatic occupation 
  • Submitting jobs to queuing systems
  • Viewing results from Turbomole jobs

For further information (also on the option for a local client installation) check the webpage of COSMOlogic.

To use TmoleX, please first load the TURBOMOLE module and then the Java module

   > module load java

and then start TmoleX from command line with

   > TmoleX

A short introduction to the usage of TmoleX can be found at $TMOLEXDOC/Tutorial-tmolex-2-0.pdf.

To submit jobs to the MPP_Myri Cluster, please add these lines:

   . /etc/profile.d/modules.sh 
   module load turbomole[.mpi]/6.x
   cd $SGE_O_WORKDIR

Support

If you have any questions or problems with TURBOMOLE installed on LRZ platforms please contact LRZ HPC support.