Description, License

GAMESS is an ab initio quantum chemistry program that provides many standard quantum chemical methods for computing the properties of molecular systems, many of them adapted for parallel execution.
For a detailed description, please consult the GAMESS home page or the program documentation in $GAMESS_DOCUMENTATION, which is available after executing "module load gamess".

GAMESS should be cited as:

M. W. Schmidt, K. K. Baldridge, J. A. Boatz, S. T. Elbert, M. S. Gordon, J. H. Jensen, S. Koseki, N. Matsunaga, K. A. Nguyen, S. J. Su, T. L. Windus, M. Dupuis, J. A. Montgomery, J. Comput. Chem. 14, 1347-1363 (1993).

Versions and Platforms

The newly installed versions are available on our parallel machines. They contain the coupled-cluster approaches and density functional theory, as well as the Fragment Molecular Orbital (FMO) method, which now includes 3-body MP2 computations and the pair interaction energy decomposition analysis (PIEDA).

How to run GAMESS on the Linux-Cluster and SuperMUC

Before running a GAMESS job interactively, please consider first validating your input file via EXETYP=CHECK.
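A check run can be set up as follows (a minimal sketch; the water geometry and STO-3G basis below are only illustrative placeholders):

```shell
# Write a minimal GAMESS input whose $CONTRL group requests EXETYP=CHECK,
# so GAMESS only parses the input and estimates resources without computing.
cat > check.inp << 'EOF'
 $CONTRL SCFTYP=RHF RUNTYP=ENERGY EXETYP=CHECK $END
 $BASIS  GBASIS=STO NGAUSS=3 $END
 $DATA
Water check run
C1
O 8.0  0.000  0.000  0.000
H 1.0  0.000  0.757  0.586
H 1.0  0.000 -0.757  0.586
 $END
EOF

# The check run itself would then be:
# gamess check.inp >& check.log
grep -c 'EXETYP=CHECK' check.inp   # prints 1
```

Once the check log looks clean, remove EXETYP=CHECK (or set EXETYP=RUN) for the production run.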

After logging in to the system, please load the appropriate module environment via:

module load gamess  

The execution of gamess can be done with

gamess [-n  N]  inputfilename >& outputfilename.log
  # N: number of cores 

where the input file in this example would be named "inputfilename.inp" and the output file "outputfilename.log".
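A small shell idiom keeps the input and log file names in sync (a sketch; whether the wrapper expects the name with or without the .inp suffix may depend on the installation):

```shell
# Derive the log file name from the input file name: job.inp -> job.log
inp=job.inp
log="${inp%.inp}.log"   # strip the .inp suffix, append .log
echo "$log"             # prints job.log

# The actual run would then be (commented out here):
# gamess -n 28 "$inp" >& "$log"
```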

Restart files are written to the files given by the following environment variables:


If you do not change these defaults for IRCDATA and PUNCH, the files will not be deleted. All other files (except your output file, of course) will be deleted unless you use the -K option:

gamess -K yes ...
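To keep restart data available between runs, you can move it aside after a job finishes. This is only a sketch: the names job.dat and job.irc for the PUNCH and IRCDATA files are assumed defaults for illustration and depend on your settings.

```shell
# Move restart data (PUNCH/IRCDATA) to a safe place after a run.
# NOTE: job.dat / job.irc are assumed default names, used here for illustration.
job=job
mkdir -p restart
for f in "$job.dat" "$job.irc"; do
    if [ -f "$f" ]; then
        mv "$f" restart/    # keep the file for later restarts
    fi
done
```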

Batch jobs for Serial and Parallel GAMESS

Submit the following script:

Linux-Cluster (SLURM)


#!/bin/bash
#SBATCH -o /home/cluster/<group>/<user>/mydir/gamess.%j.out
#SBATCH -D /home/cluster/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --get-user-env
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --nodes=1-1
#SBATCH --cpus-per-task=28
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#SBATCH --time=24:00:00

source /etc/profile.d/
cd mydir
module load gamess
gamess -n 28 job.inp >& job.out

SuperMUC-NG (SLURM)

#!/bin/bash
# Job Name and Files (also --job-name)
#SBATCH -J jobname
#Output and error (also --output, --error):
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#Initial working directory (also --chdir):
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=END
#SBATCH --mail-user=insert_your_email_here
# Wall clock limit:
#SBATCH --time=24:00:00
#SBATCH --no-requeue
#Setup of execution environment
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --account=insert_your_projectID_here
#SBATCH --partition=insert test, micro, general, large or fat
module load slurm_setup
cd mydir
module load gamess
gamess -n 128  job.inp >& job.out
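The core count given to gamess via -n should agree with what the #SBATCH directives reserve (in the scripts above, --cpus-per-task). A quick sanity check over a job script might look like this (a sketch; the script is written out here only for illustration):

```shell
# Sanity check: the core count passed to gamess should not exceed
# the cores reserved by the #SBATCH directives.
script=job.slurm
cat > "$script" << 'EOF'
#!/bin/bash
#SBATCH --nodes=1-1
#SBATCH --cpus-per-task=28
module load gamess
gamess -n 28 job.inp >& job.out
EOF

# Extract both numbers from the script and compare them.
reserved=$(sed -n 's/^#SBATCH --cpus-per-task=\([0-9]*\).*/\1/p' "$script")
used=$(sed -n 's/.*gamess -n \([0-9]*\).*/\1/p' "$script")
if [ "$used" -le "$reserved" ]; then
    echo "OK: $used of $reserved cores"
else
    echo "WARNING: gamess -n $used exceeds reserved $reserved cores"
fi
```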

Then submit the job script using the sbatch command (SLURM).

e.g., assume the job script name is name-job.pbs:

% sbatch  name-job.pbs


After executing "module load gamess", the environment variable GAMESS_DOCUMENTATION points to a directory containing the GAMESS documentation as shipped with the source code:


Other formats of the documentation and further information may be found on the GAMESS home page.


After executing "module load gamess", the environment variable EXAMPLES points to the GAMESS examples:


FAQ - Frequently asked questions 

Number of used processors in parallel GAMESS jobs

Q: I requested 8 processors but GAMESS tells me: PARALLEL VERSION RUNNING WITH   4 PROCESSORS

A: This is OK! A detailed description of how GAMESS distributes its work and data across the processors can be found in the documentation, Section 5 - Programmer's Reference (PROG.DOC): the first half of the processes compute (compute processes), while the second half serve as data servers.
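The halving described above can be sketched as:

```shell
# Half of the requested processes compute, half serve as data servers,
# so requesting 8 processes yields the message about 4 processors.
requested=8
compute=$((requested / 2))
echo "PARALLEL VERSION RUNNING WITH $compute PROCESSORS"
```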


If you have any questions or problems with GAMESS installed on the different LRZ platforms, please don't hesitate to contact Dr. M. Allalen: