Gaussian

General Information

Versions and Platforms


Gaussian 16

Operating Environments:

Linux

Hardware

Linux Cluster, SuperMUC

Producer

Gaussian Inc., Pittsburgh, USA

Overview of Functionality

Please see the links in the Documentation section below for more information.

Terms and Conditions

  • Gaussian may only be used for academic and teaching purposes; the license conditions preclude using the software if you are directly or indirectly engaging in competition with Gaussian, Inc.
  • All scientific articles based on the usage of Gaussian must contain suitable citations.
  • Gaussian software installed on LRZ HPC systems may only be used on these systems, and only after a user has obtained a group entry for the UNIX group "gaussian". Please contact LRZ HPC support to obtain such an entry, providing the account name.

Usage

In general, several releases of Gaussian may be available on a given computing platform at any time. Use the modules package to obtain suitable environment settings for Gaussian:

> module av gaussian
> module load gaussian

The first command lists the available versions; the second loads the default version presently installed. Alternatively, you can load a specific version.
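For example, a specific version can be loaded by appending the version string to the module name (the string shown here is purely illustrative; check the output of module av gaussian for what is actually installed):

> module load gaussian/g16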

Parallel Usage

Running Gaussian

Gaussian can run with shared-memory and with distributed-memory parallelism (see the Gaussian documentation). Unfortunately, neither follows the parallelization schemes common at LRZ (OpenMP threading / MPI). Please consult the Gaussian documentation carefully if you deviate from the recommendations given here.

We strongly recommend using shared-memory parallelism (within a single node only). In this case, please add

%NProcShared=<number of parallel threads>

with the desired number of parallel threads to your Gaussian input file.
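A correspondingly prepared input file might then start as follows (a sketch only; the route section, title, and molecule specification are placeholders, and the memory setting is discussed further below):

%NProcShared=8
%Mem=16GB
%Chk=my_job.chk
#P B3LYP/6-31G(d) Opt

<title line>

0 1
<molecule specification>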

Many Smaller Cases in Parallel

The current compute nodes typically offer more cores than a single common Gaussian case can use efficiently. If you have several such cases, consider running them concurrently within one job (job farming), as sketched below. Please contact our Service Desk if you need help.
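A very rough sketch of such a setup (file names and core counts are purely illustrative; the SBATCH header is the same as in the examples below, but must request enough CPUs for all concurrent cases, e.g. --cpus-per-task=16 for two 8-thread cases):

module load slurm_setup
module load gaussian

# two independent cases, each with %NProcShared=8 in its input file,
# running side by side within a single allocation
g16 case1.com &
g16 case2.com &
wait    # keep the job alive until both cases have finished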

Slurm Batch Scripts Examples

Linux Cluster:
#!/bin/bash
#SBATCH -o %x.%j.%N.out 
#SBATCH -D .
#SBATCH -J <job_name> 
#SBATCH --get-user-env 
#SBATCH --clusters=cm4 
#SBATCH --partition=cm4_tiny
#SBATCH --qos=cm4_tiny
#SBATCH --ntasks=1 
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=end 
#SBATCH --mail-user=<email_address>@<domain> 
#SBATCH --export=NONE 
#SBATCH --time=24:00:00  
module load slurm_setup

module load gaussian

# for dynamic adaptation to Slurm script changes of parallel size
sed -i "s/%nprocshared=.*/%NProcShared=$SLURM_CPUS_PER_NTASK/" my_inputfile.com

g16 my_inputfile.com

Note: This assumes that my_inputfile.com already contains an %NProcShared= entry.


SuperMUC-NG:
#!/bin/bash
#SBATCH -o  %x.%j.%N.out  
#SBATCH -D .
#SBATCH -J <job_name>
#SBATCH --get-user-env
#SBATCH --partition=micro    # test, micro, general, large, fat
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=8    # up to 48
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#SBATCH --time=24:00:00
#SBATCH --account=<your_projectID>
module load slurm_setup

module load gaussian

# for dynamic adaptation to Slurm script changes of parallel size
sed -i "s/%nprocshared=.*/%NProcShared=$SLURM_CPUS_PER_NTASK/I" my_inputfile.com

g16 my_inputfile.com

Note: This assumes that my_inputfile.com already contains an %NProcShared= entry.

We recommend testing a small case in the respective test queues first, and migrating to the production queues once you have verified that your cases run correctly and efficiently.

Similarly to %NProcShared, you should specify %Mem to limit the memory usage (unless you work on nodes exclusively). When you specify #SBATCH --mem=..., the environment variable SLURM_MEM_PER_NODE is set inside the running job. If you do not want to hard-code the value (possibly inconsistently) in your input file, you can use the same sed mechanism as for %NProcShared to set %Mem dynamically, as sketched below.
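For example (a sketch, assuming the input file already contains a %Mem= line; the margin of 4 GB left for the Gaussian binaries and the operating system is only a suggestion):

# SLURM_MEM_PER_NODE is given in MB
GAUSS_MEM=$(( SLURM_MEM_PER_NODE - 4096 ))
sed -i "s/%mem=.*/%Mem=${GAUSS_MEM}MB/I" my_inputfile.com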

Documentation

Gaussian Documentation on the World Wide Web

Troubleshooting

If any of the utilities gives you a segmentation fault at execution, please

  1. check whether the input files were generated with/for the same version of Gaussian as the utility you are currently using,
  2. issue the command appropriate for your shell:
    > ulimit -s unlimited           # for sh, bash, ksh shell
    > limit stacksize unlimited     # for csh, tcsh shell

Parallelism

  1. shared memory: The older %NProcShared= flag is considered deprecated, and %CPU= is now favored. But on systems with shared nodes such as CoolMUC-4, this is disadvantageous, because the CPU IDs are not known before a job is scheduled. Using CPUs that are not assigned to your job allocation will make your job crash.
    One could use taskset -c -p $$ to extract a CPU list from within the job script (before the call to g16); see the sketch after this list. However, this is rather cumbersome, since you must remove all hyperthread partners manually, and it only works for shared-memory configured jobs (--ntasks=1 --cpus-per-task=...).
    If you can acquire nodes exclusively, the CPU list can be set at will.
  2. distributed memory: Gaussian also appears to support this, via Linda. However, the node list must be constructed within the Slurm job, because it is only known once the job runs. Furthermore, passphrase-less SSH keys must be set up beforehand.
    Should you require such a working mode, please contact our Service Desk.
  3. shared+distributed memory: It seems that the two modes above can be combined. Again, if you strive for this and require help, please contact our Service Desk.
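A minimal sketch of the taskset-based approach mentioned under item 1 (shell, for a shared-memory job; my_inputfile.com is assumed to already contain a %CPU= line, and the removal of hyperthread partners is omitted because it is system-specific):

# inside the Slurm job script, after "module load gaussian"
# query the CPU affinity list actually granted to this job
CPULIST=$(taskset -c -p $$ | awk -F': ' '{print $2}')
sed -i "s/%cpu=.*/%CPU=${CPULIST}/I" my_inputfile.com
g16 my_inputfile.com

For the distributed-memory case, the node names of the allocation can be obtained inside the job with scontrol show hostnames "$SLURM_JOB_NODELIST"; how to pass them to Linda is described in the Gaussian documentation.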