Abaqus (Structural Mechanics)

Abaqus is a general-purpose Computational Structural Mechanics (CSM) code. It has been part of the Dassault Systèmes software portfolio since the acquisition of Abaqus Inc. in May 2005. As a leading program for nonlinear finite element analysis and a general-purpose CSM code, Simulia Abaqus FEA provides a wide variety of material modeling capabilities as well as a collection of multiphysics capabilities, such as coupled acoustic-structural, piezoelectric, and structural-pore analyses. The Simulia Abaqus FEA software package includes Abaqus/Standard, Abaqus/Explicit and Abaqus/CFD.

Further information about Abaqus, the licensing of the Dassault Systèmes (DS) software and the related terms of software usage at LRZ, the Dassault Systèmes software mailing list, access to the Dassault Systèmes software documentation and LRZ user support can be found on the main Dassault Systèmes Software documentation page.

Getting Started

Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. the installed versions) of the Abaqus software with:

> module avail abaqus

Load the preferred Abaqus version environment module, e.g.:

> module load abaqus/2025

Abaqus can be used in interactive GUI mode on the login nodes for the sole purpose of pre- and/or postprocessing (Linux: SSH option "-Y" for X11 forwarding; Windows: e.g. PuTTY and Xming for X11 forwarding). This interactive usage is mainly intended for quick simulation setup changes that require GUI access. It is NOT permitted to run computationally intensive Abaqus simulations or postprocessing sessions with large memory consumption on the login nodes.

The Abaqus GUI is started by:

> abaqus cae
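
For example, an interactive pre-/postprocessing session on the CM4 login node mentioned in the batch examples below could be started like this (illustrative commands; replace the user ID placeholder with your own LRZ User-ID):

> ssh -Y <your_userid>@lxlogin5.lrz.de
> module load abaqus/2025
> abaqus cae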

Abaqus Licensing - RESEARCH vs. TEACHING

Two different license types are available for the use of the Abaqus software. For scientific research the license type RESEARCH should be used; it provides access to the Abaqus solvers without capability limitations. For testing and teaching purposes, as well as for computationally less intensive tasks requiring only a very limited number of CPU cores, the license type TEACHING can and should be used. The TEACHING license type is limited to execution on 1-4 CPU cores and should therefore be used in the serial queue of the LRZ Linux Clusters (see the example provided below). For further possible limitations of the TEACHING license type please refer to the Abaqus documentation by Dassault Systèmes.

For TUM/UTG users the corresponding license settings in the SLURM scripts (and correspondingly in the local abaqus_v6.env file) are as follows:

License Type | Settings in the SLURM script / abaqus_v6.env file
-------------|---------------------------------------------------
RESEARCH     | license_server_type=FLEXNET
             | abaquslm_license_file="8101@license4.lrz.de"
             | academic=RESEARCH
TEACHING     | license_server_type=FLEXNET
             | abaquslm_license_file="8101@license6.lrz.de"
             | academic=TEACHING
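
For an interactive pre-/postprocessing session on a login node (where no SLURM script generates the environment file), these settings can be placed directly in a local abaqus_v6.env file in the working directory; a minimal sketch for the RESEARCH license type would contain just the three lines from the table:

license_server_type=FLEXNET
abaquslm_license_file="8101@license4.lrz.de"
academic=RESEARCH

For batch jobs, however, the abaqus_v6.env file is generated by the SLURM script, as shown in the examples below.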

Why should I call Abaqus through the Abaqus-Python-Wrapper?

When Abaqus 2023/2024/2025 was installed on the LRZ filesystems for CoolMUC-2/-3/-4, it turned out that the Abaqus software does not work 100% correctly. The issue experienced and described here is a well-known deficiency of the Abaqus software on some (but not all) Linux systems; corresponding descriptions and attempts to mitigate the issue can be found on the internet.

When Abaqus is called at the command line prompt of e.g. an LRZ Linux Cluster login node (for testing purposes and with a small input file), the Abaqus software carries out the simulation until the message string "THE ANALYSIS HAS COMPLETED SUCCESSFULLY" appears in the status file (*.sta). One would now expect Abaqus to finish its work and return an accessible command line prompt to the user. Instead, the Abaqus/Standard process appears to hang and the command line prompt is not returned. While this is not much of a problem in interactive usage of Abaqus, it becomes a major issue if an Abaqus simulation is submitted to a Linux Cluster in batch processing mode. In that case the SLURM job would hang and block cluster resources until either the user or the SLURM scheduler terminates the hanging process. Substantial Linux Cluster resources would thereby be wasted, because the user-defined maximum execution time of the SLURM job may be considerably longer than the actual execution time of the Abaqus simulation.

To mitigate this issue, a Python wrapper for Abaqus is provided. This wrapper launches Abaqus in background mode and then continuously monitors the Abaqus status file (*.sta) of the simulation. As soon as the status file appears in the working directory and contains the success message issued by Abaqus, the wrapper terminates/kills the Abaqus/Standard process and the simulation is finished.
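
The following bash fragment is only a conceptual sketch of this behaviour, not the actual wrapper (which is provided by LRZ as the Python script abq_wrapper.py); the job name "testfile" and the polling interval are illustrative assumptions:

# Conceptual sketch only - the real wrapper is abq_wrapper.py (Python), provided by LRZ
abaqus job=testfile cpus=4 interactive &     # launch the Abaqus run in the background of this shell
abq_pid=$!
# Poll the status file until the success message appears, then terminate the hanging Abaqus process
while kill -0 ${abq_pid} 2>/dev/null; do
    if grep -qs "THE ANALYSIS HAS COMPLETED SUCCESSFULLY" testfile.sta; then
        kill ${abq_pid}
        break
    fi
    sleep 30
done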

The Abaqus-Python-wrapper can be called by the following syntax:

Usage of the Abaqus-Python-wrapper:
     abq_wrapper.py --job=<jobfile> --script=<Python-Script> --double=<explicit|both|off> --memory=<memory> --cpus=<cpus> [ --user=<Fortran-Routine> ]
or:
     abq_wrapper.py -j <jobfile>  -s=<Python-Script> -d <explicit|both|off> -m <memory> -c <cpus> [ -u <Fortran-Routine> ]

The input file should be given without the filename extension, i.e. without .inp. The argument for the usage of a user FORTRAN routine is optional, i.e. it can be omitted if not required. The above syntax is printed to the screen if the Abaqus-Python-wrapper is called with the "--help" argument:

> abq_wrapper.py --help
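
A typical call, e.g. for a small job on 4 CPU cores in double precision, might look as follows (the job name "testfile" and the memory value are placeholders; in the SLURM examples below these values are taken from SLURM environment variables):

> abq_wrapper.py --job=testfile --double=both --memory=8Gb --cpus=4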

If you need the Abaqus-Python-wrapper to recognize additional command line arguments and pipe them through to the Abaqus executable, please specify your needs in a Service Request to the LRZ application support team.

Abaqus Parallel Execution (Batch Mode)

All parallel Abaqus simulations on the LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the scheduling system (SLURM) into the different pre-defined parallel execution queues. Further information about the batch queuing systems and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG).

For job submission to a batch queuing system a corresponding small shell script needs to be provided, which contains:

  • Batch queueing system specific commands for the job resource definition
  • Module command to load the Abaqus environment module
  • Commands for the assembly of the Abaqus machines/node list and the customized environment file for Abaqus
  • Start command for the parallel execution of Abaqus with all appropriate command line parameters - here again using the above-mentioned Abaqus-Python-wrapper

The intended syntax and the available command line options for the invocation of the Abaqus solver command can be displayed with:

> abaqus -help

The configuration of the parallel cluster partition (list of node names and the corresponding number of cores) is provided to the abaqus command by the batch queuing system (SLURM) via SLURM environment variables. These contain the information specified by the cluster user in the job resource definition as well as the compute nodes dynamically assigned by the SLURM scheduler at the time of execution. The number of Abaqus solver processes is passed to the abaqus solver command (or to abq_wrapper.py, respectively) via the SLURM environment variable $SLURM_NTASKS.
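
For illustration, the following commands (runnable only inside a SLURM job, where these variables are set) show how the relevant information can be inspected; the complete node list assembly is part of the SLURM scripts below:

# Expand the compact SLURM node list into one hostname per line (unique entries only)
scontrol show hostname ${SLURM_NODELIST} | sort -u
# Total number of Abaqus solver processes requested for this job
echo ${SLURM_NTASKS}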

CoolMUC-4 Serial Queue: Abaqus Job Submission using SLURM on a Small Number of CPU Cores (1-4; typical for the TEACHING license type)

The name "serial queue" is to a certain extend misleading here. The serial queue of LRZ Linux Clusters (CM4) differ from other CoolMUC-4 queues in that regard, that the access is granted to just one single cluster node and that the access to this cluster node is non-exclusive, i.e. might be shared with other cluster users depending on the resource requirements of the job as they are specified in the SLURM script. Nevertheless the launched application can make use of more than just a single CPU core on that cluster node, i.e. apply a certain degree of parallelization - either in shared memory mode or in distributed memory mode (MPI).

In the following, an example of a SLURM job submission script for Abaqus using the TEACHING license type on CoolMUC-4 (SLURM queue = serial) is provided. The example is formulated for the use of an Abaqus floating license of type TEACHING provided by the TUM/UTG license server license6.lrz.de (User-ID authentication for license check-out). Consequently, before using an Abaqus floating license, the license owner needs to have his or her LRZ User-ID registered on the appropriate Dassault Systèmes license server, and the license server information needs to be included in the Abaqus environment file "abaqus_v6.env" accordingly. Since the SLURM script overwrites this Abaqus environment file in the current working directory, any intended changes or additions to the environment file need to be implemented in the user's SLURM script (and not written directly into the Abaqus environment file, as most users might be used to).

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J abaqus_serial
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --cpus-per-task=4
# --- Maximum of 4 CPU cores with Abaqus TEACHING license on serial queue ---
#SBATCH --mem=20g
# --- Realistic assumption for memory requirement of the task and proportional to the used number of CPU cores ---
#SBATCH --get-user-env
#SBATCH --mail-type=NONE
###       mail-type can be either one of (none, all, begin, end, fail,...) 
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:15:00
#----------------------------------
module load slurm_setup

module list
echo "DEBUG ------------------"
module av abaqus
module load abaqus/2025
module list
echo "DEBUG ------------------"

echo =============================== Abaqus Machines List Assembly  ============================
### Create ABAQUS environment file for current job, you can set/add your own options 
### (Python syntax) to the Abaqus environment file being created in the working directory.
### Attention:
###            A user provided abaqus_v6.env file in the working directory will be overwritten!
###            Required user options need to be implemented here in this SLURM script instead! 
env_file=abaqus_v6.env
### ============================================================================================
node_list=$(scontrol show hostname ${SLURM_NODELIST} | sort -u)
mp_host_list="["
for host in ${node_list}; do
    mp_host_list="${mp_host_list}['$host', ${SLURM_CPUS_PER_TASK}],"
done
mp_host_list=$(echo ${mp_host_list} | sed -e "s/,$/]/")

cat << EOF > ${env_file}
mp_host_list = ${mp_host_list}
license_server_type=FLEXNET
academic=TEACHING
abaquslm_license_file="8101@license6.lrz.de"
EOF

echo =============================== Abaqus Start ==============================================
#
echo abq_wrapper.py --job=testfile --double=both --memory=$SLURM_MEM_PER_NODE --cpus=$SLURM_CPUS_PER_TASK
abq_wrapper.py --job=testfile --double=both --memory=$SLURM_MEM_PER_NODE --cpus=$SLURM_CPUS_PER_TASK
#
echo =============================== Abaqus Stop  ==============================================

The license server information "8101@license6.lrz.de" in the above SLURM script needs to be adapted to the license server providing the valid Abaqus licenses. For TUM/UTG users and Abaqus license type TEACHING this license server is "8101@license6.lrz.de" as shown above.

A corresponding example for the usage of an Abaqus RESEARCH license would look like this:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J abaqus_serial
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --cpus-per-task=15
# --- Less than or equal to the maximum number of CPU cores of a single CM4 cluster node with an Abaqus RESEARCH license ---
#SBATCH --mem=90G
# --- Realistic assumption for memory requirement of the task and proportional to the used number of CPU cores ---
#SBATCH --get-user-env
#SBATCH --mail-type=NONE
###       mail-type can be either one of (none, all, begin, end, fail,...) 
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:15:00
#----------------------------------
module load slurm_setup

module list
echo "DEBUG ------------------"
module av abaqus
module load abaqus/2025
module list
echo "DEBUG ------------------"

echo =============================== Abaqus Machines List Assembly  ============================
### Create ABAQUS environment file for current job, you can set/add your own options 
### (Python syntax) to the Abaqus environment file being created in the working directory.
### Attention:
###            A user provided abaqus_v6.env file in the working directory will be overwritten!
###            Required user options need to be implemented here in this SLURM script instead! 
env_file=abaqus_v6.env
### ============================================================================================
node_list=$(scontrol show hostname ${SLURM_NODELIST} | sort -u)
mp_host_list="["
for host in ${node_list}; do
    mp_host_list="${mp_host_list}['$host', ${SLURM_CPUS_PER_TASK}],"
done
mp_host_list=$(echo ${mp_host_list} | sed -e "s/,$/]/")

cat << EOF > ${env_file}
mp_host_list = ${mp_host_list}
license_server_type=FLEXNET
academic=RESEARCH
abaquslm_license_file="8101@license4.lrz.de"
EOF

echo =============================== Abaqus Start ==============================================
#
echo abq_wrapper.py --job=testfile --double=both --memory=$SLURM_MEM_PER_NODE --cpus=$SLURM_CPUS_PER_TASK
abq_wrapper.py --job=testfile --double=both --memory=$SLURM_MEM_PER_NODE --cpus=$SLURM_CPUS_PER_TASK
#
echo =============================== Abaqus Stop  ==============================================


Assuming that the above SLURM script has been saved under the filename "abaqus_serial.sh", the SLURM batch job is submitted by issuing the following command on the CM4 Linux Cluster login node (lxlogin5):

sbatch abaqus_serial.sh
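
The status of the submitted job can then be checked with the usual SLURM commands, e.g. (the cluster name matches the --clusters setting in the script):

> squeue --clusters=serial -u $USER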

CoolMUC-4 : Abaqus Job Submission on LRZ Linux Clusters running SLES15 SP6 using SLURM

Abaqus is provided on the new CoolMUC-4 (CM4) compute nodes with support for the Intel MPI message passing library on CM4 queues (cm4_inter_large_mem, cm4_tiny, cm4_std) with Infiniband interfaces.

Important: You should use the cm4_tiny / cm4_std queues ONLY (!) with the full number of CPU cores per node. This requires large Abaqus simulations with substantially more than 1.5 million DOFs (DOF = degree of freedom) in order to run efficiently and not to waste scarce compute resources in the CM4 queues. If your simulation has fewer DOFs, please run the task in the CM4 serial queue instead (see the example above). Cluster nodes in the cm4_tiny / cm4_std queues are assigned exclusively to the user, and therefore all of the available CPU cores should be utilized.

In the following, an example of a SLURM job submission script for Abaqus using the RESEARCH license type on CoolMUC-4 (SLURM queue = cm4_tiny) is provided. The example is formulated for the use of an Abaqus floating license provided by the LRZ internal license server license4.lrz.de (User-ID authentication for license check-out). Consequently, before using an Abaqus floating license, the license owner needs to have his or her LRZ User-ID registered on the appropriate Dassault Systèmes license server, and the license server information needs to be included in the Abaqus environment file "abaqus_v6.env" accordingly. Since the SLURM script overwrites this Abaqus environment file in the current working directory, any intended changes or additions to the environment file need to be implemented in the user's SLURM script (and not written directly into the Abaqus environment file, as most users might be used to).

Please use this large and powerful compute resource only for tasks which can really make efficient use of the 112 CPU cores and 512 GB of memory per CM4 compute node. Do not use it for rather small tasks, thereby wasting these powerful resources.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J abaqus_cm4
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny
#SBATCH --qos=cm4_tiny
#SBATCH --get-user-env
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=112
# --- Maximum number of CPU cores is 112 for cm4 - Use CM4 resources carefully and efficiently ! ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:15:00
#----------------------------------
module load slurm_setup

export TMPDIR=$SCRATCH_DSS/$USER
echo $TMPDIR

module list
echo "DEBUG ------------------"
module use -p /lrz/sys/share/modules/files_sles15/tools
module load python/3.6_intel
module av abaqus
module load abaqus/2025
module list
echo "DEBUG ------------------"

# cat /proc/cpuinfo

echo =============================== Abaqus Machines List Assembly  ============================
### Create ABAQUS environment file for current job, you can set/add your own options 
### (Python syntax) to the Abaqus environment file being created in the working directory.
### Attention:
###            A user provided abaqus_v6.env file in the working directory will be overwritten!
###            Required user options need to be implemented here in this SLURM script instead! 
env_file=abaqus_v6.env
### ============================================================================================

node_list=$(scontrol show hostname ${SLURM_NODELIST} | sort -u)
mp_host_list="["
for host in ${node_list}; do
    mp_host_list="${mp_host_list}['$host', ${SLURM_NTASKS_PER_NODE}],"
done
mp_host_list=$(echo ${mp_host_list} | sed -e "s/,$/]/")

cat << EOF > ${env_file}
mp_host_list = ${mp_host_list}
license_server_type=FLEXNET
abaquslm_license_file="<your_port>@<your_license_server>"
academic=RESEARCH
EOF

echo =============================== Abaqus Start ==============================================
#
echo python $ABAQUS_LRZBIN/abq_wrapper_python3.py --job=testfile --double=both --memory=470Gb --cpus=$SLURM_NTASKS
python $ABAQUS_LRZBIN/abq_wrapper_python3.py --job=testfile --double=both --memory=470Gb --cpus=$SLURM_NTASKS
#
echo =============================== Abaqus Stop  ==============================================

The license server information "<your_port>@<your_license_server>" in the above SLURM script needs to be adapted to the license server providing the valid Abaqus licenses. For TUM/UTG users this license server is "8101@license4.lrz.de".
Assuming that the above SLURM script has been saved under the filename "abaqus_cm4_tiny.sh", the SLURM batch job is submitted by issuing the following command on the CM4 Linux Cluster login node (lxlogin5):

sbatch abaqus_cm4_tiny.sh

SuperMUC-NG : Abaqus Job Submission on SNG running SLES15 using SLURM

Abaqus has not yet been tested on the SuperMUC-NG.