ANSYS computational structural mechanics (CSM) analysis software enables you to solve complex structural engineering problems and make better, faster design decisions. With the finite element analysis (FEA) tools available in the suite, you can customize and automate solutions for your structural mechanics problems. ANSYS structural mechanics software is available in two different environments: ANSYS Workbench (the newer GUI-oriented environment) and ANSYS Mechanical APDL (sometimes called ANSYS Classic, the older script-driven MAPDL environment).

LS-Dyna is an advanced general-purpose multiphysics simulation package developed by the Livermore Software Technology Corporation (LSTC). It is part of the standard ANSYS software distribution and comes both with an integration into the ANSYS Workbench environment and with a standalone LS-Dyna solver executable. While the package covers an ever-growing range of complex, real-world problems, its origins and core competency lie in highly nonlinear transient dynamic finite element analysis (FEA) using explicit time integration.

Getting Started

To use ANSYS Mechanical solutions, the corresponding ANSYS environment module has to be loaded (either for interactive work or in the batch script):

> module load ansys/2022.R1

Simulations that potentially place a high workload on the target computer system (i.e. essentially all CSM/FEM engineering simulations) must be submitted as non-interactive (batch) jobs on the LRZ clusters via the respective job queueing systems. More information can be found here for SLURM (Linux Cluster, SuperMUC-NG).

ANSYS Mechanical Job Submission on LRZ Linux Clusters using SLURM

For the non-interactive execution of ANSYS Mechanical, there are several options. The general command (according to the ANSYS documentation) is:

> ansys [-j jobname]
        [-d device_type]
        [-m work_space]
        [-db database_space]
        [-dir directory]
        [-b [nolist]] [-s [noread]]
        [-p ansys_product] [-g [off]]
        [-custom]
        [< inputfile] [> outputfile]
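A concrete instantiation of this generic syntax might look as follows. The file names are placeholders chosen purely for illustration; since the actual launcher is only available after loading the ANSYS module, the command is only echoed here, following the echo-before-run convention used in the job scripts below:

```shell
# Hypothetical MAPDL batch invocation following the generic syntax above:
# -b enables batch mode, -j sets the jobname, input is read from stdin and
# the listing is redirected to a file. Echoed only, as a syntax illustration:
echo 'ansys -b -j myjob < input.dat > output.out'
```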

However, this does not always work as smoothly as intended. In such cases, you should try to execute the program modules directly. For instance, for MAPDL (here, a SLURM / CoolMUC-3 example is provided):

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
# --- Provide here your own working directory ---
#SBATCH -D ./
#SBATCH -J ansys_mpp3
#SBATCH --clusters=mpp3
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
# --- multiples of 64 for mpp3 ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup

# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines

module avail ansys
module load ansys/2022.R1
module list

# cat /proc/cpuinfo
echo ========================================== ANSYS Start ==============================================
# For later check of the correctness of the supplied ANSYS MAPDL command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# Please do not forget to insert here your own DAT file with its correct name!
echo ========================================== ANSYS Stop ===============================================
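The loop in the script above assembles a colon-separated machines string of the form host:ntasks:host:ntasks:... for MAPDL. A minimal standalone sketch, using made-up hostnames in place of the $SLURM_JOB_NODELIST entries and the 64 tasks per node of CoolMUC-3, shows the resulting format:

```shell
# Rebuild the machines string outside of SLURM, with hypothetical hostnames:
hosts="mpp3r01c01s01 mpp3r01c01s02"   # stand-ins for the SLURM hostname list
ntasks_per_node=64                    # stand-in for $SLURM_NTASKS_PER_NODE
machines=""
for i in $hosts; do
        machines=$machines:$i:$ntasks_per_node
done
machines=${machines:1}                # strip the leading colon
echo $machines
# mpp3r01c01s01:64:mpp3r01c01s02:64
```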

For the CoolMUC-2 cluster, cm2_tiny queue, the corresponding example would look like:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J ansys_cm2_tiny
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --get-user-env
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# ---- multiples of 28 for CoolMUC-2 ----
#SBATCH --mail-type=all
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#-----------------------
module load slurm_setup

# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines

module avail ansys
module load ansys/2022.R1
module list

# cat /proc/cpuinfo
echo ========================================== ANSYS Start ==============================================
# For later check of the correctness of the supplied ANSYS MAPDL command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# Please do not forget to insert here your own DAT file with its correct name!
echo ========================================== ANSYS Stop ===============================================


The somewhat tedious but transparent extraction of the parallel resource information from SLURM is necessary, since the original MAPDL launch script in the ANSYS software wraps the actual call to "mpiexec". Alternatively, you can start the version-specific distributed solver executable "ansysdis201" (here for ANSYS 2020.R1) directly via "mpiexec", i.e. the execution command line in the above SLURM script would have to be replaced by:

mpiexec ansysdis201 -dis -mpi INTELMPI -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out

Please note the missing parallel machine specification (number of nodes, number of tasks per node): it is no longer necessary in this case, because the SLURM queueing system provides this information directly to the "mpiexec" call.

LRZ currently does not support the use of the ANSYS Remote Solver Manager (ANSYS RSM), and thus the batch execution of ANSYS Workbench projects. This is because ANSYS RSM does not support SLURM as a batch queueing system, so the parallel execution of ANSYS Workbench projects and the use of the ANSYS Parameter Manager for parallelized parametric design studies conflict with the concept of operation of the LRZ Linux Cluster and SuperMUC-NG.

ANSYS Mechanical Job Submission on SuperMUC-NG using SLURM

In the following, an example of a job submission batch script for ANSYS Mechanical on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) is provided. Please note that the supported ANSYS versions on SuperMUC-NG are ANSYS 2020.R1 or later. At this time, ANSYS 2021.R2 is the default version.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J ansys_test
#SBATCH --partition=test
# ---- partitions : test | micro | general | fat | large
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
# ---- multiples of 48 for SuperMUC-NG ----
#SBATCH --mail-type=END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#SBATCH --account=<Your_own_project>
#SBATCH --switches=1@24:00:00
#
#########################################################
## Switch to disable energy-aware runtime (if required) :
## #SBATCH --ear=off
#########################################################
module load slurm_setup

machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
echo $machines

module avail ansys
module load ansys/2021.R2
module list

# cat /proc/cpuinfo
echo ========================================== ANSYS Start ==============================================
# For later check of the correctness of the supplied ANSYS Mechanical command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# Please do not forget to insert here your own DAT and OUT file with their correct names!
echo ========================================== ANSYS Stop ===============================================

LS-Dyna Job Submission on LRZ Linux Clusters using SLURM

The LS-Dyna solver can be used in almost the same manner as standalone MAPDL solver simulations with the ANSYS FEA solver. Potential users of LS-Dyna should be aware that LRZ holds a number of serial LS-Dyna licenses, i.e. licenses that allow only serial execution of the LS-Dyna solver (1 task, using just 1 CPU core). With those serial LS-Dyna licenses, the solver can be executed e.g. in the serial SLURM queue by the following script:

#!/bin/bash
#SBATCH -o ./myjob.lsdyna.%j.%N.out
#SBATCH -D ./
#SBATCH -J lsdyna_serial
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --get-user-env
#SBATCH --mem=4096mb
#SBATCH --cpus-per-task=1
#SBATCH --mail-type=NONE
###       mail-type can be either one of (none, all, begin, end, fail,...)
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup
 
# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines
 
module avail ansys
module load ansys/2022.R1
module list

echo ========================================== LS-Dyna Start ==============================================
# For later check of the correctness of the supplied ANSYS LS-DYNA command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo lsdyna I=<...my_lsdyna-example_case...>.k 
lsdyna I=<...my_lsdyna-example_case...>.k
# Please do not forget to insert here your own *.k file with its correct name!
echo ========================================== LS-Dyna Stop ===============================================


Unfortunately, for parallel execution the available ANSYS Academic Research HPC licenses are not applicable in combination with the LS-Dyna solver, since the vendor of LS-Dyna requires dedicated LS-Dyna HPC licenses for this purpose (license feature "dysmp"), which are currently not included in the LRZ licensing pool for the ANSYS software (i.e. there are currently no publicly available "dysmp" license keys). Potential users of parallel LS-Dyna should get in touch with CADFEM GmbH, the ANSYS software vendor, to purchase such licenses for their own purposes. If the software is then intended to run in full parallel mode on an LRZ Linux Cluster, the corresponding LS-Dyna HPC licenses should be hosted on the ANSYS License Server at LRZ (licansys.lrz.de).

Provided that LS-Dyna HPC licenses are available, a corresponding LS-Dyna SLURM submission script for the CoolMUC-3 Linux cluster (mpp3) could look like the following:

#!/bin/bash
#SBATCH -o ./job.lsdyna.%j.%N.out
# --- Provide here your own working directory ---
#SBATCH -D ./
#SBATCH -J lsdyna_mpp3
#SBATCH --clusters=mpp3
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
# --- multiples of 64 for mpp3 ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup

# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines

module avail ansys
module load ansys/2022.R1
module list

echo ========================================== LS-Dyna Start ==============================================
# For later check of the correctness of the supplied ANSYS LS-DYNA command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo lsdyna pr=aa_r_dy i=my_test.k NCPU=$SLURM_NTASKS
lsdyna pr=aa_r_dy i=my_test.k NCPU=$SLURM_NTASKS
# Please do not forget to insert here your own *.k file with its correct name!
echo ========================================== LS-Dyna Stop ===============================================
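The NCPU argument in the last script uses $SLURM_NTASKS, which SLURM sets to the total number of tasks across all allocated nodes. For the header values used in this example (--nodes=2, --ntasks-per-node=64), this works out as:

```shell
# Total task count as SLURM would report it in SLURM_NTASKS for this job:
nodes=2                # from #SBATCH --nodes=2
ntasks_per_node=64     # from #SBATCH --ntasks-per-node=64
echo $((nodes * ntasks_per_node))
# 128
```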