ANSYS Mechanical (CSM)

ANSYS computational structural mechanics (CSM) analysis software enables you to solve complex structural engineering problems and make better, faster design decisions. With the finite element analysis (FEA) tools available in the suite, you can customize and automate solutions for your structural mechanics problems. ANSYS structural mechanics software is available in two different software environments - ANSYS Workbench (the newer GUI-oriented environment) and ANSYS Mechanical APDL (sometimes called ANSYS Classic, the older MAPDL scripted environment).

Getting Started

To use ANSYS Mechanical, the corresponding ANSYS environment module must be loaded (either for interactive work or within the batch script):

> module load ansys/2024.R2
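
If you are unsure which ANSYS versions are installed on the cluster, you can list the available modules first; a short sketch (the version shown is only an example and may differ):

> module avail ansys
> module load ansys/2024.R2
> module list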

Simulations that potentially generate a high workload on the compute system (i.e. essentially all CSM/FEM engineering simulations) must be submitted as non-interactive (batch) jobs on the LRZ clusters via the respective job queueing system. More information on SLURM (Linux Cluster, SuperMUC-NG) can be found here.

ANSYS Mechanical Job Submission on LRZ Linux Clusters using SLURM

For the non-interactive execution of ANSYS Mechanical, there are several options. The general command (according to the ANSYS documentation) is:

> ansys [-j jobname]
        [-d device_type]
        [-m work_space]
        [-db database_space]
        [-dir directory]
        [-b [nolist]] [-s [noread]]
        [-p ansys_product] [-g [off]]
        [-custom]
        [< inputfile] [> outputfile]

However, this does not always work as smoothly as intended. In such cases, you should try to execute the program mapdl directly, as sketched below.
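
A minimal sketch of such a direct call, assuming a batch-mode run with placeholder file names (input.dat, run1) and four CPU cores on the local node:

> module load ansys/2024.R2
> mapdl -b -np 4 -j run1 -i ./input.dat -o ./run1.out

Here -b selects batch mode, -np sets the number of CPU cores, -j sets the jobname, and -i / -o specify the input and output files.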

ANSYS Mechanical Job Submission in CoolMUC-4 Serial Queue

The name "serial queue" is to a certain extend misleading here. The serial queue of LRZ Linux Clusters (CM4) differ from other CoolMUC-4 queues in that regard, that the access is granted to just one single cluster node and that the access to this cluster node is non-exclusive, i.e. might be shared with other cluster users depending on the resource requirements of the job as they are specified in the SLURM script. Nevertheless the launched application can make use of more than just a single CPU core on that cluster node, i.e. apply a certain degree of parallelization - either in shared memory mode or in distributed memory mode (MPI).

In the following, an example of a job submission batch script for ANSYS Mechanical on CoolMUC-4 (SLURM queue = serial) is provided.

Please use this large and powerful compute resource with a carefully chosen number of CPU cores and a realistic amount of requested node memory per CM4 compute node. Do not waste CM4 compute resources, and please be fair to other CM4 cluster users.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
# --- Provide here your own working directory ---
#SBATCH -D ./
#SBATCH -J ansys_serial
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --cpus-per-task=15
# --- Less than or equal to the maximum number of CPU cores of a single CM4 cluster node ---
#SBATCH --mem=50G
# --- Realistic estimate of the memory requirement of the task, proportional to the number of CPU cores used ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --get-user-env
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup

# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_CPUS_PER_TASK
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines

module avail ansys
module load ansys/2024.R2
module list

# cat /proc/cpuinfo
echo ========================================== ANSYS Start ==============================================
# For a later check of the correctness of the supplied ANSYS MAPDL command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# Please do not forget to insert here your own DAT file with its correct name!
echo ========================================== ANSYS Stop ===============================================

You should store the above SLURM script under a filename such as ansys_cm4_serial.sh. The shell script then needs to be made executable and submitted to the job scheduler with the following commands:

chmod 755 ansys_cm4_serial.sh
sbatch ansys_cm4_serial.sh
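
Once the job has been submitted, its status can be checked with the usual SLURM tools; a short sketch (the cluster name corresponds to the --clusters setting in the script above):

squeue --clusters=serial --user=$USER

The job output is written to the file specified by the -o directive, here myjob.<jobid>.<nodename>.out.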

ANSYS Mechanical Job Submission on CoolMUC-4 in the cm4_tiny / cm4_std Queues

ANSYS Mechanical is provided on the new CoolMUC-4 (CM4) compute nodes with support for the Intel MPI message-passing library in the CM4 queues (cm4_inter_large_mem, cm4_tiny, cm4_std) with InfiniBand interconnect.

Important: You should use the cm4_tiny / cm4_std queues ONLY (!) with the full number of CPU cores per node. Running efficiently at this scale requires large ANSYS Mechanical simulations with substantially more than 1.5 million degrees of freedom (DOFs); otherwise scarce compute resources in the CM4 queues are wasted. If your simulation has fewer DOFs, please run it in the CM4 serial queue instead (see the example above). Cluster nodes in the cm4_tiny / cm4_std queues are assigned exclusively to the user, and therefore all available CPU cores should be utilized.

Please use this large and powerful compute resource only for tasks that can really make efficient use of the 112 cores and 512 GB of node memory per CM4 compute node. Do not use it for small tasks, thereby wasting these powerful resources.

For the CoolMUC-4 cluster, cm4_tiny queue, the corresponding example would look like:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J ansys_cm4
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny
# ---- partitions : cm4_tiny | cm4_std
#SBATCH --qos=cm4_tiny
# ---- qos : cm4_tiny | cm4_std
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=112
# --- Maximum number of CPU cores is 112 for cm4 - Use CM4 resources carefully and efficiently ! ---
#SBATCH --mail-type=all
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --get-user-env
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#-----------------------
module load slurm_setup

# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines

module avail ansys
module load ansys/2024.R2
module list

# cat /proc/cpuinfo
echo ========================================== ANSYS Start ==============================================
# For a later check of the correctness of the supplied ANSYS MAPDL command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# Please do not forget to insert here your own DAT file with its correct name!
echo ========================================== ANSYS Stop ===============================================

You should store the above SLURM script under a filename such as ansys_cm4_batch.sh. The shell script then needs to be made executable and submitted to the job scheduler with the following commands:

chmod 755 ansys_cm4_batch.sh
sbatch ansys_cm4_batch.sh
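
While the job is running, the solver progress can be followed in the MAPDL output file, and the job can be cancelled if necessary; a brief sketch (the output file name corresponds to the -o option of the mapdl command in the script above, and <jobid> is a placeholder):

# Follow the MAPDL output as it is written:
tail -f file.out
# Cancel the job if required (replace <jobid> with the actual SLURM job ID):
scancel --clusters=cm4 <jobid>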

The somewhat tedious but transparent extraction of the parallel resource information from SLURM is necessary, because the original MAPDL launcher script shipped with the ANSYS software wraps the actual call to "mpiexec" and therefore needs the host list to be passed explicitly via the -machines option.
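
For illustration, for a hypothetical two-node job with 112 tasks per node, the loop shown in the scripts assembles a -machines string of the following colon-separated hostname:cores form (the host names below are placeholders):

node0001:112:node0002:112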

LRZ currently does not support the use of the ANSYS Remote Solver Manager (ANSYS RSM), and thus the batch execution of ANSYS Workbench projects. ANSYS RSM does not support SLURM as a batch queueing system, so the parallel execution of ANSYS Workbench projects and the use of the ANSYS Parameter Manager for parallelized parametric design studies conflict with the concept of operation of the LRZ Linux Cluster and SuperMUC-NG.

ANSYS Mechanical Job Submission on SuperMUC-NG using SLURM

In the following, an example of a job submission batch script for ANSYS Mechanical on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) is provided.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J ansys_test
#SBATCH --partition=test
# ---- partitions : test | micro | general | fat | large
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
# ---- 48 for SuperMUC-NG ----
#SBATCH --mail-type=END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#SBATCH --account=<Your_own_project>
#################################################################
## switch to enforce execution on a single island (if required) :
#SBATCH --switches=1@24:00:00
#################################################################
#
#################################################################
## Switch to disable energy-aware runtime (if required) :
## #SBATCH --ear=off
#################################################################

module load slurm_setup

machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
echo $machines

module avail ansys
module load ansys/2024.R2
module list

# cat /proc/cpuinfo
echo ========================================== ANSYS Start ==============================================
# For a later check of the correctness of the supplied ANSYS MAPDL command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file.
#
# For all versions newer than 2022.R2 please set:
echo mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# Please do not forget to insert here your own DAT and OUT files with their intended and correct names!
echo ========================================== ANSYS Stop ===============================================

You should store the above SLURM script under a filename such as ansys_sng_batch.sh. The shell script then needs to be made executable and submitted to the job scheduler with the following commands:

chmod 755 ansys_sng_batch.sh
sbatch ansys_sng_batch.sh