ANSYS LS-Dyna (FEA & Multiphysics)

LS-Dyna is an advanced general-purpose multiphysics simulation package developed by the Livermore Software Technology Corporation (LSTC), which was acquired by ANSYS Inc. in Q4/2019. LS-Dyna is part of the standard ANSYS software distribution and comes both with an integration into the ANSYS Workbench environment and as a standalone solver executable. While the package covers an ever-growing range of complex, real-world problems, its origins and core competency lie in highly nonlinear transient dynamic finite element analysis (FEA) using explicit time integration.

Getting Started

To use the LS-Dyna solver, the corresponding LS-Dyna environment module has to be loaded (either for interactive work or in the batch script):

> module load lsdyna/2024.R2
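
The set of installed versions changes over time. The available LS-Dyna modules and the currently loaded environment can be listed with the standard module commands (the same commands are used in the batch scripts below):

> module avail lsdyna
> module list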

Simulations that potentially produce a high workload on the computer system (i.e. essentially all CSM/FEM engineering simulations) must be submitted as non-interactive (batch) jobs on the LRZ clusters via the respective job queueing systems. More information can be found here for SLURM (Linux Cluster, SuperMUC-NG).

LS-Dyna Job Submission on LRZ Linux Clusters using SLURM

The LS-Dyna solver can be used in almost the same manner as standalone ANSYS Mechanical (MAPDL) solver simulations using the ANSYS FEA solver. Since ANSYS Release 2022.R2 (November 2022) both the basic LS-Dyna solver license (license key: dyna) and the required LS-Dyna parallel licenses (license key: dysmp) are included in the LRZ campus license for the ANSYS software. On LRZ high-performance computing systems the LS-Dyna licenses are provided free of charge.

In contrast to the mainline ANSYS solver products, the LS-Dyna solver can only be executed on a single CPU core with the basic solver license (1*dyna). Executing the LS-Dyna solver on a higher number of CPU cores (N cores) requires the checkout of (N-1) LS-Dyna HPC licenses ((N-1)*dysmp). For historical reasons and a still existing gap in the ANSYS software integration of this solver, the LS-Dyna HPC licenses differ from the generally applicable ANSYS HPC licenses (anshpc).
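
The license demand for a given core count follows directly from this rule. The following minimal bash sketch (illustrative only; the feature names dyna and dysmp are those quoted above) prints the required counts:

#!/bin/bash
# Minimal sketch: LS-Dyna license features needed for a run on N CPU cores,
# following the rule above: 1 x dyna (base solver) + (N-1) x dysmp (HPC).
NCORES=${1:-112}   # number of CPU cores, e.g. one full CoolMUC-4 node
echo "CPU cores requested : $NCORES"
echo "dyna  licenses      : 1"
echo "dysmp licenses      : $((NCORES - 1))"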

LS-Dyna Job Submission in CoolMUC-4 Serial Queue

The name "serial queue" is to a certain extend misleading here. The serial queue of LRZ Linux Clusters (CM4) differ from other CoolMUC-4 queues in that regard, that the access is granted to just one single cluster node and that the access to this cluster node is non-exclusive, i.e. might be shared with other cluster users depending on the resource requirements of the job as they are specified in the SLURM script. Nevertheless the launched application can make use of more than just a single CPU core on that cluster node, i.e. apply a certain degree of parallelization - either in shared memory mode or in distributed memory mode (MPI).

In the following, an example of a job submission batch script for LS-Dyna on CoolMUC-4 (SLURM queue = serial) in the batch queuing system SLURM is provided.

Please use this large and powerful compute resource with a carefully specified number of CPU cores and a realistically quantified amount of requested node memory per CM4 compute node. Don't waste powerful CM4 compute resources and please be fair to other cluster users.

#!/bin/bash
#SBATCH -o ./myjob.lsdyna.%j.%N.out
#SBATCH -D ./
#SBATCH -J lsdyna_serial
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --cpus-per-task=10
# --- Less than or equal to the maximum number of CPU cores of a single CM4 cluster node ---
#SBATCH --mem=50G
# --- Realistic assumption for the memory requirement of the task, proportional to the used number of CPU cores ---
#SBATCH --get-user-env
#SBATCH --mail-type=NONE
###       mail-type can be one of (none, all, begin, end, fail,...)
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup
 
# Extract from SLURM the hostnames of the assigned cluster nodes and the number of CPU cores to be used per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_CPUS_PER_TASK
done
machines=${machines:1}
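# The resulting string has the form host1:cores:host2:cores... and is passed
# to the -machines option of the lsdyna command further below.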
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines
 
module avail lsdyna
module load lsdyna/2024.R2
module list

echo ========================================== LS-Dyna Start ==============================================
# For later check of the correctness of the supplied ANSYS LS-DYNA command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
#
# Using LS-Dyna on a number of CPU cores with OpenMP thread-based parallelism:
#
echo lsdyna pr=dyna i=my_test.k NCPU=$SLURM_CPUS_PER_TASK
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
lsdyna pr=dyna i=my_test.k NCPU=$SLURM_CPUS_PER_TASK
#
# Using LS-Dyna on a number of CPU cores with Intel-MPI message passing based parallelism:
#
# echo lsdyna pr=dyna i=my_test.k -dp -dis -machines $machines
# lsdyna pr=dyna i=my_test.k -dp -dis -machines $machines
# (-dp and -dis select the double precision and distributed memory (MPP) solver variants, respectively)
#
# Please do not forget to insert here your own *.k file with its correct name!
echo ========================================== LS-Dyna Stop ===============================================

You should store the above listed SLURM script under a filename such as lsdyna_cm4_serial.sh. The shell script then needs to be made executable and can be submitted to the job scheduler using the commands:

chmod 755 lsdyna_cm4_serial.sh
sbatch lsdyna_cm4_serial.sh
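
After submission, the job can be monitored (and, if necessary, cancelled) with the usual SLURM commands. A short sketch, where the cluster name serial matches the --clusters setting of the script above and <jobid> stands for the job ID reported by sbatch:

squeue --clusters=serial --user=$USER
scancel --clusters=serial <jobid>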

LS-Dyna Job Submission on CoolMUC-4 in the cm4_tiny / cm4_std Queues

Since November 2022 the ANSYS Academic Research licenses allow the parallel execution of the LS-Dyna solver, as long as a sufficient number of LS-Dyna HPC licenses (license key: dysmp) is available through the LRZ campus license. For LRZ high-performance computing systems (CoolMUC-4) the LS-Dyna solver and parallel licenses (dyna, dysmp) are provided to Linux Cluster users free of charge.

LS-Dyna is provided on the new CoolMUC-4 (CM4) compute nodes with support for the Intel MPI message passing library in the CM4 queues (cm4_inter_large_mem, cm4_tiny, cm4_std) with Infiniband interconnect.

Important: You should use the cm4_tiny / cm4_std queues ONLY (!) with the full number of CPU cores per node. This requires large LS-Dyna simulations with substantially more than 1.5 million degrees of freedom (DOFs) in order to run efficiently and not to waste scarce compute resources in the CM4 queues. If your simulation has fewer DOFs, then please run the task in the CM4 serial queue instead (see example above). Cluster nodes in the cm4_tiny / cm4_std queues are assigned exclusively to the user, and therefore all of the available CPU cores should be utilized.

For the cm4_tiny queue of the CoolMUC-4 cluster, the corresponding LS-Dyna job script looks like the following.

Please use this large and powerful compute resource only for tasks which can really make efficient use of the 112 CPU cores and 512 GB of node memory per CM4 compute node. Don't use this resource for rather small tasks, thereby wasting these powerful resources.

#!/bin/bash
#SBATCH -o ./job.lsdyna.%j.%N.out
# --- Provide here your own working directory ---
#SBATCH -D ./
#SBATCH -J lsdyna_cm4
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny
# ---- partitions : cm4_tiny | cm4_std
#SBATCH --qos=cm4_tiny
# ---- qos : cm4_tiny | cm4_std
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=112
# --- Maximum number of CPU cores is 112 for cm4 - Use CM4 resources carefully and efficiently ! ---
#SBATCH --mail-type=end
###       mail-type can be one of (none, all, begin, end, fail,...)
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --get-user-env
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup

# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines

module avail lsdyna
module load lsdyna/2024.R2
module list

echo ========================================== LS-Dyna Start ==============================================
# For later check of the correctness of the supplied ANSYS LS-DYNA command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo lsdyna pr=dyna i=my_test.k -dp -dis -machines $machines
lsdyna pr=dyna i=my_test.k -dp -dis -machines $machines
# Please do not forget to insert here your own *.k file with its correct name!
echo ========================================== LS-Dyna Stop ===============================================

You should store the above listed SLURM script under a filename such as lsdyna_cm4_batch.sh. The shell script then needs to be made executable and can be submitted to the job scheduler using the commands:

chmod 755 lsdyna_cm4_batch.sh
sbatch lsdyna_cm4_batch.sh
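
The solver output is redirected to the file specified by the -o directive of the script (job.lsdyna.<jobid>.<node>.out, where <jobid> and <node> are filled in by SLURM from the %j and %N placeholders). The job state and the growing output file can be checked while the job is running, for example:

squeue --clusters=cm4 --user=$USER
tail -f job.lsdyna.<jobid>.<node>.out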