ANSYS System Coupling

Introduction

ANSYS software provides the capability to couple two or more of the ANSYS solvers for different physical subsystems (fluids, mechanical, thermal, EMAG, etc.) in a single simulation. The framework commonly used for such coupled-physics simulations is called ANSYS System Coupling (SC). SC is usually accessed through ANSYS Workbench, where the participating solvers for the physical subsystems are represented by their corresponding systems on the ANSYS Workbench Schematic and are connected by "wires" according to the coupling intended by the user (1-way coupling, 2-way coupling, steady-state, transient, etc.). System Coupling (SC) boundary conditions have to be defined in all participating solver systems in order to provide the data required for the SC data exchange.

One particular type of System Coupling is the so-called Fluid-Structure Interaction (FSI), where the solution of a fluid system is coupled with a structural mechanics simulation. Possible applications include, but are not limited to:

  • Structural response to fluid motion: the fluid flow simulation provides the pressure force acting on the structure, and the structural mechanics simulation provides the resulting displacement of the fluid flow boundary in response to that acting pressure force. Typical applications of this type of SC are, e.g., the analysis of vortex-induced vibrations or of strong deformations of rather flexible solid structures under the influence of a moving fluid.
  • SC can also be used to simulate the thermal coupling between, e.g., a heated/cooled solid region and an adjacent fluid flow region. This use of SC is generally not recommended, since a purely thermal coupling between a fluid and a solid region can be simulated more efficiently with the ANSYS fluid solver alone by modeling the solid region as a Conjugate Heat Transfer (CHT) zone, i.e. by solving the energy equation in the solid region within the fluid solver together with the remaining hydrodynamic equation system.
  • Furthermore, SC can be applied to the coupling of a fluid flow solver with a solver for the electromagnetic field equations, e.g. for EMHD (Electro-Magnetic Hydrodynamics) applications such as Joule heating of a fluid (e.g. liquid metal) in strong electromagnetic fields. This requires the installation of the ANSYS Electromagnetics software packages in addition to the standard ANSYS software installation (which mainly contains the packages required for CFD and CSM applications).

ANSYS SC on LRZ Linux Clusters

LRZ Linux Clusters are operated under the control of the SLURM scheduler, and therefore ANSYS Workbench in graphical mode is not available for the execution of ANSYS simulation tasks. Consequently, for FSI applications as well, the user has to formulate and submit the intended simulation task as a script-driven, command-line oriented workflow. Fortunately, ANSYS provides the SC framework in such a way that a system coupling application previously set up in the GUI-oriented ANSYS Workbench can be converted into a Python-script-driven workflow, which can be encapsulated in a SLURM batch job submission script with reasonably low effort. Corresponding tutorials are provided by ANSYS on the ANSYS Help website in the section on System Coupling (SC) tutorials.
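
For orientation, such a driving script typically registers the participating solvers from the setup files they export, defines the coupled interface with its data transfers, and then starts the coupled solution. The following is only a rough sketch of this structure for an MAPDL/CFX FSI case; the setup file names, region names and variable names are placeholders, and the exact command set is release-dependent (see the SC tutorials and command reference in the ANSYS Help):

# Rough sketch of a System Coupling driving script (e.g. runCFX.py).
# All file, region and variable names below are placeholders and must be
# adapted to the actual case; consult the SC command reference of your release.

# Register the participants from the setup files exported by the solvers.
mapdl_name = AddParticipant(InputFile='mapdl.scp')
cfx_name = AddParticipant(InputFile='cfx.scp')

# Couple the FSI interface regions of the two participants.
interface_name = AddInterface(SideOneParticipant=mapdl_name,
                              SideOneRegions=['FSIN_1'],
                              SideTwoParticipant=cfx_name,
                              SideTwoRegions=['FSI Interface'])

# Exchange forces (fluid -> structure) and displacements (structure -> fluid).
AddDataTransfer(Interface=interface_name, TargetSide='One',
                SideOneVariable='FORC', SideTwoVariable='Total Force')
AddDataTransfer(Interface=interface_name, TargetSide='Two',
                SideOneVariable='INCD', SideTwoVariable='Total Mesh Displacement')

# Start the coupled solution.
Solve()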

Once the SC simulation has been reformulated as a driving Python script with subsequent calls to the ANSYS MAPDL and ANSYS CFX / ANSYS Fluent solvers, the remaining task is to execute this Python script from within a SLURM submission script and to pass the SLURM-reserved resources of the LRZ Linux Cluster dynamically to the invoked ANSYS CFD/CSM solvers, with a reasonable distribution of the computational resources between the two solvers according to their relative workload in the overall FSI simulation.

An example of such an ANSYS SC submission script for the SLURM scheduler on, e.g., the CoolMUC-4 Linux Cluster is provided here:

#!/bin/bash
#SBATCH -o ./job.fsi.%j.%N.out
#SBATCH -D ./
#SBATCH -J fsi_cm4_tiny
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=112
# --- Maximum of 112 CPU cores per CM4 compute node ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --get-user-env
#SBATCH --export=NONE
#SBATCH --time=2:00:00
##############################################################################
## switch to enforce execution on a single island (required for SuperMUC-NG) :
#SBATCH --switches=1@24:00:00
##############################################################################

module load slurm_setup

module load cfx/2024.R2
module load ansys/2024.R2

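# Build a fresh machine list for System Coupling: one line per reserved node
# with its hostname and the number of MPI tasks on that node. Remote nodes are
# listed with the 'ib' hostname suffix (typically resolving to the node's
# InfiniBand interface).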
if [ -f "./slurm_machines.txt" ]; then
    rm ./slurm_machines.txt
fi

localmachine=$(hostname)
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
  if [ "$i" == "$localmachine" ]; then
    machines="$i $SLURM_NTASKS_PER_NODE"
  else
    machines="${i}ib $SLURM_NTASKS_PER_NODE"
  fi
  echo $machines >> ./slurm_machines.txt
done
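# Copy the prepared CFX host list from $TMPDIR into the CFX working directory,
# where CFX expects it as hostinfo.ccl.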
cp $TMPDIR/cfxhosts.$SLURM_JOB_ID ./CFX/hostinfo.ccl

echo ======================================= ANSYS SC Start ==============================================
# For ANSYS Release <= 2021.R2: 
# echo "$SYSC_ROOT/bin/systemcoupling" --mpi=intel -p=ib -R runCFX.py
# "$SYSC_ROOT/bin/systemcoupling" --mpi=intel -p=ib -R runCFX.py
#
# For ANSYS Release >= 2022.R1:
echo "$SYSC_ROOT/bin/systemcoupling" --mpi=intel -p=infiniband -R runCFX.py
"$SYSC_ROOT/bin/systemcoupling" --mpi=intel -p=infiniband -R runCFX.py
echo ======================================= ANSYS SC Stop ===============================================

A corresponding SLURM submission script can easily be formulated for an FSI simulation with the ANSYS Mechanical and ANSYS Fluent solvers as participants as well.

The finally invoked runCFX.py or runFluent.py Python scripts are highly case-specific. They need to translate the information about the computational resources reserved by SLURM on the Linux Cluster at runtime into the ANSYS SC Python data structures (the ANSYS SC data model) expected by ANSYS System Coupling (SC). Furthermore, for ANSYS Fluent some specific solver arguments need to be provided, so that the ANSYS Fluent solver is started on the LRZ Linux Clusters with enforced usage of Intel MPI and the corresponding settings for the communication network of the cluster.
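
As an illustration of this translation step, the sketch below reads the slurm_machines.txt file written by the submission script above and hands the machine list over to System Coupling, together with an exemplary workload split between the two participants. The participant names, the chosen fractions and the 'SharedAllocateMachines' partitioning algorithm are assumptions that have to be adapted to the actual case and ANSYS release; for Fluent, additionally check the participant's execution control settings in the SC data model for passing the Intel MPI and interconnect options.

# Sketch: pass the SLURM-reserved machines to System Coupling (e.g. in runCFX.py).
# Participant names, fractions and the partitioning algorithm are examples only.

machine_list = []
with open('slurm_machines.txt') as machine_file:
    for line in machine_file:
        host, cores = line.split()
        machine_list.append({'machine-name': host, 'core-count': int(cores)})

# 'MAPDL-1' and 'CFX-2' stand for the participant names returned by AddParticipant().
# Give e.g. one quarter of the reserved cores to the structural solver and the
# rest to the fluid solver, reflecting their relative workload.
PartitionParticipants(AlgorithmName='SharedAllocateMachines',
                      MachineList=machine_list,
                      NamesAndFractions=[('MAPDL-1', 0.25), ('CFX-2', 0.75)])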

With ANSYS 2022.R1, changes to the underlying ANSYS System Coupling data model were introduced, which slightly affect the syntax of some formerly used data model options and consequently lead to corresponding changes in the runCFX.py or runFluent.py scripts. The ANSYS documentation provides more information on how the structure and syntax of the SC data model commands have changed in this release.

Known Limitations

ANSYS System Coupling is known to run well on all LRZ Linux Cluster systems based on NVIDIA/Mellanox InfiniBand interconnects, i.e. CoolMUC-4 in the cm4_tiny | cm4_std queues.

ANSYS System Coupling is also known to run well on Linux Clusters based on the Intel Omni-Path interconnect (CM3-based housing cluster systems like HTRP, HTTF and others, SuperMUC-NG) for ANSYS releases newer than 2023.R1.

How to proceed from here and how to get started

Interested users of the LRZ Linux Cluster systems and/or SuperMUC-NG should get in contact with the HPC Application Support Team at LRZ, either through the LRZ Service Desk or by contacting Dr. Thomas Frank. They should discuss their ANSYS System Coupling (SC) application of interest with the LRZ HPC/APP CFD-Lab team, which will then provide further advice and documentation, in particular adapted ANSYS tutorial material and templates for the driving Python scripts used in the SLURM submission script listed above. Unfortunately, this material cannot be documented generically in a way that fits every intended simulation purpose in the scope of system coupling; it has therefore been decided to provide it only after a first face-to-face discussion with the respective users.