ANSYS CFX (Fluid Dynamics)

ANSYS CFX is a general-purpose Computational Fluid Dynamics (CFD) code. It has been part of the ANSYS software portfolio since 2003 (formerly owned by AEA Technology). As a general-purpose CFD code, ANSYS CFX provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, and heat and mass transfer including conjugate heat transfer (CHT) in solid domains. ANSYS CFX is particularly powerful because of its easy-to-use CEL/CCL command and expression language, which allows implemented physical models to be extended or modified without programming. With CCL and additional user-defined variables, many physical models can be implemented by simply incorporating the required formulas, algebraic expressions and even transport equations into the ANSYS CFX setup, without the need for User-FORTRAN routines.

Further information about ANSYS CFX, licensing of the ANSYS software and related terms of software usage at LRZ, the ANSYS mailing list, access to the ANSYS software documentation and LRZ user support can be found on the main ANSYS documentation page.

Getting Started

Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. the installed versions) of the ANSYS CFX software by:

> module avail cfx

Load the environment module of the preferred ANSYS CFX version, e.g.:

> module load cfx/2024.R1

ANSYS CFX can be used in interactive GUI mode on the Login Nodes solely for the purpose of pre- and/or postprocessing (Linux: SSH option "-Y" for X11 forwarding; Windows: using PuTTY and XMing for X11 forwarding). This interactive usage is mainly intended for quick changes to a simulation setup that require GUI access. Since ANSYS CFX loads the mesh into the login node's memory, this approach is only applicable to comparably small cases. It is NOT permitted to run computationally intensive ANSYS CFX simulations or postprocessing sessions with large memory consumption on the Login Nodes. Alternatively, ANSYS CFX can be run on the Remote Visualization Systems, e.g. for memory-intensive postprocessing with OpenGL acceleration of the GUI.
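
For example, a typical interactive session from a Linux client could look as sketched below; <your-user-id> and <lrz-login-node> are placeholders for your own account and the login node of the cluster you are working on, and the loaded CFX version is only an example.

> ssh -Y <your-user-id>@<lrz-login-node>
> module load cfx/2024.R1
> cfx5launch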

The so-called ANSYS CFX launcher is started by:

> cfx5launch

Alternatively, on Linux systems the standalone ANSYS CFX applications can be started individually by:

      > cfx5pre
or: 
      > cfx5post

Controlling an ANSYS CFX Simulation Run in Batch Mode

It is not permitted to run computationally intensive ANSYS CFX simulations on the front-end Login Nodes or the Remote Visualization Systems, in order not to disturb other LRZ users. However, the ANSYS CFX Solver Manager can be used either to analyze the output file information of a finished CFX run or to monitor an ANSYS CFX simulation that is still running in batch mode on the Linux Cluster or SuperMUC-NG. This is accomplished by starting the corresponding ANSYS CFX application on a Login Node and connecting it to the corresponding simulation run:

> cfx5solve

      --> File Menu --> Monitor Run in Progress --> Select the corresponding "run directory"
      --> File Menu --> Monitor Finished Run --> Select the corresponding *.res file

The ANSYS CFX Solver Manager communicates with a still-running ANSYS CFX simulation via the file output in the run directory. Larger latencies in the job output and file systems can therefore lead to delays in the update of e.g. the output file information or the observed solver monitors. Furthermore, the ANSYS CFX Solver Manager can be used to stop a still-running ANSYS CFX simulation: the simulation is stopped after the next steady-state iteration or timestep has finished, without loss of information, and a *.res and *.out file are written for a potential later restart or postprocessing analysis. The same kind of clean ANSYS CFX stop can be accomplished in a Linux shell as follows:

> cd <your-working-directory>
> cd <your-DEF-filename>_00x.dir
> touch stp

In the code snippet above you need to insert the name of your working directory and the run directory of your current simulation. By creating an empty file named "stp" in the ANSYS CFX run directory with the command "touch stp", ANSYS CFX recognizes this file at the next possible convenience of the code and initiates a clean stop of your simulation run. This should be given preference over cancelling your batch job, since a job cancellation leads to a hard termination of the ANSYS CFX simulation run with loss of the interim simulation results.
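
To verify that the stop request has been recognized, you can watch the end of the solver output file, where ANSYS CFX reports the clean shutdown; as a sketch, assuming the usual naming of the output file (which, depending on the CFX version, is updated in the working directory or in the run directory):

> tail -f <your-DEF-filename>_00x.out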

If you just want to write a RES/BAK file of the current intermediate CFD result without interrupting the ongoing ANSYS CFX simulation, this can be accomplished by starting the ANSYS CFX Solver Manager on a Linux Cluster login node, connecting it to the still-ongoing simulation run as described above, and pressing the provided "Save" icon (Create a backup of the run at the current timestep). ANSYS CFX will write a full BAK file, suitable for postprocessing and/or a solver restart, and will continue with the ongoing simulation.

ANSYS CFX Parallel Execution (Batch Mode)

All parallel ANSYS CFX simulations on the LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the scheduling system (SLURM) into the different pre-defined parallel execution queues. Further information about the batch queuing systems and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG).

For job submission to a batch queuing system a corresponding small shell script needs to be provided, which contains:

  • Batch queueing system specific commands for the job resource definition
  • Module command to load the ANSYS CFX environment module
  • Start command for parallel execution of cfx5solve with all appropriate command line parameters

The syntax and the available command line options for the invocation of the cfx5solve command can be listed by:

> cfx5solve -help

The configuration of the parallel cluster partition (list of node names and corresponding number of cores) is provided to the cfx5solve command by the batch queuing system (SLURM) via the automatically generated environment variable $CFX_HOSTLIST, which is based on the information given by the cluster user in the job resource definition. Furthermore, the environment variable $LRZ_SYSTEM_SEGMENT is predefined by the cluster system and is simply passed through to the cfx5solve command as the description of the parallel start-up method and communication interconnect to be used for the parallel ANSYS CFX simulation run.
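
As a quick sanity check, these two variables can be printed at the top of the job script before cfx5solve is invoked. This is just a sketch; the exact content of $CFX_HOSTLIST (typically a comma-separated list of host names with their assigned number of partitions) depends on the resources requested in the job definition.

# Print the parallel configuration handed over by SLURM/LRZ for later checking in the job output:
echo "CFX_HOSTLIST       = $CFX_HOSTLIST"
echo "LRZ_SYSTEM_SEGMENT = $LRZ_SYSTEM_SEGMENT"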

Furthermore, we recommend that LRZ cluster users write regular backup files for longer simulation runs, which can be used as the basis for a job restart in case of a machine or job failure. A good practice for a 48-hour ANSYS CFX simulation (max. time limit) would be to write backup files every 6 or 12 hours. Further information can be found in the ANSYS documentation in the chapter "ANSYS CFX-Pre User's Guide" (Output Control → User Interface → Backup Tab). It is also recommended to use the "Elapsed Wall Clock Time Control" in the job definition in ANSYS CFX-Pre (Solver Control → Elapsed Wall Clock Time Control → Maximum Run Time → <48h or max. queue time limit respectively). When setting this wall clock time limit, please plan enough time buffer for the writing of output and results files, which can be a time-consuming task depending on your application.
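
As an alternative to setting the maximum run time in CFX-Pre, many CFX versions also accept an elapsed-time limit directly on the solver command line. The sketch below assumes the "-maxet" option with CFX unit syntax; please verify the exact option name and syntax with "cfx5solve -help" for your installed version.

# Sketch: request a clean solver stop after 47 hours of wall clock time, leaving a buffer
# before a 48 h queue limit for writing output and results files.
# The "-maxet" option and its unit syntax are assumptions; verify with "cfx5solve -help".
cfx5solve -def StaticMixer.def -double \
          -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" \
          -maxet "47 [hr]"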

CoolMUC-2 : ANSYS CFX Job Submission on LRZ Linux Clusters running SLES15 using SLURM

In the following, an example of a SLURM job submission script for ANSYS CFX on CoolMUC-2 (SLURM queues: cm2_tiny | cm2_std | cm2_large) is provided:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J cfx_cm2_std
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std
#SBATCH --qos=cm2_std
#SBATCH --get-user-env
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=28
# --- multiples of 28 for cm2 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

module avail cfx
module load cfx/2023.R2
module list 

# cat /proc/cpuinfo
echo ========================================== CFX Start ==============================================
# For later check of the correctness of the supplied ANSYS CFX command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def
cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def
# Please do not forget to insert here your own DEF file with its correct name!
echo ========================================== CFX Stop ===============================================

Assuming that the above SLURM script has been saved under the filename "cfx_cm2_std.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch cfx_cm2_std.sh
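
The status of the submitted job can then be checked with the usual SLURM commands on the login node, e.g. (the cluster name corresponds to the --clusters setting of the job script):

> squeue --clusters=cm2 --user=$USER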

Correspondingly for the cm2_tiny queue the job script would look like:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J cfx_cm2_tiny
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --get-user-env
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# --- multiples of 28 for cm2 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

module avail cfx
module load cfx/2023.R2
module list 

# cat /proc/cpuinfo
echo ========================================== CFX Start ==============================================
# For later check of the correctness of the supplied ANSYS CFX command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def
cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def
# Please do not forget to insert here your own DEF file with its correct name!
echo ========================================== CFX Stop ===============================================

Remark: At the moment, ANSYS CFX versions from 2021.R1 to 2023.R2 are known to work with Intel MPI on CM2.

Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper invoked by the cfx5solve startup script. Also, do not try to change the default Intel MPI to any other MPI version to run ANSYS CFX in parallel. On the LRZ cluster systems only Intel MPI is supported and known to work properly with ANSYS CFX.

CoolMUC-3 : ANSYS CFX Job Submission on LRZ Linux Clusters running SLES12 using SLURM

In the following, an example of a SLURM job submission script for ANSYS CFX on CoolMUC-3 (SLURM queue: mpp3) is provided:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J cfx_mpp3_slurm
#SBATCH --clusters=mpp3
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
# --- multiples of 64 for mpp3 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

module avail cfx
module load cfx/2023.R2
module list 

# cat /proc/cpuinfo
echo ========================================== CFX Start ==============================================
# For later check of the correctness of the supplied ANSYS CFX command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def
cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def
# Please do not forget to insert here your own DEF file with its correct name!
echo ========================================== CFX Stop ===============================================

Assuming that the above SLURM script has been saved under the filename "cfx_mpp3_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch cfx_mpp3_slurm.sh

Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper invoked by the cfx5solve startup script. Also, do not try to change the default Intel MPI to any other MPI version to run ANSYS CFX in parallel. On the LRZ cluster systems only Intel MPI is supported and known to work properly with ANSYS CFX.

CoolMUC-4 : ANSYS CFX Job Submission on LRZ Linux Clusters running SLES15 SP4 using SLURM

In the following, an example of a SLURM job submission script for ANSYS CFX on CoolMUC-4 (SLURM queue = inter | partition = cm4_inter_large_mem) is provided.

Please use this large and powerful compute resource only for tasks which can really make efficient use of the 80 cores and 512 GB of memory per CM4 compute node. Do not use this resource for comparatively small tasks, thereby wasting this powerful resource.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J cfx_cm4
#SBATCH --clusters=inter
#SBATCH --partition=cm4_inter_large_mem
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=80
# --- multiples of 80 for cm4 | 512 GB memory per compute node available ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

# Please mind the new Scratch filesystem on CM4; the legacy $SCRATCH is not available
echo $SCRATCH_DSS

module avail cfx
module load cfx/2024.R1
module list 

# cat /proc/cpuinfo
echo ========================================== CFX Start ==============================================
# For later check of the correctness of the supplied ANSYS CFX command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def
cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def
# Please do not forget to insert here your own DEF file with its correct name!
echo ========================================== CFX Stop ===============================================

Assuming that the above SLURM script has been saved under the filename "cfx_cm4_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch cfx_cm4_slurm.sh

Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper invoked by the cfx5solve startup script. Also, do not try to change the default Intel MPI to any other MPI version to run ANSYS CFX in parallel. On the LRZ cluster systems only Intel MPI is supported and known to work properly with ANSYS CFX.

SuperMUC-NG : ANSYS CFX Job Submission on SNG using SLURM

In the following, an example of a SLURM job submission script for ANSYS CFX on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) is provided. Please note that the supported ANSYS versions on SuperMUC-NG are ANSYS 2020.R1 or later. At this time, ANSYS 2024.R1 is the default version.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J cfx_sng
#SBATCH --partition=test
# ---- test | micro | general | large | fat
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
# ---- multiples of 48 for SuperMUC-NG ----
#SBATCH --mail-type=END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#SBATCH --account=<your_SNG_project>
################################################################# 
## switch to enforce execution on a single island (if required) : 
## #SBATCH --switches=1@24:00:00
#################################################################
#
#################################################################
## switch to disable energy-aware runtime (if required) :
## #SBATCH --ear=off
#################################################################
 
module load slurm_setup

module av cfx
module load cfx/2024.R1
module list 

# cat /proc/cpuinfo
echo ========================================== CFX Start ============================================== 
# For later check of the correctness of the supplied ANSYS CFX command it is echoed to stdout, 
# so that it can be reviewed afterwards in the job file: 
echo cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def 
cfx5solve -v -par-dist $CFX_HOSTLIST -start-method "$LRZ_SYSTEM_SEGMENT" -double -def StaticMixer.def 
# Please do not forget to insert here your own DEF file with its correct name! 
echo ========================================== CFX Stop ===============================================

Assuming that the above SLURM script has been saved under the filename "cfx_sng_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the SuperMUC-NG login nodes:

sbatch cfx_sng_slurm.sh