ANSYS Fluent (Fluid Dynamics)

ANSYS Fluent is a general-purpose Computational Fluid Dynamics (CFD) code. It has been part of the ANSYS software portfolio since 2007 (formerly owned by Fluent, Inc.). As a general-purpose CFD code, ANSYS Fluent provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, as well as heat and mass transfer including conjugate heat transfer (CHT) in solid domains. Since ANSYS Fluent R19.2 some basic expression language capabilities are provided, but most advanced user customization still requires programming a user-defined function (UDF) in the C language.

Further information about ANSYS Fluent, licensing of the ANSYS software and related terms of software usage at LRZ, the ANSYS mailing list, access to the ANSYS software documentation and LRZ user support can be found on the main ANSYS documentation page.

Getting Started

Before trying to start ANSYS Fluent on an LRZ HPC system, please follow the steps outlined in the paragraph "6. SSH User Environment Settings". ANSYS Fluent is known not to run in distributed parallel mode on the LRZ Linux Clusters or SuperMUC-NG without properly generated passphrase-free SSH keys.
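The authoritative procedure is the one in the referenced paragraph; as a minimal sketch, a passphrase-free key can be generated and authorized as follows (key type and file names are common defaults, not LRZ-specific requirements):

# Generate a passphrase-free SSH key pair (empty passphrase via -N ""):
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# Authorize the public key for password-less logins between cluster nodes:
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys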

Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. installation) of ANSYS Fluent software by:

> module avail fluent

Load the preferred ANSYS Fluent version environment module, e.g.:

> module load fluent/2024.R1

In contrast to ANSYS CFX, the ANSYS Fluent software does not consist of a number of separate applications for preprocessing, solver and postprocessing purposes. Instead, ANSYS Fluent is designed as a monolithic application which embeds all of these tasks in a single-window GUI and even provides meshing capabilities (formerly known as TGRID or Fluent Meshing).

One can use ANSYS Fluent in interactive GUI mode on the Login Nodes solely for serial pre- and/or postprocessing (Linux: SSH option "-Y" for X11 forwarding; Windows: using PuTTY and Xming for X11 forwarding). This interactive usage is mainly intended for quick simulation setup changes which require GUI access. Since ANSYS Fluent loads the full mesh into the login node's memory, this approach is only applicable to comparably small cases. It is NOT permitted to run computationally intensive ANSYS Fluent simulation runs or serial/parallel postprocessing sessions with large memory consumption on Login Nodes. Alternatively, ANSYS Fluent can be run on the Remote Visualization Systems, e.g. for memory-intensive postprocessing and OpenGL acceleration of the GUI.

The so-called ANSYS Fluent launcher is started by:

> fluent

where you have to specify whether you intend to work (see the examples after the list below):

  • on a 2-dimensional or a 3-dimensional case
  • in single or double precision
  • in meshing mode
  • in serial or parallel mode (on the Login Nodes of the LRZ cluster systems only serial mode or a very limited parallelization is permitted; see the restrictions below)
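For illustration, a few typical invocations are sketched below; the mode keywords and flags should be verified against "fluent -help" of your installed release:

# 3-dimensional, double precision, serial GUI session (pre-/postprocessing on a login node):
fluent 3ddp &
# 2-dimensional, single precision:
fluent 2d &
# Meshing mode (Fluent Meshing):
fluent 3d -meshing &
# 3D double precision with a small parallel partition (observe the login node restrictions below):
fluent 3ddp -t4 &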

It is not permitted to run computationally intensive ANSYS Fluent simulations on the front-end Login Nodes, in order not to disturb other LRZ users. A mode of operation similar to the ANSYS CFX Solver Manager monitoring mode, i.e. for monitoring a still running parallel task on a cluster system or for an "a posteriori" analysis of a finished simulation run, unfortunately does not exist for ANSYS Fluent. A graphical visualization of solver monitors therefore has to be realized by the user outside of the ANSYS Fluent environment, e.g. by Python scripting, MS Excel or similar tools based on the monitor files and the captured ANSYS Fluent output.

It might be of interest, however, that ANSYS Fluent can be run in parallel mode in order to use the built-in ANSYS Fluent postprocessing. This can be done on compute nodes in batch mode, if the postprocessing is scripted using either the TUI command language or the PyFluent API, as sketched below.
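As a hedged sketch (the journal file name is a placeholder), such a scripted parallel postprocessing run reuses the same command pattern as the solver jobs shown further below, only with a journal file containing postprocessing commands instead of solver iterations:

# Parallel batch-mode postprocessing on compute nodes (inside a SLURM job script);
# choose the -pib.* interconnect flag from the table further below:
fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -gu -driver null -i postprocessing_only.jou > postprocessing_only.out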

ANSYS Fluent on Linux Cluster and SuperMUC-NG Login Nodes

ANSYS Fluent is a very resource-intensive application with respect to both main memory (RAM) and CPU resources! Please run ANSYS Fluent on login nodes with greatest care and under your supervision (e.g. using the command "top" + <Ctrl>-M in a second terminal window)!

In particular, involving multiple ANSYS Fluent processes / parallelization can cause a high load on the login node and has the potential to massively disturb other users on the same system! Running ANSYS Meshing on login nodes can easily lead to overload and make the login node unresponsive, so that a reboot of the machine is required. Be careful!

ANSYS Fluent applications that cause a high load on login nodes and disturb other users or the general operation of the login node will be terminated by system administrators without any prior notification!

Our recommendations on the login nodes:

  • Running multiple instances of ANSYS Fluent by the same user is prohibited!
    Please run only one instance of the software on a particular login node at any time!
  • A single instance of ANSYS Fluent may be run on login nodes solely for the purpose of pre- and/or postprocessing. No ANSYS Fluent simulations whatsoever are allowed on login nodes.
  • The maximum allowed degree of ANSYS Fluent parallelization on login nodes is 4 CPU cores!
    Any ANSYS Fluent instance using a higher degree of parallelization on login nodes will be terminated by system administrators without any prior notification!
  • When using up to 4 cores of ANSYS Fluent parallelization on a login node, you need to switch ANSYS Fluent to Open MPI (see the sketch after this list). The default Intel MPI does not work on login nodes due to conflicts with SLURM.
  • Please check the load and memory consumption of your own ANSYS Fluent session. Usually, you can do this via the "top" command, e.g.:
    top -u $USER
    Pressing <Ctrl>-M sorts the displayed process list by the amount of memory consumed per process.
  • If a graphical user interface is needed, then you may run a single instance of ANSYS Fluent via a VNC session (VNC Server on Login-Nodes) to increase the performance and responsiveness of the GUI!
  • If a graphical user interface is not needed, it is advised to run ANSYS Fluent via an interactive SLURM job or in batch mode under SLURM control.
    These jobs run on compute nodes. A high degree of parallelization is explicitly allowed there, as long as ANSYS Fluent is run efficiently (rule of thumb: at least approx. 10.000 mesh elements per CPU core)!
    Do not over-parallelize ANSYS Fluent simulations. HPC resources that are not used effectively are wasted and thereby taken away from other users.
  • Repeated violation of the above-mentioned restrictions on ANSYS Fluent usage on login nodes might result in a ban of the affected user account and a notification to the scientific supervisor/professor. 
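As referenced in the Open MPI item above, a hedged sketch of a small pre-/postprocessing session on a login node could look as follows (the -mpi=openmpi flag name should be verified against "fluent -help" of your installed release):

# Small interactive session on a login node: at most 4 cores and Open MPI instead of the default Intel MPI
module load fluent/2024.R1
fluent 3ddp -t4 -mpi=openmpi &
# In a second terminal: keep an eye on load and memory consumption
top -u $USER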

ANSYS Fluent Parallel Execution (Batch Mode)

All parallel ANSYS Fluent simulations on the LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the appropriate scheduling system (SLURM) into the different pre-defined parallel execution queues. Further information about the batch queuing systems and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG).

For job submission to a batch queuing system a corresponding small shell script needs to be provided, which contains:

  • Batch queueing system specific commands for the job resource definition
  • Module command to load the ANSYS Fluent environment module
  • Start command for parallel execution of fluent with all appropriate command line parameters
  • Reference to a small ANSYS Fluent journal file (*.jou), which is used to control the execution of ANSYS Fluent with the provided CAS file, since ANSYS Fluent CAS files do not contain a solver control section.

The syntax and the available command line options of the fluent command can be listed with:

> fluent -help

The configuration of the parallel cluster partition (list of node names and the corresponding number of cores) is provided to the fluent command by the batch queuing system (SLURM) via the automatically generated environment variable $FLUHOSTS, based on the job resource definition specified by the cluster user. Furthermore, the environment variables $SLURM_NTASKS or $LOADL_TOTAL_TASKS are predefined by the cluster and batch queuing system; this information is simply passed on to the fluent command as the description of the cluster partition to be used for the parallel ANSYS Fluent simulation run.
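For orientation, the fluent command line options used throughout the examples on this page are summarized below (comments only; the authoritative list is printed by "fluent -help"):

# 3ddp             3-dimensional solver, double precision
# -t<N>            number of parallel solver processes (here: $SLURM_NTASKS)
# -cnf=<file>      machine/host list for the parallel run (here: $FLUHOSTS)
# -mpi=intel       MPI flavour to be used
# -pib.<fabric>    interconnect / MPI fabric selection (see the table below)
# -ssh             use SSH for spawning the remote processes
# -g / -gu         run without GUI / without GUI but with graphics rendering enabled
# -driver null     null graphics driver for picture export in batch mode
# -i <file.jou>    journal file controlling the simulation run
fluent -help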

Furthermore, we recommend that LRZ cluster users write regular backup files for longer simulation runs, which can be used as the basis for a job restart in case of a machine or job failure. A good practice for a 48-hour ANSYS Fluent simulation (max. time limit) would be to write CAS/DAT files every 6 or 12 hours (to be specified in ANSYS Fluent under: Solution → Calculation Activities → Autosave Every Iterations). Further information can be found in the ANSYS documentation in the chapter "ANSYS Fluent Users Guide, Part II: Solution Mode; Chapter 40.16.1. Autosave Dialog Box".

Unfortunately, ANSYS Fluent does not provide a capability to stop a simulation run based on an elapsed wall clock time criterion. Apart from the check-pointing and emergency exit mechanism described further below, it is also not possible to stop an ANSYS Fluent batch mode simulation run from an external source without loss of information (only by using the "scancel" or "llcancel" commands of the respective batch queuing system). Consequently, it is up to the software user to have a good estimate of the overall runtime of the simulation task and to take the necessary precautions to regularly write the required restart/backup information. When setting the maximum number of iterations/timesteps in your simulation, please plan enough time buffer for the writing of the output case and data files, which can be a time-consuming task depending on your application.
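If a job does have to be aborted from outside, a minimal sketch of the corresponding SLURM commands on a Linux Cluster login node is given below (cluster name and job ID are placeholders). Note that a plain scancel terminates ANSYS Fluent without writing CAS/DAT files, whereas the emergency exit mechanism described later on this page allows a graceful stop:

# List your own jobs on the cluster (here: CoolMUC-2):
squeue --clusters=cm2 --users=$USER
# Abort a job immediately; the last computed, unsaved results are lost:
scancel --clusters=cm2 <jobid>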

Caution: Changes in the ANSYS Fluent Parallel Start for all Releases >= R19.2

In the past, for ANSYS releases < R19.2, it was necessary to heavily modify the ANSYS Fluent parallel start-up methods in order to make them run under SLURM and LoadLeveler on the LRZ cluster systems. Consequently, for these older ANSYS releases (e.g. 18.2, 19.0 and 19.1) only the "-mpi=intel" flag has to be specified. No information is required for the cluster interconnect, i.e. the "-pib.infinipath" or "-pib.dapl" flags should not be specified, since this information is hard-coded in the corresponding ANSYS Fluent start-up scripts on the respective cluster systems.

ANSYS Fluent R19.2 and later versions:

For ANSYS Fluent releases R19.2 and later, a more flexible way of parallel ANSYS Fluent start-up on the different LRZ cluster systems has been found, which requires less intervention and fewer modifications of the ANSYS Fluent start-up scripts and MPI wrappers. Consequently, the information about the interconnect network (MPI fabric) of the used LRZ cluster system has to be provided to ANSYS Fluent by the user via the appropriate command line flags:

  • CoolMUC-2 (SLURM queues: cm2_tiny, cm2_std, cm2_large; cluster owner: LRZ)
    ANSYS 19.2 → 2019.R3 (deprecated releases): -mpi=intel -pib.ofi  ( -mpi=ibmmpi -pib.dapl )
    ANSYS 2020.R1 → 2021.R2 (2020.Rx are deprecated releases): -mpi=intel -pib.ofi
    ANSYS 2022.R1 → 2023.R2: -mpi=intel -pib.ofi
    ANSYS 2024.R1 and later: -mpi=intel -pib.ofed
  • CoolMUC-3 (SLURM queue: mpp3; cluster owner: LRZ)
    ANSYS 19.2 → 2019.R3 (deprecated releases): -mpi=intel -pib.infinipath
    ANSYS 2020.R1 → 2021.R2 (2020.Rx are deprecated releases): -mpi=intel -pib.ofi
    ANSYS 2022.R1 → 2023.R2: -mpi=intel -pib.infinipath
    ANSYS 2024.R1 and later: not available
  • TUM_Aer (SLURM queue: tum_aer_batch; cluster owner: TUM Aerodynamik)
    ANSYS 19.2 → 2019.R3 (deprecated releases): -mpi=intel -pib.dapl
    ANSYS 2020.R1 → 2021.R2 (2020.Rx are deprecated releases): -mpi=intel -pib.ofi
    ANSYS 2022.R1 → 2023.R2: -mpi=intel -pib.ofi
    ANSYS 2024.R1 and later: -mpi=intel -pib.ofed
  • HTRP (SLURM queue: htrp_batch; cluster owner: FRM-II)
    ANSYS 19.2 → 2019.R3 (deprecated releases): -mpi=intel -pib.infinipath
    ANSYS 2020.R1 → 2021.R2 (2020.Rx are deprecated releases): -mpi=intel -pib.ofi
    ANSYS 2022.R1 → 2023.R2: -mpi=intel -pib.infinipath
    ANSYS 2024.R1 and later: -mpi=intel -pib.ofed
  • HTTF (SLURM queues: httf_batch, httf_skylake; cluster owner: TUM LTF)
    ANSYS 19.2 → 2019.R3 (deprecated releases): -mpi=intel -pib.infinipath
    ANSYS 2020.R1 → 2021.R2 (2020.Rx are deprecated releases): -mpi=intel -pib.ofi
    ANSYS 2022.R1 → 2023.R2: -mpi=intel -pib.infinipath
    ANSYS 2024.R1 and later: -mpi=intel -pib.ofed
  • HTFD (SLURM queue: htfd_batch; cluster owner: TUM Thermofluiddynamik)
    ANSYS 19.2 → 2019.R3 (deprecated releases): -mpi=intel -pib.infinipath
    ANSYS 2020.R1 → 2021.R2 (2020.Rx are deprecated releases): -mpi=intel -pib.ofi
    ANSYS 2022.R1 → 2023.R2: -mpi=intel -pib.infinipath
    ANSYS 2024.R1 and later: -mpi=intel -pib.ofed
  • SuperMUC-NG (all SLURM partitions: test, general, micro, large, fat, ...; cluster owner: LRZ)
    ANSYS 19.2 → 2019.R3 (deprecated releases): -mpi=intel -pib.ofi
    ANSYS 2020.R1 → 2021.R2 (2020.Rx are deprecated releases): -mpi=intel -pib.ofi
    ANSYS 2022.R1 → 2023.R2: -mpi=intel -pib.ofi
    ANSYS 2024.R1 and later: -mpi=intel -pib.ofed

CoolMUC-2 : ANSYS Fluent Job Submission on LRZ Linux Clusters running SLES15 using SLURM

Only the newer versions of ANSYS Fluent, ranging from 2021.R1 up to 2024.R1, are compatible with the CM2 Linux Cluster operating system SLES15 in combination with contemporary versions of Intel MPI 2019 and Intel oneAPI MPI 2020/2021. Starting with ANSYS release 2022.R2, the software utilizes Intel MPI 2021. In the following, an example of a job submission batch script for ANSYS Fluent 2024.R1 on CoolMUC-2 (SLURM queue = cm2_std) for the batch queuing system SLURM is provided:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J fluent_cm2_std
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std
#SBATCH --qos=cm2_std
#SBATCH --get-user-env
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=28
# --- multiples of 28 for cm2 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

module avail fluent
module load fluent/2024.R1
module list

# cat /proc/cpuinfo
echo "========================================= Fluent Start =================================================="
# For later check of the correctness of the supplied ANSYS Fluent command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou
fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out
# Please do not forget to insert here your own JOU and OUT file with its correct name!
echo "========================================= Fluent End ===================================================="

Assuming that the above SLURM script example has been saved under the filename "fluent_cm2_std.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch fluent_cm2_std.sh
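While the job is running, its progress can be followed from the login node, for instance as sketched below (the file names correspond to the "#SBATCH -o" pattern and the output redirection used in the script above; replace the placeholders with the actual job ID and node name):

# SLURM job output:
tail -f ./myjob.<jobid>.<nodename>.out
# ANSYS Fluent transcript with residuals and monitor output:
tail -f ./Static_Mixer_run_Fluent.out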

Correspondingly, for the cm2_tiny queue the job script would look like this:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J fluent_cm2_tiny
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --get-user-env
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# --- multiples of 28 for cm2 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

module avail fluent
module load fluent/2024.R1
module list

# cat /proc/cpuinfo
echo "========================================= Fluent Start =================================================="
# For later check of the correctness of the supplied ANSYS Fluent command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou
fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out
# Please do not forget to insert here your own JOU and OUT file with its correct name!
echo "========================================= Fluent End ===================================================="

Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper invoked by the fluent startup script.

CoolMUC-3 : ANSYS Fluent Job Submission on LRZ Linux Clusters running SLES12 using SLURM

In the following, an example of a job submission batch script for ANSYS Fluent on CoolMUC-3 (SLURM queue = mpp3) for the batch queuing system SLURM is provided:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J fluent_mpp3_slurm
#SBATCH --clusters=mpp3
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
# --- multiples of 64 for mpp3 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

module avail fluent
module load fluent/2023.R2
module list

# cat /proc/cpuinfo
echo "========================================= Fluent Start =================================================="
# For later check of the correctness of the supplied ANSYS Fluent command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo fluent 3ddp -mpi=intel -pib.infinipath -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou
fluent 3ddp -mpi=intel -pib.infinipath -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out
# Please do not forget to insert here your own JOU and OUT file with its correct name!
echo "========================================= Fluent End ===================================================="

Assuming that the above SLURM script has been saved under the filename "fluent_mpp3_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch fluent_mpp3_slurm.sh

Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper invoked by the fluent startup script. Also, do not try to change the default Intel MPI to any other MPI version to run ANSYS Fluent in parallel. On the LRZ cluster systems only the usage of Intel MPI is supported and known to work properly with ANSYS Fluent.

CoolMUC-4 : ANSYS Fluent Job Submission on LRZ Linux Clusters running SLES15 SP4 using SLURM

In the following, an example of a job submission batch script for ANSYS Fluent on CoolMUC-4 (SLURM queue = inter | partition = cm4_inter_large_mem) for the batch queuing system SLURM is provided.
Please use this large and powerful compute resource only for tasks which can really make efficient use of the 80 cores and 512 GB of node memory per CM4 compute node. Do not use this resource for rather small tasks, thereby wasting this powerful resource.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J fluent_cm4
#SBATCH --clusters=inter
#SBATCH --partition=cm4_inter_large_mem
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=80
# --- multiples of 80 for cm4 | 512 GB memory per compute node available ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

# Please mind the new Scratch filesystem on CM4; the legacy $SCRATCH is not available
echo $SCRATCH_DSS

module avail fluent
module load fluent/2024.R1
module list

# cat /proc/cpuinfo
echo "========================================= Fluent Start =================================================="
# For later check of the correctness of the supplied ANSYS Fluent command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou
fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out
# Please do not forget to insert here your own JOU and OUT file with its correct name!
echo "========================================= Fluent End ===================================================="

Assuming that the above SLURM script has been saved under the filename "fluent_cm4_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch fluent_cm4_slurm.sh

Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper invoked by the fluent startup script. Also, do not try to change the default Intel MPI to any other MPI version to run ANSYS Fluent in parallel. On the LRZ cluster systems only the usage of Intel MPI is supported and known to work properly with ANSYS Fluent.

SuperMUC-NG : ANSYS Fluent Job Submission on SNG running SLES15 using SLURM

In the following, an example of a job submission batch script for ANSYS Fluent on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) for the batch queuing system SLURM is provided. Please note that the supported ANSYS versions on SuperMUC-NG are ANSYS 2021.R1 or later. At this time, ANSYS 2023.R2 is the default version.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J fluent_test
#SBATCH --partition=test
# ---- partitions : test | micro | general | fat | large
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
# ---- multiples of 48 for SuperMUC-NG ----
#SBATCH --mail-type=END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:05:00
#SBATCH --account=<Your_own_project>
#################################################################
## switch to enforce execution on a single island (if required) :
#SBATCH --switches=1@24:00:00 
#################################################################
#
#################################################################
## Switch to disable energy-aware runtime (if required) :
## #SBATCH --ear=off
#################################################################

module load slurm_setup

module av fluent
module load fluent/2024.R1
module list 

# cat /proc/cpuinfo
echo "========================================= Fluent Start =================================================="
# For later check of the correctness of the supplied ANSYS Fluent command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -g -i Static_Mixer_run_Fluent.jou
fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -g -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out
# Please do not forget to insert here your own JOU and OUT file with their correct names!
echo "========================================= Fluent End ===================================================="


Assuming that the above SLURM script has been saved under the filename "fluent_sng_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the SuperMUC-NG login nodes:

sbatch fluent_sng_slurm.sh

Possible Measures to Mitigate Licensing Issues on SuperMUC-NG

Potential Licensing Issue with very large SNG Node Counts

It is recommended to touch the following environment variable only if you intend to run ANSYS Fluent on more than 90 SNG nodes, i.e. more than 4.320 CPU cores.

If one tries to run ANSYS Fluent on a large SNG node count such as 90 to 100 SNG nodes (corresponding to 4.320 to 4.800 CPU cores of the Intel Skylake processors), a licensing issue can occur in obtaining the required ANSYS HPC licenses from the ANSYS license server licansys.lrz.de in time. This happens because ANSYS Fluent cortex obtains the CFD licenses first and only then spawns the large number of ANSYS Fluent MPI processes on all participating compute nodes. This MPI process spawning takes a substantial amount of time, and ANSYS Fluent has a built-in fixed time-out of 10 minutes here. If this time-out is exceeded before the MPI process spawning has completed, ANSYS Fluent stops with a hard error and a core dump message. Users can extend this ANSYS Fluent time-out value beyond the 10 minute limit by setting the following environment variable:

export FLUENT_START_COMPUTE_NODE_TIME_OUT=1600

The value of this environment variable is measured in seconds; the default value of 10 minutes corresponds to 600 seconds. So for launching ANSYS Fluent on e.g. 200 SNG nodes (corresponding to 9.600 CPU cores), a value of 1.600 seconds for this environment variable should be sufficient.
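As a minimal sketch of where such an export could be placed in the SuperMUC-NG job script shown above (the value is the example from the text, not a verified recommendation):

module load slurm_setup
module load fluent/2024.R1

# Extend Fluent's compute node start-up time-out (seconds; built-in default: 600):
export FLUENT_START_COMPUTE_NODE_TIME_OUT=1600

fluent 3ddp -mpi=intel -pib.ofed -t$SLURM_NTASKS -ssh -cnf=$FLUHOSTS -g -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out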

Potential Licensing Issues with longer response Time from the ANSYS License Server

It is recommended to touch the following environment variables only if you have experienced the licensing issues described in the following.

With the update of the ANSYS license server software to version 2023.R2 we are experiencing a performance decay of the license server in answering license requests from different ANSYS software packages and from different LRZ HPC systems in time. This can result in failures of the application launch with rather unclear error messages and reasons. The underlying issue is that the ANSYS license server does not respond to a license request from an ANSYS application within the expected timeframe, so that the application assumes that the license server is either down, not reachable over the network, or responding too slowly.

These issues can be mitigated by the user of the ANSYS software by setting the following 3 environment variables, thereby increasing the default values:

# Extend the time limit for the application time-out when the license server responds slowly.
# Default value is 20 seconds, max. value is 60 seconds:
export ANSYSLI_TIMEOUT_CONNECT=60
#
# Increase the amount of time ansyscl will wait for the FNP (license) checkout.
# Default value is 5 seconds, max. value is arbitrary (N seconds):
export ANSYSLI_TIMEOUT_FLEXLM=300
#
# Increase the amount of time that elapses before the client times out if it cannot get a response from the server.
# Default value is 60 seconds, max. value is 300 seconds:
export ANSYSLI_TIMEOUT_TCP=300

ANSYS Fluent Journal Files

In contrast to ANSYS CFX, the information provided in an ANSYS Fluent CAS file is either not sufficient or not properly used by ANSYS Fluent to run a simulation on a parallel cluster system by just submitting the CAS file. For a proper start and solver control of a parallel ANSYS Fluent simulation run in batch mode, it is required to provide an at least minimal so-called journal file (file extension *.jou), as can be seen in the above SLURM script examples from the command line option "-i Static_Mixer_run_Fluent.jou".

Such a basic journal file contains a number of so-called TUI commands to ANSYS Fluent (TUI = Text User Interface). Details on the TUI command language of ANSYS Fluent can be found in the ANSYS Fluent documentation, Part II: Solution Mode; Chapter 2: Text User Interface (TUI).

As a small example of such a basic ANSYS Fluent journal file the "Static_Mixer_run_Fluent.jou" from the above SLURM batch queuing scripts is provided here. This small journal file does the following:

  • read CAS file for the Static_Mixer.cas simulation
  • write an ANSYS Fluent settings file
  • do hybrid initialization of the case
  • print time stamps of wall clock time at the start and end of solver iterations, e.g. for performance analysis
  • do 100 steady-state iterations of the ANSYS Fluent solver, pseudo-transient solution method
  • write a pair of CAS/DAT results files
  • write reports of parallel time usage, system status and the ANSYS Fluent simulation summary

Please note that for carrying out the same simulation as a transient simulation, the journal file would require modifications for the timestep integration. The same applies if it is e.g. intended to initialize a run from a previously computed interpolation file or to continue a run from a previously obtained pair of CAS/DAT results files.

; Scheme commands to specify the check pointing and emergency exit commands:
(set! checkpoint/check-filename "./check-fluent")
(set! checkpoint/exit-filename "./exit-fluent") 
;
; Feel free to modify all following lines to address the requirements of your Fluent simulation.
;
; no overwrite confirmation / exit on error / hide questions / don't redisplay questions: 
/file/set-batch-options no yes yes no
; Option to disable HDF5-based CFF file format (legacy CAS/DAT files):
/file/cff-files? no
/file/read-case "./Static_Mixer.cas"
;
; The following option requires local scaling of the monitored residuals:
/solve/monitors/residual/enhanced-continuity-residual y
;
/file/write-settings StaticMixer_Settings.txt
;
/solve/initialize/hyb-initialization
;
(format-time #f #f)
/solve/iterate 100
(format-time #f #f)
/parallel time usage
/report/system/proc-stats
;
/file/write-case-data "./Static_Mixer_%i_final.cas" 
;
/report/summary "StaticMixer_Simulation_Report.txt"
exit y

ANSYS Fluent Check-pointing and Emergency Exit in Batch Mode

Normally a user does not have direct SSH access to the compute nodes of a Linux cluster, where ANSYS Fluent is executed under the control of a batch queuing system like SLURM. Therefore, the following approach is provided to control ANSYS Fluent to a certain extent from the Linux cluster login node while it is executing on a Linux cluster partition. It might be desirable to:

  • Check-point the simulation:
    The user might want to make ANSYS Fluent write a pair of CAS/DAT files of the current intermediate simulation result at the next opportunity, i.e. upon reaching the next steady-state iteration or upon finalizing the currently computed timestep.
  • Immediately stop or make an emergency exit from the current simulation run:
    The user might want to make ANSYS Fluent stop the current simulation at the next opportunity and write the last computed CFD result to a pair of CAS/DAT files for postprocessing or simulation restart. This could be desirable if, for some reason, the time in the queue is about to expire, but according to the specified journal file ANSYS Fluent would not come to a normal end of execution with results file writing. Without the following mechanism, ANSYS Fluent would simply be cancelled by the SLURM manager with total loss of the last computed result (i.e. no CAS/DAT files being written). With the following mechanism this can be circumvented.

As in the example journal file above, the following lines need to be included in order to make the ANSYS Fluent check-pointing and emergency exit functionality available:

;.....
; Journal file snippet enabling check-pointing and emergency exit functionality
;
; Scheme commands to specify the check pointing and emergency exit commands:
(set! checkpoint/check-filename "./check-fluent")
(set! checkpoint/exit-filename "./exit-fluent") 
;
;.....

With the above two Scheme language definitions two filenames are declared, which afterwards, during ANSYS Fluent runtime, can be used to initiate either check-pointing (i.e. writing gzip'ed CAS/DAT files of the intermediate result) or an emergency exit from the current simulation. Once the above declarations have been included in the controlling journal file prior to SLURM submission, check-pointing or an emergency exit can be initiated from a Linux terminal window on the Linux cluster login node, positioned in the ANSYS Fluent working directory, with the following Linux commands:

cd ~/<working-dir>
# Linux command for initiating the ANSYS Fluent check-pointing:
touch ./check-fluent
#
# Linux command to initiate an emergency exit of ANSYS Fluent from the currently running simulation:
touch ./exit-fluent

Once the gzip'ed pair of CAS/DAT files has been written, ANSYS Fluent automatically removes the touched (empty) trigger files from the filesystem, so that the user does not have to care about them. As an additional convenience, in the case of an emergency exit ANSYS Fluent creates a new journal file, which starts with reading in the latest created CAS/DAT files and contains the remaining part of the user's journal file which has not been executed up to this point. This newly generated journal file can be used as a template for re-submission to SLURM and continuation of the simulation run.

ANSYS Fluent Journal Commands to flush large Linux I/O Buffer Caches

Under certain circumstances you might encounter warning messages of the following kind in the output files (transcript) of your ANSYS Fluent simulations:

> WARNING: Rank:0 Machine mpp3-xxxxxxx has 66 % of RAM filled with file buffer caches.
This can cause potential performance issues. Please use -cflush flag to flush the cache. 
(In case of any trouble with that, try the TUI/Scheme command '(flush-cache)'.)

This warning message points to the fact that, on ANSYS Fluent startup, it was detected on at least one compute node of your assigned Linux cluster partition that a larger amount of the node memory is occupied by Linux I/O buffer caches. So what does that mean, and how should you react?

  1. In principle this should not be the case, since the SLURM prolog of the batch queuing system should provide you with an almost clean set of compute nodes with purged Linux I/O buffer caches. However, it can occur that this SLURM prolog fails to flush the caches entirely. If you observe this on a more regular basis, please file an LRZ service request and provide the ANSYS Fluent output file (transcript) to the LRZ support staff.
  2. The warning message from ANSYS Fluent is not a very serious concern, and your intended simulation run should continue without further issues. But it can become a concern from a simulation performance point of view. Memory occupation by larger remaining Linux I/O buffer caches can (and most likely will) lead to an unbalanced distribution of ANSYS Fluent's memory allocations with respect to the launched ANSYS Fluent tasks on the dual-socket compute nodes, i.e. a number of ANSYS Fluent tasks permanently need to access their data not from the processor's own local memory but from the memory of the neighbouring processor on the dual-socket compute node, thereby encountering less efficient memory access. This can potentially lead to an ANSYS Fluent performance degradation in the order of up to 10-12%, based on the measurable increase in simulation run time. For a 48-hour simulation run, a 10-12% increase in simulation time amounts to roughly 5 additional hours.
  3. Consequently, if you permanently observe these warning messages, you may want to include the following command lines in your ANSYS Fluent journal file in order to flush the large Linux I/O buffer caches yourself prior to the startup of ANSYS Fluent for your intended simulation. Please do not include these lines by default, because they delay the application startup somewhat, depending on the amount of available system memory on the nodes. The start-up delay can be up to 1-3 minutes compared with the normal startup procedure of ANSYS Fluent.
  4. Do not try to follow the above-mentioned recommendation to include the option "-cflush" on the ANSYS Fluent command line in your SLURM script, since this does not work for the diskless Linux cluster compute nodes. Instead use the following lines in your simulation journal file:
; Following commands reduce the pending Linux I/O buffers to a minimum.
; Large I/O buffers can potentially have a performance impact on ANSYS Fluent
; due to non-local memory allocation in dual-processor systems of the compute nodes.
(rpsetvar 'cache-flush/target/reduce-by-mb 12288)
(flush-cache)
;
; ... followed by the remaining code lines from your normal ANSYS Fluent journal file...
;
/file/set-batch-options no yes yes no 
; Option to disable HDF5-based CFF file format (legacy CAS/DAT files):
/file/cff-files? no
/file/read-case "./Static_Mixer.cas" 
; ...

The "magic number" 12288 (Mb) in the 1st command line of the above ANSYS Fluent journal file snippet is the specification of the amount of system memory, which cannot be purged due to resident Linux OS files in the memory of the diskless compute nodes. This is mainly to be addressed to buffers of the GPFS parallel file system and the OS system image. Since the size of the preloaded OS system image can be subject to slight changes over time, this is rather an experimental value than just a fixed number. If you observe, that the flushing process is not carried out successfully (by error messages in the ANSYS Fluent transcript), try to increase this number. The given number essentially means, that on a compute node with 64 Gb ANSYS Fluent tries to clean-up/flush about 64Gb - 12.288Gb = 51.772Gb of memory. 

Creation of Graphics Output from ANSYS Fluent in Batch Mode

Normally, in batch mode ANSYS Fluent only produces a transcript of the simulation run, i.e. a tabulated list of computed residuals and monitor data together with other text-based information about the current simulation run, and finally the specified output files (CAS/DAT files, backup files, monitor files and reports). But at the very least it would be desirable to obtain from a batch simulation run a PNG or JPEG graphics file with a graphical representation of the convergence history of ANSYS Fluent. This can be realized as follows.

First of all, ANSYS Fluent needs to be started in the SLURM script with the correct command line options, including the so-called NULL graphics driver:

fluent 3ddp -mpi=intel -t$SLURM_NTASKS -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out

Here the options "-gu -driver null" allow the generation and export of graphics files from an ANSYS Fluent simulation run in batch mode.

The user then has to organize this graphics file export by appropriate journal commands or by defined calculation activities. For example, the following journal commands can be used to write the convergence history of the ANSYS Fluent simulation as colored and black & white residual diagrams into two separate PNG files:

;.....
;
; write a PNG file of the residual history
;
/solve/monitors/residual/plot? yes 
/display/set-window 1
/display/set/picture/driver png
/display/save-picture residual_history.png
/display/set/picture/color-mode/mono-chrome
/display/set-window 1
/display/save-picture residual_history_bw.png
;
;.....

With similar journal file statements, diagrams of monitor data or postprocessing graphics (e.g. contour plots, streamlines, isosurfaces) that have previously been defined with the postprocessing functionality inside ANSYS Fluent can be exported from an ANSYS Fluent batch-mode simulation as well.

Compilation of ANSYS Fluent UDF's (User-defined Functions)

For customization purposes ANSYS Fluent provides essentially the following two possibilities:

  • Starting with the ANSYS Fluent 2019.R1 release (early beta in R19.2), the CFD solver supports algebraic expressions, which ANSYS CFX and Discovery (formerly known as AIM Fluids) users have already been used to for a long time. Consequently, for many purposes where users had to write a piece of a C subroutine in the past, the same goal can now be achieved by just inserting a named algebraic expression in ANSYS Fluent's GUI, e.g. for specifying a turbulent velocity profile at an inlet cross-section of your geometry in accordance with the 1/7th power law.
  • Nevertheless, in many cases User-Defined Functions (UDFs) are still rather common in ANSYS Fluent to customize built-in fluid solver capabilities and model functionalities. UDFs are user-programmed subroutines, written in the C language, which are hooked up in the CFD setup to fulfill their purpose.

Before ANSYS Fluent can be run with a CAS file that depends on a UDF, the UDF written in C needs to be compiled. Ideally, a UDF is compiled on a system with the same operating system (OS) and the same processor architecture as the target compute cluster. In addition, the UDF library needs to be created for the same target floating point precision, i.e. either single or double precision.

In the case of the LRZ Linux Clusters and SNG, the need for a pre-compilation of UDFs has been removed starting with the ANSYS Fluent module files from January 2021. ANSYS Fluent simulations can be started in batch mode on the Linux Clusters just by providing the corresponding CAS file with hooked-up UDF calls together with the UDF source files in the C programming language. ANSYS Fluent will compile the corresponding UDFs into a library on-the-fly, if the ./libudf subdirectory does not yet exist. This compilation is done automatically prior to running the ANSYS Fluent simulation on the Linux Cluster or SNG.
Manual pre-compilation of the UDFs by the ANSYS Fluent user is no longer required.
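As an illustration of this workflow (all file names below are hypothetical placeholders), the working directory before job submission only needs to contain the CAS file with the hooked-up UDF calls, the UDF source file(s), the journal file and the job script; the ./libudf subdirectory is then created by ANSYS Fluent on the fly during the batch run:

# Hypothetical working directory before submitting the SLURM job:
ls
#   my_case_with_udf.cas   my_udf.c   my_run.jou   fluent_cm2_std.sh
sbatch fluent_cm2_std.sh
# During/after the job run, ANSYS Fluent has compiled the UDF on the fly:
ls
#   my_case_with_udf.cas   my_udf.c   my_run.jou   fluent_cm2_std.sh   libudf/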