ANSYS Fluent (Fluid Dynamics)
ANSYS Fluent is a general purpose Computational Fluid Dynamics (CFD) code. ANSYS Fluent has been part of the ANSYS software portfolio since 2007 (the code was formerly owned by Fluent, Inc.). As a general purpose CFD code, ANSYS Fluent provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, heat and mass transfer including CHT (conjugate heat transfer in solid domains). Since ANSYS Fluent R19.2, expression language capabilities are provided, with which many user customization tasks can be accomplished. However, highly advanced user customization usually still requires programming a user-defined function (UDF) in the C language.
Further information about ANSYS Fluent, licensing of the ANSYS software and related terms of software usage at LRZ, the ANSYS mailing list, access to the ANSYS software documentation and LRZ user support can be found on the main ANSYS documentation page.
Getting Started
Before trying to start ANSYS Fluent on an LRZ HPC system, the user should follow the steps outlined in the paragraph "6. SSH User Environment Settings". ANSYS Fluent is known not to run in distributed parallel mode on LRZ Linux Clusters or SuperMUC-NG without properly generated passphrase-free SSH keys.
Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. installation) of ANSYS Fluent software by:
> module avail fluent
Load the preferred ANSYS Fluent version environment module, e.g.:
> module load fluent/2024.R2
In contrast to ANSYS CFX, the ANSYS Fluent software does not consist of a number of separate applications for preprocessing/solver/postprocessing purposes. Instead, ANSYS Fluent is designed as a monolithic application which embeds all of these tasks in a single-window GUI and even provides meshing capabilities (formerly known as TGRID or Fluent Meshing).
One can use ANSYS Fluent in interactive GUI mode solely for the purpose of serial pre- and/or postprocessing on the login nodes (Linux: SSH option "-Y" for X11 forwarding; Windows: using PuTTY and Xming for X11 forwarding). This interactive usage is mainly intended for making quick simulation setup changes which require GUI access. Since ANSYS Fluent loads the full mesh into the login node's memory, this approach is only applicable to comparably small cases. It is NOT permitted to run computationally intensive ANSYS Fluent simulation runs or serial/parallel postprocessing sessions with large memory consumption on login nodes. The formerly existing Remote Visualization systems were switched off in March 2024 without replacement due to their end of life. Any work with the ANSYS software related to interactive mesh generation as well as graphically intensive pre- and postprocessing tasks needs to be carried out on local computer systems, using an ANSYS Academic Research license, which is available from LRZ for an annual license fee.
The so-called ANSYS Fluent launcher is started by:
> fluent -driver x11
where you have to specify, whether you intend to work:
- on a 2-dimensional or a 3-dimensional case
- in single or double precision
- in meshing mode
- in serial or parallel mode (on Login Nodes of the LRZ cluster systems only limited parallelization is permitted - see below)
It is not permitted to run computationally intensive ANSYS Fluent simulations on front-end login nodes, in order not to disturb other LRZ users. A mode of operation similar to the ANSYS CFX Solver Manager monitoring mode, for a still running parallel task on a cluster system or for an "a posteriori" monitoring analysis of a finished simulation run, unfortunately does not exist for ANSYS Fluent. I.e. a graphical visualization of solver monitors has to be realized by the user outside of the ANSYS Fluent environment, e.g. by Python scripting, MS Excel or similar, based on the monitor files and captured ANSYS Fluent output.
It might be of interest, however, that ANSYS Fluent can be run in parallel mode for the use of the built-in ANSYS Fluent postprocessing. This can be used on compute nodes in batch mode, if the postprocessing is scripted using either the TUI command language or the PyFluent API.
ANSYS Fluent on Linux Cluster and SuperMUC-NG Login Nodes
ANSYS Fluent is a very resource-intensive application with respect to both main memory (RAM) and CPU resources! Please run ANSYS Fluent on login nodes with greatest care and under your supervision (e.g. using the command "top" + <Ctrl>-M in a second terminal window)!
In particular, running multiple ANSYS Fluent processes / parallelization can cause a high load on the login node and has the potential to massively disturb other users on the same system! Running ANSYS Meshing on login nodes can easily lead to an overload and render the login node unresponsive, so that a reboot of the machine is required. Be careful!
ANSYS Fluent applications, which cause a high load on login nodes and disturbance to other users or general operation of the login node, will be terminated by system administrators without any prior notification!
Our recommendations on the login nodes:
- Running multiple instances of ANSYS Fluent by the same user is prohibited!
Please run only one instance of the software on a particular login node at any time!
- It is only allowed to run a single instance of ANSYS Fluent on login nodes, solely for the purpose of pre- and/or postprocessing. Absolutely no ANSYS Fluent simulations are allowed on login nodes.
- The maximum allowed degree of ANSYS Fluent parallelization on login nodes is <=4 CPU cores!
Any ANSYS Fluent instance using a higher degree of parallelization on login nodes will be terminated by system administrators without any prior notification!
- If using <=4 cores in ANSYS Fluent parallelization, you need to switch ANSYS Fluent to Open MPI (see the example after this list). The default Intel MPI does not work on login nodes due to conflicts with SLURM.
- Please check the load and memory consumption of your own ANSYS Fluent session. Usually, you can do this via the "top" command, e.g.:
top -u $USER
Pressing <Ctrl>-M sorts the displayed process list by the amount of consumed memory per process.
- If a graphical user interface is needed, then you may run a single instance of ANSYS Fluent via a VNC session (VNC Server on Login Nodes) to increase the performance and responsiveness of the GUI!
- If a graphical user interface is not needed, it is advised to run ANSYS Fluent via an interactive SLURM job or in batch mode under SLURM control.
These jobs run on compute nodes. A high degree of parallelization is explicitly allowed there, as long as ANSYS Fluent is run efficiently (rule of thumb: at least approx. 10.000 mesh elements per CPU core)!
Do not over-parallelize ANSYS Fluent simulations. HPC resources that are not used effectively are wasted and thus taken away from other users.
- Repeated violation of the above-mentioned restrictions on ANSYS Fluent usage on login nodes might result in a ban of the affected user account and a notification to the scientific supervisor/professor.
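As a minimal sketch (the module version and the case dimensionality/precision are examples only, and the Open MPI selector flag is an assumption to be checked against your Fluent release), such a restricted login-node session switched to Open MPI could be started like this:

# Load an installed ANSYS Fluent module (check "module avail fluent" for available versions):
module load fluent/2024.R2
# Start an interactive GUI session with at most 4 cores, using Open MPI instead of the default Intel MPI:
fluent 3ddp -t4 -mpi=openmpi -driver x11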
ANSYS Fluent Parallel Execution (Batch Mode)
All parallel ANSYS Fluent simulations on LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the appropriate scheduling system (SLURM) into the different pre-defined parallel execution queues. Further information about the batch queuing systems and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG).
For job submission to a batch queuing system a corresponding small shell script needs to be provided, which contains:
- Batch queueing system specific commands for the job resource definition
- Module command to load the ANSYS Fluent environment module
- Start command for parallel execution of fluent with all appropriate command line parameters
- Reference to a small ANSYS Fluent journal file (*.jou), which is used to control the execution of ANSYS Fluent with the provided CAS file, since ANSYS Fluent CAS files do not contain a solver control section.
The intended syntax and the available command line options for the invocation of the fluent command can be displayed with:
> fluent -help
The configuration of the parallel cluster partition (list of node names and the corresponding number of cores) is provided to the fluent command by the batch queuing system (SLURM) via an automatically generated environment variable, $FLUHOSTS, based on the information given by the cluster user in the job resource definition. Furthermore, the environment variables $SLURM_NTASKS or $LOADL_TOTAL_TASKS are predefined by the cluster and batch queuing system, and this information is simply passed through to the fluent command as a description of the cluster partition to be used for the parallel ANSYS Fluent simulation run.
Furthermore, we recommend that LRZ cluster users write regular backup files for longer simulation runs, which can be used as the basis for a job restart in case of a machine or job failure. A good practice for a 48-hour ANSYS Fluent simulation (maximum time limit) would be to write CAS/DAT files every 6 or 12 hours (to be specified in ANSYS Fluent under: Solution → Calculation Activities → Autosave Every Iterations). Further information can be found in the ANSYS documentation in the chapter "ANSYS Fluent Users Guide, Part II: Solution Mode; Chapter 40.16.1. Autosave Dialog Box".
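For batch runs the same autosave settings can also be set in the journal file; the following is a minimal sketch with assumed TUI command paths and an assumed iteration interval, to be checked against the documentation of your ANSYS Fluent release:

; write a pair of autosave CAS/DAT backup files every 2000 iterations
; (adapt the interval so that it corresponds to roughly 6-12 hours of run time for your case)
/file/auto-save/data-frequency 2000
; root name for the autosave files (path is an example)
/file/auto-save/root-name "./backup/Static_Mixer_backup"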
Caution: Required Information Regarding the ANSYS Fluent Parallel Start for Different Releases
For ANSYS Fluent the information about the interconnect network (MPI fabric) of the used LRZ cluster system has to be provided by the user by submitting the appropriate command line flags:
| LRZ Linux Cluster | SLURM Queue | Cluster Owner | ANSYS Fluent Command Line Options (ANSYS Versions 2022.R1 → 2023.R2) | ANSYS Fluent Command Line Options (ANSYS Versions 2024.R1 → ∞) |
|---|---|---|---|---|
| CoolMUC-4 | serial, cm4_inter, cm4_tiny, cm4_std | LRZ | -mpi=intel -pib | -mpi=intel -pib |
| TUM_Aer | tum_aer_cm4 | TUM Aerodynamik | -mpi=intel -pib.ofi | -mpi=intel -pib |
| HTRP | htrp_cm4 | FRM-II | -mpi=intel -pib.infinipath | -mpi=intel -pib |
| HTTF | httf_skylake | TUM LTF | -mpi=intel -pib.infinipath | -mpi=intel -pib |
| HTFD | ? | TUM Thermofluiddynamik | -mpi=intel -pib.infinipath | -mpi=intel -pib |
| SuperMUC-NG | all SLURM partitions (test, general, micro, …) | LRZ | -mpi=intel -pib.ofi | -mpi=intel -pib |
CoolMUC-4 : ANSYS Fluent Job Submission on LRZ Linux Clusters running SLES15 SP4 using SLURM
ANSYS Fluent Job Submission in CoolMUC-4 Serial Queue
The name "serial queue" is to a certain extend misleading here. The serial queue of LRZ Linux Clusters (CM4) differ from other CoolMUC-4 queues in that regard, that the access is granted to just one single cluster node and that the access to this cluster node is non-exclusive, i.e. might be shared with other cluster users depending on the resource requirements of the job as they are specified in the SLURM script. Nevertheless the launched application can make use of more than just a single CPU core on that cluster node, i.e. apply a certain degree of parallelization - either in shared memory mode or in distributed memory mode (MPI).
Important: As a general rule of thumb, approx. 10.000-20.000 mesh cells (or even more) per CPU core should be provided by the simulation task in order to run efficiently on the CM4 hardware. The requested memory should be specified realistically and in proportion to the number of used CPU cores, relative to the total number of CPU cores and the total amount of memory per compute node.
In the following an example of a job submission batch script for ANSYS Fluent on CoolMUC-4 (SLURM queue = serial) in the batch queuing system SLURM is provided.
Please use this large and powerful compute resource with a carefully specified number of CPU cores and a reasonably quantified amount of requested node memory per compute node of CM4. Don't waste powerful CM4 compute resources and please be fair to other CM4 cluster users.
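A minimal sketch of such a script is shown below. All SLURM resource values (cluster/partition names, number of tasks, memory, run time) and the module version are placeholders that have to be adapted to your case and to the current LRZ queue definitions; the MPI flags follow the table above and the journal/transcript file names follow the Static_Mixer example used on this page.

#!/bin/bash
#SBATCH -J fluent_cm4_serial
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
# Cluster/partition names, task count, memory and run time are placeholders - adapt them:
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --get-user-env
#SBATCH --ntasks=8
#SBATCH --mem=50G
#SBATCH --time=24:00:00
module load fluent/2024.R2
# Do not use mpirun/mpiexec/srun here - fluent starts its parallel processes itself (see warning below).
# 3ddp = 3-dimensional, double precision; -g = no GUI; -t = number of cores from the SLURM allocation
fluent 3ddp -g -t$SLURM_NTASKS -mpi=intel -pib -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out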
Assumed that the above SLURM script has been saved under the filename "fluent_cm4_serial.sh", the SLURM batch job has to be submitted by issuing the following command on one of the Linux Cluster login nodes:
chmod 755 fluent_cm4_serial.sh
sbatch fluent_cm4_serial.sh
Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper of the fluent startup script. Also, do not try to change the default Intel MPI to any other MPI version to run ANSYS Fluent in parallel. On the LRZ cluster systems only the usage of Intel MPI is supported and known to work properly with ANSYS Fluent.
ANSYS Fluent Job Submission on CoolMUC-4 in the cm4_tiny / cm4_std Queues
ANSYS Fluent is provided on the new CoolMUC-4 (CM4) compute nodes with support for the Intel MPI message passing library on CM4 queues (cm4_inter_large_mem, cm4_tiny, cm4_std) with Infiniband interfaces.
Important: You should use the cm4_tiny / cm4_std queues ONLY (!) with the full number of CPU cores per node, which requires large ANSYS Fluent simulations with substantially more than 1.5 million mesh cells to run efficiently and not to waste scarce compute resources in the CM4 queues. If your simulation has fewer mesh cells, then please run the task in the CM4 serial queue instead (see the example above). As a general rule of thumb, approx. 10.000-20.000 mesh cells per CPU core (or more) should be provided by the simulation task in order to run efficiently on the CM4 hardware. Cluster nodes in the cm4_tiny / cm4_std queues are assigned exclusively to the user and therefore all of the available CPU cores should be utilized.
In the following an example of a job submission batch script for ANSYS Fluent on CoolMUC-4 (SLURM queue = cm4 | partition = cm4_tiny | cm4_std) in the batch queuing system SLURM is provided.
Please use this large and powerful compute resource only for tasks which really can make efficient use of the 112 cores and 512 GB of node memory per compute node of CM4. Don't use this resource for rather small tasks, thereby wasting these powerful resources.
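A minimal sketch of such a script is shown below; again, the SLURM resource values, partition/qos names and the module version are placeholders to be adapted to the current CM4 queue definitions, and $FLUHOSTS is assumed to be provided by the LRZ environment as described above.

#!/bin/bash
#SBATCH -J fluent_cm4_tiny
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
# Cluster/partition/qos names and resource values are placeholders - adapt them:
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny
#SBATCH --qos=cm4_tiny
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=112
#SBATCH --time=24:00:00
module load fluent/2024.R2
# $FLUHOSTS (list of assigned compute nodes) is assumed to be set by the LRZ environment as described above.
# -pib selects the InfiniBand interconnect (ANSYS 2024.R1 and later, see table above).
fluent 3ddp -g -t$SLURM_NTASKS -cnf=$FLUHOSTS -mpi=intel -pib -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out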
Assumed that the above SLURM script has been saved under the filename "fluent_cm4_tiny.sh", the SLURM batch job has to be submitted by issuing the following command on one of the Linux Cluster login nodes:
chmod 755 fluent_cm4_tiny.sh
sbatch fluent_cm4_tiny.sh
SuperMUC-NG : ANSYS Fluent Job Submission on SNG running SLES15 using SLURM
In the following an example of a job submission batch script for ANSYS Fluent on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) in the batch queuing system SLURM is provided. Please note that the supported ANSYS versions on SuperMUC-NG are ANSYS 2021.R1 or later. At this time ANSYS 2023.R2 is the default version.
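A minimal sketch of such a script is shown below; the partition, account, node count, run time and module version are placeholders to be adapted to your SuperMUC-NG project, and $FLUHOSTS is assumed to be provided by the LRZ environment as described above.

#!/bin/bash
#SBATCH -J fluent_sng
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
# Partition, account, node count and run time are placeholders - adapt them to your SNG project:
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --account=<project_id>
#SBATCH --time=00:30:00
module load fluent/2023.R2
# $FLUHOSTS is assumed to be set by the LRZ environment as described above.
# -pib.ofi corresponds to ANSYS versions 2022.R1 - 2023.R2 (see table above); use -pib for 2024.R1 and later.
fluent 3ddp -g -t$SLURM_NTASKS -cnf=$FLUHOSTS -mpi=intel -pib.ofi -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out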
Assumed that the above SLURM script has been saved under the filename "fluent_sng_slurm.sh", the SLURM batch job has to be submitted by issuing the following command on one of the SuperMUC-NG login nodes:
sbatch fluent_sng_slurm.sh
Possible Measures to Mitigate Licensing Issues on SuperMUC-NG
Potential Licensing Issue with very large SNG Node Counts
It is recommended to touch the following environment variable only if you intend to run ANSYS Fluent on more than 90 SNG nodes, i.e. more than 4.320 CPU cores.
If one tries to run ANSYS Fluent on a large SNG node count, such as 90 to 100 SNG nodes (corresponding to 4.320 to 4.800 CPU cores of the Intel Skylake processors), a licensing issue can occur in obtaining the required ANSYS HPC licenses from the ANSYS license server licansys.lrz.de in time. This happens because ANSYS Fluent cortex obtains the CFD licenses first and then the large number of ANSYS Fluent MPI processes is spawned on all participating compute nodes. This MPI process spawning takes a substantial amount of time, and ANSYS Fluent has a built-in fixed time-out of 10 minutes here. If this time-out is exceeded before the MPI process spawning has completed, ANSYS Fluent stops with a hard error and a core dump message. Users can extend this ANSYS Fluent time-out value beyond the 10-minute limit by setting the following environment variable:
export FLUENT_START_COMPUTE_NODE_TIME_OUT=1600
The value of this environment variable is measured in seconds and the default value of 10 minutes corresponds to 600 seconds. So for launching ANSYS Fluent on e.g. 200 SNG nodes (corresponding to 9.600 CPU cores) a value of this environment variable of 1.600 seconds should be sufficient.
Potential Licensing Issues with Longer Response Times from the ANSYS License Server
It is recommended to touch the following environment variables only if you have experienced the licensing issues described in the following.
With the update of the ANSYS license server software to version 2023.R2 we are experiencing a performance degradation of the license server in answering license requests from different ANSYS software packages and from different LRZ HPC systems in time. This can result in a failure of the application launch with rather unspecific error messages and given reasons. The underlying issue is that the ANSYS license server does not respond to a license request from an ANSYS application in the expected timeframe, so that the application assumes that the license server is either down, not reachable over the network, or responding too slowly.
These issues can be mitigated by the user of the ANSYS software by setting the following 3 environment variables and increasing the set default values:
# Extend the time limit for the application time-out when the license server responds slowly.
# Default value is 20 seconds, max. value is 60 seconds:
export ANSYSLI_TIMEOUT_CONNECT=60
#
# Increase the amount of time ansyscl will wait for the FNP (license) checkout.
# Default value is 5 seconds, max. value is arbitrary (N seconds):
export ANSYSLI_TIMEOUT_FLEXLM=300
#
# Increase the amount of time that elapses before the client times out if it cannot get a response from the server.
# Default value is 60 seconds, max. value is 300 seconds:
export ANSYSLI_TIMEOUT_TCP=300
ANSYS Fluent Journal Files
In contrast to ANSYS CFX, the information provided in an ANSYS Fluent CAS file is either not sufficient or not properly used by ANSYS Fluent in order to run a simulation on a parallel cluster system by just submitting the CAS file. For a proper start and solver control of a parallel ANSYS Fluent simulation run in batch mode it is required to provide an at least minimal so-called journal file (file extension *.jou), as can be seen in the above SLURM script examples in the command line option "-i Static_Mixer_run_Fluent.jou".
Such a basic journal file contains a number of so-called TUI commands to ANSYS Fluent (TUI = Text User Interface). Details on the TUI command language of ANSYS Fluent can be found in the ANSYS Fluent documentation, Part II: Solution Mode; Chapter 2: Text User Interface (TUI).
As a small example of such a basic ANSYS Fluent journal file the "Static_Mixer_run_Fluent.jou" from the above SLURM batch queuing scripts is provided here. This small journal file does the following:
- read CAS file for the Static_Mixer.cas simulation
- write an ANSYS Fluent settings file
- do hybrid initialization of the case
- print time stamps of wall clock time at the start and end of solver iterations, e.g. for performance analysis
- do 100 steady-state iterations of the ANSYS Fluent solver, pseudo-transient solution method
- write a pair of CAS/DAT results files
- write reports of parallel time usage, system status and the ANSYS Fluent simulation summary
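A minimal sketch of such a journal file is shown below. The TUI command paths and file names are assumptions reconstructed from the steps listed above and from the snippets shown further down on this page; please check them against the TUI documentation of your ANSYS Fluent release before use.

; Static_Mixer_run_Fluent.jou - minimal example journal file (sketch)
; batch options: no confirmation prompts, exit on error
/file/set-batch-options no yes yes no
; read the CAS file of the Static_Mixer simulation
/file/read-case "./Static_Mixer.cas"
; write an ANSYS Fluent settings file
/file/write-settings "./Static_Mixer.set"
; declare the check-pointing and emergency exit trigger files (see the corresponding section below)
(set! checkpoint/check-filename "./check-fluent")
(set! checkpoint/exit-filename "./exit-fluent")
; hybrid initialization of the case
/solve/initialize/hyb-initialization
; print a wall clock time stamp before the solver iterations (for performance analysis)
!date
; 100 steady-state iterations (the pseudo-transient solution method is configured in the CAS file)
/solve/iterate 100
; print a wall clock time stamp after the solver iterations
!date
; write the pair of CAS/DAT result files
/file/write-case-data "./Static_Mixer_result.cas.gz"
; reports of parallel time usage, system status and the simulation summary
/parallel/timer/usage
/report/system/sys-stats
/report/summary yes "./Static_Mixer_summary.rep"
/exit
yes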
Please note that for carrying out the same simulation as a transient simulation, the journal file would require modifications for the time-step integration. The same applies if, for example, a run is to be initialized from a previously computed interpolation file or continued from a previously obtained pair of CAS/DAT result files.
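For illustration, a minimal sketch of the transient counterpart (TUI command names assumed, values to be adapted to your case) could replace the steady-state iteration command along the following lines:

; set a fixed physical time-step size of e.g. 1 ms
/solve/set/time-step 0.001
; run 100 time steps with up to 20 iterations per time step
/solve/dual-time-iterate 100 20
; alternatively, to initialize from a previously written interpolation file instead of a hybrid initialization:
; /file/interpolate/read-data "./previous_result.ip"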
ANSYS Fluent Check-pointing and Emergency Exit in Batch Mode
Normally a user does not have direct SSH access to the compute nodes of a Linux cluster where ANSYS Fluent is executed under the control of a batch queueing system such as SLURM. Therefore the following approach is provided to control ANSYS Fluent to a certain extent from the Linux cluster login node while it is executing on a Linux cluster partition. It might be desirable to:
- Check-pointing the simulation:
The user might want ANSYS Fluent to write a pair of CAS/DAT files of the current intermediate simulation result at the next possible opportunity, i.e. on reaching the next steady-state iteration or on finalizing the currently computed time step.
- Immediate stop or emergency exit from the current simulation run:
The user might want ANSYS Fluent to stop the current simulation at the next possible opportunity and to write the last computed CFD result to a pair of CAS/DAT files for postprocessing or simulation restart. This could be desirable if, for some reason, the time in the queue is about to expire, but according to the specified journal file ANSYS Fluent would not reach a normal end of execution with results file writing. Without the following mechanism ANSYS Fluent would simply be canceled by the SLURM manager with total loss of the last computed result (i.e. no CAS/DAT files being written); with it, this can be circumvented.
As in the example journal file above, the following lines need to be included, in order to make ANSYS Fluent check-pointing and emergency exit functionality available:
;.....
; Journal file snippet enabling check-pointing and emergency exit functionality
;
; Scheme commands to specify the check-pointing and emergency exit commands:
(set! checkpoint/check-filename "./check-fluent")
(set! checkpoint/exit-filename "./exit-fluent")
;
;.....
With the above two Scheme language definitions, two filenames are declared which can afterwards be used during ANSYS Fluent runtime to initiate either check-pointing (i.e. writing gzip'ed CAS/DAT files of the intermediate result) or an emergency exit from the current simulation. Once the above declarations have been included in the controlling journal file prior to SLURM submission, check-pointing or the emergency exit can be initiated from a Linux terminal window on the Linux cluster login node, positioned in the ANSYS Fluent working directory, with the following Linux commands:
cd ~/<working-dir>
# Linux command for initiating the ANSYS Fluent check-pointing:
touch ./check-fluent
#
# Linux command to initiate an emergency exit of ANSYS Fluent from the currently running simulation:
touch ./exit-fluent
Once the gzip'ed pair of CAS/DAT files has been written, ANSYS Fluent automatically removes the created empty trigger files from the filesystem, so that the user does not have to take care of them. As an additional convenience, in the case of an emergency exit ANSYS Fluent creates a new journal file, which starts with reading in the latest created CAS/DAT files and contains the remaining part of the user's journal file which has not been executed up to this point. This newly generated journal file can be used as a template for re-submission to SLURM and continuation of the simulation run.
ANSYS Fluent Journal Commands to flush large Linux I/O Buffer Caches
Under certain circumstances you might encounter warning messages of the following kind in the output files (transcript) of your ANSYS Fluent simulations:
> WARNING: Rank:0 Machine mpp3-xxxxxxx has 66 % of RAM filled with file buffer caches. This can cause potential performance issues. Please use -cflush flag to flush the cache. (In case of any trouble with that, try the TUI/Scheme command '(flush-cache)'.)
This warning message points you to the fact that, on at least one compute node of your assigned Linux cluster partition, it was detected on ANSYS Fluent startup that a larger amount of the node memory is occupied by Linux I/O buffer caches. So what does that mean and how should you react?
- In principle this should not be the case, since the SLURM prolog of the batch queueing system should provide you with an almost clean set of compute nodes with purged Linux I/O buffer caches. However, it can occur that this SLURM prolog fails to flush the caches entirely. If you observe this on a more regular basis, please file an LRZ service request and provide the ANSYS Fluent output file (transcript) to the LRZ support staff.
- The warning message from ANSYS Fluent is not a very serious concern and your intended simulation run should continue without further issues. But it can become a concern from a simulation performance point of view. Memory occupation by larger remaining Linux I/O buffer caches can (and most likely will) lead to an unbalanced distribution of ANSYS Fluent's memory allocations with respect to the launched ANSYS Fluent tasks on the dual-socket compute node, i.e. a number of ANSYS Fluent tasks need to permanently access their corresponding data not from the processor's own local memory but from the memory of the neighbouring processor on the dual-socket compute node, thereby encountering less efficient memory access. This can potentially lead to an ANSYS Fluent performance degradation on the order of 10-12%, based on the measurable increase in simulation run time. For a 48-hour simulation run, a 10-12% increase in simulation time amounts to roughly 5 additional hours.
- Consequently, if you permanently observe these warning messages, you may want to include the following command lines in your ANSYS Fluent journal file in order to flush the large Linux I/O buffer caches yourself prior to the startup of ANSYS Fluent for your intended simulation. Please do not include these lines by default, because they delay the application startup a bit, depending on the amount of available system memory on the nodes. The start-up delay can be up to 1-3 minutes in comparison with the normal startup procedure of ANSYS Fluent.
- Do not try to follow the above-mentioned recommendation to include the option "-cflush" on the ANSYS Fluent command line in your SLURM script, since this does not work for the diskless compute nodes of the Linux cluster. Instead use the following lines in your simulation journal file:
; Following commands reduce the pending Linux I/O buffers to a minimum.
; Large I/O buffers can potentially have a performance impact on ANSYS Fluent
; due to non-local memory allocation in dual-processor systems of the compute nodes.
(rpsetvar 'cache-flush/target/reduce-by-mb 12288)
(flush-cache)
;
; ... followed by the remaining code lines from your normal ANSYS Fluent journal file ...
;
/file/set-batch-options no yes yes no
; Option to disable HDF5-based CFF file format (legacy CAS/DAT files):
/file/cff-files? no
/file/read-case "./Static_Mixer.cas"
; ...
The "magic number" 12288 (Mb) in the 1st command line of the above ANSYS Fluent journal file snippet is the specification of the amount of system memory, which cannot be purged due to resident Linux OS files in the memory of the diskless compute nodes. This is mainly to be addressed to buffers of the GPFS parallel file system and the OS system image. Since the size of the preloaded OS system image can be subject to slight changes over time, this is rather an experimental value than just a fixed number. If you observe, that the flushing process is not carried out successfully (by error messages in the ANSYS Fluent transcript), try to increase this number. The given number essentially means, that on a compute node with 64 Gb ANSYS Fluent tries to clean-up/flush about 64Gb - 12.288Gb = 51.772Gb of memory.
Creation of Graphics Output from ANSYS Fluent in Batch Mode
Normally, in batch mode ANSYS Fluent only produces a transcript of the simulation run, i.e. a tabulated list of computed residuals and monitor data together with other text-based information about the current simulation run, and finally the specified output files (CAS/DAT files, backup files, monitor files and reports). But it would at least be desirable to obtain from a batch simulation run a PNG or JPEG graphics file with the graphical representation of the convergence history of ANSYS Fluent. This can be realized as follows.
First of all ANSYS Fluent needs to be started in the SLURM script with the correct command line options and by including a so-called NULL graphics driver:
fluent 3ddp -mpi=intel -t$LOADL_TOTAL_TASKS -cnf=$FLUHOSTS -gu -driver null -i Static_Mixer_run_Fluent.jou > Static_Mixer_run_Fluent.out
Here the options "-gu -driver null" allow the generation and export of graphics files from an ANSYS Fluent simulation run in batch mode.
The user then has to organize this graphics file export by appropriate journal commands or by defined calculation activities. For example, the following journal commands can be used to write the convergence history of the ANSYS Fluent simulation as colored and black&white residual diagrams into two separate PNG files:
;.....
;
; write a PNG file of the residual history
;
/solve/monitors/residual/plot? yes
/display/set-window 1
/display/set/picture/driver png
/display/save-picture residual_history.png
/display/set/picture/color-mode/mono-chrome
/display/set-window 1
/display/save-picture residual_history_bw.png
;
;.....
With similar journal file statements, e.g. diagrams of monitor data or postprocessing graphics such as contour plots, streamlines, isosurfaces, etc., previously defined with the postprocessing functionality inside ANSYS Fluent, can be exported from an ANSYS Fluent simulation in batch mode as well.
Compilation of ANSYS Fluent UDFs (User-Defined Functions)
For customization purposes ANSYS Fluent provides essentially the following two possibilities:
- Starting with the ANSYS Fluent 2019.R1 release (early beta in R19.2), the CFD solver supports algebraic expressions, as ANSYS CFX and Discovery (formerly known as AIM Fluids) users have long been used to. Consequently, for many purposes where users had to write a piece of a C subroutine in the past, the same goal can now be achieved by just inserting a named algebraic expression in ANSYS Fluent's GUI, e.g. for specifying a turbulent velocity profile at an inlet cross-section of the geometry in accordance with the 1/7th power law.
- Nevertheless, in many cases User-Defined Functions (UDFs) are still rather common for ANSYS Fluent to customize built-in fluids solver capabilities and model functionalities. UDFs are user-programmed subroutines, written in the C language, which are hooked up in the CFD setup to fulfill their purpose.
Before ANSYS Fluent can be run for a CAS file that depends on a UDF, the UDF written in C needs to be compiled. Ideally, a UDF is compiled on a system with the same operating system (OS) and the same processor architecture as the target compute cluster. In addition, the UDF library needs to be created for the same target floating point precision, i.e. either single or double precision.
In the case of the LRZ Linux Clusters the need for a pre-compilation of UDFs has been removed starting with the ANSYS Fluent module files from January 2021. ANSYS Fluent simulations can be started in batch mode on the Linux Clusters by just providing the corresponding CAS file with hooked-up UDF calls together with the UDF source files in the C programming language. ANSYS Fluent will compile the corresponding UDFs into a library on-the-fly, if the ./libudf subdirectory does not yet exist. This compilation is done automatically prior to running the ANSYS Fluent simulation on the Linux Cluster.
On SuperMUC-NG no UDF compilation environment is available on the compute nodes. Please open your case once in an interactive ANSYS Fluent session started on an SNG login node with 2 tasks (2 CPU cores). This will compile the corresponding embedded UDF on the SNG login node and the corresponding UDF library will be stored in the UDF subdirectory. Afterwards you can use the pre-compiled UDF for computations on the SNG compute nodes.
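Such an interactive session on an SNG login node could, for example, be started as follows (the module version is only an example; X11 forwarding must be enabled for the GUI):

# on an SNG login node, with X11 forwarding enabled:
module load fluent/2023.R2
# start an interactive 2-core session and read the CAS file with the hooked-up UDFs once;
# the UDFs are compiled into the ./libudf subdirectory on the login node
fluent 3ddp -t2 -driver x11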