ANSYS CFX (Fluid Dynamics)
ANSYS CFX is a general purpose Computational Fluid Dynamics (CFD) code. It has been part of the ANSYS software portfolio since 2003 (formerly owned by AEA Technology). As a general purpose CFD code, ANSYS CFX provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, as well as heat and mass transfer including CHT (conjugate heat transfer in solid domains). ANSYS CFX is particularly powerful because of its easy-to-use CEL/CCL command and expression language, which allows implemented physical models to be extended or modified without the need for programming. With CCL and additional user-defined variables, many physical models can be implemented by simply incorporating the required formulas, algebraic expressions and even transport equations into the ANSYS CFX setup, without the need for User-FORTRAN routines.
Further information about ANSYS CFX, licensing of the ANSYS software and related terms of software usage at LRZ, the ANSYS mailing list, access to the ANSYS software documentation and LRZ user support can be found on the main ANSYS documentation page.
Getting Started
Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. installation) of ANSYS CFX software by:
> module avail cfx
Load the preferred ANSYS CFX version environment module, e.g.:
> module load cfx/2024.R2
One can use ANSYS CFX in interactive GUI mode on the Login Nodes solely for the purpose of pre- and/or postprocessing (Linux: SSH option "-Y" for X11 forwarding; Windows: using PuTTY and Xming for X11 forwarding). This interactive usage is mainly intended for quick simulation setup changes that require GUI access. Since ANSYS CFX loads the mesh into the login node's memory, this approach is only applicable to comparably small cases. It is NOT permitted to run computationally intensive ANSYS CFX simulation runs or postprocessing sessions with large memory consumption on Login Nodes. The formerly existing Remote Visualization systems were switched off in March 2024 without replacement due to their end-of-life. Any work with the ANSYS software related to interactive mesh generation as well as graphically intensive pre- and postprocessing tasks needs to be carried out on local computer systems, using an ANSYS Academic Research license, which is available from LRZ for an annual license fee.
The so-called ANSYS CFX launcher is started by:
> cfx5launch
Alternatively on Linux systems you may want to start the standalone ANSYS CFX applications individually by:
> cfx5pre -gr mesa
or:
> cfx5post -gr mesa
Controlling an ANSYS CFX Simulation Run in Batch Mode
It is not permitted to run computationally intensive ANSYS CFX simulations on the front-end Login Nodes, in order not to disturb other LRZ users. However, the ANSYS CFX Solver Manager can be used to analyze the output file information of a finished CFX run or to monitor an ANSYS CFX simulation that is still running on the Linux Cluster or SuperMUC-NG in batch mode. This is accomplished by starting the corresponding ANSYS CFX application on a Login Node and connecting it to the corresponding simulation run:
> cfx5solve
To monitor a running simulation: File Menu → Monitor Run in Progress → select the corresponding run directory.
To analyze a finished simulation: File Menu → Monitor Finished Run → select the corresponding *.res file.
The ANSYS CFX Solver Manager communicates with a running ANSYS CFX simulation via file output in the run directory. Latencies in the job output and file systems can therefore lead to delays in the update of e.g. the output file information or the observed solver monitors. Furthermore, the ANSYS CFX Solver Manager can be used to stop a running simulation after the next steady-state iteration or timestep has finished, without loss of information, writing out a *.res and *.out file for a potential later restart or postprocessing analysis. The same kind of clean ANSYS CFX stop can be accomplished in a Linux shell as follows:
> cd <your-working-directory>
> cd <your-DEF-filename>_00x.dir
> touch stp
In the commands above you need to insert the name of your working directory and the name of your current simulation run. By creating an empty file named "stp" in the ANSYS CFX run directory with the command "touch stp", ANSYS CFX will recognize this file at the next possible convenience and initiate a clean stop of your simulation run. This should be given preference over cancelling your batch job, since a job cancellation leads to a hard termination of the ANSYS CFX simulation run with loss of the interim simulation result.
If you just want to write a RES/BAK file of the current intermediate CFD result without interrupting the ongoing ANSYS CFX simulation, this can be accomplished by starting the ANSYS CFX Solver Manager on a Linux Cluster login node, connecting it to the still ongoing simulation run as described above, and pressing the provided "Save" icon (Create a backup of the run at the current timestep). ANSYS CFX will write a full BAK file suitable for postprocessing and/or a solver restart and will continue with the ongoing simulation.
ANSYS CFX Parallel Execution (Batch Mode)
All parallel ANSYS CFX simulations on the LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the appropriate scheduling system (SLURM) into the different pre-defined parallel execution queues. Further information about the batch queuing systems and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG).
For job submission to a batch queuing system a corresponding small shell script needs to be provided, which contains:
- Batch queueing system specific commands for the job resource definition
- Module command to load the ANSYS CFX environment module
- Start command for parallel execution of cfx5solve with all appropriate command line parameters
The syntax and available command line options of the cfx5solve command can be listed by:
> cfx5solve -help
The configuration of the parallel cluster partition (list of node names and corresponding number of cores) is provided to the cfx5solve command via the automatically generated environment variable $CFX_HOSTLIST, which is derived from the job resource definition given by the cluster user to the batch queuing system (SLURM). Furthermore, the environment variable $LRZ_SYSTEM_SEGMENT is predefined by the cluster system and is simply passed through to the cfx5solve command to select the parallel start-up method and communication interconnect for the parallel ANSYS CFX simulation run.
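As an illustration only, the resulting cfx5solve call inside a job script might look like the following sketch; the definition file name is a placeholder, and the complete set of options for your case should be taken from cfx5solve -help and the job script examples below:
> cfx5solve -def <your-DEF-filename>.def -par-dist "$CFX_HOSTLIST" -start-method "$LRZ_SYSTEM_SEGMENT"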
Furthermore, for longer simulation runs we recommend that LRZ cluster users write regular backup files, which can be used as the basis for a job restart in case of a machine or job failure. A good practice for a 48-hour ANSYS CFX simulation (the maximum time limit) would be to write backup files every 6 or 12 hours. Further information can be found in the ANSYS documentation in the chapter "ANSYS CFX-Pre User's Guide" (Output Control → User Interface → Backup Tab). It is also recommended to use the "Elapsed Wall Clock Time Control" in the job definition in ANSYS CFX-Pre (Solver Control → Elapsed Wall Clock Time Control → Maximum Run Time → <48 h or the maximum queue time limit, respectively). When setting this wall clock time limit, plan enough time buffer for the writing of output and results files, which can be a time-consuming task depending on your application.
CoolMUC-4 : ANSYS CFX Job Submission on LRZ Linux Clusters under OS SLES15 SP6 using SLURM
ANSYS CFX Job Submission on CoolMUC-4 in Serial Queue
The name "serial queue" is to a certain extend misleading here. The serial queue of LRZ Linux Clusters (CM4) differ from other CoolMUC-4 queues in that regard, that the access is granted to just one single cluster node and that the access to this cluster node is non-exclusive, i.e. might be shared with other cluster users depending on the resource requirements of the job as they are specified in the SLURM script. Nevertheless the launched application can make use of more than just a single CPU core on that cluster node, i.e. apply a certain degree of parallelization - either in shared memory mode or in distributed memory mode (MPI).
Important: As a general rule of thumb, approx. 10,000-20,000 mesh cells (or even more) per CPU core should be provided by the simulation task in order to run efficiently on the CM4 hardware. The requested memory should be specified realistically and in proportion to the number of requested CPU cores, relative to the total number of CPU cores and the total amount of memory per compute node; for example, a job using one quarter of a node's cores should not request much more than one quarter of the node's memory.
In the following, an example of a SLURM job submission script for ANSYS CFX on CoolMUC-4 (SLURM queue = serial) is provided.
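The script below is only a minimal sketch, assuming the LRZ-provided variables $CFX_HOSTLIST and $LRZ_SYSTEM_SEGMENT described above; the cluster and partition names, core count, memory, wall time and the definition file name are assumptions/placeholders that must be adapted to your case and checked against the current CoolMUC-4 queue documentation:
#!/bin/bash
#SBATCH -J cfx_cm4_serial
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#SBATCH -D ./
#SBATCH --clusters=serial          # assumed cluster/partition names, please verify
#SBATCH --partition=serial_std
#SBATCH --nodes=1                  # serial queue: one (shared) node only
#SBATCH --ntasks-per-node=8        # approx. 10,000-20,000 mesh cells per core
#SBATCH --mem=32G                  # request memory in proportion to the requested cores
#SBATCH --time=24:00:00

module load cfx/2024.R2

# $CFX_HOSTLIST and $LRZ_SYSTEM_SEGMENT are provided by the LRZ environment (see above)
cfx5solve -def <your-DEF-filename>.def -par-dist "$CFX_HOSTLIST" -start-method "$LRZ_SYSTEM_SEGMENT"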
Please use this large and powerful compute resource with a carefully specified number of CPU cores and a reasonably quantified amount of requested node memory per compute node of CM4. Don't waste powerful CM4 compute resources and please be fair to other CM4 cluster users.
You should store the above listed SLURM script under e.g. the filename cfx_cm4_serial.sh. The shell script then needs to be made executable and submitted to the job scheduler by using the commands:
chmod 755 cfx_cm4_serial.sh
sbatch cfx_cm4_serial.sh
ANSYS CFX Job Submission on CoolMUC-4 in the cm4_tiny / cm4_std Queues
ANSYS CFX is provided on the new CoolMUC-4 (CM4) compute nodes with support for the Intel MPI message passing library on the CM4 queues (cm4_inter_large_mem, cm4_tiny, cm4_std) with InfiniBand interconnect.
Important: You should use the cm4_tiny / cm4_std queues ONLY (!) with the full number of CPU cores per node. This requires large ANSYS CFX simulations with substantially more than 1.5 million mesh cells in order to run efficiently and not to waste scarce compute resources on the CM4 queues (112 cores per node × approx. 10,000-20,000 mesh cells per core corresponds to roughly 1.1-2.2 million cells per node). If your simulation has fewer mesh cells, then please run the task in the CM4 serial queue instead (see the example above). As a general rule of thumb, approx. 10,000-20,000 mesh cells per CPU core (or more) should be provided by the simulation task in order to run efficiently on the CM4 hardware. Cluster nodes in the cm4_tiny / cm4_std queues are assigned exclusively to the user, and therefore all of the available CPU cores should be utilized.
In the following, an example of a SLURM job submission script for ANSYS CFX on CoolMUC-4 (SLURM cluster = cm4, partition = cm4_tiny | cm4_std) is provided.
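The script below is only a minimal sketch under the same assumptions as the serial example above; node count, wall time and any additionally required options (e.g. a QoS setting) must be adapted to your case and verified against the current CoolMUC-4 queue documentation:
#!/bin/bash
#SBATCH -J cfx_cm4_tiny
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#SBATCH -D ./
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny       # or cm4_std for larger multi-node jobs
#SBATCH --nodes=1                  # nodes are assigned exclusively
#SBATCH --ntasks-per-node=112      # always use the full number of cores per node
#SBATCH --time=24:00:00

module load cfx/2024.R2

# $CFX_HOSTLIST and $LRZ_SYSTEM_SEGMENT are provided by the LRZ environment (see above)
cfx5solve -def <your-DEF-filename>.def -par-dist "$CFX_HOSTLIST" -start-method "$LRZ_SYSTEM_SEGMENT"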
Please use this large and powerful compute resource only for tasks which really can make efficient use of the 112 CPU cores and 512 GB of memory per compute node of CM4. Don't use this resource for rather small tasks, thereby wasting these powerful resources.
Assuming that the above SLURM script has been saved under the filename "cfx_cm4_tiny.sh", the script needs to be made executable and the SLURM batch job submitted by issuing the following commands on one of the Linux Cluster login nodes:
chmod 755 cfx_cm4_tiny.sh
sbatch cfx_cm4_tiny.sh
Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper invoked by the cfx5solve startup script. Also, do not try to change the default Intel MPI to any other MPI version to run ANSYS CFX in parallel. On the LRZ cluster systems only the usage of Intel MPI is supported and known to work properly with ANSYS CFX.
SuperMUC-NG : ANSYS CFX Job Submission on SNG using SLURM
In the following, an example of a SLURM job submission script for ANSYS CFX on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) is provided. Please note that the supported ANSYS versions on SuperMUC-NG are ANSYS 2020.R1 or later. At this time ANSYS 2024.R2 is the default version.
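The script below is only a minimal sketch; the project account, node count and wall time are placeholders that must be adapted to your project and the chosen SuperMUC-NG partition, and the module names should be verified against the SuperMUC-NG documentation:
#!/bin/bash
#SBATCH -J cfx_sng
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#SBATCH -D ./
#SBATCH --partition=test           # test partition for short runs; use a production partition otherwise
#SBATCH --account=<your_project_id>
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48       # SuperMUC-NG compute nodes provide 48 cores each
#SBATCH --time=00:30:00

module load slurm_setup            # assumed to be required on SuperMUC-NG, see the SNG documentation
module load cfx/2024.R2

# $CFX_HOSTLIST and $LRZ_SYSTEM_SEGMENT are provided by the LRZ environment (see above)
cfx5solve -def <your-DEF-filename>.def -par-dist "$CFX_HOSTLIST" -start-method "$LRZ_SYSTEM_SEGMENT"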
Assuming that the above SLURM script has been saved under the filename "cfx_sng_slurm.sh", the SLURM batch job has to be submitted by issuing the following command on one of the SuperMUC-NG login nodes:
sbatch cfx_sng_slurm.sh