OpenFOAM on HPC Systems
What is OpenFOAM®?
In its simplest installed form, OpenFOAM® is a set of libraries and solvers for a wide range of problems in fluid dynamics and continuum mechanics, from simple laminar regimes to DNS or LES, including reactive turbulent flows. Many solvers have multiphysics features, where structural mechanics domains are coupled with CFD domains, and further features, such as Molecular Dynamics (MD), are available as modules.
But OpenFOAM® also represents a C++ framework for manipulating fields and solving general partial differential equations by means of finite volume methods (FVM) on unstructured grids. It is therefore easy to adapt to complex geometries and a wide spectrum of configurations and applications. Furthermore, the software is MPI-parallel and ships with a large set of useful utilities for importing and exporting meshes, configurations, etc., with interfaces to, among others, Fluent, CFX, ParaView, and EnSight.
License Terms and Usage Conditions
OpenFOAM® is published under the GNU General Public License (GPL), and the source code is freely available. Users are encouraged to download and compile their preferred version/flavor of OpenFOAM on the LRZ HPC clusters.
ESI OpenFOAM, The OpenFOAM Foundation, Foam Extend
The installation procedure can be a bit involved, although one can generally follow the installation instructions of the respective OpenFOAM distributor. The decisions you have to make concern the MPI flavor (Intel MPI, OpenMPI, etc.) and the use of the various third-party libraries. The best recommendation is usually to compile every dependency you need yourself; that way you remain (to a large extent) independent of the LRZ system environment.
In case you need help, please contact the LRZ ServiceDesk.
We offer some support for maintained OpenFOAM installations (see the next paragraph). The decision on version and flavor is guided by the size of the requesting user groups. We can support only fully released OpenFOAM versions (no development versions).
Getting Started
Check the available (i.e. installed) versions; note that different systems might provide different versions/variants:
~> module avail openfoam
------- /lrz/sys/spack/.......... -------
openfoam/2006-gcc11-impi-i32    openfoam/2006-gcc11-impi-i64
If a suitable version is available, load the module as follows:
~> module load openfoam/2006-gcc11-impi-i64
(i32 and i64 indicate whether the mesh indices are 32-bit or 64-bit integers. For large meshes, i64 is probably the better choice.)
As a first step, you might consider copying the large suite of OpenFOAM tutorials into your FOAM_RUN directory by invoking:
~> mkdir -p $FOAM_RUN
~> cp -r $FOAM_TUTORIALS $FOAM_RUN/
Smaller serial tutorial cases can be run on the login node. The larger tutorial cases, especially the MPI parallel cases, must be submitted to the HPC clusters (see below).
Pre- and Post-Processing
For pre- and post-processing, i.e. meshing or visualizing results, the LRZ Remote Visualization System is a possible option. ParaView is the standard visualization tool for OpenFOAM.
For post-processing using ParaView, you only need to create a file with the ending .foam (e.g. touch bla.foam), and open that file from ParaView.
You can either download your data to your PC/laptop and analyze them there, or, for larger cases, use one of the options ParaView offers to analyze the data in place (i.e. remotely on the LRZ systems). For this to work in parallel, you need to leave the case decomposed and start paraview or pvserver with exactly as many MPI tasks as there are processor directories. Alternatively, you can use reconstructPar and decomposePar to change the decomposition. In the GUI, you need to select the option to open the decomposed case. If the number of MPI tasks does not match the number of processor directories, you will get error messages.
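A minimal sketch, assuming the case has been decomposed into 8 processor directories and a parallel pvserver is available; the port number is an arbitrary placeholder:
~> ls -d processor* | wc -l       # count the processor directories
8
~> mpiexec -n 8 pvserver --server-port=11111
# connect from the ParaView GUI (File -> Connect), open the case's .foam file,
# and select the decomposed-case option of the OpenFOAM reader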
Batch Jobs on LRZ Clusters
Production runs and longer simulations must be performed on the HPC clusters. A Slurm job script for the Linux Cluster may look, for example, like the sketch below.
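This is only a hedged sketch: the cluster/partition, node and task counts, module name, and solver (here pimpleFoam) are placeholders that must be adapted to your case and to the respective LRZ system.
#!/bin/bash
#SBATCH -J myfoam
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
#SBATCH --clusters=cm2_tiny               # placeholder: choose the appropriate cluster/partition
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28
#SBATCH --time=08:00:00
module load slurm_setup                   # LRZ convention on some clusters; omit where not needed
module load openfoam/2006-gcc11-impi-i64  # placeholder: the module you actually use
# nodes * ntasks-per-node must equal numberOfSubdomains in system/decomposeParDict
mpiexec -n $SLURM_NTASKS pimpleFoam -parallel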
For different LRZ systems, please consult the respective documentation for further or different Slurm and batch job settings! Submission is done via sbatch myfoam.sh.
For this to work correctly, the total number of MPI tasks (nodes times tasks-per-node) must equal numberOfSubdomains in system/decomposeParDict!
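A quick way to check this, assuming the 2 x 28 = 56 tasks from the sketch above and the (purely illustrative) scotch decomposition method:
~> grep -E 'numberOfSubdomains|method' system/decomposeParDict
numberOfSubdomains  56;
method              scotch;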
Own Installations
From Source
Please have a look into this guide. It essentially covers the installation-from-source procedure documented on the OpenFOAM documentation pages.
Using user_spack
Using Spack via user_spack is probably the simplest approach on our systems.
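A hedged sketch of the workflow; the module name, package name, and the choice of versions/variants are examples only and should be checked with spack info first:
~> module load user_spack
~> spack info openfoam            # ESI flavor; openfoam-org is the Foundation flavor
~> spack install openfoam
~> spack load openfoam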
If problems occur, there is not much we (LRZ) can do: we are neither OpenFOAM developers nor maintainers of the Spack packages, so we kindly ask you to report issues directly to them. It is often worth trying different compilers or compiler versions, since OpenFOAM reacts very sensitively to that. Also, please check what you really need: the simpler the dependency tree, the larger the chance of success (e.g. there is rarely a need to build VTK or ParaView).
Building User-defined Solvers/Libraries against the Spack OpenFOAM Installation
An example on CoolMUC-X might look as follows:
~> module rm intel-mpi intel-mkl intel
~> module load gcc intel-mpi openfoam/2006-gcc8-i64-impi
(You can conserve this environment using the module collection feature; see the sketch below. This simplifies development work with frameworks like OpenFOAM.)
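For example (the collection name is arbitrary; module save/restore is available in recent module systems, so treat this as a sketch rather than a guaranteed feature):
~> module save openfoam-dev       # store the currently loaded modules as a named collection
~> module restore openfoam-dev    # restore the collection in a later session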
The name of the OpenFOAM module contains the compiler, gcc/8 in this case, so the matching compiler module needs to be loaded as well.
The MPI module is usually the default Intel MPI module (except that one better uses the variant built for GCC, as shown). Since Intel MPI is rather well behaved, and the actual MPI library is wrapped by OpenFOAM's libPstream.so, you should hardly ever need to link directly against the MPI library.
Example continued ...
~> cp -r $FOAM_APP/solvers/incompressible/pimpleFoam .
~> cd pimpleFoam
~/pimpleFoam> find . -name files -exec sed -i 's/FOAM_APPBIN/FOAM_USER_APPBIN/g; s/FOAM_LIBBIN/FOAM_USER_LIBBIN/g' {} +
~/pimpleFoam> WM_NCOMPPROCS=20 wmake
[...]
~/pimpleFoam> which pimpleFoam
<your-HOME>/OpenFOAM/<your USER ID>-v2006/platforms/linux64GccDPInt64-spack/bin/pimpleFoam
That's it. Please note that using FOAM_USER_APPBIN and FOAM_USER_LIBBIN instead of FOAM_APPBIN and FOAM_LIBBIN is essential, because you have no permission to install anything into our system folders.
Your user bin and lib paths usually also precede the system paths, so your own builds are found first. That is why, in the example above, pimpleFoam is found in the user path, not the system path.
For testing, prepare a small case for one or two nodes, and use the interactive queue of the cluster to run a few time steps.
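A hedged sketch of such an interactive test; the partition name, node/task counts, and module names are placeholders that depend on the cluster:
~> salloc --nodes=1 --ntasks-per-node=28 --partition=cm2_inter --time=00:30:00
~> module load gcc intel-mpi openfoam/2006-gcc8-i64-impi
~> mpiexec -n 28 pimpleFoam -parallel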
GPFS parallel Filesystem on the LRZ HPC clusters!
By default, OpenFOAM produces lots of small files: one per processor, per write-out step, and per field. GPFS (i.e. WORK and SCRATCH at LRZ) is not made for such a fine-grained file/folder structure. OpenFOAM currently does not seem to support HDF5 or NetCDF output, which would address this and similar issues.
More recent versions of OpenFOAM, however, support collated I/O. Using the FOAM_IORANKS environment variable, you can even determine which ranks perform the I/O. Our recommendation is to have one rank per node perform the I/O.
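A minimal sketch, assuming 2 nodes with 28 ranks each, so that the first rank of each node (ranks 0 and 28) does the writing:
~> export FOAM_IORANKS='(0 28)'
~> mpiexec -n 56 pimpleFoam -parallel -fileHandler collated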
Since I/O might be a bottleneck anyway, it is advisable to think about the problem before running it brute-force on the HPC clusters with possibly hundreds or even thousands of parallel tasks. What do you want to get out of your simulation? What are the relevant questions to be answered? OpenFOAM also offers features for in-situ post-processing (Catalyst might be an option here). This largely mitigates the I/O problem, because it reduces the need to store large amounts of data and/or hundreds of thousands of small files for offline post-processing with e.g. ParaView. Please consult the OpenFOAM User Guide and look for function objects!
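As a purely illustrative example (the function object and the fields it monitors are placeholders, not a recommendation), such run-time processing is configured in system/controlDict:
~> cat system/controlDict         # excerpt
functions
{
    minMaxU
    {
        type    fieldMinMax;
        libs    (fieldFunctionObjects);
        fields  (U);
    }
}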
Post-Processing via Paraview
Please use the dedicated LRZ paraview modules, NOT paraFoam!
- Small cases (<200 MByte) can be copied (scp, rsync, FileZilla, WinSCP, sftp, ...) and analyzed locally using ParaView.
- Medium-size cases (<1 GByte) can be analyzed using ParaView on the login nodes; we recommend starting a VNC server-client connection through an SSH tunnel. Alternatively, pvservers can be started in parallel (use the -launcher fork option of mpiexec; see the sketch after this list) and connected to locally on that node through the ParaView GUI (or, also possible, through the SSH tunnel if set up properly; we do not recommend this, however). Please be kind to the other users on the login node and do not monopolize all resources!
- Large cases: start a Slurm job and distribute the pvservers in parallel on as many nodes as necessary, as described in the ParaView Server-Client mode section.
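A hedged sketch for the medium-size scenario on a login node (the module name, task count, and port are placeholders):
~> module load paraview
~> cd <your case directory>
~> mpiexec -launcher fork -n 4 pvserver --server-port=11111
# then connect from the ParaView GUI on the same node (File -> Connect, localhost:11111)
# and open the case's .foam file as a decomposed case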
Remarks, Topics, Troubleshooting
swak4Foam Installation
swak4Foam is a package external to OpenFOAM and can be installed like any other user-provided solver or library. We can give no guarantee that the following procedure always works out of the box; specifically for the newest OpenFOAM versions, compatibility might not be given. However, the following procedure might succeed (shown here by example on the Linux Cluster).
The source is the development branch of swak4Foam (see http://openfoamwiki.net/index.php/Installation/swak4Foam/Downloading#swak4Foam_development_version).
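A hedged sketch of the typical build steps, assuming the development sources have been checked out as described on the wiki page above (the helper script and its exact name/location may differ between versions):
~> cd swak4Foam                                    # the checked-out development sources
~> ./maintainanceScripts/compileRequirements.sh    # optional: builds required tools (e.g. bison) locally
~> export WM_NCOMPPROCS=20
~> ./Allwmake                                      # installs into FOAM_USER_APPBIN / FOAM_USER_LIBBIN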
For usage, it is (currently) important that you manually execute the following beforehand (put it into your ~/.profile if you want):
> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$FOAM_USER_LIBBIN
Otherwise, no change of the modules is necessary for usage. (You can check with ldd $FOAM_USER_APPBIN/fieldReport, for instance; no "not found" entries should appear!)
Legacy Versions
Old versions of OpenFOAM (more than three releases behind the current version) cannot be supported on LRZ systems.
You need to port your case to one of the supported recent versions of OpenFOAM.
Specifically, the older versions do not exploit the vector-register hardware of modern CPUs!
Generally, we recommend using the latest stable release available.