What is Comsol?
COMSOL Multiphysics is a finite element analysis (FEA), solver, and simulation software package for various physics and engineering applications, especially for coupled phenomena (multiphysics).
License Terms and Usage Conditions
Important Note: Due to the rather small interest in and demand for Comsol on the LRZ HPC clusters during the past years, we will not renew the Comsol support at the LRZ. The existing Comsol versions can still be used within the scope of the existing license, but newer releases or modules will not be available.
COMSOL Multiphysics is a licensed software. Therefore, Comsol sessions/jobs require permanent access to a license server such as the one maintained at the LRZ.
LRZ users without their own licenses can use the LRZ Comsol licenses on a first-come, first-served basis. Users with their own licenses can use those, provided Comsol Multiphysics GmbH agrees! If you want to perform simulations on SuperMUC, please contact the ServiceDesk and request that your licenses be moved to the LRZ license server, because this is the only license server accessible from SuperMUC (again, please ensure the agreement of Comsol Multiphysics GmbH!). The Linux Cluster does not fall under these restrictions.
The LRZ currently holds 5 general COMSOL Multiphysics FNL licenses, and one license for each of the following modules: AC/DC, Battery & Fuel Cells, Heat Transfer, Structural Mechanics, Acoustics.
If more licenses (also for other modules) are required, please contact our ServiceDesk. For modules of broader general interest among the universities, the LRZ might accept the costs of license procurement. Other financing schemes can be negotiated.
Once logged in on any of the LRZ cluster systems, please check the available (i.e. installed) versions:
$ module avail comsol
COMSOL Multiphysics can be used by loading the appropriate module,
$ module load comsol
The GUI can be started from the command line (this requires SSH X forwarding and a locally running X server (e.g. Xming or VcXsrv under Windows), or VNC).
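A minimal sketch of such a session might look as follows. The login node name and user ID are placeholders; use the login node of the cluster you actually work on.

```shell
# Log in with X forwarding enabled (host name is a placeholder,
# substitute the login node of your cluster):
ssh -X yourUserID@lxlogin1.lrz.de

# On the cluster: load the Comsol module and start the GUI
module load comsol
comsol
```

With VNC, the `ssh -X` step is replaced by connecting to a VNC session; the `module load comsol` and `comsol` commands stay the same.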
(!) WARNING (!): The latest versions of COMSOL Multiphysics require the WebKitGTK+ library for the built-in Help System. This library is not available on the login nodes of the LRZ clusters and SuperMUC. Please use the LRZ Remote Visualization System for pre- and post-processing! (see Comsol requirements)
On the command line, there are many options for a COMSOL run. Please, enter
$ comsol -help
to get a list of all possible options.
Batch Jobs on the Linux Cluster
Generally, Comsol supports shared-memory (threads) and distributed-memory (MPI) parallelization, as well as Slurm. See the Comsol Knowledge Base for more details.
Small jobs can be started inside the GUI or on the command line on the login nodes, but only for testing. For larger jobs and production runs, you must use our job queueing systems (SLURM on the Linux clusters, LoadLeveler on SuperMUC) in order to use our parallel compute resources!
For example, on the Linux Cluster (CoolMUC-2), a SLURM job script might look like this:
#!/bin/bash
#SBATCH -o ./jobComsol_%j_%N.out
#SBATCH -D .
#SBATCH -J comsol_job
#SBATCH --get-user-env
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std    # check whether this matches your needs! cm2_tiny, ...
#SBATCH --qos=cm2_std
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=4    # Haswell-specific !!!
#SBATCH --cpus-per-task=7      # Haswell-specific !!!
#SBATCH --mail-type=none
#SBATCH --mail-user=<your email>
#SBATCH --time=00:30:00

module load slurm_setup
module rm intel-mpi
module load intel-mpi/2018.4.274    # necessary on SLES 15
module load comsol/5.5

comsol batch -mpibootstrap slurm -mpifabrics shm:ofa -inputfile micromixer_cluster.mph -outputfile output.mph -tmpdir $TMPDIR
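Assuming the script above is saved under a name of your choice (the file name `comsol_job.sh` below is just a placeholder), it is submitted and monitored with the usual SLURM commands. Note that the `--clusters` flag must be repeated for the query commands:

```shell
# Submit the job script to SLURM
sbatch comsol_job.sh

# Check the status of your jobs on the cm2 cluster
squeue --clusters=cm2 -u $USER

# Cancel a job if necessary (replace <jobid> with the actual job ID)
scancel --clusters=cm2 <jobid>
```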
Another option is to create the hostfile explicitly. COMSOL also provides hybrid task execution, so using shared-memory parallelism (OpenMP) is highly encouraged, as runtimes can be considerably smaller.
Concerning the MPI fabrics: If you want to use Comsol on CoolMUC-3, I_MPI_FABRICS must be set to shm:tmi. Alternatively, you can use -mpifabrics shm:tmi on the command line.
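Both variants are sketched below; the input/output file names are placeholders for your own model files.

```shell
# Variant 1: set the fabric via the Intel MPI environment variable
# (e.g. inside a CoolMUC-3 job script):
export I_MPI_FABRICS=shm:tmi
comsol batch -mpibootstrap slurm -inputfile model.mph -outputfile output.mph

# Variant 2: pass the fabric directly on the Comsol command line:
comsol batch -mpibootstrap slurm -mpifabrics shm:tmi -inputfile model.mph -outputfile output.mph
```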
Temporary folder: By default, Comsol uses /tmp as its temporary work folder. This is a node-local folder (also on the compute nodes), which is rather small. If you use it, you will most probably experience an error like "No space left on device". The temporary work folder for Comsol can be changed via the option -tmpdir $TMPDIR. Any path with access permissions and enough capacity can be used as temporary work folder; $TMPDIR points to $SCRATCH and fulfills these criteria.
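For illustration, two possible ways of redirecting the temporary folder (file names and the subdirectory name are placeholders):

```shell
# Use the LRZ-provided $TMPDIR, which points to $SCRATCH:
comsol batch -inputfile model.mph -outputfile output.mph -tmpdir $TMPDIR

# Or use any other writable directory with sufficient capacity, e.g.:
mkdir -p $SCRATCH/comsol_tmp
comsol batch -inputfile model.mph -outputfile output.mph -tmpdir $SCRATCH/comsol_tmp
```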
More information for parallel processing can be found in the Comsol Knowledge Base.
Special Batch related Topics
COMSOL can be executed with several commands, among others with batch. To obtain an overview of the batch-related command-line options, issue
$ comsol batch -help
Useful options are -inputfile, -outputfile, -batchlog, and -study. The latter selects a specific study from the input file to be performed.
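A combined example of these options is sketched below. The file names and the study tag `std1` are placeholders; check the actual study tags defined in your .mph file.

```shell
# Run only study "std1" from the input file and write a log of the batch run:
comsol batch -inputfile model.mph -outputfile solved.mph \
       -batchlog model_batch.log -study std1
```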
Playing with NUMA-specific options (MPI-tasks versus OpenMP threads) might yield some additional performance enhancements. But general recommendations cannot be given. One should perform tests in specific cases.
Playing with MPI-related options is subject to the same recommendations as for the NUMA-specific options. As COMSOL ships its own (Intel) MPI distribution, there might be interference with the LRZ MPI installations. If problems occur due to this circumstance, please contact the ServiceDesk!
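As a sketch of such tests, COMSOL's -nn (total number of MPI processes) and -np (cores, i.e. threads, per process) options can be combined in different ways for the same core count. The node/core numbers below assume 4 nodes with 28 cores each; file names are placeholders, and which split is faster depends on the model, so benchmark both.

```shell
# 4 MPI processes x 28 threads (one process per node):
comsol batch -nn 4 -np 28 -inputfile model.mph -outputfile output.mph

# 16 MPI processes x 7 threads (four processes per node):
comsol batch -nn 16 -np 7 -inputfile model.mph -outputfile output.mph
```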
Proposals for Comsol Workflows at LRZ
The COMSOL workflow usually comprises pre-processing, solving, and post-processing. Pre- and post-processing are highly GUI-bound and can partly be done locally, on the LRZ cluster login nodes, or on the LRZ Remote Visualization System if hardware graphics acceleration is required. The meshing and compute steps, specifically for larger cases, should be done on the compute nodes! So, we propose the following workflows (depending also on the licenses you have and want to use):
2. Batch Job from GUI
Using LRZ Licenses
If you want to use the LRZ COMSOL licenses, the whole workflow should be accomplished on the LRZ systems. For pre- and post-processing, specifically if elaborate graphical processing is involved, we recommend using the LRZ Remote Visualization System. Each node there has several cores and a GPU (graphics card). For smaller and not too long-running cases and tasks, the 28 cores can also be used for the computing steps. Please do not waste licenses by opening several GUIs! For the same reason, please run compute steps in parallel from within the GUI (not on an extra command line). To also exploit the rendering hardware, open the GUI and switch Options -> Preferences -> Graphics and Plot Windows -> Rendering to OpenGL (and restart the GUI). Under Options -> Preferences -> General you can also change the preferred language of the GUI. For this to work on the Remote Visualization nodes, you must execute Comsol via
For larger or longer-running cases, please prepare your case file on the LRZ Remote Visualization System and submit COMSOL batch jobs, e.g. on the Linux Cluster, as described above. The HOME and WORK directories are mounted on both the Linux Cluster and the Remote Visualization nodes, so you do not need to copy large files back and forth.
Please take care to check the "use batch license" option if you submit jobs to SLURM from within the GUI! This helps to maximize the utilization of all available licenses.
Using own Licenses
If you have your own licenses for COMSOL (either on your own license server, or hosted on the LRZ license server), you can use them. The LRZ Linux Cluster nodes can access the LRZ license server (comsol.lrz.de) and the internet. If you want to use your own license server, make sure that it is reachable from outside! Before considering this, ensure that Comsol Multiphysics GmbH agrees to such an operational mode!
For SuperMUC, only the LRZ license server is reachable.
Potentially, there is also a chance to connect the GUI from a laptop or local PC via the LRZ login nodes to the LRZ SLURM job scheduler (as described above). But this might only work if the LRZ license server is used exclusively. If you are interested in this workflow configuration, please contact our ServiceDesk!
LoadLeveler (on SuperMUC) is currently not supported by the COMSOL Multiphysics GUI.