Comsol on HPC Systems

Important Note!!

Due to the low demand for Comsol on the LRZ HPC clusters in recent years, we have decided not to renew the Comsol support at the LRZ.
The existing Comsol versions and installations can still be used within the scope of the existing license and the available functionality (note that Comsol version 5.6 is not officially supported for the OS version running on the Linux Cluster).
However, newer releases or modules will not be made available.

What is Comsol?

COMSOL Multiphysics is a finite element analysis (FEA), solver, and simulation software package for various physics and engineering applications, in particular for coupled phenomena (multiphysics).

License Terms and Usage Conditions 

COMSOL Multiphysics is a licensed software. Therefore, Comsol sessions/jobs require permanent access to a license server such as the one maintained at the LRZ.
LRZ users without their own licenses can use the LRZ Comsol licenses on a first-come, first-served basis. Users with their own licenses can use those, provided Comsol Multiphysics GmbH agrees! If you want to perform simulations on SuperMUC, please contact the ServiceDesk and request that your licenses be moved to the LRZ license server, because this is the only license server accessible from SuperMUC (again, please ensure the agreement of Comsol Multiphysics GmbH!). The Linux Cluster is not subject to these restrictions.

The LRZ currently holds 5 general COMSOL Multiphysics FNL licenses, and one license for each of the following modules: AC/DC, Battery & Fuel Cells, Heat Transfer, Structural Mechanics, Acoustics.
If more licenses (also for other modules) are required, please contact our ServiceDesk. For modules of broader general interest among the universities, the LRZ might accept the costs of license procurement. Other financing schemes can be negotiated.

Getting Started

Once logged in on any of the LRZ cluster systems, please check the available (i.e. installed) versions:

> module use /lrz/sys/share/modules/extfiles
> module avail comsol
------------- /lrz/sys/share/modules/extfiles -----------------
comsol/5.6_u2

COMSOL Multiphysics can be used by loading the appropriate module:

> module load comsol

The GUI can be started via the following command (this requires SSH X forwarding and a locally running X server, e.g. Xming or VcXsrv under Windows, or a VNC session):

> comsol

(!) WARNING (!): COMSOL Multiphysics requires the WebKitGTK+ library for the built-in Help System. This library is not available on the LRZ clusters (see the Comsol requirements).
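For illustration, a complete GUI start from a local machine might look like this minimal sketch (the login node address is a placeholder; use the login node of your target cluster):

> ssh -Y userid@<login-node>.lrz.de     # SSH with X11 forwarding enabled
> module use /lrz/sys/share/modules/extfiles
> module load comsol
> comsol                                # starts the COMSOL Desktop GUI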

On the command line, there are many options for a COMSOL run. Please enter

$ comsol -help

to get a list of all possible options.

Batch Jobs on the Linux Cluster

Generally, Comsol supports shared-memory (threads) and distributed-memory (MPI) parallelization, as well as Slurm. See the Comsol Knowledge Base for more details.

Small jobs can be started inside the GUI or on the command line on the login nodes, but only for testing. For larger jobs and production runs, you must use our job queueing systems (SLURM on the Linux Cluster/LoadLeveler on SuperMUC) in order to use our parallel compute resources!
For example, on the Linux Cluster (CoolMUC-4), a SLURM job script might look like this:

Example SLURM Script for CoolMUC-4
#!/bin/bash 
#SBATCH -o ./jobComsol_%j_%N.out
#SBATCH -D .
#SBATCH -J comsol_job
#SBATCH --get-user-env
#SBATCH --export=none
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny      # check whether this matches your needs!
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2       # please check hardware specifics!
#SBATCH --cpus-per-task=56        # please check hardware specifics!
#SBATCH --mail-type=none
#SBATCH --time=00:30:00

module load slurm_setup

module use /lrz/sys/share/modules/extfiles      # for the moment
module load comsol

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
comsol batch -mpibootstrap slurm -inputfile micromixer_cluster.mph -outputfile output.mph -tmpdir $TMPDIR
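
Assuming the script above is saved as, e.g., job_comsol.sh (an arbitrary example name), the job is submitted and monitored with the usual Slurm commands:

> sbatch job_comsol.sh                  # submit the job script
> squeue --clusters=cm4 -u $USER        # check the status of your jobs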

COMSOL provides a hybrid task execution model. Using shared-memory parallelism (OpenMP) is highly encouraged, as runtimes can be considerably smaller.
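
For a purely shared-memory run on a single node, a minimal sketch could use COMSOL's -np option (number of cores) instead of an MPI launch; the core count below assumes a single task with 56 allocated cores, as in the script above:

> comsol batch -np 56 -inputfile micromixer_cluster.mph -outputfile output.mph -tmpdir $TMPDIR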

Concerning the MPI fabrics: running Comsol on more than one node of CoolMUC-4 has not been tested, but might work. In case of questions or problems, please open a Service Desk ticket.
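
If you nevertheless want to experiment with a multi-node run, a sketch of the modified Slurm directives might look as follows (untested, as noted above; the task/thread split is an assumption to be benchmarked):

#SBATCH --nodes=2                 # two nodes instead of one
#SBATCH --ntasks-per-node=2       # MPI tasks per node (assumption: one per socket)
#SBATCH --cpus-per-task=56        # OpenMP threads per task; check hardware specifics!

The comsol batch call itself stays unchanged, since -mpibootstrap slurm picks up the task layout from Slurm.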

Temporary folder: By default, Comsol uses /tmp as its temporary work folder. This is a node-local folder (also on the compute nodes), which might be rather small. If you use it, you will most probably run into an error like No space left on device. The temporary work folder for Comsol can be changed via the option -tmpdir. Any path with write permission and enough capacity can be used; $TMPDIR points to $SCRATCH and fulfills these criteria.
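
If you prefer a dedicated, job-specific scratch directory over $TMPDIR, a sketch could look like this (the directory name is just an example):

> mkdir -p $SCRATCH/comsol_tmp_$SLURM_JOB_ID
> comsol batch -inputfile micromixer_cluster.mph -outputfile output.mph -tmpdir $SCRATCH/comsol_tmp_$SLURM_JOB_ID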

More information for parallel processing can be found in the Comsol Knowledge Base.

Special Batch-Related Topics

COMSOL can be executed with several commands, among others with batch. To obtain an overview of the batch-related command line options, issue

$ comsol batch -help

Useful options are -inputfile, -outputfile, -batchlog, and -study. The latter selects a specific study from the input file to be performed.
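
Putting these options together, a typical invocation might look like the following sketch (the file names are examples; the study tag std1 is an assumption — check the tags in your model tree):

> comsol batch -inputfile model.mph -outputfile model_solved.mph -batchlog model.log -study std1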

Experimenting with NUMA-specific options (MPI tasks versus OpenMP threads) might yield some additional performance. However, no general recommendations can be given; please run tests for your specific case.
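
As an illustrative sketch (assuming the 2 x 56-core node layout from the script above), one could benchmark different splits at a constant total core count by varying only the Slurm directives:

#SBATCH --ntasks-per-node=2       # variant A: 2 MPI tasks x 56 OpenMP threads
#SBATCH --cpus-per-task=56

#SBATCH --ntasks-per-node=4       # variant B: 4 MPI tasks x 28 OpenMP threads
#SBATCH --cpus-per-task=28

In both cases, OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK keeps the thread count consistent with the allocation.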

Experimenting with MPI-related options is subject to the same recommendation as for the NUMA-specific options. As COMSOL ships its own (Intel) MPI distribution, there might be interference with the LRZ MPI installations. If problems occur due to this circumstance, please contact the ServiceDesk!

Proposals for Comsol Workflows at LRZ

The COMSOL workflow usually comprises pre-processing, solving, and post-processing. Pre- and post-processing are strongly tied to the GUI and can partly be done locally or on the LRZ cluster login nodes. The meshing and compute steps, specifically for larger cases, should be done on the compute nodes! We therefore propose the following workflow (depending also on the licenses you have available).

  1. Prepare the case using the GUI. This can be accomplished e.g. via https://ood.hpc.lrz.de → VNC session on a compute or login node.
    This results in an MPH file.
  2. Submit a Slurm job script (with the content as outlined above).

Using LRZ Licenses

If you want to use the LRZ COMSOL licenses, the whole workflow should be accomplished on the LRZ systems. Please do not waste licenses by opening several GUIs!
Under Options → Preferences → General you can also change the preferred language of the GUI.

For larger or longer-running cases, please prepare your case file and submit COMSOL batch jobs to the Linux Cluster as described above.

Please note that Comsol supports "use batch license" if Slurm jobs are submitted from within the GUI! This helps to maximize the utilization of all available licenses. But it requires configuring the Slurm queues inside the GUI manually.

Using own Licenses

If you have your own licenses for COMSOL (either on your own license server, or hosted on the LRZ license server), you can use them (you can even install your own Comsol if Comsol GmbH agrees). The LRZ Linux Cluster nodes have access to the LRZ license server (comsol.lrz.de) and to the internet. If you want to use your own license server, make sure that it is reachable from the outside (both the department firewall and the license server must be configured accordingly)! Before considering this, ensure that Comsol Multiphysics GmbH agrees to such an operational mode!
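
As a sketch, an external FlexNet license server is typically selected via the LMCOMSOL_LICENSE_FILE environment variable before starting Comsol (hostname and port below are placeholders for your server's values; 1718 is merely the common FlexNet default port for COMSOL):

> export LMCOMSOL_LICENSE_FILE=1718@license-server.example.org    # port@host of your license server
> comsol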

Getting Help / Documentation

General user guides and documentation are available at Comsol.
For an introduction, please have a look at the Introduction to COMSOL Multiphysics.

Comsol also provides Webinars and Learning Videos.