Simple Batch Job

Pre-Processing

  • Set up your model in the Comsol GUI, either locally on your own laptop/PC or on the LRZ login or remote visualization nodes
  • Result: a .mph model file
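
    To start the GUI on a login node, a minimal sketch (assuming X forwarding is available and that the default comsol module provides the GUI; user and host names are placeholders):

    # on your local machine: log in to an LRZ login node with X forwarding
    ssh -Y <your LRZ user>@<LRZ login node>
    # on the login node: load Comsol and start the GUI
    module load comsol
    comsol &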

Computation/Simulation

  • Copy the .mph file to your HOME or WORK directory (depending on where you want to run your simulation)
  • Create a SLURM batch script, e.g. using vi, emacs, or any other plain-text editor available

    Simple SLURM Batch Script (comsol_job.sh)
    #!/bin/bash 
    #SBATCH -o <your job-output path>/job.mpp2_Comsol.out
    #SBATCH -D <your work path>/
    #SBATCH -J mpp2_comsol_test
    #SBATCH --get-user-env 
    #SBATCH --clusters=mpp2
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=4        # CoolMUC-2 (Haswell): 4 MPI tasks per node
    #SBATCH --cpus-per-task=7          # 4 tasks x 7 threads = 28 cores per Haswell node
    #SBATCH --mail-type=none
    #SBATCH --mail-user=<your email>
    #SBATCH --time=00:30:00
    module load slurm_setup
    # check whether a different MPI module needs to be loaded (e.g. under SLES 15)!
    module load comsol
    comsol batch -mpibootstrap slurm -mpifabrics shm:ofa -inputfile micromixer_cluster.mph -outputfile output.mph -tmpdir $TMPDIR

    Take care to estimate the runtime of your job conservatively; simple scaling tests on a smaller case can help. Select NTASKS-PER-NODE and OMP_NUM_THREADS according to the architecture you are running Comsol on. Hint: NTASKS-PER-NODE * OMP_NUM_THREADS should equal the total number of CPU cores per node.
    Concerning the MPI fabrics: if you want to run Comsol on CoolMUC-3, I_MPI_FABRICS must be set to shm:tmi; alternatively, you can pass -mpifabrics shm:tmi on the command line. For CoolMUC-2, the script above was tested with -mpifabrics shm:ofa. A sketch of a CoolMUC-3 variant follows below.
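
    For orientation only, a sketch of how the corresponding script lines could look on CoolMUC-3; the cluster name mpp3, the 64-core node size, and the 4 x 16 task/thread split are assumptions here, not tuned settings (only the shm:tmi requirement is taken from the note above):

    #SBATCH --clusters=mpp3            # CoolMUC-3 instead of CoolMUC-2 (mpp2)
    #SBATCH --ntasks-per-node=4        # example split, assuming 64 cores per node
    #SBATCH --cpus-per-task=16         # 4 tasks x 16 threads = 64 cores per node
    module load slurm_setup
    module load comsol
    # thread count per MPI task, following the hint above
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    # shm:tmi instead of shm:ofa, as required on CoolMUC-3
    comsol batch -mpibootstrap slurm -mpifabrics shm:tmi \
           -inputfile micromixer_cluster.mph -outputfile output.mph -tmpdir $TMPDIR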

  • Submit the job!

    $ sbatch comsol_job.sh
  • Using the SLURM tools (and the --mail-type option), you can get updates about the status of your job; see the command sketch below
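
    A few generic SLURM commands for monitoring the job (the -M/--clusters option selects the target cluster; the job ID is a placeholder):

    # list your pending/running jobs on the mpp2 cluster
    squeue -M mpp2 -u $USER
    # show accounting information for a (finished) job
    sacct -M mpp2 -j <your job id>
    # cancel a job if necessary
    scancel -M mpp2 <your job id>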

Post-Processing

  • Copy the result file (output.mph) back and open it in the Comsol GUI (locally on your laptop/PC, or on the LRZ login or remote visualization nodes) for post-processing; a file-transfer sketch follows below
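
    A minimal scp sketch, run from your local machine; user, host name, and paths are placeholders, and the same pattern with source and destination swapped can be used to upload the input .mph file in the first place:

    # fetch the result file from the cluster to the current local directory
    scp <your LRZ user>@<LRZ login node>:<your work path>/output.mph .
    # upload direction (before submitting the job)
    scp micromixer_cluster.mph <your LRZ user>@<LRZ login node>:<your work path>/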