Simcenter StarCCM+ (Fluid Dynamics)

Simcenter StarCCM+ is a general-purpose Computational Fluid Dynamics (CFD) code. Simcenter StarCCM+ has been part of the Siemens PLM software portfolio since April 2016 (formerly owned by CD-adapco, also known as Computational Dynamics-Analysis & Design Application Company Ltd). As a general-purpose CFD code, Simcenter StarCCM+ provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, and heat and mass transfer including CHT (conjugate heat transfer in solid domains).

Further information about Simcenter StarCCM+, licensing of the Siemens PLM software and related terms of software usage at LRZ, the Siemens PLM mailing list, access to the Siemens PLM software documentation and LRZ user support can be found on the main Siemens PLM documentation page.

Getting Started

Once you are logged in to one of the LRZ cluster systems, you can check the availability (i.e. the installed versions) of the StarCCM+ software by:

> module avail starccm

Load the preferred StarCCM+ version environment module, e.g.:

> module load starccm/2024.3.1

StarCCM+ may be used in interactive GUI mode on the login nodes solely for the purpose of pre- and/or postprocessing (Linux: SSH option "-Y" for X11 forwarding; Windows: using PuTTY and Xming for X11 forwarding). This interactive usage is mainly intended for quick simulation setup changes that require GUI access. Since StarCCM+ loads the mesh into the login node's memory, this approach is only applicable to comparably small cases. It is NOT permitted to run computationally intensive StarCCM+ simulation runs or postprocessing sessions with large memory consumption on the login nodes. The formerly existing remote visualization systems were switched off in March 2024 without replacement due to their end of life. Any work with the Siemens PLM software (StarCCM+) related to interactive mesh generation as well as graphically intensive pre- and postprocessing tasks needs to be carried out on local computer systems using a Siemens PLM POD or node-locked license.

The Simcenter StarCCM+ GUI is started by:

> starccm+

The 3D results visualization program StarView+ can be launched by:

> starview+
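
Putting these steps together, a typical interactive pre-/postprocessing session on a login node might look like the following minimal sketch (the login node hostname is only an example and may differ for your cluster segment; keep the usage restrictions listed below in mind):

ssh -Y <your_user_id>@lxlogin5.lrz.de   # log in with X11 forwarding enabled
module load starccm/2024.3.1            # load the preferred StarCCM+ environment module
starccm+                                # GUI session for pre-/postprocessing of small cases only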

Siemens PLM StarCCM+ on Linux Cluster and SuperMUC-NG Login Nodes

StarCCM+ is a very resource-intensive application with respect to both main memory (RAM) and CPU resources! Please run StarCCM+ on login nodes with the greatest care and under your supervision (e.g. using the command "top" and pressing <Shift>+M in a second terminal window)!

In particular, running multiple StarCCM+ processes or using parallelization can cause a high load on the login node and has the potential to massively disturb other users on the same system! Running StarCCM+ in compute or meshing mode on login nodes can easily lead to overload and render the login node unresponsive, so that a reboot of the machine is required. Be careful!
StarCCM+ applications which cause a high load on login nodes and disturb other users or the general operation of the login node will be terminated by system administrators without any prior notification!

Our recommendations on the login nodes:

  • Running multiple instances of StarCCM+ by the same user is prohibited!
    Please run only one instance of the software on a particular login node at any time!
  • It is only allowed to run a single instance of StarCCM+ on login nodes, and solely for the purpose of pre- and/or postprocessing. No StarCCM+ simulations whatsoever are allowed on login nodes.
  • The maximum allowed degree of StarCCM+ parallelization on login nodes is 4 CPU cores!
    Any StarCCM+ instance using a higher degree of parallelization on login nodes will be terminated by system administrators without any prior notification!
  • If using up to 4 cores in StarCCM+ parallelization, it is recommended to switch StarCCM+ to Open MPI (see the sketch after this list). The default Intel MPI might not work on login nodes due to conflicts with SLURM.
  • Please check the load and memory consumption of your own StarCCM+ session. Usually, you can do this via the "top" command, e.g.:
    top -u $USER
    Pressing <Shift>+M sorts the displayed process list by the amount of memory consumed per process.
  • If a graphical user interface is needed, then you may run a single instance of StarCCM+ via a VNC session (VNC Server on Login-Nodes) to increase the performance and responsiveness of the GUI!
  • If a graphical user interface is not needed, it is advised to run StarCCM+ via an interactive SLURM job or in batch mode under SLURM control.
    These jobs run on compute nodes. A high degree of parallelization is explicitly allowed there, as long as StarCCM+ is run efficiently (rule of thumb: at least approx. 10,000 mesh elements per CPU core)!
    Do not over-parallelize StarCCM+ simulations. HPC resources that are not used effectively are wasted and thereby taken away from other users.
  • Repeated violation of the above-mentioned restrictions on StarCCM+ usage on login nodes might result in a ban of the affected user account and a notification to the scientific supervisor/professor.
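
As referenced in the list above, a minimal sketch of a login-node-compatible StarCCM+ invocation is given below (at most 4 cores, Open MPI instead of the default Intel MPI); the simulation file name is only a placeholder:

module load starccm/2024.3.1
# Pre-/postprocessing session on a login node: limited to 4 cores, using Open MPI
starccm+ -mpi openmpi -np 4 my_case.sim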

Mixed vs. Double Precision Solver of StarCCM+

Siemens PLM provides installation packages for mixed-precision and higher-accuracy double-precision simulations. The latter comes at the price of approx. 20% longer execution times and approx. twice as large simulation result files. The LRZ module system provides access to both versions of StarCCM+.

Access to StarCCM+ mixed precision solvers e.g. by:

module load starccm/2024.3.1                  # loading the mixed precision StarCCM+ module

Access to StarCCM+ double precision solvers e.g. by:

module load starccm_dp/2024.3.1               # loading the double precision StarCCM+ module

Simcenter StarCCM+ Parallel Execution (Batch Mode)

All parallel StarCCM+ simulations on the LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the respective scheduling system (SLURM) into the different pre-defined parallel execution queues. Further information about the batch queuing systems and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG).

For job submission to the batch queuing system, a small shell script needs to be provided, which contains:

  • Batch queueing system specific commands for the job resource definition
  • Module command to load the Simcenter StarCCM+ environment module
  • Start command for the parallel execution of starccm+ with all appropriate command line parameters, including a controlling StarCCM+ Java macro.

The intended syntax and the available command line options of the starccm+ solver command can be listed by:

> starccm+ -help

The configuration of the parallel cluster partition (list of node names and the corresponding number of cores) is provided to the starccm+ command by the batch queuing system (SLURM) via the automatically generated environment variable $STAR_HOSTLIST, which is based on the information given by the cluster user in the job resource definition. The number of StarCCM+ solver processes is passed to the starccm+ solver command via the SLURM environment variable $SLURM_NTASKS.
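
Within a job script this results in the following invocation pattern, which is used in all examples below (license and MPI related options are omitted here for brevity):

# SLURM generates the host list and the task count; both are simply handed over to the solver:
starccm+ <license and MPI options> -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE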

Furthermore, for longer simulation runs we recommend that LRZ cluster users write regular backup files, which can be used as the basis for a job restart in case of a machine or job failure. A good practice for a 48-hour StarCCM+ simulation (maximum time limit) would be to write backup files every 6 or 12 hours. Further information can be found in the StarCCM+ documentation.
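
A minimal sketch of a restart-friendly script fragment is shown below. It assumes, purely as an example, that the backup files written by your simulation are placed next to the original .sim file and follow the naming pattern <basename>@<iteration>.sim; the most recent backup is then used as input when the job is resubmitted:

SIM_FILE=Linux_StarCCM_Test.sim
# If backup files (e.g. Linux_StarCCM_Test@001000.sim) exist, continue from the newest one
LATEST_BACKUP=$(ls -t ${SIM_FILE%.sim}@*.sim 2>/dev/null | head -n 1)
if [ -n "$LATEST_BACKUP" ]; then
    SIM_FILE=$LATEST_BACKUP
fi
echo "Starting StarCCM+ from: $SIM_FILE"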

CoolMUC-4 Serial Queue: Simcenter StarCCM+ Job Submission on LRZ Linux Clusters running SLES15 using SLURM

StarCCM+ can be provided on CoolMUC-4 (CM4) in the serial queue (serial_std) with support for both the default Intel MPI and OpenMPI with Infiniband interfaces.

Similar SLURM batch script syntax can be applied whether Power-on-Demand licensing is used or a license server providing floating licenses for StarCCM+ is accessed.

The name "serial queue" is to a certain extend misleading here. The serial queue of LRZ Linux Clusters (CM4) differ from other CoolMUC-4 queues in that regard, that the access is granted to just one single cluster node and that the access to this cluster node is non-exclusive, i.e. might be shared with other cluster users depending on the resource requirements of the job as they are specified in the SLURM script. Nevertheless the launched application can make use of more than just a single CPU core on that cluster node, i.e. apply a certain degree of parallelization - either in shared memory mode or in distributed memory mode (MPI).

Important: As a general rule of thumb, approx. 10,000-20,000 mesh cells (or even more) per CPU core should be provided by the simulation task in order to run efficiently on the CM4 hardware. The requested memory should be specified realistically and in proportion to the number of used CPU cores, relative to the total number of CPU cores and the total amount of memory per compute node.
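
As an illustration of this rule of thumb (the mesh size and the node characteristics below are example values, not queue specifications), the core count and memory request could be estimated as follows:

CELLS=560000                  # example mesh size of the simulation
CELLS_PER_CORE=20000          # upper end of the recommended 10,000-20,000 cells per core
CORES=$(( CELLS / CELLS_PER_CORE ))              # -> 28 cores
# Memory in proportion to the requested share of an assumed 112-core / 512 GB node:
NODE_CORES=112
NODE_MEM_GB=512
MEM_GB=$(( NODE_MEM_GB * CORES / NODE_CORES ))   # -> 128 GB
echo "--ntasks-per-node=$CORES  --mem=${MEM_GB}G"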

Please use this large and powerful compute resource with a carefully specified number of CPU cores and a reasonably quantified amount of requested node memory per compute node of CM4. Don't waste powerful CM4 compute resources and please be fair to other CM4 cluster users.

Using a Power-on-Demand License

In the following, an example of a SLURM job submission batch script for StarCCM+ on CoolMUC-4 (SLURM cluster = serial, partition = serial_std) is provided. The example is formulated for the use of POD (Power-on-Demand) licensing. POD keys can be obtained either through the TUM campus license or directly from Siemens PLM.

#!/bin/bash
#SBATCH -o myjob_starccm.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_cm4_serial
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# --- Less than or equal to the maximum number of CPU cores of a single CM4 cluster node ---
#SBATCH --mem=200G
# --- Realistic assumption for memory requirement of the task and proportional to the used number of CPU cores ---
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --time=00:10:00
#----------------------------------------------------
module load slurm_setup

module av starccm
module load starccm/2024.3.1        # <-- mixed precision version 
# module load starccm_dp/2024.3.1   # <-- double precision version
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="

# cat /proc/cpuinfo
# Provide your StarCCM+ simulation file name and Java macro name here:
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java
#
echo "===========================  Start of StarCCM+  ===================================="
echo "starccm+ -power -podkey <Name_of_your_POD_license_key> -licpath 1999@flex.cd-adapco.com -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap ssh\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
starccm+ -power -podkey <Name_of_your_POD_license_key> -licpath 1999@flex.cd-adapco.com -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap ssh" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
echo "===========================  End of StarCCM+  ======================================"

Assuming that the above SLURM script has been saved under the filename "starccm_POD_cm4_serial.sh", the SLURM batch job is submitted by issuing the following commands on the Linux Cluster login node (lxlogin5):

chmod 755 starccm_POD_cm4_serial.sh
sbatch starccm_POD_cm4_serial.sh
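
Once the job has been submitted, its state can be checked on the login node, e.g. by the following command (the --clusters value must match the cluster specified in the job script):

squeue --clusters=serial --user=$USER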

Using an LRZ Floating License

In the following, an example of a SLURM job submission batch script for StarCCM+ on CoolMUC-4 (SLURM cluster = serial, partition = serial_std) is provided. The example is formulated for the use of a StarCCM+ floating license provided by the LRZ-internal license server license1.lrz.de (User-ID authentication for license check-out). Consequently, before using StarCCM+ floating licenses, the User-ID of the user has to be registered on the LRZ license server (please send an LRZ Service Request).

#!/bin/bash
#SBATCH -o myjob_starccm.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_cm4_serial
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# --- Less than or equal to the maximum number of CPU cores of a single CM4 cluster node ---
#SBATCH --mem=250G
# --- Realistic assumption for memory requirement of the task and proportional to the used number of CPU cores ---
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --time=00:10:00
#----------------------------------------------------
module load slurm_setup

module av starccm
module load starccm/2024.3.1        # <-- mixed precision version
# module load starccm_dp/2024.3.1   # <-- double precision version
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="

# cat /proc/cpuinfo
# Provide your StarCCM+ simulation file name and Java macro name here:
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java
#
echo "===========================  Start of StarCCM+  ===================================="
# Syntax of the start of StarCCM+ by using Intel MPI 2018/2019:
echo "starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap ssh\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap ssh" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
#------------------------------------------------------------------------------------------
# Alternative syntax for the start of StarCCM+ by using OpenMPI
# (uncomment the following lines and comment out the Intel MPI command above):
#   Version 2023.3.1: use commandline options "-mpi openmpi -fabric ofi"
# echo "starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi openmpi -fabric ofi -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
# starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi openmpi -fabric ofi -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
echo "===========================  End of StarCCM+  ======================================"

Assuming that the above SLURM script has been saved under the filename "starccm_floatlic_cm4_serial.sh", the SLURM batch job is submitted by issuing the following commands on the Linux Cluster login node (lxlogin5):

chmod 755 starccm_floatlic_cm4_serial.sh
sbatch starccm_floatlic_cm4_serial.sh

CoolMUC-4 : Simcenter StarCCM+ Job Submission on LRZ Linux Clusters running SLES15 using SLURM

StarCCM+ can be provided on CoolMUC-4 (CM4) with support for both the default Intel MPI and OpenMPI on CM4 queues (cm4_inter_large_mem, cm4_tiny, cm4_std) with Infiniband interfaces.

Similar SLURM batch script syntax can be applied whether Power-on-Demand licensing is used or a license server providing floating licenses for StarCCM+ is accessed.

Please note that CM4 compute nodes have access to the $HOME and $SCRATCH_DSS filesystems.

Important: You should use the cm4_tiny / cm4_std queues ONLY (!) with the full number of CPU cores per node, which requires large StarCCM+ simulations with substantially more than 1.5 million mesh cells in order to run efficiently and not to waste scarce compute resources in the CM4 queues. If your simulation has fewer mesh cells, please run the task in the CM4 serial queue instead (see the example above). As a general rule of thumb, approx. 10,000-20,000 mesh cells per CPU core (or more) should be provided by the simulation task in order to run efficiently on the CM4 hardware. Cluster nodes in the cm4_tiny / cm4_std queues are assigned exclusively to the user, and therefore all of the available CPU cores should be utilized.
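
The 1.5 million cell threshold is consistent with this rule of thumb; a rough check for one fully used 112-core CM4 node:

# Minimum recommended mesh size for a fully used CM4 node (112 cores):
echo $(( 112 * 10000 ))   # 1120000 cells at 10,000 cells per core
echo $(( 112 * 20000 ))   # 2240000 cells at 20,000 cells per core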

In the following, an example of a SLURM job submission batch script for StarCCM+ on CoolMUC-4 (SLURM cluster = cm4, partition = cm4_tiny | cm4_std) is provided.

Please use this large and powerful compute resource only for tasks which can really make efficient use of the 112 cores and 512 GB of node memory per CM4 compute node. Don't use this resource for rather small tasks, thereby wasting these powerful resources.

Using a Power-on-Demand License

In the following, an example of a SLURM job submission batch script for StarCCM+ on CoolMUC-4 (SLURM cluster = cm4, partition = cm4_tiny | cm4_std) is provided. The example is formulated for the use of POD (Power-on-Demand) licensing. POD keys can be obtained either through the TUM campus license or directly from Siemens PLM.

#!/bin/bash
#SBATCH -o myjob_starccm.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_cm4_tiny
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny
# ---- partitions : cm4_tiny | cm4_std
#SBATCH --qos=cm4_tiny
# ---- qos : cm4_tiny | cm4_std
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=112
# --- Maximum number of CPU cores is 112 for cm4 - Use CM4 resources carefully and efficiently ! ---
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --time=00:10:00
#----------------------------------------------------
module load slurm_setup

module av starccm
module load starccm/2024.3.1        # <-- mixed precision version 
# module load starccm_dp/2024.3.1   # <-- double precision version
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="

# cat /proc/cpuinfo
# Provide your StarCCM+ simulation file name and Java macro name here:
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java
#
echo "===========================  Start of StarCCM+  ===================================="
echo "starccm+ -power -podkey <Name_of_your_POD_license_key> -licpath 1999@flex.cd-adapco.com -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap ssh\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
starccm+ -power -podkey <Name_of_your_POD_license_key> -licpath 1999@flex.cd-adapco.com -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap ssh" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
echo "===========================  End of StarCCM+  ======================================"

Assuming that the above SLURM script has been saved under the filename "starccm_POD_cm4_tiny.sh", the SLURM batch job is submitted by issuing the following commands on the Linux Cluster login node (lxlogin5):

chmod 755 starccm_POD_cm4_tiny.sh
sbatch starccm_POD_cm4_tiny.sh

Using an LRZ Floating License

In the following, an example of a SLURM job submission batch script for StarCCM+ on CoolMUC-4 (SLURM cluster = cm4, partition = cm4_tiny | cm4_std) is provided. The example is formulated for the use of a StarCCM+ floating license provided by the LRZ-internal license server license1.lrz.de (User-ID authentication for license check-out). Consequently, before using StarCCM+ floating licenses, the User-ID of the user has to be registered on the LRZ license server (please send an LRZ Service Request).

#!/bin/bash
#SBATCH -o myjob_starccm.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_cm4_tiny
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_tiny
# ---- partitions : cm4_tiny | cm4_std
#SBATCH --qos=cm4_tiny
# ---- qos : cm4_tiny | cm4_std
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=112
# --- Maximum number of CPU cores is 112 for cm4 - Use CM4 resources carefully and efficiently ! ---
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --time=00:10:00
#----------------------------------------------------
module load slurm_setup

module av starccm
module load starccm/2024.3.1        # <-- mixed precision version
# module load starccm_dp/2024.3.1   # <-- double precision version
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="

# cat /proc/cpuinfo
# Provide your StarCCM+ simulation file name and Java macro name here:
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java
#
echo "===========================  Start of StarCCM+  ===================================="
# Syntax of the start of StarCCM+ by using Intel MPI 2018/2019:
echo "starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap ssh\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap ssh" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
#------------------------------------------------------------------------------------------
# Alternative syntax for the start of StarCCM+ by using OpenMPI
# (uncomment the following lines and comment out the Intel MPI command above):
#   Version 2023.3.1: use commandline options "-mpi openmpi -fabric ofi"
# echo "starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi openmpi -fabric ofi -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
# starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi openmpi -fabric ofi -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
echo "===========================  End of StarCCM+  ======================================"

Assuming that the above SLURM script has been saved under the filename "starccm_floatlic_cm4_tiny.sh", the SLURM batch job is submitted by issuing the following commands on the Linux Cluster login node (lxlogin5):

chmod 755 starccm_floatlic_cm4_tiny.sh
sbatch starccm_floatlic_cm4_tiny.sh

SuperMUC-NG : Simcenter StarCCM+ Job Submission on SNG running SLES15 using SLURM

In the following, an example of a SLURM job submission batch script for StarCCM+ on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) is provided.
Please note that POD licensing is not supported on SuperMUC-NG, since it is not possible to reach any external license server from the SuperMUC-NG compute nodes due to the additional hardening of this supercomputer.
LRZ can provide a rather small number of StarCCM+ floating licenses on request. Alternatively, users interested in running StarCCM+ on SuperMUC-NG need to provide their own StarCCM+ licenses on a hosted license server in the LRZ network. For that, the migration of existing license pools to this LRZ license server needs to be arranged with Siemens PLM as the software vendor of StarCCM+. If this is the licensing solution for StarCCM+ on SuperMUC-NG you would like to go for, please contact our LRZ Service Desk accordingly.

#!/bin/bash
#SBATCH -o ./myjob_%x.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_sng
#SBATCH --partition=test
# ---- test | micro | general | large | fat
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
# ---- multiples of 48 for SuperMUC-NG ----
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:15:00
#SBATCH --account=<...your project ID ...>
#SBATCH --switches=1@24:00:00
#########################################################
#-----------------------------------------
module load slurm_setup 

module av starccm
# module load starccm/2024.3.1
module load starccm_dp/2024.3.1
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="
 
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java

echo "=========================  Start von StarCCM+  ===================================="
echo "starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap ssh\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
date
starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap ssh" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
date
echo "=========================  Ende von StarCCM+  ===================================="

Assuming that the above SLURM script has been saved under the filename "starccm_sng_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the SuperMUC-NG login nodes:

sbatch starccm_sng_slurm.sh