Simcenter StarCCM+ (Fluid Dynamics)

Simcenter StarCCM+ is a general purpose Computational Fluid Dynamics (CFD) code. Simcenter StarCCM+ has been part of the Siemens PLM software portfolio since April 2016 (formerly owned by CD-adapco, also known as Computational Dynamics-Analysis & Design Application Company Ltd). As a general purpose CFD code, Simcenter StarCCM+ provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, and heat and mass transfer including CHT (conjugate heat transfer in solid domains).

Further information about Simcenter StarCCM+, licensing of the Siemens PLM software and related terms of software usage at LRZ, the Siemens PLM mailing list, access to the Siemens PLM software documentation and LRZ user support can be found on the main Siemens PLM documentation page.

Getting Started

Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. the installed versions) of the StarCCM+ software by:

> module avail starccm

Load the preferred StarCCM+ version environment module, e.g.:

> module load starccm/2024.1.1

One can use StarCCM+ in interactive GUI mode solely for the purpose of pre- and/or postprocessing on the Login Nodes (Linux: SSH option "-Y" for X11 forwarding; Windows: using PuTTY and Xming for X11 forwarding). This interactive usage is mainly intended for quick simulation setup changes which require GUI access. Since StarCCM+ loads the mesh into the login node's memory, this approach is only applicable to comparably small cases. It is NOT permitted to run computationally intensive StarCCM+ simulation runs or postprocessing sessions with large memory consumption on Login Nodes. Alternatively, StarCCM+ can be run on the Remote Visualization Systems, e.g. for memory-intensive postprocessing and OpenGL acceleration of the GUI.
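
For example, an X11-forwarded login session can be opened from a Linux workstation as follows (the login node lxlogin1.lrz.de and the user ID xy12abc are used here purely for illustration):

> ssh -Y xy12abc@lxlogin1.lrz.de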

The Simcenter StarCCM+ GUI is started by:

> starccm+

The 3D results visualization program StarView+ can be launched by:

> starview+

Siemens PLM StarCCM+ on Linux Cluster and SuperMUC-NG Login Nodes

StarCCM+ is a very resource-intensive application with respect to both main memory (RAM) and CPU resources! Please run StarCCM+ on login nodes with the greatest care and under your supervision (e.g. using the command "top" + <Ctrl>-M in a second terminal window)!

In particular, running multiple StarCCM+ processes or using parallelization can cause a high load on the login node and has the potential to massively disturb other users on the same system! Running StarCCM+ in compute or meshing mode on login nodes can easily lead to overload and make the login node unresponsive, so that a reboot of the machine is required. Be careful!
StarCCM+ applications which cause a high load on login nodes and disturb other users or the general operation of the login node will be terminated by system administrators without any prior notification!

Our recommendations on the login nodes:

  • Running multiple instances of StarCCM+ by the same user is prohibited!
    Please run only one instance of the software on a particular login node at any time!
  • It is only allowed to run a single instance of StarCCM+ on login nodes, solely for the purpose of pre- and/or postprocessing. Absolutely no StarCCM+ simulations are allowed on login nodes.
  • The maximum allowed degree of StarCCM+ parallelization on login nodes is 4 CPU cores!
    Any StarCCM+ instance using a higher degree of parallelization on login nodes will be terminated by system administrators without any prior notification!
  • When using up to 4 cores for StarCCM+ parallelization, it is recommended to switch StarCCM+ to OpenMPI (see the OpenMPI command line options in the example further below). The default Intel MPI might not work on login nodes due to conflicts with SLURM.
  • Please check the load and memory consumption of your own StarCCM+ session. Usually, you can do this via the "top" command, e.g.:
    top -u $USER
    Pressing <Ctrl>-M sorts the displayed process list by the amount of memory consumed per process.
  • If a graphical user interface is needed, then you may run a single instance of StarCCM+ via a VNC session (VNC Server on Login-Nodes) to increase the performance and responsiveness of the GUI!
  • If a graphical user interface is not needed, it is advised to run StarCCM+ via an interactive Slurm job (see the sketch after this list) or in batch mode under SLURM control.
    These jobs run on compute nodes. A high degree of parallelization is explicitly allowed here, as long as StarCCM+ is run efficiently (rule of thumb: at least approx. 10,000 mesh elements per CPU core)!
    Do not over-parallelize StarCCM+ simulations. Ineffectively used HPC resources are wasted and thereby taken away from other users.
  • Repeated violation of the above mentioned restrictions to StarCCM+ usage on login nodes might result in a ban of the affected user account and notification to the scientific supervisor/professor. 
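
As a minimal sketch, an interactive Slurm job can be requested as follows (the cluster/partition names are taken from the cm2_tiny batch example below and may need to be adapted to the interactive queues documented for the respective cluster):

salloc --clusters=cm2_tiny --partition=cm2_tiny --nodes=1 --ntasks-per-node=28 --time=02:00:00

Inside the resulting shell, the StarCCM+ environment module can be loaded and starccm+ started in the same way as shown in the batch script examples further below.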

Mixed vs. Double Precision Solver of StarCCM+

Siemens PLM provides installation packages for mixed precision and for higher-accuracy double precision simulations. The latter comes at the price of approx. 20% higher execution times and approx. twice as large simulation result files. The LRZ module system provides access to both versions of StarCCM+.

Access to the StarCCM+ mixed precision solvers is provided e.g. by:

module load starccm/2024.1.1                  # loading the mixed precision StarCCM+ module

Access to the StarCCM+ double precision solvers is provided e.g. by:

module load starccm_dp/2024.1.1               # loading the double precision StarCCM+ module

Simcenter StarCCM+ Parallel Execution (Batch Mode)

All parallel StarCCM+ simulations on the LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the appropriate scheduling system (SLURM) into the different pre-defined parallel execution queues. Further information about the batch queuing systems and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG).

For job submission to the batch queuing system, a corresponding small shell script needs to be provided, which contains:

  • Batch queueing system specific commands for the job resource definition
  • Module command to load the Simcenter StarCCM+ environment module
  • Start command for the parallel execution of starccm+ with all appropriate command line parameters, including a controlling StarCCM+ Java macro.

The intended syntax and the available command line options for the invocation of the starccm+ solver command can be displayed by:

> starccm+ -help

The configuration of the parallel cluster partition (list of node names and the corresponding number of cores) is provided to the starccm+ command by the batch queuing system (SLURM) via the automatically generated environment variable $STAR_HOSTLIST, which is based on the information given by the cluster user in the job resource definition. The number of StarCCM+ solver processes is passed to the starccm+ solver command via the SLURM environment variable $SLURM_NTASKS.
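
The core solver invocation in all batch script examples below therefore follows the same pattern (license-related options are omitted here; the shell variables $START_MACRO and $SIM_FILE hold the Java macro and simulation file names, as in the complete scripts in the following sections):

starccm+ -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE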

Furthermore, for longer simulation runs we recommend that LRZ cluster users write regular backup files, which can be used as the basis for a job restart in case of machine or job failure. A good practice for a 48-hour StarCCM+ simulation (the maximum time limit) is to write backup files every 6 or 12 hours. Further information can be found in the StarCCM+ documentation.
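
For illustration only (the file name below is hypothetical and depends on the chosen auto-save settings), a restart job then merely points the simulation file variable of the batch scripts below at the latest backup file:

SIM_FILE=Linux_StarCCM_Test@06000.sim     # hypothetical auto-save file written after 6000 iterations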

CoolMUC-2 : Simcenter StarCCM+ Job Submission on LRZ Linux Clusters running SLES15 using SLURM

The current version 2024.1.1 of StarCCM+ is no longer compatible with the outdated CoolMUC-2 operating system. This issue is known and will not be fixed.

StarCCM+ is provided on CoolMUC-2 (CM2) with support for both the default Intel MPI and OpenMPI on the CM2 queues (cm2_tiny, cm2_std) with InfiniBand interconnect.

Similar SLURM batch script syntax can be applied both for Power-on-Demand licensing and for access to a license server which provides floating licenses for StarCCM+.

Using a Power-on-Demand License

In the following, an example of a job submission batch script for StarCCM+ on CoolMUC-2 (SLURM queue = cm2_tiny) in the batch queuing system SLURM is provided. The example is formulated for the use of POD (Power-on-Demand) licensing. POD keys can be obtained either through the TUM campus license or directly from Siemens PLM.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_cm2_tiny
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --get-user-env
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# --- multiples of 28 for cm2 / cm2_tiny clusters ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=00:10:00
#
module load slurm_setup

module av starccm
module load starccm/2023.3.1        # <-- mixed precision version 
# module load starccm_dp/2023.3.1   # <-- double precision version
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="

# cat /proc/cpuinfo
# Provide your StarCCM+ simulation file name and Java macro name here:
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java
#
echo "===========================  Start of StarCCM+  ===================================="
echo "starccm+ -power -podkey <Name_of_your_POD_license_key> -licpath 1999@flex.cd-adapco.com -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap ssh\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
starccm+ -power -podkey <Name_of_your_POD_license_key> -licpath 1999@flex.cd-adapco.com -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap ssh" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
echo "===========================  End of StarCCM+  ======================================"

Assuming that the above SLURM script has been saved under the filename "starccm_POD_cm2_tiny.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch starccm_POD_cm2_tiny.sh
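
The status of the submitted job can subsequently be checked with squeue; the --clusters option has to match the cluster chosen in the batch script, e.g.:

squeue --clusters=cm2_tiny --user=$USER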

Using an LRZ Floating License

In the following, an example of a job submission batch script for StarCCM+ on CoolMUC-2 (SLURM queue = cm2_std) in the batch queuing system SLURM is provided. Correspondingly, smaller jobs using fewer compute nodes can be submitted to the CM2 cluster queue cm2_tiny by adjusting the provided SLURM script accordingly (change the --clusters and --partition statements and omit the --qos statement - see the example for the cm2_tiny queue above). The example is formulated for the use of a StarCCM+ floating license provided by the LRZ internal license server license1.lrz.de (User-ID authentication for license check-out). Consequently, before using StarCCM+ floating licenses, the User-ID of the user has to be registered on the LRZ license server (please send an LRZ Service Request).

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_cm2_std
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std
#SBATCH --qos=cm2_std
#SBATCH --get-user-env
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=28
# --- multiples of 28 for cm2 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=00:10:00
#
module load slurm_setup

module av starccm
module load starccm/2023.3.1        # <-- mixed precision version
# module load starccm_dp/2023.3.1   # <-- double precision version
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="

# cat /proc/cpuinfo
# Provide your StarCCM+ simulation file name and Java macro name here:
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java
#
echo "===========================  Start of StarCCM+  ===================================="
# Syntax for starting StarCCM+ using Intel MPI 2018/2019:
echo "starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap ssh\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap ssh" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
#------------------------------------------------------------------------------------------
# Alternative syntax for starting StarCCM+ using OpenMPI:
#   Version 2023.3.1: use the command line options "-mpi openmpi -fabric ofi"
#   (to use OpenMPI, uncomment the following lines and comment out the Intel MPI start above)
# echo "starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi openmpi -fabric ofi -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
# starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi openmpi -fabric ofi -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
echo "===========================  End of StarCCM+  ======================================"

Assuming that the above SLURM script has been saved under the filename "starccm_floatlic_cm2_std.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes (lxlogin1,2,3,4):

sbatch starccm_floatlic_cm2_std.sh

CoolMUC-3 : Simcenter StarCCM+ Job Submission on LRZ Linux Clusters running SLES12 using SLURM

Current versions of StarCCM+ are no longer compatible with the outdated CoolMUC-3 operating system. This issue is known and will not be fixed.

Using a Power-on-Demand License

In the following, an example of a job submission batch script for StarCCM+ on CoolMUC-3 (SLURM queue = mpp3) in the batch queuing system SLURM is provided. The example is formulated for the use of POD (Power-on-Demand) licensing. POD keys can be obtained either through the TUM campus license or directly from Siemens PLM.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_mpp3_slurm
#SBATCH --clusters=mpp3
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
# --- multiples of 64 for mpp3 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=00:10:00
#
module load slurm_setup

module av starccm
module load starccm/2023.1.1        # <-- mixed precision version
# module load starccm_dp/2023.1.1   # <-- double precision version 
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="

# cat /proc/cpuinfo
# Provide your StarCCM+ simulation file name and Java macro name here:
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java
#
echo "===========================  Start of StarCCM+  ===================================="
echo "starccm+ -power -podkey <Name_of_your_POD_license_key> -licpath 1999@flex.cd-adapco.com -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap slurm\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
starccm+ -power -podkey <Name_of_your_POD_license_key> -licpath 1999@flex.cd-adapco.com -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap slurm" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
echo "===========================  End of StarCCM+  ======================================"

Assuming that the above SLURM script has been saved under the filename "starccm_POD_mpp3_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch starccm_POD_mpp3_slurm.sh

Using an LRZ Floating License

In the following, an example of a job submission batch script for StarCCM+ on CoolMUC-3 (SLURM queue = mpp3) in the batch queuing system SLURM is provided. The example is formulated for the use of a StarCCM+ floating license provided by the LRZ internal license server license1.lrz.de (User-ID authentication for license check-out). Consequently, before using StarCCM+ floating licenses, the User-ID of the user has to be registered on the LRZ license server (please send an LRZ Service Request).

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_mpp3_slurm
#SBATCH --clusters=mpp3
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
# --- multiples of 64 for mpp3 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=00:10:00
#
module load slurm_setup

module av starccm
module load starccm/2023.1.1        # <-- mixed precision version
# module load starccm_dp/2023.1.1   # <-- double precision version
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="

# cat /proc/cpuinfo
# Provide your StarCCM+ simulation file name and Java macro name here:
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java
#
echo "===========================  Start of StarCCM+  ===================================="
echo "starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap slurm\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE"
#
starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap slurm" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $START_MACRO -batch-report $SIM_FILE
echo "===========================  End of StarCCM+  ======================================"

Assuming that the above SLURM script has been saved under the filename "starccm_floatlic_mpp3_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch starccm_floatlic_mpp3_slurm.sh

Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes; this is done in the background by an MPI wrapper of the starccm+ startup script. Also, do not try to replace the MPI versions bundled with StarCCM+ by any other MPI installation. On the LRZ cluster systems, only the Intel MPI (default) and OpenMPI versions shipped with StarCCM+, selected via the -mpi command line option as shown in the examples above, are supported and known to work properly with Simcenter StarCCM+.

SuperMUC-NG : Simcenter StarCCM+ Job Submission on SNG running SLES15 using SLURM

In the following, an example of a job submission batch script for StarCCM+ on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) in the batch queuing system SLURM is provided.
Please note that POD licensing is not supported on SuperMUC-NG, since it is not possible to reach any external license server from the SuperMUC-NG compute nodes due to the additional hardening of this supercomputer.
LRZ can provide a rather small number of StarCCM+ floating licenses on request. Alternatively, users interested in running StarCCM+ on SuperMUC-NG need to provide their own StarCCM+ licenses on a license server hosted in the LRZ network. For that, the migration of existing license pools to this LRZ license server needs to be arranged with Siemens PLM as the software vendor of StarCCM+. If this is the licensing solution for StarCCM+ on SuperMUC-NG you would like to go for, please contact the LRZ Service Desk accordingly.

#!/bin/bash
#SBATCH -o ./myjob_%x.%j.%N.out
#SBATCH -D ./
#SBATCH -J starccm_sng
#SBATCH --partition=test
# ---- test | micro | general | large | fat
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
# ---- multiples of 48 for SuperMUC-NG ----
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:15:00
#SBATCH --account=<...your project ID ...>
#SBATCH --switches=1@24:00:00
#########################################################
#-----------------------------------------
module load slurm_setup 

module av starccm
# module load starccm/2024.1.1
module load starccm_dp/2024.1.1
module list

echo "=================== Verification of the StarCCM+ Hostlist =========================="
echo $STAR_HOSTLIST
cat $STAR_HOSTLIST
echo "=================== Verification of the StarCCM+ Hostlist =========================="
 
SIM_FILE=Linux_StarCCM_Test.sim
START_MACRO=MeshSimSave.java

echo "=========================  Start von StarCCM+  ===================================="
echo starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags \"-bootstrap ssh\" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $SIM_FILE -batch $START_MACRO -batch-report
date
starccm+ -licpath 1999@license1.lrz.de -rsh /usr/bin/ssh -mpi intel -mpiflags "-bootstrap ssh" -printpids -machinefile $STAR_HOSTLIST -np $SLURM_NTASKS -batch $SIM_FILE -batch $START_MACRO -batch-report
date
echo "=========================  Ende von StarCCM+  ===================================="

Assuming that the above SLURM script has been saved under the filename "starccm_sng_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the SuperMUC-NG login nodes:

sbatch starccm_sng_slurm.sh