- Log in to the LRZ cluster (login node).
- Create a job script, e.g. for SuperMUC-NG, with the following content.
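A minimal sketch of such a job script (partition, account, task count, and walltime are placeholders to adapt to your project; the module name follows footnote **) below):

#!/bin/bash
#SBATCH -J pvserver
#SBATCH -o %x.%j.out
#SBATCH -N 1
#SBATCH --ntasks-per-node=48       # SuperMUC-NG nodes have 48 cores
#SBATCH --partition=test           # placeholder: choose a suitable partition
#SBATCH --account=<project>        # placeholder: your project ID
#SBATCH --time=02:00:00

module load paraview-prebuild/5.8.0_mesa   # mesa variant for off-screen rendering
mpiexec -launcher fork -n $SLURM_NTASKS pvserver --disable-further-connections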
It is necessary to use the mesa variant for off-screen software rendering.*)
Security: Consider options like --disable-further-connections to enhance the security of your ParaView sessions! Be aware that pvserver does not support any password protection at the moment.
- Submit the script via sbatch <script name> and check where it is running (squeue -u $USER ..., see the SLURM documentation). Once it is running, check the job output file for the name of the compute node on which pvserver is waiting for connections.
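For example (script and output file names are placeholders):

> sbatch pvserver.sh
> squeue -u $USER
> cat pvserver.<jobid>.out   # look for the compute node name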
- Create an SSH forward tunnel. Open a shell on your local computer and run the following command.
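A sketch of the tunnel command, with <user>, <node>, and <server> as placeholders:

> ssh -L 11111:<node>opa:11111 <user>@<server>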
The opa suffix is important for SuperMUC-NG (it addresses the node via the Omni-Path network). <server> is most probably skx.supermuc.lrz.de on SuperMUC-NG (this also depends on your local SSH configuration).
Another local port is possible, and may be necessary if 11111 (the left, i.e. local, port above) is already in use. (Consult your SSH client's documentation!) The port on the compute node should always be available as 11111, as nodes are used exclusively. But this, too, can be changed by starting pvserver with the option --server-port=<other port number>.
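For example, with local port 22222 (an arbitrary choice), and then connecting ParaView to localhost:22222:

> ssh -L 22222:<node>opa:11111 <user>@<server>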
On CoolMUC-2, the node name needs to be extended by ib instead, for the InfiniBand network.
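For example:

> ssh -L 11111:<node>ib:11111 <user>@<server>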
- Open locally (on the laptop/PC in front of you) the paraview GUI of the same version as the pvserver! Click on the "Connect to Server" button, and create a (manual) connection to localhost:11111 (the SSH tunnel forwards this; use the other port number if you had to change it for the SSH tunnel).
- After connecting, you should be able to open case files on the LRZ cluster file system. We also recommend opening the memory inspector (see the ParaView GUI documentation!).
- After finishing the visualization, close the connection to the server. The server then shuts down automatically, and the SLURM job finishes.
*) MPI issue: The pre-compiled ParaView is built with and linked against MPICH (not Intel MPI!). This still works, but requires some caution on SuperMUC-NG/Linux Cluster, as it does not work out of the box with the Slurm and/or PMI configuration on the LRZ systems (as of Dec 2020). On a single node, mpiexec -launcher fork ... can help. For more than one node, several options are available, though they are not very convenient.
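For example (the task count is only illustrative):

> mpiexec -launcher fork -n 48 pvserver --disable-further-connections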
**) ParaView is delivered with MPICH. With paraview-prebuild/5.8.0_mesa, the mpiexec startup fails with an srun configuration error on SuperMUC-NG. The problem seems to be the MPICH provided by the ParaView package, to which PATH points. A proven workaround is to set an Intel MPI module explicitly (which unfortunately requires unloading devEnv). So an alternative might be:
> module use -p /lrz/sys/spack/.tmp.test.mpich/share/spack/modules/linux-sles12-skylake_avx512 # on SuperMUC-NG
> module use -p /lrz/sys/spack/.tmp.test.mpich/share/spack/modules/linux-sles15-haswell # on CoolMUC-2
> module load mpich-3.3.2-gcc
AFTER the ParaView module has been loaded.
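A possible full sequence on SuperMUC-NG (a sketch; adapt the task count to your job):

> module load paraview-prebuild/5.8.0_mesa
> module use -p /lrz/sys/spack/.tmp.test.mpich/share/spack/modules/linux-sles12-skylake_avx512
> module load mpich-3.3.2-gcc
> mpiexec -n 48 pvserver --disable-further-connections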