  1. Log in to the LRZ cluster (login node)
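    For example, from a terminal on your local machine, a login could look like this (skx.supermuc.lrz.de is the SuperMUC-NG entry point mentioned in step 4 below; your userID and entry node may differ):

    ssh <userID>@skx.supermuc.lrz.de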
  2. Create a job script, e.g. for SuperMUC-NG, with the following content:

    pvtest.sh
    #!/bin/sh
    #SBATCH -o ./pvtest.%j.%N.out 
    #SBATCH -D .
    #SBATCH -J pvtest
    #SBATCH --get-user-env 
    #SBATCH --partition=test                     # what you need
    #SBATCH --nodes=2                            # what you need
    #SBATCH --ntasks-per-node=24                 # what you need
    #SBATCH --mail-type=none
    #SBATCH --export=NONE 
    #SBATCH --time=00:30:00                      # what you need
    #SBATCH --account=<project ID>               # necessary on SuperMUC-NG only!
    ##SBATCH --switches=1                        # Requesting all nodes from a single island
    ##SBATCH --ear=off                           # on SNG necessary sometimes ... :(
    module load slurm_setup
    module load paraview-prebuild/5.8.0_mesa     # or look for available modules!! But use MESA! 
    module rm intel-mpi
    module load intel-mpi                        # in order to make Intel MPI mpiexec visible again
    mpiexec pvserver --disable-further-connections  # if mpiexec is not working, try srun ...  see below **) 

    It is necessary to use the mesa variant for off-screen software rendering.*)

    Security: Consider options like --server-port=... and --disable-further-connections to enhance the security of your ParaView sessions! Be aware that pvserver currently does not support any password protection.
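    As a minimal sketch (the port number 22222 is only an example), the last line of the script could then read:

    mpiexec pvserver --server-port=22222 --disable-further-connections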
  3. Submit the script via

    $ sbatch pvtest.sh

    and check where it is running (squeue -u $USER; see the SLURM documentation). Once it is running, check the output file for

    Connection URL: cs://i01r02c01s05.sng.lrz.de:11111
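    For example (the output file name follows the -o pattern of the script above; adapt it if you changed that option):

    $ squeue -u $USER                       # wait until the job state is R (running)
    $ grep "Connection URL" pvtest.*.out    # prints the line above with the actual node name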
  4. Create an SSH forward tunnel. Open a shell on your local computer and run:

    ssh -L 11111:i01r02c01s05opa:11111 <userID>@<server>


    The opa suffix is important on SuperMUC-NG. <server> is most probably skx.supermuc.lrz.de on SuperMUC-NG (this also depends on your local SSH configuration).
    A different local port is possible and may be necessary if 11111 (the left-hand, local port above) is already in use (consult your SSH client's documentation). The port on the compute node should always be available as 11111, since nodes are used exclusively. It can nevertheless be changed by starting pvserver with the option --server-port=<other port number>.
    On CoolMUC-2, the node name needs to be extended by ib (for the InfiniBand network).
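    For example, if local port 11111 is already taken, a sketch with local port 22222 (an arbitrary choice) would be:

    ssh -L 22222:i01r02c01s05opa:11111 <userID>@skx.supermuc.lrz.de

    On CoolMUC-2 the node name would get the ib suffix instead of opa (the node name itself will of course differ there).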

  5. Locally (on the laptop/PC in front of you), open the ParaView GUI of the same version as the pvserver! Click the Connect to Server button and create a manual connection to localhost:11111 (the SSH tunnel forwards this; use the other port number if you had to change it for the SSH tunnel).
  6. After connecting, you should be able to open case files on the LRZ cluster file system. We also recommend opening the memory monitor (see the ParaView GUI documentation!).
  7. After finishing the visualization, close the connection to the server. The server then shuts down automatically, and the SLURM job ends.


*) MPI issue: The pre-compiled ParaView is built with and linked against MPICH (not Intel MPI!). It still works, but requires some caution on SuperMUC-NG/Linux Cluster, as it does not work out of the box with the Slurm and/or PMI configuration on the LRZ systems (as of Dec 2020). On a single node, mpiexec -launcher fork ... can help. For more than one node, several options are available, though none of them is very convenient.
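A minimal single-node sketch (run inside a one-node Slurm job; the task count via $SLURM_NTASKS is just one possibility):

    mpiexec -launcher fork -n $SLURM_NTASKS pvserver --disable-further-connections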

**) ParaView is delivered with MPICH. With paraview-prebuild/5.8.0_mesa, the mpiexec startup fails with an srun configuration error on SuperMUC-NG. The problem seems to be the MPICH provided by the ParaView package, which PATH points to.
A proven workaround is to set an Intel MPI module explicitly (which unfortunately requires unloading devEnv). An alternative might be to load an MPICH module AFTER the ParaView module was loaded:

    module use -p /lrz/sys/spack/.tmp.test.mpich/share/spack/modules/linux-sles12-skylake_avx512   # on SuperMUC-NG
    module use -p /lrz/sys/spack/.tmp.test.mpich/share/spack/modules/linux-sles15-haswell          # on CoolMUC-2
    module load mpich-3.3.2-gcc
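Putting the ordering together, the relevant module lines of a SuperMUC-NG job script would then look roughly like this (a sketch based on the paths and versions above):

    module load slurm_setup
    module load paraview-prebuild/5.8.0_mesa
    module use -p /lrz/sys/spack/.tmp.test.mpich/share/spack/modules/linux-sles12-skylake_avx512
    module load mpich-3.3.2-gcc
    mpiexec pvserver --disable-further-connections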
