  1. Log in to the LRZ cluster (login node)
  2. Create a job script, e.g. for SuperMUC-NG, with the following content:

    Code Block
    languagebash
    themeEclipse
    titlepvtest.sh
    collapsetrue
    #!/bin/sh
    #SBATCH -o ./pvtest.%j.%N.out 
    #SBATCH -D .
    #SBATCH -J pvtest
    #SBATCH --get-user-env 
    #SBATCH --partition=test                     # what you need
    #SBATCH --nodes=2                            # what you need
    #SBATCH --ntasks-per-node=24                 # what you need
    #SBATCH --mail-type=none
    #SBATCH --export=NONE 
    #SBATCH --time=00:30:00                      # what you need
    #SBATCH --account=<project ID>               # necessary on SuperMUC-NG only!
    #SBATCH --switches=1                         # Requesting all nodes from a single island
    ##SBATCH --ear=off                           # sometimes necessary on SuperMUC-NG
    module load slurm_setup
    module load paraview-prebuild/5.8.0_mesa     # or check for other available modules, but use a MESA variant!
    srun pvserver --disable-further-connections  # mpiexec is an alternative, but neither srun nor mpiexec is guaranteed to work in all cases (see below **)

    It is necessary to use the MESA variant for off-screen software rendering.*)
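
    To see which ParaView modules are available on the cluster (the module name in the script above is only an example and may differ):

    Code Block
    languagebash
    themeEclipse
    $ module avail paraview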

    Security: Consider options like --server-port=... and --disable-further-connections to enhance the security of your ParaView sessions! Be aware that pvserver currently does not support any password protection.
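
    A hardened srun line could, for instance, look like this (port 22222 is an arbitrary example; pick a free port):

    Code Block
    languagebash
    themeEclipse
    srun pvserver --server-port=22222 --disable-further-connections

    If you change the server port, use the same port number as the remote port of the SSH tunnel in step 4.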
  3. Submit the script via

    Code Block
    languagebash
    themeEclipse
    $ sbatch pvtest.sh

    and check where it is running (squeue -u $USER; see the SLURM documentation). Once it is running, check the output file for

    Code Block
    languagebash
    themeEclipse
    Connection URL: cs://i01r02c01s05.sng.lrz.de:11111
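
    To find this line directly, you can search the job output file (assuming the file name pattern from the job script above):

    Code Block
    languagebash
    themeEclipse
    $ grep "Connection URL" pvtest.*.out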


  4. Create an SSH forward tunnel. Open a shell on your local computer and run:

    Code Block
    ssh -L 11111:i01r02c01s05opa:11111 <userID>@<server>


    The opa suffix is important on SuperMUC-NG. <server> is most probably skx.supermuc.lrz.de on SuperMUC-NG (this also depends on your local SSH configuration).
    A different port is possible and may be necessary if 11111 (the left-hand, i.e. local, port above) is already in use. (Consult your SSH client's documentation!) The port on the compute node should always be available as 11111, since nodes are used exclusively. It can nonetheless be changed by starting pvserver with the option --server-port=<other port number>.
    On CoolMUC-2, the node name needs to be extended by ib instead, for the InfiniBand network.
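
    For example, if local port 11111 is already taken, a tunnel via local port 22222 (an arbitrary choice; use the node name from your own output file) would look like this:

    Code Block
    ssh -L 22222:i01r02c01s05opa:11111 <userID>@skx.supermuc.lrz.de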

  5. Open locally (on the laptop/PC in front of you) the ParaView GUI of the same version as the pvserver! Click the Connect-to-Server button and create a (manual) connection to localhost:11111 (the SSH tunnel extends this connection; use the other port number if you had to change it for the SSH tunnel).
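
    Depending on your ParaView version, the connection can also be opened directly from the command line (a sketch; check paraview --help for the exact option names of your version):

    Code Block
    languagebash
    themeEclipse
    $ paraview --server-url=cs://localhost:11111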
  6. After connecting, you should be able to open case files on the LRZ cluster file system. We also recommend opening the memory inspector (see the ParaView GUI documentation!).
  7. After finishing the visualization, close the connection to the server. The pvserver then shuts down automatically, and the SLURM job finishes.
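
    Should the job keep running nonetheless, you can cancel it manually (a sketch; see the SLURM documentation):

    Code Block
    languagebash
    themeEclipse
    $ scancel --name=pvtest    # or: scancel <job ID>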

...