Research and Development projects

The increasing capabilities of High-Performance Computing (HPC) systems are opening new frontiers for the numerical sciences. Astrophysics in particular has a long tradition of numerical experiments and simulations, which are often the only way to work out theoretical predictions and compare them with observations. Typical astrophysical problems require resolving phenomena spanning several orders of magnitude in both length and time scale, and thus benefit greatly from increased computing power. Similar techniques, mandatory in all modern HPC applications, must also be employed to gain insight into the growing amounts of data produced by such simulations.

The Astro-Lab projects presented on this page aim to provide numerical astrophysicists with an integrated HPC environment from start to finish, i.e. including post-processing, visualization, comparison with observations and forward modelling.
Convergence in environment, hardware and techniques is key in such a challenging and competitive field; the most recent developments in tools offered by vendors such as Intel (e.g. IDP, the Intel Distribution for Python, the OSPRay ray-tracing engine, and oneAPI in general) channel great collective effort towards such convergence.


Parallel Scientific Visualization

The need to parallelize large visualization tasks has led to increased use of inherently parallel rendering techniques, such as projections and ray tracing.

Visualizing dark matter around merging black holes

Astro-Lab members: Salvatore Cielo
Project partners: Katy Clough (University of Oxford), Jamie Bamber (University of Oxford), Josu Aurrekoetxea (King's College London)

We used VisIt + OSPRay isosurface rendering on SuperMUC-NG to visualize the results of GR-CHOMBO simulations, run within the PRACE project "Fundamental Physics in the Gravitational Waves Era".

The breakthrough detections of gravitational waves since 2015, as predicted by the General Theory of Relativity in 1916, have opened new observational channels to phenomena of extreme, dynamical gravity that leave no signature other than distortions of space-time, as in the case of merging black holes. These data in turn encode additional information about the Universe, such as signatures of the constituents of Dark Matter, which otherwise remain a mystery. Simulations of these systems (merging black holes plus Dark Matter field candidates) give rise to fast-evolving, highly convoluted space-times with quadrupole and higher-multipole structures. Displaying these in a comprehensible yet fully 3D form is extremely helpful for a clear interpretation of the results.

In this partnership we used OSPRay isosurface renderings within VisIt, a natural upgrade of LRZ's scientific visualization setup, to provide high-resolution, polished and intuitive visualizations of numerical simulations of such physical cases. Isosurfaces have been optimized for multi-node rendering only since VisIt version 3.1.4; here they are used to visualize the challenging adaptive-mesh outputs of GR-CHOMBO, whose varying spatial resolution requires, among other things, careful interpolation. Working on the native data, without any pre-processing, the visualization workflow scales nicely up to a few tens of nodes, granting interactive frame rates.
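
For reference, a minimal sketch of such a VisIt CLI (Python) session is given below, of the kind one would run with "visit -cli -nowin -s script.py" on the compute nodes. File names, variable names and isovalues are placeholders, and the OSPRay-related rendering attributes are assumptions based on VisIt 3.1+ whose exact names may differ between versions.

    # Minimal sketch of an isosurface rendering session in the VisIt CLI.
    # Placeholders: database path, variable name, isovalues, image size.
    OpenDatabase("GRChombo_plt000100.hdf5")   # placeholder GR-CHOMBO/AMR output

    AddPlot("Contour", "rho")                 # placeholder variable, e.g. DM density
    c = ContourAttributes()
    c.contourMethod = c.Value                 # explicit isovalues rather than N levels
    c.contourValue = (1e-4, 1e-3, 1e-2)       # placeholder isovalues
    SetPlotOptions(c)

    r = RenderingAttributes()
    r.osprayRendering = 1                     # assumed attribute name: enable OSPRay
    r.ospraySPP = 4                           # assumed attribute name: samples per pixel
    SetRenderingAttributes(r)

    DrawPlots()                               # rendered in parallel by the engine

    s = SaveWindowAttributes()
    s.format = s.PNG
    s.width, s.height = 3840, 2160
    SetSaveWindowAttributes(s)
    SaveWindow()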

The images below show the dark matter density, the field value, and an example flux for the case of an asymmetric black hole merger system (2:1 mass ratio).

Images: Dark Matter density, Scalar field, Azimuthal flux

Distributed volume rendering with Intel® OSPRay™ Studio (2020+) 

Astro-Lab members: Salvatore Cielo
Project partners:  Elisabeth Mayer (LRZ), Johannes Günther (Intel Germany)

Besides its integration with VisIt, OSPRay can be used as a standalone ray-tracing engine. Earlier work by project partners on the Frontera supercomputer showed in particular how OSPRay Studio can be successfully applied to scientific visualization. Studio offers several appealing features as a visualization engine for Exascale and pre-Exascale supercomputers:

  • distributed rendering capabilities through its hybrid MPI+TBB parallel scheme (ongoing development);
  • a lightweight but complete GUI, or alternatively a CLI for batch job submission;
  • unprecedented throughput in FPS, for the first time allowing interactive performance even for the largest datasets;
  • panoramic/stereographic camera (3D-360) for real-time VR applications (ongoing development);
  • as part of the Intel oneAPI Rendering Toolkit, it makes the best use of current and upcoming Intel hardware (ongoing development);
  • general-purpose CGI capabilities, not limited to the SciViz domain.

The first goal of the project is to install OSPRay Studio and its related plugins on SuperMUC-NG, thus also creating a testing environment for features under development (including I/O plugins for selected formats). We will then define guidelines for an optimal volume rendering workflow, eliminating all data pre-processing and reduction steps that require human intervention; a sketch of such a batch workflow is given below.
Finally, we will produce highlights of selected large-scale LRZ computational projects. The first will be data from the CompBioMed 2 project, for which we provide a sample image and animation.
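
As an illustration only, the following Python snippet sketches how such a batch render might be scripted inside a SLURM job. The ospStudio batch mode exists, but the scene file and the MPI-related options shown here are assumptions and should be checked against the documentation of the installed OSPRay / OSPRay Studio version.

    # Hypothetical sketch of a distributed batch render with OSPRay Studio.
    # All command-line options below are assumptions, not a tested recipe.
    import subprocess

    scene = "volume_scene.sg"            # placeholder scene-graph file
    cmd = [
        "mpiexec", "-n", "8",            # e.g. one rank per node
        "ospStudio", "batch",
        "--osp:load-modules=mpi",        # assumed: load OSPRay's MPI module
        "--osp:device=mpiOffload",       # assumed: distributed offload rendering
        scene,
    ]
    subprocess.run(cmd, check=True)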

Animation: ForeArmHD.mp4

Visualizing the world's largest turbulence simulation (2019-2020)

Astro-Lab members: Salvatore Cielo, Luigi Iapichino
Project partners: 
Johannes Günther (Intel Germany), Elisabeth Mayer (LRZ), Christoph Federrath (Australian National University), Markus Wiedemann (LRZ)

We present an approach for visualization in HPC environments based on the ray tracing engine Intel® OSPRay™ coupled with VisIt. Part of the LRZ software stack (and included in the public release of VisIt since version 3), this method has been applied to the visualization of the largest simulations of interstellar turbulence ever performed, produced on SuperMUC-NG. The hybrid (MPI + Threading Building Blocks) parallelization of OSPRay and VisIt allows efficient scaling up to about 150 thousand cores, making it possible to visualize the data at the full, unprecedented resolution of 10,048³ grid elements (about 23 TB per snapshot). The main advantage of this approach, which can now be applied to any volume data produced in LRZ projects, is that it harnesses for visualization the power of the same supercomputer the simulation was generated on, avoiding the need for dedicated systems and for data conditioning such as transfers, storage of duplicates, format conversion and reduction. This in turn was made possible by the convergence in hardware utilization brought by Intel oneAPI in view of the Exascale computing era. A sketch of how such a parallel rendering is driven from the VisIt CLI is shown below.
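
As a rough sketch, and assuming a suitable host profile and SLURM allocation, driving such a rendering from the VisIt CLI amounts to launching the parallel compute engine before opening the data, so that I/O and rendering both run on the supercomputer itself. Node and task counts, launcher arguments, file and variable names below are placeholders.

    # Sketch of an at-scale rendering session in the VisIt CLI (placeholder values).
    OpenComputeEngine("localhost", ("-l", "srun", "-nn", "32", "-np", "1536"))

    OpenDatabase("turbulence_snapshot.bov")   # placeholder: one full-resolution snapshot
    AddPlot("Volume", "dens")                 # placeholder variable name
    DrawPlots()                               # rendered in parallel by the engine
    SaveWindow()                              # writes the image with the current settings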

References and impact:

  • Our explorative visualization featuring the sonic scale of interstellar turbulence was selected among the six finalists for the Scientific Visualization & Data Analytics Showcase at SC19.
  • A companion paper and a link to the video are available. A standalone version of the paper is still under review. Further reading can be found on HPCwire (here and here), GCS and ScienceNode.
  • Our instance of VisIt has since been made available to all users of LRZ supercomputers, including in remote GUI form.

Simulation Analysis and Post-processing

As modern scientific simulations grow ever larger and more complex, their analysis and post-processing also become increasingly demanding, calling for the use of HPC resources and methods.

Lightcone query on the Magneticum Web Portal with LCMaker (2020-21)

Astro-Lab members: Milan Krisko
Project partners: Klaus Dolag (Ludwig-Maximilians-Universität München), Joseph O'Leary (Ludwig-Maximilians-Universität München)

Users of the Magneticum web portal can access and share the output of large, cosmological, hydrodynamical simulations with a broad scientific community. The portal also allows users to query and download the results of processing the raw simulation data directly on a remote computing cluster. Introducing light-cones will make it possible to produce realistic virtual observations of the lookback universe by properly stacking simulated volumes.

LCMaker (O'Leary+ 2021) is a project for creating light-cones from the Magneticum cosmological hydro simulations, for a freely chosen geometry, following the method of Kitzbichler and White (2007).
The service is currently a prototype for the Box2/hr and Box2b/hr simulations; more will be made available for virtual observation with time. It first displays a plot preview for the chosen observational parameters, then generates the light-cone and fills it with the simulations' galaxy content.

Lightcone Geometry from Magneticum
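
The core of the box-stacking geometry can be illustrated with a short Python sketch: given a target redshift depth, the comoving distance fixes how many box replications are needed along the line of sight. The cosmological parameters and box size below are placeholders, not the actual Magneticum Box2/hr values.

    # Sketch of the box-stacking idea behind light-cone construction
    # (in the spirit of Kitzbichler & White 2007). Placeholder cosmology and box size.
    import numpy as np
    from astropy.cosmology import FlatLambdaCDM

    cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # placeholder cosmology
    box_size = 500.0                          # comoving Mpc, placeholder
    z_max = 1.0                               # depth of the desired light-cone

    # The comoving distance out to z_max sets the number of box replications.
    d_max = cosmo.comoving_distance(z_max).value            # Mpc
    n_rep = int(np.ceil(d_max / box_size))

    # Redshift at which the line of sight crosses each box boundary,
    # obtained by inverting the distance-redshift relation on a fine grid.
    z_grid = np.linspace(0.0, z_max, 2048)
    d_grid = cosmo.comoving_distance(z_grid).value
    z_bounds = np.interp(np.arange(1, n_rep + 1) * box_size, d_grid, z_grid)

    print(f"{n_rep} box replications needed out to z = {z_max}")
    print("boundary redshifts:", np.round(z_bounds, 3))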

References:

Speeding up simulation analysis with yt and the Intel Distribution for Python (2019)

Astro-Lab members: Salvatore Cielo, Luigi Iapichino
Project partners:
Fabio Baruffa (Intel Germany), Christoph Federrath (Australian National University), Matteo Bugli (CEA Saclay) 

yt is a parallel, open-source post-processing Python package for numerical simulations in astrophysics, made popular by its cross-format compatibility, its active community of developers and its integration with several other professional Python tools. We showed how the Intel® Distribution for Python enhances yt's performance and parallel scalability through the optimization of the lower-level libraries NumPy and SciPy, which make use of the optimized Intel® Math Kernel Library (Intel MKL) and of the Intel MPI library for distributed computing.

Compared to the default Anaconda Python distribution (which showed no parallel scaling at all), common astrophysical operations can be sped up by 4x to 5x on Intel CPUs. These operations include computing and summing derived data fields, producing 2D phase plots, analyzing structure in cosmological simulations and creating telescope-specific synthetic X-ray observations. A sketch of such a parallel analysis is shown below.

yt speedup on SKX with Intel Python in common tasks
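
The snippet below gives a minimal example of the kind of yt analysis that was benchmarked: a derived quantity and a 2D phase plot evaluated in parallel under MPI (e.g. via "mpirun -n 16 python script.py"). The dataset path is a placeholder.

    # Minimal parallel yt analysis sketch (placeholder dataset path).
    import yt

    yt.enable_parallelism()          # uses mpi4py; ranks share the workload

    ds = yt.load("snapshot_0150")    # placeholder; yt auto-detects the format
    ad = ds.all_data()

    # Sum a derived field over the whole domain (distributed over ranks).
    total_mass = ad.quantities.total_quantity(("gas", "cell_mass"))

    # 2D phase plot of density vs. temperature, weighted by cell mass.
    php = yt.PhasePlot(ad, ("gas", "density"), ("gas", "temperature"),
                       [("gas", "cell_mass")])

    if yt.is_root():                 # only rank 0 writes output
        print("Total gas mass:", total_mass.to("Msun"))
        php.save("phase_density_temperature.png")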

References and impact: