The AstroLab engages in code modernization projects, fulfilling the specifications for high-level support within the Gauss Centre for Supercomputing.
Ongoing Projects
Modernization of The Black Hole Accretion Code (BHAC) (2020)
AstroLab contact: Luigi Iapichino, Salvatore Cielo
Application partner: Oliver Porth (Uni Amsterdam), Hector Olivares (Radboud Univ.)
Project partner: Fabio Baruffa (Intel), Anupam Karmakar (Lenovo)
BHAC (the Black Hole Accretion Code) is a multidimensional general relativistic magnetohydrodynamics code based on the MPI-AMRVAC framework. BHAC solves the equations of ideal general relativistic magnetohydrodynamics in one, two or three dimensions on arbitrary stationary space-times, using an efficient block-based approach.
BHAC has proven efficient scaling up to ~200 compute nodes in pure MPI. The hybrid MPI-OpenMP parallelization scheme introduced by the developers successfully extends scaling up to ~1600 nodes, at the cost of a drop in efficiency to 40% or 65% (with 4 or 8 threads per node, respectively). The main goal of the code modernization is thus to check, profile and optimize the OpenMP implementation and the node-level performance, including vectorization and memory access.
Early tests reveal a rather high degree of vectorization, though with some room for improvement, as the code contains a mix of compute-bound and memory-bound kernels (the latter being the slight majority). The node-level performance as a function of the MPI/OpenMP ratio confirms a degradation with increasing OpenMP thread counts, which needs further investigation.
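To illustrate the node-level pattern under investigation, the following is a minimal C++ sketch, not taken from BHAC itself: OpenMP threads share a list of fixed-size blocks, and the per-block cell loop is kept unit-stride so the compiler can vectorize it. The block count, cell count and stencil update are placeholders.

```cpp
#include <omp.h>
#include <vector>
#include <cstdio>

int main() {
    const int nblocks = 64;      // blocks owned by this process (hypothetical)
    const int ncells  = 4096;    // cells per block (hypothetical)
    std::vector<std::vector<double>> u(nblocks, std::vector<double>(ncells, 1.0));
    std::vector<std::vector<double>> du(nblocks, std::vector<double>(ncells, 0.0));

    // Threads split the block list; each block update is a memory-bound,
    // vectorizable stencil standing in for a fluid update kernel.
    #pragma omp parallel for schedule(static)
    for (int b = 0; b < nblocks; ++b) {
        const double* ub = u[b].data();
        double*       db = du[b].data();
        #pragma omp simd
        for (int i = 1; i < ncells - 1; ++i)
            db[i] = 0.5 * (ub[i + 1] - ub[i - 1]);
    }

    std::printf("updated %d blocks with %d threads\n", nblocks, omp_get_max_threads());
    return 0;
}
```

Profiling such a kernel under different MPI-rank/thread splits is one way to pin down where the efficiency is lost.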
Optimization of the GReX code (2019)
AstroLab contact: Luigi Iapichino, Salvatore Cielo
Application partners: Elias Most, Jens Papenfort (Institute for Theoretical Physics, Univ. Frankfurt)
Project partner: Fabio Baruffa (Intel)
The gravitational-wave (GW) events produced by the collision and merging of compact objects such as neutron stars have recently been observed by the LIGO network for the first time. Understanding the observed electromagnetic counterparts of these events can grant a complete, multi-messenger view of these extreme phenomena, but requires extensive numerical work to include the strong magnetic fields and turbulence involved. As such simulations demand extremely high resolution and computing power, scaling parallel simulation codes to large core counts is a must.
The GReX code has successfully run simulations of neutron star mergers on up to 32k cores on SuperMUC-NG, while taking full advantage of the SIMD capabilities of modern CPUs. Scaling to extreme core counts (>100k) is however required to leverage the power of the coming exascale supercomputers. During the optimization project with the AstroLab, we identified the main bottlenecks in load balancing and communication when using a large number of AMR grids with many cells. After a detailed characterization with Application Performance Snapshot and Intel Trace Analyzer and Collector, we performed several node-level scaling tests and investigated the effect of local grid tiling on the performance up to 256 nodes. However, the best results were obtained by rewriting the way the code exchanges ghost cells around grids via MPI.
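The sketch below shows the general pattern of a nonblocking ghost-cell exchange for a 1-D, fixed-width halo; it illustrates the idea only and is not GReX's actual AMR communication layer. The ghost width NGHOST, the periodic neighbour setup and the data layout are assumptions.

```cpp
#include <mpi.h>
#include <vector>

constexpr int NGHOST = 2;   // ghost-cell width (assumed)

// Exchange the ghost zones of a 1-D array with the left/right neighbour ranks.
// Receives are posted first, then the interior edge cells are sent; interior
// work could overlap with communication before the final wait.
void exchange_ghosts(std::vector<double>& u, int left, int right, MPI_Comm comm) {
    const int n = static_cast<int>(u.size());   // includes 2*NGHOST ghost cells
    MPI_Request req[4];

    MPI_Irecv(&u[0],              NGHOST, MPI_DOUBLE, left,  0, comm, &req[0]);
    MPI_Irecv(&u[n - NGHOST],     NGHOST, MPI_DOUBLE, right, 1, comm, &req[1]);
    MPI_Isend(&u[NGHOST],         NGHOST, MPI_DOUBLE, left,  1, comm, &req[2]);
    MPI_Isend(&u[n - 2 * NGHOST], NGHOST, MPI_DOUBLE, right, 0, comm, &req[3]);

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Periodic 1-D chain of ranks, each holding 1024 interior cells.
    std::vector<double> u(1024 + 2 * NGHOST, static_cast<double>(rank));
    const int left  = (rank - 1 + size) % size;
    const int right = (rank + 1) % size;
    exchange_ghosts(u, left, right, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

In a real AMR code the same pattern is applied per grid and per neighbour, so the number and size of messages grows quickly with the number of grids.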
Finalized Projects
Improving the parallel performance of ECHO-3DHPC
AstroLab contact: Luigi Iapichino
In this project we improved the parallelization scheme of ECHO-3DHPC, an efficient astrophysical code used in the modelling of relativistic plasmas. With the help of the Intel software development tools (the Fortran compiler with Profile-Guided Optimization (PGO), the Intel MPI Library, VTune Amplifier and Inspector), we investigated the performance issues and improved the application's scalability and time to solution. The node-level performance was improved by 2.3x and, thanks to the improved threading parallelisation, the hybrid MPI-OpenMP version of the code outperforms the MPI-only one, thus lowering the MPI communication overhead. More details: see the article in Intel Parallel Universe Magazine 34, p. 49.
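For illustration, here is a minimal sketch of such a hybrid MPI-OpenMP setup (not ECHO-3DHPC code): the program requests a threading level at MPI initialization, checks what the library provides, and otherwise falls back to a single thread. The funneled threading level is an assumption made for the example.

```cpp
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // FUNNELED: only the master thread calls MPI; worker threads do compute.
    // With more threads per rank, fewer ranks live on each node, which is
    // what lowers the MPI communication overhead.
    const int nthreads = (provided >= MPI_THREAD_FUNNELED) ? omp_get_max_threads() : 1;

    if (rank == 0)
        std::printf("%d MPI ranks x %d OpenMP threads per rank\n", size, nthreads);

    MPI_Finalize();
    return 0;
}
```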
Modernisation of the BHAC solver scheme
AstroLab contact: Nicolay Hammer
The LRZ AstroLab support project for BHAC targeted the modernisation of the code's solver scheme towards a task-based algorithm. A series of profiling (using Intel VTune Amplifier) and analysis steps before and after the implementation of the tasking revealed further bottlenecks in the memory access pattern. These performance issues were tackled by refactoring important loops of the source code during a dedicated hackathon/workshop for the BHAC developers, held on-site at LRZ. These activities were complemented by further efforts tackling parallel I/O challenges. More details on BHAC here.
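As an illustration of the tasking idea, the sketch below creates one OpenMP task per block so that ready blocks are scheduled onto idle threads; it is not the actual BHAC scheme, and the block layout and the update kernel are placeholders.

```cpp
#include <omp.h>
#include <vector>
#include <cstddef>

// Placeholder per-block solver step.
void update_block(std::vector<double>& block) {
    for (std::size_t i = 1; i + 1 < block.size(); ++i)
        block[i] += 0.25 * (block[i - 1] - 2.0 * block[i] + block[i + 1]);
}

void advance(std::vector<std::vector<double>>& blocks) {
    #pragma omp parallel
    #pragma omp single
    {
        // One task per block: the runtime schedules blocks onto idle threads
        // instead of using a static loop partition.
        for (std::size_t ib = 0; ib < blocks.size(); ++ib) {
            #pragma omp task firstprivate(ib) shared(blocks)
            update_block(blocks[ib]);
        }
        #pragma omp taskwait   // all block updates finish before the next stage
    }
}

int main() {
    std::vector<std::vector<double>> blocks(32, std::vector<double>(1024, 1.0));
    advance(blocks);
    return 0;
}
```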
Profiling and optimization of the TARDIS code (ADVISOR 2016)
AstroLab contacts: Vytautas Jančauskas, Stephan Hachinger
TARDIS is a numerical code for Monte-Carlo simulations of supernova spectrum formation. It serves as a tool to analyse the conditions generating the spectra of observed supernovae, i.e. to trace back the supernova structure from the observations. In this ADVISOR 2016 project, our aim was to obtain an overview of possible performance bottlenecks (and of strategies for discovering bottlenecks in the future), and to take first steps towards an optimised TARDIS code. The code is largely parallelised with OpenMP, and job-farming techniques (e.g. with MPI) are used to perform ensemble runs on larger machines. In the context of ADVISOR 2016 and the LRZ AstroLab, a comprehensive standard profiling/scaling test of the TARDIS code was performed. The profiling brought out no obvious, easy-to-resolve bottlenecks or problems, but was very valuable for planning algorithmic and conceptual improvements (e.g. an improved convergence strategy) for TARDIS v2.0. More details: see TARDIS on GitHub, TARDIS on readthedocs, TARDIS v1.0.1 via DOI.
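TARDIS itself is a Python code, so the toy C++ example below only illustrates the job-farming pattern mentioned above: each MPI rank runs an independent Monte-Carlo realisation with its own seed, and the results are gathered at the end. The placeholder "simulation" estimates pi; the seeds and sample count are arbitrary.

```cpp
#include <mpi.h>
#include <random>
#include <vector>
#include <cstdio>

// Placeholder "simulation": Monte-Carlo estimate of pi.
double run_realisation(unsigned seed, int nsamples) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    int hits = 0;
    for (int i = 0; i < nsamples; ++i) {
        const double x = dist(gen), y = dist(gen);
        if (x * x + y * y < 1.0) ++hits;
    }
    return 4.0 * static_cast<double>(hits) / nsamples;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank is an independent ensemble member; no communication is
    // needed during the run, only a final gather of the results.
    double result = run_realisation(static_cast<unsigned>(1234 + rank), 1000000);

    std::vector<double> all(size);
    MPI_Gather(&result, 1, MPI_DOUBLE, all.data(), 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    if (rank == 0)
        for (int r = 0; r < size; ++r)
            std::printf("ensemble member %d: %f\n", r, all[r]);

    MPI_Finalize();
    return 0;
}
```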