|Online Course|OpenMP Programming Workshop (register via PRACE)|
|Date|17.02.2021 – 19.02.2021|
|Registration deadline|03.02.2021, 23:55|
Please register via the PRACE registration page https://events.prace-ri.eu/event/1084/
Please register with your official e-mail address to prove your affiliation. Following your successful registration, you will receive further information approx. 2 weeks before the course.
With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model.
Since its advent in 1997, the OpenMP programming model has proved to be a key driver behind parallel programming for shared-memory architectures. Its powerful and flexible programming model has allowed researchers from various domains to enable parallelism in their applications. Over the more than two decades of its existence, OpenMP has tracked the evolution of hardware and the complexities of software to ensure that it stays as relevant to today’s high performance computing community as it was in 1997.
This workshop will cover a wide range of topics, ranging from the basics of OpenMP programming using the "OpenMP Common Core" to advanced topics. On each day, lectures will be mixed with hands-on sessions.
Agenda topics:
- Introduction to OpenMP
- Hands-on: Introduction to OpenMP
- Host Perf.: SIMD
- Host Perf.: NUMA
- Tools for Perf. and Correctness
- Misc. OpenMP 5.0 Features
- Roadmap / Outlook

End: approx. 16:30 CET
The first day will cover basic parallel programming with OpenMP.
Most OpenMP programmers use only around 21 items from the specification. We call these the “OpenMP Common Core”. By focusing on the common core on the first day, we make OpenMP what it was always meant to be: a simple API for parallel application programmers.
In this hands-on tutorial, students use active learning, with a carefully selected set of exercises, to master the Common Core and learn to apply it to their own problems.
Days 2 and 3
Days 2 and 3 will cover advanced topics such as:
- Mastering Tasking with OpenMP, Taskloops, Dependencies and Cancellation
- Host Performance: SIMD / Vectorization
- Host Performance: NUMA Aware Programming, Memory Access, Task Affinity, Memory Management
- Tool Support for Performance and Correctness, VI-HPS Tools
- Offloading to Accelerators
- Other Advanced Features of OpenMP 5.0
- Future Roadmap of OpenMP
Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP but rather from the lack of depth with which it is employed. The lectures on Days 2 and 3 will address this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance.
We cover tasking with OpenMP and host performance, focusing on aspects such as data and thread locality on NUMA architectures, false sharing, and exploitation of vector units. Tools for performance and correctness analysis will also be presented.
Current trends in hardware bring co-processors such as GPUs into the fold. A modern platform is often a heterogeneous system with CPU cores, GPU cores, and other specialized accelerators. OpenMP has responded by adding directives that map code and data onto a device, the target directives. We will also explore these directives as they apply to programming GPUs.
OpenMP 5.0 features will be highlighted and the future roadmap of OpenMP will be presented.
All topics are accompanied with extensive case studies and we discuss the corresponding language features in-depth.
Topics may still be subject to change.
The course is organized as a PRACE training event by LRZ in collaboration with the OpenMP ARB and RWTH Aachen.
Basic C/C++ or Fortran knowledge is required. Basic OpenMP knowledge is useful for Days 2 and 3 of the workshop, but the necessary basics will be covered on the first day.
For the hands-on sessions, participants need to use their own laptop or a system with an OpenMP-capable C/C++ or Fortran compiler installed.
The content level of the course is broken down as:
Dr.-Ing. Michael Klemm (OpenMP ARB, AMD), Dr. Christian Terboven (RWTH Aachen University)
Dr.-Ing. Michael Klemm is part of the HPC Center of Excellence at AMD. His focus is on High Performance and Throughput Computing Enabling. He obtained an M.Sc. in Computer Science in 2003 and received a Doctor of Engineering degree (Dr.-Ing.) in Computer Science from the Friedrich-Alexander-University Erlangen-Nuremberg, Germany, in 2008. His research focus was on compilers and runtime optimizations for distributed systems. His areas of interest include compiler construction, design of programming languages, parallel programming, and performance analysis and tuning. He is the CEO of the OpenMP Architecture Review Board (ARB) and also leads the OpenMP Error Model efforts in the Language Committee.
Dr. Christian Terboven is a senior scientist and leads the HPC group at RWTH Aachen University. His research interests center around parallel programming and related software engineering aspects. Dr. Terboven has been involved in the analysis, tuning, and parallelization of several large-scale simulation codes for various architectures. He is responsible for several research projects in the area of programming models and approaches to improve the productivity and efficiency of modern HPC systems. He is also co-author of the book “Using OpenMP – The Next Step”, https://www.openmp.org/tech/using-openmp-next-step/.
Prices and Eligibility
The course is open and free of charge for people from academia and industry.
|No.|Date|Time|Location|Content|
|1|17.02.2021|13:00 – 16:00|ONLINE|Day 1|
|2|18.02.2021|10:00 – 16:00|ONLINE|Day 2|
|3|19.02.2021|10:00 – 16:00|ONLINE|Day 3|