2022-02-08 OpenMP Programming Workshop (homp1w21)

Online Course: OpenMP Programming Workshop (register via PRACE)
Number: homp1w21
Places available: 65
Date: 08.02.2022 – 10.02.2022
Price: € 0.00
Place: ONLINE
Registration deadline: 01.02.2022 23:55
E-mail: education@lrz.de

Registration

Please register via the PRACE registration page https://events.prace-ri.eu/event/1279/ with your official e-mail address to prove your affiliation. Following your successful registration, you will receive further information approx. 1-2 weeks before the course.

Contents

With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model.

Since its advent in 1997, the OpenMP programming model has proved to be a key driver behind parallel programming for shared-memory architectures.  Its powerful and flexible programming model has allowed researchers from various domains to enable parallelism in their applications.  Over the more than two decades of its existence, OpenMP has tracked the evolution of hardware and the complexities of software to ensure that it stays as relevant to today’s high performance computing community as it was in 1997.

This workshop covers a wide range of topics, from the basics of OpenMP programming using the "OpenMP Common Core" to advanced features. Each day mixes lectures with hands-on sessions.

Tentative Agenda

09:00-10:30 CET
  Day 1: Introduction to OpenMP 1
  Day 2: Tasking 1
    • Tasking Intro
    • Lab 1
  Day 3: GPUs
    • OpenMP for Compute Accelerators

10:45-12:15 CET
  Day 1: Hands-on: Introduction to OpenMP
  Day 2: Tasking 2
    • Taskloop
    • Dependencies
    • Cancellation
    • Lab 2
  Day 3: Tools for Perf. and Correctness
    • VI-HPS Tools for Performance
    • VI-HPS Tools for Correctness

13:00-14:45 CET
  Day 1: Introduction to OpenMP 2
  Day 2: Host Perf.: SIMD
    • Vectorisation
    • Lab 3
  Day 3: Misc. OpenMP 5.0 Features
    • DOACROSS Loops

15:00-16:00 CET
  Day 1: Hands-on: Introduction to OpenMP
  Day 2: Host Perf.: NUMA
    • Memory Access
    • Task Affinity
    • Memory Management
    • Lab 4
  Day 3: Roadmap / Outlook
    • Open Discussion
    • OpenMP 5.1 and beyond

End: approx. 16:30 CET

Details

Day 1

The first day will cover basic parallel programming with OpenMP.

Most OpenMP programmers use only around 21 items from the specification. We call these the “OpenMP Common Core”. By focusing on the common core on the first day, we make OpenMP what it was always meant to be: a simple API for parallel application programmers.

In this hands-on tutorial, students use active learning through a carefully selected set of exercises to master the Common Core and learn to apply it to their own problems.

Days 2 and 3

Days 2 and 3 will cover advanced topics such as:

  • Mastering Tasking with OpenMP, Taskloops, Dependencies and Cancellation
  • Host Performance: SIMD / Vectorization
  • Host Performance: NUMA Aware Programming, Memory Access, Task Affinity, Memory Management
  • Tool Support for Performance and Correctness, VI-HPS Tools
  • Offloading to Accelerators
  • Other Advanced Features of OpenMP 5.1
  • Future Roadmap of OpenMP

Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP but from the lack of depth with which it is employed. The lectures on Days 2 and 3 address this need by exploring the implications of possible OpenMP parallelization strategies, in terms of both correctness and performance.

We cover tasking with OpenMP and host performance, with a focus on performance aspects such as data and thread locality on NUMA architectures, false sharing, and exploitation of vector units. Tools for performance and correctness will also be presented.

Current trends in hardware bring co-processors such as GPUs into the fold. A modern platform is often a heterogeneous system with CPU cores, GPU cores, and other specialized accelerators. OpenMP has responded by adding directives that map code and data onto a device, the target directives. We will also explore these directives as they apply to programming GPUs.

OpenMP 5.0 features will be highlighted and the future roadmap of OpenMP will be presented.

All topics are accompanied with extensive case studies and we discuss the corresponding language features in-depth.

Topics may still be subject to change.

The course is organized as a PRACE training event by LRZ in collaboration with the OpenMP ARB and RWTH Aachen.

Prerequisites

Basic C/C++ or Fortran knowledge. Basic OpenMP knowledge is useful for Days 2 and 3 of the workshop, but the necessary basics will be provided on the first day.

Hands-On

For the hands-on sessions, participants need their own laptop or system with a C/C++ or Fortran compiler installed that supports at least OpenMP 4.5 (see https://www.openmp.org/resources/openmp-compilers-tools/).

Content Level

The content level of the course is broken down as:

  • Beginner's content: 33.3%
  • Intermediate content: 33.3%
  • Advanced content: 33.3%
  • Community-targeted content: 0%

Language

English

Lecturers

Dr.-Ing. Michael Klemm (OpenMP ARB, AMD), Dr. Christian Terboven (RWTH Aachen University)

Dr.-Ing. Michael Klemm is part of the HPC Center of Excellence at AMD. His focus is on enabling High Performance and Throughput Computing. He obtained an M.Sc. in Computer Science in 2003 and received a Doctor of Engineering degree (Dr.-Ing.) in Computer Science from the Friedrich-Alexander-University Erlangen-Nuremberg, Germany, in 2008. His research focus was on compilers and runtime optimizations for distributed systems. His areas of interest include compiler construction, design of programming languages, parallel programming, and performance analysis and tuning. He is the CEO of the OpenMP ARB and also leads the OpenMP Language Committee group working on the OpenMP error model.

Dr. Christian Terboven is a senior scientist and leads the HPC group at RWTH Aachen University. His research interests center on parallel programming and related software engineering aspects. Dr. Terboven has been involved in the analysis, tuning, and parallelization of several large-scale simulation codes for various architectures. He is responsible for several research projects in the area of programming models and approaches to improve the productivity and efficiency of modern HPC systems. He is also a co-author of the book "Using OpenMP – The Next Step", https://www.openmp.org/tech/using-openmp-next-step/.

Prices and Eligibility

The course is open and free of charge for people from academia and industry from the Member States (MS) of the European Union (EU) and Associated Countries to the Horizon 2020 programme.


No.  Date        Time           Location  Description
1    08.02.2022  10:00 – 16:00  ONLINE    Day 1
2    09.02.2022  10:00 – 16:00  ONLINE    Day 2
3    10.02.2022  10:00 – 16:00  ONLINE    Day 3