2023-07-11 Deep Learning and GPU Programming using OpenACC @ HLRS (hdlw1s23)

Course: Deep Learning and GPU Programming using OpenACC @ HLRS (register via HLRS)
Number: hdlw1s23
Available places: 32
Date: 11.07.2023 – 13.07.2023
Price: EUR 30.00 – 600.00
Location: Universität Stuttgart - Höchstleistungsrechenzentrum Stuttgart, Nobelstraße 19, 70569 Stuttgart
Room: 0.439 / Rühle Saal
Registration deadline: 12.06.2023 23:55
E-mail: education@lrz.de


This course will take place on-site at HLRS in Stuttgart!

Registration

Please register via HLRS using your official e-mail address to prove your affiliation.

Overview

Learn how to accelerate your applications with OpenACC, how to train and deploy a neural network to solve real-world problems, and how to effectively parallelize training of deep neural networks on multiple GPUs.

The workshop combines an introduction to Deep Learning and Deep Learning for Multi-GPUs with a lecture on Accelerated Computing with OpenACC.

The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on the AI partition of HLRS's cluster Hawk.

This course is organised by HLRS (Germany). All instructors are NVIDIA certified University Ambassadors.

1st day: Accelerated Computing with OpenACC

  • How to profile and optimize your CPU-only applications to identify hot spots for acceleration
  • How to use OpenACC directives to GPU accelerate your codebase
  • How to optimize data movement between the CPU and GPU accelerator

Upon completion, you'll be ready to use OpenACC to GPU accelerate CPU-only applications.
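
To give a flavour of the directive style covered on day one, here is a minimal sketch of a SAXPY loop in C accelerated with OpenACC. It is not taken from the course materials; the array size and the choice of the NVIDIA HPC compiler (nvc) are illustrative assumptions.

```c
/* SAXPY: y = a*x + y, offloaded to the GPU with OpenACC directives.
 * Build with an OpenACC-capable compiler, e.g.: nvc -acc saxpy.c */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 20;
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The data region controls host<->device movement: x is copied in,
     * y is copied in and copied back out once, rather than per kernel. */
    #pragma acc data copyin(x[0:n]) copy(y[0:n])
    {
        /* The loop itself runs in parallel on the GPU. */
        #pragma acc parallel loop
        for (int i = 0; i < n; i++)
            y[i] = 2.0f * x[i] + y[i];
    }

    printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
    free(x);
    free(y);
    return 0;
}
```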

2nd day: Introduction to Deep Learning

  • Implement common deep learning workflows, such as image classification and object detection
  • Experiment with data, training parameters, network structure, and other strategies to increase performance and capability
  • Deploy your neural networks to start solving real-world problems

Upon completion, you’ll be able to start solving problems on your own with deep learning.
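
To indicate the level of the day-two hands-on sessions, here is a minimal image-classification sketch in Keras. It is not the course notebook; the MNIST digits dataset, the small dense network, and the three training epochs are illustrative assumptions.

```python
# Minimal Keras image-classification workflow: load data, build a model,
# train, and evaluate.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3, validation_split=0.1)
model.evaluate(x_test, y_test)
```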

3rd day: Introduction to Deep Learning for Multi-GPUs

  • Approaches to multi-GPU training
  • Algorithmic and engineering challenges to large-scale training
  • Key techniques used to overcome the challenges mentioned above

Upon completion, you'll be able to effectively parallelize training of deep neural networks using TensorFlow.
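
One common approach to the multi-GPU training covered on day three is data parallelism, where each GPU processes a slice of every batch and the gradients are averaged across GPUs. The sketch below uses TensorFlow's built-in MirroredStrategy to illustrate the idea; the hands-on sessions may use a different tool, and the dataset, model, and batch-size scaling are assumed for illustration.

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and
# synchronises gradients across replicas after each step.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

# Scale the global batch size with the number of replicas.
batch_size = 64 * strategy.num_replicas_in_sync

with strategy.scope():  # variables and optimizer state are mirrored
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

model.fit(x_train, y_train, batch_size=batch_size, epochs=3)
```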

Preliminary Agenda

1st day: Accelerated Computing with OpenACC (9:00 - 17:00)

On the first day you will learn the basics of OpenACC, a directive-based, high-level programming model for GPUs. Discover how to accelerate your applications beyond the limits of CPU-only programming with simple pragmas.

2nd day: Introduction to Deep Learning  (9:00 - 17:00)

Explore the fundamentals of deep learning by training neural networks and using results to improve performance and capabilities.

During this day, you’ll learn the basics of deep learning by training and deploying neural networks.

3rd day: Introduction to Deep Learning for Multi-GPUs (9:00 - 17:00)

The computational requirements of the deep neural networks behind AI applications such as self-driving cars are enormous. A single training run can take weeks on a single GPU, or even years for larger datasets like those used in self-driving-car research. Using multiple GPUs can significantly shorten the time required to train on large datasets, making it feasible to solve complex problems with deep learning.

On the third day we will teach you how to use multiple GPUs to train neural networks.

Prerequisites

For day one, you need basic experience with C/C++ or Fortran. Suggested resources to satisfy prerequisites: the learn-c.org interactive tutorial (https://www.learn-c.org/).

On day two, you need an understanding of fundamental programming concepts in Python 3, such as functions, loops, dictionaries, and arrays; familiarity with Pandas data structures; and an understanding of how to compute a regression line.
Suggested resources to satisfy prerequisites: Python Beginner’s Guide.
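
The regression-line prerequisite amounts to being able to fit y = m*x + b to data by least squares; here is a minimal sketch with made-up numbers (the data values are purely illustrative):

```python
import numpy as np

# Made-up sample points.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 8.1, 9.9])

# Least-squares fit of a degree-1 polynomial: returns slope m and intercept b.
m, b = np.polyfit(x, y, deg=1)
print(f"y ~ {m:.2f} * x + {b:.2f}")
```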

Experience with Deep Learning using Python 3 and, in particular, gradient descent model training will be needed on day three.

Familiarity with TensorFlow and Keras will be a plus, as they will be used in the hands-on sessions. If you have not used them before, you can find tutorials here: github.com/tensorflow/docs/tree/master/site/en/r1/tutorials/keras.

Hands-On

The exercises will be carried out on HLRS's cluster Hawk.

Language

English 

Lecturers

Dr. Momme Allalen, PD Dr. Juan Durillo Barrionuevo, Dr. Volker Weinberg (LRZ and NVIDIA University Ambassadors).

Prices and Eligibility

The course is open to people from academia and industry.

The following categories can be selected during registration:

  • Students without Diploma/Master: 30 EUR
  • PhD students or employees at a German university or public research institute: 60 EUR
  • PhD students or employees at a university or public research institute in an EU, EU-associated or PRACE country other than Germany: 120 EUR.
  • PhD students or employees at a university or public research institute outside of EU, EU-associated or PRACE countries: 240 EUR
  • Other participants, e.g., from industry, other public service providers, or government: 600 EUR


Schedule

No. | Date       | Time          | Leader                         | Location                                                       | Room               | Description
1   | 11.07.2023 | 09:00 – 17:00 | Volker Weinberg, Momme Allalen | Universität Stuttgart - Höchstleistungsrechenzentrum Stuttgart | 0.439 / Rühle Saal | Lecture
2   | 12.07.2023 | 09:00 – 17:00 | Juan Durillo Barrionuevo       | Universität Stuttgart - Höchstleistungsrechenzentrum Stuttgart | 0.439 / Rühle Saal | Lecture
3   | 13.07.2023 | 09:00 – 17:00 | Juan Durillo Barrionuevo       | Universität Stuttgart - Höchstleistungsrechenzentrum Stuttgart | 0.439 / Rühle Saal | Lecture