2014-06-05 Efficient Parallel Programming with GASPI (HGPI1S14)

Date: Thursday, June 5, 2014, 9:00-13:00
Location: LRZ Building, University campus Garching, near Munich.

Contents:

In this tutorial we present an asynchronous dataflow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the MPI programming model. GASPI, the Global Address Space Programming Interface, is a PGAS API.

The GASPI API is designed as a C/C++/Fortran library and focuses on three key objectives: scalability, flexibility and fault tolerance. To achieve its improved scaling behaviour, GASPI relies on asynchronous dataflow with remote completion rather than on bulk-synchronous message exchanges. GASPI follows a single/multiple program, multiple data (SPMD/MPMD) approach and offers a small yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com).
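To give a flavour of this programming style ahead of the hands-on sessions, the following is a minimal sketch (not taken from the course material) of asynchronous dataflow with remote completion, assuming the GPI-2 implementation of the GASPI API: each rank performs a one-sided gaspi_write_notify into its right neighbour's segment and waits for the matching notification from its left neighbour. The segment id, offsets, notification ids and queue number are arbitrary choices made for this example.

    #include <GASPI.h>
    #include <stdlib.h>

    static void success_or_die (gaspi_return_t ret)
    {
      if (ret != GASPI_SUCCESS)
        exit (EXIT_FAILURE);
    }

    int main (void)
    {
      gaspi_rank_t rank, num;

      success_or_die (gaspi_proc_init (GASPI_BLOCK));
      success_or_die (gaspi_proc_rank (&rank));
      success_or_die (gaspi_proc_num (&num));

      /* One globally visible segment per rank: slot 0 holds the local
         source value, slot 1 receives the neighbour's value. */
      success_or_die (gaspi_segment_create (0, 2 * sizeof (int),
                                            GASPI_GROUP_ALL, GASPI_BLOCK,
                                            GASPI_MEM_INITIALIZED));
      gaspi_pointer_t ptr;
      success_or_die (gaspi_segment_ptr (0, &ptr));
      ((int *) ptr)[0] = (int) rank;

      gaspi_rank_t right = (gaspi_rank_t) ((rank + 1) % num);
      gaspi_rank_t left  = (gaspi_rank_t) ((rank + num - 1) % num);

      /* One-sided write to the right neighbour, fused with a remote
         notification: payload and completion signal travel together. */
      success_or_die (gaspi_write_notify
        (0, 0,                              /* local segment, offset  */
         right,
         0, sizeof (int),                   /* remote segment, offset */
         sizeof (int),                      /* transfer size          */
         (gaspi_notification_id_t) rank,    /* notification id        */
         1,                                 /* notification value     */
         0, GASPI_BLOCK));                  /* queue, timeout         */

      /* Wait for the left neighbour's notification, then reset it. */
      gaspi_notification_id_t first;
      gaspi_notification_t value;
      success_or_die (gaspi_notify_waitsome (0, (gaspi_notification_id_t) left,
                                             1, &first, GASPI_BLOCK));
      success_or_die (gaspi_notify_reset (0, first, &value));
      /* The left neighbour's rank is now available in ((int *) ptr)[1]. */

      success_or_die (gaspi_wait (0, GASPI_BLOCK)); /* local queue completion */
      success_or_die (gaspi_proc_term (GASPI_BLOCK));
      return EXIT_SUCCESS;
    }

Compiled against GPI-2 and launched with gaspi_run, each process ends up with its left neighbour's value in the second slot of its segment. Note that no rank posts a matching receive: the arrival of the notification is the completion event, which is the pattern the course builds on.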

GASPI is successfully used in academic and industrial simulation applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI. This course provides scientific training in Computational Science and also fosters scientific exchange among the participants.

Agenda:

09:00-09:15 Registration

09:15-10:00 General introduction to GASPI

10:00-10:15 One-sided communication in GASPI

10:15-10:30 Coffee Break

10:30-10:45 Memory segments in GASPI

10:45-12:15 Data Flow in GASPI

12:15-12:30 Collectives and Passive Communication

12:30-13:00 Questions and Answers

Prerequisites
  • users must have an existing account on the system
  • good knowledge of HPC programming languages, compilers, and parallelization concepts (MPI, OpenMP) is required
Language: English
Teachers:

Dr. Christian Simmendinger, T-Systems Solutions for Research GmbH

Dr. Mirko Rahn, Fraunhofer ITWM

Registration: Please choose course HGPI1S14 on the LRZ registration form. The deadline for registration is May 30, 2014.