From bb4dd39997c0a9f6665bb83af5fa20295fd50ab1 Mon Sep 17 00:00:00 2001
From: Andreas Herten
Date: Wed, 1 Jun 2022 16:52:10 +0200
Subject: [PATCH] Update Zenodo details

---
 .zenodo.json | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/.zenodo.json b/.zenodo.json
index fb8d225..e012d96 100644
--- a/.zenodo.json
+++ b/.zenodo.json
@@ -29,19 +29,19 @@
     "title": "Efficient Distributed GPU Programming for Exascale",
-    "publication_date": "2021-11-14",
+    "publication_date": "2022-05-29",
-    "description": "
-
-Over the past years, GPUs became ubiquitous in HPC installations around the world. Today, they provide the majority of performance of some of the largest supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in upcoming pre-exascale and exascale systems (LUMI, Leonardo; Frontier): GPUs are chosen as the core computing devices to enter this next era of HPC.
-
-To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need to have the proper skills and tools to understand, manage, and optimize distributed GPU applications.
-
-In this tutorial, participants will learn techniques to efficiently program large-scale multi-GPU systems. While programming multiple GPUs with MPI is explained in detail, also advanced techniques and models (NCCL, NVSHMEM, …) are presented. Tools for analysis are used to motivate implementation of performance optimizations. The tutorial combines lectures and hands-on exercises, using Europe’s fastest supercomputer, JUWELS Booster with NVIDIA A100 GPUs.
-
-",
+    "description": "
+
+Over the past years, GPUs became ubiquitous in HPC installations around the world. Today, they provide the majority of performance of some of the largest supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in pre-exascale and exascale systems (LUMI, Leonardo; Perlmutter, Frontier): GPUs are chosen as the core computing devices to enter this next era of HPC.
+
+To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need to have the proper skills and tools to understand, manage, and optimize distributed GPU applications. In this tutorial, participants will learn techniques to efficiently program large-scale multi-GPU systems. While programming multiple GPUs with MPI is explained in detail, advanced tuning techniques and complementary programming models like NCCL and NVSHMEM are presented as well. Tools for analysis are shown and used to motivate and implement performance optimizations. The tutorial is a combination of lectures and hands-on exercises, using Europe’s fastest supercomputer, JUWELS Booster with NVIDIA GPUs, for interactive learning and discovery.
+
+",
-    "notes": "Slides and exercises of tutorial presented virtually at SC21 (International Conference for High Performance Computing, Networking, Storage, and Analysis); https://sc21.supercomputing.org/presentation/?id=tut138&sess=sess188",
+    "notes": "Slides and exercises of tutorial presented virtually at ISC22 (ISC High Performance 2022); https://app.swapcard.com/widget/event/isc-high-performance-2022/planning/UGxhbm5pbmdfODYxMTQ2",
     "access_right": "open",
-    "conference_title": "Supercomputing Conference 2021",
-    "conference_acronym": "SC21",
-    "conference_dates": "14-19 November 2021",
-    "conference_place": "St. Louis, MO, USA and virtual",
-    "conference_url": "https://sc21.supercomputing.org/",
+    "conference_title": "ISC HPC 2022",
+    "conference_acronym": "ISC22",
+    "conference_dates": "29 May-02 June 2022",
+    "conference_place": "Hamburg, Germany",
+    "conference_url": "https://www.isc-hpc.com/",
     "conference_session": "Tutorials",
     "conference_session_part": "Day 1",
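
After hand-editing `.zenodo.json` like this, it is easy to leave the file unparseable or a date in the wrong form. As a minimal sketch (not part of the patch; the field subset below is copied from the changed lines, and the checks are illustrative assumptions, not anything the repository ships), the updated fields can be sanity-checked like so:

```python
import json
from datetime import date

# Illustrative fragment reproducing only the fields this patch changes,
# not the full .zenodo.json.
updated_fields = json.loads("""
{
    "publication_date": "2022-05-29",
    "conference_title": "ISC HPC 2022",
    "conference_acronym": "ISC22",
    "conference_dates": "29 May-02 June 2022",
    "conference_place": "Hamburg, Germany",
    "conference_url": "https://www.isc-hpc.com/"
}
""")

# Zenodo expects publication_date in ISO 8601 (YYYY-MM-DD) form;
# date.fromisoformat raises ValueError on anything else.
assert date.fromisoformat(updated_fields["publication_date"]) == date(2022, 5, 29)
```

Running the real file through `json.load` the same way catches a stray trailing comma or unescaped quote before the metadata reaches Zenodo.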