Update Zenodo details
AndiH committed Jun 1, 2022
1 parent 16505f7 commit bb4dd39
Showing 1 changed file with 8 additions and 8 deletions.
16 changes: 8 additions & 8 deletions .zenodo.json
@@ -29,19 +29,19 @@

"title": "Efficient Distributed GPU Programming for Exascale",

- "publication_date": "2021-11-14",
+ "publication_date": "2022-05-29",

- "description": "<p>Over the past years, GPUs became ubiquitous in HPC installations around the world. Today, they provide the majority of performance of some of the largest supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in upcoming pre-exascale and exascale systems (LUMI, Leonardo; Frontier): GPUs are chosen as the core computing devices to enter this next era of HPC.</p><p>To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need to have the proper skills and tools to understand, manage, and optimize distributed GPU applications.</p><p>In this tutorial, participants will learn techniques to efficiently program large-scale multi-GPU systems. While programming multiple GPUs with MPI is explained in detail, also advanced techniques and models (NCCL, NVSHMEM, …) are presented. Tools for analysis are used to motivate implementation of performance optimizations. The tutorial combines lectures and hands-on exercises, using Europe’s fastest supercomputer, JUWELS Booster with NVIDIA A100 GPUs.</p>",
+ "description": "<p>Over the past years, GPUs became ubiquitous in HPC installations around the world. Today, they provide the majority of performance of some of the largest supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in the pre-exascale and exascale systems (LUMI, Leonardo; Perlmutter, Frontier): GPUs are chosen as the core computing devices to enter this next era of HPC.</p><p>To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need to have the proper skills and tools to understand, manage, and optimize distributed GPU applications. In this tutorial, participants will learn techniques to efficiently program large-scale multi-GPU systems. While programming multiple GPUs with MPI is explained in detail, advanced tuning techniques and complementary programming models like NCCL and NVSHMEM are presented as well. Tools for analysis are shown and used to motivate and implement performance optimizations. The tutorial is a combination of lectures and hands-on exercises, using Europe’s fastest supercomputer, JUWELS Booster with NVIDIA GPUs, for interactive learning and discovery.</p>",

- "notes": "Slides and exercises of tutorial presented virtually at SC21 (International Conference for High Performance Computing, Networking, Storage, and Analysis); https://sc21.supercomputing.org/presentation/?id=tut138&sess=sess188",
+ "notes": "Slides and exercises of tutorial presented virtually at ISC22 (ISC High Performance 2022); https://app.swapcard.com/widget/event/isc-high-performance-2022/planning/UGxhbm5pbmdfODYxMTQ2",

"access_right": "open",

- "conference_title": "Supercomputing Conference 2021",
- "conference_acronym": "SC21",
- "conference_dates": "14-19 November 2021",
- "conference_place": "St. Louis, MO, USA and virtual",
- "conference_url": "https://sc21.supercomputing.org/",
+ "conference_title": "ISC HPC 2022",
+ "conference_acronym": "ISC22",
+ "conference_dates": "29 May-02 June 2022",
+ "conference_place": "Hamburg, Germany",
+ "conference_url": "https://www.isc-hpc.com/",
"conference_session": "Tutorials",
"conference_session_part": "Day 1",

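Metadata edits like this one are easy to get subtly wrong (Zenodo expects `publication_date` in ISO 8601 `YYYY-MM-DD` form, and the JSON must stay valid). The following is a minimal sanity-check sketch, not part of the repository; the inline JSON fragment only mirrors the fields this commit touches:

```python
import json
from datetime import date

# Hypothetical fragment mirroring the fields changed in this commit
zenodo = json.loads("""
{
  "title": "Efficient Distributed GPU Programming for Exascale",
  "publication_date": "2022-05-29",
  "access_right": "open",
  "conference_acronym": "ISC22",
  "conference_dates": "29 May-02 June 2022"
}
""")

# date.fromisoformat raises ValueError on anything that is not YYYY-MM-DD,
# so a malformed publication_date fails loudly before the file is pushed.
pub = date.fromisoformat(zenodo["publication_date"])

# Basic presence check for the fields this commit updates
for key in ("title", "publication_date", "access_right", "conference_acronym"):
    assert key in zenodo, f"missing field: {key}"

print("metadata OK")
```

Running such a check in CI before depositing to Zenodo catches both invalid JSON (from `json.loads`) and bad date formats in one step.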
