Added lesson 10
Michael Greenburg committed Oct 30, 2023
1 parent b7e7221 commit 818c6d9
Showing 5 changed files with 28 additions and 10 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -34,4 +34,6 @@ I need to re-make some of the videos with the current example code; the ones tha

The MPI example code should maybe be converted to use [MPL](https://github.com/rabauke/mpl) instead.

The MPI reading doesn't have enough information on communications--e.g. `MPI_{I,}{Send,Recv,Sendrecv}`

Once LLVM or GCC supports compiling DPC++ for GPUs, use that rather than `nvc++` for the GPU phase.
4 changes: 2 additions & 2 deletions lessons.md
@@ -20,10 +20,10 @@
### [8: Threading in C++](lessons/8.md)

### [9: Distributed Programming and MPI](lessons/9.md)
-<!---

-### [10: Distributed Programming and MPI](lessons/10-mpi.md)
+### [10: Distributed Programming and MPI continued](lessons/10.md)

+<!---
### [11: Applications of HPC](lessons/11-applications.md)
### [12: Accelerators](lessons/12-accelerators.md)
16 changes: 16 additions & 0 deletions lessons/10.md
@@ -0,0 +1,16 @@
# Distributed Programming and MPI

This is a continuation of [lesson 9](9.md).

## Study guide

- Understand what differentiates shared memory and distributed memory parallel programming
- Know how to send data between MPI processes
- Know how to read from and write to files with MPI I/O
- Know what ghost/halo cells are and understand how to use them to split data between processes (see the sketch after this list)
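
A minimal sketch of the halo-exchange pattern referenced above (illustrative only; the function name `exchange_halos` and the one-ghost-cell-per-side layout are assumptions, not part of the lesson's code):

```c++
#include <mpi.h>
#include <vector>

// 1D decomposition: each rank owns local[1..n]; local[0] and local[n+1]
// are ghost/halo cells mirroring the neighboring ranks' edge values.
void exchange_halos(std::vector<double> &local, int rank, int size) {
    int n = static_cast<int>(local.size()) - 2;
    int left = rank > 0 ? rank - 1 : MPI_PROC_NULL;         // MPI_PROC_NULL makes
    int right = rank < size - 1 ? rank + 1 : MPI_PROC_NULL; // boundary exchanges no-ops
    // Send my left edge to the left neighbor, receive my right ghost cell.
    MPI_Sendrecv(&local[1], 1, MPI_DOUBLE, left, 0,
                 &local[n + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Send my right edge to the right neighbor, receive my left ghost cell.
    MPI_Sendrecv(&local[n], 1, MPI_DOUBLE, right, 1,
                 &local[0], 1, MPI_DOUBLE, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
```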

## Readings and Assignments

Continue working on [phase 7](../project/phase7.md)

[Quiz: Distributed Programming and MPI](https://byu.instructure.com/courses/21221/quizzes)
4 changes: 2 additions & 2 deletions lessons/9.md
@@ -4,8 +4,8 @@ In this lesson you'll learn the concepts underpinning multi-node computation and

## Study guide

- Understand what differentiates shared memory and distributed memory parallel programming
-- Know how to write simple MPI programs in C++
+- Know how to write and compile simple MPI and MPI I/O programs in C++
- Know how to launch an MPI program with a given number of processors

## Readings and Assignments

12 changes: 6 additions & 6 deletions readings/mpi.md
@@ -13,7 +13,7 @@ Read [subsections 2.6.3.1-6](EijkhoutHPCTutorialsVol1.pdf#subsection.2.6.3) of E

## MPI

-The [Message Passing Interface (MPI)](https://en.wikipedia.org/wiki/Message_Passing_Interface) is an interface for passing data between processes using messages. These processes can be on the same machine or across nodes. All MPI programs begin with a call to `MPI_Init` and end with `MPI_Finalize`. The MPI functions are defined in `mpi.h`.
+The [Message Passing Interface (MPI)](https://en.wikipedia.org/wiki/Message_Passing_Interface) is an interface for passing data between processes using messages. It allows for **distributed memory** programming, unlike OpenMP or C++ threads, which require **shared memory**; this means that an MPI program can span multiple nodes. These processes can be on the same machine or across nodes. All MPI programs begin with a call to `MPI_Init` and end with `MPI_Finalize`. The MPI functions are defined in `mpi.h`.

```c++
#include <iostream>
// ...
```

@@ -35,7 +35,7 @@ With most MPI compilers, you can use `mpic++` in the place of a C++ compiler like

```sh
mpicxx -std=c++20 -o myprog myprog.cpp
```
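
A minimal MPI program in this vein might look like the following sketch (an illustration, not the repository's collapsed example above):

```c++
#include <iostream>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                // must be called before any other MPI function
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's ID within the communicator
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes
    std::cout << "Hello from rank " << rank << " of " << size << "\n";
    MPI_Finalize();                        // must be called after all other MPI calls
    return 0;
}
```

Once compiled, such a program is launched through a process manager, e.g. `mpirun -np 4 ./myprog` to run 4 processes.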

-The partial `CMakeLists.txt` below will build the MPI program only if the MPI compiler for C++ is found; this allows building the other executables even if MPI isn't available.
+The partial `CMakeLists.txt` below will build an MPI program only if the MPI compiler for C++ is found; this allows building the other executables even if MPI isn't available.

```cmake
cmake_minimum_required(VERSION 3.9)
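# The rest of the file is collapsed in this view. A plausible continuation
# (an assumption, not the repository's exact CMakeLists.txt) might be:
project(example CXX)
find_package(MPI)
if(MPI_CXX_FOUND)
  # Only define the MPI executable when an MPI C++ compiler was found
  add_executable(myprog myprog.cpp)
  target_link_libraries(myprog PRIVATE MPI::MPI_CXX)
endif()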
```

@@ -62,9 +62,9 @@ Not knowing the name of the function that you are looking for, though, renders s

## MPI I/O

-The MPI datatypes which describe the memory layout of messages are reused to describe the file layout on persistent storage.
+The MPI data types which describe the memory layout of messages are reused to describe the file layout on persistent storage.

-Files are opened with `MPI_File_open` and closed with `MPI_File_close`. There are various "modes" for opening files. This example opens the file in read-only mode. If it doesn't exist, `MPI_File_open` will return an error.
+Files are opened with `MPI_File_open` and closed with `MPI_File_close`. There are various "modes" for opening files. This example opens the file in read-only mode. If it doesn't exist, `MPI_File_open` will return an [error](https://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-1.1/node148.htm).

```c++
MPI_File handle;
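// The call itself is collapsed in this view; opening read-only might look
// like this (the file name "data.bin" is an assumption):
MPI_File_open(MPI_COMM_WORLD, "data.bin", MPI_MODE_RDONLY, MPI_INFO_NULL, &handle);
// ... reads happen here ...
MPI_File_close(&handle);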
```

@@ -110,7 +110,7 @@

```c++
MPI_File_read_all(f, &n, 1, MPI_INT, MPI_STATUS_IGNORE);
// Read body
int local_n = n / mpi_size;
-int local_offset = n * local_n;
+int local_offset = mpi_rank * local_n;
if (mpi_rank == mpi_size-1) local_n += n % mpi_size; // last proc gets remainder
std::vector<int> v(local_n);
MPI_File_read_at(f, header_size+local_offset, v.data(), v.size(), MPI_INT, MPI_STATUS_IGNORE);
```

@@ -145,7 +145,7 @@

```c++
std::array<int, 4> data{1, 3, 5, 7};
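// Quad is a derived MPI datatype assumed to be created in the collapsed lines
// above, e.g. with MPI_Type_contiguous(4, MPI_INT, &Quad) and MPI_Type_commit(&Quad).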
MPI_Send(&data[0], 1, Quad, dest, tag, comm);

// when finished with the type
-MPI_Type_free($Quad);
+MPI_Type_free(&Quad);
```
### `MPI_Type_create_struct`
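
The body of this section is collapsed in this view. For reference, a typical use of `MPI_Type_create_struct` looks like the following sketch (the `Particle` struct and all names are assumptions):

```c++
#include <cstddef> // offsetof
#include <mpi.h>

struct Particle { int id; double pos[3]; };

// Describe Particle's memory layout to MPI: one int, then three doubles.
int blocklengths[2] = {1, 3};
MPI_Aint displacements[2] = {offsetof(Particle, id), offsetof(Particle, pos)};
MPI_Datatype types[2] = {MPI_INT, MPI_DOUBLE};
MPI_Datatype ParticleType;
MPI_Type_create_struct(2, blocklengths, displacements, types, &ParticleType);
MPI_Type_commit(&ParticleType); // required before use in communication
```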
