
Multithreaded array initialization #68

Open · wants to merge 4 commits into base: main

Conversation

@carstenbauer commented Oct 4, 2022

This PR adds multithreaded array initialization for better performance on systems with multiple NUMA domains. See my extensive comment on Discourse.

With this PR, I get about a 40% speedup for this example (with USE_GPU=false) on a full AMD Zen3 CPU (64 cores, 4 NUMA domains) of Noctua 2.
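
The reason this helps: memory pages are typically placed in the NUMA domain of the thread that first writes ("touches") them, so allocating an array uninitialized and filling it in parallel keeps each thread's portion local to its domain. A minimal sketch of this first-touch pattern (not the code of this PR; zeros_threaded is a hypothetical name, shown only to illustrate the idea):

using Base.Threads

# Hypothetical first-touch initialization: allocate uninitialized, then let all
# threads zero their own chunk, so pages land in the NUMA domain of the thread
# that first writes them. A :static schedule keeps the mapping deterministic.
function zeros_threaded(::Type{T}, dims::Integer...) where {T}
    A = Array{T}(undef, dims...)
    @threads :static for i in eachindex(A)
        @inbounds A[i] = zero(T)
    end
    return A
end

# Usage (illustrative only): B = zeros_threaded(Float64, 512, 512, 512)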

Timings (s) before

╭───────────┬─────────┬─────────┬─────────╮
│ # Threads │       1 │       8 │      64 │
├───────────┼─────────┼─────────┼─────────┤
│   compact │ 12.8708 │ 2.42357 │ 2.43713 │
│    spread │ 12.8708 │ 2.38331 │  3.3897 │
╰───────────┴─────────┴─────────┴─────────╯

Timings (s) after

╭───────────┬─────────┬─────────┬─────────╮
│ # Threads │       1 │       8 │      64 │
├───────────┼─────────┼─────────┼─────────┤
│   compact │ 12.8762 │ 2.41895 │ 1.51899 │
│    spread │ 12.8762 │ 2.35042 │ 2.08579 │
╰───────────┴─────────┴─────────┴─────────╯

Speedup in %

╭───────────┬─────┬─────┬──────╮
│ # Threads │   1 │   8 │   64 │
├───────────┼─────┼─────┼──────┤
│   compact │ 0.0 │ 0.0 │ 38.0 │
│    spread │ 0.0 │ 1.0 │ 38.0 │
╰───────────┴─────┴─────┴──────╯

NOTES:

  • We see that the changes have essentially no impact on the single threaded case but give speedups when run with many threads (on a multi-NUMA domain system).
  • We see that if we stay within one NUMA domain (e.g. 8 threads) we don't observe a speedup (as expected).
  • compact and spread indicate the thread pinning strategy.
  • Ideally, the access pattern of the parallel initialization should match the access pattern of the stencil as closely as possible; in this PR, I only do the "trivial" parallel initialization (see the slab-wise sketch below). In principle, one could think about passing the custom user kernel to @zeros and co, analyzing its structure, and then initializing "accordingly", but that's difficult...
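
To make the last point concrete, here is a hedged sketch (a hypothetical helper, not part of this PR) of an initialization whose thread-to-data mapping mimics a stencil kernel parallelized over the last dimension, so that each thread first touches the slabs it will later update:

using Base.Threads

# Hypothetical slab-wise first-touch initialization along the last (z) dimension,
# matching a kernel that is also parallelized over z with a static schedule.
function zeros_slabwise(::Type{T}, nx::Int, ny::Int, nz::Int) where {T}
    A = Array{T}(undef, nx, ny, nz)
    @threads :static for iz in 1:nz
        @inbounds A[:, :, iz] .= zero(T)
    end
    return A
end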

cc @luraess @omlins

PS: Working on it at the GPU4GEO Hackathon in the Schwarzwald 😉

@luraess (Collaborator) commented Oct 4, 2022

Thanks for the contribution. Having something in PS for the Threads backend to control pinning and the thread-to-core mapping (or a close-to-optimal default) would be great, especially for AMD CPUs with many NUMA regions, where this becomes significant.
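
For reference, pinning can already be controlled from user code with ThreadPinning.jl (a hedged example; the exact strategy symbols depend on the package version, so consult ?pinthreads):

using ThreadPinning

pinthreads(:compact)    # fill the cores of one NUMA domain/socket first
# pinthreads(:spread)   # alternatively, distribute threads across sockets/NUMA domains
threadinfo()            # inspect the resulting thread-to-core mapping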

@omlins self-requested a review on October 4, 2022, 17:46
@carstenbauer (Author)

BTW, @omlins, depending on how easy or difficult it would be to give me test access to Piz Daint, I could run some benchmarks there as well.

@omlins (Owner) commented Oct 6, 2022

@carstenbauer, as Ludovic probably already told you, Piz Daint does not have any AMD CPUs. Thus, for testing this, Superzack (Ludovic's cluster) will be better suited.

@carstenbauer (Author) commented Oct 21, 2022

I quickly tested another example, namely https://github.com/omlins/ParallelStencil.jl/blob/main/miniapps/acoustic3D.jl (with the visualization/animation part commented out). Same configuration as above, i.e. a 64-core node of Noctua 2 with 64 Julia threads pinned compactly. Below are the timings of the acoustic3D() function before and with this PR.

# Before PR: 44.315157 seconds (779.52 k allocations: 840.038 MiB, 1.09% gc time)
# With PR: 18.557505 seconds (791.20 k allocations: 840.475 MiB, 2.71% gc time)

This corresponds to about a 2.4x speedup. (cc @luraess)
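
A hedged sketch of how one could reproduce this setup (the path and the pinning call are assumptions; note that the miniapp may already run acoustic3D() when included):

# Start Julia with 64 threads, e.g.:  julia -t 64 --project
using ThreadPinning
pinthreads(:compact)                 # pin the 64 Julia threads compactly, as above

include("miniapps/acoustic3D.jl")    # visualization/animation part commented out
@time acoustic3D()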

@carstenbauer marked this pull request as ready for review on December 2, 2022, 11:23
@omlins (Owner) commented Dec 12, 2022

This also relates to #53 (comment).

@carstenbauer (Author)

What's holding back merging this?

@ranocha (Contributor) commented Sep 13, 2023

Bump

@luraess (Collaborator) commented Oct 19, 2023

@omlins bump
