
Tutorial on Turing.jl and performance #2415

Open
torfjelde opened this issue Dec 3, 2024 · 0 comments

We should have a tutorial that goes through the following aspects of working with a model:

  1. How to benchmark and profile a model.
  2. How to choose the right AD backend.
  3. How to write models in a performant manner.

This is what I'm imagining (please do make suggestions!):

1. Benchmarking and profiling

Should include:

  • Usage of TuringBenchmarking.jl (see the first sketch below).
  • Usage of a profiler, e.g. PProf.jl, both for the evaluation of the model itself and for gradient computations (see the second sketch below).
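A minimal sketch of what the TuringBenchmarking.jl part could look like (the `benchmark_model` call and its `adbackends` keyword are assumptions based on the package; double-check against the current docs):

```julia
using Turing, TuringBenchmarking

# A toy model purely for illustration.
@model function demo(x)
    μ ~ Normal()
    x ~ Normal(μ, 1)
end

model = demo(1.5)

# Benchmark both model evaluation and gradient computation
# across a couple of AD backends.
results = benchmark_model(model; adbackends = [:forwarddiff, :reversediff])
```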
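And a rough sketch of the profiling part using Julia's built-in Profile stdlib together with PProf.jl. Here we just profile repeated model evaluations (calling `model()` runs the model once, sampling from the prior); profiling gradient computations would follow the same pattern:

```julia
using Profile, PProf

Profile.clear()
@profile for _ in 1:10_000
    model()  # one evaluation of the model, sampling from the prior
end
pprof()  # serve an interactive flame graph in the browser
```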

2. Choosing AD backend

This will be informed by the steps in the previous section.

Also, maybe include some description of (or at least a pointer to another resource outlining) the pros and cons of the different AD backends.
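For concreteness, a sketch of what switching backends looks like from the user's side, assuming the `adtype` keyword (via ADTypes.jl) supported by recent Turing releases:

```julia
using Turing, ADTypes

# ForwardDiff: a reasonable default for models with few parameters.
chain_fd = sample(model, NUTS(; adtype = AutoForwardDiff()), 1_000)

# ReverseDiff with a compiled tape: often faster for many parameters,
# provided the model contains no value-dependent control flow.
chain_rd = sample(model, NUTS(; adtype = AutoReverseDiff(; compile = true)), 1_000)
```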

3. Writing performant models

This comes down to a few tricks:

  • Keep the LHS of `~` "simple", e.g. `x ~ filldist(Normal(), n)` instead of `x[i] ~ Normal()` inside a for-loop.
  • Use type information, e.g. `::Type{TV}=Vector{Float64}` in the model definition, and so on (see the first sketch below).
  • Check type stability, e.g. using `DynamicPPL.DebugUtils.model_typed` (or maybe Cthulhu.jl?); see the second sketch below.
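A sketch contrasting these patterns (the model names are made up for illustration):

```julia
using Turing

# Slower: indexing on the LHS of ~ inside a loop means one
# scalar assume-statement per element.
@model function demo_loop(n)
    x = Vector{Float64}(undef, n)
    for i in 1:n
        x[i] ~ Normal()
    end
end

# Faster: a single vector-valued ~ statement.
@model function demo_filldist(n)
    x ~ filldist(Normal(), n)
end

# The type-parameter trick: make the container's element type a model
# argument so AD backends can propagate their dual/tracked number types.
@model function demo_typed(n, ::Type{TV}=Vector{Float64}) where {TV}
    x = TV(undef, n)
    for i in 1:n
        x[i] ~ Normal()
    end
end
```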
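And for the type-stability check, something along these lines (the DebugUtils helpers are assumed to be available in the DynamicPPL version used):

```julia
using DynamicPPL

model = demo_typed(10)

# @code_typed-style output for the model evaluation:
DynamicPPL.DebugUtils.model_typed(model)

# @code_warntype-style output, highlighting type instabilities:
DynamicPPL.DebugUtils.model_warntype(model)
```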