
Remove (duplicate) samplers being defined explicitly in Turing.jl #2413

Open
3 tasks
torfjelde opened this issue Dec 3, 2024 · 2 comments
torfjelde commented Dec 3, 2024

We're duplicating a lot of code and a lot of effort by having a bunch of sampler (or rather, InferenceAlgorithm) implementations in Turing.jl itself.

There are a few reasons why this is / was the case:

  1. The old approach to Gibbs sampling required hooking into the assume and observe statements for samplers and mutating the varinfo in a particular way, even if the functionality of the sampler itself (when used outside of Gibbs) didn't require it.
  2. The samplers in Turing.jl would often offer more convenient constructors, while the sampler packages themselves, e.g. AdvancedHMC.jl, would offer a more flexible but also more complicated interface.
  3. InferenceAlgorithm allows us to overload the sample call explicitly to do some "non-standard" things, e.g. use chain_type=MCMCChains.Chains as the default, instead of chain_type=Vector as is the default in AbstractMCMC.jl.

Everything but (3) is "easily" addressable (i.e. it only requires dev-time, not necessarily any discussion on how to do it):

Removing the InferenceAlgorithm type (3)

Problem

Currently, most of the code for the samplers in Turing.jl lives outside of Turing.jl; inside Turing.jl we then define a "duplicate" which is not an AbstractMCMC.AbstractSampler (as typically expected by AbstractMCMC.sample), but instead a subtype of Turing.Inference.InferenceAlgorithm:

```julia
abstract type InferenceAlgorithm end
abstract type ParticleInference <: InferenceAlgorithm end
abstract type Hamiltonian <: InferenceAlgorithm end
abstract type StaticHamiltonian <: Hamiltonian end
abstract type AdaptiveHamiltonian <: Hamiltonian end
```

But precisely because these are not AbstractMCMC.AbstractSampler subtypes, we can overload sample calls to do more than what sample does for a given AbstractSampler.

One of the things we do is to default to chain_type=MCMCChains.Chains rather than chain_type=Vector (the default in AbstractMCMC.jl):

```julia
function AbstractMCMC.sample(
    rng::AbstractRNG,
    model::AbstractModel,
    sampler::Sampler{<:InferenceAlgorithm},
    ensemble::AbstractMCMC.AbstractMCMCEnsemble,
    N::Integer,
    n_chains::Integer;
    chain_type=MCMCChains.Chains,
    progress=PROGRESS[],
    kwargs...,
)
    return AbstractMCMC.mcmcsample(
        rng,
        model,
        sampler,
        ensemble,
        N,
        n_chains;
        chain_type=chain_type,
        progress=progress,
        kwargs...,
    )
end
```

Another is to perform some simple model checks to stop the user from doing things they shouldn't, e.g. accidentally using a model twice (this is done using DynamicPPL.check_model):

```julia
function AbstractMCMC.sample(
    rng::AbstractRNG,
    model::AbstractModel,
    alg::InferenceAlgorithm,
    N::Integer;
    check_model::Bool=true,
    kwargs...,
)
    check_model && _check_model(model, alg)
    return AbstractMCMC.sample(rng, model, Sampler(alg, model), N; kwargs...)
end
```

However, as mentioned before, having to repeat all these sampler constructors just to go from working with an AbstractSampler to an InferenceAlgorithm so we can do these things is a) very annoying to maintain, and b) very confusing for newcomers who want to contribute.

Now, the problem is that we cannot simply start overloading sample(model::DynamicPPL.Model, sampler::AbstractMCMC.AbstractSampler, ...) calls, since sampler packages might define something like sample(model::AbstractMCMC.AbstractModel, sampler::MySampler, ...) (note that DynamicPPL.Model <: AbstractMCMC.AbstractModel), which would give rise to a host of method ambiguities.

Someone might say "oh, but nobody is going to implement sample(model::AbstractMCMC.AbstractModel, sampler::MySampler, ...); they're always going to implement a sampler for a specific model type, e.g. AbstractMCMC.LogDensityModel". But this is not great for two reasons: a) "meta" samplers, i.e. samplers that use other samplers as components, might want to be agnostic about the underlying model, since the "meta" sampler doesn't interact directly with the model itself; and b) if we do so, we're claiming that DynamicPPL.Model is in some way a special and more important model type than all other subtypes of AbstractModel, which is the exact opposite of what we wanted to achieve with AbstractMCMC.jl (we wanted it to be a "sampler package for all", not just for Turing.jl).
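For illustration, the ambiguity hazard can be reproduced with plain Julia stand-ins for the AbstractMCMC types (mysample and the struct names below are made up for this example and are not the real API):

```julia
# Stand-ins for AbstractMCMC.AbstractModel / AbstractSampler (illustrative only).
abstract type AbstractModel end
abstract type AbstractSampler end

struct Model <: AbstractModel end        # plays the role of DynamicPPL.Model
struct MySampler <: AbstractSampler end  # a third-party sampler

# Turing.jl would like to own this method...
mysample(model::Model, sampler::AbstractSampler) = "turing path"

# ...but a sampler package may already own this one:
mysample(model::AbstractModel, sampler::MySampler) = "sampler-package path"

# Neither method is more specific than the other (the first wins on the model
# argument, the second on the sampler argument), so this call is ambiguous:
result = try
    mysample(Model(), MySampler())
catch e
    e isa MethodError ? "ambiguous" : rethrow()
end
```

Julia raises a MethodError for the ambiguous call, which is exactly the failure mode described above: neither package did anything wrong locally, but the combination breaks.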

externalsampler, introduced in #2008, is a step towards this, but in the end we don't want to require externalsampler to wrap every sampler passed to Turing.jl; we really only want it to wrap samplers which do not support all the additional niceties that Turing.jl's current sample provides.

Solution 1: rename or duplicate sample

The only true solution I see, which is very, very annoying, is to either

  1. Not export AbstractMCMC.sample from Turing.jl, and instead define and export a separate Turing.sample which is a fancy wrapper around AbstractMCMC.sample.
  2. Define a new entry-point for sample from Turing.jl with a different name, e.g. infer or mcmc (or even use the internal mcmcsample from AbstractMCMC.jl naming but making it public).

None of these are ideal tbh.

(1) sucks because so many packages are using StatsBase.sample (as we are in AbstractMCMC.jl) for this very reasonable interface, so diverging from this is confusing; moreover, we'll easily end up with naming collisions in the user's namespace, e.g. using Turing, AbstractMCMC would immediately cause two sample methods to be imported.

(2) is also a bit annoying as this would be a highly breaking change. It's also a bit annoying because, well, sample is a much better name 🤷

IMHO, (2) is best here though. If we define a method called mcmc or mcmcsample (ideally we'd do something with AbstractMCMC.mcmcsample) which is exported from Turing.jl, we could do away with all of InferenceAlgorithm and its implementations in favour of a single (or a few) overloads of this method.
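A minimal sketch of what option (2) might look like, using toy stand-ins rather than the real AbstractMCMC API (the names mcmc, plain_sample, and the assertion-based model check are all hypothetical):

```julia
abstract type AbstractSampler end
struct SomeSampler <: AbstractSampler end

# Stand-in for AbstractMCMC.sample / mcmcsample: returns an uninitialised
# container of the requested chain_type, pretending to have run the sampler.
plain_sample(model, sampler::AbstractSampler, N; chain_type=Vector) =
    chain_type(undef, N)

# Hypothetical Turing-level entry point: runs the model check and applies
# Turing-specific defaults, then delegates to the plain sampling machinery.
# Because it has its own name, it cannot be ambiguous with any package's
# sample methods.
function mcmc(model, sampler::AbstractSampler, N; check_model=true, kwargs...)
    check_model && @assert model !== nothing "model check failed"
    return plain_sample(model, sampler, N; kwargs...)
end

chain = mcmc((;), SomeSampler(), 10)
```

The design point is that the dispatch problem disappears entirely once the Turing-specific behaviour lives behind a differently-named entry point, at the cost of breaking every existing sample call.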

torfjelde added this to the Turing v1.0.0 milestone Dec 3, 2024
torfjelde changed the title from "Remove (most) samplers being defined explicitly in Turing.jl" to "Remove (duplicate) samplers being defined explicitly in Turing.jl" Dec 3, 2024
penelopeysm commented Dec 3, 2024

Would it be possible to:

  1. Create a new abstract type AbstractMCMC.TuringManagedSampler
  2. In the sampler packages, write struct HMC <: AbstractMCMC.TuringManagedSampler
  3. In Turing, we can then overload sample(::DynamicPPL.Model, ::AbstractMCMC.TuringManagedSampler), which calls check_model etc. followed by AbstractMCMC.mcmcsample
  4. It's then our responsibility to make sure we don't define sample(::AbstractMCMC.AbstractModel, ::HMC) anywhere as that will lead to method ambiguities

If someone then defines TheirSampler <: AbstractMCMC.AbstractSampler, they won't run into method ambiguities unless TheirSampler also subtypes AbstractMCMC.TuringManagedSampler (and we should make it abundantly clear that this shouldn't be done).

One point of awkwardness might be that if they then want the nice Turing bells and whistles, they have to declare sample(::DynamicPPL.Model, ::TheirSampler) themselves, effectively duplicating our definition. That's probably an acceptable cost as long as the bells and whistles aren't too much (e.g. a call to check_model is a reasonable thing to expect someone implementing a sampler to copy themselves).
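The proposal above can be sketched with stand-in types (TuringManagedSampler is the proposed, not-yet-existing type; mysample and the concrete structs are made up for the example):

```julia
abstract type AbstractModel end
abstract type AbstractSampler end
abstract type TuringManagedSampler <: AbstractSampler end  # step 1: proposed type

struct Model <: AbstractModel end           # stands in for DynamicPPL.Model
struct HMC <: TuringManagedSampler end      # step 2: sampler packages subtype it
struct TheirSampler <: AbstractSampler end  # an unrelated external sampler

# Step 3: the Turing-level overload with the bells and whistles
# (model checks etc. would go here before delegating to mcmcsample).
mysample(model::Model, sampler::TuringManagedSampler) = "checked + sampled"

# An external package's generic method, constrained to its own sampler type:
mysample(model::AbstractModel, sampler::TheirSampler) = "external path"

# No ambiguity: each call now has a unique most-specific method, because
# TheirSampler deliberately does not subtype TuringManagedSampler.
a = mysample(Model(), HMC())
b = mysample(Model(), TheirSampler())
```

This shows why step 4 matters: the scheme only stays ambiguity-free as long as nobody defines a method that is generic in the model but specific to a TuringManagedSampler subtype.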

(I should say that before posting this comment I had around 3 different ideas, each of which I started to write down before immediately realising that they were terrible. I haven't yet found a fatal flaw in this one, so I'm optimistic 😄 but it might just be the 4th terrible idea)

torfjelde (issue author) commented
That is not a bad idea for sure, but my immediate worries are: a) where does this Turing-managed sampler go, and b) how would people hook into this functionality in, say, a Turing.jl extension?

Issue (b) seems like an annoying one that is difficult to circumvent when we do subtyping (as is the issue with InferenceAlgorithm).
