Add some default implementations of AbstractModel
#81
Comments
This kind of thing has come up a lot, I think. My thought was that this stuff should end up in AbstractPPL.jl, but I guess not? I'm not sure what APPL is up to now. In general the idea is that the inference code should handle both the sampler and the model, which means that it's not really valuable to include here. I am sympathetic though to the litany of reproductions of super basic model code that shows up everywhere -- each inference package that uses AbstractMCMC generally builds its own model, which feels wrong to me. I think the breakdown you had (LogDensityModel, DifferentiableLogDensityModel, and TransformedModel) is pretty good, and I wouldn't mind including abstract types for this kind of thing in AbstractMCMC just to clarify the interface a little bit.
So I'm about 55% leaning towards including some supertypes for models, but IMO defaults (like a struct with a function field or something) are a little sketchier.
I support keeping these types in AbstractMCMC.
I agree with @yebai here; this isn't anything PPL-specific and thus shouldn't go into AbstractPPL.jl.
Even such simple defaults? How come?
Mostly to keep ideological purity for the fact that
I just skimmed through the discussion (I'm on vacation until beginning/mid of August, so not keeping up with everything until then), but I guess for interoperability it would be (more?) helpful to add traits such as discussed in #75 (comment). Similar to e.g. when working with the Tables interface, implementations of algorithms could just work with an arbitrary AbstractModel and call the relevant functions. Even with default model structs, multiple implementations are otherwise needed if one wants to support different AbstractModel types. Additionally, it seems less likely to run into ambiguity errors if one does not specialize on the model type.

Another comment: LogDensityModel and DifferentiableLogDensityModel seem a bit redundant - one could instead just dispatch on the type of the log density function by e.g. querying the differentiation capabilities of the function in a similar way as done in LogDensityProblems.
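For concreteness, here's a rough sketch of what such a trait could look like (the names `DifferentiabilityTrait`, `differentiability`, and `requires_gradient` are made up for illustration, not existing AbstractMCMC API):

```julia
# Samplers query a trait instead of dispatching on concrete model types.
abstract type DifferentiabilityTrait end
struct IsDifferentiable <: DifferentiabilityTrait end
struct NotDifferentiable <: DifferentiabilityTrait end

# Conservative default; model packages opt in for their own types.
differentiability(::Any) = NotDifferentiable()

# A gradient-based sampler can then check compatibility without ever
# specializing on the model type:
requires_gradient(sampler) = false

function check_compatible(sampler, model)
    if requires_gradient(sampler) && differentiability(model) === NotDifferentiable()
        error("this sampler requires a differentiable model")
    end
    return nothing
end
```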
Agreed.
Please don't, I like curmudgeonly Cameron.
Nice! Enjoy man:)
I'm down with this 👍
Ah yes, 100% agree. I was trying to remember our discussion of this before but couldn't, sooo I'll admit I partially made this issue hoping that you'd chime in and suggest the superior approach 👼 Hey, it worked!
Now that DensityInterface will eventually find its way into Turing, I'm throwing in @oschulz's suggestions from here:
Not that I endorse all of this, but DensityInterface.LogFuncDensity would be a good candidate for the default implementation you guys were talking about, wouldn't it? Transformation capabilities would then come in from the interfaces in Bijectors/ChangesOfVariables.
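For reference, the DensityInterface API in question is tiny; a minimal usage sketch:

```julia
using DensityInterface

# Wrap a plain log-density function; `logfuncdensity` returns a `LogFuncDensity`.
d = logfuncdensity(x -> -x^2 / 2)

DensityKind(d) === IsDensity()  # true: `d` *is* a (log) density
logdensityof(d, 1.0)            # -0.5
logdensityof(d)                 # recovers the wrapped log-density function
```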
LogFuncDensity e.g. doesn't provide a way to query whether the function is differentiable or not, and it doesn't bundle derivatives as e.g. LogDensityProblems does, so it seems a bit too barebones? I imagine it would be useful to support and integrate it in some way though.
Yeah, that's missing compared to LogDensityProblems. The ideal solution IMHO would be for LogDensityProblems (if that fits our purposes here) to depend on DensityInterface, which the maintainer is only ready to do when the latter becomes mature enough (tpapp/LogDensityProblems.jl#78 (comment)). I can't judge whether LogDensityProblems is exactly what is needed here, though (and it uses TransformVariables instead of Bijectors, right?). The alternative would be to replicate the same kind of interface package: DensityInterface + ChangesOfVariables + some differentiation interface. Is AbstractDifferentiation ready and fitting?
I think the traits for models would be implemented in AbstractMCMC anyways and would probably not be limited to the density function. So I assume the easiest would be to just add the required functionality in AbstractMCMC initially.
It exists but it only supports ForwardDiff (via Requires 😢). Not a single package depends on it and uses it currently: https://juliahub.com/ui/Packages/AbstractDifferentiation/Y4WMq/0.2.1
Btw, this comment is relevant: #85 (comment)
Am of the same opinion 👍
I was also looking at AbstractDifferentiation.jl because of this, and was somewhat disappointed to see that a) it uses Requires.jl, and b) it only supports ForwardDiff.jl atm. I originally thought it was going to be more like ChainRulesCore.jl where each respective package would implement the interface, and then we'd be golden. Seems like this isn't the case though?
AFAIK it's still the plan in the long term but it was decided to add glue code until it is more stable: JuliaDiff/ForwardDiff.jl#550 (comment)
How does
It doesn't:) In that case it's "fine" because Metropolis-Hastings doesn't require differentiability. I'm going to cross-post #85 (comment) because I feel like that comment should be here rather than in the other thread:
TransformVariables supports InverseFunctions and ChangesOfVariables now, btw. (tpapp/TransformVariables.jl#85). I promised @torfjelde that I'd do a PR to add initial support to Bijectors as well soon (TuringLang/Bijectors.jl#199, later on, Bijectors may then undergo some refactoring, as far as I understand). I hope this will enable us to not depend on specific transformation packages so much in the future. I hope to have VariateTransformations (exact transformation of variates between distributions, pulled out from BAT.jl) ready for registration soon as well, it will support InverseFunctions and ChangesOfVariables natively.
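For readers unfamiliar with those two packages, the interfaces they expose are tiny; a minimal sketch using the methods both packages ship for `exp`:

```julia
using InverseFunctions, ChangesOfVariables

# `exp` maps unconstrained ℝ to the positive reals.
y, ladj = with_logabsdet_jacobian(exp, 0.3)  # y == exp(0.3), ladj == 0.3
inverse(exp)(y) ≈ 0.3                        # true: `inverse(exp) === log`
```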
I think a trait-based API to query a density or function about differentiation-related information would work well with DensityInterface and LogFuncDensity.
But wouldn't ... be exactly what we now have as ...?
Shouldn't this rather be

```julia
struct Posterior{M,D}
    prior::M
    likelihood::D
end
```

I have the same thing in BAT. I would welcome a common lightweight package for Bayesian primitives like that, if you guys are interested. We recently decided to go for a breaking BAT 3.0 release next, so I can smash some porcelain and get rid of some past sins in the design. It would be nice if we had things like a common `Posterior` type in such a package.
It indeed seems like that is the case, yeah:)
I don't think we should assume that. I think a better idea would be to provide some notion of promotion between the different types, e.g.:

```julia
# Could just dispatch on `D<:DensityKind` but could
# mess up if someone decides to implement a new one.
for D in (:NoDensity, :HasDensity, :IsDensity)
    @eval begin
        Base.:+(::$D, ::$D) = $D()
    end
end

Base.:+(::IsDensity, ::HasDensity) = HasDensity()
Base.:+(::HasDensity, ::IsDensity) = HasDensity()
Base.:+(::NoDensity, ::DensityKind) = NoDensity()
Base.:+(::DensityKind, ::NoDensity) = NoDensity()
```

though I'm happy to not overload `+` for this. Then we just do

```julia
struct JointDensity{P,L}
    prior::P
    likelihood::L
end

function DensityInterface.DensityKind(joint::JointDensity)
    return DensityInterface.DensityKind(joint.prior) + DensityInterface.DensityKind(joint.likelihood)
end
```

Very interested!
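To make the intent concrete, a possible usage of the `JointDensity` sketch above (assuming the definitions from that block, plus Distributions for the prior):

```julia
using DensityInterface, Distributions

prior = Normal()                                # DensityKind(prior) === HasDensity()
loglik = logfuncdensity(x -> -abs2(x - 1) / 2)  # DensityKind(loglik) === IsDensity()

joint = JointDensity(prior, loglik)
DensityKind(joint) === HasDensity()             # true, via the `+` promotion above
```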
And for conversion from package-specific implementations of models/densities/etc., do we just implement

```julia
function Base.convert(::Type{Joint}, model::DynamicPPL.Model)
    return Joint(
        DensityInterface.logfuncdensity(Base.Fix1(DynamicPPL.logprior, model)),
        DensityInterface.logfuncdensity(Base.Fix1(DynamicPPL.loglikelihood, model))
    )
end
```

Or maybe it's better to just overload the constructor?
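For comparison, the constructor-overload alternative would look almost the same (a sketch, assuming a `Joint` wrapper like the one above):

```julia
# Overload the constructor directly instead of `Base.convert`.
function Joint(model::DynamicPPL.Model)
    return Joint(
        DensityInterface.logfuncdensity(Base.Fix1(DynamicPPL.logprior, model)),
        DensityInterface.logfuncdensity(Base.Fix1(DynamicPPL.loglikelihood, model))
    )
end
```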
A while ago I would have said it could be a useful convenience, but @cscherrer and @mschauer cured me of that notion. :-) If that's intended, e.g. for testing scenarios, one can do
What cure did they give you? I'm asking for a friend 😳 But on a serious note, I agree that it's preferable to not use a
```julia
function priorof end
function likelihoodof end

struct Posterior{L,P}
    likelihood::L
    prior::P
end

function Posterior(likelihood::L, prior::P) where {L,P}
    DensityKind(likelihood) === IsDensity() || throw(ArgumentError("DensityKind(likelihood) must be IsDensity()"))
    DensityKind(prior) === HasDensity() || throw(ArgumentError("DensityKind(prior) must be HasDensity()"))
    Posterior{L,P}(likelihood, prior)
end

likelihoodof(p::Posterior) = p.likelihood
priorof(p::Posterior) = p.prior

function DensityInterface.logdensityof(p::Posterior, x)
    logdensityof(likelihoodof(p), x) + logdensityof(priorof(p), x)
end

@inline DensityInterface.DensityKind(::Posterior) = HasDensity()
```

Question is, should it be type based (there would be an
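A hypothetical usage of the sketch above, just to show the trait checks in action (assumes Distributions for the prior):

```julia
using DensityInterface, Distributions

loglik = logfuncdensity(x -> -abs2(x - 1) / 2)  # IsDensity
prior = Normal()                                 # HasDensity

post = Posterior(loglik, prior)
logdensityof(post, 0.5)  # == logdensityof(loglik, 0.5) + logpdf(Normal(), 0.5)

# Passing a distribution as the likelihood trips the `IsDensity` check:
# Posterior(Normal(), Normal())  # throws ArgumentError
```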
They explained very patiently that it's a bad idea to mix up measures and densities; they play very different roles in a Radon-Nikodym integral/derivative. While we often informally treat a distribution and its density the same, they should not be treated as the same thing. DensityInterface builds a bridge here by allowing both kinds of objects to be queried with `logdensityof`, while `DensityKind` keeps the distinction explicit.
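For concreteness, the distinction as DensityInterface encodes it:

```julia
using DensityInterface, Distributions

DensityKind(Normal())                    # HasDensity(): a distribution *has* a density
DensityKind(logfuncdensity(x -> -x^2))   # IsDensity(): this object *is* a (log) density
logdensityof(Normal(), 0.0)              # works for both kinds
```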
It's a slippery slope, especially after we were very careful in defining the semantics of
Are there actually user use cases for using a distribution as a likelihood? I mainly use it to test samplers.
I'm fine with either I think, but if we're doing trait-based we'll need a way to dispatch on having this trait.
You mean beyond
So I'm very familiar with the difference between measures and densities, but most people won't be 😕 Whether or not this is inexact is not something a user should have to deal with IMO.
But shouldn't this just be handled downstream? AFAIK the general consensus in Julia is to not restrict arguments by types unless we're a) disambiguating, or b) ensuring that the arguments have certain functionalities. In this case we're doing the latter, but the methods we end up using, e.g.
Sure, this is a fair point 👍 But I think the issue of people trying to pass in a
Yeah. My personal motivation behind a
The likelihood, sure, but the prior? If the prior doesn't have at least a few distribution-like features (ability to draw random numbers as start values, ability to estimate (co)variance or find a transformation to Normal/unconstrained space, and so on), it's kinda hard to run inference algorithms with it.
At least in BAT.jl, I need more from the prior than just its density function (see above).
Indeed - BAT tries to transform the prior away, by default. And if it's not transformed away, it'll evaluate the prior first and then evaluate the likelihood only if the prior density is finite.

To me, prior and likelihood are very much not interchangeable. That's why I wouldn't call it joint - a joint distribution doesn't care so much what's left and right.
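The evaluation order described here is easy to express with nothing but DensityInterface (a minimal sketch of the idea, not BAT's actual code):

```julia
using DensityInterface

function posterior_logdensity(prior, likelihood, x)
    lp = logdensityof(prior, x)
    isfinite(lp) || return lp                # skip the (possibly expensive) likelihood
    return lp + logdensityof(likelihood, x)
end
```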
But this is what I mean: it should be handled downstream, i.e. if an inference algorithm requires more than just evaluation then it should complain if it's not given such :) Isn't this also in line with what you said earlier about having a very general common construction, and then the respective packages can convert it into whatever they want? To give some concrete examples, given an initial point (which could be any point on the real line), MH and HMC do not need anything but evaluation of the joint to be able to sample from it.
But for many natural use-cases, e.g. many inference algorithms, they are indeed interchangeable. We could then of course just not use the
Oh yes, definitely! Sorry, I didn't mean to imply that such an abstract package should be opinionated about what kind of operations (besides evaluation) are supported. But I still think that we should require

Edit: see different approach below.
Yes, I fully agree.
I've been thinking about this a bit more - you're right, it's probably best to focus on decomposing posterior-like objects into likelihood and prior in this kind of interface package, instead of constructing posteriors. How about this approach - we define (just a draft) something like

```julia
using Test, DensityInterface

abstract type PosteriorKind end
struct IsPosterior <: PosteriorKind end
struct NoPosterior <: PosteriorKind end

function getprior end
function getlikelihood end

function test_posterior(posterior, x = rand(getprior(posterior)))
    @test PosteriorKind(posterior) === IsPosterior()
    likelihood = getlikelihood(posterior)
    prior = getprior(posterior)
    @test DensityKind(likelihood) === IsDensity()
    @test DensityKind(prior) === HasDensity()
    test_density_interface(likelihood, x)
    test_density_interface(prior, x)
    test_density_interface(posterior, x)
    @test logdensityof(posterior, x) ≈ logdensityof(likelihood, x) + logdensityof(prior, x)
end
```

So no default posterior implementation in BayesianConcepts.jl. This should be enough for inference packages to use a posterior - users will use more specific packages to define the posterior. These can, if they want to, support accepting anything as a likelihood (even a distribution) or prior (e.g. just some density function). That takes care of the user convenience you mentioned. It'll be the job of the posterior constructors (in Turing packages, in BATBase, etc.) to convert/wrap these so that the trait requirements above hold.

Would that work for you, @torfjelde?
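To illustrate what this buys inference packages, a toy consumer of just `getprior`/`getlikelihood` plus DensityInterface: an independence Metropolis-Hastings sampler that proposes from the prior (purely a sketch, not a real sampler):

```julia
using DensityInterface, Random

function crude_prior_mh(rng::AbstractRNG, posterior, n)
    prior = getprior(posterior)
    loglik = logdensityof(getlikelihood(posterior))  # log-likelihood as a function
    x = rand(rng, prior)                             # prior provides starting values
    samples = [x]
    for _ in 2:n
        x′ = rand(rng, prior)                        # propose from the prior
        # prior terms cancel in the acceptance ratio for this proposal
        if log(rand(rng)) < loglik(x′) - loglik(x)
            x = x′
        end
        push!(samples, x)
    end
    return samples
end
```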
Probably fixed by #110 |
In AdvancedHMC.jl we now have a `DifferentiableDensityModel`, which we needed in order to implement the AbstractMCMC.jl interface. IMO it would be super-nice if AbstractMCMC.jl provided a simple barebones implementation of such models, e.g. `LogDensityModel`, `DifferentiableLogDensityModel`, maybe even a `TransformedModel` which has a field corresponding to the transformation used; transformation packages such as Bijectors.jl and TransformVariables.jl could then hook into it by overloading an `AbstractMCMC.forward(model, transformation)` (probably find a better name) which returns the constrained input and the logabsdetjac factor, i.e. something similar to LogDensityProblems.jl but more lightweight (i.e. no direct dependency on TransformVariables.jl, etc.).

Alternatively we could make an `AbstractMCMCModels.jl` package which contains implementations of `AbstractModel`, but IMO the above should be so lightweight that it shouldn't be much of a problem to include it here.

EDIT: The main motivation is to avoid a large number of re-implementations of at least `LogDensityModel` and `DifferentiableLogDensityModel` spread around in different packages.
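To make the proposal concrete, one possible shape of these defaults (a sketch of the idea in this issue, not existing AbstractMCMC API):

```julia
using AbstractMCMC

# `ℓ` is a log-density function; `∇ℓ` returns the log density and its gradient.
struct LogDensityModel{F} <: AbstractMCMC.AbstractModel
    ℓ::F
end

struct DifferentiableLogDensityModel{F,G} <: AbstractMCMC.AbstractModel
    ℓ::F
    ∇ℓ::G
end

# Example: a standard-normal log density and its gradient.
model = DifferentiableLogDensityModel(
    x -> -sum(abs2, x) / 2,
    x -> (-sum(abs2, x) / 2, -x),
)
model.ℓ([1.0, 2.0])  # -2.5
```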