dynamic_inference update #60
```diff
@@ -33,7 +33,7 @@ function dynamichmc_inference(prob::DiffEqBase.DEProblem, alg, t, data, priors,
                               kwargs...)
     likelihood = sol -> sum( sum(logpdf.(Normal(0.0, σ), sol(t) .- data[:, i]))
                             for (i, t) in enumerate(t) )
     println(typeof(transformations))
     dynamichmc_inference(prob, alg, likelihood, priors, transformations;
                          ϵ=ϵ, initial=initial, num_samples=num_samples,
                          kwargs...)
```
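For context on the hunk above: the closure builds an independent-Gaussian log-likelihood, summing `logpdf(Normal(0, σ), ·)` over every state at every observation time. A self-contained base-Julia sketch of the same computation, with a hand-written normal logpdf and toy stand-ins for `sol`, `data`, and the observation times (renamed `ts` here to avoid the diff's shadowing of `t`), so none of the package machinery is needed:

```julia
# Hand-rolled Normal(0, σ) logpdf, standing in for Distributions.logpdf.
normlogpdf(σ, x) = -0.5 * (x / σ)^2 - log(σ) - 0.5 * log(2π)

σ    = 0.1
ts   = [1.0, 2.0, 3.0]               # toy observation times
data = [1.0 2.0 3.0; 2.0 4.0 6.0]    # 2 states × 3 times (toy values)
sol(t) = [t, 2t]                     # stand-in for the ODE solution object

# Same shape as the likelihood in the diff: for each time index i, compare the
# solution at time t to the i-th data column and sum the pointwise log-densities.
likelihood = sol -> sum(sum(normlogpdf.(σ, sol(t) .- data[:, i]))
                        for (i, t) in enumerate(ts))
```

With these toy values the fit is exact, so every residual is zero and `likelihood(sol)` is six copies of the density's maximum value, `-log(σ) - log(2π)/2`.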
```diff
@@ -43,39 +43,35 @@ function dynamichmc_inference(prob::DiffEqBase.DEProblem, alg, likelihood, prior
                               ϵ=0.001, initial=Float64[], num_samples=1000,
                               kwargs...)
     P = DynamicHMCPosterior(alg, prob, likelihood, priors, kwargs)
+    println(typeof(transformations))
+    prob_transform(P::DynamicHMCPosterior) = as((transformations))
```
**Review comment:** No need for […]

**Reply:** I felt this would give a cleaner interface; this can be changed later on too, though, so I'll keep it in mind for sure.

**Review comment:** What I am saying is that defining a function just for the sole purpose of returning the transformation is unnecessary; you could write

```julia
function dynamichmc_inference(prob::DiffEqBase.DEProblem, alg, likelihood, priors, transformations;
                              ϵ=0.001, initial=Float64[], num_samples=1000,
                              kwargs...)
    P = DynamicHMCPosterior(alg, prob, likelihood, priors, kwargs)
    PT = TransformedLogDensity(transformations, P)
    PTG = FluxGradientLogDensity(PT);
    chain, NUTS_tuned = NUTS_init_tune_mcmc(PTG, num_samples, ϵ=ϵ)
    posterior = transform.(Ref(PTG.transformation), get_position.(chain));
    return posterior, chain, NUTS_tuned
end
```

instead.

**Reply:** Oh, I see.
```diff
+    PT = TransformedLogDensity(prob_transform(P), P)
+    PTG = FluxGradientLogDensity(PT);
-    transformations_tuple = Tuple(transformations)
-    parameter_transformation = TransformationTuple(transformations_tuple) # assuming a > 0
-    PT = TransformLogLikelihood(P, parameter_transformation)
-    PTG = ForwardGradientWrapper(PT, zeros(length(priors)));
-    lower_bound = Float64[]
-    upper_bound = Float64[]
-    for i in priors
-        push!(lower_bound, minimum(i))
-        push!(upper_bound, maximum(i))
-    end
+    # lower_bound = Float64[]
+    # upper_bound = Float64[]
-    # If no initial position is given use local minimum near expectation of priors.
-    if length(initial) == 0
-        for i in priors
-            push!(initial, mean(i))
-        end
-        initial_opt = Optim.minimizer(optimize(a -> -P(a),lower_bound,upper_bound,initial,Fminbox(GradientDescent())))
-    end
+    # for i in priors
+    #     push!(lower_bound, minimum(i))
+    #     push!(upper_bound, maximum(i))
+    # end
-    initial_inverse_transformed = Float64[]
-    for i in 1:length(initial_opt)
-        para = TransformationTuple(transformations[i])
-        push!(initial_inverse_transformed,inverse(para, (initial_opt[i], ))[1])
-    end
-    #println(initial_inverse_transformed)
-    sample, NUTS_tuned = NUTS_init_tune_mcmc(PTG,
-                                             initial_inverse_transformed,
-                                             num_samples, ϵ=ϵ)
+    # # If no initial position is given use local minimum near expectation of priors.
+    # if length(initial) == 0
+    #     for i in priors
+    #         push!(initial, mean(i))
+    #     end
+    #     initial_opt = Optim.minimizer(optimize(a -> -P(a),lower_bound,upper_bound,initial,Fminbox(GradientDescent())))
+    # end
-    posterior = ungrouping_map(Vector, get_transformation(PT) ∘ get_position, sample)
+    # initial_inverse_transformed = Float64[]
+    # for i in 1:length(initial_opt)
+    #     para = TransformationTuple(transformations[i])
+    #     push!(initial_inverse_transformed,inverse(para, (initial_opt[i], ))[1])
+    # end
+    # #println(initial_inverse_transformed)
+    chain, NUTS_tuned = NUTS_init_tune_mcmc(PTG,num_samples, ϵ=ϵ)
+    posterior = transform.(Ref(PTG.transformation), get_position.(chain));
-    return posterior, sample, NUTS_tuned
+    return posterior, chain, NUTS_tuned
 end
```
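The key change in this hunk is the final back-transformation: NUTS explores unconstrained space, and `transform.(Ref(PTG.transformation), get_position.(chain))` maps each chain position back to the constrained parameters. A base-Julia sketch of that broadcast pattern, with a hand-rolled positive-reals transform standing in for `asℝ₊` and illustrative names throughout (this is not the actual TransformVariables or DynamicHMC API):

```julia
# Stand-in for a TransformVariables-style transformation: the positive-reals
# transform maps an unconstrained real to a positive real via exp.
struct PositiveTransform end
apply(::PositiveTransform, x) = exp.(x)

# Stand-in for DynamicHMC's get_position: pull the unconstrained vector out of
# a chain state (here the "state" already is the vector).
get_pos(state) = state

chain = [[0.0], [log(2.0)], [log(0.5)]]   # mock unconstrained chain positions

# Ref(...) keeps the transform scalar under broadcasting, as in the diff.
posterior = apply.(Ref(PositiveTransform()), get_pos.(chain))
```

Each element of `posterior` is the positive-valued parameter vector for one sample; for instance the second position, `[log(2.0)]`, maps back to approximately `[2.0]`.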
```diff
@@ -1,8 +1,8 @@
 using DiffEqBayes, OrdinaryDiffEq, ParameterizedFunctions, RecursiveArrayTools
-using DynamicHMC, DiffWrappers, ContinuousTransformations
+using DynamicHMC, TransformVariables
 using Parameters, Distributions, Optim

-f1 = @ode_def_nohes LotkaVolterraTest1 begin
+f1 = @ode_def LotkaVolterraTest1 begin
     dx = a*x - x*y
     dy = -3*y + x*y
 end a
```
```diff
@@ -16,8 +16,8 @@ t = collect(range(1,stop=10,length=10)) # observation times
 sol = solve(prob1,Tsit5())
 randomized = VectorOfArray([(sol(t[i]) + σ * randn(2)) for i in 1:length(t)])
 data = convert(Array,randomized)

-bayesian_result = dynamichmc_inference(prob1, Tsit5(), t, data, [Normal(1.5, 1)], [bridge(ℝ, ℝ⁺, )])
+transform = (a = asℝ₊)
```
**Review comment:** I think this needs a […]
```diff
+bayesian_result = dynamichmc_inference(prob1, Tsit5(), t, data, [Normal(1.5, 1)], transform)
 @test mean(bayesian_result[1][1]) ≈ 1.5 atol=1e-1

 # With hand-code likelihood function
```
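The data-generation lines in the hunk above evaluate the solution at each observation time, add N(0, σ²) noise, and flatten the vector-of-vectors into a matrix. A dependency-free sketch of the same pattern, using `reduce(hcat, ...)` in place of `convert(Array, VectorOfArray(...))` and a toy function in place of the ODE solution:

```julia
ts = collect(range(1, stop=10, length=10))   # observation times, as in the test
σ  = 0.01
sol(t) = [sin(t), cos(t)]                    # toy stand-in for solve(prob1, Tsit5())

# One noisy 2-vector per observation time, then stack the columns into a matrix.
randomized = [sol(ts[i]) .+ σ .* randn(2) for i in 1:length(ts)]
data = reduce(hcat, randomized)              # 2 × 10 matrix of noisy observations
```

The resulting `data` has one column per observation time, which is the layout the `data[:, i]` indexing in the likelihood expects.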
```diff
@@ -33,11 +33,11 @@ likelihood = function (sol)
     end
     return l
 end
-bayesian_result = dynamichmc_inference(prob1, Tsit5(), likelihood, [Truncated(Normal(1.5, 1), 0, 2)], [bridge(ℝ, ℝ⁺, )])
+bayesian_result = dynamichmc_inference(prob1, Tsit5(), likelihood, [Truncated(Normal(1.5, 1), 0, 2)])
 @test mean(bayesian_result[1][1]) ≈ 1.5 atol=1e-1

-f1 = @ode_def_nohes LotkaVolterraTest4 begin
+f1 = @ode_def LotkaVolterraTest4 begin
     dx = a*x - b*x*y
     dy = -c*y + d*x*y
 end a b c d
```
**Review comment:** Not sure why you are printing it, but that's a minor thing. Regarding the API: perhaps I am missing something, but I would make the transformations part of the problem, i.e. in `DynamicHMCPosterior`.

**Reply:** Could you clarify how that would work? I also feel that adding the transformation the way I am doing it is not really going to work (I actually copied this right off your linear regression example, according to my understanding).

**Review comment:** You could also make the transformation a slot in the composite type. But with this interface, it may not be needed.

**Reply:** Oh I see, that does sound good, but how it would be applied is not clear to me; I can imagine it with the `ContinuousTransformations` interface. I'll push what I think would work here, but I doubt it will be correct 😅
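The "slot in the composite type" idea from this exchange can be sketched in a few lines: store the transformation inside the posterior object and apply it when the object is called as a log-density, so callers never pass it separately. Everything below is illustrative (hypothetical type and field names, a toy density), not the actual DiffEqBayes or DynamicHMC API:

```julia
# A posterior that carries its own parameter transformation as a field.
struct PosteriorWithTransform{L,T}
    loglikelihood::L     # log-density on the constrained parameter space
    transformation::T    # maps unconstrained inputs to constrained parameters
end

# Calling the object first applies the stored transformation, then evaluates
# the log-likelihood, so the sampler only ever sees unconstrained space.
(P::PosteriorWithTransform)(x) = P.loglikelihood(P.transformation(x))

# Toy density over positive reals, with exp as the unconstraining transform.
P = PosteriorWithTransform(v -> -sum(abs2, v), v -> exp.(v))
P([0.0])   # exp(0.0) == 1.0, then -sum(abs2, [1.0]) == -1.0
```

This mirrors the reviewer's point: with the transformation stored on the object, the separate `prob_transform` helper (and the extra `transformations` argument) becomes unnecessary.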