missing features and todos for score estimation #1226
Is it the case that it does not work accurately, or that it doesn't run at all? Trying to find the MAP with gradient ascent requires differentiating through the […]. Regardless, even if we can backprop through […]
Update: after talking to @manuelgloeckler, the easiest way to calculate the MAP here would be to use the score directly at a time t = epsilon, instead of calculating and differentiating through the exact log_prob, which, as stated above, would be really inefficient. I will implement this soon.
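The idea above can be sketched with a toy example. This is a minimal, hypothetical illustration, not the sbi implementation: an analytic Gaussian score stands in for the trained score network, and all names (`score_fn`, `map_via_score`) are made up for this sketch.

```python
import torch

# Toy stand-in for a trained score network: for a Gaussian posterior
# N(MU, SIGMA^2), the score of the (slightly noised) posterior at small t
# is approximately (MU - theta) / SIGMA^2.
MU, SIGMA = torch.tensor(2.0), torch.tensor(0.5)

def score_fn(theta, t):
    # A real implementation would call the trained score estimator at
    # diffusion time t; here we return the analytic Gaussian score.
    return (MU - theta) / SIGMA**2

def map_via_score(score_fn, theta0, t_eps=1e-3, lr=0.05, n_steps=500):
    """Gradient ascent on the posterior using the score directly at
    t = t_eps, avoiding differentiation through an exact log_prob."""
    theta = theta0.clone()
    for _ in range(n_steps):
        theta = theta + lr * score_fn(theta, t_eps)  # ascent step
    return theta

theta_map = map_via_score(score_fn, torch.tensor(0.0))
```

Since the score already is the gradient of the (noised) log-density, no autograd pass through the network is needed; each step is a single forward evaluation.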
I think that's actually what @michaeldeistler had implemented already. It's in the backup branch, here: Lines 946 to 954 in 2b233ce
and then in the case of the score-based potential it would just use the gradient directly from here: sbi/sbi/inference/potentials/score_based_potential.py Lines 132 to 160 in 2b233ce
Or are you referring to yet a different approach?
@manuelgloeckler ping re: IID sampling
There are a couple of unsolved problems and planned enhancements for NPSE:
MAP
The MAP is found by using the score directly for gradient ascent on the posterior. This is currently not working accurately.
IID sampling
sbi/sbi/samplers/score/score.py Lines 93 to 109 in 9c6734f
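One common way to handle IID observations with score-based posteriors is to compose per-observation posterior scores, which is exact at t = 0 and an approximation for t > 0. The sketch below is a hypothetical illustration with an analytic toy model, not the sbi API; `iid_posterior_score` and the lambdas are invented names.

```python
import torch

def iid_posterior_score(theta, xs, single_post_score, prior_score):
    """Compose per-observation posterior scores into the score of the
    posterior given all iid observations:
        grad log p(theta | x_1..n)
            = sum_i grad log p(theta | x_i) - (n - 1) * grad log p(theta),
    which follows from p(theta | x_1..n) ∝ p(theta) * prod_i p(x_i | theta)."""
    n = len(xs)
    return sum(single_post_score(theta, x) for x in xs) - (n - 1) * prior_score(theta)

# Toy Gaussian model: prior N(0, 1), likelihood N(theta, 1).
prior_score = lambda th: -th
single_post_score = lambda th, x: x - 2.0 * th  # score of N(x/2, 1/2)

xs = torch.tensor([1.0, 2.0, 3.0])
theta = torch.tensor(0.5)
s = iid_posterior_score(theta, xs, single_post_score, prior_score)
# The analytic posterior is N(sum(xs) / (n + 1), 1 / (n + 1));
# its score, sum(xs) - (n + 1) * theta, matches the composed score.
```

For t > 0 the noised marginals no longer factorize exactly, which is part of what makes IID sampling for NPSE non-trivial.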
Log prob and sampling via CNF
Once trained, we can use the score_estimator to define a probabilistic ODE, e.g., a CNF via zuko, and directly call log_prob and sample on it. At the moment, this already happens when constructing the ScorePosterior with sample_with="ode". However, it is a bit all over the place: e.g., log_prob comes from the potential via zuko anyway, and for sampling we construct a new flow with each call. A possible solution to make things clearer is creating an ODEPosterior that could be used by flow matching as well.
Allow transforms for potential
See score_estimator_based_potential, which currently asserts enable_transform=False.
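For context on the ODE-based sampling discussed above: the deterministic counterpart of a score-based diffusion is the probability-flow ODE, which the sample_with="ode" path relies on. Below is a minimal, self-contained sketch under stated assumptions — a constant-beta VP-SDE and an analytic toy score for data distributed as N(MU, 1); the function names are illustrative, not sbi's.

```python
import torch

BETA = 10.0             # assumed constant noise schedule beta(t) = BETA
MU = torch.tensor(2.0)  # toy data distribution N(MU, 1)

def alpha(t):
    # Signal level exp(-0.5 * integral_0^t beta ds) for constant beta.
    return torch.exp(torch.tensor(-0.5 * BETA * t))

def score(theta, t):
    # Analytic score of the VP-SDE marginal when data ~ N(MU, 1):
    # the marginal is N(alpha(t) * MU, 1), so the score is alpha(t)*MU - theta.
    return alpha(t) * MU - theta

def sample_prob_flow_ode(theta1, n_steps=1000):
    """Euler-integrate the probability-flow ODE
        dtheta/dt = -0.5 * beta(t) * (theta + score(theta, t))
    backwards from t = 1 (noise) to t = 0 (data)."""
    theta, dt = theta1.clone(), 1.0 / n_steps
    for k in range(n_steps):
        t = 1.0 - k * dt
        drift = -0.5 * BETA * (theta + score(theta, t))
        theta = theta - dt * drift  # step backwards in time
    return theta

sample = sample_prob_flow_ode(torch.tensor(0.0))
```

The same ODE supports log_prob via the instantaneous change-of-variables formula, which is why a single ODEPosterior wrapping this object could serve both sampling and density evaluation.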
Better converged checks
Unlike the ._converged method in base.py, this method does not reset to the best model. We noticed that this improves performance: deleting this method makes the C2ST tests fail. This is because the loss is very stochastic, so resetting might restore an underfitted model. Ideally, we would write a custom ._converged() method which checks whether the loss is still going down for all t.
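One possible shape for such a check is sketched below: track the validation loss separately in bins of diffusion time t, declare convergence only after no bin has improved for `patience` epochs, and remember the best model state. This is a hypothetical helper under assumed semantics, not the sbi API; the class name and all parameters are invented.

```python
import copy
import torch

class PerTimeConvergence:
    """Convergence check that monitors validation loss per bin of
    diffusion time t, and stores the best-so-far model state."""

    def __init__(self, n_bins=5, patience=20):
        self.best = [float("inf")] * n_bins
        self.n_bins, self.patience = n_bins, patience
        self.epochs_since_improved = 0
        self.best_state = None

    def update(self, model, t_vals, losses):
        """Return True once no t-bin has improved for `patience` epochs."""
        improved = False
        # Assign each validation sample (with its diffusion time) to a bin.
        bins = (t_vals * self.n_bins).long().clamp(max=self.n_bins - 1)
        for b in range(self.n_bins):
            mask = bins == b
            if mask.any() and losses[mask].mean().item() < self.best[b]:
                self.best[b] = losses[mask].mean().item()
                improved = True
        if improved:
            self.epochs_since_improved = 0
            self.best_state = copy.deepcopy(model.state_dict())
        else:
            self.epochs_since_improved += 1
        return self.epochs_since_improved >= self.patience
```

Keeping `best_state` separate from the stopping decision would let the caller decide whether to reset to it, which is exactly the behaviour difference described above.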