changed ar1 logp to use ar1 precision instead of innovation precision #3899
Conversation
Hi,
Thanks for reporting and solving the issue!
The changes look good to me. However, can you fix the tests on AR1, please? This change seems to break them, as indicated by Travis CI.
Also, this may be a trivial question, but can there be cases where the data was initiated at zero, not in the infinite past? Would the changes implemented here still be appropriate, or should we also rely on the previous implementation?
Thanks for checking it @AlexAndorra. I'm not experienced with GitHub and/or Travis, but I took a look, and it appears to me the unit test is asking whether the logp function computed with pymc3/theano matches the same calculation done in scipy -- is that right? In that case, the reason it was passing before is that the unit test also used the incorrect specification of the likelihood for the first observation. Now they disagree -- is it OK to change the test in order to make it pass? i.e. from:
to:
this would also require changing:
to:
As for time series starting at zero, as I understand it this would violate the stationarity assumption (that the mean and variance are the same across time) of an AR1 process. It's a little bit confusing because, in the tutorial, the infinite past is assumed, but the time series also starts at zero, which is in conflict with that. Thanks!
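For illustration, here is a sketch of the kind of change to the scipy reference density being discussed (the exact test snippets aren't shown above, so the function name and signature are assumptions): the first observation is scored under the stationary AR(1) distribution, whose precision is tau_e * (1 - k**2), while the later observations keep the innovation precision tau_e.

```python
import numpy as np
import scipy.stats as sp


def AR1_logpdf(value, k, tau_e):
    # Stationary precision of the AR(1) process (assumes |k| < 1)
    tau = tau_e * (1 - k ** 2)
    return (
        # First observation: drawn from the stationary distribution of the whole process
        sp.norm(loc=0, scale=1 / np.sqrt(tau)).logpdf(value[0])
        # Later observations: conditional on the previous value, with innovation precision tau_e
        + sp.norm(loc=k * value[:-1], scale=1 / np.sqrt(tau_e)).logpdf(value[1:]).sum()
    )
```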
The changes you laid out for the tests look sound to me (the goal is indeed to change the test so that it reflects the new AR1 parametrization and, hence, so that it passes) 👌 Good point about the example NB: I think it should be corrected to start the time series in the infinite past too and use the new AR1 -- do you feel like doing this? It would make the PR complete 😃 Also, don't forget to add a line about this PR in the release notes, under maintenance.
OK, thanks for the advice, I'll do those things this week :)
Thank you! And tell me if you have any questions
@ljmartin, did you find some time to look into that? Do you need some help?
Thanks for checking in @AlexAndorra, had a busy week. Will it update the PR if I just commit to my PR branch of the forked pymc3? I just did that to fix the tests -- I'll check back in a few minutes to see if they pass. And I'll also fix up the example NB!
Thanks a lot @ljmartin! Yeah, it'll update the PR and relaunch the tests automatically if you push to your branch.
Co-authored-by: Thomas Wiecki <[email protected]>
Codecov Report
@@ Coverage Diff @@
## master #3899 +/- ##
=========================================
Coverage ? 86.39%
=========================================
Files ? 86
Lines ? 13723
Branches ? 0
=========================================
Hits ? 11856
Misses ? 1867
Partials ? 0
- All good on the tests part now, thanks 👍
- For the NB, it's a little hidden in the source code, so here's the link. You'll need to create a new Jupyter kernel with your PyMC3 dev install in the venv (tell me if you have questions on how to do that).
Also, at the very end of the NB, can you add a last cell with these commands (after doing pip install watermark):
%load_ext watermark
%watermark -n -u -v -iv -w
And finally, Black-format the NB with pip install black_nbconvert and then black_nbconvert /path/to/a/notebook.ipynb on the command line -- this will reformat the NB in place automatically 😉
Thanks a lot in advance!
Okey doke, I pip-installed the PR branch into a fresh conda environment in order to run and edit the notebook. While editing the notebook, I ran into an error during sampling.
Can anyone try running this using the latest development version, and see if the sampling completes? It's only short - if it fails then it does so within a few seconds. To reiterate, it runs fine on the stable version of pymc3 so I don't think the model is misspecified.
Yeah, it looks like another example of this issue. The ways forward I usually use:
Tell me if this helps 😉
Yes, looks like the same error. Unfortunately, I couldn't get that little script to work using the tips you suggested. The only thing that works is moving to Metropolis MCMC, i.e.
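(Roughly along these lines -- the original snippet isn't shown above, so the ar1 model name and the number of draws below are just placeholders:)

```python
import pymc3 as pm

with ar1:  # the AR1 model defined in the notebook (assumed here)
    step = pm.Metropolis()
    trace = pm.sample(10000, step=step, random_seed=42)
```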
Using Metropolis, the results are good, as expected. What to do? Shall I wait for the kinks in NUTS to be ironed out? It doesn't really make sense to suggest using Metropolis in the example notebooks.
I agree, it doesn't seem like a good idea to suggest Metropolis. I'll check this out locally and keep you posted 😉
Ok @ljmartin, I managed to sample from your model with NUTS by changing the init method and using more regularizing priors. The following model should work and allow you to update the NB:

with pm.Model() as ar1:
    theta_ = pm.Normal('theta', 0., 0.5)  # we assume the process is stationary, so -1 < theta_ < 1
    tau_ = pm.Exponential('tau', 0.5)  # precision of the innovation term
    center = pm.Normal('center', mu=y.mean(), sigma=2.5)  # prior for the process mean, initially centred on the sample mean
    likelihood = pm.AR1('likelihood', k=theta_, tau_e=tau_, observed=y - center)
    trace = pm.sample(init="adapt_diag", random_seed=42)
    idata = az.from_pymc3(trace)

az.plot_trace(idata);

Let me know how it goes 😉
Thanks for the code! I merged a new version of the notebook with the watermarking and black_nbconvert formatting, as well as adding a line to the RELEASE-NOTES.md. Something went weird because there was a conflict. I'm not sure how to resolve that -- I can do it if you let me know how, or someone else with write access can.
Thanks @ljmartin, reviewing ASAP 😉
Thanks a lot @ljmartin, this is almost ready to merge 🎉
I just left a last batch of comments (promise 😉 ). Tell me if you have any questions!
Hey @AlexAndorra, sorry for the wait, it's been a busy time! I'll work on this tonight. If I 'git pull' from my PR branch, will it sync that branch with the dev version of pymc3 and avoid conflicts? Would that overwrite the changes made so far?
Hi @ljmartin, and thanks for committing to this 🙏 If you do want to sync with the base repository's master branch, then, on your dev branch, after committing your changes locally, you can do:

$ git fetch upstream
$ git rebase upstream/master

Then push the changes to your remote branch as usual.
Hi @ljmartin! Did you manage in the end, or did you run into any trouble?
Hi @ljmartin! Do you think you'll have time to make the last changes in the coming days? Otherwise, do you mind if I take this over?
@AlexAndorra If you have the time and drive, you can just push this over the finish line.
Black doing its thing always makes it hard to review... But the tests are green, so...
I didn't modify anything in the code files (except Blackify them) and had already reviewed and approved them, so they should be ok 😉 Most of the work was on the NB.
Addresses #3892

The previous `logp` method for the AR1 model used the precision of the innovation term as the precision for the first observation. Assuming the AR(1) process was initiated in the infinite past, the first observation should be modelled as coming from anywhere in the dispersion of the whole AR(1) process.

I made a previous PR (#3897) which I have now closed. It had duplicated code, since I now realise that `tau_e` (the innovation precision) and `tau` (the AR(1) precision) had already been defined, and the fix just needed a switch from `tau_e` to `tau` for the likelihood of the first observation.

cheers
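As a quick sanity check of the parametrization described above (not part of the PR itself; the parameter values below are arbitrary), the stationary variance of an AR(1) process x_t = k * x_{t-1} + e_t with innovation precision tau_e is 1 / (tau_e * (1 - k**2)), so the first observation should indeed be scored with precision tau = tau_e * (1 - k**2) rather than tau_e:

```python
import numpy as np

rng = np.random.default_rng(0)
k, tau_e = 0.8, 2.0
sigma_e = 1 / np.sqrt(tau_e)  # innovation standard deviation

# Simulate a long AR(1) series and compare its variance to the stationary formula
x = np.zeros(200000)
for t in range(1, len(x)):
    x[t] = k * x[t - 1] + rng.normal(0, sigma_e)

print(x[1000:].var())              # empirical stationary variance (burn-in dropped)
print(1 / (tau_e * (1 - k ** 2)))  # theoretical stationary variance, ~1.39
```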