Add a temporal validation period to synthetic control and interrupted time series experiments #367
base: main
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```diff
@@           Coverage Diff           @@
##             main     #367     +/-  ##
=======================================
+ Coverage   85.60%   85.81%   +0.20%
=======================================
  Files          22       22
  Lines        1716     1748     +32
=======================================
+ Hits         1469     1500     +31
- Misses        247      248      +1
```
Based on one of @cetagostini's PRs (#368), I'm wondering if we should add a small feature to calculate a ROPE based on the validation period. Something a bit like this: Any thoughts/comments welcome. I'm not convinced this is a good idea yet, especially because once we add in actual time series models the credible interval will widen as we forecast further into the future.
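To make the idea concrete, here's a minimal sketch of a validation-derived ROPE. Everything here is hypothetical (function name, array shapes, the choice of quantile); it just assumes we have posterior predictive draws for the validation window plus posterior draws of the average causal impact:

```python
import numpy as np

def rope_from_validation(obs_valid, pred_valid_samples, post_impact_samples, width=0.95):
    """Hypothetical sketch: derive a ROPE from validation-period errors.

    obs_valid: (T_valid,) observed outcome during the validation period
    pred_valid_samples: (n_draws, T_valid) posterior predictions for that period
    post_impact_samples: (n_draws,) posterior draws of the average causal impact
    """
    # Validation-period prediction errors give a sense of how large an
    # effect is "practically zero" for this model and data
    errors = obs_valid[None, :] - pred_valid_samples
    half_width = np.quantile(np.abs(errors), width)
    rope = (-half_width, half_width)
    # Proportion of posterior impact draws that land inside the ROPE
    p_in_rope = np.mean(
        (post_impact_samples > rope[0]) & (post_impact_samples < rope[1])
    )
    return rope, p_in_rope
```

The design question this leaves open is exactly the one raised above: with real time-series models the forecast errors grow with horizon, so a ROPE calibrated on the validation window may be too narrow for the post-treatment period.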
```python
if self.validation_time is None:
    # We just have pre and post data, no validation data. So we can score
    # the pre-intervention data.
    self.score = self.model.score(X=self.pre_X, y=self.pre_y)
```
Do you think we could replace the validation score (currently R²) with a Bayesian tail probability? The interpretation would then be about how much the real mean during the validation period diverges from the posterior mean.
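For reference, here's how I'd read that suggestion (a hedged sketch; the function name and array shapes are my assumptions, not anything in the codebase): treat the observed validation-period mean as a test statistic and ask how far into the tails of the posterior distribution of the predicted mean it falls.

```python
import numpy as np

def bayesian_tail_prob(obs_valid, pred_valid_samples):
    """Hypothetical sketch of a Bayesian tail probability for validation.

    Values near 1 mean the observed mean sits near the centre of the
    posterior of the predicted mean (good fit); values near 0 mean it is
    far out in one tail (poor fit).
    """
    obs_mean = np.mean(obs_valid)
    # Posterior distribution of the mean prediction over the validation window
    pred_means = pred_valid_samples.mean(axis=1)  # shape (n_draws,)
    p_upper = np.mean(pred_means >= obs_mean)
    # Two-sided tail probability
    return 2 * min(p_upper, 1 - p_upper)
```

Unlike R², this is bounded in [0, 1] with a direct probabilistic reading, but it only compares means, so it says nothing about whether the posterior spread is well calibrated.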
Sure. Just looking into it so that I get it right: I just need the high-level algorithm, because I haven't heard that much about it. A Google search for "bayesian tail probability" shows very few hits. Is this a widely used approach? It doesn't matter if not, as long as it does what we want :)
`test_its` was testing synthetic control rather than interrupted time series.

TODO:
- … `intervention_time` kwarg and add in additional logic to the existing classes?
- Raise a `ValueError` when `validation_time` >= `treatment_time`
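For the `ValueError` item, a minimal guard could look like this (a sketch only; the helper name and where it gets called from are my assumptions):

```python
def check_validation_time(validation_time, treatment_time):
    """Hypothetical helper: the validation period must end strictly
    before the treatment starts."""
    if validation_time is not None and validation_time >= treatment_time:
        raise ValueError(
            f"validation_time ({validation_time}) must be strictly earlier "
            f"than treatment_time ({treatment_time})"
        )
```

Calling this early in `__init__` would surface the misconfiguration before any model fitting happens.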
📚 Documentation preview 📚: https://causalpy--367.org.readthedocs.build/en/367/