Support sensitivities for Experiments #4415
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff            @@
##           develop    #4415   +/-   ##
=========================================
  Coverage    99.41%   99.42%
=========================================
  Files          292      292
  Lines        22223    22337     +114
=========================================
+ Hits         22093    22208     +115
+ Misses         130      129       -1

☔ View full report in Codecov by Sentry.
Looks good to me! Asking @MarcBerliner to review the PR too, as he knows a lot more about this than me.
Thanks Martin! This is great, just a few minor comments on the code.
There are some experimental setups where a linearized sensitivity analysis might produce inaccurate results. Sequential experiment steps with a fixed end time will always work correctly. However, sequential steps with event-based termination conditions (like a maximum voltage) may not give the correct answer, since the end time of each step is indeterminate and may change with respect to the parameters. The only situation where an experiment step with an event-based termination is valid is if it's the final step in the experiment. It would be nice to add a check for this in the code.
Yeah, it's a good point. For event-based termination conditions there would be an additional term in the sensitivity equations, which we are assuming is close to zero. In our particular case, where the model changes over the event, I'm not sure this term can even be defined, although it would be interesting to look into this further. I'd be reluctant to make this an error; I'd rather allow users to still use this for event-based terminations. As long as the additional term is small, the sensitivities can still be used within, for example, an optimisation algorithm that is robust to errors in the gradient (which many are). I'll put something in the docstrings to warn users about this, and I'll add a run-time check that emits a warning if the experiment contains an event-based termination. How does that sound?
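For readers following along: a rough sketch, in my own notation rather than anything from the PR, of where that neglected term comes from. Suppose a step ends at a time $t_e(p)$ defined implicitly by an event $g\bigl(y(t_e; p), p\bigr) = 0$ rather than at a fixed time, and let $s(t) = \partial y/\partial p$ denote the state sensitivities. Differentiating the event condition with respect to $p$ gives

$$
\frac{\partial g}{\partial y}\Bigl(\dot{y}(t_e)\,\frac{\mathrm{d}t_e}{\mathrm{d}p} + s(t_e)\Bigr) + \frac{\partial g}{\partial p} = 0
\qquad\Longrightarrow\qquad
\frac{\mathrm{d}t_e}{\mathrm{d}p} = -\,\frac{\frac{\partial g}{\partial y}\, s(t_e) + \frac{\partial g}{\partial p}}{\frac{\partial g}{\partial y}\, \dot{y}(t_e)} .
$$

The state handed to the next step, $y(t_e(p); p)$, therefore has sensitivity $s(t_e) + \dot{y}(t_e)\,\mathrm{d}t_e/\mathrm{d}p$; the second term is the one being assumed small. For a fixed end time, $\mathrm{d}t_e/\mathrm{d}p = 0$ and the term vanishes, which is why fixed-duration steps are always safe.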
Sounds good to me.
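A minimal sketch of what that run-time check might look like (hypothetical helper: the `steps` and `termination` attribute names are assumptions about the experiment API, not the actual PyBaMM implementation):

```python
import warnings


def warn_if_event_terminated(experiment, calculate_sensitivities):
    """Emit a warning when sensitivities are requested for an experiment
    in which a non-final step terminates on an event (e.g. a voltage
    cut-off) rather than after a fixed duration, since the neglected
    event-time term may make the sensitivities inaccurate.
    """
    if not calculate_sensitivities:
        return
    # Only steps before the last one matter: an event on the final step
    # does not feed an approximate sensitivity into any later step.
    for step in experiment.steps[:-1]:
        if getattr(step, "termination", None):
            warnings.warn(
                "Experiment contains an event-based termination before the "
                "final step; calculated sensitivities neglect the dependence "
                "of the step end time on the parameters and may be inaccurate.",
                UserWarning,
            )
            return
```

Such a check could run once at the start of the solve whenever sensitivities are requested, so the solve itself is unaffected.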
This feature would enable gradient computation while using …
Description
Support sensitivity calculation for `pybamm.Simulation`, including experiments. Fixes #3834.
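As a rough illustration of the feature (a sketch only: the parameter name, the `calculate_sensitivities` keyword being forwarded unchanged through `Simulation.solve` for experiments, and the `.sensitivities` access on the processed variable are assumptions based on the existing non-experiment API):

```python
import pybamm

model = pybamm.lithium_ion.SPM()
parameter_values = pybamm.ParameterValues("Chen2020")
# Declare the parameter we want sensitivities with respect to as an input
parameter_values["Negative electrode active material volume fraction"] = "[input]"

experiment = pybamm.Experiment(
    [
        "Discharge at 1C for 30 minutes",
        "Rest for 10 minutes",
    ]
)

sim = pybamm.Simulation(
    model, experiment=experiment, parameter_values=parameter_values
)
sol = sim.solve(
    inputs={"Negative electrode active material volume fraction": 0.75},
    calculate_sensitivities=True,
)

# Sensitivity of the voltage with respect to the input parameter,
# evaluated along the solution
dV_dp = sol["Voltage [V]"].sensitivities[
    "Negative electrode active material volume fraction"
]
```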
Note that although the sensitivity calculation for `pybamm.ScipySolver` appears to work, it seems like there are bugs in this solver for experiments (see #4429), so this is currently not tested.

Type of change
Please add a line in the relevant section of CHANGELOG.md to document the change (include PR #) - note reverse order of PR #s. If necessary, also add to the list of breaking changes.
Key checklist:
- `$ pre-commit run` (or `$ nox -s pre-commit`) (see CONTRIBUTING.md for how to set this up to run automatically when committing locally, in just two lines of code)
- `$ python run-tests.py --all` (or `$ nox -s tests`)
- `$ python run-tests.py --doctest` (or `$ nox -s doctests`)

You can run integration tests, unit tests, and doctests together at once, using `$ python run-tests.py --quick` (or `$ nox -s quick`).

Further checks: