Releases: sustainable-processes/summit
0.8.8
What's Changed
- Fix SOBO issue by @marcosfelt in #204
- Update README example by @marcosfelt in #202
- Bump oauthlib from 3.2.0 to 3.2.1 by @dependabot in #206
- Format parity_plot correctly by @marcosfelt in #210
- Bump joblib from 1.1.0 to 1.2.0 by @dependabot in #209
- Small edits in the docs by @ilario in #211
- Check emulator objectives by @marcosfelt in #205
- Bump protobuf from 3.20.1 to 3.20.2 by @dependabot in #208
- Updating pandas and numpy API usage by @marcosfelt in #215
- TSEMO no samples by @marcosfelt in #218
- Improve TSEMO categorical by @marcosfelt in #220
- Bump version to 0.8.8 by @marcosfelt in #221
Full Changelog: 0.8.7...0.8.8
0.8.7
0.8.6
What's Changed
Bug Fixes 🐛
- Fix bug in SnAr benchmark (#187) - thanks @Yujikaiya for the issue
- Fix issue with sklearn imports (#188)
0.8.5
0.8.3
Released version 0.8.3
0.8.1
Bump version
Denali
This version comes with new optimization strategies as well as improvements to existing functionality. You can install it using pip:
pip install --upgrade summit
Below are some highlights!
Multitask Bayesian Optimization Strategy
Multitask models have been shown to improve predictive performance on related tasks such as drug activity and site selectivity. We extended this concept to accelerate reaction optimization in a paper published at the NeurIPS ML4Molecules workshop last year (see the code for the paper here). This functionality is encapsulated in the MTBO strategy, which takes data from one reaction optimization and uses it to accelerate another.
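To make the transfer idea concrete, here is a toy sketch in plain Python. This is not Summit's MTBO implementation (which uses multitask Gaussian processes); the reaction yield functions, numbers, and quadratic surrogate are all invented for illustration of the mechanism: data from an already-optimized reaction warm-starts the surrogate for a new one.

```python
import numpy as np

# Toy illustration of the multitask idea: observations from a related,
# previously optimized reaction (task A) are pooled with a handful of
# expensive new measurements (task B) to warm-start a surrogate model.
# NOT Summit's MTBO implementation -- just the transfer concept.

def yield_task_a(t):   # hypothetical "old" reaction, optimum near t = 60
    return 80 - 0.05 * (t - 60) ** 2

def yield_task_b(t):   # hypothetical "new" reaction, optimum shifted to t = 65
    return 85 - 0.05 * (t - 65) ** 2

# Plenty of historical data from task A (already measured, effectively free)
t_a = np.linspace(30, 90, 13)
y_a = yield_task_a(t_a)

# Only two expensive measurements on the new reaction so far
t_b = np.array([40.0, 80.0])
y_b = yield_task_b(t_b)

# Surrogate: a quadratic fit on the pooled data (task A acts as a prior)
coef = np.polyfit(np.concatenate([t_a, t_b]),
                  np.concatenate([y_a, y_b]), deg=2)

# Suggest the next experiment by maximizing the surrogate on a grid
grid = np.linspace(30, 90, 601)
suggestion = grid[np.argmax(np.polyval(coef, grid))]
print(round(suggestion, 1))  # lies between the two tasks' optima
```

Even with only two measurements on the new reaction, the pooled surrogate already points at a sensible region, which is the intuition behind using past campaigns to cut down the number of new experiments.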
ENTMOOT Strategy
ENTMOOT is a technique that uses gradient-boosted tree models inside a Bayesian optimization loop. @jezsadler of Ruth Misener's research group kindly contributed a new strategy based on their original code. It is currently an experimental feature.
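As a rough illustration of the idea only (not the ENTMOOT algorithm itself, which also accounts for model uncertainty and supports constrained solvers such as Gurobi), the following sketch uses scikit-learn's gradient-boosted trees as the surrogate inside a simple suggest-and-evaluate loop. The objective function is invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy sketch of tree-based surrogate optimization: fit a gradient-boosted
# tree ensemble to the experiments so far, then suggest the next point by
# maximizing the surrogate's prediction over random candidates.
# A simplification of ENTMOOT, which also handles model uncertainty.

rng = np.random.default_rng(2)

def objective(x):  # hypothetical expensive experiment, optimum at x = 0.3
    return -(x - 0.3) ** 2

X = rng.uniform(0, 1, size=(8, 1))   # initial experiments
y = objective(X).ravel()

for _ in range(5):
    surrogate = GradientBoostingRegressor(n_estimators=50).fit(X, y)
    candidates = rng.uniform(0, 1, size=(200, 1))
    x_next = candidates[np.argmax(surrogate.predict(candidates))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print(round(float(X[np.argmax(y), 0]), 2))  # best conditions found so far
```

Tree ensembles are attractive surrogates here because they natively handle categorical and discontinuous responses, which is awkward for standard Gaussian processes.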
Improvements to TSEMO
TSEMO is the best-performing strategy in Summit for multiobjective optimization, but it previously had issues with robustness. We switched the Gaussian process (GP) implementation from GPy to GPyTorch, which resolved this issue. Additionally, the TSEMO documentation was improved, and more metadata about the GP hyperparameters is now included in the return of suggest_experiments.
Overhaul of the Experimental Emulator
The ExperimentalEmulator enables you to create new benchmarks based on experimental data. Under the hood, a machine learning model is trained to predict the outcomes of a reaction given the reaction conditions. The code for ExperimentalEmulator was simplified using skorch, an extension to scikit-learn that works with PyTorch. See this tutorial to learn how to create your own benchmark.
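The emulator idea can be sketched in a few lines. Summit's ExperimentalEmulator trains a skorch-wrapped neural network; in this dependency-free sketch a plain least-squares model stands in, and the reaction dataset is invented.

```python
import numpy as np

# Sketch of the emulator idea: learn outcome = f(conditions) from data,
# then query the fitted model as a cheap benchmark instead of running the
# real reaction. Summit's ExperimentalEmulator uses a skorch neural
# network; a linear least-squares model stands in here.

rng = np.random.default_rng(1)

# Hypothetical dataset: columns = [temperature, residence_time], target = yield
X = rng.uniform([30, 0.5], [110, 5.0], size=(40, 2))
true_w = np.array([0.4, 6.0])
y = X @ true_w + 5.0 + rng.normal(0, 0.5, size=40)

# "Train" the emulator: least squares with an intercept column
A = np.hstack([X, np.ones((40, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Query the emulator at new conditions instead of running the reaction
x_new = np.array([80.0, 2.0, 1.0])  # temperature, residence time, intercept
pred = x_new @ w
print(round(pred, 1))  # predicted yield at the new conditions
```

Once trained, the emulator can be evaluated thousands of times, which is what makes it useful for benchmarking optimization strategies against realistic response surfaces.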
Deprecation of Gryffin
Gryffin is a strategy for optimization over mixed categorical-continuous domains. This enables tasks like selecting catalysts when descriptors are not available. Unfortunately, there were repeated issues with installing Gryffin, so we removed it. Similar functionality can be achieved with the SOBO or MTBO strategies.
Other performance improvements and bug fixes
- Some imports were inlined to improve startup performance of Summit
- The dependency list was trimmed. We hope to improve this further by removing the need for GPy and GPyOpt and relying solely on GPyTorch and BoTorch.
- and many more!
Denali (pre-release)
This is a pre-release of Denali, our newest update to Summit. Key features include:
- New Multitask strategy as in Multi-task Bayesian Optimization of Chemical Reactions (see #80)
- New ENTMOOT optimization strategy from this paper (#77)
- A refactor of the ExperimentalEmulator to use skorch (see #89)
- Deprecation of Gryffin (this is not final and might change before the full release)
- Trimming down of dependencies and faster imports due to better dependency management (see #87)
The docs still need to be updated to include the two new strategies and properly explain the changes to ExperimentalEmulator.
Summit 0.7.0
Constraints only applied when ENTMOOT uses Gurobi (#86)
- First take on ENTMOOT strategy: init and suggest_experiments functions implemented
- Fixed some of the documentation
- Renamed emstrat.py to entmoot.py
- Added ENTMOOT test
- Turned off verbose logging
- Updated __init__.py and ci.yml
- Changed default optimizer type to Gurobi, and added an error if constraints are applied when optimizer type is set to sampling
- Set default optimizer type back to sampling