Support for default input transforms in MBM #3102

Open · wants to merge 1 commit into main

Conversation

saitcakmak
Contributor

Summary:
Input normalization is important for getting the best performance out of the BoTorch models we use. The current setup relies on either using the `UnitX` transform from Ax, or manually adding `Normalize` to `ModelConfig.input_transform_classes`, to achieve input normalization.
- `UnitX` is not ideal since it only applies to float-valued `RangeParameters`. If we cast everything to floats to use `UnitX`, we're locked into continuous relaxation for acquisition optimization, which is something we want to move away from.
- `Normalize` works well, particularly when the `bounds` argument is provided. (It is applied at each pass through the model rather than once to the training data, but that is a separate discussion.) However, requiring it as an explicit user input is a bit cumbersome.

This diff adds the machinery for constructing a default set of input transforms. The implementation retains the previous `InputPerturbation` transform for robust optimization, and adds the `Normalize` transform if the non-task features of the search space are not normalized.

With this change, we should be able to remove the `UnitX` transform from an MBM model (spec) without losing input normalization.
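
To make the behavior concrete, here is a minimal BoTorch-level sketch (not the PR's code) of what the default amounts to: a `Normalize` transform constructed with the search-space bounds and attached to the model, so inputs are mapped to the unit cube on every model pass. The bounds and training data below are made up for illustration.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize

# Hypothetical search-space bounds for two non-normalized range parameters,
# as a 2 x d tensor of (lower, upper) rows.
bounds = torch.tensor([[0.0, 5.0], [10.0, 100.0]])

# Toy training data drawn inside those bounds.
train_X = bounds[0] + torch.rand(8, 2) * (bounds[1] - bounds[0])
train_Y = torch.randn(8, 1)

model = SingleTaskGP(
    train_X,
    train_Y,
    # With `bounds` given, normalization uses the search-space bounds rather
    # than the observed data range, and it is applied on every pass through
    # the model (train and eval).
    input_transform=Normalize(d=2, bounds=bounds),
)
```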

Other considerations:
- This setup only adds the default transforms if the `input_transform_classes` argument is left as `DEFAULT`. If the user supplies `input_transform_classes` or sets it to `None`, no defaults are added. Would we want to add defaults even when the user supplies some transforms? If so, how would we decide whether to append or prepend the defaults? (BoTorch applies chained input transforms in the order given, so the choice matters; see the sketch after this list.)
- As mentioned above, applying `Normalize` at each pass through the model is not particularly efficient. A vectorized application of an Ax transform should generally be more efficient. A longer-term alternative would be to expand the Ax-side `UnitX` to support more general parameter classes and types without losing information in the process. This would require additional changes, such as support for non-integer-valued discrete `RangeParameters` and support for non-integer discrete values in the mixed optimizer.
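
For the append-vs-prepend question above, a minimal sketch of how `Normalize` could compose with the existing `InputPerturbation` at the BoTorch level (illustrative only, not the PR's implementation; the perturbation set and bounds are made up). `ChainedInputTransform` applies its transforms in the order given, so perturbing before normalizing means the perturbations are specified in raw parameter units:

```python
import torch
from botorch.models.transforms.input import (
    ChainedInputTransform,
    InputPerturbation,
    Normalize,
)

d = 2
bounds = torch.tensor([[0.0, 5.0], [10.0, 100.0]])  # 2 x d search-space bounds

# Hypothetical perturbation set in raw parameter units (robust optimization).
perturbations = torch.tensor([[0.0, 0.0], [0.5, 1.0], [-0.5, -1.0]])

chained = ChainedInputTransform(
    # Applied in order: expand each input by the perturbation set first,
    # then map everything to the unit cube.
    perturb=InputPerturbation(perturbation_set=perturbations),
    normalize=Normalize(d=d, bounds=bounds),
)

X = torch.tensor([[2.5, 40.0]])
# In eval mode both transforms are active; each row of X is expanded into
# three perturbed copies and then normalized by the search-space bounds.
print(chained.eval()(X))
```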

Differential Revision: D65622788
@facebook-github-bot added the CLA Signed label on Nov 21, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D65622788

@codecov-commenter

Codecov Report

Attention: Patch coverage is 96.07843% with 2 lines in your changes missing coverage. Please review.

Project coverage is 95.42%. Comparing base (5d1e441) to head (59d6980).
Report is 1 commit behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ...rch_modular/input_constructors/input_transforms.py | 91.66% | 1 Missing ⚠️ |
| ax/models/torch/botorch_modular/surrogate.py | 95.65% | 1 Missing ⚠️ |
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #3102   +/-   ##
=======================================
  Coverage   95.41%   95.42%           
=======================================
  Files         495      495           
  Lines       49941    49963   +22     
=======================================
+ Hits        47653    47675   +22     
  Misses       2288     2288           


Labels: CLA Signed, fb-exported
3 participants