Document how to implement custom models #2474
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff            @@
##             main    #2474     +/-  ##
========================================
  Coverage   99.98%   99.98%
========================================
  Files         191      191
  Lines       16856    16864      +8
========================================
+ Hits        16854    16862      +8
  Misses          2        2

☔ View full report in Codecov by Sentry.
This is a great tutorial, thank you very much for the contribution! I just have a minor point about the sampler registration. Once it is ready, you can import the PR to fbcode. Landing it in fbcode will sync the changes to GitHub and close the PR.
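For reference, the registration goes through BoTorch's GetSampler dispatcher. A minimal sketch of what that can look like, where MyPosterior is a hypothetical stand-in for the tutorial's custom posterior class:

```python
from botorch.posteriors.posterior import Posterior
from botorch.sampling.get_sampler import GetSampler
from botorch.sampling.stochastic_samplers import StochasticSampler


class MyPosterior(Posterior):
    """Hypothetical stand-in for the tutorial's custom posterior."""
    ...


@GetSampler.register(MyPosterior)
def _get_my_sampler(posterior, sample_shape, **kwargs):
    # Tell GetSampler which MCSampler to pair with MyPosterior, so that
    # MC acquisition functions can draw samples from this posterior type.
    return StochasticSampler(sample_shape=sample_shape, **kwargs)
```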
This is an awesome tutorial, thanks a lot for the contribution!
Some additional linear algebraic comments:
# Inverse of the gram matrix.
self.V = torch.linalg.inv(x.T @ x)
It's generally inadvisable to compute the inverse of a matrix explicitly. Rather than doing that, you'd typically compute a matrix decomposition of x.T @ x and use that for solves down the line via forward-backward substitutions. Could you modify the code to that end? Happy to provide more specifics / code changes if that would be useful.
Also, some of this can get hairy in 32-bit precision if the Gram matrix is ill-conditioned. In general we recommend folks use BoTorch with 64-bit precision (except when running on larger data sets on a GPU, where perf really counts). Could you modify the tutorial to use torch.float64? Either by choosing the dtype explicitly or by setting the default torch dtype at the beginning of the tutorial.
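For concreteness, a minimal sketch of that change (stand-in data only; it assumes x has full column rank so the Gram matrix admits a Cholesky factorization):

```python
import torch

# Per the point above: default to 64-bit precision for numerical stability.
torch.set_default_dtype(torch.float64)

# Stand-in training data; the tutorial's actual x and y would go here.
x = torch.rand(20, 3)
y = torch.rand(20, 1)

# Instead of self.V = torch.linalg.inv(x.T @ x), factor the Gram matrix once.
# torch.linalg.cholesky returns the lower-triangular L with L @ L.T == x.T @ x.
L = torch.linalg.cholesky(x.T @ x)

# Downstream solves then reuse the factor via forward-backward substitution,
# e.g. the least-squares weights solving (x.T @ x) @ w = x.T @ y:
w = torch.cholesky_solve(x.T @ y, L)
```

The one-time factorization costs roughly the same as forming the inverse, but it is more numerically stable and each subsequent solve is only O(n²).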
Great
okay, now I got it. Replaced the
@jakee417 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
website/tutorials.json (Outdated)

@@ -153,4 +157,4 @@
     "title": "Composite Bayesian Optimization with Multi-Task Gaussian Processes"
   }
 ]
-}
+}
What happened here, a whitespace change?
Awesome, this is a great tutorial and I may refer to it myself in the future. I left a couple minor suggestions, but this looks pretty good to me as-is.
@jakee417 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Motivation
Issue #2306
Have you read the Contributing Guidelines on pull requests?
Yes. Added a tutorial that can also be used for smoke tests.
Test Plan
Probabilistic linear regression, Bayesian linear regression, and ensemble linear regression all yield optimization results close to (0, 0), which is the ground-truth answer.
Random forest doesn't reach the ground-truth answer, likely because it cannot supply gradient information to the optimization of the acquisition function.
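For anyone reproducing this, a rough sketch of the shape of such a smoke test, using a stock SingleTaskGP as a stand-in for the custom models (the quadratic objective and the tolerance are assumptions made for illustration):

```python
import torch
from botorch.acquisition import LogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.set_default_dtype(torch.float64)


def objective(X: torch.Tensor) -> torch.Tensor:
    # Maximized at the ground-truth answer (0, 0).
    return -(X**2).sum(dim=-1, keepdim=True)


train_X = torch.rand(32, 2) * 2 - 1  # random points in [-1, 1]^2
train_Y = objective(train_X)

# The tutorial's custom model would be fit here instead of SingleTaskGP.
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = LogExpectedImprovement(model, best_f=train_Y.max())
bounds = torch.tensor([[-1.0, -1.0], [1.0, 1.0]])
candidate, _ = optimize_acqf(
    acqf, bounds=bounds, q=1, num_restarts=8, raw_samples=64
)

# The suggested candidate should land near the ground truth (0, 0).
assert torch.linalg.norm(candidate) < 0.25
```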
Related PRs
N/A