Feature Request: customizable early_stopping_tolerance #2526
@kryptonite0 Can you please provide a reproducible example, or at least training logs? As far as I know, we do not have any default numerical tolerance. Refer to: LightGBM/python-package/lightgbm/callback.py Line 227 in fc991c9
https://docs.python.org/3/library/operator.html#operator.lt Linking dmlc/xgboost#4982 here.
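To illustrate why no default tolerance exists: the linked callback code picks a plain comparison operator per metric direction, so any strictly better score counts as an improvement. A minimal sketch (the `cmp_op` variable name is my own, not the library's):

```python
import operator

# The early-stopping callback selects a comparison operator depending on
# whether the metric is minimized or maximized (operator.lt vs operator.gt).
# Any strict improvement counts -- there is no tolerance term.
cmp_op = operator.lt  # minimize case, e.g. a loss metric

best_score = 1.0
curr_score = 0.9999999  # improvement only in the 7th decimal place
print(cmp_op(curr_score, best_score))  # True: still treated as an improvement
```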
Ping @kryptonite0 As you can see from the examples, there is no "default numerical tolerance (0.001)": LightGBM/examples/python-guide/advanced_example.py Lines 52 to 59 in 785e477
At present we do not have any "default numerical tolerance", but a customizable early stopping tolerance might be useful in some cases.
Closed in favor of #2302; we decided to keep all feature requests in one place. Contributions of this feature are welcome! Please re-open this issue (or post a comment if you are not the topic starter) if you are actively working on implementing it.
Hi. I'm working on this; I'll make a PR soon.
Corresponding XGBoost experience:
Thanks for that. I believe my approach is the same as dmlc/xgboost#7137. Essentially, I replaced the comparison in LightGBM/python-package/lightgbm/callback.py Line 200 in 99cc4f2 with:

```python
def _gt_threshold(curr_score, best_score, threshold):
    return curr_score > best_score + threshold
```

and the opposite for the minimize case.
* initial changes
* initial version
* better handling of cases
* warn only with positive threshold
* remove early_stopping_threshold from high-level functions
* remove remaining early_stopping_threshold
* update test to use callback
* better handling of cases
* rename threshold to min_delta, enhance parameter description, update tests
* Apply suggestions from code review (Co-authored-by: Nikita Titov <[email protected]>)
* reduce num_boost_round in tests
* Apply suggestions from code review (Co-authored-by: Nikita Titov <[email protected]>)
* trigger ci

Co-authored-by: Nikita Titov <[email protected]>
#4580 implemented this feature request for the Python-package. Thank you very much @jmoralez !
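The merged change exposes the tolerance as a `min_delta` parameter on the `early_stopping` callback. As a rough illustration of the semantics only (a sketch, not LightGBM's actual implementation), an early-stopping loop with `min_delta` behaves like this:

```python
# Sketch of min_delta early-stopping semantics. With the real library
# this corresponds to something like:
#   lgb.train(..., callbacks=[lgb.early_stopping(10, min_delta=1e-4)])
# (helper name and exact behavior below are illustrative only).

def best_iteration(scores, stopping_rounds, min_delta=0.0):
    """Return the 0-based best iteration for a metric to minimize,
    stopping after `stopping_rounds` rounds without an improvement
    larger than `min_delta`."""
    best = float("inf")
    best_it = -1
    since_improve = 0
    for it, score in enumerate(scores):
        if score < best - min_delta:  # improvement must beat the tolerance
            best, best_it, since_improve = score, it, 0
        else:
            since_improve += 1
            if since_improve >= stopping_rounds:
                break
    return best_it

# Losses keep improving, but only in the 4th decimal after round 1
# (the situation described in the original report).
losses = [1.0, 0.5, 0.4999, 0.4998, 0.4997, 0.4996]
print(best_iteration(losses, stopping_rounds=2, min_delta=0.0))    # 5: tiny gains count
print(best_iteration(losses, stopping_rounds=2, min_delta=0.001))  # 1: tiny gains ignored
```

With `min_delta=0.0` the loop reduces to the old behavior, so the change is backward compatible by default.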
This issue has been automatically locked since there has not been any recent activity since it was closed. |
I have a situation where the default numerical tolerance (0.001) for early stopping is too large. My target has a gamma distribution, and the LGB Regressor reaches convergence too early: the numerous low target values are well approximated by the model, but the few large values are still underestimated. When I deactivate early stopping, I can see the loss metric still improving at the 4th or later decimal digit, past the best iteration reached with early stopping. It would be great to be able to manually set the tolerance.