[python-package] make record_evaluation compatible with cv (fixes #4943) #4947
Conversation
callbacks = [lgb.record_evaluation(eval_result)]
metrics = ['l2', 'rmse']
params = {'objective': 'l2', 'num_leaves': 3, 'metric': metrics}
cv_hist = lgb.cv(params, ds, num_boost_round=5, stratified=False, callbacks=callbacks, eval_train_metric=True)
Unfortunately, the proposed solution crashes if you set eval_train_metric=False in this test case.
ValueError Traceback (most recent call last)
<ipython-input-5-e50e682e859d> in <module>
----> 1 cv_hist = lgb.cv(params, data_train, num_boost_round=5, stratified=False, callbacks=callbacks, eval_train_metric=False)
D:\Miniconda3\lib\site-packages\lightgbm\engine.py in cv(params, train_set, num_boost_round, folds, nfold, stratified, shuffle, metrics, fobj, feval, init_model, feature_name, categorical_feature, early_stopping_rounds, fpreproc, verbose_eval, show_stdv, seed, callbacks, eval_train_metric, return_cvbooster)
645 try:
646 for cb in callbacks_after_iter:
--> 647 cb(callback.CallbackEnv(model=cvfolds,
648 params=params,
649 iteration=i,
D:\Miniconda3\lib\site-packages\lightgbm\callback.py in _callback(env)
141 def _callback(env: CallbackEnv) -> None:
142 if not eval_result:
--> 143 _init(env)
144 for item in env.evaluation_result_list:
145 if len(item) == 4:
D:\Miniconda3\lib\site-packages\lightgbm\callback.py in _init(env)
135 else: # cv
136 data_eval = item[1]
--> 137 data_name, eval_name = data_eval.split()
138 eval_result.setdefault(data_name, collections.OrderedDict())
139 eval_result[data_name].setdefault(eval_name, [])
ValueError: not enough values to unpack (expected 2, got 1)
An item in env.evaluation_result_list will look like:
('cv_agg', 'l2', 0.077931728049798, False, 0.00858679769235092)
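To make the failure mode concrete, here is a minimal, self-contained sketch (simplified names, not the actual LightGBM source) of the parsing step that breaks: splitting the tuple's second element assumes it carries a dataset prefix like 'train l2', which is only true when eval_train_metric=True.

```python
# Hypothetical, simplified sketch of record_evaluation's _init step;
# not the real LightGBM implementation.
import collections

def init_entry(eval_result, item):
    data_eval = item[1]
    # Assumes 'train l2'-style values; with eval_train_metric=False the
    # element is just 'l2' and the two-way unpack raises ValueError.
    data_name, eval_name = data_eval.split()
    eval_result.setdefault(data_name, collections.OrderedDict())
    eval_result[data_name].setdefault(eval_name, [])

res = {}
init_entry(res, ('cv_agg', 'train l2', 0.08, False, 0.009))  # works
try:
    init_entry(res, ('cv_agg', 'l2', 0.077931728049798, False, 0.00858679769235092))
except ValueError as exc:
    print(exc)  # not enough values to unpack (expected 2, got 1)
```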
I fixed this for this callback by setting the first-level key to 'valid' even when there's no train metric being computed, to preserve the structure. But maybe it'd be better if the second element of this result list were 'valid l2', for consistency. WDYT?
By that I mean removing the if in
LightGBM/python-package/lightgbm/engine.py
Lines 370 to 373 in a06fadf
if eval_train_metric:
    key = f"{one_line[0]} {one_line[1]}"
else:
    key = one_line[1]
key = f"{one_line[0]} {one_line[1]}"
always.
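As a sketch of the two key schemes under discussion (one_line here is a made-up stand-in for the aggregated result, not real LightGBM data):

```python
# Sketch of the key construction simplified from the engine.py lines
# quoted above; the tuple values are invented for illustration.
one_line = ('valid', 'l2', 0.0779, False)

def make_key(one_line, eval_train_metric):
    # Current behaviour: drop the dataset prefix unless
    # eval_train_metric=True, so the callback cannot split the key.
    if eval_train_metric:
        return f"{one_line[0]} {one_line[1]}"
    return one_line[1]

def make_key_always(one_line):
    # Proposed behaviour: always keep the prefix, so downstream code
    # can split it unconditionally.
    return f"{one_line[0]} {one_line[1]}"

print(make_key(one_line, False))   # l2
print(make_key_always(one_line))   # valid l2
```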
... but maybe it'd be better if the second element from this result list was 'valid l2' to be consistent.
Yeah, I support this.
changed in 914818c
I'm going to mark this PR as breaking
due to this change.
LGTM, thank you very much!
@jmoralez Please update this branch to unblock the merge button for this PR. @jameslamb Would you like to take a look at this PR?
@jameslamb Would you like to take a look at this PR?
Thanks for the @, @StrikerRUS! I just looked through this and don't have any additional suggestions.
This pull request has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.
Makes the record_evaluation callback work with the cv function, returning output similar to the one produced when it is used with the train function.
Closes #4943.
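For illustration, a hedged sketch of the nested structure eval_result might hold after such a cv run (metric names and values are made up; the exact keys depend on the metrics configured):

```python
# Hypothetical example of the dict filled in by a record_evaluation
# callback passed to lgb.cv; values are invented for illustration.
eval_result = {
    'valid': {
        'l2-mean': [0.45, 0.40, 0.36],
        'l2-stdv': [0.012, 0.011, 0.015],
    },
}

# The per-iteration fold means and standard deviations line up.
assert len(eval_result['valid']['l2-mean']) == len(eval_result['valid']['l2-stdv'])
```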