metric average_precision seems to have the wrong direction in model selection #3648

Closed
penolove opened this issue Dec 15, 2020 · 3 comments

penolove (Contributor) commented Dec 15, 2020

How are you using LightGBM?

LightGBM component: Python package (lightgbm 3.1.1)

Environment info

Operating System: Linux

CPU/GPU model: Intel Core i5

Python version: 3.6

LightGBM version or commit hash: 3.1.1

Error message and / or logs

[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[1] valid_0's average_precision: 0.567788
Training until validation scores don't improve for 10 rounds
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[2] valid_0's average_precision: 0.573182
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[3] valid_0's average_precision: 0.541873
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[4] valid_0's average_precision: 0.548481
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[5] valid_0's average_precision: 0.560845
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[6] valid_0's average_precision: 0.541873
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[7] valid_0's average_precision: 0.545155
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[8] valid_0's average_precision: 0.535223
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[9] valid_0's average_precision: 0.526394
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[10] valid_0's average_precision: 0.529248
Did not meet early stopping. Best iteration is:
[9] valid_0's average_precision: 0.526394  # it should have selected the 0.573182 score from iteration [2]; with the auc metric it works properly

Reproducible example(s)

import numpy as np
import lightgbm as lgb

# binary classification on random features, evaluated with average_precision
param = {
    'learning_rate': 0.03,
    'objective': 'binary',
    'metric': 'average_precision',
    'early_stopping_round': 10,
    'num_leaves': 8,
    'num_iterations': 10,
}

N = 100
N_half = int(N / 2)
P = 50

# random data: 100 samples, 50 features, half positive / half negative labels
x_train = np.random.rand(N, P)
y_train = np.concatenate([np.ones(N_half), np.zeros(N - N_half)])
x_test = np.random.rand(N, P)
y_test = np.concatenate([np.ones(N_half), np.zeros(N - N_half)])

d_train = lgb.Dataset(x_train, y_train)
d_valid = lgb.Dataset(x_test, y_test)
lgb.train(param, d_train, valid_sets=d_valid, verbose_eval=1)
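
For completeness, a quick way to make the mis-selection explicit is to record the per-iteration metric and compare the iteration LightGBM reports as best with the iteration that actually maximizes average precision. This sketch reuses param, d_train and d_valid from above and the lightgbm 3.x evals_result argument; results and expected_best are just illustrative names.

results = {}  # per-iteration evaluation results, filled in by lgb.train
booster = lgb.train(
    param,
    d_train,
    valid_sets=d_valid,
    verbose_eval=1,
    evals_result=results,  # lightgbm 3.x argument; 4.x uses the record_evaluation callback
)

ap_per_iter = results['valid_0']['average_precision']
expected_best = int(np.argmax(ap_per_iter)) + 1  # iterations are 1-based in the log
print('reported best_iteration:', booster.best_iteration)
print('iteration with highest average_precision:', expected_best)
# on 3.1.1 these disagree: best_iteration points at the lowest score instead of the highest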
penolove (Contributor, Author) commented

I think the root cause is:

# the 3.1.1 branch doesn't cover the average_precision metric
# LightGBM/python-package/lightgbm/basic.py: 3280
                self.__higher_better_inner_eval = \
                    [name.startswith(('auc', 'ndcg@', 'map@')) for name in self.__name_inner_eval]
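
A minimal sketch of the kind of one-line change this points to (the actual fix landed via #3649; this is only an illustration, not the merged patch): include average_precision among the prefixes treated as higher-is-better.

                # sketch only: add 'average_precision' to the higher-is-better prefixes
                self.__higher_better_inner_eval = \
                    [name.startswith(('auc', 'ndcg@', 'map@', 'average_precision')) for name in self.__name_inner_eval]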

StrikerRUS (Collaborator) commented

Fixed via #3649.

github-actions (bot) commented

This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.

github-actions bot locked as resolved and limited conversation to collaborators Aug 23, 2023