
MultiTask models are not handling evaluation metrics correctly #303

Open
britojr opened this issue Nov 25, 2024 · 0 comments
britojr commented Nov 25, 2024

Describe the bug

When dealing with MultiTask models, i.e. when there is more than one target variable, the evaluation metrics are not applied correctly. As can be seen on this line, the evaluation metric is applied as if y were a single column.
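For illustration, here is a minimal sketch of the expected per-task behavior, using plain NumPy. The helper names `binary_log_loss` and `multitask_metric` are hypothetical and not part of the deepctr-torch API; the point is that each column of `y_true`/`y_pred` should be scored as its own binary task rather than the whole array being treated as one probability distribution:

```python
import numpy as np

def binary_log_loss(y_true, p):
    # Standard binary cross-entropy for a single task column.
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

def multitask_metric(y_true, y_pred, metric=binary_log_loss):
    # Evaluate the metric column by column, yielding one score per task,
    # instead of handing the full (n_samples, n_tasks) array to the metric.
    return [metric(y_true[:, i], y_pred[:, i]) for i in range(y_true.shape[1])]

# Two binary tasks, three samples: each column is an independent target.
y_true = np.array([[1, 0], [0, 1], [1, 1]])
y_pred = np.array([[0.9, 0.2], [0.1, 0.8], [0.7, 0.6]])
losses = multitask_metric(y_true, y_pred)  # one log loss per task
```

Passing the whole 2D `y_pred` to a metric such as `sklearn.metrics.log_loss` instead makes it treat the two columns as class probabilities of a single multiclass problem, which is what triggers the warning below.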

To Reproduce
Steps to reproduce the behavior:

  1. Copy the data and example code from the documentation page for the MultiTask model MMOE.
  2. Execute the code as is and observe the warning: UserWarning: The y_pred values do not sum to one. Make sure to pass probabilities.
  3. The warning indicates that the columns in y_pred are being interpreted as a single probability distribution instead of two distinct distributions, one for each binary task.

Operating environment:

  • python version: 3.12.4
  • torch version: 2.5.1
  • deepctr-torch version: 0.2.9