
Overall accuracy is reported as 0.0 while it should be greater than 0 #53485

Closed
przemekwitek opened this issue Mar 12, 2020 · 3 comments
Labels
>bug :ml Machine learning

Comments

@przemekwitek
Contributor

Issue noticed and described by @wwang500:

I have a question about the overall_accuracy result. When I ran _evaluate on the car-parts classification/inference results index:

POST _ml/data_frame/_evaluate
{
  "index": "dest_car_parts_70_1583979097545",
  "evaluation": {
      "classification": {
        "actual_field": "ml.inference.predicted_value.keyword",
        "predicted_field": "ml.N_Lunker_prediction",
         "metrics": {
           "accuracy": {}
         }
      }
   }
}

I got this result:

{
  "classification" : {
    "accuracy" : {
      "classes" : [
        {
          "class_name" : "0",
          "accuracy" : 1.0
        },
        {
          "class_name" : "1",
          "accuracy" : 1.0
        }
      ],
      "overall_accuracy" : 0.0
    }
  }
}

Shouldn't overall_accuracy be 1.0 too?
It might be caused by the field mappings:
"ml.N_Lunker_prediction" : {"type" : "long"},
"ml.inference.predicted_value.keyword"

@przemekwitek przemekwitek added >bug :ml Machine learning labels Mar 12, 2020
@przemekwitek przemekwitek self-assigned this Mar 12, 2020
@elasticmachine
Collaborator

Pinging @elastic/ml-core (:ml)

@przemekwitek
Contributor Author

przemekwitek commented Mar 12, 2020

I reproduced the issue and already know why it happens. You were right about the mappings mismatch.
Inference does not impose any mappings on the prediction field, so it is mapped as text & keyword.
In the case you described, the other field is of type long.

I think the sensible approach is to make the comparison in the evaluation Painless script more lenient, so that it compares string representations rather than raw values:
String.valueOf(doc[''{0}''].value).equals(String.valueOf(doc[''{1}''].value))
instead of:
doc[''{0}''].value == doc[''{1}''].value

#53458 implements this idea.
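The effect of the change can be sketched in Python (hypothetical values standing in for the two fields; the real comparison runs in a Painless script inside Elasticsearch):

```python
# Why the strict comparison fails when the two fields have mismatched mappings.
actual = "1"   # e.g. ml.inference.predicted_value.keyword, mapped as keyword (string)
predicted = 1  # e.g. ml.N_Lunker_prediction, mapped as long (integer)

# Strict comparison, analogous to: doc['{0}'].value == doc['{1}'].value
strict_match = actual == predicted  # False: a string and a number never compare equal

# Lenient comparison, analogous to:
# String.valueOf(doc['{0}'].value).equals(String.valueOf(doc['{1}'].value))
lenient_match = str(actual) == str(predicted)  # True: "1" == "1"

print(strict_match, lenient_match)
```

With the strict comparison, every document counts as a mismatch, which is why overall_accuracy came out as 0.0 even though the per-class accuracies were 1.0.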

@przemekwitek
Contributor Author

#53458 and its backport to 7.x are now merged in.
