Issue noticed and described by @wwang500:

I have a question about the overall_accuracy result. When I ran _eval on the car-parts classification/inference results index, I got these results:

Shouldn't overall_accuracy be 1.0 too? It might be caused by the field mapping:

"ml.N_Lunker_prediction" : {"type" : "long"}

vs.

"ml.inference.predicted_value.keyword"
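For reference, the evaluation call would have looked roughly like the following. The index and field names are taken from the description above; the rest of the body is a sketch of the Evaluate data frame analytics API, not the reporter's exact request:

POST _ml/data_frame/_evaluate
{
  "index": "car-parts",
  "evaluation": {
    "classification": {
      "actual_field": "ml.N_Lunker_prediction",
      "predicted_field": "ml.inference.predicted_value.keyword",
      "metrics": {
        "accuracy": {}
      }
    }
  }
}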
I reproduced the issue and know why it happens. You were right about the mappings mismatch:
inference does not impose any mapping on the prediction field, so it is dynamically mapped as text with a keyword sub-field, while in the case you described the other field is of type long.
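Concretely, Elasticsearch's default dynamic mapping for a string value, which is what the prediction field receives, looks like this:

"predicted_value" : {
  "type" : "text",
  "fields" : {
    "keyword" : {
      "type" : "keyword",
      "ignore_above" : 256
    }
  }
}

Comparing a value from this keyword sub-field against a long-mapped field with == can never succeed, which is why the accuracy comes out wrong.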
I think the sensible approach is to make the comparison in the evaluation's Painless script more lenient, so that it compares string representations rather than raw values:

String.valueOf(doc['{0}'].value).equals(String.valueOf(doc['{1}'].value))

instead of:

doc['{0}'].value == doc['{1}'].value

(where {0} and {1} are placeholders for the actual and predicted field names substituted into the script template).
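As a standalone illustration (not the evaluation's actual internals), a Painless script query over the two fields from this issue shows the difference. With the strict == comparison a long value and a keyword value never match; the String.valueOf form matches documents whose values agree textually:

GET car-parts/_search
{
  "query": {
    "bool": {
      "filter": {
        "script": {
          "script": {
            "lang": "painless",
            "source": "String.valueOf(doc['ml.N_Lunker_prediction'].value).equals(String.valueOf(doc['ml.inference.predicted_value.keyword'].value))"
          }
        }
      }
    }
  }
}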