Fix the EvaluationStore to match prediction and target entities #6419
Conversation
Thanks for submitting a pull request 🚀 @degiz will take a look at it as soon as possible ✨
@tabergma can you review this PR?
Hey @davidezanella
Thanks a lot for the PR! Could you please also write a unit test for the change? I believe you can use the existing tests/test_test.py file.
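A hypothetical sketch of such a test, assuming an alignment helper like the `align_entity_lists` sketch further down this thread (the helper name and the entity-dict shape are illustrative assumptions, not the PR's actual API):

```python
# Hypothetical test sketch; align_entity_lists stands in for the
# EvaluationStore alignment logic and is not Rasa's actual API.
def test_entities_are_aligned_by_position():
    predictions = [
        {"start": 0, "end": 5, "entity": "city"},
        {"start": 10, "end": 15, "entity": "time"},
    ]
    targets = [{"start": 0, "end": 5, "entity": "city"}]

    aligned_predictions, aligned_targets = align_entity_lists(predictions, targets)

    # Both lists have equal length; matching spans share an index and
    # unmatched positions are padded with "None".
    assert aligned_predictions == ["city", "time"]
    assert aligned_targets == ["city", "None"]
```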
@davidezanella Thanks for addressing this. I think your concern is valid and should be fixed, but I am not sure if your method fixes the complete problem. Currently you are just iterating over the gold entities to see if there are matching predicted entities, and you are adding …
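To make the concern concrete, a minimal illustration (hypothetical entity dicts, not the PR's code): a loop that only walks the gold entities never sees a spurious prediction that has no gold counterpart.

```python
targets = [{"start": 0, "end": 5, "entity": "city"}]
predictions = [
    {"start": 0, "end": 5, "entity": "city"},    # correct prediction
    {"start": 10, "end": 15, "entity": "time"},  # spurious prediction
]

# Iterating over targets only pairs up the matching "city" entity;
# the spurious "time" prediction is silently dropped, so the false
# positive never shows up in the evaluation.
for target in targets:
    match = next(
        (p for p in predictions if p["start"] == target["start"]), None
    )
    print(target["entity"], "->", match["entity"] if match else "None")
```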
Hey @tabergma, you are right! I've fixed the code and the tests.
It doesn't seem I can cancel my "changes requested" 😞
@tabergma will review the code.
@davidezanella Sorry for the late reply, I was offline for a couple of days. I will take a look at the PR today. Can you please resolve the conflicts in the meantime? Thanks.
Looks great 🚀 Added a few comments. Can you please also add a changelog entry? Thanks.
Looks great 💯 Thanks for fixing this!
```python
    key=lambda x: x.get("start"),
)

i_pred, i_target = 0, 0
```
Minor: We try not to use abbreviations in names. I would rename these to index_prediction and index_target.
```python
i_pred, i_target = 0, 0

while i_pred < len(entity_predictions) or i_target < len(entity_targets):
    cmp = self._compare_entities(
```
Minor: I would rename this to comparison_result.
```python
def _compare_entities(
    entity_predictions: List[Dict[Text, Any]],
    entity_targets: List[Dict[Text, Any]],
    i_pred: int,
```
Minor: Rename to index_prediction.
```python
    entity_predictions: List[Dict[Text, Any]],
    entity_targets: List[Dict[Text, Any]],
    i_pred: int,
    i_target: int,
```
Minor: Rename to index_target.
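Putting the snippets and naming suggestions together, here is a minimal, self-contained sketch of the alignment idea under review (a reconstruction under assumptions, e.g. comparing only start offsets and reducing entities to their labels; not the PR's exact code):

```python
from typing import Any, Dict, List, Text, Tuple


def _compare_entities(
    entity_predictions: List[Dict[Text, Any]],
    entity_targets: List[Dict[Text, Any]],
    index_prediction: int,
    index_target: int,
) -> int:
    """Return -1 if the current prediction starts first, 1 if the
    current target starts first, and 0 if both start at the same
    position (a fuller version would also compare end offsets)."""
    prediction = (
        entity_predictions[index_prediction]
        if index_prediction < len(entity_predictions)
        else None
    )
    target = (
        entity_targets[index_target] if index_target < len(entity_targets) else None
    )
    if prediction is None:
        return 1
    if target is None:
        return -1
    if prediction["start"] < target["start"]:
        return -1
    if prediction["start"] > target["start"]:
        return 1
    return 0


def align_entity_lists(
    entity_predictions: List[Dict[Text, Any]],
    entity_targets: List[Dict[Text, Any]],
) -> Tuple[List[Text], List[Text]]:
    """Align predictions and targets by position, padding with 'None'."""
    entity_predictions = sorted(entity_predictions, key=lambda x: x.get("start"))
    entity_targets = sorted(entity_targets, key=lambda x: x.get("start"))

    aligned_predictions: List[Text] = []
    aligned_targets: List[Text] = []
    index_prediction, index_target = 0, 0

    while index_prediction < len(entity_predictions) or index_target < len(
        entity_targets
    ):
        comparison_result = _compare_entities(
            entity_predictions, entity_targets, index_prediction, index_target
        )
        if comparison_result == -1:  # prediction without a matching target
            aligned_predictions.append(entity_predictions[index_prediction]["entity"])
            aligned_targets.append("None")
            index_prediction += 1
        elif comparison_result == 1:  # target without a matching prediction
            aligned_predictions.append("None")
            aligned_targets.append(entity_targets[index_target]["entity"])
            index_target += 1
        else:  # same span: compare the two entities directly
            aligned_predictions.append(entity_predictions[index_prediction]["entity"])
            aligned_targets.append(entity_targets[index_target]["entity"])
            index_prediction += 1
            index_target += 1

    return aligned_predictions, aligned_targets
```

With this alignment, both serialised lists always have the same length, so downstream metrics compare entities that actually refer to the same text span.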
@degiz I think you need to update your review, otherwise we are not able to merge this PR.
Proposed changes:
Practical example:
Adding 'None' at the end, as was done originally, leads to wrong metrics because even a correct prediction was marked as wrong.
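A minimal numeric illustration of that description (hand-built label lists; the numbers are illustrative, not taken from the PR):

```python
# Gold has one "city" entity; the model predicts "time" and "city"
# (the correct "city" plus one spurious "time").
predictions = ["time", "city"]

# Original behaviour: pad the shorter gold list with "None" at the end.
targets_padded = ["city", "None"]
# Pairs become (city, time) and (None, city): 0 of 2 correct, even
# though the "city" prediction was right.

# After sorting and aligning by position, matching spans share an index.
predictions_aligned = ["city", "time"]
targets_aligned = ["city", "None"]
# Pairs become (city, city) and (None, time): 1 of 2 correct.

def correct_pairs(targets, preds):
    return sum(t == p for t, p in zip(targets, preds))

print(correct_pairs(targets_padded, predictions))           # 0
print(correct_pairs(targets_aligned, predictions_aligned))  # 1
```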
Status (please check what you already did):
- reformatted files using black (please check Readme for instructions)