For some metrics, like nDCG, it is plausible to have float relevance scores. Is there a way to use pytrec_eval with floating-point relevance scores?
The following sample:
import pytrec_eval
import json

qrel = {
    'q1': {
        'd1': 0.2,
        'd2': 1.5,
        'd3': 0,
    },
    'q2': {
        'd2': 2.5,
        'd3': 1,
    },
}

run = {
    'q1': {
        'd1': 1.0,
        'd2': 0.0,
        'd3': 1.5,
    },
    'q2': {
        'd1': 1.5,
        'd2': 0.2,
        'd3': 0.5,
    }
}

evaluator = pytrec_eval.RelevanceEvaluator(
    qrel, {'ndcg'})

print(json.dumps(evaluator.evaluate(run), indent=1))
Raised the following exception:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-9cc469855e77> in <module>
     28
     29 evaluator = pytrec_eval.RelevanceEvaluator(
---> 30     qrel, {'ndcg'})
     31
     32 print(json.dumps(evaluator.evaluate(run), indent=1))

TypeError: Expected relevance to be integer.
Handling floating-point relevance scores would require a larger change within trec_eval itself, where the qrels relevance field is declared as an integer type: https://github.com/usnistgov/trec_eval/blob/master/trec_format.h#L27
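In the meantime, a possible workaround (my own suggestion, not something pytrec_eval provides) is to quantize the graded float relevance scores to integers before constructing the evaluator. The quantize_qrel helper and the scale factor of 10 below are hypothetical choices; note that only the qrel grades must be integers, while the run scores may remain floats:

import json

import pytrec_eval

def quantize_qrel(qrel, scale=10):
    # Hypothetical helper: multiply each relevance grade by `scale`
    # and round to the nearest integer. This keeps one decimal place
    # of the original grades with scale=10.
    return {
        qid: {doc: int(round(rel * scale)) for doc, rel in docs.items()}
        for qid, docs in qrel.items()
    }

qrel = {
    'q1': {'d1': 0.2, 'd2': 1.5, 'd3': 0},
    'q2': {'d2': 2.5, 'd3': 1},
}

run = {
    'q1': {'d1': 1.0, 'd2': 0.0, 'd3': 1.5},
    'q2': {'d1': 1.5, 'd2': 0.2, 'd3': 0.5},
}

# Quantized qrel passes the integer check that the original floats failed.
evaluator = pytrec_eval.RelevanceEvaluator(quantize_qrel(qrel), {'ndcg'})
print(json.dumps(evaluator.evaluate(run), indent=1))

If trec_eval computes ndcg with a gain that is linear in the relevance grade, a uniform scale factor cancels in the DCG / ideal-DCG ratio, so the scores should match the float-graded ones up to rounding error; for any measure with a non-linear gain the scaling would distort results, so this is worth verifying against your trec_eval version.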