Add ERR to ranking evaluation documentation (#32314)
This change adds a section about the Expected Reciprocal Rank metric (ERR) to
the Ranking Evaluation documentation.
Christoph Büscher committed Jul 24, 2018
1 parent f1d1ff2 commit 4bec3ad
Showing 1 changed file with 50 additions and 0 deletions: docs/reference/search/rank-eval.asciidoc
@@ -263,6 +263,56 @@ in the query. Defaults to 10.
|`normalize` | If set to `true`, this metric will calculate the https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG[Normalized DCG].
|=======================================================================

[float]
==== Expected Reciprocal Rank (ERR)

Expected Reciprocal Rank (ERR) is an extension of the classical reciprocal rank for the graded relevance case
(Olivier Chapelle, Donald Metzler, Ya Zhang, and Pierre Grinspan. 2009. http://olivier.chapelle.cc/pub/err.pdf[Expected reciprocal rank for graded relevance].)

It is based on the assumption of a cascade model of search, in which a user scans through ranked search
results in order and stops at the first document that satisfies the information need. For this reason, it
is a good metric for question answering and navigational queries, but less so for survey-oriented information
needs where the user is interested in finding many relevant documents in the top k results.

The metric models the expectation of the reciprocal of the position at which a user stops reading through
the result list. This means that a relevant document in a top ranking position contributes heavily to the
overall score. The same document contributes much less when it appears at a lower rank, and even less when
it is preceded by other relevant (though perhaps less relevant) documents.
In this way, the ERR metric discounts documents that are shown after very relevant documents. This introduces
a notion of dependency on the ordering of relevant documents that metrics like Precision or DCG do not account for.
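
For reference, the underlying formula from the Chapelle et al. paper cited above can be sketched as follows
(the notation is taken from the paper and is not part of the original documentation): writing `g_i` for the
relevance grade of the document at rank `i` and `g_max` for the highest possible grade (the `maximum_relevance`
parameter below), ERR over the top `k` positions is

[source,latex]
--------------------------------
% Probability that the document at rank i satisfies the user:
%   R_i = (2^{g_i} - 1) / 2^{g_max}
% ERR sums, over ranks r, the probability of stopping exactly at rank r,
% weighted by the reciprocal of that rank:
\mathrm{ERR} = \sum_{r=1}^{k} \frac{1}{r} \, R_r \prod_{i=1}^{r-1} \left(1 - R_i\right),
\qquad R_i = \frac{2^{g_i} - 1}{2^{g_{\max}}}
--------------------------------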

[source,js]
--------------------------------
GET /twitter/_rank_eval
{
"requests": [
{
"id": "JFK query",
"request": { "query": { "match_all": {}}},
"ratings": []
}],
"metric": {
"expected_reciprocal_rank": {
"maximum_relevance" : 3,
"k" : 20
}
}
}
--------------------------------
// CONSOLE
// TEST[setup:twitter]

The `expected_reciprocal_rank` metric takes the following parameters:

[cols="<,<",options="header",]
|=======================================================================
|Parameter |Description
|`maximum_relevance` | Mandatory parameter. The highest relevance grade used in the user-supplied
relevance judgments.
|`k` | Sets the maximum number of documents retrieved per query. This value acts in place of the usual `size` parameter
in the query. Defaults to 10.
|=======================================================================
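
To make the discounting behaviour concrete, the following is a minimal Python sketch of the ERR computation
as defined in the paper cited above. It is an illustration only, not the Elasticsearch implementation, and
the grades and parameter values are made up for the example:

[source,python]
--------------------------------
def expected_reciprocal_rank(grades, maximum_relevance, k=10):
    """ERR for a ranked list of relevance grades in [0, maximum_relevance]."""
    p_not_stopped = 1.0  # probability the user has not stopped before this rank
    err = 0.0
    for rank, grade in enumerate(grades[:k], start=1):
        # Probability that the document at this rank satisfies the user.
        satisfaction = (2 ** grade - 1) / 2 ** maximum_relevance
        err += p_not_stopped * satisfaction / rank
        p_not_stopped *= 1.0 - satisfaction
    return err

# A highly relevant document at rank 1 dominates the score ...
print(expected_reciprocal_rank([3, 2, 0, 1], maximum_relevance=3))  # ~0.90
# ... while the same documents shifted down one rank score noticeably lower.
print(expected_reciprocal_rank([0, 3, 2, 1], maximum_relevance=3))  # ~0.46
--------------------------------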

[float]
=== Response format

