Commit ea008b9
Apply suggestions from review.
afoucret committed Mar 11, 2024
1 parent e78bf59 commit ea008b9
Showing 3 changed files with 7 additions and 5 deletions.
@@ -45,7 +45,8 @@ feature_extractors=[
         feature_name="title_bm25",
         query={"match": {"title": "{{query}}"}}
     ),
-    # We can use a script_score query to get the value of the field rating directly as a feature:
+    # We can use a script_score query to get the value
+    # of the field rating directly as a feature:
     QueryFeatureExtractor(
         feature_name="popularity",
         query={
@@ -55,7 +56,8 @@
             }
         },
     ),
-    # We can execute a script on the value of the query and use the return value as a feature:
+    # We can execute a script on the value of the query
+    # and use the return value as a feature:
     QueryFeatureExtractor(
         feature_name="query_length",
         query={
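For context, the two comment rewraps above touch the Python feature-extraction example from the LTR documentation. Below is a minimal, self-contained sketch of what the surrounding configuration looks like, assuming eland's `LTRModelConfig` and `QueryFeatureExtractor` helpers; the exact Painless sources for the `popularity` and `query_length` features are illustrative reconstructions, not the verbatim file contents:

[source,python]
----
from eland.ml.ltr import LTRModelConfig, QueryFeatureExtractor

ltr_config = LTRModelConfig(
    feature_extractors=[
        # The BM25 score of the query against the title field.
        QueryFeatureExtractor(
            feature_name="title_bm25",
            query={"match": {"title": "{{query}}"}},
        ),
        # We can use a script_score query to get the value
        # of the field rating directly as a feature:
        QueryFeatureExtractor(
            feature_name="popularity",
            query={
                "script_score": {
                    "query": {"exists": {"field": "popularity"}},
                    "script": {"source": "return doc['popularity'].value;"},
                }
            },
        ),
        # We can execute a script on the value of the query
        # and use the return value as a feature:
        QueryFeatureExtractor(
            feature_name="query_length",
            query={
                "script_score": {
                    "query": {"match_all": {}},
                    "script": {
                        "source": "return params['query'].splitOnToken(' ').length;",
                        "params": {"query": "{{query}}"},
                    },
                }
            },
        ),
    ]
)
----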
@@ -13,7 +13,7 @@ For more information, see {subscriptions}.
 [[learning-to-rank-rescorer]]
 ==== Learning To Rank as a rescorer
 
-Once your LTR model is trained and deployed in {es}, it can be used as a <<rescore, rescorer>> in the <<search-your-data, Search API>>:
+Once your LTR model is trained and deployed in {es}, it can be used as a <<rescore, rescorer>> in the <<search-your-data, search API>>:
 
 [source,console]
 ----
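The body of that `[source,console]` block is collapsed in this diff. As a rough sketch of issuing an LTR rescore from Python with the official client (the index name `my-index`, model ID `ltr-model`, and query text are assumptions, not values from the commit):

[source,python]
----
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# First-pass BM25 retrieval, then rescore the top 100 hits
# with the deployed LTR model.
response = es.search(
    index="my-index",
    query={
        "multi_match": {
            "fields": ["title", "content"],
            "query": "the quick brown fox",
        }
    },
    rescore={
        "learning_to_rank": {
            "model_id": "ltr-model",
            "params": {"query_text": "the quick brown fox"},
        },
        "window_size": 100,
    },
)
----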
@@ -109,11 +109,11 @@ The heart of LTR is of course an ML model. A model is trained using the training
 
 The LTR space is evolving rapidly and many approaches and model types are being
 experimented with. In practice {es} relies specifically on gradient boosted decision tree
-(GBDT) models for LTR inference.
+(https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting[GBDT^]) models for LTR inference.
 
 Note that {es} supports model inference but the training process itself must
 happen outside of {es}, using a GBDT model. Among the most popular LTR models
-used today, LambdaMART provides strong ranking performance with low inference
+used today, https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf[LambdaMART^] provides strong ranking performance with low inference
 latencies. It relies on GBDT models and is therefore a perfect fit for LTR in
 {es}.
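Since training must happen outside Elasticsearch, a common workflow is to train a LambdaMART-style GBDT ranker with XGBoost and then import it with eland. A minimal sketch, assuming eland's `MLModel.import_ltr_model` helper and reusing the `es` client and `ltr_config` objects from the sketches above; the training data here is random placeholder data:

[source,python]
----
import numpy as np
from xgboost import XGBRanker

from eland.ml import MLModel

# Placeholder judgment list: feature matrix X, relevance grades y,
# and query ids grouping rows that belong to the same query.
X = np.random.rand(100, 3)
y = np.random.randint(0, 4, size=100)
qid = np.sort(np.random.randint(0, 10, size=100))  # rows must be grouped by query

# LambdaMART-style objective: optimize NDCG with a GBDT ranker.
ranker = XGBRanker(objective="rank:ndcg")
ranker.fit(X, y, qid=qid)

# Import the trained model into Elasticsearch for inference, together
# with the feature-extraction config defined earlier.
MLModel.import_ltr_model(
    es_client=es,
    model_id="ltr-model",
    model=ranker,
    ltr_model_config=ltr_config,
    es_if_exists="replace",
)
----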
