This issue tracks the effort of investigating how to apply the fast filter optimization proposed in #9310 to the composite aggregation (comp-agg).
The targeted scenario is a composite aggregation with only one source, which is a date histogram.
Note that composite aggregations support pagination: the default page size is 10, it can be customized with the `size` parameter, and the next page is fetched by passing the `after` key. A request sketch for this scenario is shown below.
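For reference, a minimal sketch of that targeted request built with the Java builders. The index name `logs`, the field `@timestamp`, and the daily interval are placeholders, not part of this issue.

```java
import java.util.List;
import java.util.Map;

import org.opensearch.action.search.SearchRequest;
import org.opensearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.opensearch.search.aggregations.bucket.composite.DateHistogramValuesSourceBuilder;
import org.opensearch.search.aggregations.bucket.histogram.DateHistogramInterval;
import org.opensearch.search.builder.SearchSourceBuilder;

public class CompositePaginationSketch {
    /** Builds one page of the composite agg; pass null as afterKey for the first page. */
    public static SearchRequest page(Map<String, Object> afterKey) {
        // The only source is a date histogram -- the scenario targeted by this issue.
        DateHistogramValuesSourceBuilder dateSource =
            new DateHistogramValuesSourceBuilder("date")
                .field("@timestamp")                              // assumed field name
                .calendarInterval(DateHistogramInterval.DAY);

        CompositeAggregationBuilder composite =
            new CompositeAggregationBuilder("my_buckets", List.of(dateSource))
                .size(10);                                        // the default page size
        if (afterKey != null) {
            composite.aggregateAfter(afterKey);                   // continue from the previous page
        }

        return new SearchRequest("logs")                          // assumed index name
            .source(new SearchSourceBuilder().size(0).aggregation(composite));
    }
}
```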
How does composite aggregation work?
Optimization using the leading source (the first source in the comp-agg)
Because the leading source is backed by a sorted data structure (points via PointValues, or terms via TermsEnum), we can terminate early once enough buckets have been processed. A conceptual sketch follows.
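As a conceptual illustration only (not the actual CompositeAggregator code), the sketch below walks a keyword-style leading source through Lucene's TermsEnum. Since terms arrive in sorted order, collection can stop as soon as `size` buckets are filled; the real implementation additionally has to honor the afterKey and the top-level query. The helper names here are hypothetical.

```java
import java.io.IOException;

import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BytesRef;

final class LeadingSourceEarlyTermination {
    /** Collects at most `size` buckets from a sorted leading source, then stops. */
    static void collect(TermsEnum terms, int size, BucketConsumer consumer) throws IOException {
        int buckets = 0;
        for (BytesRef term = terms.next(); term != null; term = terms.next()) {
            if (buckets >= size) {
                break; // early termination: terms are sorted, so later terms cannot compete
            }
            PostingsEnum postings = terms.postings(null, PostingsEnum.NONE);
            for (int doc = postings.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = postings.nextDoc()) {
                consumer.accept(BytesRef.deepCopyOf(term), doc);
            }
            buckets++;
        }
    }

    interface BucketConsumer {
        void accept(BytesRef term, int docId);
    }
}
```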
Optimization using the index sort
If the sources of the comp-agg match the index sort and an afterKey is provided, we can use SearchAfterSortedDocQuery to produce an iterator over only the documents from which the aggregation should continue; see the sketch below.
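A hedged sketch of the idea, assuming the index is sorted by a long `@timestamp` field and that `SearchAfterSortedDocQuery` lives under `org.apache.lucene.queries` in the server codebase; the sort field name and the helper class here are illustrative, and the real wiring in CompositeAggregator builds the `FieldDoc` from the afterKey values of every source.

```java
import org.apache.lucene.queries.SearchAfterSortedDocQuery;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.FieldDoc;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

final class IndexSortSkipSketch {
    /**
     * Wraps the original query so that only documents sorting strictly after the
     * afterKey are visited, letting the aggregation resume where the previous page stopped.
     */
    static Query skipToAfterKey(Query original, long afterTimestampMillis) {
        Sort indexSortPrefix = new Sort(new SortField("@timestamp", SortField.Type.LONG));
        FieldDoc after = new FieldDoc(Integer.MAX_VALUE, 0f, new Object[] { afterTimestampMillis });
        return new BooleanQuery.Builder()
            .add(original, BooleanClause.Occur.MUST)
            .add(new SearchAfterSortedDocQuery(indexSortPrefix, after), BooleanClause.Occur.FILTER)
            .build();
    }
}
```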
Normal case
For every document we collect the values of all sources and add the resulting composite key to the composite aggregation queue if it is competitive. Without any pre-sorted structure to rely on, there is little room for optimization beyond collecting documents one by one and pushing their keys into a priority queue (heap); a simplified sketch follows.
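A simplified sketch of that collection loop, with a hypothetical `CompositeKey` standing in for the real per-source key representation. The actual aggregator also merges documents that map to an existing key instead of storing duplicates; this sketch only shows the competitiveness check against a bounded heap.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

final class CompositeQueueSketch {
    /** Stand-in for the real composite key; here a single date-histogram bucket. */
    record CompositeKey(long dateBucket) implements Comparable<CompositeKey> {
        public int compareTo(CompositeKey other) {
            return Long.compare(dateBucket, other.dateBucket);
        }
    }

    private final int size;
    // Max-heap: the largest (least competitive) key sits on top and is evicted first.
    private final PriorityQueue<CompositeKey> queue = new PriorityQueue<>(Comparator.reverseOrder());

    CompositeQueueSketch(int size) {
        this.size = size;
    }

    /** Called once per document with the key built from all sources' values. */
    void collect(CompositeKey key) {
        if (queue.size() < size) {
            queue.add(key);                       // still room: always competitive
        } else if (key.compareTo(queue.peek()) < 0) {
            queue.poll();                         // evict the current worst key
            queue.add(key);                       // the new key is competitive
        }
        // otherwise the key is not competitive and the document contributes nothing
    }
}
```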
Deferring collection for sub-aggregations