Deprecates size: 0 for aggregations
This change deprecates `size: 0` for the terms, significant terms and geohash grid aggregations

Relates to #18838
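For illustration only (not part of the commit), a search request shaped like the one below, using a hypothetical field `tag`, would now emit the new deprecation warning because it relies on `size: 0` to return all buckets:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "tags" : {
            "terms" : {
                "field" : "tag",
                "size" : 0
            }
        }
    }
}
--------------------------------------------------

Clients should specify a `size` greater than 0 instead, since `size: 0` will become invalid in a future version.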
colings86 committed Jun 14, 2016
1 parent d26adf5 commit ca416f2
Showing 5 changed files with 45 additions and 43 deletions.
@@ -22,6 +22,8 @@
import org.apache.lucene.index.SortedNumericDocValues;
import org.apache.lucene.spatial.util.GeoHashUtils;
import org.elasticsearch.common.geo.GeoPoint;
import org.elasticsearch.common.logging.DeprecationLogger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.fielddata.MultiGeoPointValues;
import org.elasticsearch.index.fielddata.SortedBinaryDocValues;
@@ -54,6 +56,8 @@
*/
public class GeoHashGridParser implements Aggregator.Parser {

private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(GeoHashGridParser.class));

@Override
public String type() {
return InternalGeoHashGrid.TYPE.name();
@@ -92,10 +96,14 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se

if (shardSize == 0) {
shardSize = Integer.MAX_VALUE;
DEPRECATION_LOGGER.deprecated("shardSize of 0 in aggregations is deprecated and will be invalid in future versions. "
+ "Please specify a shardSize greater than 0");
}

if (requiredSize == 0) {
requiredSize = Integer.MAX_VALUE;
DEPRECATION_LOGGER.deprecated("size of 0 in aggregations is deprecated and will be invalid in future versions. "
+ "Please specify a size greater than 0");
}

if (shardSize < 0) {
@@ -131,6 +139,7 @@ protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggre
final InternalAggregation aggregation = new InternalGeoHashGrid(name, requiredSize,
Collections.<InternalGeoHashGrid.Bucket> emptyList(), pipelineAggregators, metaData);
return new NonCollectingAggregator(name, aggregationContext, parent, pipelineAggregators, metaData) {
@Override
public InternalAggregation buildEmptyAggregation() {
return aggregation;
}
@@ -151,8 +160,8 @@ protected Aggregator doCreateInternal(final ValuesSource.GeoPoint valuesSource,
}

private static class CellValues extends SortingNumericDocValues {
private MultiGeoPointValues geoValues;
private int precision;
private final MultiGeoPointValues geoValues;
private final int precision;

protected CellValues(MultiGeoPointValues geoValues, int precision) {
this.geoValues = geoValues;
@@ -22,6 +22,8 @@

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.Explicit;
import org.elasticsearch.common.logging.DeprecationLogger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.AggregatorFactories;
@@ -40,6 +42,8 @@

public abstract class TermsAggregator extends BucketsAggregator {

private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(TermsAggregator.class));

public static class BucketCountThresholds {
private Explicit<Long> minDocCount;
private Explicit<Long> shardMinDocCount;
@@ -64,10 +68,14 @@ public void ensureValidity() {

if (shardSize.value() == 0) {
setShardSize(Integer.MAX_VALUE);
DEPRECATION_LOGGER.deprecated("shardSize of 0 in aggregations is deprecated and will be invalid in future versions. "
+ "Please specify a shardSize greater than 0");
}

if (requiredSize.value() == 0) {
setRequiredSize(Integer.MAX_VALUE);
DEPRECATION_LOGGER.deprecated("size of 0 in aggregations is deprecated and will be invalid in future versions. "
+ "Please specify a size greater than 0");
}
// shard_size cannot be smaller than size as we need to at least fetch <size> entries from every shards in order to return <size>
if (shardSize.value() < requiredSize.value()) {
@@ -117,15 +117,9 @@ precision:: Optional. The string length of the geohashes used to define
size:: Optional. The maximum number of geohash buckets to return
(defaults to 10,000). When results are trimmed, buckets are
prioritised based on the volumes of documents they contain.
A value of `0` will return all buckets that
contain a hit; use with caution, as this could use a lot of CPU
and network bandwidth if there are many buckets.

shard_size:: Optional. To allow for more accurate counting of the top cells
returned in the final result the aggregation defaults to
returning `max(10,(size x number-of-shards))` buckets from each
shard. If this heuristic is undesirable, the number considered
from each shard can be overridden using this parameter.
A value of `0` makes the shard size unlimited.
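A minimal sketch of how these options fit together; the field name `location` and the values shown are illustrative only, not recommendations:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "cells" : {
            "geohash_grid" : {
                "field" : "location",
                "precision" : 3,
                "size" : 10000,
                "shard_size" : 20000
            }
        }
    }
}
--------------------------------------------------

Here each shard contributes up to 20,000 candidate cells and the final, reduced result is trimmed to at most 10,000 buckets.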


@@ -224,12 +224,12 @@ are presented unstemmed, highlighted, with the right case, in the right order an
==== Custom background sets

Ordinarily, the foreground set of documents is "diffed" against a background set of all the documents in your index.
However, sometimes it may prove useful to use a narrower background set as the basis for comparisons.
For example, a query on documents relating to "Madrid" in an index with content from all over the world might reveal that "Spanish"
was a significant term. This may be true but if you want some more focused terms you could use a `background_filter`
on the term 'spain' to establish a narrower set of documents as context. With this as a background "Spanish" would now
be seen as commonplace and therefore not as significant as words like "capital" that relate more strongly with Madrid.
Note that using a background filter will slow things down - each term's background frequency must now be derived on-the-fly from filtering posting lists rather than reading the index's pre-computed count for a term.

==== Limitations

@@ -274,7 +274,7 @@ The scores are derived from the doc frequencies in _foreground_ and _background_

===== mutual information
Mutual information as described in "Information Retrieval", Manning et al., Chapter 13.5.1 can be used as significance score by adding the parameter

[source,js]
--------------------------------------------------
@@ -283,9 +283,9 @@ Mutual information as described in "Information Retrieval", Manning et al., Chap
}
--------------------------------------------------

Mutual information does not differentiate between terms that are descriptive for the subset or for documents outside the subset. The significant terms therefore can contain terms that appear more or less frequently in the subset than outside the subset. To filter out the terms that appear less often in the subset than in documents outside the subset, `include_negatives` can be set to `false`.

By default, the assumption is that the documents in the bucket are also contained in the background. If instead you defined a custom background filter that represents a different set of documents that you want to compare to, set

[source,js]
--------------------------------------------------
@@ -296,7 +296,7 @@ Per default, the assumption is that the documents in the bucket are also contain

===== Chi square
Chi square as described in "Information Retrieval", Manning et al., Chapter 13.5.2 can be used as significance score by adding the parameter

[source,js]
--------------------------------------------------
@@ -309,15 +309,15 @@ Chi square behaves like mutual information and can be configured with the same p

===== google normalized distance
Google normalized distance as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007 (http://arxiv.org/pdf/cs/0412098v3.pdf) can be used as significance score by adding the parameter

[source,js]
--------------------------------------------------
"gnd": {
}
--------------------------------------------------

`gnd` also accepts the `background_is_superset` parameter.


===== Percentage
@@ -328,7 +328,7 @@ The benefit of this heuristic is that the scoring logic is simple to explain to

It would be hard for a seasoned boxer to win a championship if the prize was awarded purely on the basis of percentage of fights won - by these rules a newcomer with only one fight under his belt would be impossible to beat.
Multiple observations are typically required to reinforce a view so it is recommended in these cases to set both `min_doc_count` and `shard_min_doc_count` to a higher value such as 10 in order to filter out the low-frequency terms that otherwise take precedence.

[source,js]
--------------------------------------------------
@@ -348,7 +348,7 @@ If none of the above measures suits your usecase than another option is to imple

===== scripted
Customized scores can be implemented via a script:

[source,js]
--------------------------------------------------
@@ -357,7 +357,7 @@ Customized scores can be implemented via a script:
}
--------------------------------------------------

Scripts can be inline (as in the example above), indexed or stored on disk. For details on the options, see <<modules-scripting, script documentation>>.

Available parameters in the script are

@@ -374,9 +374,7 @@ default, the node coordinating the search process will request each shard to pro
and once all shards respond, it will reduce the results to the final list that will then be returned to the client.
If the number of unique terms is greater than `size`, the returned list can be slightly off and not accurate
(it could be that the term counts are slightly off and it could even be that a term that should have been in the top
size buckets was not returned).

If set to `0`, the `size` will be set to `Integer.MAX_VALUE`.

To ensure better accuracy a multiple of the final `size` is used as the number of terms to request from each shard
using a heuristic based on the number of shards. To take manual control of this setting the `shard_size` parameter
@@ -386,11 +384,8 @@ Low-frequency terms can turn out to be the most interesting ones once all result
significant_terms aggregation can produce higher-quality results when the `shard_size` parameter is set to
values significantly higher than the `size` setting. This ensures that a bigger volume of promising candidate terms are given
a consolidated review by the reducing node before the final selection. Obviously large candidate term lists
will cause extra network traffic and RAM usage so this is a quality/cost trade-off that needs to be balanced. If `shard_size` is set to -1 (the default) then `shard_size` will be automatically estimated based on the number of shards and the `size` parameter.

If set to `0`, the `shard_size` will be set to `Integer.MAX_VALUE`.


NOTE: `shard_size` cannot be smaller than `size` (as it doesn't make much sense). When it is, elasticsearch will
override it and reset it to be equal to `size`.
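As a brief sketch (the field name `tag` is hypothetical), the request below keeps the final result at the ten most significant terms while asking each shard to consider up to 100 candidates:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "tags" : {
            "significant_terms" : {
                "field" : "tag",
                "size" : 10,
                "shard_size" : 100
            }
        }
    }
}
--------------------------------------------------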
@@ -439,7 +434,7 @@ WARNING: Setting `min_doc_count` to `1` is generally not advised as it tends to

The default source of statistical information for background term frequencies is the entire index and this
scope can be narrowed through the use of a `background_filter` to focus in on significant terms within a narrower
context:

[source,js]
--------------------------------------------------
@@ -449,7 +444,7 @@
},
"aggs" : {
"tags" : {
"significant_terms" : {
"significant_terms" : {
"field" : "tag",
"background_filter": {
"term" : { "text" : "spain"}
@@ -460,9 +455,9 @@ context:
}
--------------------------------------------------

The above filter would help focus in on terms that were peculiar to the city of Madrid rather than revealing
terms like "Spanish" that are unusual in the full index's worldwide context but commonplace in the subset of documents containing the
word "Spain".

WARNING: Use of background filters will slow the query as each term's postings must be filtered to determine a frequency

@@ -482,7 +477,7 @@ There are different mechanisms by which terms aggregations can be executed:
- by using field values directly in order to aggregate data per-bucket (`map`)
- by using ordinals of the field and preemptively allocating one bucket per ordinal value (`global_ordinals`)
- by using ordinals of the field and dynamically allocating one bucket per ordinal value (`global_ordinals_hash`)

Elasticsearch tries to have sensible defaults so this is something that generally doesn't need to be configured.

`map` should only be considered when very few documents match a query. Otherwise the ordinals-based execution modes
@@ -514,4 +509,3 @@ in inner aggregations.
<1> the possible values are `map`, `global_ordinals` and `global_ordinals_hash`

Please note that Elasticsearch will ignore this execution hint if it is not applicable.

9 changes: 3 additions & 6 deletions docs/reference/aggregations/bucket/terms-aggregation.asciidoc
@@ -56,7 +56,7 @@ default, the node coordinating the search process will request each shard to pro
and once all shards respond, it will reduce the results to the final list that will then be returned to the client.
This means that if the number of unique terms is greater than `size`, the returned list is slightly off and not accurate
(it could be that the term counts are slightly off and it could even be that a term that should have been in the top
size buckets was not returned). If set to `0`, the `size` will be set to `Integer.MAX_VALUE`.
size buckets was not returned).

[[search-aggregations-bucket-terms-aggregation-approximate-counts]]
==== Document counts are approximate
@@ -149,15 +149,12 @@ The `shard_size` parameter can be used to minimize the extra work that comes wi
it will determine how many terms the coordinating node will request from each shard. Once all the shards responded, the
coordinating node will then reduce them to a final result which will be based on the `size` parameter - this way,
one can increase the accuracy of the returned terms and avoid the overhead of streaming a big list of buckets back to
the client. If set to `0`, the `shard_size` will be set to `Integer.MAX_VALUE`.
the client.


NOTE: `shard_size` cannot be smaller than `size` (as it doesn't make much sense). When it is, elasticsearch will
override it and reset it to be equal to `size`.

It is possible to not limit the number of terms that are returned by setting `size` to `0`. Don't use this
on high-cardinality fields as this will kill both your CPU (since terms need to be returned sorted) and your network.

The default `shard_size` is a multiple of the `size` parameter that is dependent on the number of shards.
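As an illustrative sketch (the field name `genre` is hypothetical), the request below returns the ten most frequent terms while requesting the top 25 candidates from each shard:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "genres" : {
            "terms" : {
                "field" : "genre",
                "size" : 10,
                "shard_size" : 25
            }
        }
    }
}
--------------------------------------------------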

==== Calculating Document Count Error
@@ -705,4 +702,4 @@ had a value.
}
--------------------------------------------------

<1> Documents without a value in the `tags` field will fall into the same bucket as documents that have the value `N/A`.
