Adds a new auto-interval date histogram (elastic#28993)
* Adds a new auto-interval date histogram

  This change adds a new type of histogram aggregation called `auto_date_histogram` where you can specify the target number of buckets you require and it will find an appropriate interval for the returned buckets. The aggregation works by first collecting documents in buckets at a second interval; when it has created more than the target number of buckets it merges these buckets into minute-interval buckets and continues collecting until it reaches the target number of buckets again. It keeps merging buckets whenever it exceeds the target until either collection is finished or the highest interval (currently years) is reached. A similar process happens at reduce time.

  This aggregation intentionally does not support `min_doc_count`, `offset` and `extended_bounds` to keep the already complex logic from becoming more complex. The aggregation accepts sub-aggregations but will always operate in `breadth_first` mode, deferring the computation of sub-aggregations until the final buckets from the shard are known. `min_doc_count` is effectively hard-coded to zero, meaning that we will insert empty buckets where necessary.

  Closes elastic#9572
* Adds documentation
* Added sub-aggregator test
* Fixes failing docs test
* Brings branch up to date with master changes
* Trying to get tests to pass again
* Fixes multiBucketConsumer accounting
* Collects more buckets than needed on shards. This gives us more options at reduce time in terms of how we do the final merge of the buckets to produce the final result
* Revert "Collects more buckets than needed on shards". This reverts commit 993c782.
* Adds ability to merge within a rounding
* Fixes non-timezone doc test failure
* Fix time zone tests
* Iterates on tests
* Adds test case and documentation changes. Added some notes in the documentation about the intervals that can be returned. Also added a test case that utilises the merging of consecutive buckets
* Fixes performance bug. The bug meant that getAppropriateRounding took a huge amount of time if the range of the data was large but also sparsely populated. In these situations the rounding would be very low, so iterating through the rounding values from the min key to the max key took a long time (~120 seconds in one test). The solution is to add a rough estimate first which chooses the rounding based just on the long values of the min and max keys alone, but selects the rounding one lower than the one it thinks is appropriate so the accurate method can choose the final rounding taking into account the fact that intervals are not always fixed length. The commit also adds more tests
* Changes to only do complex reduction on final reduce
* Merge latest with master
* Correct tests and add a new test case for 10k buckets
* Refactor to perform bucket number check in innerBuild
* Correctly derive bucket setting, update tests to increase bucket threshold
* Fix checkstyle
* Address code review comments
* Add documentation for default buckets
* Fix typo
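The collect-then-merge strategy described above can be illustrated with a short Python sketch. This is a simplified model of the idea, not the actual Java implementation: the `ROUNDINGS` ladder below uses fixed-length intervals in seconds, whereas the real aggregation uses calendar-aware roundings (second, minute, hour, day, month, year) and their multiples.

```python
# Simplified sketch of the collect-then-merge idea behind auto_date_histogram.
# Each ladder level is a bucket width in seconds (illustrative only).
ROUNDINGS = [1, 60, 3600, 86400]  # second, minute, hour, day

def auto_histogram(timestamps, target_buckets):
    level = 0
    buckets = {}
    for ts in sorted(timestamps):
        key = ts - ts % ROUNDINGS[level]
        buckets[key] = buckets.get(key, 0) + 1
        # Too many buckets: promote to the next coarser rounding and merge.
        while len(buckets) > target_buckets and level + 1 < len(ROUNDINGS):
            level += 1
            merged = {}
            for k, count in buckets.items():
                nk = k - k % ROUNDINGS[level]
                merged[nk] = merged.get(nk, 0) + count
            buckets = merged
    return ROUNDINGS[level], dict(sorted(buckets.items()))
```

For example, five timestamps spanning a few hours with a target of 3 buckets force two promotions (seconds to minutes, then minutes to hours), ending with hour-wide buckets.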
Showing 20 changed files with 3,263 additions and 6 deletions.
docs/reference/aggregations/bucket/autodatehistogram-aggregation.asciidoc (283 additions, 0 deletions)
[[search-aggregations-bucket-autodatehistogram-aggregation]]
=== Auto-interval Date Histogram Aggregation

A multi-bucket aggregation similar to the <<search-aggregations-bucket-datehistogram-aggregation>>, except that
instead of providing an interval to use as the width of each bucket, you provide a target number of buckets
and the interval of the buckets is chosen automatically to best achieve that target. The number of buckets
returned will always be less than or equal to this target number.

The `buckets` field is optional and defaults to 10 buckets if not specified.

Requesting a target of 10 buckets:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "auto_date_histogram" : {
                "field" : "date",
                "buckets" : 10
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

==== Keys

Internally, a date is represented as a 64 bit number representing a timestamp
in milliseconds-since-the-epoch. These timestamps are returned as the bucket
++key++s. The `key_as_string` is the same timestamp converted to a formatted
date string using the format specified with the `format` parameter:

TIP: If no `format` is specified, then it will use the first date
<<mapping-date-format,format>> specified in the field mapping.

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "auto_date_histogram" : {
                "field" : "date",
                "buckets" : 5,
                "format" : "yyyy-MM-dd" <1>
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

<1> Supports expressive date <<date-format-pattern,format pattern>>

Response:

[source,js]
--------------------------------------------------
{
    ...
    "aggregations": {
        "sales_over_time": {
            "buckets": [
                {
                    "key_as_string": "2015-01-01",
                    "key": 1420070400000,
                    "doc_count": 3
                },
                {
                    "key_as_string": "2015-02-01",
                    "key": 1422748800000,
                    "doc_count": 2
                },
                {
                    "key_as_string": "2015-03-01",
                    "key": 1425168000000,
                    "doc_count": 2
                }
            ]
        }
    }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
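The relationship between `key` and `key_as_string` in the response above can be reproduced with a short Python sketch. Note that this uses Python `strftime` patterns for illustration, not Elasticsearch's own date format syntax (e.g. `yyyy-MM-dd`):

```python
from datetime import datetime, timezone

def key_as_string(key_millis, fmt="%Y-%m-%d"):
    # Bucket keys are milliseconds since the epoch, interpreted in UTC.
    return datetime.fromtimestamp(key_millis / 1000, tz=timezone.utc).strftime(fmt)

# The first bucket key from the response above renders as "2015-01-01".
print(key_as_string(1420070400000))
```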

==== Intervals

The interval of the returned buckets is selected based on the data collected by the
aggregation so that the number of buckets returned is less than or equal to the number
requested. The possible intervals returned are:

[horizontal]
seconds:: In multiples of 1, 5, 10 and 30
minutes:: In multiples of 1, 5, 10 and 30
hours:: In multiples of 1, 3 and 12
days:: In multiples of 1 and 7
months:: In multiples of 1 and 3
years:: In multiples of 1, 5, 10, 20, 50 and 100

In the worst case, where the number of daily buckets is too many for the requested
number of buckets, the number of buckets returned will be 1/7th of the number of
buckets requested.
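The selection rule can be sketched as "smallest interval from the ladder that keeps the bucket count at or below the target". The Python sketch below is a simplified, fixed-length approximation: the real implementation uses calendar-aware roundings, so months and years are not constant-width and are omitted here.

```python
# Illustrative ladder of candidate intervals in seconds, mirroring the
# seconds/minutes/hours/days multiples listed above (months and years,
# which are not fixed-length, are left out of this approximation).
INTERVALS = [
    1, 5, 10, 30,            # seconds
    60, 300, 600, 1800,      # minutes
    3600, 3 * 3600, 12 * 3600,  # hours
    86400, 7 * 86400,        # days
]

def choose_interval(span_seconds, target_buckets):
    # Smallest interval whose bucket count fits the target;
    # fall back to the coarsest interval otherwise.
    for interval in INTERVALS:
        if span_seconds / interval <= target_buckets:
            return interval
    return INTERVALS[-1]
```

For example, an hour of data with a target of 10 buckets selects 10-minute intervals (6 buckets), while 14 days of data with the same target jumps from daily (14 buckets, too many) to weekly intervals (2 buckets), illustrating how the returned count can drop well below the target.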

==== Time Zone

Date-times are stored in Elasticsearch in UTC. By default, all bucketing and
rounding is also done in UTC. The `time_zone` parameter can be used to indicate
that bucketing should use a different time zone.

Time zones may either be specified as an ISO 8601 UTC offset (e.g. `+01:00` or
`-08:00`) or as a time zone id, an identifier used in the TZ database like
`America/Los_Angeles`.

Consider the following example:

[source,js]
---------------------------------
PUT my_index/log/1?refresh
{
    "date": "2015-10-01T00:30:00Z"
}

PUT my_index/log/2?refresh
{
    "date": "2015-10-01T01:30:00Z"
}

PUT my_index/log/3?refresh
{
    "date": "2015-10-01T02:30:00Z"
}

GET my_index/_search?size=0
{
    "aggs": {
        "by_day": {
            "auto_date_histogram": {
                "field":     "date",
                "buckets" : 3
            }
        }
    }
}
---------------------------------
// CONSOLE

If no time zone is specified, UTC is used, and three 1-hour buckets are returned
starting at midnight UTC on 1 October 2015:

[source,js]
---------------------------------
{
    ...
    "aggregations": {
        "by_day": {
            "buckets": [
                {
                    "key_as_string": "2015-10-01T00:00:00.000Z",
                    "key": 1443657600000,
                    "doc_count": 1
                },
                {
                    "key_as_string": "2015-10-01T01:00:00.000Z",
                    "key": 1443661200000,
                    "doc_count": 1
                },
                {
                    "key_as_string": "2015-10-01T02:00:00.000Z",
                    "key": 1443664800000,
                    "doc_count": 1
                }
            ]
        }
    }
}
---------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

If a `time_zone` of `-01:00` is specified, then midnight starts at one hour before
midnight UTC:

[source,js]
---------------------------------
GET my_index/_search?size=0
{
    "aggs": {
        "by_day": {
            "auto_date_histogram": {
                "field":     "date",
                "buckets" : 3,
                "time_zone": "-01:00"
            }
        }
    }
}
---------------------------------
// CONSOLE
// TEST[continued]

Now three 1-hour buckets are still returned, but the first bucket starts at
11:00pm on 30 September 2015 since that is the local time for the bucket in
the specified time zone.

[source,js]
---------------------------------
{
    ...
    "aggregations": {
        "by_day": {
            "buckets": [
                {
                    "key_as_string": "2015-09-30T23:00:00.000-01:00", <1>
                    "key": 1443657600000,
                    "doc_count": 1
                },
                {
                    "key_as_string": "2015-10-01T00:00:00.000-01:00",
                    "key": 1443661200000,
                    "doc_count": 1
                },
                {
                    "key_as_string": "2015-10-01T01:00:00.000-01:00",
                    "key": 1443664800000,
                    "doc_count": 1
                }
            ]
        }
    }
}
---------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

<1> The `key_as_string` value represents the start of each bucket rendered
in the specified time zone.
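Note that the `key` values in the two responses are identical: the bucket key stays in UTC milliseconds and only the rendered `key_as_string` shifts with the offset. A short Python sketch of that rendering (`key_as_string_with_offset` is a hypothetical helper name, not an Elasticsearch API):

```python
from datetime import datetime, timezone, timedelta

def key_as_string_with_offset(key_millis, offset_hours):
    # The bucket key is UTC millis; only the string representation
    # is rendered in the requested fixed-offset time zone.
    tz = timezone(timedelta(hours=offset_hours))
    return datetime.fromtimestamp(key_millis / 1000, tz=tz).isoformat(
        timespec="milliseconds"
    )

# The first bucket key above, rendered at -01:00, lands on the previous
# calendar day in local time.
print(key_as_string_with_offset(1443657600000, -1))
```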

WARNING: When using time zones that follow DST (daylight saving time) changes,
buckets close to the moment when those changes happen can have slightly different
sizes than neighbouring buckets.
For example, consider a DST start in the `CET` time zone: on 27 March 2016 at 2am,
clocks were turned forward 1 hour to 3am local time. If the result of the aggregation
was daily buckets, the bucket covering that day will only hold data for 23 hours
instead of the usual 24 hours for other buckets. The same is true for shorter intervals
such as 12h: we will have only an 11h bucket on the morning of 27 March when the
DST shift happens.

==== Scripts

As with the normal <<search-aggregations-bucket-datehistogram-aggregation,`date_histogram`>>, both document-level
scripts and value-level scripts are supported. This aggregation, however, does not support the `min_doc_count`,
`extended_bounds` and `order` parameters.

==== Missing value

The `missing` parameter defines how documents that are missing a value should be treated.
By default they will be ignored, but it is also possible to treat them as if they
had a value.

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sale_date" : {
            "auto_date_histogram" : {
                "field" : "date",
                "buckets": 10,
                "missing": "2000/01/01" <1>
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

<1> Documents without a value in the `date` field will fall into the same bucket as documents that have the value `2000-01-01`.