Ingest timestamp is not parseable as a valid ES date #23168
Labels: >bug · :Data Management/Ingest Node (Execution or management of Ingest Pipelines including GeoIP) · v5.2.0

Comments
talevy added the :Data Management/Ingest Node, >bug, and v5.2.0 labels on Feb 14, 2017
talevy added a commit to talevy/elasticsearch that referenced this issue on Apr 5, 2017:

Previously, Mustache would call `toString` on the `_ingest.timestamp` field and return a date format that did not match Elasticsearch's defaults for date-mapping parsing. The `ZonedDateTime` class in Java 8 happens to format itself the same way ES expects. Fixes elastic#23168.
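The mismatch the commit message describes can be seen directly in the two `toString` outputs. This is an illustrative sketch (the class name is hypothetical, not from the commit): `java.util.Date#toString` yields a format like `Tue Feb 14 10:15:30 UTC 2017`, which Elasticsearch's default `date` mapping cannot parse, while `ZonedDateTime#toString` yields ISO-8601-style output that the default `strict_date_optional_time` format accepts.

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.util.Date;

public class TimestampFormats {
    public static void main(String[] args) {
        // Legacy behavior: Date#toString, e.g. "Tue Feb 14 10:15:30 UTC 2017".
        // Not parseable by Elasticsearch's default date mapping.
        String legacy = new Date().toString();

        // New behavior: ZonedDateTime#toString, e.g. "2017-02-14T10:15:30.123Z".
        // Matches the default strict_date_optional_time format.
        String modern = ZonedDateTime.now(ZoneOffset.UTC).toString();

        System.out.println(legacy);
        System.out.println(modern);
    }
}
```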
talevy added a commit to talevy/elasticsearch that referenced this issue on May 8, 2017:

This is a backport of the 6.0 change that makes this new behavior the default. Previously, Mustache would call `toString` on the `_ingest.timestamp` field and return a date format that did not match Elasticsearch's defaults for date-mapping parsing. The `ZonedDateTime` class in Java 8 happens to format itself the same way ES expects. Fixes elastic#23168. The fix is gated by a cluster setting called `ingest.new_date_format`. By default, in 5.x, the existing behavior remains the same; set this property to `true` to take advantage of this update for ingest-pipeline convenience.
talevy added a commit that referenced this issue on May 8, 2017:

Previously, Mustache would call `toString` on the `_ingest.timestamp` field and return a date format that did not match Elasticsearch's defaults for date-mapping parsing. The `ZonedDateTime` class in Java 8 happens to format itself the same way ES expects. This commit adds a feature flag that enables the use of this new, more natural date format. Fixes #23168. The fix is gated by a cluster setting called `ingest.new_date_format`. By default, in 5.x, the existing behavior remains the same; set this property to `true` to take advantage of this update for ingest-pipeline convenience.
talevy added a commit that referenced this issue on May 8, 2017, with the same commit message as above.
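The commit messages above describe `ingest.new_date_format` as off by default on 5.x. Assuming it is a node-level setting (a sketch, not verified against the final implementation), opting in would look like this in `elasticsearch.yml`:

```yaml
# Opt in to the ISO-8601-style _ingest.timestamp formatting on 5.x.
# Defaults to false, which preserves the legacy Date#toString output.
ingest.new_date_format: true
```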
This appears to have been marked as deprecated? (Line 62 in 945b3cd.)

What is the expected way of creating an ingest pipeline that inserts a datetime?
To repro:

1. Create an index with a field `timestamp` mapped as the `date` type.
2. Create the following ingest pipeline:
3. Attempt to ingest a document using this pipeline.

Result:
This is specifically the method the documentation describes as a possible replacement for `_timestamp`. We can work around it with a `date` processor in the ingest pipeline, but that is a clunky solution for users trying to get the `_timestamp` feature back.
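The pipeline body is omitted from the repro above. A representative pipeline that triggers the problem (an illustrative reconstruction, not necessarily the reporter's exact definition; the pipeline name is hypothetical) uses a `set` processor with the `{{_ingest.timestamp}}` Mustache template, created via `PUT _ingest/pipeline/set-timestamp`:

```json
{
  "description": "Recreate the old _timestamp behavior",
  "processors": [
    {
      "set": {
        "field": "timestamp",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
```

On 5.2, Mustache renders `{{_ingest.timestamp}}` through the legacy `Date#toString` format, so indexing the document into the `date`-mapped `timestamp` field fails with a date parse error.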