JDBC Input set jdbc_default_timezone loses milliseconds #140
Comments
+1 (plus 2 similar "+1" comments)
The truncation still happens on the latest Logstash 5.2.2 with jdbc_default_timezone => "Europe/Zurich"

It's the same for me with 'Europe/Madrid'

Please fix this; it leads to data duplication, which is obviously dangerous.

I specifically encounter this (or something similar) when using a
OK, I now know what the problem is here. This line converts a DateTime into a Time but drops the sub-seconds in the process.
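The failure mode can be reproduced in plain Ruby. This is a minimal sketch of the class of bug being described (a conversion path that omits fractional seconds), not the plugin's actual code:

```ruby
require "date"
require "time"

dt = DateTime.parse("2016-06-07T10:20:05.223+00:00")

# A string round-trip whose format lacks fractional seconds silently
# truncates them: the 223 ms are gone after parsing.
lossy = Time.parse(dt.strftime("%Y-%m-%d %H:%M:%S %z"))

# By contrast, Ruby's DateTime#to_time preserves the sub-seconds.
exact = dt.to_time
```

Any conversion that goes through a representation without sub-second precision exhibits the truncation reported in this issue.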
…gs (#260)

When :sql_last_value is sent to the database, it is converted to the JVM default timezone. This change honors jdbc_default_timezone for the :sql_last_value timezone conversion.

Assume America/Chicago is the default JVM timezone, but the configuration declares UTC as the database timezone: jdbc_default_timezone => "UTC". The last_run is correctly recorded as 2018-02-23 22:30:34.054592000 Z, but when :sql_last_value is sent to the database, it is incorrectly converted to the JVM default timezone ('2018-02-23 16:30:34'). The same holds true if reversed: Logstash has UTC as the JVM timezone and your database records datetimes in a local timezone.

This change fixes the conversion when sending :sql_last_value to the database so that it honors the jdbc_default_timezone setting. Specifically, it leverages the fact that the Sequel library will handle the timezone conversion properly if passed a DateTime object, and won't do any timezone conversion if passed a Time object. This change also refactors parts of the code for better readability and adds more tests.

Fixes #140
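The key property the fix relies on, that a DateTime can change timezone offset without losing sub-seconds, can be sketched in plain Ruby (variable names are invented and Sequel itself is not exercised here):

```ruby
require "date"

# A last_run value like the one in the commit message, with sub-second
# precision, recorded in UTC.
last_run = DateTime.parse("2018-02-23T22:30:34.054592+00:00")

# Shifting to another offset (America/Chicago is UTC-6 in February)
# changes the wall-clock time but keeps the fractional seconds intact.
local = last_run.new_offset(Rational(-6, 24))
```

A Time object built without the fractional part, on the other hand, has nothing left to preserve, which is why handing Sequel the right type matters.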
Has this been fixed in any version of logstash-input-jdbc?
The bug:
For example: 2016-06-07T10:20:05.223Z with no timezone set is converted via the machine's timezone with no data loss; e.g., if the computer is on New York time, it converts by adding +4 to the hours while keeping the rest of the data, including the milliseconds.
However, if you specify the timezone by putting jdbc_default_timezone => "UTC" in your JDBC input plugin, it converts the time to 2016-06-07T10:20:05.000Z, truncating the milliseconds.
Setup Info:
Logstash Version: 2.3.2 (current)
OS: Windows 7
Config File:
Sample Data:
SQL, pulling data from a datetime2 field.
Steps to reproduce: Ingest data into elasticsearch via the minimal logstash config shown above using the JDBC plugin with the JDBC driver sqljdbc4.jar from a SQL database table that contains a datetime2(7) field.
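A minimal pipeline of the kind described above might look as follows. This is an illustrative sketch only; the connection string, credentials, and table/column names are invented, and the original reporter's actual config was not included:

```conf
input {
  jdbc {
    jdbc_driver_library => "sqljdbc4.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://localhost:1433;databaseName=test"
    jdbc_user => "user"
    jdbc_password => "password"
    # my_table is assumed to contain a datetime2(7) column named modified_at
    statement => "SELECT id, modified_at FROM my_table"
    jdbc_default_timezone => "UTC"   # setting this triggers the truncation
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```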