Don't execute select count(*) in the beginning #305
This functionality makes the jdbc input unusable for my data retrieval process: I keep getting I/O errors because of how long the count takes to run, and my connections appear to time out in the meantime. EDIT: I took the query for this
Even if I don't configure the jdbc_paging_enabled parameter, the count query still runs.
This also prevents the JDBC input from working with a number of datastores that provide JDBC drivers but are not fully SQL-compliant, such as Apache Jena.
Is there any update on this? I am having the same issue. The count(*) causes a heavy query to run twice on our database. Is there any way to disable it?
I'd like to ask the same question. 4.3.11 actually made the problem worse, since the count now executes in all contexts, not just debug mode. We're seeing consistent failures because of the runtime of the count query.
Hi, I am having a related (or perhaps the same) issue. I am trying to read from a Firebird database with Logstash. When I analyze the debug information in my Windows terminal, I notice that `SELECT count(*) AS "COUNT" FROM` is prepended to my statement and `AS "T1" LIMIT 1` is appended to it. For example: `SELECT count(*) AS "COUNT" FROM (select * from TABLE1) AS "T1" LIMIT 1`. Unsurprisingly, the SQL error I get is that the token is unknown, since Firebird does not support the `LIMIT` clause. Does anyone have a solution for this?
Not sure if anyone else had this issue. I did, and it is frustrating that Logstash can't kill this count. It would be easy to let users supply their own count, preempting the query and causing them no harm whatsoever. The workaround I found was to bound the query in the statement by the ID, so that the count is inherently limited. Of course, this means you have to babysit it and raise the bounds as the scan passes them, but it's better than having it all just fail.
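That ID-bounded workaround might look roughly like the sketch below. The connection settings, table name (`events`), column name (`id`), and the bound values are all hypothetical placeholders; the point is that the implicit count(*) query only ever scans the bounded slice:

```conf
input {
  jdbc {
    # ... jdbc_connection_string, jdbc_user, jdbc_driver_class, etc. ...

    # Hypothetical table/column: bound the statement by primary key so the
    # plugin's wrapping "SELECT count(*) FROM (...)" only counts this slice.
    statement => "SELECT * FROM events WHERE id > 0 AND id <= 500000"

    jdbc_paging_enabled => true
    jdbc_page_size => 50000
  }
}
```

When the scan approaches the upper bound, the range has to be raised by hand (or driven from a `:sql_last_value` tracking column), which is the babysitting mentioned above.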
I worked around it by using a prepared statement with no params. Since prepared statements do not support paging, the plugin does not issue the extra "count" query.
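A minimal sketch of that prepared-statement workaround, assuming the plugin's prepared-statement options (`use_prepared_statements`, `prepared_statement_name`, `prepared_statement_bind_values`); the statement name and query are hypothetical:

```conf
input {
  jdbc {
    # ... jdbc_connection_string, jdbc_user, jdbc_driver_class, etc. ...

    # With prepared statements the plugin cannot page, so it skips the
    # wrapping count(*) query entirely.
    statement => "SELECT * FROM events"
    use_prepared_statements => true
    prepared_statement_name => "fetch_events"   # hypothetical name
    prepared_statement_bind_values => []        # no parameters needed
  }
}
```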
Any update with respect to this?
This only works when your query does not expect data from another input. My workaround was being forced to write everything in Python. Not exactly a Logstash solution.
I have the same issue right now. I have a table with 100M rows. The same table was indexed with Sphinx and it was a breeze. Adding a separate
This has been fixed here: logstash-plugins/logstash-integration-jdbc#95
For complex SQL queries, as in our case, it's a performance killer: everything takes twice as long.