Find and kill long running queries #7157
@avleen just to add another comment to this ticket: the circuit breakers in 1.3 should handle this better than the version you're running, and the circuit breakers in 1.4 better still.
Aye, that was my thought. Better timeout logic is very important (to properly bound requests across all phases of execution, and to make them cancelable), but the circuit breaker in 1.3 should do a good job of refusing to load expensive data structures that can't fit, and the improved circuit breaker in 1.4 will allow breaking on expensive requests that require too many resources at the request level (like the resources needed just to compute significant terms).
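For reference, the breakers discussed above are configurable. A hedged sketch of the relevant settings as documented for the 1.x line (setting names and defaults may differ between versions):

```
# Field data breaker (1.3+): trips before loading field data
# that would push memory use past this share of the heap.
indices.breaker.fielddata.limit: 60%

# Request breaker (1.4+): bounds per-request data structures,
# e.g. memory used while computing aggregations.
indices.breaker.request.limit: 40%

# Parent breaker (1.4+): overall cap across the child breakers.
indices.breaker.total.limit: 70%
```

These go in `elasticsearch.yml` (or can be updated dynamically via the cluster settings API); the percentages shown are the documented defaults, not recommendations.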
Fantastic. Thanks folks. We upgraded to 1.3 while the cluster was down.
We run all queries with timeout=160s, but I understand this only really bounds the collection phase of the search? We had someone run a query today using aggregations over billions of records, which brought the cluster down.
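For context, the timeout being discussed is the per-search `timeout` parameter. A sketch of how it is passed (request shape only; the index name is illustrative):

```
POST /logs-*/_search?timeout=160s
{
  "query": { "match_all": {} }
}
```

As noted above, this bounds how long the query/collection phase runs on each shard and marks the response `timed_out: true` with partial results; it does not reliably cancel work already in flight in other phases.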
The cluster never OOM'd, but it did run into constant GC as the heap got full.
Another query was run, again over potentially huge amounts of data. The cluster had indexing disabled as it was recovering from the previous event, and since the query was run it's been at 100% CPU for about 40 minutes now. The query, afaict, is still running. But we have no way of knowing what it is, and no way to kill it other than restarting the entire cluster.
I'd like to request a feature that lists all queries currently executing on every node, as well as a way to kill them while they're in progress.
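For readers landing here later: this request was eventually addressed by the Task Management API (the `_tasks` endpoints, added in versions well after the 1.x releases discussed in this thread). A sketch of that eventual workflow, with a placeholder task id:

```
# List currently running search tasks, with per-task detail
GET /_tasks?actions=*search*&detailed

# Cancel one task, using the <node_id>:<task_number> id
# returned by the listing above (placeholder shown here)
POST /_tasks/<node_id>:<task_number>/_cancel
```

None of this was available at the time of the report, which is why the only recourse described above was a full cluster restart.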