Bound the number of search results returned by elasticsearch #4026
Comments
It looks like the nodes were unable to recover because the cluster was getting into a split-brain state (multiple master nodes). In general, I don't think there is a way to limit the number of results returned by a query.
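On the split-brain point: in Elasticsearch 1.x the usual mitigation is to require a quorum of master-eligible nodes before a master can be elected, via discovery.zen.minimum_master_nodes. A minimal sketch, assuming a three-node cluster (so the quorum is 2); the setting can be applied dynamically through the cluster settings API:
PUT /_cluster/settings
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}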
We plan to add something called a "circuit breaker" that prevents queries from bringing down a node when there is not enough memory. The related issue is #2929.
Rather than adding a setting specifically to limit the size of the priority queue, we should aim to limit the amount of memory used by a request, and how long a request can run. This potentially allows admins to specify different policies for different users. The first step is to add the priority queue to the circuit breaker.
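For context, the request-level circuit breaker that later shipped (Elasticsearch 1.4+) is controlled through dynamic cluster settings; a hedged sketch, with the 40% limit chosen purely as an example value:
PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.request.limit": "40%"
  }
}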
Just to be clear, #5466 addresses a bug related to specifying a size above 999999 that causes significant performance degradation. The size of the index and the memory consumed seem to have absolutely nothing to do with the issue. I can reproduce this bug even with an index containing one tiny document, so it can't be related to loading a huge result set into memory. For example, in our production ES cluster (v1.1.1):
Why is there such a significant difference in response time for the same index simply by specifying a different size? That seems like a different issue from what #4026 addresses, IMO.
@bobbyhubbard no, they are related: specifying a large size (or a high from) causes a correspondingly large priority queue to be allocated on each shard, regardless of how many documents actually match.
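To make that relationship concrete, here is a hedged illustration (the index name my_index is made up): the two requests below can hit the same one-document index and differ only in size, yet on affected versions the second one forces each shard to allocate a priority queue sized for a million hits up front:
POST /my_index/_search
{ "query": { "match_all": {} }, "size": 10 }

POST /my_index/_search
{ "query": { "match_all": {} }, "size": 1000000 }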
Ah, OK. BTW, I just upgraded our dev environment to 1.3.2 to confirm whether this is still an issue. In production running 1.1.1 I can reproduce it all day long using the test case above; however, I cannot reproduce it against 1.3.2.
Closing in favour of #9311.
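The bound that exists in later Elasticsearch versions is the index.max_result_window setting (default 10000), which rejects requests whose from + size exceed it. A minimal sketch of adjusting that cap on one index, with the index name being illustrative:
PUT /my_index/_settings
{
  "index": {
    "max_result_window": 10000
  }
}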
Hello,
When making a search request (POST) like the following to Elasticsearch using Kibana, I get a Java heap space error and my Elasticsearch node can't recover.
{"query":{"filtered":{"query":{"bool":{"should":[{"query_string":{"query":"*"}}]}},"filter":{"bool":{"must":[{"match_all":{}},{"bool":{"must":[{"match_all":{}}]}}]}}}},"highlight":{"fields":{},"fragment_size":2147483647,"pre_tags":["@start-highlight@"],"post_tags":["@end-highlight@"]},"size":1000000,"sort":[{"_id":{"order":"desc"}}]}
Question: Is it possible to somehow restrict the maximum value of "size" that someone can use? In general, is it possible to bound the number of results returned by Elasticsearch to avoid out-of-memory errors?
I don't want to increase the heap size (currently 1 GB), as this would not solve the problem.
Regards,
Nick