When I run a filter or query and set the size value to a huge number (like 10000000), memory consumption increases dramatically even though the final result is only 10 documents.
If I do the same thing with a huge number of users, memory grows so fast that the nodes become bottlenecks (sometimes a node shuts itself down).
I worked around it by changing the size attribute to a low value, but I think it would be better if ES handled this more gracefully, preventing careless programmers from taking down the server.
Regards.
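For reference, a minimal sketch of the pattern described above, using the Python elasticsearch client. The endpoint, index name, and query are hypothetical, and the keyword arguments assume the 8.x client:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Problematic: each shard reserves a result buffer proportional to `size`,
# so a huge value costs memory even if only ~10 documents actually match.
resp = es.search(
    index="users",                          # hypothetical index
    query={"term": {"status": "active"}},   # hypothetical query
    size=10_000_000,
)

# Workaround from the report: request only what you will actually consume.
resp = es.search(
    index="users",
    query={"term": {"status": "active"}},
    size=10,
)
print(resp["hits"]["total"])
```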
I don't know if that would be the best solution. There is already a size parameter for queries/filters.
The problem is that the programmer is responsible for setting it, and sometimes they won't understand what a large size parameter in a query/filter would cause in a production environment.
A maximum size for the entire cluster or index would help avoid a cluster shutdown. But I think it would be even better to count how many documents actually match and then trim off the pre-allocated memory that would not be used.
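A rough sketch of that count-then-query idea, again with the Python client; HARD_MAX, the index, and the query are assumptions for illustration, not an existing ES setting:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

HARD_MAX = 10_000                            # assumed application-level cap
query = {"term": {"status": "active"}}       # hypothetical query

# Count the matching documents first, then never ask for more than exist
# (or more than the application-level cap allows).
total = es.count(index="users", query=query)["count"]
resp = es.search(index="users", query=query, size=min(total, HARD_MAX))
```

Later Elasticsearch releases added a server-side guard along these lines: the `index.max_result_window` index setting (default 10,000) rejects requests whose `from + size` exceeds it.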