Support much larger filters (backport of #72277) #72389
Merged
Kibana's long had problems sending Elasticsearch thousands of fields
to the source filtering API. It's probably not right to send thousands
of fields to it, but, in turn, we don't really have a good excuse for
rejecting them. I mean, we do. We reject them to keep ourselves from
allocating too much memory and crashing. But we can do better. This
makes us tolerate significantly larger field lists much more
efficiently.
First, this changes the way we generate the "guts" of the filter API so
that we use a ton less memory, thanks to Daciuk, Mihov, Watson,
and Watson: https://www.aclweb.org/anthology/J00-1002.pdf
Lucene has an implementation of their algorithm built right in. We just
have to plug it in. With this, a list of four thousand fields generates
thirty-four thousand automaton states instead of hundreds of thousands.
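
To make that concrete, here's a minimal sketch (not the actual Elasticsearch code) of building the filter automaton with Lucene's `Automata.makeStringUnion`, which implements that construction. The class and field names are made up for illustration, and the real filter also has to handle wildcard patterns, which this sketch skips:

```java
import java.util.List;
import java.util.TreeSet;

import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.CharacterRunAutomaton;

public class FieldFilterSketch {
    /**
     * Builds a minimal, deterministic automaton accepting exactly the given
     * field names. makeStringUnion requires its input in sorted order, so we
     * funnel the names through a TreeSet of BytesRefs first.
     */
    static Automaton buildFieldAutomaton(List<String> fieldNames) {
        TreeSet<BytesRef> sorted = new TreeSet<>();
        for (String field : fieldNames) {
            sorted.add(new BytesRef(field));
        }
        return Automata.makeStringUnion(sorted);
    }

    public static void main(String[] args) {
        Automaton automaton = buildFieldAutomaton(List.of("message", "user.id", "user.name"));
        CharacterRunAutomaton matcher = new CharacterRunAutomaton(automaton);
        System.out.println(matcher.run("user.name"));  // true
        System.out.println(matcher.run("user.email")); // false
    }
}
```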
Next, we replace the algorithm that source filtering uses to match
dotted field names with one that doesn't duplicate the automata we
build. That cuts the thirty-four thousand states down to seventeen thousand.
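
Conceptually (a hypothetical helper, not the actual implementation), the replacement matches a dotted path by stepping a single shared `CharacterRunAutomaton` state by state instead of concatenating a copy of the automaton for each path prefix:

```java
import org.apache.lucene.util.automaton.CharacterRunAutomaton;

/**
 * Hypothetical helper illustrating the "step, don't copy" idea: walk one
 * shared run automaton along a dotted path instead of building a fresh
 * automaton for every prefix.
 */
final class DottedPathMatcher {
    private final CharacterRunAutomaton runAutomaton;

    DottedPathMatcher(CharacterRunAutomaton runAutomaton) {
        this.runAutomaton = runAutomaton;
    }

    /** Returns true if the whole dotted path is accepted by the filter automaton. */
    boolean matches(String dottedPath) {
        int state = 0; // Lucene run automata start in state 0
        for (int i = 0; i < dottedPath.length(); ) {
            int codePoint = dottedPath.codePointAt(i);
            state = runAutomaton.step(state, codePoint);
            if (state == -1) {
                return false; // no transition: this path can never match
            }
            i += Character.charCount(codePoint);
        }
        return runAutomaton.isAccept(state);
    }
}
```

The state reached at each dot can be remembered and reused for child keys, which is what lets nested objects share one automaton instead of each getting its own copy.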
Finally we bump the maximum number of automata states we accept from
10,000 to 50,000 which is comfortable enough to handle the number of
state need for that four thousand field list. 50,000 state automata will
likely weigh in around a megabyte of heap. Heavy, but fine. Thanks to
Daciuk, et al, you can throw absolutely massive lists of fields at us
before we get that large. I couldn't do it with copy and paste. I don't
have that kind of creativity. I expect folks will shy away from sending
field lists that are megabytes long. But I never expected thousands of
fields, so what do I know.
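
For reference, Lucene enforces this kind of budget when it determinizes an automaton, blowing up with `TooComplexToDeterminizeException` if the cap is exceeded. A sketch of the new limit (the constant name here is hypothetical) looks roughly like this:

```java
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.Operations;
import org.apache.lucene.util.automaton.TooComplexToDeterminizeException;

final class FilterLimits {
    // Hypothetical constant mirroring the bump described above; the old cap was 10,000.
    static final int MAX_DETERMINIZED_STATES = 50_000;

    /** Determinizes the filter automaton, failing fast if it needs too many states. */
    static Automaton determinizeWithBudget(Automaton filterAutomaton) {
        try {
            return Operations.determinize(filterAutomaton, MAX_DETERMINIZED_STATES);
        } catch (TooComplexToDeterminizeException e) {
            throw new IllegalArgumentException(
                "source filter is too complex: it needs more than "
                    + MAX_DETERMINIZED_STATES + " automaton states", e);
        }
    }
}
```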
Those changes are enough for us to take most of what Kibana has
historically sent us without any problems. These are still huge requests,
and it'd be better not to send such massive lists, but at least now the
problem isn't so pressing. That four-thousand-field list was 120kb over
the wire. That's a big request!