
Support much larger filters (backport of #72277) #72389

Merged
merged 2 commits into elastic:7.x
Apr 28, 2021

Conversation

nik9000
Member

@nik9000 nik9000 commented Apr 28, 2021

Kibana has long had problems sending Elasticsearch thousands of fields
to the source filtering API. It's probably not right to send *thousands*
of fields to it, but, in turn, we don't really have a good excuse for
rejecting them. I mean, we do. We reject them to keep ourselves from
allocating too much memory and crashing. But we can do better. This
change makes us tolerate significantly larger field lists much more
efficiently.

First, this changes the way we generate the "guts" of the filter API so
that we use a ton less memory thanks to Daciuk, Mihov, Watson,
and Watson: https://www.aclweb.org/anthology/J00-1002.pdf
Lucene has an implementation of their algorithm built right in. We just
have to plug it in. With this, a list of four thousand fields generates
thirty four thousand automata states instead of hundreds of thousands.
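
For a rough idea of the plumbing, here's a minimal sketch that leans on Lucene's built-in Daciuk/Mihov/Watson/Watson construction (`Automata.makeStringUnion`). The field names are made up and this isn't the literal code in the change:

```java
import java.util.List;
import java.util.TreeSet;

import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;

public class MinimalFieldAutomaton {
    /**
     * Build a minimal, deterministic automaton that accepts exactly the
     * given field names. makeStringUnion implements the incremental
     * Daciuk/Mihov construction and wants its input in sorted order,
     * which the TreeSet provides.
     */
    static Automaton buildFieldAutomaton(List<String> fieldNames) {
        TreeSet<BytesRef> sorted = new TreeSet<>();
        for (String field : fieldNames) {
            sorted.add(new BytesRef(field));
        }
        return Automata.makeStringUnion(sorted);
    }

    public static void main(String[] args) {
        Automaton a = buildFieldAutomaton(List.of("message", "user.id", "user.name"));
        System.out.println("states: " + a.getNumStates());
    }
}
```

Because the construction shares prefixes and suffixes across the whole list, the state count grows far more slowly than building one automaton per field and unioning them.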

Next we replace the algorithm that the source filtering uses to match
dotted field names with one that doesn't duplicate the automata we
build. This cuts that thirty four thousand states to seventeen thousand.
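
As a sketch of the idea, matching a dotted name against a single shared `CharacterRunAutomaton` can be done by stepping through states character by character instead of deriving a fresh automaton for each path segment. The helper below is illustrative, not the code in this PR:

```java
import org.apache.lucene.util.automaton.CharacterRunAutomaton;

public class DottedPathMatch {
    /**
     * Walk the compiled automaton along the full dotted path, e.g.
     * "user.address.city". The automaton is shared and never copied;
     * only an int state is carried along.
     */
    static boolean matches(CharacterRunAutomaton run, String dottedPath) {
        int state = 0; // the run automaton's initial state
        for (int i = 0; i < dottedPath.length(); i++) {
            state = run.step(state, dottedPath.charAt(i));
            if (state == -1) {
                return false; // dead end: no accepted field continues this way
            }
        }
        return run.isAccept(state);
    }
}
```

The point is that walking nested objects only needs to remember the state reached so far and keep stepping from it, so the automaton built above never has to be duplicated per level.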

Finally, we bump the maximum number of automata states we accept from
10,000 to 50,000, which is *comfortably* enough to handle the number of
states needed for that four thousand field list. A 50,000-state automaton
will likely weigh in at around a megabyte of heap. Heavy, but fine. Thanks to
Daciuk, et al., you can throw absolutely massive lists of fields at us
before we get that large. I couldn't do it with copy and paste. I don't
have that kind of creativity. I expect folks will shy away from sending
field lists that are megabytes long. But I never expected thousands of
fields, so what do I know.
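
For a rough sense of where that ceiling plugs in: Lucene's `Operations.determinize` takes a maximum state count and throws `TooComplexToDeterminizeException` rather than blowing past it. The constant name below is made up for illustration:

```java
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.Operations;

public class FilterLimits {
    /** Ceiling on determinized states; previously 10_000. */
    static final int MAX_DETERMINIZED_STATES = 50_000;

    /** Determinize, refusing to build anything larger than the ceiling. */
    static Automaton determinizeWithLimit(Automaton automaton) {
        return Operations.determinize(automaton, MAX_DETERMINIZED_STATES);
    }
}
```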

Those changes are enough to take most of what Kibana has historically sent
us without any problems. They are still huge requests and Kibana would be
better off not sending massive lists, but at least now the problem isn't
so pressing. That four thousand field list was 120kb over the wire.
That's a big request!

@nik9000 nik9000 changed the title Support much larger filters (#72277) Support much larger filters (backport of #72277) Apr 28, 2021
Files.lines leaks?!
@nik9000 nik9000 merged commit dc2e63f into elastic:7.x Apr 28, 2021