
Reverting to legacy ES client behavior where maxSockets = Infinity #113644

Merged (6 commits) Oct 6, 2021

Conversation

@kobelb (Contributor) commented Oct 1, 2021

Resolves #112756

"Release Note: Resolving issue where Kibana's server could only have 256 concurrent HTTP requests open to Elasticsearch before it started queueing"
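The release note above describes the effect of the change: the new ES client's connection agent capped concurrent sockets at 256, so the 257th in-flight request to Elasticsearch would queue inside Node. A rough sketch of the difference, using Node's `http.Agent` directly rather than the actual Kibana client code:

```typescript
import { Agent } from "http";

// Before this PR, the ES client's agent effectively behaved like this:
// at most 256 concurrent sockets per host, further requests queue in Node.
const cappedAgent = new Agent({ keepAlive: true, maxSockets: 256 });

// This PR restores the legacy client's behavior: unbounded sockets,
// so the agent never queues outbound requests to Elasticsearch.
const uncappedAgent = new Agent({ keepAlive: true, maxSockets: Infinity });

console.log(cappedAgent.maxSockets);   // 256
console.log(uncappedAgent.maxSockets); // Infinity

cappedAgent.destroy();
uncappedAgent.destroy();
```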

@kibanamachine (Contributor)

⏳ Build in-progress, with failures

Failed CI Steps

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

@kobelb kobelb marked this pull request as ready for review October 4, 2021 18:05
@kobelb kobelb requested a review from a team as a code owner October 4, 2021 18:05
@kobelb (Contributor, Author) commented Oct 4, 2021

@elasticmachine merge upstream

@@ -59,6 +59,10 @@ export function parseClientOptions(
// do not make assumption on user-supplied data content
// fixes https://github.com/elastic/kibana/issues/101944
disablePrototypePoisoningProtection: true,
agent: {
maxSockets: Infinity,
@mshustov (Contributor) commented Oct 5, 2021

from #112756 (comment)

When Kibana's communication with Elasticsearch occurs over TLS, there's rather significant overhead in establishing the TLS socket. Therefore, if there's a stampeding herd of outbound requests to Elasticsearch, we can end up in a situation where we try to create so many new TLS sockets at the same time that we end up getting TLS handshake timeouts. In these situations, it can be beneficial to set maxSockets to a lower number, so that we don't try to establish so many new TLS sockets and we do HTTP request queueing in Kibana.

If we expect that maxSockets: Infinity might affect performance for a TLS connection, maybe we can set a fixed limit instead?

agent: {
      maxSockets: config.ssl ? 512 : 1024

@kobelb (Contributor, Author) commented Oct 5, 2021

This is why I suggested making maxSockets configurable: we don't know the optimal setting. It depends on at least the following:

  • long requests to Elasticsearch
  • bursty behavior
  • CPU speed
  • operating system settings

I don't think we should artificially cap this setting to just 512 or 1024.
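kobelb's point is that the cap should be exposed as configuration rather than hard-coded. A minimal sketch of that approach (the `maxSockets` config key and the `ElasticsearchConfigLike` shape here are hypothetical illustrations, not the actual Kibana config schema):

```typescript
interface ElasticsearchConfigLike {
  // Hypothetical setting; undefined means the operator set no explicit cap.
  maxSockets?: number;
  ssl?: { enabled: boolean };
}

// Fall back to the legacy unbounded behavior unless the operator opts in
// to a fixed cap, avoiding a guessed one-size-fits-all 512/1024 value.
function resolveMaxSockets(config: ElasticsearchConfigLike): number {
  return config.maxSockets ?? Infinity;
}

console.log(resolveMaxSockets({}));                  // Infinity
console.log(resolveMaxSockets({ maxSockets: 512 })); // 512
```

This keeps the default safe for the common case while letting deployments that hit TLS handshake timeouts under bursty load dial the cap down themselves.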

@@ -59,6 +59,10 @@ export function parseClientOptions(
// do not make assumption on user-supplied data content
// fixes https://github.com/elastic/kibana/issues/101944
disablePrototypePoisoningProtection: true,
agent: {
maxSockets: Infinity,
keepAlive: config.keepAlive,

Let's configure keepAlive fallback explicitly to avoid a situation when Kibana depends on ES client defaults

keepAlive: config.keepAlive ?? true,
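The suggested fallback uses the nullish-coalescing operator, which defaults only `undefined`/`null` to `true` while preserving an explicit `keepAlive: false` (unlike `||`, which would clobber it). A small sketch with a simplified config shape:

```typescript
function resolveKeepAlive(config: { keepAlive?: boolean }): boolean {
  // `??` only falls back when keepAlive is undefined or null, so an
  // operator's explicit `false` is respected.
  return config.keepAlive ?? true;
}

console.log(resolveKeepAlive({}));                   // true
console.log(resolveKeepAlive({ keepAlive: false })); // false
console.log(resolveKeepAlive({ keepAlive: true }));  // true
```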

@kobelb (Contributor, Author) commented Oct 6, 2021

@elasticmachine merge upstream

@kibanamachine (Contributor)

💚 Build Succeeded

Metrics [docs]

✅ unchanged


@kobelb kobelb added the auto-backport Deprecated - use backport:version if exact versions are needed label Oct 6, 2021
@kobelb kobelb merged commit a4ee087 into elastic:master Oct 6, 2021
kibanamachine added a commit to kibanamachine/kibana that referenced this pull request Oct 6, 2021
…lastic#113644)

* Reverting to legacy ES client behavior where maxSockets = Infinity

* Removing unused type

* Specifying keepAlive: true by default

Co-authored-by: Kibana Machine <[email protected]>
@kibanamachine (Contributor)

💚 Backport successful

Branch: 7.x

This backport PR will be merged automatically after passing CI.

kibanamachine added a commit that referenced this pull request Oct 7, 2021
…113644) (#114211)

* Reverting to legacy ES client behavior where maxSockets = Infinity

* Removing unused type

* Specifying keepAlive: true by default

Co-authored-by: Kibana Machine <[email protected]>

Co-authored-by: Brandon Kobel <[email protected]>
@timroes timroes added the Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc label Oct 27, 2021
@elasticmachine (Contributor)

Pinging @elastic/kibana-core (Team:Core)

Labels
auto-backport Deprecated - use backport:version if exact versions are needed Feature:elasticsearch release_note:fix Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc v7.16.0 v8.0.0
Development

Successfully merging this pull request may close these issues.

Elasticsearch client's maxSockets defaults to 256
6 participants