Reverting to legacy ES client behavior where maxSockets = Infinity #113644
Conversation
⏳ Build in-progress, with failures

Failed CI steps. To update your PR or re-run it, just comment with: @elasticmachine merge upstream
@@ -59,6 +59,10 @@ export function parseClientOptions(
      // do not make assumption on user-supplied data content
      // fixes https://github.com/elastic/kibana/issues/101944
      disablePrototypePoisoningProtection: true,
+     agent: {
+       maxSockets: Infinity,
from #112756 (comment)
When Kibana's communication with Elasticsearch occurs over TLS, there is significant overhead in establishing each TLS socket. If a stampeding herd of outbound requests to Elasticsearch tries to create many new TLS sockets at the same time, we can end up getting TLS handshake timeouts. In these situations it can be beneficial to set maxSockets to a lower number, so that we don't try to establish so many new TLS sockets and instead do HTTP request queueing in Kibana.

If we expect that maxSockets: Infinity might affect performance for TLS connections, maybe we can set a fixed limit for them?
agent: {
  maxSockets: config.ssl ? 512 : 1024,
},
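To make the queueing behavior concrete, here is a minimal Node.js sketch; the host and port are placeholders, not part of this PR. Once maxSockets is reached, the agent queues further requests on its own rather than opening new TCP/TLS connections:

```ts
// Sketch: Node's http.Agent queues requests once maxSockets is reached,
// instead of opening additional TCP/TLS connections.
import * as http from 'http';

const agent = new http.Agent({ keepAlive: true, maxSockets: 512 });

// The 513th concurrent request waits in the agent's queue for a free
// socket rather than triggering another (potentially slow) handshake.
http.get({ host: 'localhost', port: 9200, path: '/', agent }, (res) => {
  res.resume();
});
```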
This is why I suggested making maxSockets configurable: we don't know the optimal setting here. It depends on at least the following:
- long requests to Elasticsearch
- bursty behavior
- CPU speed
- operating system settings
I don't think we should artificially cap this setting to just 512 or 1024.
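For illustration only, here is a hypothetical sketch of what a configurable setting could look like with @kbn/config-schema; the maxSockets key, its bounds, and the resolveMaxSockets helper are assumptions for the sake of the example, not something this PR adds:

```ts
// Hypothetical: exposing maxSockets in the elasticsearch config schema.
// The key name and default are assumptions for illustration only.
import { schema, TypeOf } from '@kbn/config-schema';

const configSchema = schema.object({
  // Unset means "no artificial cap", i.e. Infinity.
  maxSockets: schema.maybe(schema.number({ min: 1 })),
});

type EsConfig = TypeOf<typeof configSchema>;

const resolveMaxSockets = (config: EsConfig): number =>
  config.maxSockets ?? Infinity;
```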
@@ -59,6 +59,10 @@ export function parseClientOptions(
      // do not make assumption on user-supplied data content
      // fixes https://github.com/elastic/kibana/issues/101944
      disablePrototypePoisoningProtection: true,
+     agent: {
+       maxSockets: Infinity,
+       keepAlive: config.keepAlive,
Let's configure the keepAlive fallback explicitly, to avoid a situation where Kibana depends on the ES client's defaults:

keepAlive: config.keepAlive ?? true,
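Combined with the revert above, the suggested end state for the agent options would look roughly like this (a sketch based on the two suggestions in this review):

```ts
agent: {
  maxSockets: Infinity,
  // Fall back explicitly instead of relying on the ES client's default.
  keepAlive: config.keepAlive ?? true,
},
```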
@elasticmachine merge upstream

💚 Build Succeeded
Metrics [docs] | History
…lastic#113644)

* Reverting to legacy ES client behavior where maxSockets = Infinity
* Removing unused type
* Specifying keepAlive: true by default

Co-authored-by: Kibana Machine <[email protected]>
💚 Backport successful
This backport PR will be merged automatically after passing CI.
…113644) (#114211)

* Reverting to legacy ES client behavior where maxSockets = Infinity
* Removing unused type
* Specifying keepAlive: true by default

Co-authored-by: Kibana Machine <[email protected]>
Co-authored-by: Brandon Kobel <[email protected]>
Pinging @elastic/kibana-core (Team:Core)
Resolves #112756
"Release Note: Resolving issue where Kibana's server could only have 256 concurrent HTTP requests open to Elasticsearch before it started queueing"