Reduce the amount of has_privileges calls #161229
Comments
Pinging @elastic/kibana-security (Team:Security)
Pinging @elastic/response-ops (Team:ResponseOps)
Based on the very long and intricate APM trace, it appears that unwinding the issue won't be straightforward and will likely require collaboration between @elastic/response-ops, @elastic/kibana-security, and @elastic/kibana-core. We might also need help from @elastic/apm-ui to access and interpret these traces (unfortunately, I couldn't even load them in our o11y cluster and gave up after waiting around 15 minutes for the timeline to show up). Since the initial request originates from the Alerting UI and there are several requests to the …
@azasypkin happy to help out, feel free to put something on my calendar
I am not an expert in this topic and I definitely do not want to lead any of you to wrong conclusions. When looking through the APM, I see 5x a … I am not entirely sure if this should be part of this investigation, so I opened a second issue (#161382); close it if not needed :)
This is very similar to #82218. To address that issue, we added a 5s UiSettings cache to each request. I suspect that since these are server-side calls, we don't use a UiClient from the request context but instead create one with …
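For context, a minimal sketch of the per-request TTL-cache pattern described above, assuming a hypothetical `fetchSetting()` that would otherwise hit Elasticsearch on every read (this is an illustration of the technique, not Kibana's actual UiSettings implementation):

```ts
// Sketch: 5s TTL cache in front of an expensive async lookup.
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

const TTL_MS = 5_000;
const cache = new Map<string, CacheEntry<unknown>>();

async function getCached<T>(key: string, fetchSetting: () => Promise<T>): Promise<T> {
  const now = Date.now();
  const hit = cache.get(key) as CacheEntry<T> | undefined;
  if (hit && hit.expiresAt > now) {
    // Served from cache: no extra Elasticsearch round trip.
    return hit.value;
  }
  const value = await fetchSetting();
  cache.set(key, { value, expiresAt: now + TTL_MS });
  return value;
}
```

A client created per-request would hold such a cache for the lifetime of the request, so repeated reads within one rule execution collapse into a single upstream call.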
Optimizations have been made in the Kibana Alerting and Task Manager frameworks to avoid calling the has_privileges API as frequently. A downstream issue still exists whenever the search source / data view services are used (#192170), and some calls are still made by rule types (e.g., security detection rules). Calls to this endpoint should be drastically reduced now. Perhaps that is sufficient to close this GitHub issue?
Version: 8.8.1, everything on Elastic Cloud.
When running a simple ES Query alert rule that just checks whether the number of docs is above a threshold (see the sketch below):
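A hedged sketch of what such a rule might look like when created through Kibana's alerting HTTP API; the exact `params` shape can vary across versions, and `KIBANA_URL`, the index pattern, and the threshold values are placeholders:

```ts
// Sketch: create an ES Query rule via POST /api/alerting/rule.
const KIBANA_URL = 'https://my-deployment.kb.example.com'; // placeholder

const res = await fetch(`${KIBANA_URL}/api/alerting/rule`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'kbn-xsrf': 'true', // required by Kibana HTTP APIs
    Authorization: `ApiKey ${process.env.API_KEY}`,
  },
  body: JSON.stringify({
    name: 'doc count above threshold',
    rule_type_id: '.es-query',
    consumer: 'alerts',
    schedule: { interval: '1m' },
    params: {
      index: ['my-index-*'],          // placeholder index pattern
      timeField: '@timestamp',
      esQuery: JSON.stringify({ query: { match_all: {} } }),
      threshold: [1000],              // alert when doc count > 1000
      thresholdComparator: '>',
      timeWindowSize: 5,
      timeWindowUnit: 'm',
      size: 0,
    },
    actions: [],
  }),
});
console.log(res.status, await res.json());
```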
Kibana APM and Elasticsearch instrumentation then show that, from a transaction point of view, a lot of time is spent on the has_privileges checks.
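For reference, the kind of check behind each of those spans looks roughly like the following, using the Elasticsearch JS client; the node URL, index names, and privilege lists here are illustrative placeholders, not what Kibana actually sends:

```ts
import { Client } from '@elastic/elasticsearch';

// Sketch: one has_privileges round trip per check.
const client = new Client({ node: 'https://localhost:9200' });

const response = await client.security.hasPrivileges({
  cluster: ['monitor'],
  index: [
    {
      names: ['my-index-*'],
      privileges: ['read', 'view_index_metadata'],
    },
  ],
});

// True only if every requested privilege is granted.
console.log(response.has_all_requested);
```

Each such call is a full round trip to Elasticsearch's security API, which is why a rule execution that triggers many of them spends so much of its transaction time here.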
Looking at the spans:
Here is what the entire trace looks like: