
kibana_system role missing read permissions for metrics-apm-* and logs-apm-* #170031

Closed
uvNikita opened this issue Oct 27, 2023 · 12 comments · Fixed by #173001
Labels
bug (Fixes for quality problems that affect the customer experience) · Team:APM (All issues that need APM UI Team support)

Comments

@uvNikita

Kibana version:
8.10.2

Elasticsearch version:
8.10.2

Original install method (e.g. download page, yum, from source, etc.):
ECK

Describe the bug:

The kibana_system role currently has read permissions for traces-apm-*, but is missing them for logs-apm-* and metrics-apm-*.
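For illustration, the relevant fragment of the role definition (as returned by GET _security/role/kibana_system) looks roughly like this; the real role contains many more entries, and the sketch only shows the APM data-stream privileges this issue reports as incomplete:

```json
{
  "kibana_system": {
    "indices": [
      { "names": ["traces-apm-*"], "privileges": ["read"] }
    ]
  }
}
```

The report is that matching entries for logs-apm-* and metrics-apm-* are absent.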

APM application in Kibana is configured following these steps: https://www.elastic.co/guide/en/kibana/current/apm-spaces.html

This results in the following error in "Observability -> APM -> Settings -> Agent Configuration" tab: "The list of agent configurations could not be fetched. Your user may not have the sufficient permissions"

Browser Network tab shows the following failing request:

GET https://<kibana-url>/s/<kibana-space>/api/apm/settings/agent-configuration
{
  "statusCode": 500,
  "error": "Internal Server Error",
  "message": "security_exception\n\tRoot causes:\n\t\tsecurity_exception: action [indices:data/read/search] is unauthorized for service account [elastic/kibana] on indices [metrics-apm-<kibana-space>], this action is granted by the index privileges [read,all]",
  "attributes": {
    "data": {},
    "_inspect": []
  }
}

Steps to reproduce:

  1. Configure APM following these steps: https://www.elastic.co/guide/en/kibana/current/apm-spaces.html
  2. Navigate to "Observability -> APM -> Settings -> Agent Configuration"
  3. See permission Error

Alternatively, just check for missing permissions by sending GET _security/role/kibana_system

Expected behavior:
Should be able to see and create APM agent configuration.

Errors in browser console (if relevant):

GET https://<kibana-url>/s/<kibana-space>/api/apm/settings/agent-configuration
{
  "statusCode": 500,
  "error": "Internal Server Error",
  "message": "security_exception\n\tRoot causes:\n\t\tsecurity_exception: action [indices:data/read/search] is unauthorized for service account [elastic/kibana] on indices [metrics-apm-<kibana-space>], this action is granted by the index privileges [read,all]",
  "attributes": {
    "data": {},
    "_inspect": []
  }
}

Any additional context:
See also Case #01503117

@uvNikita uvNikita added the bug Fixes for quality problems that affect the customer experience label Oct 27, 2023
@botelastic botelastic bot added the needs-team Issues missing a team label label Oct 27, 2023
@jsanz jsanz added the Team:APM All issues that need APM UI Team support label Oct 27, 2023
@elasticmachine
Contributor

Pinging @elastic/apm-ui (Team:APM)

@botelastic botelastic bot removed the needs-team Issues missing a team label label Oct 27, 2023
@sorenlouv
Member

It is very surprising to me that the route GET /api/apm/settings/agent-configuration would attempt to access any index other than .apm-agent-configuration. If it does, it sounds like a bug (or a side effect I haven't considered).

Either way thanks for bringing this to our attention, and thanks for opening a PR!

@uvNikita
Author

@sqren So I tried to follow the trace of where access to indexes other than .apm-agent-configuration would be made, and I think you are right, that it's unnecessary.

Keep in mind that this is my first look at the Kibana source code, so someone else should probably double-check these findings.

What I found so far:

So it doesn't seem necessary to create internalESClient with all the configured APM indexes, since they are not used anyway. My guess is the error will go away if the indexes are removed from this call (I haven't checked it):

const internalESClient = await createInternalESClientWithContext({

Looking at this code, it seems like there is also no way for us to see configuration only for services in the current kibana space (service environment). It would be a nice feature to make APM_AGENT_CONFIGURATION_INDEX configurable in the same way as other APM indexes are, but that's probably outside of the scope of this issue (?)

Let me know if there is anything else I can do. I can give removing those indexes from the createInternalESClientWithContext call a shot, but you would probably want someone with more experience with the Kibana code to do it anyway?

@jeramysoucy
Contributor

Thank you, @uvNikita! From the Kibana Security team's perspective, we would prefer resolving this issue without having to augment kibana_system index access privileges. We typically only make exceptions for these types of indices when access is needed to support telemetry purposes. In this case, it seems like access is required only as a side effect of the current implementation.

@sqren Could someone from APM investigate further and provide some guidance to @uvNikita?

@sorenlouv
Member

sorenlouv commented Nov 3, 2023

Thank you for the writeup @uvNikita. I think you are right, that apmIndices should be removed from createInternalESClientWithContext since it's just passed through and never used. I hope to get a PR up to fix this soon.

That being said, I couldn't reproduce the issue. Most likely because I'm using the default setup, where metrics are written to metrics-apm.* and logs to logs-apm.*. It looks like in your case you are writing to metrics-apm-*. Is that intentional?

@elastic/apm-server Does the above sound right, that by default we write to metrics-apm.* and not to metrics-apm-*?

@uvNikita
Author

uvNikita commented Nov 6, 2023

@sqren Thanks for looking into it!

Yes, we are using <beat>-*-<kibana-space> template for all our indexes, including APM ones. It all works correctly except APM agent config, so would be nice to get rid of apmIndices since they are not needed anyway.

@sorenlouv
Member

sorenlouv commented Nov 7, 2023

@uvNikita It looks like we will need your PR and that this cannot be fixed in Kibana. In getConfigsAppliedToAgentsThroughFleet the metrics index (metrics-apm*) is queried. Since this may run in the background (and not within a request context), it'll run as the kibana system user, so this user must have access to the metric index.

export async function getConfigsAppliedToAgentsThroughFleet(
  internalESClient: APMInternalESClient
) {
  const params = {
    index: internalESClient.apmIndices.metric,
    size: 0,
@uvNikita
Author

uvNikita commented Nov 8, 2023

@sqren Good find!

kpatticha added a commit that referenced this issue Nov 15, 2023
Related #170031.

The internal Elasticsearch client in APM currently takes `apmIndices` as
a required argument. This exists for legacy reasons, is no longer
needed, and should therefore be removed.

Co-authored-by: Katerina <[email protected]>
@kpatticha
Contributor

The PR is merged #170733.

@smith
Contributor

smith commented Dec 1, 2023

Since #170733 is merged does that mean the PR to elasticsearch is still needed?

@sorenlouv
Member

sorenlouv commented Dec 4, 2023

Since #170733 is merged does that mean the PR to elasticsearch is still needed?

Yes. That PR turned out not to have an impact on this problem but the refactor was still an improvement. The original problem will either need to be fixed via changes to documentation, or a change to how agent configurations are retrieved.

The problem in a nutshell is that the internal Kibana user is querying user-configurable indices. This means that if the user changes the indices to something non-standard, the internal user will not have permissions to query them.

@sorenlouv
Member

Opened a PR to fix this: #173001. It replaces elastic/elasticsearch#101467

sorenlouv added a commit that referenced this issue Dec 11, 2023
…s metrics (#173001)

Closes: #170031
Replaces: elastic/elasticsearch#101467 


**Problem**
We need to know if an agent config has been applied at the edge (by APM
agents). This is determined by comparing the etag (hash) of the config,
with the etag applied at the edges.

Previously the agent config itself contained this information
(`config.applied_by_agent`) but when running with fleet this will
instead be captured in `agent_config` metric documents.

Currently the internal kibana user retrieves the `agent_config` metric
documents from the APM metric index (`metrics-apm-*` by default). This
index is configurable by the end-user so can be changed to something the
internal user doesn't have access to. This is a problem.

**Solution**

This PR replaces the calls made by the internal client with calls made
by the authenticated end user (via `APMEventClient`). This approach
works for requests made from the browser/UI but doesn't work for
background tasks run by Fleet. To work around this, the metric index is
only queried when the `APMEventClient` is available; if it is not,
`applied_by_agent` will be `undefined`.
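A minimal sketch of that fallback logic (the interface and function names here are assumptions for illustration, not the actual Kibana code):

```typescript
// Hypothetical, simplified stand-in for the real end-user-scoped client.
interface APMEventClient {
  search(index: string): Promise<{ etags: string[] }>;
}

// Fetch the etags reported by agents, but only when an end-user-scoped
// client is available (i.e. the request came from the browser/UI).
// Background Fleet tasks pass `undefined` and get `undefined` back,
// which surfaces as `applied_by_agent: undefined`.
async function getAppliedEtags(
  metricIndex: string,
  apmEventClient?: APMEventClient
): Promise<string[] | undefined> {
  if (!apmEventClient) {
    return undefined; // no end-user credentials: skip the metric index
  }
  const response = await apmEventClient.search(metricIndex);
  return response.etags;
}
```

The key design point is that the user-configurable metric index is never queried with internal (kibana_system) credentials, so no role change is needed.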
sorenlouv added a commit to sorenlouv/kibana that referenced this issue Dec 11, 2023
…s metrics (elastic#173001)

(cherry picked from commit 58c7958)
sorenlouv added a commit to kibanamachine/kibana that referenced this issue Dec 11, 2023
…s metrics (elastic#173001)

(cherry picked from commit 58c7958)
sorenlouv added a commit to sorenlouv/kibana that referenced this issue Dec 11, 2023
…s metrics (elastic#173001)

(cherry picked from commit 58c7958)
sorenlouv pushed a commit to sorenlouv/kibana that referenced this issue May 1, 2024
…s metrics (elastic#173001)

(cherry picked from commit 58c7958)