
[Elastic Agent] Agent.id of running processes #21121

Closed
ph opened this issue Sep 16, 2020 · 16 comments · Fixed by #26394
Assignees

Comments

@ph
Contributor

ph commented Sep 16, 2020

Running the Agent with the System and Endpoint integrations, all agent.id values were different for the same host:

Agent: bffe5cf9-2436-476e-b4cb-39c64235b866
Filebeat: 40154036-66e3-4b71-9505-54c5e6ca74aa
Metricbeat: b9e0d066-a255-4e31-9dfe-13b1419da034
Endpoint: 49ecf6c5-6e09-4a36-8fc5-8f8dbaa77649

I think we should have the same ID on everything so that we can correlate events generated by the "Elastic Agent".

@elasticmachine
Collaborator

Pinging @elastic/ingest-management (Team:Ingest Management)

@ph
Contributor Author

ph commented Sep 16, 2020

@blakerouse @ruflin WDYT?

@ferullo

ferullo commented Sep 16, 2020

cc @scunningham @kevinlog

For my part, I am ambivalent whether Agent, Beats, Endpoint all share the same ID or not. Endpoint reports elastic.agent.id in all documents, which is Agent's ID, so Endpoint documents can be correlated even if Endpoint and Agent don't share the same ID.

@blakerouse
Contributor

@ph I am surprised they are all different; I would expect them to all be the same.

@ruflin
Contributor

ruflin commented Sep 17, 2020

I assume all the elastic.agent.id values are the same? It is just that the agent.id values are different because they are generated by each process? I agree this might be confusing, as the processes should be an implementation detail.

@ruflin
Contributor

ruflin commented Sep 17, 2020

Second thought: I wonder if we should even ship agent.id for the processes, as not in all cases the process itself is actually the agent but sometimes the observer.

@ph
Contributor Author

ph commented Sep 17, 2020

@ruflin @blakerouse So the fix would make sure that all processes send elastic.agent.id with the same ID, the one from the Agent.

Concerning agent.id, is this coming from the actual input in Metricbeat/Filebeat?

@ruflin
Contributor

ruflin commented Sep 18, 2020

Yes, I would already expect today that elastic.agent.id is identical everywhere. Is this also not the case?

For agent.id, I think each input should decide to send it or not. But that will be more a Beats problem to solve. By default, it should not be sent.

@kevinlog

kevinlog commented Sep 18, 2020

> For my part, I am ambivalent whether Agent, Beats, Endpoint all share the same ID or not. Endpoint reports elastic.agent.id in all documents, which is Agent's ID, so Endpoint documents can be correlated even if Endpoint and Agent don't share the same ID.

What's important to me is that we still have the elastic.agent.id in the Endpoint so that we can use it in the app. If elastic.agent.id and agent.id end up being the same, that's fine too.

cc @ferullo @ph @ruflin

@ph
Contributor Author

ph commented Oct 14, 2020

@blakerouse I believe the metrics/logs collected by the Elastic Agent itself don't include the agent ID?

@ph ph assigned blakerouse and unassigned ruflin Oct 14, 2020
@ph ph added the v7.11.0 label Oct 14, 2020
@andrewkroh
Member

I think it would be good to change the agent.id of the running processes to match what we show to users in the Fleet UI. It can be confusing to have different IDs, and it leaks a bit of an implementation detail that Agent currently consists of several different processes (which might change in the future).

[Screenshot taken 2021-06-09: Fleet UI showing the Agent ID]

@ph @blakerouse What would it take to change this for Beats with Agent (not worrying about Endpoint here)? This was my thinking of the options:

  • Add a processor that sets (overwrites) agent.id similar to how elastic_agent.id is set today. This is really simple, but the actual UUID the beat has in its meta.json might leak in other places (logs or monitoring data).
  • Add an option to the Beat to configure the agent ID.
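The first option could look roughly like the snippet below, injected into the Beat configuration that Agent generates. This is only a sketch: the exact processor and injection mechanism are assumptions on my part, and the UUID shown is the example Agent ID from this issue, not a real value.

```yaml
# Hypothetical processor injected by Agent into each Beat's generated config.
# Assumes add_fields overwrites an existing target field when re-set.
processors:
  - add_fields:
      target: agent
      fields:
        id: bffe5cf9-2436-476e-b4cb-39c64235b866  # the Fleet Agent ID (example value from this issue)
```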

@andrewkroh andrewkroh added the Team:Elastic-Agent Label for the Agent team label Jun 9, 2021
@elasticmachine
Collaborator

Pinging @elastic/agent (Team:Agent)

@ph ph added 7.15 Candidate and removed v7.11.0 labels Jun 10, 2021
@ph
Contributor Author

ph commented Jun 10, 2021

> Add a processor that sets (overwrites) agent.id similar to how elastic_agent.id is set today. This is really simple, but the actual UUID the beat has in its meta.json might leak in other places (logs or monitoring data).

Not sure if that would really leak; I think this would take precedence? @michalpristas

@blakerouse
Contributor

I would think the processor would overwrite the agent.id set on any event, but the original ID could possibly leak if it is copied into another field. In that case the two would not match, but again that would be a different field and might be acceptable (if it is even used in another field).

@andrewkroh
Member

Then would you be OK if I went ahead with opening a PR to add in the processor to overwrite the agent.id value?

@ph
Contributor Author

ph commented Jun 17, 2021 via email

andrewkroh added a commit to andrewkroh/beats that referenced this issue Jun 21, 2021
This updates the inject_agent_info rule to set the `agent.id` field to the value of the Fleet Agent ID.
Previously this value was only added to the `elastic_agent.id` field, and the `agent.id` field was
a random UUID generated the first time a Beat process was run. And each Beat process would
have its own UUID.

This change affects metricbeat, filebeat, heartbeat, osquerybeat, and packetbeat (these are the
Beats that have an integration with Agent today). Heartbeat's Agent spec was missing the
`inject_agent_info` so I added it.

Closes elastic#21121
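For reference, the `inject_agent_info` rule lives in each Beat's Agent spec file. A minimal sketch of what the entry added for Heartbeat could look like follows; only the rule name comes from the commit message, and the surrounding structure is an assumption based on how other spec rules are listed:

```yaml
# Sketch of a fragment of Heartbeat's Agent spec file (illustrative).
rules:
  - inject_agent_info: {}
```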
andrewkroh added a commit that referenced this issue Jun 22, 2021

(Same commit message as above.)

Closes #21121
mergify bot pushed a commit that referenced this issue Jun 22, 2021

(Same commit message as above.)

Closes #21121

(cherry picked from commit bb950bf)
andrewkroh added a commit that referenced this issue Jun 22, 2021

(Same commit message as above.)

Closes #21121

(cherry picked from commit bb950bf)

Co-authored-by: Andrew Kroh <[email protected]>
7 participants