
Disable Deployment and Replicaset enrichment by default #62

Merged · 2 commits · Oct 11, 2023

Conversation

@gizas (Contributor) commented Oct 4, 2023

Setting AddResourceMetadataConfig.Deployment: false and AddResourceMetadataConfig.Cronjob: false by default, as per the discussion in #31 (comment)
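
(In user-facing terms this is roughly equivalent to the following standalone-agent provider defaults; a sketch based on the providers.kubernetes settings shown later in this thread, exact keys may differ per release:)

providers.kubernetes:
  add_resource_metadata:
    deployment: false   # previously true by default
    cronjob: false      # previously true by default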

@gizas gizas requested a review from a team as a code owner October 4, 2023 15:00
@gizas gizas requested review from fearful-symmetry, leehinman, ChrsMark and tommyers-elastic and removed request for a team October 4, 2023 15:00
@gizas gizas added the Team:Cloudnative-Monitoring Label for the Cloud Native Monitoring team label Oct 5, 2023
@gizas gizas self-assigned this Oct 5, 2023
@elasticmachine (Collaborator) commented

💚 Build Succeeded


cc @gizas

@@ -58,7 +58,7 @@ func GetDefaultResourceMetadataConfig() *AddResourceMetadataConfig {
 	return &AddResourceMetadataConfig{
 		Node:       metaCfg,
 		Namespace:  metaCfg,
-		Deployment: true,
-		CronJob:    true,
+		Deployment: false,
+		CronJob:    false,
Member


The change looks sane to me. This would need a changelog entry in the proper section as a breaking change.

Since this is a breaking change (with valid reasoning) please wait for validation from @tommyers-elastic and @mlunadia before merging.


+1 from me

@bartoszcisek

Can you please make it at least configurable? We use kubernetes.deployment to distinguish between pod types while configuring elastic-agent. Hint-based discovery is still limited, so we rely on condition-based discovery.
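
(For context, a minimal sketch of the condition-based configuration this refers to, assuming a standalone agent policy where the kubernetes provider exposes kubernetes.deployment.name; the log path and the 'nginx' deployment name are hypothetical:)

inputs:
  - type: filestream
    use_output: default
    streams:
      - paths:
          - /var/log/containers/*${kubernetes.container.id}.log
        # only collect logs from Pods owned by the 'nginx' Deployment;
        # this requires deployment enrichment to be enabled
        condition: ${kubernetes.deployment.name} == 'nginx'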

@gizas (Contributor, Author)

@bartoszcisek you can still enable it in your manifest.

You need to configure it under the kubernetes provider:

data:
  agent.yml: |-
    ...
    providers.kubernetes:
      add_resource_metadata:
        deployment: true
        cronjob: true

I am testing with 8.11.0-SNAPSHOT. I exec into my elastic-agent pod and run:

elastic-agent inspect -v --variables --variables-wait 2s
....
- data_stream:
    namespace: default
  id: filestream-container-logs-d105fc5a-36b3-4d5a-b97a-288d141781de-kubernetes-7107f2fe-3ae6-49fb-9aa6-5de2a5e23259.local-path-provisioner
  meta:
    package:
      name: kubernetes
      version: 1.46.0
  name: kubernetes-1
  original_id: filestream-container-logs-d105fc5a-36b3-4d5a-b97a-288d141781de
  - add_fields:
      fields:
        container:
          name: local-path-provisioner
        deployment:   < -------This is present
          name: local-path-provisioner

And if you also need your metrics to be enriched, you can add the add_resource_metadata block to the state_pod and state_container datasets:

streams:
  - id: >-
      kubernetes/metrics-kubernetes.state_container-d105fc5a-36b3-4d5a-b97a-288d141781de
    data_stream:
      type: metrics
      dataset: kubernetes.state_container
    ...
    add_resource_metadata:
      deployment: true
      cronjob: true

  - id: >-
      kubernetes/metrics-kubernetes.state_pod-d105fc5a-36b3-4d5a-b97a-288d141781de
    data_stream:
      type: metrics
      dataset: kubernetes.state_pod
    ...
    add_metadata: true
    add_resource_metadata:
      deployment: true
      cronjob: true

There is also the option to use ingest pipelines as described in this new doc https://www.elastic.co/guide/en/fleet/master/ingest-pipeline-kubernetes.html

@mlunadia commented Oct 6, 2023

> Since this is a breaking change (with valid reasoning) please wait for validation from @tommyers-elastic and @mlunadia before merging.

@ChrsMark what does this mean for users? What do they have to do when they upgrade?
Are there other mechanisms we currently have (or could add) to communicate breaking changes to them besides release notes? E.g. manifest comments, notifications on upgrade, etc.

@ChrsMark (Member) commented Oct 10, 2023

> Since this is a breaking change (with valid reasoning) please wait for validation from @tommyers-elastic and @mlunadia before merging.

> @ChrsMark what does this mean for users? What do they have to do when they upgrade?

Hey @mlunadia. So on upgrade I expect that users:

  1. will, by default, stop seeing the kubernetes.deployment.name field populated in the Pod's metadata whenever the Pod is owned by a ReplicaSet and a Deployment.
  2. will, by default, stop seeing the kubernetes.cronjob.name field populated in the Pod's metadata whenever the Pod is owned by a Job and a CronJob.

> Are there other mechanisms we currently have (or could add) to communicate breaking changes to them besides release notes? E.g. manifest comments, notifications on upgrade, etc.

Maybe we can add a commented-out section in the manifests to document these settings, so that users who are interested can easily re-enable them. We can also mention it in our docs. @gizas what do you think?
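
(For illustration, such a commented-out section in the standalone manifest's agent.yml could look roughly like this; a sketch, not final wording:)

providers.kubernetes:
  # Deployment and CronJob enrichment of Pod metadata is disabled by default.
  # Uncomment the following lines to re-enable it:
  # add_resource_metadata:
  #   deployment: true
  #   cronjob: true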
