Cherry-pick #20512 to 7.x: Add k8s manifest leveraging leaderelection #20600
Cherry-pick of PR #20512 to 7.x branch. Original message:
What does this PR do?
This PR proposes new Kubernetes manifests for the sake of #19731, leveraging the unique Autodiscover provider implemented in #20281.
With these manifests, Metricbeat's DaemonSet alone will be able to monitor the whole k8s cluster, since at any given time one DaemonSet Pod will hold the leadership and be responsible for coordinating the metricsets that collect cluster-wide metrics.
We might need a meta issue to keep track of deprecating the Deployment manifests (if needed).
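As a rough sketch of the mechanism described above (the actual manifest is part of this PR; the provider options and values below are illustrative assumptions based on the unique provider from #20281), the DaemonSet's Metricbeat config would enable leader election along these lines:

```yaml
# Sketch only: option names follow the unique autodiscover provider
# from #20281; hosts, period, and metricsets here are placeholders.
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # Only the provider instance currently holding the leader lease
      # starts the templates below, so cluster-wide metricsets run on
      # exactly one DaemonSet Pod at a time.
      unique: true
      templates:
        - config:
            - module: kubernetes
              hosts: ["kube-state-metrics:8080"]
              period: 10s
              metricsets:
                - state_node
                - state_deployment
```

If the leader Pod dies, another Pod acquires the lease and starts these metricsets, which is what removes the need for a separate Deployment.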
Why is it important?
To get rid of the requirement to maintain/handle two different deployment strategies.
Checklist
CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.
How to test this PR locally
Test the manifests similarly to the testing steps of #20281 (comment).
1. Update metricbeat-leaderelection-kubernetes.yml to set the proper image (i.e. docker.elastic.co/beats/metricbeat:7.10.0-SNAPSHOT) and the proper ES output (i.e. on Elastic Cloud).
2. Apply the metricbeat-leaderelection-kubernetes.yml manifest and make sure that all the desired metricsets are shipping events and that the k8s-related Dashboards are populated with data correctly.
3. kubectl delete the leader Pod and make sure that the leadership is either transferred to another Pod or gained again by the new replacement Pod.
Example:
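One way to exercise the failover step above from the command line (the lease name, namespace, and Pod label here are assumptions; adjust them to the deployed manifest):

```shell
# Inspect Lease objects to see which Pod currently holds leadership
# (the exact lease name is an assumption; list leases to find it).
kubectl -n kube-system get lease
kubectl -n kube-system describe lease <metricbeat-lease-name>

# Delete the current leader Pod...
kubectl -n kube-system delete pod <leader-pod-name>

# ...then confirm from the logs that another Pod acquired the lease.
kubectl -n kube-system logs -l k8s-app=metricbeat | grep -i leader
```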
Related issues