
[ksm] Allow collecting pod metrics from the Kubelet #28811

Merged 7 commits into main on Sep 2, 2024

Conversation

Member

@davidor davidor commented Aug 27, 2024

What does this PR do?

This PR introduces an option in the Kubernetes State Metrics (KSM) check that allows node agents to collect pod metrics from the Kubelet, instead of collecting them from the API server in the Cluster Agent or the cluster check runners.

This is useful in clusters with a large number of pods, where emitting all pod metrics from a single check instance can overload it and cause issues such as long execution times, interference with other checks, high CPU/memory requirements, and general instability.

The new option, pod_collection_mode, accepts these values:

  • "" or "default": current behavior (this is the default).
  • "node_kubelet": collects pods from the Kubelet. This is meant to be enabled when the check is running on the node agent.
  • "cluster_unassigned": collects pods from the API server, but only the unassigned ones. This is meant to be enabled when the check is running on the Cluster Agent or a cluster check runner while "node_kubelet" is enabled on the node agents, because unassigned pods cannot be collected from node agents.
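The three accepted values can be sketched as a small validation/dispatch step. This is an illustrative sketch only: the function and constant names below are hypothetical, not the agent's actual (Go) implementation.

```python
# Hypothetical sketch of how the KSM check could normalize the new option.
# Names here are illustrative; the real logic lives in the agent's Go code.

VALID_MODES = {"", "default", "node_kubelet", "cluster_unassigned"}

def resolve_pod_collection_mode(instance_config: dict) -> str:
    """Return the effective pod collection mode for a check instance."""
    mode = instance_config.get("pod_collection_mode", "")
    if mode not in VALID_MODES:
        raise ValueError(f"unknown pod_collection_mode: {mode!r}")
    # "" and "default" both mean the current API-server-based behavior.
    return "default" if mode in ("", "default") else mode
```

With this shape, an instance that omits the option keeps today's behavior, while node agents and the Cluster Agent can be configured with the two complementary modes.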

One thing to note: when the node agents collect pod metrics from the Kubelet and the Cluster Agent or cluster check runner collects metrics for the other resources, label joins are not supported for pod metrics unless the join source is also a pod.

There are some things that could be optimized or organized a bit better. For example, I'd like to use workloadmeta instead of the pod watcher in the reflector. But I prefer to start with a simple solution and optimize and reorganize in future PRs if needed.

Describe how to test/QA your changes

This is the minimal config needed to enable the feature using the helm chart:

datadog:
  kubelet:
    tlsVerify: false # Local cluster for testing
  confd:
    kubernetes_state_core.yaml: |-
      init_config:
      instances:
        - collectors:
            - pods
          pod_collection_mode: "node_kubelet"
clusterAgent:
  confd:
    kubernetes_state_core.yaml: |-
      init_config:
      instances:
        - pod_collection_mode: "cluster_unassigned"

The idea is to verify that using this config we get the same metrics with the same tags as when using the current way of running the KSM check. I have a notebook for this, ask me.
We also need to check that this doesn't introduce any performance issue on the node agents. I'm currently checking this on some internal clusters. Ask me for more details.
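The parity check described above (same metrics, same tags, in both modes) can be automated with a small diff helper. This is an illustrative QA sketch, not part of the PR; the metric name and tags in the usage are made-up examples.

```python
# Illustrative QA helper: compare metrics scraped from the current setup
# against the new node_kubelet setup and report any tag-set differences.

def diff_metrics(baseline: dict, candidate: dict) -> dict:
    """Return metrics whose tag sets differ between two runs.

    Each argument maps metric name -> set of tag strings.
    An empty result means the two modes produced identical output.
    """
    diffs = {}
    for name in baseline.keys() | candidate.keys():
        base_tags = baseline.get(name)
        cand_tags = candidate.get(name)
        if base_tags != cand_tags:
            diffs[name] = {"baseline": base_tags, "candidate": cand_tags}
    return diffs
```

Feeding this with payloads captured from both deployments would surface any metric that is missing or tagged differently after switching modes.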

@davidor davidor added the team/container-platform The Container Platform Team label Aug 27, 2024
@davidor davidor added this to the 7.58.0 milestone Aug 27, 2024
@davidor davidor requested review from a team as code owners August 27, 2024 14:12
@pr-commenter

pr-commenter bot commented Aug 27, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv create-vm --pipeline-id=43249506 --os-family=ubuntu

Note: This applies to commit a3d6311

@pr-commenter

pr-commenter bot commented Aug 27, 2024

Regression Detector

Regression Detector Results

Run ID: 2ad5fca2-c1f9-4ea9-a251-7b6baeaa5ecf Metrics dashboard Target profiles

Baseline: b0127cb
Comparison: a3d6311

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | links |
|------|------------|------|----------|-------------|-------|
|      | pycheck_lots_of_tags | % cpu utilization | +1.62 | [-0.90, +4.13] | Logs |
|      | file_tree | memory utilization | +0.17 | [+0.10, +0.24] | Logs |
|      | otel_to_otel_logs | ingress throughput | +0.14 | [-0.67, +0.95] | Logs |
|      | idle | memory utilization | +0.08 | [+0.05, +0.11] | Logs |
|      | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.00, +0.00] | Logs |
|      | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | Logs |
|      | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.09 | [-1.06, +0.87] | Logs |
|      | basic_py_check | % cpu utilization | -0.76 | [-3.43, +1.92] | Logs |
|      | tcp_syslog_to_blackhole | ingress throughput | -2.20 | [-14.75, +10.36] | Logs |

Bounds Checks

| perf | experiment | bounds_check_name | replicates_passed |
|------|------------|-------------------|-------------------|
|      | idle | memory_usage | 10/10 |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we flag a change in performance as a "regression" (a change worth investigating further) only if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@davidor davidor force-pushed the davidor/ksm-pods-in-node-agent branch from dbdfda2 to a354744 Compare August 28, 2024 07:00
@davidor davidor force-pushed the davidor/ksm-pods-in-node-agent branch from a354744 to 662cd6b Compare August 29, 2024 09:27
Contributor

@clamoriniere clamoriniere left a comment


Only a few comments to discuss future changes and usability; otherwise this PR is 💯
Thanks for the clear and clean implementation 🙇

@davidor davidor force-pushed the davidor/ksm-pods-in-node-agent branch 2 times, most recently from 0c5e61e to 826ee2b Compare August 30, 2024 09:19
@davidor davidor force-pushed the davidor/ksm-pods-in-node-agent branch from a07d953 to a3d6311 Compare August 30, 2024 12:34
@davidor
Member Author

davidor commented Sep 2, 2024

/merge

@dd-devflow

dd-devflow bot commented Sep 2, 2024

🚂 MergeQueue: pull request added to the queue

The median merge time in main is 22m.

Use /merge -c to cancel this operation!

@dd-mergequeue dd-mergequeue bot merged commit 2e7d2bb into main Sep 2, 2024
231 checks passed
@dd-mergequeue dd-mergequeue bot deleted the davidor/ksm-pods-in-node-agent branch September 2, 2024 07:33