
[pkg/util/kubernetes][CONTINT-477] add rate_limit_queries_remaining_min telemetry in cluster agent external metrics server #20497

Conversation

@adel121 adel121 (Contributor) commented Oct 30, 2023

What does this PR do?

This PR adds rate_limit_queries_remaining_min telemetry to the cluster agent external metrics server. This metric represents the minimum number of remaining requests that the Datadog client can send to the backend, observed over the last 2*refresh_period.

rate_limit_queries_remaining_min is updated in 2 cases:

  • The newly received rate-limit remaining-queries value is less than the current value of rate_limit_queries_remaining_min
  • rate_limit_queries_remaining_min has not been updated for more than 2 * refresh_period

The refresh period is set to 30 seconds by default. A minimal sketch of this update rule is shown below.
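
The following is a minimal, self-contained sketch of the behaviour described above; the minTracker name and its update/get methods mirror the discussion in the review threads below, but the code is illustrative rather than the actual implementation:

package main

import (
	"fmt"
	"sync"
	"time"
)

// minTracker keeps the minimum value observed over a sliding expiry window.
type minTracker struct {
	sync.Mutex
	val            int           // current minimum; -1 means "not set yet"
	timestamp      time.Time     // when the minimum was last updated
	expiryDuration time.Duration // 2 * refresh period
}

func newMinTracker(expiry time.Duration) *minTracker {
	return &minTracker{val: -1, expiryDuration: expiry}
}

// update stores newVal as the new minimum if it is lower than or equal to the
// current minimum, if no minimum has been recorded yet, or if the stored
// minimum has not been refreshed for longer than the expiry duration.
func (mt *minTracker) update(newVal int) {
	mt.Lock()
	defer mt.Unlock()

	isSet := mt.val >= 0
	hasExpired := time.Since(mt.timestamp) > mt.expiryDuration

	if newVal <= mt.val || !isSet || hasExpired {
		mt.val = newVal
		mt.timestamp = time.Now()
	}
}

// get returns the current minimum value (-1 if nothing has been observed yet).
func (mt *minTracker) get() int {
	mt.Lock()
	defer mt.Unlock()
	return mt.val
}

func main() {
	tracker := newMinTracker(60 * time.Second) // 2 * 30s default refresh period
	for _, remaining := range []int{353, 294, 197, 386} {
		tracker.update(remaining)
	}
	fmt.Println(tracker.get()) // 197: the minimum observed within the window
}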

Motivation

As a Datadog Cluster Agent user, I would like to monitor the state of the external metrics server so I can react better to Datadog API query rate-limit errors, since they can impact my applications' autoscaling.

The official datadog-cluster-agent integration already provides some metrics to see the current rate-limit state per API key. This works fine when the rate-limit quota is set for a period of 1 hour: for example, a customer can send 20k queries per hour, so the metric (generated every 15 seconds) decreases progressively over the 1-hour period, and a monitor can alert when the metric drops to 10% of the quota.

But with the new rate-limit configuration for large customers, this metric becomes useless since the rate-limit period is now 10 seconds (for example, 200 queries every 10 seconds), which makes the value flaky and inaccurate when the metric is only generated every 15 seconds.

Having a metric that indicates the minimum value of rate_limit_queries_remaining over some period of time is useful to stabilise the signal and to react to rate-limit errors in advance.

Additional Notes

  • The cluster agent sends 0-N queries to the backend every refresh period
  • Currently, rate_limit_queries_remaining is updated only after the response of the last (N-th) query is received.
  • This makes the metric flaky, especially when the Rate-Limit period becomes small.
  • With this new metric, the minimum value of rate_limit_queries_remaining is retained for at least 2*refresh_period and is re-evaluated after every single query sent to the backend, which makes the metric less flaky and more useful for creating monitors that alert before a rate-limit error occurs.

Possible Drawbacks / Trade-offs

In order for rate_limit_queries_remaining and rate_limit_queries_remaining_min to show a significant difference, the cluster agent needs to send a lot of queries to the backend (so several HPA/WPA objects are needed).

Describe how to test/QA your changes

  • Start the cluster agent on a cluster that has several HPA/WPA, collecting external metrics via complex queries.
  • Run the following command several times on the leader cluster agent: datadog-cluster-agent telemetry | grep rate_limit_queries_remaining
  • The result will show you both rate_limit_queries_remaining and rate_limit_queries_remaining_min.
  • You should find that rate_limit_queries_remaining_min is always less than or equal to rate_limit_queries_remaining and that it takes at least 2*refresh_period (60 seconds by default) for it to increase.

Doing this test produced the following consecutive outputs:

# HELP rate_limit_queries_remaining number of queries remaining before next reset
# TYPE rate_limit_queries_remaining gauge
rate_limit_queries_remaining{endpoint="/api/v1/query",join_leader="true"} 353
# HELP rate_limit_queries_remaining_min minimum number of queries remaining before next reset observed during an expiration interval of 2*refresh period
# TYPE rate_limit_queries_remaining_min gauge
rate_limit_queries_remaining_min{endpoint="/api/v1/query",join_leader="true"} 197
adelhajhassan@COMP-Y0K2MF67D1 ~ % kubectl exec datadog-cluster-agent-6cdb6df59b-sktds   -n datadog-agent -- datadog-cluster-agent telemetry | grep rate_limit_queries_remaining
# HELP rate_limit_queries_remaining number of queries remaining before next reset
# TYPE rate_limit_queries_remaining gauge
rate_limit_queries_remaining{endpoint="/api/v1/query",join_leader="true"} 294
# HELP rate_limit_queries_remaining_min minimum number of queries remaining before next reset observed during an expiration interval of 2*refresh period
# TYPE rate_limit_queries_remaining_min gauge
rate_limit_queries_remaining_min{endpoint="/api/v1/query",join_leader="true"} 197
adelhajhassan@COMP-Y0K2MF67D1 ~ % kubectl exec datadog-cluster-agent-6cdb6df59b-sktds   -n datadog-agent -- datadog-cluster-agent telemetry | grep rate_limit_queries_remaining
# HELP rate_limit_queries_remaining number of queries remaining before next reset
# TYPE rate_limit_queries_remaining gauge
rate_limit_queries_remaining{endpoint="/api/v1/query",join_leader="true"} 386
# HELP rate_limit_queries_remaining_min minimum number of queries remaining before next reset observed during an expiration interval of 2*refresh period
# TYPE rate_limit_queries_remaining_min gauge
rate_limit_queries_remaining_min{endpoint="/api/v1/query",join_leader="true"} 197
adelhajhassan@COMP-Y0K2MF67D1 ~ % kubectl exec datadog-cluster-agent-6cdb6df59b-sktds   -n datadog-agent -- datadog-cluster-agent telemetry | grep rate_limit_queries_remaining
# HELP rate_limit_queries_remaining number of queries remaining before next reset
# TYPE rate_limit_queries_remaining gauge
rate_limit_queries_remaining{endpoint="/api/v1/query",join_leader="true"} 354
# HELP rate_limit_queries_remaining_min minimum number of queries remaining before next reset observed during an expiration interval of 2*refresh period
# TYPE rate_limit_queries_remaining_min gauge
rate_limit_queries_remaining_min{endpoint="/api/v1/query",join_leader="true"} 197
adelhajhassan@COMP-Y0K2MF67D1 ~ % kubectl exec datadog-cluster-agent-6cdb6df59b-sktds   -n datadog-agent -- datadog-cluster-agent telemetry | grep rate_limit_queries_remaining
# HELP rate_limit_queries_remaining number of queries remaining before next reset
# TYPE rate_limit_queries_remaining gauge
rate_limit_queries_remaining{endpoint="/api/v1/query",join_leader="true"} 376
# HELP rate_limit_queries_remaining_min minimum number of queries remaining before next reset observed during an expiration interval of 2*refresh period
# TYPE rate_limit_queries_remaining_min gauge
rate_limit_queries_remaining_min{endpoint="/api/v1/query",join_leader="true"} 220
adelhajhassan@COMP-Y0K2MF67D1 ~ % kubectl exec datadog-cluster-agent-6cdb6df59b-sktds   -n datadog-agent -- datadog-cluster-agent telemetry | grep rate_limit_queries_remaining
# HELP rate_limit_queries_remaining number of queries remaining before next reset
# TYPE rate_limit_queries_remaining gauge
rate_limit_queries_remaining{endpoint="/api/v1/query",join_leader="true"} 332
# HELP rate_limit_queries_remaining_min minimum number of queries remaining before next reset observed during an expiration interval of 2*refresh period
# TYPE rate_limit_queries_remaining_min gauge
rate_limit_queries_remaining_min{endpoint="/api/v1/query",join_leader="true"} 248

Reviewer's Checklist

  • If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
  • Use the major_change label if your change either has a major impact on the code base, is impacting multiple teams or is changing important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a releasenote.
  • A release note has been added or the changelog/no-changelog label has been applied.
  • Changed code has automated tests for its functionality.
  • Adequate QA/testing plan information is provided if the qa/skip-qa label is not applied.
  • At least one team/.. label has been applied, indicating the team(s) that should QA this change.
  • If applicable, docs team has been notified or an issue has been opened on the documentation repo.
  • If applicable, the need-change/operator and need-change/helm labels have been applied.
  • If applicable, the k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.
  • If applicable, the config template has been updated.

@adel121 adel121 added this to the 7.50.0 milestone Oct 30, 2023
@adel121 adel121 force-pushed the adelhajhassan/add_rate_limit_queries_remaining_min_telemetry_in_external_metrics_server branch 4 times, most recently from 00f70ce to 901a9c5 Compare October 30, 2023 22:13
pr-commenter bot commented Oct 31, 2023

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: ce18fb53-1a4f-4b4c-bc76-aac1f3a8775b
Baseline: edfde91
Comparison: aec1e20
Total datadog-agent CPUs: 7

Explanation

A regression test is an integrated performance test for datadog-agent in a repeatable rig, with varying configuration for datadog-agent. What follows is a statistical summary of a brief datadog-agent run for each configuration across the SHAs given above. The goal of these tests is to determine quickly if datadog-agent performance is changed, and to what degree, by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
experiment                               | goal               | Δ mean % | Δ mean % CI    | confidence
-----------------------------------------|--------------------|----------|----------------|-----------
process_agent_standard_check_with_stats  | egress throughput  | +0.31    | [-1.69, +2.30] | 19.99%
idle                                     | egress throughput  | +0.17    | [-2.29, +2.63] | 9.12%
process_agent_standard_check             | egress throughput  | +0.14    | [-3.40, +3.68] | 5.10%
dogstatsd_string_interner_8MiB_100k      | ingress throughput | +0.03    | [-0.06, +0.13] | 45.43%
uds_dogstatsd_to_api                     | ingress throughput | +0.03    | [-0.14, +0.21] | 24.16%
dogstatsd_string_interner_8MiB_50k       | ingress throughput | +0.02    | [-0.01, +0.06] | 81.42%
tcp_syslog_to_blackhole                  | ingress throughput | +0.01    | [-0.12, +0.14] | 12.56%
dogstatsd_string_interner_128MiB_1k      | ingress throughput | +0.00    | [-0.14, +0.14] | 2.49%
dogstatsd_string_interner_8MiB_100       | ingress throughput | +0.00    | [-0.13, +0.13] | 0.44%
dogstatsd_string_interner_64MiB_1k       | ingress throughput | +0.00    | [-0.13, +0.13] | 0.31%
dogstatsd_string_interner_64MiB_100      | ingress throughput | -0.00    | [-0.14, +0.14] | 0.17%
dogstatsd_string_interner_128MiB_100     | ingress throughput | -0.00    | [-0.14, +0.14] | 0.24%
tcp_dd_logs_filter_exclude               | ingress throughput | -0.00    | [-0.06, +0.06] | 2.32%
trace_agent_msgpack                      | ingress throughput | -0.00    | [-0.13, +0.13] | 1.11%
dogstatsd_string_interner_8MiB_1k        | ingress throughput | -0.00    | [-0.10, +0.10] | 3.27%
dogstatsd_string_interner_8MiB_10k       | ingress throughput | -0.00    | [-0.05, +0.04] | 13.74%
trace_agent_json                         | ingress throughput | -0.02    | [-0.15, +0.12] | 16.18%
file_tree                                | egress throughput  | -0.09    | [-1.93, +1.75] | 6.15%
file_to_blackhole                        | egress throughput  | -0.32    | [-0.76, +0.13] | 76.02%
process_agent_real_time_mode             | egress throughput  | -0.37    | [-2.88, +2.14] | 19.21%
otel_to_otel_logs                        | ingress throughput | -0.51    | [-2.09, +1.07] | 40.33%

@adel121 adel121 changed the title add rate_limit_queries_remaining_min telemetry in cluster agent exter… [pkg/util/kubernetes][CONTINT-477] add rate_limit_queries_remaining_min telemetry in cluster agent external metrics server Oct 31, 2023
@adel121 adel121 marked this pull request as ready for review October 31, 2023 09:39
@adel121 adel121 requested review from a team as code owners October 31, 2023 09:39
@adel121 adel121 force-pushed the adelhajhassan/add_rate_limit_queries_remaining_min_telemetry_in_external_metrics_server branch from caf55fd to 997c478 Compare October 31, 2023 14:40
@@ -61,6 +65,12 @@ const (
queryEndpoint = "/api/v1/query"
)

var (
refreshPeriod = config.Datadog.GetInt("external_metrics_provider.refresh_period")

Contributor:

I don't remember if config.Datadog is already initialized when the var is created. Did you get the chance to test this ?

Contributor Author:

Yes I have tested this by building a custom cluster agent image and deploying it on a cluster with several hpa and DatadogMetric objects. The output shows that the refresh period is respected. So I think refreshPeriod is correctly initialized.

Contributor:

This is correct, config.Datadog will not be parsed yet, you will always get the default value. You will probably need a lazy instantiation on first call with a sync.Once.

Contributor Author:

Ah yes I see your point.

I will update this.
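
A minimal sketch of the suggested lazy instantiation (the helper name getRefreshPeriod is hypothetical; the snippet assumes the standard library sync package and the same config import used in the diff above):

var (
	refreshPeriod     int
	refreshPeriodOnce sync.Once
)

// getRefreshPeriod reads the config value on first use, after config.Datadog
// has been parsed, instead of at package initialization time.
func getRefreshPeriod() int {
	refreshPeriodOnce.Do(func() {
		refreshPeriod = config.Datadog.GetInt("external_metrics_provider.refresh_period")
	})
	return refreshPeriod
}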

@adel121 adel121 requested a review from AliDatadog November 7, 2023 10:48
le "github.com/DataDog/datadog-agent/pkg/util/kubernetes/apiserver/leaderelection/metrics"
)

type minRemainingRequests struct {

Contributor:

The class is pretty generic and should be called something like minTracker. You should also take the target gauge as a parameter instead of targeting rateLimitsRemainingMin directly.

Contributor Author:

I agree, minTracker would make more sense as a name here.

However, I will remove the gauge rateLimitsRemainingMin completely from the struct, because passing it as a parameter would still create a dependency on the tags that need to be set with the gauge. I think it would be better to set the gauge externally (i.e. outside the update method).

isSet := mrr.val >= 0
hasExpired := time.Since(mrr.timestamp) > mrr.expiryDuration

if mrr.val >= newValFloat || !isSet || hasExpired {

Contributor:

For readability we usually write the comparison with the same logical intent: in our case we want to store newVal if it is lower than the existing val, which is expressed more clearly as newVal < mrr.val.
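
For illustration only, the reordered condition could read as follows (logically equivalent to the original check; the assignment body is assumed, not taken from the diff):

// Store newVal when it is lower than or equal to the current minimum,
// when no minimum is set yet, or when the stored minimum has expired.
if newVal <= mrr.val || !isSet || hasExpired {
	mrr.val = newVal
	mrr.timestamp = time.Now()
}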

mrr.Lock()
defer mrr.Unlock()

newValFloat, err := strconv.Atoi(newVal)

Contributor:

You get an int, not a float; it's usually not necessary to suffix with the type, so just newVal is enough.

@adel121 adel121 force-pushed the adelhajhassan/add_rate_limit_queries_remaining_min_telemetry_in_external_metrics_server branch from 997c478 to 37ed2fc Compare November 7, 2023 13:20
@adel121 adel121 (Contributor Author) commented Nov 7, 2023

Thanks @AliDatadog and @vboulineau for your remarks

I've updated the PR accordingly

@adel121 adel121 requested a review from vboulineau November 7, 2023 13:21
newVal, err := strconv.Atoi(queryLimits.Remaining)
if err == nil {
getMinRemainingRequestsTracker().update(newVal)
rateLimitsRemainingMin.Set(float64(minRemainingRequestsTracker.val), queryEndpoint, le.JoinLeaderLabel)

Contributor:

You should provide a get() on minTracker just to avoid leaking internal details

Contributor Author:

thanks

updated.
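
For reference, a sketch of what the call site could look like once a get() accessor is exposed (illustrative only; identifiers follow the diff above):

newVal, err := strconv.Atoi(queryLimits.Remaining)
if err == nil {
	tracker := getMinRemainingRequestsTracker()
	tracker.update(newVal)
	// The gauge is set from the accessor rather than by reaching into the
	// tracker's internal fields.
	rateLimitsRemainingMin.Set(float64(tracker.get()), queryEndpoint, le.JoinLeaderLabel)
}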

@adel121 adel121 requested a review from vboulineau November 7, 2023 13:41
@adel121 adel121 force-pushed the adelhajhassan/add_rate_limit_queries_remaining_min_telemetry_in_external_metrics_server branch from 79fd062 to 62f5276 Compare November 7, 2023 15:03
@adel121 adel121 merged commit c50f63a into main Nov 8, 2023
7 checks passed
@adel121 adel121 deleted the adelhajhassan/add_rate_limit_queries_remaining_min_telemetry_in_external_metrics_server branch November 8, 2023 15:43