[pkg/util/kubernetes][CONTINT-477] add rate_limit_queries_remaining_min telemetry in cluster agent external metrics server #20497
Conversation
Force-pushed from 00f70ce to 901a9c5 (compare)
Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: ce18fb53-1a4f-4b4c-bc76-aac1f3a8775b

Explanation

A regression test is an integrated performance test. Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control; we represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval. We decide that a change in performance is a "regression" -- a change worth investigating further -- if the change is both statistically significant at 90.00% confidence and larger in magnitude than the ±5.00% threshold described below.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence, OR have been detected as newly erratic. Negative values of "Δ mean %" mean that the baseline is faster; positive values mean that the comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table is omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.
Force-pushed from caf55fd to 997c478 (compare)
```
@@ -61,6 +65,12 @@ const (
	queryEndpoint = "/api/v1/query"
)

var (
	refreshPeriod = config.Datadog.GetInt("external_metrics_provider.refresh_period")
```
I don't remember if `config.Datadog` is already initialized when the var is created. Did you get the chance to test this?
Yes, I have tested this by building a custom cluster agent image and deploying it on a cluster with several HPA and DatadogMetric objects. The output shows that the refresh period is respected, so I think `refreshPeriod` is correctly initialized.
This is correct: `config.Datadog` will not be parsed yet, so you will always get the default value. You will probably need a lazy instantiation on first call with a `sync.Once`.
Ah yes, I see your point. I will update this.
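For reference, a minimal sketch of the lazy instantiation suggested above, assuming the file's existing `config` import plus `sync`; the `getRefreshPeriod` helper and the `Once` variable name are illustrative, not the PR's actual code:

```go
var (
	refreshPeriodOnce sync.Once
	refreshPeriod     int
)

// getRefreshPeriod resolves the setting on first call, by which
// point the config has been parsed, instead of at package init time.
func getRefreshPeriod() int {
	refreshPeriodOnce.Do(func() {
		refreshPeriod = config.Datadog.GetInt("external_metrics_provider.refresh_period")
	})
	return refreshPeriod
}
```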
le "github.com/DataDog/datadog-agent/pkg/util/kubernetes/apiserver/leaderelection/metrics" | ||
) | ||
|
||
type minRemainingRequests struct { |
The class is pretty generic and should be called something like `minTracker`. You should also take the target gauge as a parameter instead of targeting `rateLimitsRemainingMin` directly.
I agree, `minTracker` would make more sense as a name here. However, I will remove the gauge `rateLimitsRemainingMin` completely from the struct, because passing it as a parameter would still create a dependency on the tags that need to be set with the gauge. I think it would be better to set the gauge externally (i.e. outside the update method).
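For illustration, the decoupled type might look something like this sketch (field names guessed from the diffs in this thread; the gauge is deliberately absent):

```go
// minTracker keeps the minimum value observed over an expiry window.
// It knows nothing about the gauge it ultimately feeds: the caller
// reads the tracked value and sets the gauge with its own tags.
type minTracker struct {
	sync.Mutex
	val            int           // current minimum, negative when unset
	timestamp      time.Time     // when val was last stored
	expiryDuration time.Duration // e.g. 2 * refresh_period
}
```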
```
	isSet := mrr.val >= 0
	hasExpired := time.Since(mrr.timestamp) > mrr.expiryDuration

	if mrr.val >= newValFloat || !isSet || hasExpired {
```
For readability we usually write the comparison in the direction of the logical intent. In our case we want to store `newVal` if it is lower than the existing `val`, which is expressed more clearly with `newVal < mrr.val`.
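In other words (a paraphrase of the remark; `shouldStore` is just an illustrative name, not a suggested diff):

```go
// Before: the condition leads with the stored value.
shouldStore := mrr.val >= newValFloat || !isSet || hasExpired

// Suggested: lead with the new value, matching the intent
// "store newVal when it is lower than the current minimum".
shouldStore = newVal < mrr.val || !isSet || hasExpired
```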
```
	mrr.Lock()
	defer mrr.Unlock()

	newValFloat, err := strconv.Atoi(newVal)
```
You get an `int`, not a `float`, so it's usually not necessary to suffix the name with the type; `newVal` is enough.
Force-pushed from 997c478 to 37ed2fc (compare)
Thanks @AliDatadog and @vboulineau for your remarks, I've updated the PR accordingly.
```
	newVal, err := strconv.Atoi(queryLimits.Remaining)
	if err == nil {
		getMinRemainingRequestsTracker().update(newVal)
		rateLimitsRemainingMin.Set(float64(minRemainingRequestsTracker.val), queryEndpoint, le.JoinLeaderLabel)
```
You should provide a `get()` on `minTracker`, just to avoid leaking internal details.
Thanks, updated.
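For instance, a small accessor along these lines (a sketch, not necessarily the merged code):

```go
// get returns the tracked minimum under the lock, so callers
// never touch the struct's fields directly.
func (mt *minTracker) get() int {
	mt.Lock()
	defer mt.Unlock()
	return mt.val
}
```

The caller above then reads `getMinRemainingRequestsTracker().get()` instead of reaching into `minRemainingRequestsTracker.val`.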
Force-pushed from 79fd062 to 62f5276 (compare)
What does this PR do?
This PR adds `rate_limit_queries_remaining_min` telemetry in the cluster agent external metrics server. This metric represents the minimum number of remaining requests that the Datadog client can send to the backend over the last 2*refresh_period.

`rate_limit_queries_remaining_min` is updated in 2 cases (sketched below):

- a new `rate_limit_queries_remaining` value lower than the current minimum is observed
- `rate_limit_queries_remaining_min` has not been updated for more than 2 * refresh_period

The refresh period is set to 30 seconds by default.
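A sketch of how those two cases can translate into the tracker's update method, following the `minTracker` naming from the review thread (not necessarily the merged code):

```go
func (mt *minTracker) update(newVal int) {
	mt.Lock()
	defer mt.Unlock()

	isSet := mt.val >= 0                                       // a negative sentinel means "no minimum stored yet"
	hasExpired := time.Since(mt.timestamp) > mt.expiryDuration // expiry window of 2 * refresh_period

	// Store newVal when it is a new minimum (case 1), or when the
	// stored minimum is unset or older than the window (case 2).
	if newVal < mt.val || !isSet || hasExpired {
		mt.val = newVal
		mt.timestamp = time.Now()
	}
}
```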
Motivation
As a Datadog Cluster-Agent user, I would like to monitor the state of the external-metrics server to better react to Datadog API query rate-limit errors, since they can impact my applications' autoscaling.

The official datadog-cluster-agent integration already provides some metrics to see the current rate-limit state by API key. This works fine when the rate-limit quota is set for a period of 1 hour: for example, a customer can send 20k queries per hour, so the metric (generated every 15s) decreases over the 1-hour period, and a monitor can be created to alert when the metric returns only 10% of the quota.

But with the new rate-limit configuration for large customers, this metric becomes useless since the rate-limit period is now 10s (for example, 200 queries every 10s), which makes the value flaky and inaccurate when the metric is generated every 15s.
Having a metric that indicates the minimum value of `rate_limit_queries_remaining` over some period of time will be useful to stabilise the metric and react to rate-limit errors in advance.

Additional Notes

- `rate_limit_queries_remaining` is updated only after the response of the last (N-th) query is received.
- `rate_limit_queries_remaining` is conserved during at least `2*refresh_period`, and is investigated after every single query sent to the backend, which makes the metric less flaky and more useful for creating monitors that alert before a rate-limit error occurs.

Possible Drawbacks / Trade-offs
In order for `rate_limit_queries_remaining` and `rate_limit_queries_remaining_min` to show a significant difference, the cluster agent needs to send a lot of queries to the backend (so we need to have several HPA/WPA objects).

Describe how to test/QA your changes
Run `datadog-cluster-agent telemetry | grep rate_limit_queries_remaining` to see both `rate_limit_queries_remaining` and `rate_limit_queries_remaining_min`. Check that `rate_limit_queries_remaining_min` is always less than `rate_limit_queries_remaining` and that it takes at least 2*refresh_period (by default 60 seconds) to increase.

Doing this test produced the following consecutive outputs:
Reviewer's Checklist

Triage

- The milestone is set.
- Add the `major_change` label if your change either has a major impact on the code base, is impacting multiple teams or is changing important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer facing change use a release note.
- The `changelog/no-changelog` label has been applied.
- The `qa/skip-qa` label is not applied.
- A `team/..` label has been applied, indicating the team(s) that should QA this change.
- The `need-change/operator` and `need-change/helm` labels have been applied.
- A `k8s/<min-version>` label has been applied, indicating the lowest Kubernetes version compatible with this feature.