
Add logs agent pipeline performance telemetry #30744

Merged
merged 23 commits into main on Nov 15, 2024

Conversation

gh123man
Member

@gh123man gh123man commented Nov 4, 2024

What does this PR do?

This PR introduces logs pipeline performance telemetry and makes internal buffers configurable.

Pipeline telemetry is composed of two major pieces:

Utilization Ratio - a measurement of how busy a unit of logic is versus its idle time.

Utilization Items/Bytes - a measurement of the number of elements and bytes present in a single component (and its input channel).

Both of these metrics are exposed as gauges. Because log messages are processed very quickly, the raw data is noisy, so some simple aggregation is performed to mitigate this:

  • The utilization ratio is summed over a 1-second window and aggregated into an EWMA, which is exposed as a gauge.
  • Utilization items/bytes is measured by computing the EWMA of ingress - egress every second and reporting it as a gauge.

In order to measure capacity we have to record ingress and egress throughout the logs pipeline. This means adding ingress and egress markers around channel operations.
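A minimal sketch of what such markers can look like around a channel operation. The `CapacityMonitor` here is a simplified stand-in for illustration, not the PR's exact type or API:

```go
package main

import "fmt"

// CapacityMonitor is a simplified stand-in: it counts items and bytes
// entering and leaving a pipeline component.
type CapacityMonitor struct {
	ingress, egress           int64
	ingressBytes, egressBytes int64
}

func (m *CapacityMonitor) AddIngress(bytes int64) { m.ingress++; m.ingressBytes += bytes }
func (m *CapacityMonitor) AddEgress(bytes int64)  { m.egress++; m.egressBytes += bytes }

// InFlight reports how many items are currently held in the component
// and its input channel (ingress - egress).
func (m *CapacityMonitor) InFlight() int64 { return m.ingress - m.egress }

type message struct{ content []byte }

func main() {
	mon := &CapacityMonitor{}
	in := make(chan message, 4)

	// Producer side: mark ingress as each message enters the channel.
	for _, s := range []string{"a", "bb", "ccc"} {
		msg := message{content: []byte(s)}
		mon.AddIngress(int64(len(msg.content)))
		in <- msg
	}

	// Consumer side: mark egress once a message has been processed.
	msg := <-in
	mon.AddEgress(int64(len(msg.content)))

	fmt.Println(mon.InFlight()) // two messages still buffered
}
```

Sampling `InFlight()` every second and feeding it into the EWMA yields the utilization items gauge described above.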

This PR is broken down into individual commits that should make it easier to review.

Motivation

Increase visibility of logs agent performance to drive optimization efforts.

Describe how to test/QA your changes

  • QA done by SMP run on PR
  • Spot check that utilization telemetry still works

Possible Drawbacks / Trade-offs

Additional Notes

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Nov 4, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR changes on a VM:

inv create-vm --pipeline-id=49071745 --os-family=ubuntu

Note: This applies to commit 7f57f8c


cit-pr-commenter bot commented Nov 4, 2024

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: bfdf86af-342d-4c8f-a5ce-390a0b43c42c

Baseline: 3e3d2d2
Comparison: 7f57f8c
Diff

Optimization Goals: ❌ Significant changes detected

perf experiment goal Δ mean % Δ mean % CI trials links
tcp_syslog_to_blackhole ingress throughput -26.45 [-26.50, -26.40] 1 Logs

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI trials links
basic_py_check % cpu utilization +2.09 [-1.70, +5.89] 1 Logs
quality_gate_idle memory utilization +0.55 [+0.49, +0.60] 1 Logs bounds checks dashboard
file_to_blackhole_1000ms_latency_linear_load egress throughput +0.21 [-0.28, +0.69] 1 Logs
file_to_blackhole_300ms_latency egress throughput +0.09 [-0.10, +0.27] 1 Logs
file_to_blackhole_0ms_latency egress throughput +0.01 [-0.45, +0.47] 1 Logs
file_to_blackhole_100ms_latency egress throughput +0.01 [-0.31, +0.32] 1 Logs
uds_dogstatsd_to_api ingress throughput +0.00 [-0.10, +0.10] 1 Logs
tcp_dd_logs_filter_exclude ingress throughput -0.00 [-0.01, +0.01] 1 Logs
uds_dogstatsd_to_api_cpu % cpu utilization -0.00 [-0.72, +0.71] 1 Logs
file_to_blackhole_1000ms_latency egress throughput -0.03 [-0.52, +0.45] 1 Logs
file_to_blackhole_500ms_latency egress throughput -0.06 [-0.30, +0.18] 1 Logs
otel_to_otel_logs ingress throughput -0.57 [-1.25, +0.10] 1 Logs
file_tree memory utilization -1.32 [-1.44, -1.19] 1 Logs
quality_gate_idle_all_features memory utilization -3.12 [-3.26, -2.99] 1 Logs bounds checks dashboard
pycheck_lots_of_tags % cpu utilization -4.51 [-7.82, -1.19] 1 Logs
tcp_syslog_to_blackhole ingress throughput -26.45 [-26.50, -26.40] 1 Logs

Bounds Checks: ❌ Failed

perf experiment bounds_check_name replicates_passed links
file_to_blackhole_1000ms_latency lost_bytes 0/10
file_to_blackhole_300ms_latency lost_bytes 0/10
file_to_blackhole_500ms_latency lost_bytes 0/10
quality_gate_idle memory_usage 3/10 bounds checks dashboard
quality_gate_idle_all_features memory_usage 7/10 bounds checks dashboard
file_to_blackhole_0ms_latency lost_bytes 10/10
file_to_blackhole_0ms_latency memory_usage 10/10
file_to_blackhole_1000ms_latency memory_usage 10/10
file_to_blackhole_1000ms_latency_linear_load memory_usage 10/10
file_to_blackhole_100ms_latency lost_bytes 10/10
file_to_blackhole_100ms_latency memory_usage 10/10
file_to_blackhole_300ms_latency memory_usage 10/10
file_to_blackhole_500ms_latency memory_usage 10/10

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@gh123man gh123man force-pushed the brian/logs-pipeline-telemetry-AMLII-2134 branch from 8db8b77 to a0c07a0 on November 4, 2024 21:48
blt added a commit that referenced this pull request Nov 5, 2024
This commit updates the file_to_blackhole experiments to rely on the lading
logrotate FS generator, allowing us to assert on whether the log Agent loses
bytes in the course of its operation. I've added a new check asserting that no
bytes are lost, although I do not expect this to pass on higher latency intake
experiments yet.

REF #30744
REF DataDog/lading#1090

Signed-off-by: Brian L. Troutwine <[email protected]>
@gh123man gh123man marked this pull request as ready for review November 5, 2024 17:57
@gh123man gh123man requested review from a team as code owners November 5, 2024 17:57
@gh123man gh123man requested a review from hush-hush November 5, 2024 17:57
Member

@hush-hush hush-hush left a comment


Looks good for ASC

pkg/logs/metrics/metrics.go (outdated, resolved)
pkg/logs/processor/processor.go (outdated, resolved)
pkg/logs/processor/processor.go (outdated, resolved)
@gh123man gh123man requested review from a team as code owners November 6, 2024 18:37

cit-pr-commenter bot commented Nov 6, 2024

Go Package Import Differences

Baseline: 3e3d2d2
Comparison: 7f57f8c

binary / os / arch — change

Each binary below changes by +1, -0, adding:
+github.com/DataDog/datadog-agent/pkg/util/utilizationtracker

agent: linux/amd64, linux/arm64, windows/amd64, darwin/amd64, darwin/arm64
iot-agent: linux/amd64, linux/arm64
heroku-agent: linux/amd64
cluster-agent: linux/amd64, linux/arm64
cluster-agent-cloudfoundry: linux/amd64, linux/arm64
dogstatsd: linux/amd64, linux/arm64
process-agent: linux/amd64, linux/arm64, windows/amd64, darwin/amd64, darwin/arm64
heroku-process-agent: linux/amd64
security-agent: linux/amd64, linux/arm64
serverless: linux/amd64, linux/arm64
system-probe: linux/amd64, linux/arm64, windows/amd64

Comment on lines +88 to +93
TlmUtilizationRatio = telemetry.NewGauge("logs_component_utilization", "ratio", []string{"name", "instance"}, "Gauge of the utilization ratio of a component")
// TlmUtilizationItems is the capacity of a component by number of elements.
// Both the number of items and the number of bytes are aggregated and exposed as an EWMA.
TlmUtilizationItems = telemetry.NewGauge("logs_component_utilization", "items", []string{"name", "instance"}, "Gauge of the number of items currently held in a component and its buffers")
// TlmUtilizationBytes is the capacity of a component by number of bytes.
TlmUtilizationBytes = telemetry.NewGauge("logs_component_utilization", "bytes", []string{"name", "instance"}, "Gauge of the number of bytes currently held in a component and its buffers")
Member

Same as the config global accessor, we should consider not using the telemetry global accessor.

I know it's a bit painful to refactor these parts of the codebase, but the more we keep using globals, the harder it is going to be in the future. I'm happy to help with the effort.

Member Author

I agree.
I think we need to prioritize refactoring more of the logs agent to use components.
Unfortunately, we only got so far as to componentize the top-level bits.

@gh123man gh123man requested review from vickenty and remeh November 8, 2024 20:19
Contributor

@vickenty vickenty left a comment

Would it make sense to add some tests for the new telemetry code?

pkg/logs/metrics/utilization_monitor.go (outdated, resolved)
pkg/logs/metrics/utilization_monitor.go (outdated, resolved)
pkg/logs/processor/processor.go (outdated, resolved)
pkg/logs/processor/processor.go (outdated, resolved)
pkg/logs/sender/sender.go (resolved)
pkg/logs/tailers/file/tailer.go (outdated, resolved)
Contributor

@remeh remeh left a comment

+1 on Vikentiy's comment about possibly adding unit tests on the new telemetry code if possible.

- Fix capacity sampling ewma (every second)
- added unit tests
- move to its own package
- rename to be general purpose
@gh123man gh123man requested review from a team as code owners November 13, 2024 18:03
@gh123man
Member Author

@vickenty Thanks for the feedback.
I've reworked our built-in utilization tracker and used it to address the remaining feedback.

Here are the commits:
Remove dependency on 3rd party ewma library
Refactor utilization tracker
Refactor utilization monitor to use utilization_tracker

I've also added unit test coverage where appropriate.

@gh123man gh123man requested a review from vickenty November 13, 2024 18:12
tasks/modules.py (outdated, resolved)
Contributor

📥 📢 Info: this pull request increases the binary size of the serverless extension by 37120 bytes. Each MB of binary size increase means about 10ms of additional cold start time, so this pull request would increase cold start time by roughly 0ms.

Debug info

If you have questions, we are happy to help, come visit us in the #serverless slack channel and provide a link to this comment.

We suggest you consider adding the !serverless build tag to remove any new dependencies not needed in the serverless extension.

Contributor

Serverless Benchmark Results

BenchmarkStartEndInvocation comparison between 438bf1b and 7cb4a92.

tl;dr

Use these benchmarks as an insight tool during development.

  1. Skim down the vs base column in each chart. If there is a ~, then there was no statistically significant change to the benchmark. Otherwise, ensure the estimated percent change is either negative or very small.

  2. The last row of each chart is the geomean. Ensure this percentage is either negative or very small.

What is this benchmarking?

The BenchmarkStartEndInvocation compares the amount of time it takes to call the start-invocation and end-invocation endpoints. For universal instrumentation languages (Dotnet, Golang, Java, Ruby), this represents the majority of the duration overhead added by our tracing layer.

The benchmark is run using a large variety of lambda request payloads. In the charts below, there is one row for each event payload type.

How do I interpret these charts?

The charts below come from benchstat. They represent the statistical change in duration (sec/op), memory overhead (B/op), and allocations (allocs/op).

The benchstat docs explain how to interpret these charts.

Before the comparison table, we see common file-level configuration. If there are benchmarks with different configuration (for example, from different packages), benchstat will print separate tables for each configuration.

The table then compares the two input files for each benchmark. It shows the median and 95% confidence interval summaries for each benchmark before and after the change, and an A/B comparison under "vs base". ... The p-value measures how likely it is that any differences were due to random chance (i.e., noise). The "~" means benchstat did not detect a statistically significant difference between the two inputs. ...

Note that "statistically significant" is not the same as "large": with enough low-noise data, even very small changes can be distinguished from noise and considered statistically significant. It is, of course, generally easier to distinguish large changes from noise.

Finally, the last row of the table shows the geometric mean of each column, giving an overall picture of how the benchmarks changed. Proportional changes in the geomean reflect proportional changes in the benchmarks. For example, given n benchmarks, if sec/op for one of them increases by a factor of 2, then the sec/op geomean will increase by a factor of ⁿ√2.

I need more help

First off, do not worry if the benchmarks are failing. They are not tests. The intention is for them to be a tool for you to use during development.

If you would like a hand interpreting the results come chat with us in #serverless-agent in the internal DataDog slack or in #serverless in the public DataDog slack. We're happy to help!

Benchmark stats
goos: linux
goarch: amd64
pkg: github.com/DataDog/datadog-agent/pkg/serverless/daemon
cpu: AMD EPYC 7763 64-Core Processor                
                                      │ baseline/benchmark.log │       current/benchmark.log        │
                                      │         sec/op         │   sec/op     vs base               │
api-gateway-appsec.json                            86.75µ ± 2%   89.14µ ± 4%       ~ (p=0.089 n=10)
api-gateway-kong-appsec.json                       69.32µ ± 2%   69.79µ ± 1%       ~ (p=0.353 n=10)
api-gateway-kong.json                              67.59µ ± 1%   67.56µ ± 1%       ~ (p=0.971 n=10)
api-gateway-non-proxy-async.json                   105.7µ ± 2%   107.4µ ± 1%       ~ (p=0.052 n=10)
api-gateway-non-proxy.json                         106.8µ ± 1%   106.7µ ± 3%       ~ (p=0.631 n=10)
api-gateway-websocket-connect.json                 71.74µ ± 2%   70.51µ ± 1%  -1.71% (p=0.023 n=10)
api-gateway-websocket-default.json                 63.59µ ± 1%   63.35µ ± 2%       ~ (p=0.971 n=10)
api-gateway-websocket-disconnect.json              64.16µ ± 1%   63.62µ ± 1%       ~ (p=0.247 n=10)
api-gateway.json                                   116.1µ ± 1%   114.8µ ± 1%  -1.09% (p=0.003 n=10)
application-load-balancer.json                     65.03µ ± 2%   64.68µ ± 2%       ~ (p=0.165 n=10)
cloudfront.json                                    49.01µ ± 4%   48.11µ ± 2%  -1.85% (p=0.043 n=10)
cloudwatch-events.json                             39.25µ ± 1%   39.32µ ± 2%       ~ (p=0.631 n=10)
cloudwatch-logs.json                               67.71µ ± 2%   66.88µ ± 1%  -1.23% (p=0.043 n=10)
custom.json                                        31.44µ ± 2%   31.27µ ± 2%       ~ (p=0.579 n=10)
dynamodb.json                                      95.01µ ± 1%   93.72µ ± 1%  -1.36% (p=0.004 n=10)
empty.json                                         29.51µ ± 1%   30.09µ ± 2%  +1.97% (p=0.001 n=10)
eventbridge-custom.json                            48.62µ ± 3%   48.63µ ± 2%       ~ (p=0.971 n=10)
eventbridge-no-bus.json                            47.91µ ± 2%   47.17µ ± 2%       ~ (p=0.123 n=10)
eventbridge-no-timestamp.json                      47.80µ ± 1%   47.19µ ± 6%       ~ (p=0.363 n=10)
eventbridgesns.json                                63.76µ ± 3%   63.20µ ± 2%       ~ (p=0.393 n=10)
eventbridgesqs.json                                73.07µ ± 1%   72.19µ ± 2%       ~ (p=0.063 n=10)
http-api.json                                      74.28µ ± 1%   72.40µ ± 2%  -2.54% (p=0.000 n=10)
kinesis-batch.json                                 72.77µ ± 1%   70.88µ ± 2%  -2.59% (p=0.001 n=10)
kinesis.json                                       55.33µ ± 2%   57.37µ ± 3%  +3.70% (p=0.002 n=10)
s3.json                                            61.18µ ± 2%   61.97µ ± 4%       ~ (p=0.631 n=10)
sns-batch.json                                     93.04µ ± 1%   91.39µ ± 1%  -1.78% (p=0.005 n=10)
sns.json                                           68.60µ ± 1%   67.94µ ± 1%       ~ (p=0.393 n=10)
snssqs.json                                        120.1µ ± 2%   116.1µ ± 2%  -3.35% (p=0.005 n=10)
snssqs_no_dd_context.json                          108.3µ ± 2%   104.8µ ± 1%  -3.17% (p=0.000 n=10)
sqs-aws-header.json                                60.97µ ± 2%   58.15µ ± 4%  -4.62% (p=0.000 n=10)
sqs-batch.json                                     98.55µ ± 2%   93.95µ ± 1%  -4.67% (p=0.000 n=10)
sqs.json                                           73.94µ ± 3%   71.74µ ± 2%  -2.98% (p=0.009 n=10)
sqs_no_dd_context.json                             69.91µ ± 2%   65.41µ ± 4%  -6.43% (p=0.000 n=10)
stepfunction.json                                  47.43µ ± 3%   45.18µ ± 4%  -4.75% (p=0.000 n=10)
geomean                                            67.20µ        66.39µ       -1.21%

                                      │ baseline/benchmark.log │        current/benchmark.log        │
                                      │          B/op          │     B/op      vs base               │
api-gateway-appsec.json                           37.35Ki ± 0%   37.35Ki ± 0%       ~ (p=0.984 n=10)
api-gateway-kong-appsec.json                      26.95Ki ± 0%   26.95Ki ± 0%       ~ (p=0.494 n=10)
api-gateway-kong.json                             24.45Ki ± 0%   24.45Ki ± 0%       ~ (p=0.753 n=10)
api-gateway-non-proxy-async.json                  48.13Ki ± 0%   48.14Ki ± 0%       ~ (p=0.754 n=10)
api-gateway-non-proxy.json                        47.37Ki ± 0%   47.35Ki ± 0%       ~ (p=0.352 n=10)
api-gateway-websocket-connect.json                25.54Ki ± 0%   25.53Ki ± 0%       ~ (p=0.118 n=10)
api-gateway-websocket-default.json                21.44Ki ± 0%   21.44Ki ± 0%       ~ (p=0.697 n=10)
api-gateway-websocket-disconnect.json             21.22Ki ± 0%   21.22Ki ± 0%       ~ (p=0.238 n=10)
api-gateway.json                                  49.61Ki ± 0%   49.58Ki ± 0%  -0.06% (p=0.000 n=10)
application-load-balancer.json                    23.32Ki ± 0%   23.32Ki ± 0%       ~ (p=0.210 n=10)
cloudfront.json                                   17.70Ki ± 0%   17.69Ki ± 0%       ~ (p=0.271 n=10)
cloudwatch-events.json                            11.75Ki ± 0%   11.75Ki ± 0%       ~ (p=0.564 n=10)
cloudwatch-logs.json                              53.40Ki ± 0%   53.40Ki ± 0%       ~ (p=0.305 n=10)
custom.json                                       9.771Ki ± 0%   9.774Ki ± 0%       ~ (p=0.468 n=10)
dynamodb.json                                     40.84Ki ± 0%   40.82Ki ± 0%  -0.04% (p=0.022 n=10)
empty.json                                        9.319Ki ± 0%   9.321Ki ± 0%       ~ (p=0.985 n=10)
eventbridge-custom.json                           15.02Ki ± 0%   15.01Ki ± 0%       ~ (p=0.403 n=10)
eventbridge-no-bus.json                           14.00Ki ± 0%   14.01Ki ± 0%       ~ (p=0.810 n=10)
eventbridge-no-timestamp.json                     14.01Ki ± 0%   14.02Ki ± 0%       ~ (p=0.183 n=10)
eventbridgesns.json                               20.97Ki ± 0%   20.95Ki ± 0%  -0.10% (p=0.037 n=10)
eventbridgesqs.json                               25.19Ki ± 0%   25.17Ki ± 0%       ~ (p=0.305 n=10)
http-api.json                                     23.98Ki ± 0%   23.92Ki ± 0%       ~ (p=0.093 n=10)
kinesis-batch.json                                27.17Ki ± 0%   27.11Ki ± 0%       ~ (p=0.138 n=10)
kinesis.json                                      17.96Ki ± 0%   17.94Ki ± 0%       ~ (p=0.867 n=10)
s3.json                                           20.50Ki ± 1%   20.45Ki ± 1%       ~ (p=0.280 n=10)
sns-batch.json                                    39.98Ki ± 0%   39.90Ki ± 0%  -0.22% (p=0.023 n=10)
sns.json                                          25.16Ki ± 0%   25.16Ki ± 0%       ~ (p=0.724 n=10)
snssqs.json                                       53.88Ki ± 0%   53.91Ki ± 0%       ~ (p=0.912 n=10)
snssqs_no_dd_context.json                         47.66Ki ± 0%   47.59Ki ± 0%       ~ (p=0.240 n=10)
sqs-aws-header.json                               19.38Ki ± 1%   19.48Ki ± 0%       ~ (p=0.280 n=10)
sqs-batch.json                                    42.29Ki ± 0%   42.25Ki ± 0%       ~ (p=0.481 n=10)
sqs.json                                          26.13Ki ± 0%   26.21Ki ± 0%       ~ (p=0.060 n=10)
sqs_no_dd_context.json                            21.91Ki ± 1%   21.86Ki ± 1%       ~ (p=0.086 n=10)
stepfunction.json                                 14.32Ki ± 1%   14.32Ki ± 1%       ~ (p=0.684 n=10)
geomean                                           24.62Ki        24.61Ki       -0.03%

                                      │ baseline/benchmark.log │        current/benchmark.log        │
                                      │       allocs/op        │ allocs/op   vs base                 │
api-gateway-appsec.json                             630.5 ± 0%   630.5 ± 0%       ~ (p=1.000 n=10)
api-gateway-kong-appsec.json                        489.0 ± 0%   489.0 ± 0%       ~ (p=1.000 n=10) ¹
api-gateway-kong.json                               467.0 ± 0%   467.0 ± 0%       ~ (p=1.000 n=10)
api-gateway-non-proxy-async.json                    725.0 ± 0%   725.0 ± 0%       ~ (p=1.000 n=10)
api-gateway-non-proxy.json                          716.0 ± 0%   716.0 ± 0%       ~ (p=0.474 n=10)
api-gateway-websocket-connect.json                  453.0 ± 0%   453.0 ± 0%       ~ (p=0.474 n=10)
api-gateway-websocket-default.json                  379.0 ± 0%   379.0 ± 0%       ~ (p=1.000 n=10)
api-gateway-websocket-disconnect.json               370.0 ± 0%   370.0 ± 0%       ~ (p=1.000 n=10)
api-gateway.json                                    791.0 ± 0%   790.0 ± 0%  -0.13% (p=0.005 n=10)
application-load-balancer.json                      353.0 ± 0%   353.0 ± 0%       ~ (p=1.000 n=10) ¹
cloudfront.json                                     285.0 ± 0%   285.0 ± 0%       ~ (p=0.474 n=10)
cloudwatch-events.json                              221.0 ± 0%   221.0 ± 0%       ~ (p=0.474 n=10)
cloudwatch-logs.json                                217.0 ± 0%   217.0 ± 0%       ~ (p=1.000 n=10)
custom.json                                         169.0 ± 0%   169.0 ± 1%       ~ (p=1.000 n=10)
dynamodb.json                                       590.0 ± 0%   590.0 ± 0%       ~ (p=0.211 n=10)
empty.json                                          160.5 ± 0%   160.5 ± 0%       ~ (p=1.000 n=10)
eventbridge-custom.json                             266.0 ± 0%   266.0 ± 0%       ~ (p=1.000 n=10)
eventbridge-no-bus.json                             257.5 ± 0%   258.0 ± 0%       ~ (p=1.000 n=10)
eventbridge-no-timestamp.json                       258.0 ± 0%   258.0 ± 0%       ~ (p=0.596 n=10)
eventbridgesns.json                                 326.0 ± 0%   326.0 ± 0%       ~ (p=0.104 n=10)
eventbridgesqs.json                                 367.0 ± 0%   367.0 ± 0%       ~ (p=0.495 n=10)
http-api.json                                       435.0 ± 0%   434.0 ± 0%       ~ (p=0.242 n=10)
kinesis-batch.json                                  393.0 ± 0%   392.0 ± 1%       ~ (p=0.278 n=10)
kinesis.json                                        287.5 ± 1%   287.0 ± 0%       ~ (p=1.000 n=10)
s3.json                                             360.0 ± 1%   359.5 ± 1%       ~ (p=0.438 n=10)
sns-batch.json                                      479.0 ± 0%   478.0 ± 0%       ~ (p=0.090 n=10)
sns.json                                            346.5 ± 0%   346.5 ± 0%       ~ (p=0.844 n=10)
snssqs.json                                         478.0 ± 0%   478.5 ± 0%       ~ (p=0.975 n=10)
snssqs_no_dd_context.json                           438.5 ± 1%   437.0 ± 0%       ~ (p=0.290 n=10)
sqs-aws-header.json                                 285.0 ± 1%   287.0 ± 0%       ~ (p=0.279 n=10)
sqs-batch.json                                      516.0 ± 1%   515.5 ± 0%       ~ (p=0.457 n=10)
sqs.json                                            362.5 ± 0%   364.0 ± 0%  +0.41% (p=0.034 n=10)
sqs_no_dd_context.json                              350.5 ± 1%   350.0 ± 1%       ~ (p=0.095 n=10)
stepfunction.json                                   238.0 ± 1%   238.0 ± 1%       ~ (p=0.838 n=10)
geomean                                             367.7        367.6       -0.01%
¹ all samples are equal

@gh123man gh123man added the qa/done QA done before merge and regressions are covered by tests label Nov 14, 2024
func (i *CapacityMonitor) sample() {
	select {
	case <-i.tickChan:
		i.avgItems = ewma(float64(i.ingress-i.egress), i.avgItems)
Contributor
We discussed this elsewhere already, but just for the record: a Histogram is probably more appropriate choice for a fast moving metric like this, since it will also capture short spikes that can be missed by periodic sampling like this, that would still nonetheless contribute to the agent's memory usage.
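The reviewer's point can be illustrated with a generic sketch (not the agent's telemetry API): a gauge sampled periodically can miss a spike that rises and falls between samples, while observing every value into histogram buckets records it. All values here are hypothetical:

```go
package main

import "fmt"

func main() {
	// In-flight item counts over ten fast updates; the spike at index 4
	// rises and falls between the one-in-five periodic samples.
	values := []int{2, 3, 2, 3, 50, 3, 2, 3, 2, 3}

	// Gauge-style periodic sampling: keep only every fifth value.
	var sampled []int
	for i, v := range values {
		if i%5 == 0 {
			sampled = append(sampled, v)
		}
	}

	// Histogram-style observation: bucket every value as it occurs.
	buckets := map[string]int{} // bucket label -> count
	for _, v := range values {
		if v <= 5 {
			buckets["<=5"]++
		} else {
			buckets[">5"]++
		}
	}

	fmt.Println(sampled)       // the spike at 50 never appears
	fmt.Println(buckets[">5"]) // the histogram caught it
}
```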

@gh123man
Member Author

/merge

@dd-devflow

dd-devflow bot commented Nov 15, 2024

Devflow running: /merge

View all feedback in the Devflow UI.


2024-11-15 14:44:42 UTC ℹ️ MergeQueue: pull request added to the queue

The median merge time in main is 24m.


2024-11-15 14:44:43 UTC ℹ️ MergeQueue: merge request added to the queue

The median merge time in main is 24m.

@dd-mergequeue dd-mergequeue bot merged commit de46c0a into main Nov 15, 2024
244 checks passed
@dd-mergequeue dd-mergequeue bot deleted the brian/logs-pipeline-telemetry-AMLII-2134 branch November 15, 2024 15:13
Labels
changelog/no-changelog long review PR is complex, plan time to review it qa/done QA done before merge and regressions are covered by tests team/agent-metrics-logs team/agent-processing-and-routing