[APR-55] Add support for failing over logs in High Availability mode. #23502
Conversation
LGTM for files owned by agent shared components
We may want to consider adding a condition here to force HTTP when HA is enabled. If an agent comes up during a failover, the connectivity check could fail, forcing the agent to fall back to the TCP sender, which feels like incorrect behavior in this scenario.
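The suggested condition can be modeled with a small sketch. All names and the decision logic here are hypothetical, illustrating the reviewer's point rather than the agent's actual transport-selection code, and assuming the current behavior is to fall back to TCP when the HTTP connectivity check fails:

```python
def choose_logs_transport(ha_enabled: bool, http_connectivity_ok: bool) -> str:
    """Hypothetical model of the reviewer's suggestion: when High
    Availability is enabled, force HTTP unconditionally, so that a
    connectivity check failing during a failover cannot push the agent
    onto the TCP sender (which cannot serve the failover scenario)."""
    if ha_enabled:
        return "http"
    # Existing behavior as described in the comment: fall back to TCP
    # when the HTTP connectivity check fails.
    return "http" if http_connectivity_ok else "tcp"
```

With this guard in place, an agent that comes up mid-failover still uses HTTP even if its connectivity check fails.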
Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 5c524fad-130b-40c1-9b41-0e1e04bae461

Performance changes are noted in the perf column of each table.

No significant changes in experiment optimization goals

Confidence level: 90.00%. There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | file_to_blackhole | % cpu utilization | -0.14 | [-6.73, +6.45] |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | pycheck_1000_100byte_tags | % cpu utilization | +2.65 | [-2.67, +7.98] |
| ➖ | process_agent_real_time_mode | memory utilization | +0.67 | [+0.63, +0.71] |
| ➖ | file_tree | memory utilization | +0.44 | [+0.37, +0.50] |
| ➖ | otel_to_otel_logs | ingress throughput | +0.42 | [-0.23, +1.07] |
| ➖ | process_agent_standard_check_with_stats | memory utilization | +0.20 | [+0.16, +0.23] |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.20 | [+0.12, +0.28] |
| ➖ | idle | memory utilization | +0.10 | [+0.06, +0.14] |
| ➖ | basic_py_check | % cpu utilization | +0.08 | [-2.36, +2.52] |
| ➖ | trace_agent_json | ingress throughput | +0.03 | [-0.01, +0.07] |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.05, +0.05] |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.06, +0.06] |
| ➖ | trace_agent_msgpack | ingress throughput | -0.02 | [-0.03, -0.01] |
| ➖ | file_to_blackhole | % cpu utilization | -0.14 | [-6.73, +6.45] |
| ➖ | process_agent_standard_check | memory utilization | -0.26 | [-0.29, -0.22] |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.39 | [-3.51, +2.73] |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- only if all of the following criteria are true:

- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
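The three criteria above can be expressed as a short check. This is an illustrative restatement of the stated rules, not the detector's actual code, and the real statistics behind the confidence interval are of course more involved:

```python
def is_regression(delta_mean_pct: float, ci_low: float, ci_high: float,
                  erratic: bool = False, effect_size_tol: float = 5.0) -> bool:
    """Flag a change as a regression only if all three criteria hold:
    |Δ mean %| >= 5.00%, the 90% CI excludes zero, and the experiment
    is not marked erratic."""
    big_enough = abs(delta_mean_pct) >= effect_size_tol
    ci_excludes_zero = not (ci_low <= 0.0 <= ci_high)
    return big_enough and ci_excludes_zero and not erratic

# file_to_blackhole in the later run: +6.17 with CI [-0.72, +13.07].
# |Δ| exceeds 5%, but the CI contains zero, so it is not flagged.
```

This also shows why a large point estimate alone is not enough: a wide confidence interval straddling zero keeps the result inconclusive.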
Force-pushed from f469a95 to a631176 (Compare)
Addressed this in 90e5cf0, btw.
/merge
🚂 MergeQueue: This merge request is not mergeable yet because of pending checks/missing approvals. It will be added to the queue as soon as checks pass and/or approvals are given.
Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM: `inv create-vm --pipeline-id=30039846 --os-family=ubuntu`
🚂 MergeQueue: Added to the queue. There are 2 builds ahead of this PR! (estimated merge in less than 26m)
Regression Detector Results

Run ID: b1ced823-3713-4daa-bd13-f21788fe4462

Performance changes are noted in the perf column of each table.

No significant changes in experiment optimization goals

Confidence level: 90.00%. There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | file_to_blackhole | % cpu utilization | +6.17 | [-0.72, +13.07] |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | file_to_blackhole | % cpu utilization | +6.17 | [-0.72, +13.07] |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +1.13 | [+1.02, +1.23] |
| ➖ | process_agent_standard_check | memory utilization | +0.41 | [+0.37, +0.45] |
| ➖ | idle | memory utilization | +0.28 | [+0.23, +0.32] |
| ➖ | file_tree | memory utilization | +0.18 | [+0.07, +0.29] |
| ➖ | process_agent_standard_check_with_stats | memory utilization | +0.12 | [+0.07, +0.16] |
| ➖ | trace_agent_msgpack | ingress throughput | +0.02 | [+0.01, +0.04] |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.00, +0.00] |
| ➖ | trace_agent_json | ingress throughput | -0.00 | [-0.03, +0.02] |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.02 | [-0.22, +0.19] |
| ➖ | process_agent_real_time_mode | memory utilization | -0.08 | [-0.12, -0.04] |
| ➖ | otel_to_otel_logs | ingress throughput | -0.17 | [-0.60, +0.25] |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.91 | [-3.76, +1.95] |
| ➖ | basic_py_check | % cpu utilization | -1.03 | [-3.26, +1.20] |
| ➖ | pycheck_1000_100byte_tags | % cpu utilization | -1.47 | [-6.61, +3.68] |
What does this PR do?
Adds support for failing over logs when High Availability mode is enabled and a failover is triggered.
Motivation
This work is happening as part of the larger HAMR effort.
Additional Notes
The implementation closely follows what we did for metrics (#21644); the failover logic seemed to fit best at the level of each `DestinationSender`.

Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
As in #21644, no Agent behavior changes unless High Availability mode is enabled and failover is triggered.

1. Create a log file at `/tmp/test-logs.log` (follow these instructions, specifically the "Tail files" use case, to configure a tailer for the log file).
2. Start the Agent with logs and High Availability enabled: `DD_LOGS_ENABLED=true DD_HA_ENABLED=true DD_HA_SITE=<failover site, i.e. us5.datadoghq.com> DD_HA_API_KEY=xxxxxxxxxxx`
3. Generate a steady stream of log lines: `while true; do sleep 1; echo "fake log line - the time is now $(date +%s)" >> /tmp/test-logs.log; done`
4. Set `ha.failover` to `true`: `datadog-agent config set ha.failover true`
5. Set `ha.failover` back to `false`: `datadog-agent config set ha.failover false`
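At a high level, the per-sender decision being QA'd here can be sketched as follows. This is an assumption-laden illustration, not the agent's real Go code: the function name, defaults, and the idea that an active failover routes payloads to the `DD_HA_SITE` endpoint are inferred from the QA steps above:

```python
def resolve_log_endpoint(failover_active: bool,
                         primary_site: str = "datadoghq.com",
                         ha_site: str = "us5.datadoghq.com") -> str:
    """Illustrative sketch: when the ha.failover runtime setting is
    true, a sender routes log payloads to the configured failover site
    (DD_HA_SITE); otherwise payloads go to the primary site as usual.
    Site values here are placeholders."""
    return ha_site if failover_active else primary_site
```

Flipping `ha.failover` between `true` and `false` (steps 4 and 5) should therefore switch log delivery between the two sites, which is what the QA run verifies.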