
[APR-55] Add support for failing over logs in High Availability mode. #23502

Merged: 10 commits merged into main from tobz/add-hamr-support-logs on Mar 13, 2024

Conversation

@tobz (Member) commented Mar 6, 2024

What does this PR do?

Adds support for failing over logs when High Availability mode is enabled and failover is triggered.

Motivation

This work is happening as part of the larger HAMR effort.

Additional Notes

The implementation closely follows what we did for metrics (#21644); as there, the logic fit best when placed down at the level of each DestinationSender.
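As a rough illustration of that placement (Python for brevity; the actual implementation is Go code in pkg/logs/sender, and every name below is hypothetical, not the Agent's API), each destination sender can consult a shared failover flag and mirror payloads to the failover endpoint when the flag is set:

```python
class DestinationSender:
    """Illustrative analogue of a per-destination sender: payloads always
    go to the primary endpoint, and are mirrored to the failover endpoint
    only while failover is enabled."""

    def __init__(self, send_primary, send_failover, failover_enabled):
        self.send_primary = send_primary        # callable: deliver to primary org
        self.send_failover = send_failover      # callable: deliver to failover org
        self.failover_enabled = failover_enabled  # callable: reads the ha.failover flag

    def send(self, payload):
        self.send_primary(payload)
        if self.failover_enabled():
            self.send_failover(payload)


sent = {"primary": [], "failover": []}
flag = {"on": False}
sender = DestinationSender(sent["primary"].append,
                           sent["failover"].append,
                           lambda: flag["on"])

sender.send("log line 1")   # failover disabled: only the primary org receives it
flag["on"] = True           # analogous to: datadog-agent config set ha.failover true
sender.send("log line 2")   # failover enabled: both orgs receive it
```

The point of this placement is that failover is decided per payload at send time, so flipping the runtime flag takes effect without restarting or rebuilding the pipeline.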

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

As in #21644, no Agent behavior changes unless High Availability mode is enabled and failover is triggered.

  • configure your Agent to tail a temporary file, such as /tmp/test-logs.log (follow these instructions, specifically the Tail files use case, to configure a tailer for the log file)
  • create a second API key to use for failover (best done in another org in another datacenter than your existing API key; easier to watch Logs Live Tail for both orgs)
  • start the Agent with High Availability mode and logs collection enabled, and the failover site and API key configured: DD_LOGS_ENABLED=true DD_HA_ENABLED=true DD_HA_SITE=<failover site, e.g. us5.datadoghq.com> DD_HA_API_KEY=xxxxxxxxxxx
  • open another shell and run something like this: while true; do sleep 1; echo "fake log line - the time is now $(date +%s)" >> /tmp/test-logs.log; done
  • make sure you're seeing the logs come into the primary org
  • trigger failover by setting ha.failover to true: datadog-agent config set ha.failover true
  • see a log message in the Agent's output about "Forwarder for domain ... has been failed over to, enabling it for HA."
  • check that logs are flowing not only into the primary org, but also the failover org
  • disable failover by setting ha.failover to false: datadog-agent config set ha.failover false
  • logs should stop flowing to the failover org

@tobz requested review from a team as code owners March 6, 2024 20:18
@gh123man self-requested a review March 6, 2024 20:19
@ogaca-dd (Contributor) left a comment

LGTM for files owned by agent shared components

Review threads (resolved):
  • comp/logs/agent/config/endpoints.go (2 threads, outdated)
  • pkg/config/utils/endpoints.go (outdated)
  • pkg/logs/sender/destination_sender.go (3 threads)
  • pkg/logs/sender/destination_sender_test.go (outdated)
@gh123man (Member) commented:

We may want to consider adding a condition here to force HTTP when HA is enabled.

If an agent comes up during a failover, the connectivity check could fail, forcing the agent to fall back to the TCP sender, which feels like incorrect behavior in this scenario.
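The concern above can be sketched as a tiny transport-selection predicate (Python for illustration only; the Agent's real sender selection is Go code with more inputs, and the function and parameter names here are hypothetical):

```python
def choose_transport(ha_enabled: bool, http_check_ok: bool) -> str:
    """Pick the logs transport. Illustrative sketch of the suggestion:
    when HA is enabled, force HTTP so that a transient failed
    connectivity check during a failover cannot demote the agent
    to the TCP sender."""
    if ha_enabled:
        return "http"
    # Normal behavior: fall back to TCP only if the HTTP check fails.
    return "http" if http_check_ok else "tcp"


print(choose_transport(ha_enabled=True, http_check_ok=False))   # http (forced)
print(choose_transport(ha_enabled=False, http_check_ok=False))  # tcp (fallback)
```

In words: the TCP fallback remains available in the normal path, but is never taken while HA is enabled.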

@tobz requested a review from a team as a code owner March 11, 2024 21:37
@tobz requested a review from gh123man March 12, 2024 13:21

pr-commenter bot commented Mar 12, 2024

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 5c524fad-130b-40c1-9b41-0e1e04bae461
Baseline: 39eea0d
Comparison: f469a95

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | file_to_blackhole | % cpu utilization | -0.14 | [-6.73, +6.45] |

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | pycheck_1000_100byte_tags | % cpu utilization | +2.65 | [-2.67, +7.98] |
| ➖ | process_agent_real_time_mode | memory utilization | +0.67 | [+0.63, +0.71] |
| ➖ | file_tree | memory utilization | +0.44 | [+0.37, +0.50] |
| ➖ | otel_to_otel_logs | ingress throughput | +0.42 | [-0.23, +1.07] |
| ➖ | process_agent_standard_check_with_stats | memory utilization | +0.20 | [+0.16, +0.23] |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.20 | [+0.12, +0.28] |
| ➖ | idle | memory utilization | +0.10 | [+0.06, +0.14] |
| ➖ | basic_py_check | % cpu utilization | +0.08 | [-2.36, +2.52] |
| ➖ | trace_agent_json | ingress throughput | +0.03 | [-0.01, +0.07] |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.05, +0.05] |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.06, +0.06] |
| ➖ | trace_agent_msgpack | ingress throughput | -0.02 | [-0.03, -0.01] |
| ➖ | file_to_blackhole | % cpu utilization | -0.14 | [-6.73, +6.45] |
| ➖ | process_agent_standard_check | memory utilization | -0.26 | [-0.29, -0.22] |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.39 | [-3.51, +2.73] |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
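The three criteria above compose into a single predicate; the sketch below (function and parameter names are illustrative, not the detector's actual code) makes the decision logic concrete:

```python
def is_regression(delta_mean_pct: float, ci_low: float, ci_high: float,
                  erratic: bool, effect_size_tolerance: float = 5.0) -> bool:
    """Decide whether a performance change counts as a regression,
    mirroring the three criteria above:
      1. |Δ mean %| >= effect size tolerance (default 5.00%)
      2. the confidence interval does not contain zero
      3. the experiment is not marked erratic
    """
    big_enough = abs(delta_mean_pct) >= effect_size_tolerance
    ci_excludes_zero = ci_low > 0 or ci_high < 0
    return big_enough and ci_excludes_zero and not erratic


# Example with values from the table above: file_to_blackhole had a
# Δ mean % of -0.14 with CI [-6.73, +6.45], which fails criterion 1
# (and criterion 2), so it is not flagged.
print(is_regression(-0.14, -6.73, 6.45, erratic=False))  # False
```

Note that a large mean shift whose CI straddles zero (e.g. +6.17 with CI [-0.72, +13.07]) is also not flagged, because criterion 2 requires the interval to exclude zero.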

@tobz force-pushed the tobz/add-hamr-support-logs branch from f469a95 to a631176 on March 13, 2024 13:25
@tobz (Member, Author) commented Mar 13, 2024

> We may want to consider adding a condition here to force HTTP when HA is enabled.
>
> If an agent comes up during a failover, the connectivity check could fail, forcing the agent to fall back to the TCP sender, which feels like incorrect behavior in this scenario.

Addressed this in 90e5cf0, btw.

@tobz (Member, Author) commented Mar 13, 2024

/merge


dd-devflow bot commented Mar 13, 2024

🚂 MergeQueue

This merge request is not mergeable yet, because of pending checks/missing approvals. It will be added to the queue as soon as checks pass and/or get approvals.
Note: if you pushed new commits since the last approval, you may need additional approval.
You can remove it from the waiting list with /remove command.

Use /merge -c to cancel this operation!


pr-commenter bot commented Mar 13, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR changes on a VM:

inv create-vm --pipeline-id=30039846 --os-family=ubuntu


dd-devflow bot commented Mar 13, 2024

🚂 MergeQueue

Added to the queue.

There are 2 builds ahead of this PR! (estimated merge in less than 26m)

Use /merge -c to cancel this operation!


pr-commenter bot commented Mar 13, 2024

Regression Detector Results

Run ID: b1ced823-3713-4daa-bd13-f21788fe4462
Baseline: f9ae7f4
Comparison: df4621f

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | file_to_blackhole | % cpu utilization | +6.17 | [-0.72, +13.07] |

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | file_to_blackhole | % cpu utilization | +6.17 | [-0.72, +13.07] |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +1.13 | [+1.02, +1.23] |
| ➖ | process_agent_standard_check | memory utilization | +0.41 | [+0.37, +0.45] |
| ➖ | idle | memory utilization | +0.28 | [+0.23, +0.32] |
| ➖ | file_tree | memory utilization | +0.18 | [+0.07, +0.29] |
| ➖ | process_agent_standard_check_with_stats | memory utilization | +0.12 | [+0.07, +0.16] |
| ➖ | trace_agent_msgpack | ingress throughput | +0.02 | [+0.01, +0.04] |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.00, +0.00] |
| ➖ | trace_agent_json | ingress throughput | -0.00 | [-0.03, +0.02] |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.02 | [-0.22, +0.19] |
| ➖ | process_agent_real_time_mode | memory utilization | -0.08 | [-0.12, -0.04] |
| ➖ | otel_to_otel_logs | ingress throughput | -0.17 | [-0.60, +0.25] |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.91 | [-3.76, +1.95] |
| ➖ | basic_py_check | % cpu utilization | -1.03 | [-3.26, +1.20] |
| ➖ | pycheck_1000_100byte_tags | % cpu utilization | -1.47 | [-6.61, +3.68] |


@dd-mergequeue bot merged commit 12ce6f6 into main Mar 13, 2024
183 checks passed
@dd-mergequeue bot deleted the tobz/add-hamr-support-logs branch March 13, 2024 17:58
@github-actions bot added this to the 7.53.0 milestone Mar 13, 2024