
[CWS] fix new e2e cws #21151

Merged
10 commits merged into main from paulcacheux/fix-new-e2e-cws on Nov 29, 2023

Conversation

@paulcacheux paulcacheux (Contributor) commented Nov 28, 2023

What does this PR do?

This PR cleans up and fixes the new CWS e2e tests.
The main change is the removal of WaitAgentLogs from the agent interface, since it did not work (you cannot run cat as an agent subcommand).
This PR also adds the missing CODEOWNERS entry and a main-branch version of the CI job. For now this job is allowed to fail; it will be fully enabled in a few days, once we confirm it is not flaky.

Motivation

Additional Notes

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

Reviewer's Checklist

  • If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
  • Use the major_change label if your change either has a major impact on the code base, is impacting multiple teams, or is changing important, well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a release note.
  • A release note has been added or the changelog/no-changelog label has been applied.
  • Changed code has automated tests for its functionality.
  • Adequate QA/testing plan information is provided if the qa/skip-qa label is not applied.
  • At least one team/.. label has been applied, indicating the team(s) that should QA this change.
  • If applicable, docs team has been notified or an issue has been opened on the documentation repo.
  • If applicable, the need-change/operator and need-change/helm labels have been applied.
  • If applicable, the k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.
  • If applicable, the config template has been updated.

@paulcacheux paulcacheux added the changelog/no-changelog, qa/skip-qa (deprecated), and team/agent-security labels on Nov 28, 2023
@paulcacheux paulcacheux added this to the 7.51.0 milestone Nov 28, 2023
@paulcacheux paulcacheux marked this pull request as ready for review November 28, 2023 14:05
@paulcacheux paulcacheux requested a review from a team as a code owner November 28, 2023 14:05
pr-commenter (bot) commented Nov 28, 2023

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 6e0ce82d-039d-4a5b-aa48-cfef2625506f
Baseline: cd7712c
Comparison: 80811b4
Total datadog-agent CPUs: 7

Explanation

A regression test is an integrated performance test for datadog-agent in a repeatable rig, with varying configuration for datadog-agent. What follows is a statistical summary of a brief datadog-agent run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether, and to what degree, datadog-agent performance is changed by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
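As a minimal sketch of the decision rule described above (the type and field names here are illustrative assumptions, not the detector's actual code):

```go
package main

import "fmt"

// experimentResult mirrors one row of the table below; the field names are
// illustrative only.
type experimentResult struct {
	Name     string
	DeltaPct float64 // Δ mean %, relative to the baseline
	CILow    float64 // lower bound of the 90% CI on Δ mean %
	CIHigh   float64 // upper bound of the 90% CI on Δ mean %
}

// isRegression applies the two criteria described above:
//  1. the estimated |Δ mean %| is at least 5.00%, and
//  2. zero is not inside the 90% confidence interval.
func isRegression(r experimentResult) bool {
	largeEnough := r.DeltaPct >= 5.0 || r.DeltaPct <= -5.0
	ciExcludesZero := r.CILow > 0 || r.CIHigh < 0
	return largeEnough && ciExcludesZero
}

func main() {
	r := experimentResult{Name: "tcp_syslog_to_blackhole", DeltaPct: -0.71, CILow: -0.83, CIHigh: -0.58}
	// The CI excludes zero, but |Δ mean %| < 5%, so this is not flagged.
	fmt.Printf("%s regression: %v\n", r.Name, isRegression(r))
}
```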

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| file_tree | egress throughput | +0.52 | [-1.38, +2.43] | 34.92% |
| otel_to_otel_logs | ingress throughput | +0.52 | [-1.06, +2.10] | 41.43% |
| uds_dogstatsd_to_api | ingress throughput | +0.04 | [-0.13, +0.21] | 30.81% |
| file_to_blackhole | egress throughput | +0.03 | [-0.99, +1.05] | 4.28% |
| dogstatsd_string_interner_8MiB_100 | ingress throughput | +0.00 | [-0.13, +0.13] | 1.60% |
| dogstatsd_string_interner_128MiB_100 | ingress throughput | +0.00 | [-0.14, +0.14] | 0.39% |
| trace_agent_json | ingress throughput | +0.00 | [-0.13, +0.14] | 0.28% |
| dogstatsd_string_interner_64MiB_100 | ingress throughput | +0.00 | [-0.14, +0.14] | 0.22% |
| dogstatsd_string_interner_8MiB_100k | ingress throughput | +0.00 | [-0.04, +0.04] | 0.00% |
| dogstatsd_string_interner_64MiB_1k | ingress throughput | -0.00 | [-0.13, +0.13] | 0.44% |
| dogstatsd_string_interner_8MiB_1k | ingress throughput | -0.00 | [-0.10, +0.10] | 1.98% |
| dogstatsd_string_interner_128MiB_1k | ingress throughput | -0.01 | [-0.15, +0.14] | 6.13% |
| dogstatsd_string_interner_8MiB_10k | ingress throughput | -0.01 | [-0.08, +0.05] | 30.87% |
| tcp_dd_logs_filter_exclude | ingress throughput | -0.01 | [-0.16, +0.13] | 13.64% |
| trace_agent_msgpack | ingress throughput | -0.02 | [-0.14, +0.11] | 17.46% |
| idle | egress throughput | -0.03 | [-2.49, +2.43] | 1.54% |
| dogstatsd_string_interner_8MiB_50k | ingress throughput | -0.04 | [-0.08, -0.00] | 90.46% |
| tcp_syslog_to_blackhole | ingress throughput | -0.71 | [-0.83, -0.58] | 100.00% |

@paulcacheux paulcacheux force-pushed the paulcacheux/fix-new-e2e-cws branch from 80811b4 to 344fee6 Compare November 29, 2023 08:04
@paulcacheux paulcacheux requested review from a team as code owners November 29, 2023 08:04
@davidor davidor (Member) left a comment:

👍 for container-integrations files

Review comments on .gitlab/e2e.yml were resolved (one marked outdated).
Comment on lines +158 to +170
func (a *agentSuite) waitAgentLogs(agentName string, pattern string) error {
err := backoff.Retry(func() error {
output, err := a.Env().VM.ExecuteWithError(fmt.Sprintf("cat /var/log/datadog/%s.log", agentName))
if err != nil {
return err
}
if strings.Contains(output, pattern) {
return nil
}
return errors.New("no log found")
}, backoff.WithMaxRetries(backoff.NewConstantBackOff(500*time.Millisecond), 60))
return err
}
L3n41c (Member) commented:

I’m not familiar enough with the VM API to make a concrete counter-proposal, but two things look slightly sub-optimal to me:

  • We load the full content of the file into memory at once. If this function is called after the agent has been running for a while with debug logs enabled, that can cause significant memory consumption.
  • Reading the whole file again and again is wasteful. It would be more efficient to have something like a tail -f (a rough sketch of an incremental read follows below).
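A rough sketch of an incremental read, polling with tail -c rather than a true tail -f stream (assumes the same agentSuite/VM types and imports as the snippet above; the helper name and offset bookkeeping are illustrative only):

```go
// waitAgentLogsIncremental is a hypothetical variant that only reads the bytes
// appended since the previous poll instead of cat-ing the whole file each time.
// Note: a pattern that straddles two polls would be missed; a real
// implementation would keep a small overlap or buffer whole lines.
func (a *agentSuite) waitAgentLogsIncremental(agentName string, pattern string) error {
	offset := 1 // tail -c +N is 1-based
	return backoff.Retry(func() error {
		// Read only the part of the log we have not seen yet.
		chunk, err := a.Env().VM.ExecuteWithError(
			fmt.Sprintf("tail -c +%d /var/log/datadog/%s.log", offset, agentName))
		if err != nil {
			return err
		}
		offset += len(chunk)
		if strings.Contains(chunk, pattern) {
			return nil
		}
		return errors.New("no log found")
	}, backoff.WithMaxRetries(backoff.NewConstantBackOff(500*time.Millisecond), 60))
}
```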

paulcacheux (Contributor, Author) replied:

Honestly, I agree with you that all of this is far from perfect. My goal with this PR is to take what Momar built and get it passing in CI. Further improvements will follow, without the need to ping three teams in the process.

Another reviewer (Contributor) commented:

💬 suggestion
You could use testify's Eventually instead of backoff; it provides the retry logic (a rough sketch follows below). This can be addressed in a follow-up.

You could also test against fakeintake or the DD intake for a better black-box test.

I like @L3n41c's suggestion: you could have a tail -f with a filter that stores only the interesting lines and sends an event when the filtered lines are read. Agreed, this can be part of a follow-up.
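A rough sketch of the testify-based variant (assumes agentSuite embeds testify's suite.Suite so a.T() is available, plus the same imports as the snippet above and github.com/stretchr/testify/assert; the helper name and timeout are illustrative only):

```go
// waitAgentLogsEventually is a hypothetical rewrite of waitAgentLogs that uses
// testify's Eventually helper for the retry logic instead of backoff.Retry.
func (a *agentSuite) waitAgentLogsEventually(agentName string, pattern string) {
	assert.Eventually(a.T(), func() bool {
		output, err := a.Env().VM.ExecuteWithError(
			fmt.Sprintf("cat /var/log/datadog/%s.log", agentName))
		return err == nil && strings.Contains(output, pattern)
	}, 30*time.Second, 500*time.Millisecond, "no %q log found for %s", pattern, agentName)
}
```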

@paulcacheux paulcacheux merged commit 5b6ab7d into main Nov 29, 2023
178 of 179 checks passed
@paulcacheux paulcacheux deleted the paulcacheux/fix-new-e2e-cws branch November 29, 2023 10:34
Labels
changelog/no-changelog, qa/skip-qa (deprecated), team/agent-security

5 participants