[CWS] fix new e2e cws #21151
Conversation
Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 6e0ce82d-039d-4a5b-aa48-cfef2625506f

Explanation

A regression test is an integrated performance test. Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval. We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
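The two statistics the detector describes above — the relative change in mean ("Δ mean %") and the coefficient of variation used for the erratic check — can be sketched in a few lines of Go. This is an illustrative sketch only; the function names and sample values below are made up and are not the detector's actual implementation:

```go
package main

import (
	"fmt"
	"math"
)

// mean returns the arithmetic mean of xs.
func mean(xs []float64) float64 {
	sum := 0.0
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

// stddev returns the sample standard deviation of xs.
func stddev(xs []float64) float64 {
	m := mean(xs)
	var ss float64
	for _, x := range xs {
		ss += (x - m) * (x - m)
	}
	return math.Sqrt(ss / float64(len(xs)-1))
}

// coefficientOfVariation is stddev/mean; the detector flags an
// experiment as erratic when this exceeds 0.1.
func coefficientOfVariation(xs []float64) float64 {
	return stddev(xs) / mean(xs)
}

// deltaMeanPct expresses the comparison mean as a percentage change
// relative to the baseline mean ("Δ mean %").
func deltaMeanPct(baseline, comparison []float64) float64 {
	b := mean(baseline)
	return (mean(comparison) - b) / b * 100.0
}

func main() {
	// Hypothetical per-run measurements of an optimization goal.
	baseline := []float64{100, 102, 98, 101, 99}
	comparison := []float64{108, 110, 106, 109, 107}
	fmt.Printf("Δ mean %% = %.2f\n", deltaMeanPct(baseline, comparison)) // 8.00
	fmt.Printf("CV(baseline) = %.3f\n", coefficientOfVariation(baseline))
}
```

With these samples the comparison is 8% slower-or-faster depending on the goal's direction, and the baseline's coefficient of variation is well under the 0.1 erratic threshold.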
80811b4 to 344fee6
👍 for container-integrations files
func (a *agentSuite) waitAgentLogs(agentName string, pattern string) error {
	err := backoff.Retry(func() error {
		output, err := a.Env().VM.ExecuteWithError(fmt.Sprintf("cat /var/log/datadog/%s.log", agentName))
		if err != nil {
			return err
		}
		if strings.Contains(output, pattern) {
			return nil
		}
		return errors.New("no log found")
	}, backoff.WithMaxRetries(backoff.NewConstantBackOff(500*time.Millisecond), 60))
	return err
}
I'm not familiar enough with the VM API to make a concrete counter-proposal, but two things look slightly sub-optimal to me:
- We are loading the content of the full file into memory in one go. If this function is called after a while and the agent has debug logs enabled, it can trigger high memory consumption.
- Reading the whole file again and again is suboptimal. It would be more efficient to have something like a tail -f.
Honestly, I agree with you that all of this is far from perfect. My goal with this PR is to take what was built by Momar and get it passing in the CI. Further improvements will follow, without the need to ping three teams in the process.
💬 suggestion
You could use testify's Eventually instead of backoff; it provides the retry logic. This can be addressed in a follow-up.
You could also test against fakeintake or the DD intake for a better black-box test.
I like @L3n41c's suggestion: you could have a tail -f with a filter that stores only the interesting lines, and sends an event when the filtered lines are read. Agreed this can be part of a follow-up.
What does this PR do?
This PR cleans up and fixes the new CWS e2e tests.
The main change is the removal of WaitAgentLogs from the agent interface, since it was not working (you cannot run cat as an agent subcommand).

This PR also adds the missing CODEOWNERS entry, and a main version of the CI job. Currently this job is allowed to fail, until we confirm it's not flaky and can be fully enabled in a few days.

Motivation
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Reviewer's Checklist
- Triage milestone is set.
- Add the major_change label if your change either has a major impact on the code base, impacts multiple teams, or changes important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a release note.
- The changelog/no-changelog label has been applied.
- The qa/skip-qa label is not applied.
- A team/.. label has been applied, indicating the team(s) that should QA this change.
- The need-change/operator and need-change/helm labels have been applied.
- A k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.