
Split tests across multiple VMSets #21075

Merged: 46 commits merged into main on Dec 1, 2023
Conversation

@usamasaqib (Contributor) commented Nov 23, 2023

What does this PR do?

The PR adds the ability in KMT to split system-probe tests across multiple VMSets.
This PR splits the tests into two sets. The first set runs only pkg/network/tracer without TestUSMSuite. The second set runs all the other tests along with TestUSMSuite.

TestUSMSuite and TestTracerSuite are the two longest-running test suites. This PR aims to run them in parallel.
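As a rough sketch of the idea (the names and JSON layout below are illustrative assumptions, not the actual KMT schema), a per-set configuration could map packages to run/skip filters, which a runner then turns into per-VMSet `go test` arguments:

```python
import json

# Illustrative per-set configuration, loosely modeled on the vmconfig.json
# fragment discussed later in this PR; the exact KMT schema may differ.
CONFIG = json.loads("""
{
  "set-1": {"pkg/network/tracer": {"skip": ["TestUSMSuite"]}},
  "set-2": {"pkg/network/tracer": {"run": ["TestUSMSuite"]}}
}
""")

def runner_filters(test_set):
    """Build per-package run/skip filter strings for one VMSet."""
    filters = {}
    for pkg, spec in CONFIG[test_set].items():
        filters[pkg] = {
            # Join multiple test names into one alternation pattern.
            "run": "|".join(spec.get("run", [])) or None,
            "skip": "|".join(spec.get("skip", [])) or None,
        }
    return filters
```

With this layout, set-1 skips TestUSMSuite inside pkg/network/tracer while set-2 runs only it, mirroring the split described above.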

Motivation

Reduce the total test time by running tests in parallel in separate VMs.

Additional Notes

Associated PRs in test-infra-definitions:

Associated PR in ami-builder for standardizing architecture names:

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

Reviewer's Checklist

  • If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
  • Use the major_change label if your change has a major impact on the code base, impacts multiple teams, or changes important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a releasenote.
  • A release note has been added or the changelog/no-changelog label has been applied.
  • Changed code has automated tests for its functionality.
  • Adequate QA/testing plan information is provided if the qa/skip-qa label is not applied.
  • At least one team/.. label has been applied, indicating the team(s) that should QA this change.
  • If applicable, docs team has been notified or an issue has been opened on the documentation repo.
  • If applicable, the need-change/operator and need-change/helm labels have been applied.
  • If applicable, the k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.
  • If applicable, the config template has been updated.

@usamasaqib usamasaqib added the changelog/no-changelog, qa/skip-qa (deprecated), and team/ebpf-platform labels Nov 23, 2023
@usamasaqib usamasaqib added this to the 7.51.0 milestone Nov 23, 2023
@usamasaqib usamasaqib requested review from a team as code owners November 23, 2023 18:34
pr-commenter bot commented Nov 23, 2023

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 7bdc56bc-f573-46ac-91dd-53a21ec991b7
Baseline: e94970c
Comparison: 683b7e2
Total datadog-agent CPUs: 7

Explanation

A regression test is an integrated performance test for datadog-agent in a repeatable rig, with varying configurations for datadog-agent. What follows is a statistical summary of a brief datadog-agent run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether, and to what degree, datadog-agent performance is changed by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.
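The two criteria can be expressed as a small decision function (a sketch of the stated rules, not the detector's actual implementation):

```python
def is_regression(delta_mean_pct, ci_low, ci_high, threshold=5.0):
    """Flag a change as a regression per the two criteria above:
    1. |Δ mean %| >= 5.00%, and
    2. zero lies outside the 90% confidence interval [ci_low, ci_high].
    """
    large_enough = abs(delta_mean_pct) >= threshold
    significant = not (ci_low <= 0.0 <= ci_high)
    return large_enough and significant

# The `idle` experiment below: +0.04 with CI [-2.33, +2.41].
print(is_regression(0.04, -2.33, 2.41))   # False: tiny change, CI spans zero
print(is_regression(-6.2, -7.1, -5.3))    # True: large change, CI excludes zero
```

Both conditions must hold, which is why none of the small-but-significant rows in the table (e.g. confidence above 90% with |Δ mean %| under 0.1%) are flagged.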

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| idle | egress throughput | +0.04 | [-2.33, +2.41] | 2.41% |
| file_to_blackhole | egress throughput | +0.04 | [-0.98, +1.05] | 4.84% |
| tcp_syslog_to_blackhole | ingress throughput | +0.04 | [-0.09, +0.17] | 35.01% |
| trace_agent_json | ingress throughput | +0.03 | [-0.10, +0.16] | 32.77% |
| uds_dogstatsd_to_api | ingress throughput | +0.03 | [-0.15, +0.21] | 20.84% |
| trace_agent_msgpack | ingress throughput | +0.02 | [-0.12, +0.15] | 15.28% |
| dogstatsd_string_interner_64MiB_1k | ingress throughput | +0.00 | [-0.13, +0.13] | 1.91% |
| dogstatsd_string_interner_64MiB_100 | ingress throughput | +0.00 | [-0.13, +0.14] | 0.96% |
| dogstatsd_string_interner_8MiB_1k | ingress throughput | +0.00 | [-0.10, +0.10] | 0.45% |
| dogstatsd_string_interner_128MiB_100 | ingress throughput | +0.00 | [-0.14, +0.14] | 0.07% |
| dogstatsd_string_interner_128MiB_1k | ingress throughput | -0.00 | [-0.14, +0.14] | 1.49% |
| dogstatsd_string_interner_8MiB_100 | ingress throughput | -0.00 | [-0.13, +0.12] | 4.27% |
| file_tree | egress throughput | -0.01 | [-1.80, +1.79] | 0.37% |
| dogstatsd_string_interner_8MiB_10k | ingress throughput | -0.03 | [-0.09, +0.04] | 52.37% |
| tcp_dd_logs_filter_exclude | ingress throughput | -0.03 | [-0.17, +0.11] | 28.59% |
| dogstatsd_string_interner_8MiB_100k | ingress throughput | -0.04 | [-0.08, -0.00] | 91.13% |
| dogstatsd_string_interner_8MiB_50k | ingress throughput | -0.05 | [-0.09, -0.01] | 95.35% |
| otel_to_otel_logs | ingress throughput | -0.12 | [-1.69, +1.45] | 9.70% |

@usamasaqib usamasaqib force-pushed the usama.saqib/multiple-vmsets branch from d5d8fdc to 09a27a4 Compare November 24, 2023 09:47
@usamasaqib usamasaqib requested a review from a team as a code owner November 24, 2023 09:53
@@ -0,0 +1,5 @@
{
"pkg/network/tracer": {
"run": "TestUSMSuite"
Member commented:
What is the duration of TestUSMSuite? I'm wondering if it is worth the extra complexity to skip/run individual tests, rather than just dealing at the package level.

Member commented:
This might need to be a regex to ensure no other tests are accidentally included.

@usamasaqib (Contributor, Author) commented Nov 28, 2023:

> What is the duration of TestUSMSuite?

It will have the same duration as pkg/network/tracer

> I'm wondering if it is worth the extra complexity to skip/run individual tests, rather than just dealing at the package level.

It does have added complexity, I agree. However, this way we don't have to rely on refactors to properly distribute tests across vmsets. Secondly, it also gives finer control over which sets should run the tests.

> This might need to be a regex to ensure no other tests are accidentally included.

Here I am only trying to run TestUSMSuite, which will then trigger all the subtests in the different build modes. So a regex is not required I think.
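For context on the regex question, `go test -run` treats its argument as an unanchored regular expression, so a bare name also matches any test whose name merely contains it, while an anchored pattern matches only the exact suite. A quick illustration (`TestUSMSuiteV2` is a hypothetical name used only to show the difference, not a real test):

```python
import re

tests = ["TestUSMSuite", "TestUSMSuiteV2", "TestTracerSuite"]

# Unanchored, as `go test -run` would match a bare name:
bare = [t for t in tests if re.search("TestUSMSuite", t)]
# Anchored, matching only the exact suite name:
anchored = [t for t in tests if re.search("^TestUSMSuite$", t)]

print(bare)      # ['TestUSMSuite', 'TestUSMSuiteV2']
print(anchored)  # ['TestUSMSuite']
```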

.gitlab-ci.yml: outdated review comment, resolved
@usamasaqib (Contributor, Author) left a comment:

Waiting on DataDog/test-infra-definitions#482 to update the test-infra runner image and version.

@usamasaqib usamasaqib force-pushed the usama.saqib/multiple-vmsets branch from 81c36b8 to 010293d Compare November 29, 2023 19:12
@usamasaqib usamasaqib force-pushed the usama.saqib/multiple-vmsets branch from 05a41b7 to 6b591be Compare November 30, 2023 09:12
@usamasaqib usamasaqib merged commit b4af2bb into main Dec 1, 2023
189 checks passed
@usamasaqib usamasaqib deleted the usama.saqib/multiple-vmsets branch December 1, 2023 17:26
wdhif pushed a commit that referenced this pull request Dec 4, 2023
* parallelize kmt tests

* shorten URNs

* add test set suffix to CI visibility related files

* add test set suffix to artifacts

* pass tests to run as configuration

* run tests with new configuration

* simplify

* fix runner_CMD

* read config data from file

* organize files

* prints testnames

* fix compilation

* update test-infra runner

* add to task to generate vmconfig.json at runtime

* generate configuration file

* avoid use of uninstalled library in ci

* fix file open

* pass vmconfig file in the ci

* add sets into vmconfig file

* full path to vmconfig file

* pass output file parameter

* fix parameter name

* make disk into list

* lint

* pass test sets

* make vmconfig file global

* remove arch specific file

* use tags in vmconfig file instead of names

* exact match on domain name

* make include explicit

* run and skip are lists

* use wildcard to only signify includes

* keep single test job per arch

* use grep pattern everywhere

* fix needs for cleanup jobs

* separte junit upload jobs by architecture

* fix test-runner configuration file names

* python-linting

* fix sorting

* update test-infra

* update test-infra runner image

* fix import sorting

* remove unused variables

* remote unused vmconfig file

* Update tasks/kernel_matrix_testing/vmconfig.py

Co-authored-by: Bryce Kahle <[email protected]>

* remove arch mapping

---------

Co-authored-by: Bryce Kahle <[email protected]>
4 participants