
migrate snooper tests to use new local DNS server #20615

Merged
merged 25 commits into from
Nov 7, 2023

Conversation

akarpz
Contributor

@akarpz akarpz commented Nov 3, 2023

What does this PR do?

Motivation

https://datadoghq.atlassian.net/browse/NPM-3012
https://datadoghq.atlassian.net/browse/NPM-3071
https://datadoghq.atlassian.net/browse/NPM-3044

Additional Notes

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

Reviewer's Checklist

  • If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
  • Use the major_change label if your change either has a major impact on the code base, impacts multiple teams, or changes important, well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a release note.
  • A release note has been added or the changelog/no-changelog label has been applied.
  • Changed code has automated tests for its functionality.
  • Adequate QA/testing plan information is provided if the qa/skip-qa label is not applied.
  • At least one team/.. label has been applied, indicating the team(s) that should QA this change.
  • If applicable, docs team has been notified or an issue has been opened on the documentation repo.
  • If applicable, the need-change/operator and need-change/helm labels have been applied.
  • If applicable, the k8s/<min-version> label, indicating the lowest Kubernetes version compatible with this feature.
  • If applicable, the config template has been updated.

leeavital and others added 16 commits November 2, 2023 09:54
Most of our DNS tests were hitting google DNS (8.8.8.8). This PR runs a
singleton DNS server which responds with canned responses to domains
used in our tests.

The test server creates a 'dummy' interface on 10.10.10.10 and binds on
TCP/UDP port 53. The server code is all in the testutil/testdns package.

This should avoid any packet loss causing test flakiness.

The one exception is TestTracerSuite/TestDNSStatsWithNAT which still
uses NAT. I couldn't figure out how to get NAT working to a dummy
interface.
@pr-commenter

pr-commenter bot commented Nov 3, 2023

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 992741b1-4e7d-408f-9c1a-60324bc6475c
Baseline: 108d837
Comparison: 5a95593
Total datadog-agent CPUs: 7

Explanation

A regression test is an integrated performance test for datadog-agent in a repeatable rig, with varying configuration for datadog-agent. What follows is a statistical summary of a brief datadog-agent run for each configuration across the SHAs given above. The goal of these tests is to determine quickly if datadog-agent performance is changed, and to what degree, by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
|---|---|---|---|---|
| file_to_blackhole | egress throughput | +1.00 | [+0.55, +1.45] | 99.98% |
| otel_to_otel_logs | ingress throughput | +0.65 | [-0.94, +2.23] | 49.82% |
| tcp_syslog_to_blackhole | ingress throughput | +0.47 | [+0.34, +0.60] | 100.00% |
| process_agent_standard_check_with_stats | egress throughput | +0.16 | [-1.84, +2.17] | 10.65% |
| process_agent_real_time_mode | egress throughput | +0.13 | [-2.38, +2.64] | 6.74% |
| uds_dogstatsd_to_api | ingress throughput | +0.02 | [-0.14, +0.19] | 18.05% |
| dogstatsd_string_interner_8MiB_10k | ingress throughput | +0.01 | [-0.05, +0.07] | 20.04% |
| dogstatsd_string_interner_8MiB_100 | ingress throughput | +0.00 | [-0.12, +0.13] | 3.77% |
| dogstatsd_string_interner_64MiB_1k | ingress throughput | +0.00 | [-0.13, +0.13] | 0.32% |
| dogstatsd_string_interner_128MiB_100 | ingress throughput | +0.00 | [-0.14, +0.14] | 0.25% |
| dogstatsd_string_interner_8MiB_100k | ingress throughput | +0.00 | [-0.05, +0.05] | 0.02% |
| trace_agent_json | ingress throughput | -0.00 | [-0.13, +0.13] | 0.01% |
| dogstatsd_string_interner_64MiB_100 | ingress throughput | -0.00 | [-0.14, +0.14] | 0.24% |
| process_agent_standard_check | egress throughput | -0.00 | [-3.52, +3.52] | 0.12% |
| dogstatsd_string_interner_128MiB_1k | ingress throughput | -0.01 | [-0.15, +0.13] | 7.92% |
| dogstatsd_string_interner_8MiB_1k | ingress throughput | -0.01 | [-0.11, +0.08] | 16.46% |
| dogstatsd_string_interner_8MiB_50k | ingress throughput | -0.01 | [-0.07, +0.05] | 27.42% |
| trace_agent_msgpack | ingress throughput | -0.02 | [-0.14, +0.11] | 18.01% |
| tcp_dd_logs_filter_exclude | ingress throughput | -0.03 | [-0.10, +0.03] | 56.34% |
| idle | egress throughput | -0.07 | [-2.51, +2.37] | 3.58% |
| file_tree | egress throughput | -0.33 | [-2.17, +1.51] | 23.32% |

@akarpz akarpz changed the title from "initial stab" to "migrate snooper tests to use new local DNS server" Nov 3, 2023
Comment on lines 36 to 51

```diff
 func GetServerIP(t *testing.T, port int) net.IP {
 	if port != 53 {
 		non53ServerOnce.Do(func() {
 			globalServer.Start("tcp", port)
 			globalServer.Start("udp", port)
 		})
 	}
 	serverOnce.Do(func() {
 		globalServer, globalServerError = newServer()
 		globalServer.Start("tcp")
 		globalServer.Start("udp")
 		globalServer.Start("tcp", port)
 		globalServer.Start("udp", port)
 	})

 	require.NoError(t, globalServerError)
-	return net.ParseIP("127.0.0.1")
+	return net.ParseIP(localhostAddr)
 }
```
Contributor

I think this could be cleaner if GetServerIP had its original signature (taking a *testing.T) and started both the non-port-53 server and the port-53 server.

Contributor Author

Yeah, that's fine... I think the only catch is that the non-53 server will always be started, even though it's only used in one test.

pkg/network/tracer/tracer_linux_test.go (resolved)
@akarpz akarpz marked this pull request as ready for review November 3, 2023 21:35
@akarpz akarpz requested a review from a team as a code owner November 3, 2023 21:35
@akarpz akarpz added changelog/no-changelog team/networks [deprecated] qa/skip-qa - use other qa/ labels [DEPRECATED] Please use qa/done or qa/no-code-change to skip creating a QA card labels Nov 3, 2023
@akarpz akarpz added this to the 7.50.0 milestone Nov 3, 2023
```go
h := &handler{}
shutdown, port := newTestServer(t, localhost, 0, "udp", h.ServeDNS)
defer shutdown()
ln, _ := net.Listen("tcp", "127.0.0.1:0")
```
Contributor

For the cases where we need a random port, I think we should just spin up a server and shut it down in the test.

We can also pass in 0 for the port for the server spin up code to pick a port.

Contributor Author

That is what we were doing before... I think the idea here was to centralize the server code, but I was on the fence when doing this. wdyt @leeavital

Contributor

I don't have strong feelings, if we prefer to have a server-per-test, I'm fine with it. My thinking was it was nice to have a single global test server because:

  • tests run faster when they don't each have to spin up a server
  • tests don't have to deal with managing the lifecycle of the server
  • the nature of a stubbed server makes having a global easy
  • (obsolete) it was unlikely for the global server to conflict with other test fixtures since it listened on a nonsense ip (10.10.10.10)

Contributor

I was only proposing a local server for this test where we don't use port 53. The other tests can use the global server. In addition, the server creation/run code seems to be using the same var, globalServer, for both servers, which is likely going to be brittle.

```go
func GetServerIP(t *testing.T, port int) net.IP {
	if port != 53 {
		non53ServerOnce.Do(func() {
			globalServer.Start("tcp", port)
```
Contributor

How does this work if the DNS server on port 53 is already running, since you're using the same globalServer variable for both?

Contributor

The constructor newServer() used to take care of creating interfaces, and no longer does. We can dispense with storing the server in a global variable completely, now that the server uses loopback.

hmahmood
hmahmood previously approved these changes Nov 6, 2023
```go
// GetServerIP returns the IP address of the test DNS server. The test DNS server returns canned responses for several
// known domains that are used in integration tests.
//
// see server#start to see which domains are handled.
func GetServerIP(t *testing.T) net.IP {
	var err error
```
Contributor

@leeavital leeavital Nov 6, 2023

What will happen if:

  • test #1 calls GetServerIP and the sync.Once function fails
  • test #2 calls GetServerIP?

Contributor Author

I see what you mean. I think we need to keep the globalError and have it set by the call to ListenAndServe()

leeavital
leeavital previously approved these changes Nov 7, 2023
@akarpz akarpz changed the base branch from 8888_dns_server to main November 7, 2023 14:48
@akarpz akarpz dismissed stale reviews from leeavital and hmahmood November 7, 2023 14:48

The base branch was changed.

@akarpz akarpz merged commit edfde91 into main Nov 7, 2023
4 checks passed
@akarpz akarpz deleted the akarpowich/further_dns_test_improvements branch November 7, 2023 14:50