How to write tests for plugins

Once you've added a new instrumentation rule according to how-to-add-a-new-rule.md, you need to add tests to verify that it works. opentelemetry-go-auto-instrumentation provides a convenient way to do so.

Add a general plugin test case

Change directory to /test and create a new directory for the plugin you want to test, for example redis. Inside the redis directory, each subdirectory is named after the lowest supported version of the plugin it targets. If you want to add tests for redis, do the following:
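
After completing the steps below, the layout under the test directory should look roughly like this (the registration file name redis_tests.go is illustrative; see step 4):

test/
├── redis/
│   └── v9.0.5/
│       ├── go.mod
│       └── test_executing_commands.go
└── redis_tests.go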

1. Add lowest-version dependency for your rule

For example, if you add a rule that supports redis from v9.0.5 to the latest version, you should verify against the lowest supported redis version, v9.0.5, first. Create a subdirectory named v9.0.5 and add the following go.mod:

module redis/v9.0.5

go 1.22

replace github.com/alibaba/opentelemetry-go-auto-instrumentation => ../../../../opentelemetry-go-auto-instrumentation

replace github.com/alibaba/opentelemetry-go-auto-instrumentation/test/verifier => ../../../../opentelemetry-go-auto-instrumentation/test/verifier

require (
	// import this dependency to use verifier
	github.com/alibaba/opentelemetry-go-auto-instrumentation/test/verifier v0.0.0-00010101000000-000000000000
	github.com/redis/go-redis/v9 v9.0.5
	go.opentelemetry.io/otel v1.30.0
	go.opentelemetry.io/otel/sdk v1.30.0
)

2. Write business logic based on the plugin

Then write some business logic that exercises the redis plugin. For example, test_executing_commands.go performs basic get and set operations against redis. Your tests should cover the plugin's usage scenarios as thoroughly as possible.
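
As a rough sketch, the business-logic part of such a file might look like the following. The client options and the use of the REDIS_PORT environment variable are assumptions based on the test runner shown in step 4, and the real test file also calls the verifier as shown in step 3:

package main

import (
	"context"
	"os"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	// Connect to the redis instance started by the test runner; REDIS_PORT is
	// set by the test function before RunApp is invoked (see step 4).
	rdb := redis.NewClient(&redis.Options{
		Addr: "localhost:" + os.Getenv("REDIS_PORT"),
	})
	// Produces the "set a b ex 5" span verified in step 3.
	if err := rdb.Set(ctx, "a", "b", 5*time.Second).Err(); err != nil {
		panic(err)
	}
	// Produces the "get a" span verified in step 3.
	if _, err := rdb.Get(ctx, "a").Result(); err != nil {
		panic(err)
	}
	// Verification with the verifier goes here (see step 3).
}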

3. Write verification code

Some telemetry data (such as spans) will be produced if the business code you've written matches the rule. For example, test_executing_commands.go should produce two spans: one for the redis set operation and one for the redis get operation. Use the verifier to check their correctness:

import (
	"github.com/alibaba/opentelemetry-go-auto-instrumentation/test/verifier"
	"go.opentelemetry.io/otel/sdk/trace/tracetest"
)

verifier.WaitAndAssertTraces(func(stubs []tracetest.SpanStubs) {
	verifier.VerifyDbAttributes(stubs[0][0], "set", "", "redis", "", "localhost", "set a b ex 5 ", "set")
	verifier.VerifyDbAttributes(stubs[1][0], "get", "", "redis", "", "localhost", "get a ", "get")
})

The verifier's WaitAndAssertTraces accepts a callback function, which receives all the traces that were produced. You should verify the attributes, the parent context, and all other key information of every span in every trace.

If you want to verify metrics data, you can also use the verifier, as in the following code:

	verifier.WaitAndAssertMetrics(map[string]func(metricdata.ResourceMetrics){
		"http.server.request.duration": func(mrs metricdata.ResourceMetrics) {
			if len(mrs.ScopeMetrics) <= 0 {
				panic("No http.server.request.duration metrics received!")
			}
			point := mrs.ScopeMetrics[0].Metrics[0].Data.(metricdata.Histogram[float64])
			if point.DataPoints[0].Count != 1 {
				panic("http.server.request.duration metrics count is not 1")
			}
			verifier.VerifyHttpServerMetricsAttributes(point.DataPoints[0].Attributes.ToSlice(), "GET", "/a", "", "http", "1.1", "http", 200)
		},
		"http.client.request.duration": func(mrs metricdata.ResourceMetrics) {
			if len(mrs.ScopeMetrics) <= 0 {
				panic("No http.client.request.duration metrics received!")
			}
			point := mrs.ScopeMetrics[0].Metrics[0].Data.(metricdata.Histogram[float64])
			if point.DataPoints[0].Count != 1 {
				panic("http.client.request.duration metrics count is not 1")
			}
			verifier.VerifyHttpClientMetricsAttributes(point.DataPoints[0].Attributes.ToSlice(), "GET", "127.0.0.1:"+strconv.Itoa(port), "", "http", "1.1", port, 200)
		},
	})

Use the verifier's WaitAndAssertMetrics method to verify the correctness of the metrics data. WaitAndAssertMetrics receives a map whose keys are metric names and whose values are the validation functions for those metrics. You can write your own validation logic in these callback functions.

4. Register tests

Finally, register the tests. Write a _tests.go file in the test directory to do the registration:

const redis_dependency_name = "github.com/redis/go-redis/v9"
const redis_module_name = "redis"

func init() {
	TestCases = append(TestCases, NewGeneralTestCase("redis-9.0.5-executing-commands-test", redis_module_name, "v9.0.5", "v9.5.1", "1.18", "", TestExecutingCommands))
}

func TestExecutingCommands(t *testing.T, env ...string) {
	redisC, redisPort := initRedisContainer()
	defer clearRedisContainer(redisC)
	UseApp("redis/v9.0.5")
	RunGoBuild(t, "go", "build", "test_executing_commands.go")
	env = append(env, "REDIS_PORT="+redisPort.Port())
	RunApp(t, "test_executing_commands", env...)
}

In the init function, you need to wrap your test case with NewGeneralTestCase, which receives the following arguments:

testName, moduleName, minVersion, maxVersion, minGoVersion, maxGoVersion string, testFunc func(t *testing.T, env ...string)

  1. testName: the name of the test case.
  2. moduleName: the subdirectory name in the test directory.
  3. minVersion: the lowest supported version of the plugin.
  4. maxVersion: the highest supported version of the plugin.
  5. minGoVersion: the lowest supported Go version of the plugin.
  6. maxGoVersion: the highest supported Go version of the plugin.
  7. testFunc: the test function to be executed.

You need to build the test case with opentelemetry-go-auto-instrumentation so that it is able to produce telemetry data. First, call the UseApp method to change to the directory of your test case, then call RunGoBuild to do the hybrid compilation. Finally, call RunApp to run the instrumented test-case binary and verify the telemetry data. Additional test cases for the same plugin follow the same pattern, for example:

func TestExecutingUnsupportedCommands(t *testing.T, env ...string) {
	redisC, redisPort := initRedisContainer()
	defer clearRedisContainer(redisC)
	UseApp("redis/v9.0.5")
	RunGoBuild(t, "go", "build", "test_executing_unsupported_commands.go")
	env = append(env, "REDIS_PORT="+redisPort.Port())
	RunApp(t, "test_executing_unsupported_commands", env...)
}

Add a muzzle check case

Muzzle check is inspired by safety-mechanisms.md. It is impossible to run the general plugin tests against every version of a plugin because that would take too much time. Instead, opentelemetry-go-auto-instrumentation picks some random versions and runs the hybrid compilation on them to verify API compatibility across versions. If the muzzle check finds that some APIs changed in a particular version, the community will create a new rule to adapt to it.

You can add a muzzle check case by calling NewMuzzleTestCase, whose arguments are almost the same as those of NewGeneralTestCase. You additionally need to specify the dependency name of the plugin and the list of classes the muzzle check should cover, as sketched below.
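
For example, a registration added to the init function from step 4 might look roughly like this. The parameter order shown is an assumption, so check the NewMuzzleTestCase signature in the test framework before copying it:

	// Assumed parameter order: testName, dependencyName, moduleName,
	// minVersion, maxVersion, minGoVersion, maxGoVersion, classes to check.
	TestCases = append(TestCases,
		NewMuzzleTestCase("redis-9.0.5-muzzle-test", redis_dependency_name, redis_module_name,
			"v9.0.5", "v9.5.1", "1.18", "", []string{"test_executing_commands.go"}))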

Add a latest-depth check case

Instrumentation tests are generally run against the lowest supported version of a library to ensure a baseline for users with old dependency versions. Because we instrument private APIs in some places, the instrumentation may break on a newly released version of a library. To catch this early, we additionally run the instrumentation tests against the latest version of each library as part of a nightly build. If a new version of a library does not work with the instrumentation, this build tells us, and we can address the problem by the next release.

You can add a latest-depth check case by calling NewLatestDepthTestCase, whose arguments are almost the same as those of NewGeneralTestCase. You additionally need to specify the dependency name of the plugin and the list of classes the latest-depth check should cover, as sketched below.
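
As with the muzzle case, the registration might look roughly like this; the parameter order is the same assumption as above and should be verified against the NewLatestDepthTestCase signature in the test framework:

	// Assumed parameter order: testName, dependencyName, moduleName,
	// minVersion, maxVersion, minGoVersion, maxGoVersion, classes to check.
	TestCases = append(TestCases,
		NewLatestDepthTestCase("redis-9.0.5-latest-depth-test", redis_dependency_name, redis_module_name,
			"v9.0.5", "v9.5.1", "1.18", "", []string{"test_executing_commands.go"}))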