forked from open-telemetry/opentelemetry-collector-contrib
[connector/datadog] Allow export to traces pipelines #4796
Merged
Conversation
…pen-telemetry#27825) This should help catch exporters that are incorrectly claimed as non-mutating.
…rs (open-telemetry#27805) **Description:** Adds a feature: when async mode is enabled in the UDP receiver (udp input operator), reading is separated from processing. This is important to reduce data loss in high-scale UDP scenarios; see the original issue for more details.

The `async` config block has changed. Instead of a single `readers` field (which determined the concurrency level of how many threads the UDP receiver ran, all reading from the UDP port, processing, and sending downstream), it now has three fields:
- `readers` - the concurrency level of threads that only read from the UDP port and push packets onto a channel.
- `processors` - the concurrency level of threads that read from the channel, process the packets, and send them downstream.
- `max_queue_length` - the maximum size of the channel between the readers and the processors. Setting it high enough helps prevent data loss during temporary downstream latency. Once the channel is full, the reader threads block until there is room in the queue, preventing unbounded memory usage.

This improves performance and reduces UDP packet loss in high-scale scenarios. Note that only async mode supports this separation of readers from processors; if the `async` config block isn't included, the default behavior applies.

**Link to tracking Issue:** 27613

**Testing:** Local stress tests ran all async config variants (no `async` block, with `async`, etc.). Existing UDP tests were updated accordingly. Scale tests showed reduced data loss.

**Documentation:** Updated the md files for both udplogreceiver and the stanza udp_input operator with the new fields.

--------- Co-authored-by: Daniel Jaglowski <[email protected]>
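The reader/processor split described above can be sketched with a bounded channel; this is an illustration only, not the receiver's actual code, and names such as `maxQueueLength` mirror the config fields rather than real identifiers:

```go
package main

import (
	"fmt"
	"sync"
)

// run sketches the pattern: reader goroutines push raw packets onto a
// bounded queue; processor goroutines drain it and do the heavier work.
// A full queue blocks the readers, capping memory usage.
func run(packets []string, readers, processors, maxQueueLength int) []string {
	queue := make(chan string, maxQueueLength) // the readers→processors channel
	out := make(chan string, len(packets))

	// Stand-in for the UDP socket: a pre-filled channel of packets.
	in := make(chan string, len(packets))
	for _, p := range packets {
		in <- p
	}
	close(in)

	var rwg sync.WaitGroup
	for i := 0; i < readers; i++ {
		rwg.Add(1)
		go func() {
			defer rwg.Done()
			for p := range in {
				queue <- p // blocks when the queue is full
			}
		}()
	}

	var pwg sync.WaitGroup
	for i := 0; i < processors; i++ {
		pwg.Add(1)
		go func() {
			defer pwg.Done()
			for p := range queue {
				out <- "processed:" + p
			}
		}()
	}

	rwg.Wait()
	close(queue)
	pwg.Wait()
	close(out)

	var results []string
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	got := run([]string{"a", "b", "c"}, 2, 2, 1)
	fmt.Println(len(got)) // 3: nothing dropped even with a tiny queue
}
```

Because the queue is bounded, backpressure propagates to the readers instead of growing memory without limit.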
…ering (open-telemetry#27844) **Description:** * Add a new `ordering_criteria.top_n` option, which allows a user to specify the number of files to track after ordering. * Default is 1, which was the existing behavior. **Link to tracking Issue:** open-telemetry#23788 **Testing:** Unit tests added. **Documentation:** Added new parameter to existing documentation.
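The `top_n` semantics can be sketched as follows; the function name and the ordering criterion here are illustrative assumptions, not the fileconsumer's actual code:

```go
package main

import (
	"fmt"
	"sort"
)

// topN orders the matched files by some criterion (here: reverse
// lexicographic order as a stand-in) and keeps only the first n,
// mirroring the new ordering_criteria.top_n option.
func topN(files []string, n int) []string {
	sorted := append([]string(nil), files...)
	sort.Sort(sort.Reverse(sort.StringSlice(sorted)))
	if n > len(sorted) {
		n = len(sorted)
	}
	return sorted[:n]
}

func main() {
	// With n=1 (the default, matching the previous behavior) only the
	// top-ranked file is tracked.
	fmt.Println(topN([]string{"app.log.1", "app.log.3", "app.log.2"}, 1))
}
```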
…-telemetry#27840) **Description:** Improve the performance of the `MapHash` function, mostly by using the architecture-optimized xxhash implementation. `hash.Sum` is a pure-Go implementation, while `xxhash.Sum64` has optimized versions for different architectures; both produce exactly the same hash. For the given benchmarks, the gain is > 10%.

From `main`:
```
goos: linux
goarch: amd64
pkg: github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatautil
cpu: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
BenchmarkMapHashFourItems-16                 47676003   236.0 ns/op   24 B/op   1 allocs/op
BenchmarkMapHashEightItems-16                22551222   532.3 ns/op   32 B/op   2 allocs/op
BenchmarkMapHashWithEmbeddedSliceAndMap-16   14098969   893.1 ns/op   56 B/op   3 allocs/op
```

The PR:
```
goos: linux
goarch: amd64
pkg: github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatautil
cpu: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
BenchmarkMapHashFourItems-16                 59854737   203.4 ns/op   24 B/op   1 allocs/op
BenchmarkMapHashEightItems-16                25609375   475.0 ns/op   32 B/op   2 allocs/op
BenchmarkMapHashWithEmbeddedSliceAndMap-16   15950144   753.8 ns/op   56 B/op   3 allocs/op
```

**Testing:** (Re-)using the same tests and benchmarks to prove the semantics didn't change.
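For context, the core requirement on `MapHash` is that equal maps hash equally regardless of Go's randomized map iteration order. The sketch below shows that sort-keys-then-hash pattern conceptually; it uses stdlib FNV-1a as a stand-in, whereas the real implementation uses xxhash:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// mapHash illustrates order-independent map hashing: sort the keys so the
// byte stream fed to the hash is deterministic, then hash key/value pairs
// with a separator byte to avoid ambiguity between adjacent strings.
func mapHash(m map[string]string) uint64 {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sorting fixes it

	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(m[k]))
		h.Write([]byte{0})
	}
	return h.Sum64()
}

func main() {
	a := mapHash(map[string]string{"x": "1", "y": "2"})
	b := mapHash(map[string]string{"y": "2", "x": "1"})
	fmt.Println(a == b) // true: equal maps hash equally
}
```

Swapping the hash function (e.g. pure-Go `hash` code for `xxhash.Sum64`) leaves this structure untouched, which is why the PR could change only the hashing backend while reusing the existing tests.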
…metry#27774) **Description:** Log at the debug level instead of info level. Existing behavior would cause excessive log lines on each successful push.
**Description:** Support ignore trace ID in span comparisons for ptracetest. **Link to tracking Issue:** open-telemetry#27687
**Description:** Support ignore span ID in span comparisons for ptracetest. **Link to tracking Issue:** open-telemetry#27685 **Testing:** make chlog-validate go test for pdatatest **Documentation:**
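Both of the ptracetest changes above follow a functional-option comparison pattern. The sketch below illustrates that pattern with a hypothetical `ignoreSpanID` option on a simplified span type; the names are illustrative, not the library's API:

```go
package main

import "fmt"

// span is a simplified stand-in for a trace span.
type span struct {
	SpanID string
	Name   string
}

// compareOption mutates local copies of both spans before comparison,
// which is how "ignore field X" options typically work.
type compareOption func(a, b *span)

func ignoreSpanID() compareOption {
	return func(a, b *span) {
		a.SpanID, b.SpanID = "", "" // blank out the field on both sides
	}
}

// equal applies the options to by-value copies, so callers' spans are
// never modified.
func equal(a, b span, opts ...compareOption) bool {
	for _, o := range opts {
		o(&a, &b)
	}
	return a == b
}

func main() {
	x := span{SpanID: "aaa", Name: "op"}
	y := span{SpanID: "bbb", Name: "op"}
	fmt.Println(equal(x, y))                 // false: IDs differ
	fmt.Println(equal(x, y, ignoreSpanID())) // true: IDs ignored
}
```

Ignoring trace IDs works the same way, just on a different field.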
…pen-telemetry#27847) **Description:** The k8s go client's cache expects OnDelete handlers to handle objects of type DeletedFinalStateUnknown when the cache's watch mechanism misses a delete and notices later. This changes the processor to handle such deletes as if they were normal, rather than logging an error and dropping the change.

**Link to tracking Issue:** open-telemetry#27632

**Testing:** Only what you see in the unit tests. I am open to suggestions, but I don't see this being a code path we can reasonably cover in the e2e test suite. Verified manually locally on a kind cluster:
* Stood up two deployments loosely based off e2e testing resources, one with a collector built from this branch and the other docker.io/otel/opentelemetry-collector-contrib:latest.
* Both included an additional container in the collector pod used to fiddle with iptables rules.
* Added rules to reject traffic to/from the kube api server.
* Deleted some namespaces containing deployments generating telemetry.
* Restored connectivity by removing the iptables rules.
* Observed that the collector built from this branch was silent (aside from the junk the k8s client logs due to the broken connection).
* Observed that the latest ([0.87.0](https://hub.docker.com/layers/otel/opentelemetry-collector-contrib/0.87.0/images/sha256-77cdd395b828b09cb920c671966f09a87a40611aa6107443146086f2046f4a9a?context=explore)) collector logged a handful of errors for the deleted resources (api_v1.Pod and apps_v1.ReplicaSet; I probably just didn't wait long enough for Namespace.)
```
2023-10-19T02:18:37.781Z  error  kube/client.go:236  object received was not of type api_v1.Pod
    {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics", "received": {"Key":"src1/telemetrygen-patched-766d55cbcb-8zktr","Obj":{"metadata":{"name":"telemetrygen-patched-766d55cbcb-8zktr","namespace":"src1","uid":"be5d2268-c8b0-434d-b3b8-8b18083c7a8b","creationTimestamp":"2023-10-19T02:01:08Z","labels":{"app":"telemetrygen-patched","pod-template-hash":"766d55cbcb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"telemetrygen-patched-766d55cbcb","uid":"a887d67a-d5d6-4269-b520-45dbb4f1cd82","controller":true,"blockOwnerDeletion":true}]},"spec":{"containers":[{"name":"telemetrygen","image":"localhost/telemetrygen:latest","resources":{}}],"nodeName":"manual-e2e-testing-control-plane"},"status":{"podIP":"10.244.0.56","startTime":"2023-10-19T02:01:08Z","containerStatuses":[{"name":"telemetrygen","state":{},"lastState":{},"ready":false,"restartCount":0,"image":"","imageID":"","containerID":"containerd://2821ef32cd8bf93a13414504c0f8f0c016c84be49d6ffdbd475d7e4681e90c51"}]}}}}
github.com/open-telemetry/opentelemetry-collector-contrib/processor/k8sattributesprocessor/internal/kube.(*WatchClient).handlePodDelete
    github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/internal/kube/client.go:236
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete
    k8s.io/[email protected]/tools/cache/controller.go:253
...

2023-10-19T02:19:03.970Z  error  kube/client.go:868  object received was not of type apps_v1.ReplicaSet
    {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics", "received": {"Key":"src1/telemetrygen-stable-5c444bb8b8","Obj":{"metadata":{"name":"telemetrygen-stable-5c444bb8b8","namespace":"src1","uid":"d37707ff-b308-4339-8543-a1caf5705ea8","creationTimestamp":null,"ownerReferences":[{"apiVersion":"apps/v1","kind":"Deployment","name":"telemetrygen-stable","uid":"c421276e-e1bf-40c5-85e1-e92e30363da5","controller":true,"blockOwnerDeletion":true}]},"spec":{"selector":null,"template":{"metadata":{"creationTimestamp":null},"spec":{"containers":null}}},"status":{"replicas":0}}}}
github.com/open-telemetry/opentelemetry-collector-contrib/processor/k8sattributesprocessor/internal/kube.(*WatchClient).handleReplicaSetDelete
    github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/internal/kube/client.go:868
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete
    k8s.io/[email protected]/tools/cache/controller.go:253
k8s.io/client-go/tools/cache.(*processorListener).run.func1
    k8s.io/[email protected]/tools/cache/shared_informer.go:979
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
...
```

**Documentation:** N/A - it is not clear to me whether or not this should land on the changelog. Its impact on users is marginal.

Signed-off-by: Christian Kruse <[email protected]>
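The unwrap-then-handle pattern the fix applies can be sketched as below. The real types live in k8s.io/client-go/tools/cache; here `DeletedFinalStateUnknown` and `Pod` are local stand-ins so the sketch is self-contained:

```go
package main

import "fmt"

// Pod is a local stand-in for api_v1.Pod.
type Pod struct{ Name string }

// DeletedFinalStateUnknown mimics cache.DeletedFinalStateUnknown: the
// wrapper the informer cache delivers when its watch missed a delete and
// only the last known state of the object is available.
type DeletedFinalStateUnknown struct {
	Key string
	Obj interface{}
}

// handlePodDelete shows the fixed behavior: unwrap the tombstone and
// process it as a normal delete, instead of logging a type error and
// dropping the change.
func handlePodDelete(obj interface{}) string {
	if d, ok := obj.(DeletedFinalStateUnknown); ok {
		obj = d.Obj // unwrap the last known state
	}
	pod, ok := obj.(*Pod)
	if !ok {
		return "error: object received was not of type Pod"
	}
	return "deleted " + pod.Name
}

func main() {
	wrapped := DeletedFinalStateUnknown{Key: "src1/p", Obj: &Pod{Name: "p"}}
	fmt.Println(handlePodDelete(wrapped)) // handled like a normal delete
	fmt.Println(handlePodDelete(&Pod{Name: "q"}))
}
```

With the unwrap step in place, the "object received was not of type api_v1.Pod" errors shown in the log above no longer occur for missed deletes.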
**Description:** Adding a Double converter to pkg/ottl **Link to tracking Issue:** closes open-telemetry#22056
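A `Double` converter's job is to coerce common value types to float64. This is a hedged sketch of that behavior, not the actual ottlfuncs implementation:

```go
package main

import (
	"fmt"
	"strconv"
)

// toDouble illustrates the kind of coercion an OTTL Double converter
// performs: numbers pass through, strings are parsed, booleans map to
// 0/1, and anything else is an error.
func toDouble(v interface{}) (float64, error) {
	switch x := v.(type) {
	case float64:
		return x, nil
	case int64:
		return float64(x), nil
	case string:
		return strconv.ParseFloat(x, 64)
	case bool:
		if x {
			return 1, nil
		}
		return 0, nil
	default:
		return 0, fmt.Errorf("unsupported type %T", v)
	}
}

func main() {
	d, _ := toDouble("1.5")
	fmt.Println(d) // 1.5
}
```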
…/multierr (open-telemetry#27835) **Description:** fileexporter: use errors.Join instead of go.uber.org/multierr **Link to tracking Issue:** open-telemetry#25121
Removed my review and put it on the upstream PR. We need a27af80 for the deployment to work.
…27846) **Description:** Allow datadogconnector to export to traces pipelines. --------- Co-authored-by: Pablo Baeyens <[email protected]>
Merge open-telemetry#27846 to prod.