load-balancing exporter does not route some trace IDs to the same collector #27014
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
In my logs, 10.100.194.91 is the K8s Service IP. Is that right?
Is this a headless service? Or a regular service?
A regular service. But in dns mode, the resolver only gets the Service IP (10.100.194.91), not the list of real server IPs.
A regular service will yield a single and different IP during every DNS query. Take a look at the metrics for the collector, and share the ones you see related to the load balancing. As mentioned in the documentation, you should use a headless service.
@jpkrohling thank you for your advice, I will check the documentation and adjust the config.
@jpkrohling by the way 😂, does the documentation mention 'headless' anywhere on https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/loadbalancingexporter?
Sorry about that! It's buried in another doc. Would you mind adding this to the readme? https://opentelemetry.io/docs/collector/scaling/#scaling-stateful-collectors
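The point of the linked scaling guide, and of the advice above, is that the collector pods should be exposed through a headless Service (clusterIP: None), so that a DNS query returns the individual pod IPs rather than the single Service IP. A minimal sketch, assuming collector pods labeled app: otel-collector in an observability namespace:

```yaml
# Hypothetical headless Service; name, namespace, and labels are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: otel-collector-headless
  namespace: observability
spec:
  clusterIP: None            # headless: DNS returns every pod IP, not one Service IP
  selector:
    app: otel-collector      # must match the collector pods' labels
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
```

The loadbalancing exporter's dns resolver can then be pointed at otel-collector-headless.observability.svc.cluster.local so that it tracks the full set of backend pod IPs.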
@jpkrohling OK, my pleasure.
@Liubey, if you add a link to that page, could you please add a link here as well, so that other folks will find it when reading this issue?
@jpkrohling sorry for my late response.
[chore][exporter/loadbalancing] use headless service with DNS mode in K8S (open-telemetry#27014) (open-telemetry#28687) **Description:** fix open-telemetry#27014; note that in K8S, DNS mode should be configured with a headless service. **Link to tracking Issue:** open-telemetry#27014
Component(s)
exporter/loadbalancing
What happened?
Description
I use a sidecar --> collector cluster --> Jaeger setup to collect spans.
In the sidecar, I use the load-balancing exporter to route each trace ID to a specific collector; the config is:
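The actual config was not attached (see "OpenTelemetry Collector configuration: No response" below); a minimal sketch of a loadbalancing exporter using the dns resolver, where the hostname, namespace, and ports are assumptions:

```yaml
# Hypothetical sidecar config; the Service hostname, namespace, and ports are assumptions.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  loadbalancing:
    routing_key: "traceID"   # route all spans of a trace to the same backend
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      dns:
        hostname: otel-collector-headless.observability.svc.cluster.local
        port: 4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]
```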
By the way, I used the dns resolver before, but I saw the same problem (the same trace ID was not routed to the same collector).
In the collector, I use tail_sampling to collect the traces I want, and resourcedetection to tag a collector ID for testing; the config is:
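Again, the exact config is not included in the issue; a minimal sketch of a collector-side pipeline combining resourcedetection and tail_sampling, where the sampling policy, receiver, and Jaeger endpoint are assumptions:

```yaml
# Hypothetical collector config; the sampling policy and Jaeger endpoint are assumptions.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  resourcedetection/system:
    detectors: ["system"]      # adds host.name, useful to see which collector handled a span
    system:
      hostname_sources: ["os"]
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: errors-only
        type: status_code
        status_code:
          status_codes: [ERROR]

exporters:
  otlp/jaeger:
    endpoint: jaeger-collector.observability.svc.cluster.local:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection/system, tail_sampling]
      exporters: [otlp/jaeger]
```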
But I got a trace like this:
You can see that the same trace ID's spans were reported by two collectors, which is weird.
By the way, Server and Task are two services; tasks are published by Server and executed by Task, a very typical asynchronous architecture.
Steps to Reproduce
Expected Result
I want the same trace ID routed to the same collector, so I can use tail sampling to collect the interesting traces.
Actual Result
Spans with the same trace ID are routed to different collectors.
Collector version
0.85.0
Environment information
Environment
K8S
debian
OpenTelemetry Collector configuration
No response
Log output
2023-09-20T05:59:23.457Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1] Channel created {"grpc_log": true}
2023-09-20T05:59:23.459Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1] original dial target is: "10.100.194.91:4317" {"grpc_log": true}
2023-09-20T05:59:23.459Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1] dial target "10.100.194.91:4317" parse failed: parse "10.100.194.91:4317": first path segment in URL cannot contain colon {"grpc_log": true}
2023-09-20T05:59:23.459Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1] fallback to scheme "passthrough" {"grpc_log": true}
2023-09-20T05:59:23.459Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1] parsed dial target is: {URL:{Scheme:passthrough Opaque: User: Host: Path:/10.100.194.91:4317 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}} {"grpc_log": true}
2023-09-20T05:59:23.459Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1] Channel authority set to "10.100.194.91:4317" {"grpc_log": true}
2023-09-20T05:59:23.461Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1] Resolver state updated: {
"Addresses": [
{
"Addr": "10.100.194.91:4317",
"ServerName": "",
"Attributes": null,
"BalancerAttributes": null,
"Metadata": null
}
],
"Endpoints": [
{
"Addresses": [
{
"Addr": "10.100.194.91:4317",
"ServerName": "",
"Attributes": null,
"BalancerAttributes": null,
"Metadata": null
}
],
"Attributes": null
}
],
"ServiceConfig": null,
"Attributes": null
} (resolver returned new addresses) {"grpc_log": true}
2023-09-20T05:59:23.461Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1] Channel switches to new LB policy "pick_first" {"grpc_log": true}
2023-09-20T05:59:23.461Z info zapgrpc/zapgrpc.go:178 [core] [pick-first-lb 0xc002b4da70] Received new config {
"shuffleAddressList": false
}, resolver state {
"Addresses": [
{
"Addr": "10.100.194.91:4317",
"ServerName": "",
"Attributes": null,
"BalancerAttributes": null,
"Metadata": null
}
],
"Endpoints": [
{
"Addresses": [
{
"Addr": "10.100.194.91:4317",
"ServerName": "",
"Attributes": null,
"BalancerAttributes": null,
"Metadata": null
}
],
"Attributes": null
}
],
"ServiceConfig": null,
"Attributes": null
} {"grpc_log": true}
2023-09-20T05:59:23.461Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1 SubChannel #2] Subchannel created {"grpc_log": true}
2023-09-20T05:59:23.462Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1] Channel Connectivity change to CONNECTING {"grpc_log": true}
2023-09-20T05:59:23.464Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING {"grpc_log": true}
2023-09-20T05:59:23.465Z info zapgrpc/zapgrpc.go:178 [core] [Channel #1 SubChannel #2] Subchannel picks a new address "10.100.194.91:4317" to connect {"grpc_log": true}
Additional context
No response