
Add custom container tag support #4952

Closed
wants to merge 70 commits

Conversation


@mackjmr mackjmr commented Nov 13, 2023

Description: This PR pulls in DataDog/datadog-agent#20779 and DataDog/opentelemetry-mapping-go#193, which add custom container tagging support for traces, metrics, and logs.

Link to tracking Issue:

Testing:

Documentation:

atoulme and others added 30 commits October 31, 2023 12:46
…ver (open-telemetry#28812)

**Description:**
Overhauls collectdreceiver to use the latest config helper features

**Link to tracking Issue:**
Fixes open-telemetry#28811

**Documentation:**
No impact to docs. User interface remains the same.
A separate changelog entry calls out the API-breaking changes, since the Config struct
is changing.

---------

Co-authored-by: Dmitrii Anoshin <[email protected]>
…on non-Linux (open-telemetry#28829)

**Description:**
Fix open-telemetry#28828 - this is just disabling the test on non-Linux. The broken
test was introduced via open-telemetry#28661.
…pen-telemetry#27445)

**Description:** Added support for the host's CPU frequency as part of the hostmetricsreceiver.

**Link to tracking Issue:** open-telemetry#26532

**Testing:**

1. Using the following configuration:
```yml
receivers:
  hostmetrics:
    collection_interval: 5s
    scrapers:
      cpu:
        metrics:
          system.cpu.frequency:
            enabled: true

processors:
  resourcedetection/system:
    detectors: ["system"]
    system:
      hostname_sources: ["lookup", "cname", "dns", "os"]
      resource_attributes:
        host.name:
          enabled: true
        host.id:
          enabled: true
        host.cpu.cache.l2.size:
          enabled: true
        host.cpu.family:
          enabled: true
        host.cpu.model.id:
          enabled: true
        host.cpu.model.name:
          enabled: true
        host.cpu.stepping:
          enabled: true
        host.cpu.vendor.id:
          enabled: true

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [file]
      processors: [resourcedetection/system]

exporters:
  file:
    path: ./output.json
```

2. Start the collector with `./bin/otelcontribcol_linux_amd64 --config examples/host_config.yaml`
3. The output reports the added metric successfully:

```json
{
   "resourceMetrics":[
      {
         "scopeMetrics":[
            {
               "scope":{
                  "name":"otelcol/hostmetricsreceiver/cpu",
                  "version":"0.85.0-dev"
               },
               "metrics":[
                  {
                     "name":"system.cpu.frequency",
                     "description":"Current frequency of the CPU core in MHz.",
                     "unit":"MHz",
                     "gauge":{
                        "dataPoints":[
                           {
                              "attributes":[
                                 {
                                    "key":"cpu",
                                    "value":{
                                       "stringValue":"cpu0"
                                    }
                                 }
                              ],
                              "startTimeUnixNano":"1696487580000000000",
                              "timeUnixNano":"1696512423758783158",
                              "asDouble":3000
                           },
                           {
                              "attributes":[
                                 {
                                    "key":"cpu",
                                    "value":{
                                       "stringValue":"cpu1"
                                    }
                                 }
                              ],
                              "startTimeUnixNano":"1696487580000000000",
                              "timeUnixNano":"1696512423758783158",
                              "asDouble":3000
                           },
...
```

Signed-off-by: ChrsMark <[email protected]>
…8689)

**Description:** Fix a bug where err is nil when an invalid version value is
supplied.

**Link to tracking Issue:** open-telemetry#28686

---------

Co-authored-by: Dmitrii Anoshin <[email protected]>
To resolve the failing build-and-test/checks CI job.

**Link to tracking Issue:**
open-telemetry#28839
…en-telemetry#28670)

**Description:**
The current implementation generates logs from recorded exceptions in
spans, but it is not possible to see which traces and spans generated those
logs. This PR adds that information to the logs.

**Link to tracking Issue:** Fixes open-telemetry#24407
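
For illustration, here is a minimal pdata sketch of the mechanism (the helper name is hypothetical, not the component's actual code): the generated log record carries the originating span's identity.

```go
package sketch

import (
	"go.opentelemetry.io/collector/pdata/pcommon"
	"go.opentelemetry.io/collector/pdata/plog"
)

// linkLogToSpan copies the originating span's identity onto a generated
// log record so the log can be correlated back to its trace and span.
func linkLogToSpan(lr plog.LogRecord, traceID pcommon.TraceID, spanID pcommon.SpanID) {
	lr.SetTraceID(traceID)
	lr.SetSpanID(spanID)
}
```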
… K8S(open-telemetry#27014) (open-telemetry#28687)

**Description:**
Fixes open-telemetry#27014. Note that when running in Kubernetes, the DNS mode
requires configuring a headless service.

**Link to tracking Issue:**
open-telemetry#27014
The Prometheus Remote Write exporter documentation was missing the default
values for the remote write queue config. Added the values after checking
the code.
…28616)

This change adds the "exporter.datadogexporter.disable_apm_stats"
feature flag, which can be enabled to disable APM stats computation.

Updates open-telemetry#28615
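
As a hedged sketch (the exporter's actual registration code may differ), such a gate is typically registered through the collector's `featuregate` API and toggled at startup via the `--feature-gates` flag:

```go
package sketch

import "go.opentelemetry.io/collector/featuregate"

// Registers the gate, disabled by default; enable it at runtime with
// `--feature-gates=exporter.datadogexporter.disable_apm_stats`.
var disableAPMStatsGate = featuregate.GlobalRegistry().MustRegister(
	"exporter.datadogexporter.disable_apm_stats",
	featuregate.StageAlpha,
	featuregate.WithRegisterDescription("Disables APM stats computation."),
)
```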
I came across `zipkinreceiver` and observed that we don't
follow the receiver
[contract](https://github.com/open-telemetry/opentelemetry-collector/blob/b2961b799e2c1ec128f0539764af1fa10c839e04/receiver/doc.go#L21).
We return `InternalServerError` straight away without checking for
permanent/non-permanent errors.

We should probably return `BadRequest` in the case of permanent errors.

open-telemetry/opentelemetry-collector#4335
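
A minimal sketch of what honoring the contract could look like (hypothetical helper, not the receiver's actual code), using `consumererror.IsPermanent` to pick the status code:

```go
package sketch

import (
	"net/http"

	"go.opentelemetry.io/collector/consumer/consumererror"
)

// writeError maps consumer errors to HTTP status codes: permanent errors
// are the client's fault and should not be retried (400), anything else
// is potentially transient and retryable (500).
func writeError(w http.ResponseWriter, err error) {
	if consumererror.IsPermanent(err) {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	http.Error(w, err.Error(), http.StatusInternalServerError)
}
```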

**Testing:** Added test cases

Co-authored-by: Andrzej Stencel <[email protected]>
…r.org/multierr (open-telemetry#28614)

**Description:** use errors.Join instead of go.uber.org/multierr

**Link to tracking Issue:** open-telemetry#25121 
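
A small self-contained example of the swap (illustrative code, assuming Go 1.20+):

```go
package main

import (
	"errors"
	"fmt"
)

// closeAll aggregates errors with errors.Join, which, like
// multierr.Combine, returns nil when no errors were collected.
func closeAll(closers ...func() error) error {
	var errs []error
	for _, c := range closers {
		if err := c(); err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...)
}

func main() {
	err := closeAll(
		func() error { return nil },
		func() error { return errors.New("flush failed") },
	)
	fmt.Println(err) // flush failed
}
```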

---------

Co-authored-by: Andrzej Stencel <[email protected]>
…ead of using export function (open-telemetry#27259)

**Description:** 
Wavefrontreceiver is very similar to carbonreceiver: it is TCP-based, and
each received text line represents a single metric data point. To avoid
using the exported function `carbonreceiver.New(...)`, we can wrap the
metrics receiver under the carbon receiver.

**Link to tracking Issue:** 

open-telemetry#27248

**Testing:** 
make chlog-validate
go test for wavefrontreceiver

**Documentation:**

---------

Co-authored-by: Pablo Baeyens <[email protected]>
…pen-telemetry#28838)

Set attributes from namespace/node labels or annotations even if
`k8s.namespace.name` and `k8s.node.name` are not extracted.

Fixes
open-telemetry#28837
…ry#27874)

**Description:**
Rename remoteobserverprocessor to remotetapprocessor

**Link to tracking Issue:**
Fixes open-telemetry#27873
**Description:** 
Exemplars are currently not added to Sum metrics. This PR provides an
enhancement to add exemplars to Sum metrics in the spanmetrics connector.


**Testing:** 
Added unit tests and also tested it in our local environment.
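
For context, a generic pdata sketch (hypothetical helper, not the connector's actual code) of attaching an exemplar to a Sum data point:

```go
package sketch

import (
	"time"

	"go.opentelemetry.io/collector/pdata/pcommon"
	"go.opentelemetry.io/collector/pdata/pmetric"
)

// addExemplar records a sampled measurement on a Sum data point, linking
// the aggregated metric back to a specific trace and span.
func addExemplar(dp pmetric.NumberDataPoint, traceID pcommon.TraceID, spanID pcommon.SpanID, value float64) {
	ex := dp.Exemplars().AppendEmpty()
	ex.SetTraceID(traceID)
	ex.SetSpanID(spanID)
	ex.SetDoubleValue(value)
	ex.SetTimestamp(pcommon.NewTimestampFromTime(time.Now()))
}
```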
Regenerate codeowners with `make gengithub`
…7836)

**Description:** Factory implementation of Alertmanager Exporter
Initial PR - base configs and factory implementation

**Link to tracking Issue:**
[open-telemetry#23569](open-telemetry#23569)

**Testing:** Unit tests for config and factory implementation

**Documentation:** Readme and Sample Configs to use Alertmanager
exporter

---------

Signed-off-by: Juraci Paixão Kröhling <[email protected]>
Co-authored-by: Juraci Paixão Kröhling <[email protected]>
…pen-telemetry#26115)

**Description:** Adds a bounded duration sampling processor, distinct
from the existing latency one in that it has both lower and upper bounds

Apologies for this appearing as a pull request out of nothing; my intent
had been to create a review area against my own fork and raise
an issue asking whether you'd accept the PR. I think the need here is pretty
obvious from the context, though I think it's easy to imagine preferring
this to be a change to the existing processor. I raised it as a new one
because I thought that might make it cleaner to retain the existing behavior.

**Link to tracking Issue:** As above, this is a bit of a premature PR
since I intended to raise an issue first, and thus there isn't one, but I
think it's easy enough to deal with here, so I'm leaving it open for now
and have learned GitHub's ways for the future (I rarely use GitHub).

**Testing:** New module so associated tests are added showing all
relevant behavior, and passing.

**Documentation:** Updated README and example config

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…rter (open-telemetry#28863)

*   Link to related GCP docs
*   Clarify mention of "traces"
* Drop mention of PromQL support as a difference from `googlecloud`
exporter
**Description:**
* Adds a new `mtime` sort type, which will sort files by their modified
time
* Add a feature gate for `mtime` sort type

An optional follow-up performance improvement may be made here, to have
the finder return fs.DirEntry directly to query the mtime without making
an extra call to os.Stat for each file.
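
A generic sketch of the mechanism under discussion (not the finder's actual code): `os.ReadDir` already returns `fs.DirEntry` values whose `Info()` exposes the modification time, so no separate `os.Stat` is needed per file.

```go
package main

import (
	"fmt"
	"os"
	"sort"
)

func main() {
	entries, err := os.ReadDir(".")
	if err != nil {
		panic(err)
	}
	// Sort newest-first by mtime, querying each entry's FileInfo directly.
	sort.Slice(entries, func(i, j int) bool {
		ii, _ := entries[i].Info()
		ji, _ := entries[j].Info()
		return ii.ModTime().After(ji.ModTime())
	})
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```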

**Link to tracking Issue:** open-telemetry#27812

**Testing:**
* Added unit tests for new functionality

**Documentation:** 
* Added new `mode` parameter to filelogreceiver docs
…elemetry#28814)

**Description:** A part of
open-telemetry#28693

Move `skywalking_to_traces` in `skywalkingreceiver` into
`pkg/translator/skywalking`.

**Link to tracking Issue:**

**Testing:**

**Documentation:**

---------

Signed-off-by: Jared Tan <[email protected]>
…tches (open-telemetry#28646)

Follows
open-telemetry#28493

This adjusts the length of `knownFiles` to be roughly 4x the number of
matches per poll cycle. In other words, we will remember files for up to
4 poll cycles.

Resolves
open-telemetry#28567
open-telemetry#28836)

**Description:** 
Update README about disabling the feature gate of native metric client
and falling back to Zorkian client.

---------

Co-authored-by: Pablo Baeyens <[email protected]>
…put (open-telemetry#27201)

Adding a feature - Use exporter per worker for better metrics throughput

Initially, when adding more workers in the telemetrygen config when
running "metrics", throughput did not increase, since all workers used
the same exporter.

By creating one exporter per worker, we can now increase the number of
metrics being sent to the backend.

Fixes open-telemetry#26709

- Unit tests pass
- Ran local load tests with different configurations

## Before code change

Generate metrics:

```
telemetrygen metrics \
    --metric-type Sum \
    --duration "60s" \
    --rate "0" \
    --workers "10" \
    --otlp-http=false \
    --otlp-endpoint <HOSTNAME> \
    --otlp-attributes "service.name"=\"telemetrygen\"
```

Output:
```
metrics generated	{"worker": 8, "metrics": 139}
metrics generated	{"worker": 0, "metrics": 139}
metrics generated	{"worker": 9, "metrics": 141}
metrics generated	{"worker": 4, "metrics": 140}
metrics generated	{"worker": 2, "metrics": 140}
metrics generated	{"worker": 3, "metrics": 140}
metrics generated	{"worker": 7, "metrics": 140}
metrics generated	{"worker": 5, "metrics": 140}
metrics generated	{"worker": 1, "metrics": 140}
metrics generated	{"worker": 6, "metrics": 140}
```

## After code change

```
telemetrygen metrics \
    --metric-type Sum \
    --duration "60s" \
    --rate "0" \
    --workers "10" \
    --otlp-http=false \
    --otlp-endpoint <HOSTNAME> \
    --otlp-attributes "service.name"=\"telemetrygen\"
```

Output:

```
metrics generated	{"worker": 6, "metrics": 1292}
metrics generated	{"worker": 3, "metrics": 1277}
metrics generated	{"worker": 5, "metrics": 1272}
metrics generated	{"worker": 8, "metrics": 1251}
metrics generated	{"worker": 9, "metrics": 1241}
metrics generated	{"worker": 4, "metrics": 1227}
metrics generated	{"worker": 0, "metrics": 1212}
metrics generated	{"worker": 2, "metrics": 1201}
metrics generated	{"worker": 1, "metrics": 1333}
metrics generated	{"worker": 7, "metrics": 1363}
```

By adding more workers you can now export more metrics and use
`telemetrygen` better for load testing use cases.

With the code change I can now utilize my CPU better for load tests.
With 200 workers in the above config, CPU usage can go above 80%;
before the change, CPU usage was around 1% with 200 workers.


![image](https://github.com/open-telemetry/opentelemetry-collector-contrib/assets/558256/66727e5f-6b0a-44a3-8436-7e6985d6a01c)

---------

Co-authored-by: Alex Boten <[email protected]>
…s on windows and darwin (open-telemetry#28864)

**Description:** 

There were some issues related to how `mock.On` works. With the default
mock, adding another `On` for a method that is already registered appends
it to a list, and it won't be called, because one instance of the method
is already there. So some expectations regarding return values were not met.
The metrics count for darwin is 3 because disk IO is disabled
[here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/f509060a8d1ab5ca4b5827e0c60d1149e3059908/receiver/hostmetricsreceiver/internal/scraper/processscraper/process_scraper.go#L315)

Tested locally on macOS, Windows 11, and Ubuntu 22

**Link to tracking Issue:** open-telemetry#28828
…n-telemetry#28858)

**Description:**
Stop using the exported function `carbonreceiver.New` and replace it with
`factory.CreateMetricsReceiver`; then carbonreceiver can pass the
checkapi tool.

**Link to tracking Issue:**

open-telemetry#28857
To fix the failing `build-and-test / checks` CI job.
open-telemetry#28834)

`keyfile` was the key used in config and documented in sshcheck, but
`key_file` is the preferred key for these purposes.

**Link to tracking Issue:** open-telemetry#27035 

**Testing:** Updated tests to ensure this key is used by default.

**Documentation:** Updated documentation to reflect the change in key.
opentelemetrybot and others added 22 commits November 8, 2023 14:03
…y#29052)

Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/exporter/awscloudwatchlogsexporter
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/exporter/awsemfexporter
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/exporter/awsxrayexporter
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/exporter/datadogexporter
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/extension/observer/ecsobserver
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/awsutil
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/cwlogs
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/k8s
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/proxy
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/xray
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/xray/testdata/sampleapp
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/metadataproviders
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/processor/resourcedetectionprocessor
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/receiver/awscontainerinsightreceiver
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/receiver/awsecscontainermetricsreceiver
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/receiver/awsxrayreceiver
Bump github.com/aws/aws-sdk-go-v2/config from 1.22.0 to 1.22.2 in
/exporter/awskinesisexporter
Bump github.com/aws/aws-sdk-go-v2/config from 1.22.0 to 1.22.2 in
/extension/sigv4authextension
Bump github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common
from 1.0.782 to 1.0.786 in /exporter/tencentcloudlogserviceexporter
Bump google.golang.org/api from 0.149.0 to 0.150.0 in
/exporter/f5cloudexporter
Bump google.golang.org/api from 0.149.0 to 0.150.0 in
/exporter/googlecloudpubsubexporter
Bump google.golang.org/api from 0.149.0 to 0.150.0 in
/receiver/googlecloudpubsubreceiver
Bump google.golang.org/api from 0.149.0 to 0.150.0 in
/receiver/googlecloudspannerreceiver
…y#29071)

Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/exporter/awscloudwatchlogsexporter
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/exporter/awsemfexporter
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/exporter/awsxrayexporter
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/exporter/datadogexporter
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/extension/observer/ecsobserver
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/awsutil
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/cwlogs
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/k8s
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/proxy
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/xray
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/xray/testdata/sampleapp
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/metadataproviders
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/processor/resourcedetectionprocessor
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/receiver/awscontainerinsightreceiver
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/receiver/awsecscontainermetricsreceiver
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/receiver/awsxrayreceiver
**Description:**
`gopsutil` recently added the capability to pass environment vars
through context. This is now done everywhere. This environment variable
setting function is no longer used or necessary. This PR removes it.
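
A sketch of the context-based replacement (assuming gopsutil v3's public `common` package; the path is illustrative):

```go
package main

import (
	"context"
	"fmt"

	"github.com/shirou/gopsutil/v3/common"
	"github.com/shirou/gopsutil/v3/host"
)

func main() {
	// Instead of mutating process-wide env vars such as HOST_PROC,
	// the overrides travel with the context of each call.
	ctx := context.WithValue(context.Background(), common.EnvKey,
		common.EnvMap{common.HostProcEnvKey: "/hostfs/proc"})
	uptime, err := host.UptimeWithContext(ctx)
	fmt.Println(uptime, err)
}
```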

**Link to tracking Issue:** open-telemetry#23055

Signed-off-by: Braydon Kains <[email protected]>
…emetry#29080)

This fixes security vulnerabilities found via govulncheck in the
standard library when running against the previous patch versions of
Go. While these vulnerabilities don't actually present themselves in
the binary, the workflows running govulncheck fail, so taking in the
latest patches fixes the issue.


Testing is covered by the workflow run. Noticed the issue originally when
running workflows on this PR:
open-telemetry#28885
…metry#29072)

Additionally added a golangci-lint.yaml update to automatically apply
this change to new code going forward.

Fixes open-telemetry#23811

---------

Co-authored-by: Alex Boten <[email protected]>
…ceiver.Factory and pass checkapi (open-telemetry#27086)

Rename the struct and function to match the expected receiver.Factory and
pass checkapi.

open-telemetry#26304

go run cmd/checkapi/main.go .
go test for dockerstatsreceiver

Signed-off-by: sakulali <[email protected]>
…n-telemetry#28835)

**Description:**
This feature adds the provider resource attributes
`mongodb_atlas.provider.name` and `mongodb_atlas.region.name` to provide
additional context and filtering capabilities.

**Link to tracking Issue:**
open-telemetry#28833 

**Testing:**
Tests were automatically updated. Live testing was performed and
validated on clusters.
**Documentation:**
Docs were automatically updated.
**Description:** Promote syslogexporter to alpha and add it to
otelcontribcol

**Link to tracking Issue:**  related to: open-telemetry#21242, open-telemetry#21244, open-telemetry#21245

**Testing:**
Manual tests:
Configuration:
```yaml
exporters:
  syslog:
    network: tcp
    port: 514
    endpoint: 127.0.0.1
    protocol: rfc5424

receivers:
  filelog:
    start_at: beginning
    include:
    - /Users/kkujawa/git/opentelemetry-collector-contrib/test.txt
    operators:
      - type: syslog_parser
        protocol: rfc5424

service:
  pipelines:
    logs:
      receivers:
        - filelog
      exporters:
        - syslog
```

Logs:
```
 ./bin/otelcontribcol_darwin_amd64 --config /Users/kkujawa/git/opentelemetry-collector-contrib/bin/config.yaml 
2023-11-06T12:59:31.656+0100    info    [email protected]/telemetry.go:84   Setting up own telemetry...
2023-11-06T12:59:31.656+0100    info    [email protected]/telemetry.go:201  Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
2023-11-06T12:59:31.656+0100    info    [email protected]/exporter.go:275  Development component. May change in the future.        {"kind": "exporter", "data_type": "logs", "name": "syslog"}
2023-11-06T12:59:31.656+0100    info    [email protected]/exporter.go:42   Syslog Exporter configured      {"kind": "exporter", "data_type": "logs", "name": "syslog", "endpoint": "127.0.0.1", "Protocol": "rfc5424", "port": 514}
2023-11-06T12:59:31.657+0100    info    [email protected]/service.go:143    Starting otelcontribcol...      {"Version": "0.88.0-dev", "NumCPU": 16}
2023-11-06T12:59:31.657+0100    info    extensions/extensions.go:33     Starting extensions...
2023-11-06T12:59:31.657+0100    info    adapter/receiver.go:45  Starting stanza receiver        {"kind": "receiver", "name": "filelog", "data_type": "logs"}
2023-11-06T12:59:31.657+0100    info    [email protected]/service.go:169    Everything is ready. Begin running and processing data.
2023-11-06T12:59:31.858+0100    info    fileconsumer/file.go:263        Started watching file   {"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/Users/kkujawa/git/opentelemetry-collector-contrib/test.txt"}
```
…pen-telemetry#28889)

**Description:** 
Fix a bug where duplicate readers are added to the active list even after
the underlying file is closed. The fix is to continue from the outer
loop.
The bug doesn't result in any duplicates in the output, but it keeps
producing the following annoying error every time:
```2023-11-05T02:34:03.530+0530       ERROR       Failed to seek  {"component": "fileconsumer", "path": "/var/folders/fs/njj5c3xx7vdcsr28n19vykw00000gn/T/TestStalePartialFingerprintDiscarded2443925830/001/1616317274.log2", "error": "seek /var/folders/fs/njj5c3xx7vdcsr28n19vykw00000gn/T/TestStalePartialFingerprintDiscarded2443925830/001/1616317274.log2: file already closed"}```
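
For readers unfamiliar with the mechanism, a generic sketch of continuing from the outer loop (not the fileconsumer code itself):

```go
package main

import "fmt"

func main() {
	polled := []string{"a.log", "b.log"}
	alreadyTracked := []string{"b.log"}
	var active []string
OUTER:
	for _, f := range polled {
		for _, k := range alreadyTracked {
			if f == k {
				// Matched an existing reader: skip the rest of the outer
				// iteration so f is never added to the active list twice.
				continue OUTER
			}
		}
		active = append(active, f)
	}
	fmt.Println(active) // [a.log]
}
```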

**Testing:** Updated the test to check the previousPollFiles.
…lemetry#28898)

**Description:** Fixes a bug in UDP async mode only (it did not affect
the default non-async mode).
The UDP receiver reuses the same buffer as each packet is processed.
While that works fine when running without the async config, it causes a
significant number of duplicate and corrupted packets to be sent
downstream.
The reader-async thread reads a packet from the UDP port into the
buffer, places that buffer in the channel, then reads another packet into
the same buffer and pushes it to the channel.
Say the processor-async thread was a bit slow, so it only reads from the
channel after the two items were placed in it. In that case, the
processor thread will read two items from the channel, but both will be
the same second packet (since the first one was overwritten). In some
cases, the processor even reads a corrupted buffer (since the reader is
currently writing into it).
We can't fix this by having the reader allocate a new buffer each time
it reads a packet from the UDP port, since that hurts performance
significantly (reducing it by up to ~50%). Instead, use a pool so the
buffers are reused.
Before reading a packet, the reader gets a buffer from the pool. The
processor returns it to the pool after the packet has been successfully
processed.
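
A simplified sketch of the pooling pattern described above (not the receiver's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	New: func() any { return make([]byte, 64*1024) },
}

func main() {
	packets := make(chan []byte, 16)
	done := make(chan struct{})
	go func() { // processor: consume, then recycle the buffer
		for p := range packets {
			fmt.Printf("processed %d bytes\n", len(p))
			bufPool.Put(p[:cap(p)]) // returned for reuse
		}
		close(done)
	}()
	for i := 0; i < 3; i++ { // reader: one pooled buffer per packet
		buf := bufPool.Get().([]byte)
		n := copy(buf, "datagram") // stands in for the UDP socket read
		packets <- buf[:n]
	}
	close(packets)
	<-done
}
```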

**Link to tracking Issue:** 27613

**Testing:** Ran existing unit tests.
Ran stress tests (sending 250k UDP packets per second); the
duplicate/corruption issue didn't happen and performance wasn't hurt.

**Documentation:** None
…y and pass checkapi (open-telemetry#29020)

**Description:**
Remove duplicate function NewFactory and pass checkapi.

**Link to tracking Issue:**

open-telemetry#26304

**Testing:**
go run cmd/checkapi/main.go .
go test for windowseventlogreceiver

**Documentation:**

Signed-off-by: sakulali <[email protected]>
The `needs triage` label is directly related to how we define triaging.
Added a link to the triaging definition to make the label's usage more
clear. (Even though the triaging process paragraph is just above this
table in the document, it's easy to miss).
…telemetry#29108)

**Description:**
Fixes misleading documentation about which RBAC role is required, plus
other invalid YAML I found along the way.
…telemetry#29110)

**Description:** Fix a typo in a Kubernetes label in the k8sattributesprocessor.

**Link to tracking Issue:** n/a

**Testing:** n/a docs

**Documentation:** n/a
**Description:** 
Mark datadogconnector as `MutatesData` to prevent a data race.

**Link to tracking Issue:**
Fixes
open-telemetry#29111
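
For reference, a minimal sketch of the mechanism (hypothetical type name): a component advertises mutation through its `Capabilities`, which lets the pipeline clone shared data for fan-out consumers and avoid the race.

```go
package sketch

import "go.opentelemetry.io/collector/consumer"

type connectorSketch struct{}

// Capabilities tells the pipeline this component modifies incoming data
// in place, so shared data must be cloned before reaching it.
func (connectorSketch) Capabilities() consumer.Capabilities {
	return consumer.Capabilities{MutatesData: true}
}
```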
**Description:**

Closes
open-telemetry#18867

**Testing:**

Ran opentelemetry-collector locally with debug exporter, then used
telemetrygen with `--otlp-http` with and without `--otlp-insecure`.

**Documentation:** None
Base automatically changed from mackjmr/merge-main-to-prod-staging to prod November 13, 2023 12:50
@mackjmr mackjmr closed this Nov 13, 2023