Clickhouse exporter improvements for logs #1

Merged: 9 commits into main, Mar 23, 2023

Conversation

mauidude (Collaborator)

WIP DO NOT MERGE YET

CLAassistant commented Mar 10, 2023

CLA assistant check
All committers have signed the CLA.

Makefile (outdated)
```diff
@@ -221,7 +221,7 @@ run:
 docker-component: check-component
 	GOOS=linux GOARCH=amd64 $(MAKE) $(COMPONENT)
 	cp ./bin/$(COMPONENT)_linux_amd64 ./cmd/$(COMPONENT)/$(COMPONENT)
-	docker build -t $(COMPONENT) ./cmd/$(COMPONENT)/
+	docker buildx build --platform=linux/amd64 -t $(COMPONENT) ./cmd/$(COMPONENT)/
```
mauidude (Collaborator, Author):

Wasn't able to build on an M1 Mac without this.

```diff
@@ -9,7 +9,8 @@ ARG USER_UID=10001
 USER ${USER_UID}

 COPY --from=prep /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
 COPY otelcontribcol /
+COPY otelcontribcol /otelcol-contrib
```
mauidude (Collaborator, Author):

Their Dockerfiles are confusing... the Helm chart was expecting to run the command `/otelcol-contrib`, but the binary that gets built and copied into the image is `/otelcontribcol`?

```diff
-	err := doWithTx(ctx, e.client, func(tx *sql.Tx) error {
-		statement, err := tx.PrepareContext(ctx, e.insertSQL)
+	err := func() error {
+		scope, err := e.client.Begin()
```
mauidude (Collaborator, Author):

This is apparently the way to do batch inserts using the database/sql interface of the ClickHouse driver: https://clickhouse.com/docs/en/integrations/go/clickhouse-go/database-sql-api#batch-insert
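
For readers unfamiliar with that pattern, here is a minimal sketch of what the linked docs describe; the DSN, table, and columns are hypothetical, not the exporter's actual schema:

```go
// Minimal sketch of the clickhouse-go database/sql batch-insert pattern.
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/ClickHouse/clickhouse-go/v2" // registers the "clickhouse" driver
)

func insertBatch(db *sql.DB, rows [][]any) error {
	// With this driver, Begin does not open a server-side transaction
	// (ClickHouse has none); it delimits a client-side batch scope.
	scope, err := db.Begin()
	if err != nil {
		return err
	}
	// One prepared INSERT for the entire batch.
	stmt, err := scope.Prepare("INSERT INTO example_logs (ts, body)")
	if err != nil {
		return err
	}
	defer stmt.Close()
	// Each Exec appends a row to the in-memory batch; no per-row round trip.
	for _, r := range rows {
		if _, err := stmt.Exec(r...); err != nil {
			return err
		}
	}
	// Commit flushes the accumulated rows to ClickHouse as a single insert.
	return scope.Commit()
}

func main() {
	db, err := sql.Open("clickhouse", "clickhouse://127.0.0.1:9000/default") // hypothetical DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	rows := [][]any{
		{time.Now(), "first log line"},
		{time.Now(), "second log line"},
	}
	if err := insertBatch(db, rows); err != nil {
		log.Fatal(err)
	}
}
```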

```diff
@@ -238,17 +246,3 @@ func renderCreateLogsTableSQL(cfg *Config) string {
 func renderInsertLogsSQL(cfg *Config) string {
 	return fmt.Sprintf(insertLogsSQLTemplate, cfg.LogsTableName)
 }
-
-func doWithTx(_ context.Context, db *sql.DB, fn func(tx *sql.Tx) error) error {
```
mauidude (Collaborator, Author):

This was not the proper way: they were using a tx, but they were doing an insert per row, which is not correct. It's one prepared insert, and then you append each row to the batch using Exec().

vadimp commented Mar 12, 2023

Actually, it doesn't look like we've changed anything; rather, we've reduced the consistency of the insertion.
Underneath, Prepare calls PrepareContext, and Exec calls ExecContext.

Assuming I haven't messed up with the sql wrapper interfaces and have followed the code correctly, the only difference is the removal of transaction semantics, which provide nothing more than some consistency guarantees when writing data across partitions. That doesn't mean it's a bad thing for our case.

Both Exec and ExecContext execute the query immediately and return an empty result set in our case. So the roundtrip is executed in both cases.

```go
func (s *Stmt) ExecContext(ctx context.Context, args ...any) (Result, error) {
	s.closemu.RLock()
	defer s.closemu.RUnlock()

	var res Result
	err := s.db.retry(func(strategy connReuseStrategy) error {
		dc, releaseConn, ds, err := s.connStmt(ctx, strategy)
		if err != nil {
			return err
		}

		res, err = resultFromStatement(ctx, dc.ci, ds, args...)
		releaseConn(err)
		return err
	})

	return res, err
}
```

The real performance improvement will come from bulk inserts based on Append.
Example here:
https://github.com/ClickHouse/clickhouse-go/blob/main/examples/clickhouse_api/batch.go

My assumption is that the performance improvement we see is coming from the usage of the gateway, which results in lower parallelism and, as a result, more "coordination" compared to the much higher number of collectors running as DaemonSets.

mauidude (Collaborator, Author):

According to their docs and examples, this is how you do batching with the golang database/sql API. The changes I made were to do this correctly. The way it was previously implemented was doing each individual log as an individually prepared INSERT INTO table (...) VALUES (...) statement (for each log -> open tx, prepare insert, exec insert), when you are supposed to use a prepared statement for the entire batch within a transaction (open tx, prepare, for each row -> append log, commit). It was not batching them previously.

> Actually, it doesn't look like we've changed anything; rather, we've reduced the consistency of the insertion.

I have to verify, but I believe we've actually increased the consistency, as I believe the batch is atomic. I am not convinced that the transactions in the previous implementation did anything, because there aren't really transactional semantics in ClickHouse. However, if the batched insert meets these conditions, then it is my understanding that it would be "atomic". I believe our inserts satisfy these criteria, or the configs can be tuned to make them satisfied (such as increasing max_insert_block_size if needed).

> The real performance improvement will come from bulk inserts based on Append

This is what I am doing, but using the database/sql API instead of the clickhouse driver API directly. I had implemented it that way originally, but realized they were the same thing and decided to keep the sql interface, since the initialization of the driver was a bit different and I was worried about having to add more configurations. The database/sql API seemed easier to integrate since it was already being used.

> My assumption is that the performance improvement we see is coming from the usage of the gateway

Correct, this was already in place when I introduced my changes to the exporter, which seemed to have reduced the memory usage of the agents and also fixed the insertion issues (large number of parts, cluster instability, etc.) in ClickHouse. It doesn't mean these changes aren't necessary, as there are clear misunderstandings in the code base of how to insert into ClickHouse. Also, I only changed the logging code. Metrics and traces have similar issues that need to be updated, but I will probably defer those until later.

vadimp commented Mar 14, 2023

> I have to verify

I looked into a similar database/sql batch example to verify it by debugging the flow. The example follows the same flow we run. Indeed, the underlying implementation batches "values" in memory before the execution.

(screenshots: debugger views showing the values being batched in memory before execution)

mauidude (Collaborator, Author):

I updated the code to use the native protocol, which isn't supported by the database/sql interface.
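
For context on that switch, here is a rough sketch of native-protocol batching with the clickhouse-go v2 client API; the address, table, and settings below are illustrative assumptions, not the exporter's actual configuration:

```go
// Sketch of batch insertion over the ClickHouse native protocol.
package main

import (
	"context"
	"log"
	"time"

	"github.com/ClickHouse/clickhouse-go/v2"
)

func sendBatch(ctx context.Context, rows [][]any) error {
	conn, err := clickhouse.Open(&clickhouse.Options{
		Addr: []string{"127.0.0.1:9000"}, // hypothetical address
		Settings: clickhouse.Settings{
			// Larger blocks help keep one batch a single atomic insert.
			"max_insert_block_size": 1048576,
		},
	})
	if err != nil {
		return err
	}
	defer conn.Close()

	// PrepareBatch buffers rows client-side; nothing is sent yet.
	batch, err := conn.PrepareBatch(ctx, "INSERT INTO example_logs")
	if err != nil {
		return err
	}
	for _, r := range rows {
		// Append adds one row to the in-memory batch.
		if err := batch.Append(r...); err != nil {
			return err
		}
	}
	// Send ships the whole batch in a single native-protocol insert.
	return batch.Send()
}

func main() {
	rows := [][]any{{time.Now(), "hello"}, {time.Now(), "world"}}
	if err := sendBatch(context.Background(), rows); err != nil {
		log.Fatal(err)
	}
}
```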

mauidude marked this pull request as ready for review March 23, 2023 17:28
mauidude changed the title from "[wip] Clickhouse exporter improvements for logs" to "Clickhouse exporter improvements for logs" on Mar 23, 2023
mauidude merged commit 4bc4399 into main on Mar 23, 2023
mauidude pushed a commit that referenced this pull request Aug 9, 2023
…emetry#24676)

**Description:** The metadata.yml for the SSH check receiver currently
documents a resource attribute containing the SSH endpoint, but this
attribute is not emitted. This PR updates the receiver to include this
resource attribute.

**Link to tracking Issue:** open-telemetry#24441 

**Testing:**

Example collector config:
```yaml
receivers:
  sshcheck:
    endpoint: 13.245.150.131:22
    username: ec2-user
    key_file: /Users/dewald.dejager/.ssh/sandbox.pem
    collection_interval: 15s
    known_hosts: /Users/dewald.dejager/.ssh/known_hosts
    ignore_host_key: false
    resource_attributes:
      "ssh.endpoint":
        enabled: true

exporters:
  logging:
    verbosity: detailed
  prometheus:
    endpoint: 0.0.0.0:8081
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers: [sshcheck]
      exporters: [logging, prometheus]
```

The log output looks like this:
```
2023-07-30T16:52:38.724+0200    info    MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 2, "data points": 2}
2023-07-30T16:52:38.724+0200    info    ResourceMetrics #0
Resource SchemaURL: 
Resource attributes:
     -> ssh.endpoint: Str(13.245.150.131:22)
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope otelcol/sshcheckreceiver 0.82.0-dev
Metric #0
Descriptor:
     -> Name: sshcheck.duration
     -> Description: Measures the duration of SSH connection.
     -> Unit: ms
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2023-07-30 14:52:22.381672 +0000 UTC
Timestamp: 2023-07-30 14:52:38.404003 +0000 UTC
Value: 319
Metric #1
Descriptor:
     -> Name: sshcheck.status
     -> Description: 1 if the SSH client successfully connected, otherwise 0.
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2023-07-30 14:52:22.381672 +0000 UTC
Timestamp: 2023-07-30 14:52:38.404003 +0000 UTC
Value: 1
```

And the Prometheus metrics look like this:
```
# HELP sshcheck_duration Measures the duration of SSH connection.
# TYPE sshcheck_duration gauge
sshcheck_duration{ssh_endpoint="13.245.150.131:22"} 311
# HELP sshcheck_status 1 if the SSH client successfully connected, otherwise 0.
# TYPE sshcheck_status gauge
sshcheck_status{ssh_endpoint="13.245.150.131:22"} 1
```
RoryCrispin pushed a commit that referenced this pull request Sep 5, 2023
)

**Description:** 

Adding command line argument `--status-code` to `telemetrygen traces`,
which accepts `(Unset,Error,Ok)` (case sensitive) or the enum equivalent
of `(0,1,2)`.

Running 

```shell
telemetrygen traces --otlp-insecure --traces 1 --status-code 1
```

against a minimal local collector yields

```txt
2023-07-29T21:27:57.862+0100	info	ResourceSpans #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope telemetrygen
Span #0
    Trace ID       : f6dc4be32c78b9999c69d504a79e68c1
    Parent ID      : 4e2cd6e0e90cf2ea
    ID             : 20835413e32d26a5
    Name           : okey-dokey
    Kind           : Server
    Start time     : 2023-07-29 20:27:57.861602 +0000 UTC
    End time       : 2023-07-29 20:27:57.861726 +0000 UTC
    Status code    : Error
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID       : f6dc4be32c78b9999c69d504a79e68c1
    Parent ID      :
    ID             : 4e2cd6e0e90cf2ea
    Name           : lets-go
    Kind           : Client
    Start time     : 2023-07-29 20:27:57.861584 +0000 UTC
    End time       : 2023-07-29 20:27:57.861726 +0000 UTC
    Status code    : Error
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
```

and similarly (the string version)

```shell
telemetrygen traces --otlp-insecure --traces 1 --status-code '"Ok"'
```

produces 

```txt
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope telemetrygen
Span #0
    Trace ID       : dfd830da170acfe567b12f87685d7917
    Parent ID      : 8e15b390dc6a1ccc
    ID             : 165c300130532072
    Name           : okey-dokey
    Kind           : Server
    Start time     : 2023-07-29 20:29:16.026965 +0000 UTC
    End time       : 2023-07-29 20:29:16.027089 +0000 UTC
    Status code    : Ok
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID       : dfd830da170acfe567b12f87685d7917
    Parent ID      :
    ID             : 8e15b390dc6a1ccc
    Name           : lets-go
    Kind           : Client
    Start time     : 2023-07-29 20:29:16.026956 +0000 UTC
    End time       : 2023-07-29 20:29:16.027089 +0000 UTC
    Status code    : Ok
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
```

The default is `Unset`, which is the current behaviour.

**Link to tracking Issue:**

24286

**Testing:**

Added unit tests which cover both valid and invalid inputs.

**Documentation:**

Command line arguments are self-documenting via the usage info in the
flag.
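
For illustration, a sketch of one way such a flag could be implemented with `flag.Value`; this is an assumed approach, not telemetrygen's actual code (which, per the quoting in the example above, appears to also accept a JSON-quoted form):

```go
// Sketch of a flag.Value accepting a status-code name or its enum value.
package main

import (
	"flag"
	"fmt"

	"go.opentelemetry.io/otel/codes"
)

type statusCodeFlag struct{ code codes.Code }

func (s *statusCodeFlag) String() string {
	switch s.code {
	case codes.Error:
		return "Error"
	case codes.Ok:
		return "Ok"
	default:
		return "Unset"
	}
}

// Set accepts the name (Unset, Error, Ok) or its enum value (0, 1, 2).
func (s *statusCodeFlag) Set(v string) error {
	switch v {
	case "Unset", "0":
		s.code = codes.Unset
	case "Error", "1":
		s.code = codes.Error
	case "Ok", "2":
		s.code = codes.Ok
	default:
		return fmt.Errorf("invalid status code %q", v)
	}
	return nil
}

func main() {
	sc := &statusCodeFlag{} // zero value is codes.Unset, the default
	flag.Var(sc, "status-code", "span status: Unset, Error, Ok or 0, 1, 2")
	flag.Parse()
	fmt.Println("status code:", sc)
}
```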

Co-authored-by: Pablo Baeyens <[email protected]>
RoryCrispin pushed a commit that referenced this pull request Nov 24, 2023
open-telemetry#29116)

**Description:** 

As originally proposed in open-telemetry#26991 before I got distracted

Exposes the duration of generated spans as a command line parameter. It
uses a `DurationVar` flag so units can be easily provided and are
automatically applied.

Example usage:

```bash
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10ns # nanoseconds
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10us # microseconds
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10ms # milliseconds
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10s # seconds
```
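
For context, a minimal sketch of the `DurationVar` mechanism: the standard library parses the unit suffix, so no extra handling is needed (the default value below is illustrative, not telemetrygen's):

```go
// Sketch showing how flag.DurationVar applies units automatically.
package main

import (
	"flag"
	"fmt"
	"time"
)

func main() {
	var spanDuration time.Duration
	// time.ParseDuration understands suffixes like ns, us, ms, s.
	flag.DurationVar(&spanDuration, "span-duration", 100*time.Microsecond, "duration of each generated span")
	flag.Parse()

	// A span's end time is simply its start time plus the requested duration.
	start := time.Now()
	end := start.Add(spanDuration)
	fmt.Println("span start:", start, "end:", end)
}
```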

**Testing:** 

Ran without the argument provided (`telemetrygen traces --traces 1
--otlp-insecure`) and saw spans published with the default value.

Ran again with the argument provided: `telemetrygen traces --traces 1
--otlp-insecure --span-duration 1s`

And observed the expected output:

```
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL: 
InstrumentationScope telemetrygen 
Span #0
    Trace ID       : 8b441587ffa5820688b87a6b511d634c
    Parent ID      : 39faad428638791b
    ID             : 88f0886894bd4ee2
    Name           : okey-dokey
    Kind           : Server
    Start time     : 2023-11-12 02:05:07.97443 +0000 UTC
    End time       : 2023-11-12 02:05:08.97443 +0000 UTC
    Status code    : Unset
    Status message : 
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID       : 8b441587ffa5820688b87a6b511d634c
    Parent ID      : 
    ID             : 39faad428638791b
    Name           : lets-go
    Kind           : Client
    Start time     : 2023-11-12 02:05:07.97443 +0000 UTC
    End time       : 2023-11-12 02:05:08.97443 +0000 UTC
    Status code    : Unset
    Status message : 
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
	{"kind": "exporter", "data_type": "traces", "name": "debug"}
```

**Documentation:** No documentation added.

---------

Co-authored-by: Pablo Baeyens <[email protected]>
RoryCrispin pushed a commit that referenced this pull request Jul 19, 2024
**Description:**
This PR implements the new container logs parser as it was proposed at
open-telemetry#31959.

**Link to tracking Issue:** open-telemetry#31959

**Testing:**

Added unit tests. Providing manual testing steps as well:

### How to test this manually

1. Using the following config file:
```yaml
receivers:
  filelog:
    start_at: end
    include_file_name: false
    include_file_path: true
    include:
    - /var/log/pods/*/*/*.log
    operators:
      - id: container-parser
        type: container
        output: m1
      - type: move
        id: m1
        from: attributes.k8s.pod.name
        to: attributes.val
      - id: some
        type: add
        field: attributes.key2.key_in
        value: val2

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [debug]
      processors: []
```
2. Start the collector:
`./bin/otelcontribcol_linux_amd64 --config
~/otelcol/container_parser/config.yaml`
3. Use the following bash script to create some logs:
```bash
#! /bin/bash

echo '2024-04-13T07:59:37.505201169-05:00 stdout P This is a very very long crio line th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '{"log":"INFO: log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2024-04-13T07:59:37.505201169-05:00 stdout F at is awesome! crio is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout P some containerd log th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
echo '{"log":"INFO: another log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F at is super awesome! Containerd is awesome' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log



echo '2024-04-13T07:59:37.505201169-05:00 stdout F standalone crio line which is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F standalone containerd line that is super awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
```
4. Run the above as a bash script to verify any parallel processing.
Verify that the output is correct.
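
For reference, a sketch (not the collector's actual parser) of how a CRI-style line like those above splits into timestamp, stream, partial/full tag, and body:

```go
// Sketch of splitting a CRI container log line into its four fields.
package main

import (
	"fmt"
	"strings"
	"time"
)

type criLine struct {
	ts     time.Time
	stream string // stdout or stderr
	logtag string // "P" = partial line, "F" = full (final) line
	body   string
}

func parseCRI(line string) (criLine, error) {
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		return criLine{}, fmt.Errorf("malformed line: %q", line)
	}
	ts, err := time.Parse(time.RFC3339Nano, parts[0])
	if err != nil {
		return criLine{}, err
	}
	return criLine{ts: ts, stream: parts[1], logtag: parts[2], body: parts[3]}, nil
}

func main() {
	l, _ := parseCRI("2024-04-13T07:59:37.505201169-05:00 stdout P This is a very very long crio line th")
	fmt.Printf("%+v\n", l)
	// Partial ("P") lines are recombined with following lines until an "F" line.
}
```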


### Test manually on k8s

1. `make docker-otelcontribcol && docker tag otelcontribcol
otelcontribcol-dev:0.0.1 && kind load docker-image
otelcontribcol-dev:0.0.1`
2. Install using the following helm values file:
```yaml
mode: daemonset
presets:
  logsCollection:
    enabled: true

image:
  repository: otelcontribcol-dev
  tag: "0.0.1"
  pullPolicy: IfNotPresent

command:
  name: otelcontribcol

config:
  exporters:
    debug:
      verbosity: detailed
  receivers:
    filelog:
      start_at: end
      include_file_name: false
      include_file_path: true
      exclude:
        - /var/log/pods/default_daemonset-opentelemetry-collector*_*/opentelemetry-collector/*.log
      include:
        - /var/log/pods/*/*/*.log
      operators:
        - id: container-parser
          type: container
          output: some
        - id: some
          type: add
          field: attributes.key2.key_in
          value: val2


  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [batch]
        exporters: [debug]
```
3. Check collector's output to verify the logs are parsed properly:
```console
2024-05-10T07:52:02.307Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 2}
2024-05-10T07:52:02.307Z	info	ResourceLog #0
Resource SchemaURL: 
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-05-10 07:52:02.046236071 +0000 UTC
Timestamp: 2024-05-10 07:52:01.92533954 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:01)
Attributes:
     -> log: Map({"iostream":"stdout"})
     -> time: Str(2024-05-10T07:52:01.92533954Z)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
Trace ID: 
Span ID: 
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-05-10 07:52:02.046411602 +0000 UTC
Timestamp: 2024-05-10 07:52:02.027386192 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:02)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
     -> time: Str(2024-05-10T07:52:02.027386192Z)
     -> log: Map({"iostream":"stdout"})
     -> logtag: Str(F)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> key2: Map({"key_in":"val2"})
Trace ID: 
Span ID: 
Flags: 0
...
```


**Documentation:** Added.

Signed-off-by: ChrsMark <[email protected]>
RoryCrispin pushed a commit that referenced this pull request Jul 19, 2024
…try#33225)

**Description:**
Using the DB span example below, the X-Ray exporter failed to generate
the expected DB call subsegment names because it could not parse JDBC
connection strings that start with the `jdbc:` prefix.
```
Span #1
    Trace ID       : 663a0b68a5e3849c09c07f914b3df738
    Parent ID      : 1052e2a4a2516884
    ID             : 374de78b552e23c2
    Name           : orders@no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com
    Kind           : Client
    Start time     : 2024-05-07 11:07:20.62 +0000 UTC
    End time       : 2024-05-07 11:07:20.624 +0000 UTC
    Status code    : Unset
    Status message :
Attributes:
     -> db.connection_string: Str(jdbc:mysql://no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306)
     -> db.name: Str(orders)
     -> db.system: Str(MySQL)
     -> db.user: Str([email protected])
```
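
A sketch of the idea behind the fix (the helper below is hypothetical, not the exporter's actual code): strip the `jdbc:` prefix so the remainder parses as an ordinary URL:

```go
// Sketch: recover the host from a JDBC-style connection string.
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func hostFromConnString(conn string) (string, error) {
	// jdbc:mysql://host:3306 -> mysql://host:3306
	conn = strings.TrimPrefix(conn, "jdbc:")
	u, err := url.Parse(conn)
	if err != nil {
		return "", err
	}
	return u.Host, nil
}

func main() {
	h, err := hostFromConnString("jdbc:mysql://no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306")
	fmt.Println(h, err)
	// Output: no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306 <nil>
}
```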

**Link to tracking Issue:**

**Testing:** Local tests.
RoryCrispin pushed a commit that referenced this pull request Jul 19, 2024
…pen-telemetry#33353)

**Description:**
Container parser should add k8s metadata as resource attributes and not
as log record attributes.

**Link to tracking Issue:** Fixes open-telemetry#33341

**Testing:**
Manual testing on local k8s cluster:

```console
2024-06-04T06:40:08.219Z	info	ResourceLog #0
Resource SchemaURL: 
Resource attributes:
     -> k8s.pod.uid: Str(d5ecc924-e255-4525-b5be-6437939b1e4d)
     -> k8s.container.name: Str(busybox)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(daemonset-logs-dhzcq)
     -> k8s.container.restart_count: Str(0)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-06-04 06:40:08.007370503 +0000 UTC
Timestamp: 2024-06-04 06:40:07.855932421 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> time: Str(2024-06-04T06:40:07.855932421Z)
     -> log.iostream: Str(stdout)
Trace ID: 
Span ID: 
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-06-04 06:40:08.007451031 +0000 UTC
Timestamp: 2024-06-04 06:40:07.957875321 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> log.iostream: Str(stdout)
     -> time: Str(2024-06-04T06:40:07.957875321Z)
     -> key2: Map({"key_in":"val2"})
     -> logtag: Str(F)
Trace ID: 
Span ID: 
Flags: 0
```

**Documentation:** ~

---------

Signed-off-by: ChrsMark <[email protected]>
RoryCrispin pushed a commit that referenced this pull request Jul 19, 2024
…try.log_response_body` config (open-telemetry#33854)

**Description:**
- Add `telemetry.log_request_body` and `telemetry.log_response_body`
config options for debugging. The debug log will contain the fields
`request_body` and/or `response_body` in the same log line, instead of
separate lines, to avoid interleaved log lines.
- Change the "Request failed" log level to debug.

Output:
```
2024-07-02T14:09:24.983+0100	debug	elasticsearchexporter/elasticsearch_bulk.go:67	Request roundtrip completed.	{"kind": "exporter", "data_type": "logs", "name": "elasticsearch", "response_body": "{\"version\":{\"number\":\"1.2.3\"}}\n", "path": "/", "method": "GET", "duration": 0.000865486, "status": "200 OK"}
2024-07-02T14:09:24.984+0100	debug	elasticsearchexporter/elasticsearch_bulk.go:67	Request roundtrip completed.	{"kind": "exporter", "data_type": "logs", "name": "elasticsearch", "request_body": "{\"create\":{\"_index\":\"logs-test-idx\"}}\n{\"@timestamp\":\"2024-07-02T13:09:24.970187592Z\",\"Attributes\":{\"a\":\"test\",\"b\":5,\"batch_index\":\"batch_1\",\"c\":3,\"d\":true,\"item_index\":\"item_1\"},\"Body\":\"Load Generator Counter #0\",\"Scope\":{\"name\":\"\",\"version\":\"\"},\"SeverityNumber\":11,\"SeverityText\":\"INFO3\",\"TraceFlags\":1}\n{\"create\":{\"_index\":\"logs-test-idx\"}}\n{\"@timestamp\":\"2024-07-02T13:09:24.970187592Z\",\"Attributes\":{\"a\":\"test\",\"b\":5,\"batch_index\":\"batch_1\",\"c\":3,\"d\":true,\"item_index\":\"item_2\"},\"Body\":\"Load Generator Counter #1\",\"Scope\":{\"name\":\"\",\"version\":\"\"},\"SeverityNumber\":11,\"SeverityText\":\"INFO3\",\"TraceFlags\":1}\n", "response_body": "{\"took\":0,\"errors\":false,\"items\":[{\"create\":{\"_index\":\"logs-test-idx\",\"_id\":\"\",\"_version\":0,\"result\":\"\",\"status\":201,\"_seq_no\":0,\"_primary_term\":0,\"_shards\":{\"total\":0,\"successful\":0,\"failed\":0},\"error\":{\"type\":\"\",\"reason\":\"\",\"caused_by\":{\"type\":\"\",\"reason\":\"\"}}}},{\"create\":{\"_index\":\"logs-test-idx\",\"_id\":\"\",\"_version\":0,\"result\":\"\",\"status\":201,\"_seq_no\":0,\"_primary_term\":0,\"_shards\":{\"total\":0,\"successful\":0,\"failed\":0},\"error\":{\"type\":\"\",\"reason\":\"\",\"caused_by\":{\"type\":\"\",\"reason\":\"\"}}}}]}\n", "path": "/_bulk", "method": "POST", "duration": 0.000539979, "status": "200 OK"}
```

Required config to enable this logging:
```yaml
exporters:
  elasticsearch:
    telemetry:
      log_request_body: true
      log_response_body: true
    
service:
  telemetry:
    logs:
      level: debug
```

For easier analysis, limit the request body size. Use
`num_workers`=1 and lower `flush.bytes` and/or `flush.interval`.

**Link to tracking Issue:**

**Testing:**

Manually verified with a modified integration test.

**Documentation:**
Garbett1 pushed a commit that referenced this pull request Oct 24, 2024
… Histo --> Histogram (open-telemetry#33824)

## Description

This PR adds a custom metric function to the transformprocessor to
convert exponential histograms to explicit histograms.

Link to tracking issue: Resolves open-telemetry#33827

**Function Name**
```
convert_exponential_histogram_to_explicit_histogram
```

**Arguments:**

- `distribution` (_upper, midpoint, uniform, random_)
- `ExplicitBoundaries: []float64`

**Usage example:**

```yaml
processors:
  transform:
    error_mode: propagate
    metric_statements:
    - context: metric
      statements:
        - convert_exponential_histogram_to_explicit_histogram("random", [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]) 
```

**Converts:**

```
Resource SchemaURL: 
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope  
Metric #0
Descriptor:
     -> Name: response_time
     -> Description: 
     -> Unit: 
     -> DataType: ExponentialHistogram
     -> AggregationTemporality: Delta
ExponentialHistogramDataPoints #0
Data point attributes:
     -> metric_type: Str(timing)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-07-31 09:35:25.212037 +0000 UTC
Count: 44
Sum: 999.000000
Min: 40.000000
Max: 245.000000
Bucket (32.000000, 64.000000], Count: 10
Bucket (64.000000, 128.000000], Count: 22
Bucket (128.000000, 256.000000], Count: 12
        {"kind": "exporter", "data_type": "metrics", "name": "debug"}
```

**To:**

```
Resource SchemaURL: 
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope  
Metric #0
Descriptor:
     -> Name: response_time
     -> Description: 
     -> Unit: 
     -> DataType: Histogram
     -> AggregationTemporality: Delta
HistogramDataPoints #0
Data point attributes:
     -> metric_type: Str(timing)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-07-30 21:37:07.830902 +0000 UTC
Count: 44
Sum: 999.000000
Min: 40.000000
Max: 245.000000
ExplicitBounds #0: 10.000000
ExplicitBounds #1: 20.000000
ExplicitBounds #2: 30.000000
ExplicitBounds #3: 40.000000
ExplicitBounds #4: 50.000000
ExplicitBounds #5: 60.000000
ExplicitBounds #6: 70.000000
ExplicitBounds #7: 80.000000
ExplicitBounds #8: 90.000000
ExplicitBounds #9: 100.000000
Buckets #0, Count: 0
Buckets #1, Count: 0
Buckets #2, Count: 0
Buckets #3, Count: 2
Buckets #4, Count: 5
Buckets #5, Count: 0
Buckets #6, Count: 3
Buckets #7, Count: 7
Buckets #8, Count: 2
Buckets #9, Count: 4
Buckets #10, Count: 21
        {"kind": "exporter", "data_type": "metrics", "name": "debug"}
```

### Testing

- Several unit tests have been created. We have also tested by ingesting
and converting exponential histograms from the `statsdreceiver`, as well
as directly via the `otlpreceiver` over gRPC, for several hours with a
large amount of data.

- We have clients that have been running this solution in production for
a number of weeks.

### Readme description:

### convert_exponential_hist_to_explicit_hist

`convert_exponential_hist_to_explicit_hist([ExplicitBounds])`

The `convert_exponential_hist_to_explicit_hist` function converts an
ExponentialHistogram to an Explicit (_normal_) Histogram.

`ExplicitBounds` represents the list of bucket boundaries for the new
histogram. This argument is __required__ and __cannot be empty__.

__WARNING:__

The process of converting an ExponentialHistogram to an Explicit
Histogram is not perfect and may result in a loss of precision. It is
important to define an appropriate set of bucket boundaries to minimize
this loss. For example, selecting boundaries that are too high or too
low may result in histogram buckets that are too wide or too narrow,
respectively.
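
To make the `distribution` argument concrete, here is a sketch of the "midpoint" strategy under simplifying assumptions (each source bucket's whole count goes to the explicit bucket containing that bucket's midpoint); this is illustrative, not the processor's actual implementation:

```go
// Sketch: redistribute exponential-histogram bucket counts into explicit
// bounds using each source bucket's midpoint.
package main

import "fmt"

// expBucket is a source bucket (lower, upper] with a count.
type expBucket struct {
	lower, upper float64
	count        uint64
}

func midpointConvert(src []expBucket, bounds []float64) []uint64 {
	// len(bounds)+1 target buckets: (..b0], (b0..b1], ..., (bN..+inf)
	out := make([]uint64, len(bounds)+1)
	for _, b := range src {
		mid := (b.lower + b.upper) / 2
		i := len(bounds) // overflow bucket by default
		for j, bound := range bounds {
			if mid <= bound {
				i = j
				break
			}
		}
		out[i] += b.count
	}
	return out
}

func main() {
	// The three source buckets from the example above.
	src := []expBucket{{32, 64, 10}, {64, 128, 22}, {128, 256, 12}}
	bounds := []float64{10, 20, 30, 40, 50, 60, 70, 80, 90, 100}
	fmt.Println(midpointConvert(src, bounds))
	// Midpoints 48, 96, 192 land in (40,50], (90,100], and overflow:
	// [0 0 0 0 10 0 0 0 0 22 12]
}
```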

---------

Co-authored-by: Kent Quirk <[email protected]>
Co-authored-by: Tyler Helmuth <[email protected]>
Garbett1 pushed a commit that referenced this pull request Oct 24, 2024
…etry#35544)

**Description:**

As described in open-telemetry#35491, it is useful to give users the
option of defining `receiver_creator`'s templates per container.

In this regard, the current PR introduces a new type of Endpoint called
`PodContainer` that matches the rule type `pod.container`. This Endpoint
is emitted for each container of the Pod similarly to how the `Port`
Endpoints are emitted per container that defines a port.

A complete example of how to use this feature to apply different parsing
to each of the Pod's containers is provided in the `How to test this
manually` section.

**Link to tracking Issue:** Fixes open-telemetry#35491

**Testing:** TBA

**Documentation:** TBA


### How to test this manually

1. Use the following values file to deploy the Collector's Helm chart
```yaml
mode: daemonset

image:
  repository: otelcontribcol-dev
  tag: "latest"
  pullPolicy: IfNotPresent

command:
  name: otelcontribcol

clusterRole:
  create: true
  rules:
   - apiGroups:
     - ''
     resources:
     - 'pods'
     - 'nodes'
     verbs:
     - 'get'
     - 'list'
     - 'watch'
   - apiGroups: [ "" ]
     resources: [ "nodes/proxy"]
     verbs: [ "get" ]
   - apiGroups:
       - ""
     resources:
       - nodes/stats
     verbs:
       - get
   - nonResourceURLs:
       - "/metrics"
     verbs:
       - get

extraVolumeMounts:
 - name: varlogpods
   mountPath: /var/log/pods
   readOnly: true

extraVolumes:
  - name: varlogpods
    hostPath:
      path: /var/log/pods

config:
  extensions:
    k8s_observer:
      auth_type: serviceAccount
      node: ${env:K8S_NODE_NAME}
      observe_nodes: true
  exporters:
    debug:
      verbosity: basic
  receivers:
    receiver_creator/logs:
      watch_observers: [ k8s_observer ]
      receivers:
        filelog/busybox:
          rule: type == "pod.container" && pod.labels["otel.logs"] == "true" && container_name == "busybox"
          config:
            include:
              - /var/log/pods/`pod.namespace`_`pod.name`_`pod.uid`/`container_name`/*.log
            include_file_name: false
            include_file_path: true
            operators:
              - id: container-parser
                type: container
              - type: add
                field: attributes.log.template
                value: busybox
        filelog/lazybox:
          rule: type == "pod.container" && pod.labels["otel.logs"] == "true" && container_name == "lazybox"
          config:
            include:
              - /var/log/pods/`pod.namespace`_`pod.name`_`pod.uid`/`container_name`/*.log
            include_file_name: false
            include_file_path: true
            operators:
              - id: container-parser
                type: container
              - type: add
                field: attributes.log.template
                value: lazybox
  service:
    extensions: [health_check, k8s_observer]
    pipelines:
      logs:
        receivers: [receiver_creator/logs]
        processors: [batch]
        exporters: [debug]
```
2. Follow the logs of the Collector's Pod, i.e. `k logs -f
daemonset-opentelemetry-collector-agent-2hrg5`
3. Deploy a sample Pod which consists of 2 different containers:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-logs
  labels:
    app: daemonset-logs
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: migration-logger
      otel.logs: "true"
  template:
    metadata:
      labels:
        app.kubernetes.io/component: migration-logger
        otel.logs: "true"
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: lazybox
          image: busybox
          args:
            - /bin/sh
            - -c
            - while true; do echo "otel logs at $(date +%H:%M:%S)" && sleep 0.1s; done
        - name: busybox
          image: busybox
          args:
            - /bin/sh
            - -c
            - while true; do echo "otel logs at $(date +%H:%M:%S)" && sleep 0.1s; done
```

Verify in the logs that only 2 filelog receivers are started, one per
container:

```console
2024-10-02T12:05:17.506Z	info	[email protected]/observerhandler.go:96	starting receiver	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/lazybox", "endpoint": "10.244.0.13", "endpoint_id": "k8s_observer/01543800-cfea-4c10-8220-387e60f65151/lazybox"}
2024-10-02T12:05:17.508Z	info	adapter/receiver.go:47	Starting stanza receiver	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/lazybox/receiver_creator/logs{endpoint=\"10.244.0.13\"}/k8s_observer/01543800-cfea-4c10-8220-387e60f65151/lazybox"}
2024-10-02T12:05:17.508Z	info	[email protected]/observerhandler.go:96	starting receiver	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/busybox", "endpoint": "10.244.0.13", "endpoint_id": "k8s_observer/01543800-cfea-4c10-8220-387e60f65151/busybox"}
2024-10-02T12:05:17.510Z	info	adapter/receiver.go:47	Starting stanza receiver	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/busybox/receiver_creator/logs{endpoint=\"10.244.0.13\"}/k8s_observer/01543800-cfea-4c10-8220-387e60f65151/busybox"}
2024-10-02T12:05:17.709Z	info	fileconsumer/file.go:256	Started watching file	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/lazybox/receiver_creator/logs{endpoint=\"10.244.0.13\"}/k8s_observer/01543800-cfea-4c10-8220-387e60f65151/lazybox", "component": "fileconsumer", "path": "/var/log/pods/default_daemonset-logs-sz4zk_01543800-cfea-4c10-8220-387e60f65151/lazybox/0.log"}
2024-10-02T12:05:17.712Z	info	fileconsumer/file.go:256	Started watching file	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/busybox/receiver_creator/logs{endpoint=\"10.244.0.13\"}/k8s_observer/01543800-cfea-4c10-8220-387e60f65151/busybox", "component": "fileconsumer", "path": "/var/log/pods/default_daemonset-logs-sz4zk_01543800-cfea-4c10-8220-387e60f65151/busybox/0.log"}
```

In addition, verify that the proper attributes are added per container
according to the 2 different filelog receiver definitions:


```console
2024-10-02T12:23:55.117Z	info	ResourceLog #0
Resource SchemaURL: 
Resource attributes:
     -> k8s.pod.name: Str(daemonset-logs-sz4zk)
     -> k8s.container.restart_count: Str(0)
     -> k8s.pod.uid: Str(01543800-cfea-4c10-8220-387e60f65151)
     -> k8s.container.name: Str(lazybox)
     -> k8s.namespace.name: Str(default)
     -> container.id: Str(63a8e69bdc6ee95ee7918baf913a548190f32838adeb0e6189a8210e05157b40)
     -> container.image.name: Str(busybox)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-10-02 12:23:54.896772888 +0000 UTC
Timestamp: 2024-10-02 12:23:54.750904381 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 12:23:54)
Attributes:
     -> log.iostream: Str(stdout)
     -> logtag: Str(F)
     -> log: Map({"template":"lazybox"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-sz4zk_01543800-cfea-4c10-8220-387e60f65151/lazybox/0.log)
Trace ID: 
Span ID: 
Flags: 0
ResourceLog #1
Resource SchemaURL: 
Resource attributes:
     -> k8s.container.restart_count: Str(0)
     -> k8s.pod.uid: Str(01543800-cfea-4c10-8220-387e60f65151)
     -> k8s.container.name: Str(busybox)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(daemonset-logs-sz4zk)
     -> container.id: Str(47163758424f2bc5382b1e9702301be23cab368b590b5fbf0b30affa09b4a199)
     -> container.image.name: Str(busybox)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-10-02 12:23:54.897788935 +0000 UTC
Timestamp: 2024-10-02 12:23:54.749885634 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 12:23:54)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-sz4zk_01543800-cfea-4c10-8220-387e60f65151/busybox/0.log)
     -> logtag: Str(F)
     -> log.iostream: Str(stdout)
     -> log: Map({"template":"busybox"})
Trace ID: 
Span ID: 
Flags: 0
```

Signed-off-by: ChrsMark <[email protected]>