
[exporter/prometheusremotewrite] Permanent error: Permanent error: context deadline exceeded #32511

Closed

sterziev88 opened this issue Apr 18, 2024 · 5 comments

Labels: bug (Something isn't working), exporter/prometheusremotewrite, needs triage (New item requiring triage), Stale

Comments

@sterziev88
Component(s)

No response

What happened?

Description

Hi,
I have deployed the OpenTelemetry Collector, to which I already send logs and traces from my applications.
At the same time I have configured my application to send metrics over OTLP as well. Prometheus is already deployed and works as expected. Now I just want to use the collector instead of having Prometheus scrape the metrics itself.

For the OTLP receiver I have this:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: x.x.x.x:4317
      http:
        endpoint: x.x.x.x:4318
```

What do I have to set for the prometheusremotewrite exporter in order to forward metrics from the collector to Prometheus?
At the moment I have this:

```yaml
prometheusremotewrite:
  endpoint: http://prometheus-thanos.svc.cluster.local:10901/api/v1/push
  tls:
    insecure: true
```

but I receive this in my collector logs:

```
2024-04-18T11:49:02.916Z error exporterhelper/queue_sender.go:97 Exporting failed. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: context deadline exceeded", "dropped_items": 12}
go.opentelemetry.io/collector/exporter/exporterhelper.newQueueSender.func1
go.opentelemetry.io/collector/[email protected]/exporterhelper/queue_sender.go:97
go.opentelemetry.io/collector/exporter/internal/queue.(*boundedMemoryQueue[...]).Consume
go.opentelemetry.io/collector/[email protected]/internal/queue/bounded_memory_queue.go:57
go.opentelemetry.io/collector/exporter/internal/queue.(*Consumers[...]).Start.func1
go.opentelemetry.io/collector/[email protected]/internal/queue/consumers.go:43
```

Steps to Reproduce

Install the OpenTelemetry Collector with Helm chart version 0.85.0

Expected Result

To be able to see my metrics in Prometheus

Actual Result

I don't see any metrics

Collector version

0.85.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: x.x.x.x:4317
      http:
        endpoint: x.x.x.x:4318
exporters:
  prometheusremotewrite:
    endpoint: http://prometheus-thanos.svc.cluster.local:10901/api/v1/push
    tls:
      insecure: true
```
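
Note: the configuration above only defines receivers and exporters; they also need to be wired into a metrics pipeline for the exporter to receive any data. A minimal sketch of the service section, assuming the component names above (this part was not included in the issue):

```yaml
# Assumed service section (not shown in the reported configuration):
# routes metrics from the otlp receiver to the prometheusremotewrite exporter.
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```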

Log output

```
2024-04-18T11:49:02.916Z error exporterhelper/queue_sender.go:97 Exporting failed. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: context deadline exceeded", "dropped_items": 12}
go.opentelemetry.io/collector/exporter/exporterhelper.newQueueSender.func1
go.opentelemetry.io/collector/[email protected]/exporterhelper/queue_sender.go:97
go.opentelemetry.io/collector/exporter/internal/queue.(*boundedMemoryQueue[...]).Consume
go.opentelemetry.io/collector/[email protected]/internal/queue/bounded_memory_queue.go:57
go.opentelemetry.io/collector/exporter/internal/queue.(*Consumers[...]).Start.func1
go.opentelemetry.io/collector/[email protected]/internal/queue/consumers.go:43
```

Additional context

I already send my metrics over OTLP (on the application level everything is set up properly). Now I want to forward them from the collector to Prometheus.
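
Note: because the dropped batches fail with a context deadline, one setting that is sometimes adjusted (not confirmed as the fix for this particular issue) is the exporter's request timeout. A hedged sketch, assuming the endpoint above actually accepts remote-write requests; the value is illustrative:

```yaml
# Illustrative only: a longer request timeout for the remote write exporter.
# The 30s value is an assumption, not taken from the issue.
prometheusremotewrite:
  endpoint: http://prometheus-thanos.svc.cluster.local:10901/api/v1/push
  timeout: 30s
  tls:
    insecure: true
```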

sterziev88 added the bug and needs triage labels on Apr 18, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

ioanc commented Jun 24, 2024

I am running into the same issue with prometheusremotewrite, sending metrics to a VictoriaMetrics single node.
I built the collector using the following steps:

  • Dockerfile

```dockerfile
# Test
# dist:
#     name: otelcol-metrics
#     description: Basic OTel Collector distribution for Developers
#     output_path: /otelcol-dev
#     # output_path: ./otelcol-dev-alpine
#     otelcol_version: 0.103.0

FROM docker.io/golang:1.22.4-alpine3.20 AS build-stage
LABEL org.opencontainers.image.authors="ioan corcodel"
WORKDIR /app
RUN apk add curl --update
RUN curl -k --proto '=https' --tlsv1.2 -fL -o ocb https://github.com/open-telemetry/opentelemetry-collector/releases/download/cmd%2Fbuilder%2Fv0.103.1/ocb_0.103.1_linux_amd64
RUN chmod +x ./ocb
COPY otel-builder.yaml ./
RUN CGO_ENABLED=0 ./ocb --config ./otel-builder.yaml

# Run the tests in the container
FROM build-stage AS run-test-stage
RUN /otelcol-dev/otelcol-metrics --help

# Deploy the application binary into a lean image
FROM gcr.io/distroless/base-debian11 AS build-release-stage

WORKDIR /

COPY --from=build-stage /otelcol-dev/otelcol-metrics /otelcol-metrics
ENTRYPOINT ["/otelcol-metrics"]
```
  • otel-builder.yaml

```yaml
# https://github.com/open-telemetry/opentelemetry-collector/issues/6373
# export CGO_ENABLED=0

dist:
  name: otelcol-metrics
  description: Basic OTel Collector distribution for Developers
  output_path: /otelcol-dev
  # output_path: ./otelcol-dev-alpine
  otelcol_version: 0.103.0

exporters:
  - gomod:
      # NOTE: Prior to v0.86.0 use the `loggingexporter` instead of `debugexporter`.
      go.opentelemetry.io/collector/exporter/debugexporter v0.103.0
  - gomod:
      go.opentelemetry.io/collector/exporter/otlpexporter v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/exporter/azuremonitorexporter v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/exporter/azuredataexplorerexporter v0.103.0
  - gomod:
      go.opentelemetry.io/collector/exporter/nopexporter v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusremotewriteexporter v0.103.0

processors:
  - gomod:
      go.opentelemetry.io/collector/processor/batchprocessor v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/processor/intervalprocessor v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/processor/k8sattributesprocessor v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/processor/metricstransformprocessor v0.103.0

receivers:
  - gomod:
      go.opentelemetry.io/collector/receiver/otlpreceiver v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/receiver/k8sclusterreceiver v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/receiver/kubeletstatsreceiver v0.103.0
  - gomod:
      go.opentelemetry.io/collector/receiver/nopreceiver v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/receiver/prometheusreceiver v0.103.0
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/receiver/simpleprometheusreceiver v0.103.0

extensions:
  - gomod:
      github.com/open-telemetry/opentelemetry-collector-contrib/extension/healthcheckextension v0.103.0
```
  • container image available on Docker Hub: docker.io/a9d593e2/otelcol-metrics:v0.103.0
  • using the following ConfigMap as the configuration file:

```yaml
apiVersion: v1
data:
  config-otl-collect.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
      prometheus:
        config:
          scrape_configs:
            - job_name: 'otel-collector'
              scrape_interval: 5s
              static_configs:
                - targets: ['0.0.0.0:18888']
            - job_name: k8s
              kubernetes_sd_configs:
              - role: pod
              relabel_configs:
              - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                regex: "true"
                action: keep
              metric_relabel_configs:
              - source_labels: [__name__]
                regex: "(request_duration_seconds.*|response_duration_seconds.*)"
                action: keep
            - job_name: 'defalt-ns'
              kubernetes_sd_configs:
              - role: pod
              relabel_configs:
              - source_labels: [__meta_kubernetes_namespace_default]
                regex: "true"
                action: keep
    processors:
      batch:

    exporters:
      debug:
        verbosity: detailed
      prometheusremotewrite:
        endpoint: http://vmsingle-vmsingle-otel.metrics-victoria-otel.svc.cluster.local:8429/api/v1/write
      prometheus:
        endpoint: 0.0.0.0:8889
        const_labels:
          otel: otel-test
        send_timestamps: true
        metric_expiration: 180m
        enable_open_metrics: true
        add_metric_suffixes: true
        resource_to_telemetry_conversion:
          enabled: true

    service:
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
        metrics:
          receivers: [otlp,prometheus]
          processors: [batch]
          exporters: [debug, prometheus,prometheusremotewrite]
    #    traces:
    #      receivers: [otlp]
    #      processors: [batch]
    #      exporters: [debug]
kind: ConfigMap
metadata:
  name: otelcol-metrics-gw-01
  namespace: default
```
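
For completeness, a hypothetical fragment showing how such a ConfigMap might be mounted into the collector Deployment and passed to the custom binary; the mount path, volume name, and flag value are assumptions, not taken from this comment:

```yaml
# Hypothetical pod spec fragment; names and paths are assumptions.
containers:
  - name: otelcol-metrics
    image: docker.io/a9d593e2/otelcol-metrics:v0.103.0
    args: ["--config=/conf/config-otl-collect.yaml"]
    volumeMounts:
      - name: otelcol-config
        mountPath: /conf
volumes:
  - name: otelcol-config
    configMap:
      name: otelcol-metrics-gw-01
```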

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label on Aug 26, 2024
@dashpole
Contributor

Dup of #31910. See #31910 (comment)
