prometheusremotewrite context deadline exceeded #31910
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
I'm seeing something similar. I added some details over on this issue: open-telemetry/opentelemetry-collector#8217 (comment) |
I managed to solve (or brute force?) the issue by setting this in the exporter, the default is 5 consumers.
|
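The setting being referred to is presumably the prometheusremotewrite exporter's remote_write_queue.num_consumers, which defaults to 5. A minimal sketch of that kind of change (the endpoint is a placeholder and the value 20 is only an example, not the commenter's exact config):

exporters:
  prometheusremotewrite:
    endpoint: http://example-backend/api/v1/push  # placeholder endpoint
    remote_write_queue:
      enabled: true
      queue_size: 10000    # default
      num_consumers: 20    # raised from the default of 5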
Thank you @martinohansen. I'll try, but it's just a workaround and maybe this issue needs to be fixed. |
I'm having a similar error, but increasing num_consumers didn't help. Even with two instances it still fails with:
No more info is available. This is my config:
What can we do to further troubleshoot this issue? |
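One generic way to get more detail (a sketch, not taken from any config in this thread) is to raise the Collector's own log level, which can surface more information about the underlying request failures than "context deadline exceeded" alone:

service:
  telemetry:
    logs:
      level: debug  # default is info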
After 6 hours, I finally figured it out: An nginx config in mimir-lb that doesn't update the IP addresses of the upstream servers. One of the upstream containers restarted and got a new IP address, which wasn't reflected in nginx. |
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping a code owner.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
I'm encountering the same error. From the error messages, it is not clear to me whether writing to the remote endpoint is failing (i.e. does |
I am getting a similar issue here. I am using opentelemetry-collector-contrib 0.103.1.
|
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping a code owner.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
triage:
|
I'm also getting this error when trying to use prometheusremotewrite... Can't figure out what the issue is. Error from
Config:
exporters:
  prometheusremotewrite:
    add_metric_suffixes: false
    endpoint: http://mimir-.../api/v1/push
    headers:
      Authorization: Bearer my-token-here
      X-Scope-OrgID: my-org-id
    max_batch_size_bytes: 30000000
    tls:
      insecure_skip_verify: true

Also, when trying to use the otlphttp exporter I'm getting a 499. I've tried both exporters to send into Mimir.

UPDATE: I have solved this with the following exporters config:
exporters:
  prometheusremotewrite:
    endpoint: http://mimir-.../api/v1/push
    headers:
      Authorization: Bearer my-token-here
    tls:
      insecure_skip_verify: true |
For me it worked by changing from this:
to this:
|
I am having a similar issue as: |
Same for me, please help. |
the same :(
|
I have the same error:
My config.
|
I fixed my case; it was caused by changing the DNS of the endpoint, which made it unreachable. |
Component(s)
exporter/prometheusremotewrite
What happened?
Description
If the endpoint is not reachable and OTEL can't send metrics, I get some error messages.
Steps to Reproduce
Expected Result
No error messages, but an info log to let you know that the collector is queuing the metrics due to endpoint downtime.
Actual Result
2024-03-22T14:35:06.566Z error exporterhelper/queue_sender.go:97 Exporting failed. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: context deadline exceeded", "dropped_items": 2353}
go.opentelemetry.io/collector/exporter/exporterhelper.newQueueSender.func1
    go.opentelemetry.io/collector/exporter@v0.96.0/exporterhelper/queue_sender.go:97
go.opentelemetry.io/collector/exporter/internal/queue.(*boundedMemoryQueue[...]).Consume
    go.opentelemetry.io/collector/exporter@v0.96.0/internal/queue/bounded_memory_queue.go:57
go.opentelemetry.io/collector/exporter/internal/queue.(*Consumers[...]).Start.func1
    go.opentelemetry.io/collector/exporter@v0.96.0/internal/queue/consumers.go:43
2024-03-22T14:35:12.008Z error exporterhelper/queue_sender.go:97 Exporting failed. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: context deadline exceeded", "dropped_items": 24}
go.opentelemetry.io/collector/exporter/exporterhelper.newQueueSender.func1
    go.opentelemetry.io/collector/exporter@v0.96.0/exporterhelper/queue_sender.go:97
go.opentelemetry.io/collector/exporter/internal/queue.(*boundedMemoryQueue[...]).Consume
    go.opentelemetry.io/collector/exporter@v0.96.0/internal/queue/bounded_memory_queue.go:57
go.opentelemetry.io/collector/exporter/internal/queue.(*Consumers[...]).Start.func1
    go.opentelemetry.io/collector/exporter@v0.96.0/internal/queue/consumers.go:43
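For reference, the settings that govern this behavior on the prometheusremotewrite exporter are its timeout, retry_on_failure, and remote_write_queue blocks. A minimal sketch with assumed values and a placeholder endpoint (note that errors the exporter classifies as permanent, as in the log above, are dropped rather than retried):

exporters:
  prometheusremotewrite:
    endpoint: http://example-backend/api/v1/push  # placeholder endpoint
    timeout: 30s                 # per-request timeout; assumed value, raised from the default
    retry_on_failure:
      enabled: true
      initial_interval: 5s       # assumed values below
      max_interval: 30s
      max_elapsed_time: 300s
    remote_write_queue:
      enabled: true
      queue_size: 10000
      num_consumers: 5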
Collector version
0.96.0
Environment information
Environment
Docker image: otel/opentelemetry-collector:0.96.0
OpenTelemetry Collector configuration
Log output
Additional context
No response