elasticsearchexporter failed to connect to elastic cloud #29689
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
thx, I will check it ASAP
The elasticsearch exporter does not support ES 8.x yet.
@JaredTan95 I am looking for issues as a first-time contributor to OTel. Could I take this up and try to add support for ES 8.x? If yes, could you please help me get started. Any pointers will be helpful. Thanks.
@JaredTan95 Could you clarify which versions are supported? It would be good to document this if possible.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the triagers.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
/label waiting-for-author

Not being able to send to Elasticsearch 8.x means this exporter is unusable for many users. I'll happily test it if needed, but I don't have time to prepare a proper PR, sorry.
I do not experience any issues with sending to Elastic Cloud 8.x using the elasticsearch exporter.

Setup:

ocb-config-main.yaml:

```yaml
dist:
  module: github.com/open-telemetry/opentelemetry-collector # the module name for the new distribution, following Go mod conventions. Optional, but recommended.
  name: collector # the binary name. Optional.
  description: "Custom OpenTelemetry Collector distribution" # a long name for the application. Optional.
  otelcol_version: "0.96.0" # the OpenTelemetry Collector version to use as base for the distribution. Optional.
  output_path: ./build_main/ # the path to write the output (sources and binary). Optional.
  version: "1.0.0" # the version for your custom OpenTelemetry Collector. Optional.
  # go: "/usr/bin/go" # which Go binary to use to compile the generated sources. Optional.
  # debug_compilation: false # enabling this causes the builder to keep the debug symbols in the resulting binary. Optional.

exporters:
  - gomod: "github.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticsearchexporter v0.96.0" # the Go module for the component. Required.

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.96.0
```

Command to build the otel collector:
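The exact build command isn't shown above; assuming the OpenTelemetry Collector Builder is installed as `builder` (its default binary name when installed via `go install`), it would look roughly like this:

```sh
# Generate sources and compile the custom collector described by the builder config above.
builder --config=ocb-config-main.yaml
```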
otelcol-main.yaml:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317
      http:
        endpoint: localhost:4318

exporters:
  elasticsearch:
    endpoints: [ "https://**redacted**.cloud.es.io" ]
    logs_index: foo
    api_key: **redacted**
    retry:
      enabled: true
      max_requests: 10000

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: []
      exporters: [elasticsearch]
```

Command to run the collector:
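The run command isn't shown here either; given the builder config's `output_path: ./build_main/` and `name: collector`, it was presumably along these lines:

```sh
# Start the freshly built collector with the configuration above.
./build_main/collector --config otelcol-main.yaml
```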
In another terminal, run the command to send sample logs:
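The original log-generation command isn't captured here; one common option is telemetrygen from opentelemetry-collector-contrib. The invocation below (flags and count are an assumption, not the commenter's exact command) would push 100 log records over OTLP/gRPC:

```sh
# Send 100 sample log records to the collector's OTLP gRPC endpoint without TLS.
telemetrygen logs --otlp-insecure --otlp-endpoint localhost:4317 --logs 100
```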
In Kibana, there are 100 logs indexed:
The error in this issue appears to be a connectivity issue rather than a bug in the elasticsearch exporter.
@JaredTan95 do you mind clarifying what is not supported?
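As an aside, one quick way to test the connectivity hypothesis is to hit the Elastic Cloud endpoint directly from the host or pod that runs the collector, bypassing the exporter entirely. The endpoint and key below are placeholders, not values from this issue:

```sh
# Verify reachability and API-key authentication against the Elastic Cloud deployment.
curl -sS -H "Authorization: ApiKey <base64-encoded-api-key>" "https://<deployment>.cloud.es.io"
```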
In our tests (to bridge AWS CloudWatch logs to Elastic self-hosted v8) we had to build the collector with go-elasticsearch v8 (basically waiting for this PR to be merged).
It would be helpful to get the actual error log in your case, since I am not able to reproduce the issue. On a separate note, since go-elasticsearch v7 supposedly works for both v7 and v8, and upgrading to go-elasticsearch v8 would break support for v7 (see issue), I don't see #30262 getting merged soon. However, if you could show some errors from your use case, that could help make a case for adding support specifically for v8 (via a feature flag, maybe?)
Thanks, the screenshot is very helpful. Here's my hypothesis: as you can see from the stacktrace, there's a timeout_sender.go:49. Apparently, there's a default timeout of 5s for request export, while the exporter code has a default 90s timeout passed into the go-elasticsearch bulk indexer. This means that, by default, if Elasticsearch takes >5s to respond, the context will reach its deadline before the go-elasticsearch HTTP client gives up, hence the "context deadline exceeded" error. It is a combination of bad hardcoded defaults and a slow Elasticsearch.

As to why upgrading to go-elasticsearch v8 solves your issue, it could be either changes within go-elasticsearch v8 itself, or changes in how go-elasticsearch is used. Do you mind sharing the exact code changes you made to upgrade from go-elasticsearch v7 to v8, as well as the collector configuration (please redact sensitive information)? It must be more than just a go mod replace, since there is code that references v7 explicitly. It could also be that certain errors are retried in v7 but not in v8, such that the retries in v7 take >5s and cause the context deadline to be exceeded before the bulk indexer can finish flushing.
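To make the timeout interaction above concrete, here is a minimal self-contained Go sketch (not the exporter's actual code; the server delay and timeout values are illustrative) showing how an outer 5s context deadline overrides a 90s HTTP client timeout and surfaces as "context deadline exceeded":

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// Stand-in for a slow Elasticsearch endpoint: it only answers after 10 seconds.
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(10 * time.Second)
	}))
	defer slow.Close()

	// Analogue of the 90s timeout the exporter hands to the go-elasticsearch bulk indexer.
	client := &http.Client{Timeout: 90 * time.Second}

	// Analogue of exporterhelper's default 5s request timeout (timeout_sender).
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	req, _ := http.NewRequestWithContext(ctx, http.MethodPost, slow.URL, nil)
	_, err := client.Do(req)

	// The shorter context deadline wins: prints an error wrapping "context deadline exceeded".
	fmt.Println(err)
}
```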
I doubt our Elasticsearch API takes more than 5s to respond, but I lack the data from when the tests were conducted. I'll ask my colleague to chip in and provide some more context as soon as possible.
This issue has been closed as inactive because it has been stale for 120 days with no activity. |
Component(s)
exporter/elasticsearch
What happened?
Description
My environment is hosted on an EKS 1.26.0 cluster, and Elasticsearch 8.10 is hosted on Elastic Cloud. I'm using the elasticsearch exporter to send logs to Elastic Cloud. I'm getting errors that it keeps dialing 10.46.48.34:18422 and timing out, and I don't know where this IP comes from.
Below is the error message.
Steps to Reproduce
Expected Result
The elasticsearch exporter can send logs to Elastic Cloud.
Actual Result
Collector version
0.89
Environment information
Environment
EKS: 1.26
OpenTelemetry Collector configuration
Log output
Additional context
No response