From 54d0dcba50f189bac1d8842a64f95bb696d06f69 Mon Sep 17 00:00:00 2001 From: Andrew Wilkins Date: Wed, 12 Jun 2024 22:48:18 +0800 Subject: [PATCH] [exporter/elasticsearch] Use confighttp.ClientConfig (#33367) **Description:** Move `Authentication` out of `ClientConfig`, and replace `ClientConfig` with `confighttp.ClientConfig`. This enables all common `confighttp` functionality, such as auth extensions, with the exception of "compression". For now compression remains unconfigurable, and is always enabled and uses gzip (this is defined within the go-docappender dependency). We should add some benchmarks (https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/32504) before switching to the `confighttp`-based compressors. The exporter now implements a `component.StartFunc`, and we create the bulk indexer there so that we have access to `component.Host` for calling `confighttp.ClientConfig.ToClient`. **Link to tracking Issue:** N/A **Testing:** - Added a test verifying that `confighttp`'s `auth` is supported, with a mock client auth - Performed some manual testing with the `basicauth` extension, verifying the username/password are passed through to ES as configured **Documentation:** Updated the exporter's README: - added mention that `confighttp`'s `endpoint` setting is now supported too - simplified the example configuration - removed the `confighttp`, `configtls`, and `sender_queue` settings, and replaced with links to their docs - split the configuration settings into categories, with the required ones up front (URL + credentials) and optional ones in multiple sections under "Advanced configuration" --------- Co-authored-by: Carson Ip Co-authored-by: Adrian Cole <64215+codefromthecrypt@users.noreply.github.com> Co-authored-by: Andrzej Stencel --- .../elasticsearchexporter-confighttp.yaml | 30 +++ exporter/elasticsearchexporter/README.md | 204 +++++++++++------- exporter/elasticsearchexporter/config.go | 112 +++++----- exporter/elasticsearchexporter/config_test.go | 71 ++++-- .../elasticsearch_bulk.go | 55 +++-- exporter/elasticsearchexporter/exporter.go | 60 ++++-- .../elasticsearchexporter/exporter_test.go | 78 ++++++- exporter/elasticsearchexporter/factory.go | 38 ++-- .../elasticsearchexporter/factory_test.go | 4 +- exporter/elasticsearchexporter/go.mod | 15 +- exporter/elasticsearchexporter/go.sum | 26 ++- .../testdata/config.yaml | 2 + 12 files changed, 456 insertions(+), 239 deletions(-) create mode 100644 .chloggen/elasticsearchexporter-confighttp.yaml diff --git a/.chloggen/elasticsearchexporter-confighttp.yaml b/.chloggen/elasticsearchexporter-confighttp.yaml new file mode 100644 index 000000000000..6e849fe34d0d --- /dev/null +++ b/.chloggen/elasticsearchexporter-confighttp.yaml @@ -0,0 +1,30 @@ +# Use this changelog template to create an entry for release notes. + +# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix' +change_type: enhancement + +# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver) +component: elasticsearchexporter + +# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`). +note: Add support for confighttp options, notably "auth". + +# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists. +issues: [33367] + +# (Optional) One or more lines of additional information to render under the primary note. 
+# These lines will be padded with 2 spaces and then inserted directly into the document. +# Use pipe (|) for multiline entries. +subtext: | + Add support for confighttp and related configuration settings, such as "auth". + This change also means that the Elasticsearch URL may be specified as "endpoint", + like the otlphttp exporter. + +# If your change doesn't affect end users or the exported elements of any package, +# you should instead start your pull request title with [chore] or use the "Skip Changelog" label. +# Optional: The change log or logs in which this entry should be included. +# e.g. '[user]' or '[user, api]' +# Include 'user' if the change is relevant to end users. +# Include 'api' if there is a change to a library API. +# Default: '[user]' +change_logs: [user] diff --git a/exporter/elasticsearchexporter/README.md b/exporter/elasticsearchexporter/README.md index 7a6d815475c5..43b799582107 100644 --- a/exporter/elasticsearchexporter/README.md +++ b/exporter/elasticsearchexporter/README.md @@ -16,29 +16,82 @@ This exporter supports sending OpenTelemetry logs and traces to [Elasticsearch]( ## Configuration options -- `endpoints`: List of Elasticsearch URLs. If `endpoints` and `cloudid` are missing, the - ELASTICSEARCH_URL environment variable will be used. -- `cloudid` (optional): - [ID](https://www.elastic.co/guide/en/cloud/current/ec-cloud-id.html) of the - Elastic Cloud Cluster to publish events to. The `cloudid` can be used instead - of `endpoints`. -- `num_workers` (default=runtime.NumCPU()): Number of workers publishing bulk requests concurrently. -- `index` (DEPRECATED, please use `logs_index` for logs, `traces_index` for traces): The - [index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices.html) - or [data stream](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.html) - name to publish events to. The default value is `logs-generic-default`. -- `logs_index`: The - [index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices.html) - or [data stream](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.html) - name to publish events to. The default value is `logs-generic-default` +Exactly one of the following settings is required: + +- `endpoint` (no default): The target Elasticsearch URL to which data will be sent + (e.g. `https://elasticsearch:9200`) +- `endpoints` (no default): A list of Elasticsearch URLs to which data will be sent, + attempted in round-robin order +- `cloudid` (no default): The [Elastic Cloud ID](https://www.elastic.co/guide/en/cloud/current/ec-cloud-id.html) + of the Elastic Cloud Cluster to which data will be sent (e.g. `foo:YmFyLmNsb3VkLmVzLmlvJGFiYzEyMyRkZWY0NTY=`) + +When the above settings are missing, `endpoints` will default to the +comma-separated `ELASTICSEARCH_URL` environment variable. + +Elasticsearch credentials may be configured via [Authentication configuration][configauth] settings. +As a shortcut, the following settings are also supported: + +- `user` (optional): Username used for HTTP Basic Authentication. +- `password` (optional): Password used for HTTP Basic Authentication. +- `api_key` (optional): [Elasticsearch API Key] in "encoded" format. 
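+
+For example, a minimal configuration using these shortcut settings might look like
+the following sketch (the endpoint and credentials are placeholders):
+
+```yaml
+exporters:
+  elasticsearch:
+    endpoint: https://elasticsearch:9200 # placeholder URL
+    user: elastic       # placeholder username
+    password: changeme  # placeholder password
+```
+
+The example below instead configures credentials through the `basicauth` extension.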
+ +Example: + +```yaml +exporters: + elasticsearch: + endpoint: https://elastic.example.com:9200 + auth: + authenticator: basicauth + +extensions: + basicauth: + username: elastic + password: changeme + +······ + +service: + extensions: [basicauth] + pipelines: + logs: + receivers: [otlp] + processors: [batch] + exporters: [elasticsearch] + traces: + receivers: [otlp] + processors: [batch] + exporters: [elasticsearch] +``` + +## Advanced configuration + +### HTTP settings + +The Elasticsearch exporter supports common [HTTP Configuration Settings][confighttp], except for `compression` (all requests are uncompressed). +As a consequence of supporting [confighttp], the Elasticsearch exporter also supports common [TLS Configuration Settings][configtls]. + +The Elasticsearch exporter sets `timeout` (HTTP request timeout) to 90s by default. +All other defaults are as defined by [confighttp]. + +### Queuing + +The Elasticsearch exporter supports the common [`sending_queue` settings][exporterhelper]. However, the sending queue is currently disabled by default. + +### Elasticsearch document routing + +Telemetry data will be written to signal specific data streams by default: +logs to `logs-generic-default`, and traces to `traces-generic-default`. +This can be customised through the following settings: + +- `index` (DEPRECATED, please use `logs_index` for logs, `traces_index` for traces): The [index] or [data stream] name to publish events to. + The default value is `logs-generic-default`. +- `logs_index`: The [index] or [data stream] name to publish events to. The default value is `logs-generic-default` - `logs_dynamic_index` (optional): takes resource or log record attribute named `elasticsearch.index.prefix` and `elasticsearch.index.suffix` resulting dynamically prefixed / suffixed indexing based on `logs_index`. (priority: resource attribute > log record attribute) - `enabled`(default=false): Enable/Disable dynamic index for log records -- `traces_index`: The - [index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices.html) - or [data stream](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.html) - name to publish traces to. The default value is `traces-generic-default`. +- `traces_index`: The [index] or [data stream] name to publish traces to. The default value is `traces-generic-default`. - `traces_dynamic_index` (optional): takes resource or span attribute named `elasticsearch.index.prefix` and `elasticsearch.index.suffix` resulting dynamically prefixed / suffixed indexing based on `traces_index`. (priority: resource attribute > span attribute) @@ -49,23 +102,17 @@ This exporter supports sending OpenTelemetry logs and traces to [Elasticsearch]( The last string appended belongs to the date when the data is being generated. - `prefix_separator`(default=`-`): Set a separator between logstash_prefix and date. - `date_format`(default=`%Y.%m.%d`): Time format (based on strftime) to generate the second part of the Index name. -- `pipeline` (optional): Optional [Ingest pipeline](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html) ID used for processing documents published by the exporter. -- `flush`: Event bulk indexer buffer flush settings - - `bytes` (default=5000000): Write buffer flush size limit. - - `interval` (default=30s): Write buffer flush time limit. -- `retry`: Elasticsearch bulk request retry settings - - `enabled` (default=true): Enable/Disable request retry on error. 
Failed requests are retried with exponential backoff.
-  - `max_requests` (default=3): Number of HTTP request retries.
-  - `initial_interval` (default=100ms): Initial waiting time if a HTTP request failed.
-  - `max_interval` (default=1m): Max waiting time if a HTTP request failed.
-  - `retry_on_status` (default=[429, 500, 502, 503, 504]): Status codes that trigger request or document level retries. Request level retry and document level retry status codes are shared and cannot be configured separately. To avoid duplicates, it is recommended to set it to `[429]`. WARNING: The default will be changed to `[429]` in the future.
+
+### Elasticsearch document mapping
+
+The Elasticsearch exporter supports several document schemas and preprocessing
+behaviours, which may be configured through the following settings:
+
 - `mapping`: Events are encoded to JSON. The `mapping` allows users to configure
   additional mapping rules.
   - `mode` (default=none): The fields naming mode. valid modes are:
     - `none`: Use original fields and event structure from the OTLP event.
-    - `ecs`: Try to map fields defined in the
-      [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/semantic-conventions) (version 1.22.0)
-      to [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current/index.html). :warning: This mode's behavior is unstable, it is currently undergoing changes
+    - `ecs`: Try to map fields to [Elastic Common Schema (ECS)][ECS]
     - `raw`: Omit the `Attributes.` string prefixed to field names for log and
       span attributes as well as omit the `Events.` string prefixed to field
      names for span events.
@@ -77,65 +124,60 @@ This exporter supports sending OpenTelemetry logs and traces to [Elasticsearch](
     will reject documents that have duplicate fields.
   - `dedot` (default=true): When enabled attributes with `.` will be split into
     proper json objects.
-- `sending_queue`
  - `enabled` (default = false)
-  - `num_consumers` (default = 10): Number of consumers that dequeue batches; ignored if `enabled` is `false`
-  - `queue_size` (default = 1000): Maximum number of batches kept in queue; ignored if `enabled` is `false`;
 
-### HTTP settings
-- `read_buffer_size` (default=0): Read buffer size of HTTP client.
-- `write_buffer_size` (default=0): Write buffer size of HTTP client.
-- `timeout` (default=90s): HTTP request time limit.
-- `headers` (optional): Headers to be sent with each HTTP request.
+#### ECS mapping mode
 
-### Security and Authentication settings
+> [!WARNING]
+> The ECS mapping mode is currently undergoing changes, and its behaviour is unstable.
 
-- `user` (optional): Username used for HTTP Basic Authentication.
-- `password` (optional): Password used for HTTP Basic Authentication.
-- `api_key` (optional): Authorization [API Key](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html) in "encoded" format.
+In ECS mapping mode, the Elasticsearch exporter attempts to map fields from
+[OpenTelemetry Semantic Conventions][SemConv] (version 1.22.0) to [Elastic Common Schema][ECS].
+This mode may be used for compatibility with existing dashboards that work with ECS.
+
+### Elasticsearch ingest pipeline
+
+Documents may be optionally passed through an [Elasticsearch Ingest pipeline] prior to indexing.
+This can be configured through the following settings:
+
+- `pipeline` (optional): ID of an [Elasticsearch Ingest pipeline] used for processing documents published by the exporter.
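+
+For example, assuming an ingest pipeline named `otel-logs` has already been created
+in Elasticsearch, the following sketch would route documents through it (the
+pipeline ID and endpoint are placeholders):
+
+```yaml
+exporters:
+  elasticsearch:
+    endpoint: https://elasticsearch:9200 # placeholder URL
+    pipeline: otel-logs                  # placeholder ingest pipeline ID
+```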
+
+### Elasticsearch bulk indexing
 
-### TLS settings
-- `ca_file` (optional): Root Certificate Authority (CA) certificate, for
-  verifying the server's identity, if TLS is enabled.
-- `cert_file` (optional): Client TLS certificate.
-- `key_file` (optional): Client TLS key.
-- `insecure` (optional): In gRPC when set to true, this is used to disable the client transport security. In HTTP, this disables verifying the server's certificate chain and host name.
-- `insecure_skip_verify` (optional): Will enable TLS but not verify the certificate.
+The Elasticsearch exporter uses the [Elasticsearch Bulk API] for indexing documents.
+The behaviour of this bulk indexing can be configured with the following settings:
+
+- `num_workers` (default=runtime.NumCPU()): Number of workers publishing bulk requests concurrently.
+- `flush`: Event bulk indexer buffer flush settings
+  - `bytes` (default=5000000): Write buffer flush size limit.
+  - `interval` (default=30s): Write buffer flush time limit.
+- `retry`: Elasticsearch bulk request retry settings
+  - `enabled` (default=true): Enable/Disable request retry on error. Failed requests are retried with exponential backoff.
+  - `max_requests` (default=3): Number of HTTP request retries.
+  - `initial_interval` (default=100ms): Initial waiting time if an HTTP request failed.
+  - `max_interval` (default=1m): Max waiting time if an HTTP request failed.
+  - `retry_on_status` (default=[429, 500, 502, 503, 504]): Status codes that trigger request or document level retries. Request level retry and document level retry status codes are shared and cannot be configured separately. To avoid duplicates, it is recommended to set it to `[429]`. WARNING: The default will be changed to `[429]` in the future.
 
-### Node Discovery
+### Elasticsearch node discovery
 
-The Elasticsearch Exporter will check Elasticsearch regularly for available
-nodes and updates the list of hosts if discovery is enabled. Newly discovered
-nodes will automatically be used for load balancing.
+The Elasticsearch Exporter will regularly check Elasticsearch for available nodes.
+Newly discovered nodes will automatically be used for load balancing.
+Settings related to node discovery are:
 
 - `discover`:
   - `on_start` (optional): If enabled the exporter queries Elasticsearch for all known nodes in the cluster on startup.
   - `interval` (optional): Interval to update the list of Elasticsearch nodes.
 
-## Example
+Node discovery can be disabled by setting `discover.interval` to 0.
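+
+For example, the following sketch enables discovery on startup and refreshes the
+node list every five minutes (the endpoint and interval are placeholders):
+
+```yaml
+exporters:
+  elasticsearch:
+    endpoint: https://elasticsearch:9200 # placeholder URL
+    discover:
+      on_start: true
+      interval: 5m # placeholder refresh interval
+```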
-```yaml -exporters: - elasticsearch/trace: - endpoints: [https://elastic.example.com:9200] - traces_index: trace_index - elasticsearch/log: - endpoints: [http://localhost:9200] - logs_index: my_log_index - sending_queue: - enabled: true - num_consumers: 20 - queue_size: 1000 -······ -service: - pipelines: - logs: - receivers: [otlp] - processors: [batch] - exporters: [elasticsearch/log] - traces: - receivers: [otlp] - exporters: [elasticsearch/trace] - processors: [batch] -``` +[confighttp]: https://github.com/open-telemetry/opentelemetry-collector/tree/main/config/confighttp/README.md#http-configuration-settings +[configtls]: https://github.com/open-telemetry/opentelemetry-collector/blob/main/config/configtls/README.md#tls-configuration-settings +[configauth]: https://github.com/open-telemetry/opentelemetry-collector/blob/main/config/configauth/README.md#authentication-configuration +[exporterhelper]: https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/exporterhelper/README.md +[Elasticsearch Ingest pipeline]: https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html +[Elasticsearch Bulk API]: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html +[Elasticsearch API Key]: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html +[index]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices.html +[data stream]: https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.html +[ecs]: https://www.elastic.co/guide/en/ecs/current/index.html +[SemConv]: https://github.com/open-telemetry/semantic-conventions diff --git a/exporter/elasticsearchexporter/config.go b/exporter/elasticsearchexporter/config.go index 413cbcec75f6..f1f13168de22 100644 --- a/exporter/elasticsearchexporter/config.go +++ b/exporter/elasticsearchexporter/config.go @@ -12,8 +12,8 @@ import ( "strings" "time" + "go.opentelemetry.io/collector/config/confighttp" "go.opentelemetry.io/collector/config/configopaque" - "go.opentelemetry.io/collector/config/configtls" "go.opentelemetry.io/collector/exporter/exporterhelper" ) @@ -58,12 +58,13 @@ type Config struct { // https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html Pipeline string `mapstructure:"pipeline"` - ClientConfig `mapstructure:",squash"` - Discovery DiscoverySettings `mapstructure:"discover"` - Retry RetrySettings `mapstructure:"retry"` - Flush FlushSettings `mapstructure:"flush"` - Mapping MappingsSettings `mapstructure:"mapping"` - LogstashFormat LogstashFormatSettings `mapstructure:"logstash_format"` + confighttp.ClientConfig `mapstructure:",squash"` + Authentication AuthenticationSettings `mapstructure:",squash"` + Discovery DiscoverySettings `mapstructure:"discover"` + Retry RetrySettings `mapstructure:"retry"` + Flush FlushSettings `mapstructure:"flush"` + Mapping MappingsSettings `mapstructure:"mapping"` + LogstashFormat LogstashFormatSettings `mapstructure:"logstash_format"` } type LogstashFormatSettings struct { @@ -76,25 +77,6 @@ type DynamicIndexSetting struct { Enabled bool `mapstructure:"enabled"` } -type ClientConfig struct { - Authentication AuthenticationSettings `mapstructure:",squash"` - - // ReadBufferSize for HTTP client. See http.Transport.ReadBufferSize. - ReadBufferSize int `mapstructure:"read_buffer_size"` - - // WriteBufferSize for HTTP client. See http.Transport.WriteBufferSize. 
- WriteBufferSize int `mapstructure:"write_buffer_size"` - - // Timeout configures the HTTP request timeout. - Timeout time.Duration `mapstructure:"timeout"` - - // Headers allows users to configure optional HTTP headers that - // will be send with each HTTP request. - Headers map[string]string `mapstructure:"headers,omitempty"` - - configtls.ClientConfig `mapstructure:"tls,omitempty"` -} - // AuthenticationSettings defines user authentication related settings. type AuthenticationSettings struct { // User is used to configure HTTP Basic Authentication. @@ -184,9 +166,8 @@ const ( ) var ( - errConfigNoEndpoint = errors.New("endpoints or cloudid must be specified") - errConfigEmptyEndpoint = errors.New("endpoints must not include empty entries") - errConfigCloudIDMutuallyExclusive = errors.New("only one of endpoints or cloudid may be specified") + errConfigEndpointRequired = errors.New("exactly one of [endpoint, endpoints, cloudid] must be specified") + errConfigEmptyEndpoint = errors.New("endpoint must not be empty") ) func (m MappingMode) String() string { @@ -223,32 +204,11 @@ const defaultElasticsearchEnvName = "ELASTICSEARCH_URL" // Validate validates the elasticsearch server configuration. func (cfg *Config) Validate() error { - if len(cfg.Endpoints) == 0 && cfg.CloudID == "" { - v := os.Getenv(defaultElasticsearchEnvName) - if v == "" { - return errConfigNoEndpoint - } - for _, endpoint := range strings.Split(v, ",") { - endpoint = strings.TrimSpace(endpoint) - if err := validateEndpoint(endpoint); err != nil { - return fmt.Errorf("invalid endpoint %q: %w", endpoint, err) - } - } - } - - if cfg.CloudID != "" { - if len(cfg.Endpoints) > 0 { - return errConfigCloudIDMutuallyExclusive - } - if _, err := parseCloudID(cfg.CloudID); err != nil { - return err - } + endpoints, err := cfg.endpoints() + if err != nil { + return err } - - for _, endpoint := range cfg.Endpoints { - if endpoint == "" { - return errConfigEmptyEndpoint - } + for _, endpoint := range endpoints { if err := validateEndpoint(endpoint); err != nil { return fmt.Errorf("invalid endpoint %q: %w", endpoint, err) } @@ -258,10 +218,54 @@ func (cfg *Config) Validate() error { return fmt.Errorf("unknown mapping mode %q", cfg.Mapping.Mode) } + if cfg.Compression != "" { + // TODO support confighttp.ClientConfig.Compression + return errors.New("compression is not currently configurable") + } return nil } +func (cfg *Config) endpoints() ([]string, error) { + // Exactly one of endpoint, endpoints, or cloudid must be configured. + // If none are set, then $ELASTICSEARCH_URL may be specified instead. 
+ var endpoints []string + var numEndpointConfigs int + if cfg.Endpoint != "" { + numEndpointConfigs++ + endpoints = []string{cfg.Endpoint} + } + if len(cfg.Endpoints) > 0 { + numEndpointConfigs++ + endpoints = cfg.Endpoints + } + if cfg.CloudID != "" { + numEndpointConfigs++ + u, err := parseCloudID(cfg.CloudID) + if err != nil { + return nil, err + } + endpoints = []string{u.String()} + } + if numEndpointConfigs == 0 { + if v := os.Getenv(defaultElasticsearchEnvName); v != "" { + numEndpointConfigs++ + endpoints = strings.Split(v, ",") + for i, endpoint := range endpoints { + endpoints[i] = strings.TrimSpace(endpoint) + } + } + } + if numEndpointConfigs != 1 { + return nil, errConfigEndpointRequired + } + return endpoints, nil +} + func validateEndpoint(endpoint string) error { + if endpoint == "" { + return errConfigEmptyEndpoint + } + u, err := url.Parse(endpoint) if err != nil { return err diff --git a/exporter/elasticsearchexporter/config_test.go b/exporter/elasticsearchexporter/config_test.go index 933cf2fba564..cafa1f65b541 100644 --- a/exporter/elasticsearchexporter/config_test.go +++ b/exporter/elasticsearchexporter/config_test.go @@ -12,6 +12,9 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "go.opentelemetry.io/collector/component" + "go.opentelemetry.io/collector/config/configcompression" + "go.opentelemetry.io/collector/config/confighttp" + "go.opentelemetry.io/collector/config/configopaque" "go.opentelemetry.io/collector/confmap/confmaptest" "go.opentelemetry.io/collector/exporter/exporterhelper" @@ -32,6 +35,9 @@ func TestConfig(t *testing.T) { defaultRawCfg.(*Config).Endpoints = []string{"http://localhost:9200"} defaultRawCfg.(*Config).Mapping.Mode = "raw" + defaultMaxIdleConns := 100 + defaultIdleConnTimeout := 90 * time.Second + tests := []struct { configFile string id component.ID @@ -56,17 +62,19 @@ func TestConfig(t *testing.T) { LogsIndex: "logs-generic-default", TracesIndex: "trace_index", Pipeline: "mypipeline", - ClientConfig: ClientConfig{ - Authentication: AuthenticationSettings{ - User: "elastic", - Password: "search", - APIKey: "AvFsEiPs==", - }, - Timeout: 2 * time.Minute, - Headers: map[string]string{ + ClientConfig: confighttp.ClientConfig{ + Timeout: 2 * time.Minute, + MaxIdleConns: &defaultMaxIdleConns, + IdleConnTimeout: &defaultIdleConnTimeout, + Headers: map[string]configopaque.String{ "myheader": "test", }, }, + Authentication: AuthenticationSettings{ + User: "elastic", + Password: "search", + APIKey: "AvFsEiPs==", + }, Discovery: DiscoverySettings{ OnStart: true, }, @@ -106,17 +114,19 @@ func TestConfig(t *testing.T) { LogsIndex: "my_log_index", TracesIndex: "traces-generic-default", Pipeline: "mypipeline", - ClientConfig: ClientConfig{ - Authentication: AuthenticationSettings{ - User: "elastic", - Password: "search", - APIKey: "AvFsEiPs==", - }, - Timeout: 2 * time.Minute, - Headers: map[string]string{ + ClientConfig: confighttp.ClientConfig{ + Timeout: 2 * time.Minute, + MaxIdleConns: &defaultMaxIdleConns, + IdleConnTimeout: &defaultIdleConnTimeout, + Headers: map[string]configopaque.String{ "myheader": "test", }, }, + Authentication: AuthenticationSettings{ + User: "elastic", + Password: "search", + APIKey: "AvFsEiPs==", + }, Discovery: DiscoverySettings{ OnStart: true, }, @@ -167,6 +177,13 @@ func TestConfig(t *testing.T) { cfg.Index = "my_log_index" }), }, + { + id: component.NewIDWithName(metadata.Type, "confighttp_endpoint"), + configFile: "config.yaml", + expected: withDefaultConfig(func(cfg *Config) { + 
cfg.Endpoint = "https://elastic.example.com:9200" + }), + }, } for _, tt := range tests { @@ -197,13 +214,13 @@ func TestConfig_Validate(t *testing.T) { }{ "no endpoints": { config: withDefaultConfig(), - err: "endpoints or cloudid must be specified", + err: "exactly one of [endpoint, endpoints, cloudid] must be specified", }, "empty endpoint": { config: withDefaultConfig(func(cfg *Config) { cfg.Endpoints = []string{""} }), - err: "endpoints must not include empty entries", + err: `invalid endpoint "": endpoint must not be empty`, }, "invalid endpoint": { config: withDefaultConfig(func(cfg *Config) { @@ -223,12 +240,19 @@ func TestConfig_Validate(t *testing.T) { }), err: `invalid decoded CloudID "abc"`, }, - "endpoint and cloudid both set": { + "endpoints and cloudid both set": { config: withDefaultConfig(func(cfg *Config) { cfg.Endpoints = []string{"http://test:9200"} cfg.CloudID = "foo:YmFyLmNsb3VkLmVzLmlvJGFiYzEyMyRkZWY0NTY=" }), - err: "only one of endpoints or cloudid may be specified", + err: "exactly one of [endpoint, endpoints, cloudid] must be specified", + }, + "endpoint and endpoints both set": { + config: withDefaultConfig(func(cfg *Config) { + cfg.Endpoint = "http://test:9200" + cfg.Endpoints = []string{"http://test:9200"} + }), + err: "exactly one of [endpoint, endpoints, cloudid] must be specified", }, "invalid mapping mode": { config: withDefaultConfig(func(cfg *Config) { @@ -243,6 +267,13 @@ func TestConfig_Validate(t *testing.T) { }), err: `invalid endpoint "without_scheme": invalid scheme "", expected "http" or "https"`, }, + "compression unsupported": { + config: withDefaultConfig(func(cfg *Config) { + cfg.Endpoints = []string{"http://test:9200"} + cfg.Compression = configcompression.TypeGzip + }), + err: `compression is not currently configurable`, + }, } for name, tt := range tests { diff --git a/exporter/elasticsearchexporter/elasticsearch_bulk.go b/exporter/elasticsearchexporter/elasticsearch_bulk.go index ce7e4e170dd3..c44a66f3db43 100644 --- a/exporter/elasticsearchexporter/elasticsearch_bulk.go +++ b/exporter/elasticsearchexporter/elasticsearch_bulk.go @@ -6,7 +6,6 @@ package elasticsearchexporter // import "github.com/open-telemetry/opentelemetry import ( "bytes" "context" - "crypto/tls" "fmt" "io" "net/http" @@ -18,6 +17,7 @@ import ( "github.com/cenkalti/backoff/v4" "github.com/elastic/go-docappender/v2" elasticsearch7 "github.com/elastic/go-elasticsearch/v7" + "go.opentelemetry.io/collector/component" "go.uber.org/zap" "github.com/open-telemetry/opentelemetry-collector-contrib/internal/common/sanitize" @@ -65,18 +65,20 @@ func (*clientLogger) ResponseBodyEnabled() bool { return false } -func newElasticsearchClient(logger *zap.Logger, config *Config) (*esClientCurrent, error) { - tlsCfg, err := config.ClientConfig.LoadTLSConfig(context.Background()) +func newElasticsearchClient( + ctx context.Context, + config *Config, + host component.Host, + telemetry component.TelemetrySettings, + userAgent string, +) (*esClientCurrent, error) { + httpClient, err := config.ClientConfig.ToClient(ctx, host, telemetry) if err != nil { return nil, err } - transport := newTransport(config, tlsCfg) - headers := make(http.Header) - for k, v := range config.Headers { - headers.Add(k, v) - } + headers.Set("User-Agent", userAgent) // maxRetries configures the maximum number of event publishing attempts, // including the first send and additional retries. 
@@ -88,12 +90,18 @@ func newElasticsearchClient(logger *zap.Logger, config *Config) (*esClientCurren maxRetries = 0 } + // endpoints converts Config.Endpoints, Config.CloudID, + // and Config.ClientConfig.Endpoint to a list of addresses. + endpoints, err := config.endpoints() + if err != nil { + return nil, err + } + return elasticsearch7.NewClient(esConfigCurrent{ - Transport: transport, + Transport: httpClient.Transport, // configure connection setup - Addresses: config.Endpoints, - CloudID: config.CloudID, + Addresses: endpoints, Username: config.Authentication.User, Password: string(config.Authentication.Password), APIKey: string(config.Authentication.APIKey), @@ -114,25 +122,10 @@ func newElasticsearchClient(logger *zap.Logger, config *Config) (*esClientCurren // configure internal metrics reporting and logging EnableMetrics: false, // TODO EnableDebugLogger: false, // TODO - Logger: (*clientLogger)(logger), + Logger: (*clientLogger)(telemetry.Logger), }) } -func newTransport(config *Config, tlsCfg *tls.Config) *http.Transport { - transport := http.DefaultTransport.(*http.Transport).Clone() - if tlsCfg != nil { - transport.TLSClientConfig = tlsCfg - } - if config.ReadBufferSize > 0 { - transport.ReadBufferSize = config.ReadBufferSize - } - if config.WriteBufferSize > 0 { - transport.WriteBufferSize = config.WriteBufferSize - } - - return transport -} - func createElasticsearchBackoffFunc(config *RetrySettings) func(int) time.Duration { if !config.Enabled { return nil @@ -301,8 +294,12 @@ func (w *worker) run() { } func (w *worker) flush() { - ctx, cancel := context.WithTimeout(context.Background(), w.flushTimeout) - defer cancel() + ctx := context.Background() + if w.flushTimeout > 0 { + var cancel context.CancelFunc + ctx, cancel = context.WithTimeout(context.Background(), w.flushTimeout) + defer cancel() + } stat, err := w.indexer.Flush(ctx) w.stats.docsIndexed.Add(stat.Indexed) if err != nil { diff --git a/exporter/elasticsearchexporter/exporter.go b/exporter/elasticsearchexporter/exporter.go index d3c9f124f596..2611dd42727d 100644 --- a/exporter/elasticsearchexporter/exporter.go +++ b/exporter/elasticsearchexporter/exporter.go @@ -7,52 +7,58 @@ import ( "context" "errors" "fmt" + "runtime" "time" + "go.opentelemetry.io/collector/component" + "go.opentelemetry.io/collector/exporter" "go.opentelemetry.io/collector/pdata/pcommon" "go.opentelemetry.io/collector/pdata/plog" "go.opentelemetry.io/collector/pdata/ptrace" - "go.uber.org/zap" ) type elasticsearchExporter struct { - logger *zap.Logger + component.TelemetrySettings + userAgent string + config *Config index string logstashFormat LogstashFormatSettings dynamicIndex bool + model mappingModel - client *esClientCurrent bulkIndexer *esBulkIndexerCurrent - model mappingModel } -func newExporter(logger *zap.Logger, cfg *Config, index string, dynamicIndex bool) (*elasticsearchExporter, error) { +func newExporter( + cfg *Config, + set exporter.Settings, + index string, + dynamicIndex bool, +) (*elasticsearchExporter, error) { if err := cfg.Validate(); err != nil { return nil, err } - client, err := newElasticsearchClient(logger, cfg) - if err != nil { - return nil, err - } - - bulkIndexer, err := newBulkIndexer(logger, client, cfg) - if err != nil { - return nil, err - } - model := &encodeModel{ dedup: cfg.Mapping.Dedup, dedot: cfg.Mapping.Dedot, mode: cfg.MappingMode(), } + userAgent := fmt.Sprintf( + "%s/%s (%s/%s)", + set.BuildInfo.Description, + set.BuildInfo.Version, + runtime.GOOS, + runtime.GOARCH, + ) + return 
&elasticsearchExporter{ - logger: logger, - client: client, - bulkIndexer: bulkIndexer, + TelemetrySettings: set.TelemetrySettings, + userAgent: userAgent, + config: cfg, index: index, dynamicIndex: dynamicIndex, model: model, @@ -60,8 +66,24 @@ func newExporter(logger *zap.Logger, cfg *Config, index string, dynamicIndex boo }, nil } +func (e *elasticsearchExporter) Start(ctx context.Context, host component.Host) error { + client, err := newElasticsearchClient(ctx, e.config, host, e.TelemetrySettings, e.userAgent) + if err != nil { + return err + } + bulkIndexer, err := newBulkIndexer(e.Logger, client, e.config) + if err != nil { + return err + } + e.bulkIndexer = bulkIndexer + return nil +} + func (e *elasticsearchExporter) Shutdown(ctx context.Context) error { - return e.bulkIndexer.Close(ctx) + if e.bulkIndexer != nil { + return e.bulkIndexer.Close(ctx) + } + return nil } func (e *elasticsearchExporter) pushLogsData(ctx context.Context, ld plog.Logs) error { diff --git a/exporter/elasticsearchexporter/exporter_test.go b/exporter/elasticsearchexporter/exporter_test.go index eea08bdcdd0e..1fe7c04dc799 100644 --- a/exporter/elasticsearchexporter/exporter_test.go +++ b/exporter/elasticsearchexporter/exporter_test.go @@ -6,6 +6,7 @@ package elasticsearchexporter import ( "context" "encoding/json" + "errors" "fmt" "net/http" "runtime" @@ -17,8 +18,13 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + "go.opentelemetry.io/collector/component" + "go.opentelemetry.io/collector/component/componenttest" + "go.opentelemetry.io/collector/config/configauth" + "go.opentelemetry.io/collector/config/configopaque" "go.opentelemetry.io/collector/exporter" "go.opentelemetry.io/collector/exporter/exportertest" + "go.opentelemetry.io/collector/extension/auth/authtest" "go.opentelemetry.io/collector/pdata/plog" "go.opentelemetry.io/collector/pdata/ptrace" ) @@ -133,7 +139,7 @@ func TestExporterLogs(t *testing.T) { }) exporter := newTestLogsExporter(t, server.URL, func(cfg *Config) { - cfg.Headers = map[string]string{"foo": "bah"} + cfg.Headers = map[string]configopaque.String{"foo": "bah"} }) mustSendLogRecords(t, exporter, plog.NewLogRecord()) <-done @@ -154,7 +160,7 @@ func TestExporterLogs(t *testing.T) { }) exporter := newTestLogsExporter(t, server.URL, func(cfg *Config) { - cfg.Headers = map[string]string{"User-Agent": "overridden"} + cfg.Headers = map[string]configopaque.String{"User-Agent": "overridden"} }) mustSendLogRecords(t, exporter, plog.NewLogRecord()) <-done @@ -580,6 +586,36 @@ func TestExporterTraces(t *testing.T) { }) } +// TestExporterAuth verifies that the Elasticsearch exporter supports +// confighttp.ClientConfig.Auth. 
+func TestExporterAuth(t *testing.T) { + done := make(chan struct{}, 1) + testauthID := component.NewID(component.MustNewType("authtest")) + exporter := newUnstartedTestLogsExporter(t, "http://testing.invalid", func(cfg *Config) { + cfg.Auth = &configauth.Authentication{AuthenticatorID: testauthID} + }) + err := exporter.Start(context.Background(), &mockHost{ + extensions: map[component.ID]component.Component{ + testauthID: &authtest.MockClient{ + ResultRoundTripper: roundTripperFunc(func(*http.Request) (*http.Response, error) { + select { + case done <- struct{}{}: + default: + } + return nil, errors.New("nope") + }), + }, + }, + }) + require.NoError(t, err) + defer func() { + require.NoError(t, exporter.Shutdown(context.Background())) + }() + + mustSendLogRecords(t, exporter, plog.NewLogRecord()) + <-done +} + func newTestTracesExporter(t *testing.T, url string, fns ...func(*Config)) exporter.Traces { f := NewFactory() cfg := withDefaultConfig(append([]func(*Config){func(cfg *Config) { @@ -589,6 +625,9 @@ func newTestTracesExporter(t *testing.T, url string, fns ...func(*Config)) expor }}, fns...)...) exp, err := f.CreateTracesExporter(context.Background(), exportertest.NewNopSettings(), cfg) require.NoError(t, err) + + err = exp.Start(context.Background(), componenttest.NewNopHost()) + require.NoError(t, err) t.Cleanup(func() { require.NoError(t, exp.Shutdown(context.Background())) }) @@ -596,6 +635,16 @@ func newTestTracesExporter(t *testing.T, url string, fns ...func(*Config)) expor } func newTestLogsExporter(t *testing.T, url string, fns ...func(*Config)) exporter.Logs { + exp := newUnstartedTestLogsExporter(t, url, fns...) + err := exp.Start(context.Background(), componenttest.NewNopHost()) + require.NoError(t, err) + t.Cleanup(func() { + require.NoError(t, exp.Shutdown(context.Background())) + }) + return exp +} + +func newUnstartedTestLogsExporter(t *testing.T, url string, fns ...func(*Config)) exporter.Logs { f := NewFactory() cfg := withDefaultConfig(append([]func(*Config){func(cfg *Config) { cfg.Endpoints = []string{url} @@ -604,9 +653,6 @@ func newTestLogsExporter(t *testing.T, url string, fns ...func(*Config)) exporte }}, fns...)...) 
exp, err := f.CreateLogsExporter(context.Background(), exportertest.NewNopSettings(), cfg) require.NoError(t, err) - t.Cleanup(func() { - require.NoError(t, exp.Shutdown(context.Background())) - }) return exp } @@ -639,3 +685,25 @@ func mustSendTraces(t *testing.T, exporter exporter.Traces, traces ptrace.Traces err := exporter.ConsumeTraces(context.Background(), traces) require.NoError(t, err) } + +type mockHost struct { + extensions map[component.ID]component.Component +} + +func (h *mockHost) GetFactory(kind component.Kind, typ component.Type) component.Factory { + panic(fmt.Errorf("expected call to GetFactory(%v, %v)", kind, typ)) +} + +func (h *mockHost) GetExtensions() map[component.ID]component.Component { + return h.extensions +} + +func (h *mockHost) GetExporters() map[component.DataType]map[component.ID]component.Component { + panic(fmt.Errorf("expected call to GetExporters")) +} + +type roundTripperFunc func(*http.Request) (*http.Response, error) + +func (f roundTripperFunc) RoundTrip(r *http.Request) (*http.Response, error) { + return f(r) +} diff --git a/exporter/elasticsearchexporter/factory.go b/exporter/elasticsearchexporter/factory.go index 7bee699443cb..fedd0c50b36b 100644 --- a/exporter/elasticsearchexporter/factory.go +++ b/exporter/elasticsearchexporter/factory.go @@ -9,10 +9,10 @@ import ( "context" "fmt" "net/http" - "runtime" "time" "go.opentelemetry.io/collector/component" + "go.opentelemetry.io/collector/config/confighttp" "go.opentelemetry.io/collector/exporter" "go.opentelemetry.io/collector/exporter/exporterhelper" @@ -23,7 +23,6 @@ const ( // The value of "type" key in configuration. defaultLogsIndex = "logs-generic-default" defaultTracesIndex = "traces-generic-default" - userAgentHeaderKey = "User-Agent" ) // NewFactory creates a factory for Elastic exporter. 
@@ -39,14 +38,16 @@ func NewFactory() exporter.Factory { func createDefaultConfig() component.Config { qs := exporterhelper.NewDefaultQueueSettings() qs.Enabled = false + + httpClientConfig := confighttp.NewDefaultClientConfig() + httpClientConfig.Timeout = 90 * time.Second + return &Config{ QueueSettings: qs, - ClientConfig: ClientConfig{ - Timeout: 90 * time.Second, - }, - Index: "", - LogsIndex: defaultLogsIndex, - TracesIndex: defaultTracesIndex, + ClientConfig: httpClientConfig, + Index: "", + LogsIndex: defaultLogsIndex, + TracesIndex: defaultTracesIndex, Retry: RetrySettings{ Enabled: true, MaxRequests: 3, @@ -89,9 +90,7 @@ func createLogsExporter( index = cf.Index } - setDefaultUserAgentHeader(cf, set.BuildInfo) - - exporter, err := newExporter(set.Logger, cf, index, cf.LogsDynamicIndex.Enabled) + exporter, err := newExporter(cf, set, index, cf.LogsDynamicIndex.Enabled) if err != nil { return nil, fmt.Errorf("cannot configure Elasticsearch exporter: %w", err) } @@ -101,6 +100,7 @@ func createLogsExporter( set, cfg, exporter.pushLogsData, + exporterhelper.WithStart(exporter.Start), exporterhelper.WithShutdown(exporter.Shutdown), exporterhelper.WithQueue(cf.QueueSettings), ) @@ -112,9 +112,7 @@ func createTracesExporter(ctx context.Context, cf := cfg.(*Config) - setDefaultUserAgentHeader(cf, set.BuildInfo) - - exporter, err := newExporter(set.Logger, cf, cf.TracesIndex, cf.TracesDynamicIndex.Enabled) + exporter, err := newExporter(cf, set, cf.TracesIndex, cf.TracesDynamicIndex.Enabled) if err != nil { return nil, fmt.Errorf("cannot configure Elasticsearch exporter: %w", err) } @@ -123,18 +121,8 @@ func createTracesExporter(ctx context.Context, set, cfg, exporter.pushTraceData, + exporterhelper.WithStart(exporter.Start), exporterhelper.WithShutdown(exporter.Shutdown), exporterhelper.WithQueue(cf.QueueSettings), ) } - -// set default User-Agent header with BuildInfo if User-Agent is empty -func setDefaultUserAgentHeader(cf *Config, info component.BuildInfo) { - if _, found := cf.Headers[userAgentHeaderKey]; found { - return - } - if cf.Headers == nil { - cf.Headers = make(map[string]string) - } - cf.Headers[userAgentHeaderKey] = fmt.Sprintf("%s/%s (%s/%s)", info.Description, info.Version, runtime.GOOS, runtime.GOARCH) -} diff --git a/exporter/elasticsearchexporter/factory_test.go b/exporter/elasticsearchexporter/factory_test.go index f3039cf9f1fc..4cb39d9393e9 100644 --- a/exporter/elasticsearchexporter/factory_test.go +++ b/exporter/elasticsearchexporter/factory_test.go @@ -39,7 +39,7 @@ func TestFactory_CreateLogsExporter_Fail(t *testing.T) { params := exportertest.NewNopSettings() _, err := factory.CreateLogsExporter(context.Background(), params, cfg) require.Error(t, err, "expected an error when creating a logs exporter") - assert.EqualError(t, err, "cannot configure Elasticsearch exporter: endpoints or cloudid must be specified") + assert.EqualError(t, err, "cannot configure Elasticsearch exporter: exactly one of [endpoint, endpoints, cloudid] must be specified") } func TestFactory_CreateMetricsExporter_Fail(t *testing.T) { @@ -70,7 +70,7 @@ func TestFactory_CreateTracesExporter_Fail(t *testing.T) { params := exportertest.NewNopSettings() _, err := factory.CreateTracesExporter(context.Background(), params, cfg) require.Error(t, err, "expected an error when creating a traces exporter") - assert.EqualError(t, err, "cannot configure Elasticsearch exporter: endpoints or cloudid must be specified") + assert.EqualError(t, err, "cannot configure Elasticsearch exporter: exactly one of 
[endpoint, endpoints, cloudid] must be specified") } func TestFactory_CreateLogsAndTracesExporterWithDeprecatedIndexOption(t *testing.T) { diff --git a/exporter/elasticsearchexporter/go.mod b/exporter/elasticsearchexporter/go.mod index d6886e134a2e..a27f99e89b61 100644 --- a/exporter/elasticsearchexporter/go.mod +++ b/exporter/elasticsearchexporter/go.mod @@ -12,10 +12,13 @@ require ( github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal v0.102.0 github.com/stretchr/testify v1.9.0 go.opentelemetry.io/collector/component v0.102.2-0.20240606174409-6888f8f7a45f + go.opentelemetry.io/collector/config/configauth v0.102.2-0.20240606174409-6888f8f7a45f + go.opentelemetry.io/collector/config/configcompression v1.9.1-0.20240606174409-6888f8f7a45f + go.opentelemetry.io/collector/config/confighttp v0.102.2-0.20240606174409-6888f8f7a45f go.opentelemetry.io/collector/config/configopaque v1.9.1-0.20240606174409-6888f8f7a45f - go.opentelemetry.io/collector/config/configtls v0.102.2-0.20240606174409-6888f8f7a45f go.opentelemetry.io/collector/confmap v0.102.2-0.20240606174409-6888f8f7a45f go.opentelemetry.io/collector/exporter v0.102.2-0.20240606174409-6888f8f7a45f + go.opentelemetry.io/collector/extension/auth v0.102.2-0.20240606174409-6888f8f7a45f go.opentelemetry.io/collector/pdata v1.9.1-0.20240606174409-6888f8f7a45f go.opentelemetry.io/collector/semconv v0.102.2-0.20240606174409-6888f8f7a45f go.opentelemetry.io/otel/metric v1.27.0 @@ -33,12 +36,15 @@ require ( github.com/elastic/go-elasticsearch/v8 v8.14.0 // indirect github.com/elastic/go-sysinfo v1.7.1 // indirect github.com/elastic/go-windows v1.0.1 // indirect + github.com/felixge/httpsnoop v1.0.4 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/go-logr/logr v1.4.1 // indirect github.com/go-logr/stdr v1.2.2 // indirect github.com/go-viper/mapstructure/v2 v2.0.0-alpha.1 // indirect github.com/gogo/protobuf v1.3.2 // indirect + github.com/golang/snappy v0.0.4 // indirect github.com/google/uuid v1.6.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect github.com/joeshaw/multierror v0.0.0-20140124173710-69b34d4ec901 // indirect github.com/json-iterator/go v1.1.12 // indirect github.com/klauspost/compress v1.17.8 // indirect @@ -55,21 +61,26 @@ require ( github.com/prometheus/client_model v0.6.1 // indirect github.com/prometheus/common v0.54.0 // indirect github.com/prometheus/procfs v0.15.0 // indirect + github.com/rs/cors v1.10.1 // indirect go.elastic.co/apm/module/apmzap/v2 v2.6.0 // indirect go.elastic.co/apm/v2 v2.6.0 // indirect go.elastic.co/fastjson v1.3.0 // indirect go.opentelemetry.io/collector v0.102.2-0.20240606174409-6888f8f7a45f // indirect go.opentelemetry.io/collector/config/configretry v0.102.2-0.20240606174409-6888f8f7a45f // indirect go.opentelemetry.io/collector/config/configtelemetry v0.102.2-0.20240606174409-6888f8f7a45f // indirect + go.opentelemetry.io/collector/config/configtls v0.102.2-0.20240606174409-6888f8f7a45f // indirect + go.opentelemetry.io/collector/config/internal v0.102.2-0.20240606174409-6888f8f7a45f // indirect go.opentelemetry.io/collector/consumer v0.102.2-0.20240606174409-6888f8f7a45f // indirect go.opentelemetry.io/collector/extension v0.102.2-0.20240606174409-6888f8f7a45f // indirect + go.opentelemetry.io/collector/featuregate v1.9.1-0.20240606174409-6888f8f7a45f // indirect go.opentelemetry.io/collector/receiver v0.102.2-0.20240606174409-6888f8f7a45f // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.52.0 // indirect 
go.opentelemetry.io/otel v1.27.0 // indirect go.opentelemetry.io/otel/exporters/prometheus v0.49.0 // indirect go.opentelemetry.io/otel/sdk v1.27.0 // indirect go.opentelemetry.io/otel/sdk/metric v1.27.0 // indirect go.uber.org/multierr v1.11.0 // indirect - golang.org/x/net v0.25.0 // indirect + golang.org/x/net v0.26.0 // indirect golang.org/x/sync v0.7.0 // indirect golang.org/x/sys v0.21.0 // indirect golang.org/x/text v0.16.0 // indirect diff --git a/exporter/elasticsearchexporter/go.sum b/exporter/elasticsearchexporter/go.sum index db2a4ba5f174..6e3cb8bc80b7 100644 --- a/exporter/elasticsearchexporter/go.sum +++ b/exporter/elasticsearchexporter/go.sum @@ -24,6 +24,8 @@ github.com/elastic/go-sysinfo v1.7.1/go.mod h1:i1ZYdU10oLNfRzq4vq62BEwD2fH8KaWh6 github.com/elastic/go-windows v1.0.0/go.mod h1:TsU0Nrp7/y3+VwE82FoZF8gC/XFg/Elz6CcloAxnPgU= github.com/elastic/go-windows v1.0.1 h1:AlYZOldA+UJ0/2nBuqWdo90GFCgG9xuyw9SYzGUtJm0= github.com/elastic/go-windows v1.0.1/go.mod h1:FoVvqWSun28vaDQPbj2Elfc0JahhPB7WQEGa3c814Ss= +github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= +github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= @@ -35,11 +37,15 @@ github.com/go-viper/mapstructure/v2 v2.0.0-alpha.1 h1:TQcrn6Wq+sKGkpyPvppOz99zsM github.com/go-viper/mapstructure/v2 v2.0.0-alpha.1/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM= +github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= github.com/joeshaw/multierror v0.0.0-20140124173710-69b34d4ec901 h1:rp+c0RAYOWj8l6qbCUTSiRLG/iKnW3K3/QfPPuSsBt4= github.com/joeshaw/multierror v0.0.0-20140124173710-69b34d4ec901/go.mod h1:Z86h9688Y0wesXCyonoVr47MasHilkuLMqGhRZ4Hpak= @@ -91,6 +97,8 @@ github.com/prometheus/procfs v0.15.0 h1:A82kmvXJq2jTu5YUhSGNlYoxh85zLnKgPz4bMZgI github.com/prometheus/procfs v0.15.0/go.mod h1:Y0RJ/Y5g5wJpkTisOtqwDSo4HwhGmLB4VQSw2sQJLHk= github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ= github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog= +github.com/rs/cors v1.10.1 h1:L0uuZVXIKlI1SShY2nhFfo44TYvDPQ1w4oFkUJNfhyo= +github.com/rs/cors v1.10.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU= github.com/stretchr/objx v0.1.0/go.mod 
h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= @@ -112,6 +120,12 @@ go.opentelemetry.io/collector v0.102.2-0.20240606174409-6888f8f7a45f h1:l2ZMTF7/ go.opentelemetry.io/collector v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:RxtmSO5a8f4R1kGY7/vnciw8GZTSZCljgYedEbI+iP8= go.opentelemetry.io/collector/component v0.102.2-0.20240606174409-6888f8f7a45f h1:OBqdOlHQqgt991UMBC6B04N/fLZNZS/ik/JC+XH41OE= go.opentelemetry.io/collector/component v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:hg92ib1gYoAh1TxQj4k0O/V+WH1CGs76LQTHfbJ1cU4= +go.opentelemetry.io/collector/config/configauth v0.102.2-0.20240606174409-6888f8f7a45f h1:J5AR7UiDNErP7dagJWuoKQV9/KkJjOeIjgQMFFw89hU= +go.opentelemetry.io/collector/config/configauth v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:/vhOP3TzP8kOnKTmxUx0h9Aqpd1f7sjLczMmNgEowP4= +go.opentelemetry.io/collector/config/configcompression v1.9.1-0.20240606174409-6888f8f7a45f h1:ywAW14HQh9TLbm8lwWLOwUCTcaog6zynnRYtYVMTEhg= +go.opentelemetry.io/collector/config/configcompression v1.9.1-0.20240606174409-6888f8f7a45f/go.mod h1:6+m0GKCv7JKzaumn7u80A2dLNCuYf5wdR87HWreoBO0= +go.opentelemetry.io/collector/config/confighttp v0.102.2-0.20240606174409-6888f8f7a45f h1:ZyZ9tZeO4nYNjqDfKSPFeF+Ff3C3xld028DMAUpEH7Q= +go.opentelemetry.io/collector/config/confighttp v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:NOjezGuHVir9HSBXza4PkVHjipdxPe3SnTMjUTTeXBc= go.opentelemetry.io/collector/config/configopaque v1.9.1-0.20240606174409-6888f8f7a45f h1:yMl/nKCAeL5IdQQJYtRWjk3Knf6vxQNCk+xvg4kr+Zs= go.opentelemetry.io/collector/config/configopaque v1.9.1-0.20240606174409-6888f8f7a45f/go.mod h1:2A3QtznGaN3aFnki8sHqKHjLHouyz7B4ddQrdBeohCg= go.opentelemetry.io/collector/config/configretry v0.102.2-0.20240606174409-6888f8f7a45f h1:pR8lEN+8OVG43QpFiwG7gNq3ddXWW51XnCspxJ9lH7c= @@ -120,6 +134,8 @@ go.opentelemetry.io/collector/config/configtelemetry v0.102.2-0.20240606174409-6 go.opentelemetry.io/collector/config/configtelemetry v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:WxWKNVAQJg/Io1nA3xLgn/DWLE/W1QOB2+/Js3ACi40= go.opentelemetry.io/collector/config/configtls v0.102.2-0.20240606174409-6888f8f7a45f h1:UO4qEUe/60yJO8dDXZsN4ikCfuxafXxjbIj6QEBQ93w= go.opentelemetry.io/collector/config/configtls v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:KHdrvo3cwosgDxclyiLWmtbovIwqvaIGeTXr3p5721A= +go.opentelemetry.io/collector/config/internal v0.102.2-0.20240606174409-6888f8f7a45f h1:yLweVl++Q86K3hUMgGet0B2yv/V7ZmLgqjvUpxDXN/w= +go.opentelemetry.io/collector/config/internal v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:Vig3dfeJJnuRe1kBNpszBzPoj5eYnR51wXbeq36Zfpg= go.opentelemetry.io/collector/confmap v0.102.2-0.20240606174409-6888f8f7a45f h1:MJEzd1kB1G9QRaM+QpZBWA07SM1AIynrfouhgkv4PzA= go.opentelemetry.io/collector/confmap v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:KgpS7UxH5rkd69CzAzlY2I1heH8Z7eNCZlHmwQBMxNg= go.opentelemetry.io/collector/consumer v0.102.2-0.20240606174409-6888f8f7a45f h1:hDB+qtz0EA3mTYL1zihz6fUG8Ze8l4/rTBAM5K+RNeA= @@ -128,6 +144,10 @@ go.opentelemetry.io/collector/exporter v0.102.2-0.20240606174409-6888f8f7a45f h1 go.opentelemetry.io/collector/exporter v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:6DSemHA1NG7iEgrSB9TQ0Qqc0oHDaGsAENmlCz1vlHc= go.opentelemetry.io/collector/extension v0.102.2-0.20240606174409-6888f8f7a45f h1:orWwqHaAIWDsHe22pQQNCO90vmQc8a1bUzQ/7f/luzk= 
go.opentelemetry.io/collector/extension v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:fChJ/P8Qsgcb+EF29mA5+Z2QuBQFmu5nbzSL6tP7QKY= +go.opentelemetry.io/collector/extension/auth v0.102.2-0.20240606174409-6888f8f7a45f h1:/f9y5inNPkdPXkf5q9tLzs+0umNPy33zTAKcu9VB3SE= +go.opentelemetry.io/collector/extension/auth v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:ujW13ror++ZW+QiLoY2uBAfeqnxrYUnrk2yTvvqtOIw= +go.opentelemetry.io/collector/featuregate v1.9.1-0.20240606174409-6888f8f7a45f h1:P7Dler+V5pO04DfZvy5rGi4qdDi/17Gty7Sy5N8oIQc= +go.opentelemetry.io/collector/featuregate v1.9.1-0.20240606174409-6888f8f7a45f/go.mod h1:PsOINaGgTiFc+Tzu2K/X2jP+Ngmlp7YKGV1XrnBkH7U= go.opentelemetry.io/collector/pdata v1.9.1-0.20240606174409-6888f8f7a45f h1:ZSmt73uc+xxFHuryi4G1qh3VMx069JJGxfRLgIpaOHM= go.opentelemetry.io/collector/pdata v1.9.1-0.20240606174409-6888f8f7a45f/go.mod h1:vk7LrfpyVpGZrRWcpjyy0DDZzL3SZiYMQxfap25551w= go.opentelemetry.io/collector/pdata/testdata v0.102.1 h1:S3idZaJxy8M7mCC4PG4EegmtiSaOuh6wXWatKIui8xU= @@ -136,6 +156,8 @@ go.opentelemetry.io/collector/receiver v0.102.2-0.20240606174409-6888f8f7a45f h1 go.opentelemetry.io/collector/receiver v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:jxMmi2G3dSBhhAqnn+0bT+GC+3n47P6VyD0KTnr/NeQ= go.opentelemetry.io/collector/semconv v0.102.2-0.20240606174409-6888f8f7a45f h1:e3QizVBHcpg13Sp9/ZvnZGcWP7VSKD+aNOw+vNyRczw= go.opentelemetry.io/collector/semconv v0.102.2-0.20240606174409-6888f8f7a45f/go.mod h1:yMVUCNoQPZVq/IPfrHrnntZTWsLf5YGZ7qwKulIl5hw= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.52.0 h1:9l89oX4ba9kHbBol3Xin3leYJ+252h0zszDtBwyKe2A= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.52.0/go.mod h1:XLZfZboOJWHNKUv7eH0inh0E9VV6eWDFB/9yJyTLPp0= go.opentelemetry.io/otel v1.27.0 h1:9BZoF3yMK/O1AafMiQTVu0YDj5Ea4hPhxCs7sGva+cg= go.opentelemetry.io/otel v1.27.0/go.mod h1:DMpAK8fzYRzs+bi3rS5REupisuqTheUlSZJ1WnZaPAQ= go.opentelemetry.io/otel/exporters/prometheus v0.49.0 h1:Er5I1g/YhfYv9Affk9nJLfH/+qCCVVg1f2R9AbJfqDQ= @@ -163,8 +185,8 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac= -golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM= +golang.org/x/net v0.26.0 h1:soB7SVo0PWrY4vPW/+ay0jKDNScG2X9wFeYlXIvJsOQ= +golang.org/x/net v0.26.0/go.mod h1:5YKkiSynbBIh3p6iOc/vibscux0x38BZDkn8sCUPxHE= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= diff --git a/exporter/elasticsearchexporter/testdata/config.yaml b/exporter/elasticsearchexporter/testdata/config.yaml index b75fda2cf65a..4431dd6410d6 100644 --- a/exporter/elasticsearchexporter/testdata/config.yaml +++ b/exporter/elasticsearchexporter/testdata/config.yaml @@ -57,3 +57,5 @@ elasticsearch/cloudid: elasticsearch/deprecated_index: endpoints: [https://elastic.example.com:9200] index: my_log_index +elasticsearch/confighttp_endpoint: + endpoint: 
https://elastic.example.com:9200