Go panic in OTLP exporter #16499

Closed
barclayadam opened this issue Nov 25, 2022 · 3 comments · Fixed by #16498
Labels
bug Something isn't working

Comments

@barclayadam

Describe the bug
The collector is periodically panicking and restarting (with an exit code of 2), with one of two outputs:

panic: runtime error: index out of range [-2]

goroutine 112 [running]:
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.encodeVarintMetricsService(...)
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:438
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.(*ExportMetricsServiceRequest).MarshalToSizedBuffer(0xc011b58138, {0xc018800000, 0xc666, 0xc666})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:357 +0x16d
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.(*ExportMetricsServiceRequest).Marshal(0xc01dab1f00?)
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:332 +0x56
google.golang.org/protobuf/internal/impl.legacyMarshal({{}, {0x74909f8, 0xc01dab1f00}, {0x0, 0x0, 0x0}, 0x0})
	google.golang.org/[email protected]/internal/impl/legacy_message.go:402 +0xa2
google.golang.org/protobuf/proto.MarshalOptions.marshal({{}, 0x28?, 0x0, 0x0}, {0x0, 0x0, 0x0}, {0x74909f8, 0xc01dab1f00})
	google.golang.org/[email protected]/proto/encode.go:166 +0x27b
google.golang.org/protobuf/proto.MarshalOptions.MarshalAppend({{}, 0x60?, 0x3e?, 0x4c?}, {0x0, 0x0, 0x0}, {0x7410c60?, 0xc01dab1f00?})
	google.golang.org/[email protected]/proto/encode.go:125 +0x79
github.com/golang/protobuf/proto.marshalAppend({0x0, 0x0, 0x0}, {0x7faa4a6238e0?, 0xc011b58138?}, 0x0?)
	github.com/golang/[email protected]/proto/wire.go:40 +0xa5
github.com/golang/protobuf/proto.Marshal(...)
	github.com/golang/[email protected]/proto/wire.go:23
google.golang.org/grpc/encoding/proto.codec.Marshal({}, {0x64c3e60, 0xc011b58138})
	google.golang.org/[email protected]/encoding/proto/proto.go:45 +0x4e
google.golang.org/grpc.encode({0x7faa4a7f6e88?, 0xb312e00?}, {0x64c3e60?, 0xc011b58138?})
	google.golang.org/[email protected]/rpc_util.go:594 +0x44
google.golang.org/grpc.prepareMsg({0x64c3e60?, 0xc011b58138?}, {0x7faa4a7f6e88?, 0xb312e00?}, {0x0, 0x0}, {0x744f500, 0xc0000faaf0})
	google.golang.org/[email protected]/stream.go:1692 +0xd2
google.golang.org/grpc.(*clientStream).SendMsg(0xc01827c360, {0x64c3e60?, 0xc011b58138})
	google.golang.org/[email protected]/stream.go:830 +0xfd
google.golang.org/grpc.invoke({0x745e5f0?, 0xc011627020?}, {0x694300b?, 0x4?}, {0x64c3e60, 0xc011b58138}, {0x64c3fa0, 0xc016bd6810}, 0x0?, {0xc01bb72440, ...})
	google.golang.org/[email protected]/call.go:70 +0xa8
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryClientInterceptor.func1({0x745e5f0, 0xc011626f60}, {0x694300b, 0x3f}, {0x64c3e60, 0xc011b58138}, {0x64c3fa0, 0xc016bd6810}, 0xc000c40380, 0x6a6f038, ...)
	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]/interceptor.go:105 +0x3e4
google.golang.org/grpc.(*ClientConn).Invoke(0xc000c40380?, {0x745e5f0?, 0xc011626f60?}, {0x694300b?, 0x3f?}, {0x64c3e60?, 0xc011b58138?}, {0x64c3fa0?, 0xc016bd6810?}, {0xc0003dded0, ...})
	google.golang.org/[email protected]/call.go:35 +0x223
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.(*metricsServiceClient).Export(0xc00013e458, {0x745e5f0, 0xc011626f60}, 0xc000a02e70?, {0xc0003dded0, 0x1, 0x1})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:272 +0xc9
go.opentelemetry.io/collector/pdata/pmetric/pmetricotlp.(*grpcClient).Export(0x49cd20?, {0x745e5f0?, 0xc011626f60?}, {0xc011626f30?}, {0xc0003dded0?, 0x0?, 0x0?})
	go.opentelemetry.io/collector/[email protected]/pmetric/pmetricotlp/grpc.go:50 +0x30
go.opentelemetry.io/collector/exporter/otlpexporter.(*exporter).pushMetrics(0xc00131c820, {0x745e5b8?, 0xc00f8ecd20?}, {0x745e5f0?})
	go.opentelemetry.io/collector/exporter/[email protected]/otlp.go:111 +0x69
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsRequest).Export(0x745e5f0?, {0x745e5b8?, 0xc00f8ecd20?})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/metrics.go:65 +0x34
go.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send(0xc001324840, {0x747d640, 0xc0123cf4a0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/common.go:203 +0x96
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send(0xc0005fd440, {0x747d640, 0xc0123cf4a0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:388 +0x58d
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send(0xc00131f668, {0x747d640, 0xc0123cf4a0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/metrics.go:133 +0x88
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1({0x747d640, 0xc0123cf4a0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:206 +0x39
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1()
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/internal/bounded_memory_queue.go:61 +0xb6
created by go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/internal/bounded_memory_queue.go:56 +0x45
panic: runtime error: index out of range [-1]

goroutine 125 [running]:
go.opentelemetry.io/collector/pdata/internal/data/protogen/metrics/v1.(*ResourceMetrics).MarshalToSizedBuffer(0xc01f3f3500, {0xc012466000, 0x5d4, 0x11790})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/metrics/v1/metrics.pb.go:2144 +0x1d6
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.(*ExportMetricsServiceRequest).MarshalToSizedBuffer(0xc019f38150, {0xc012466000, 0x11790, 0x11790})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:352 +0xac
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.(*ExportMetricsServiceRequest).Marshal(0xc01d6cc330?)
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:332 +0x56
google.golang.org/protobuf/internal/impl.legacyMarshal({{}, {0x74909f8, 0xc01d6cc330}, {0x0, 0x0, 0x0}, 0x0})
	google.golang.org/[email protected]/internal/impl/legacy_message.go:402 +0xa2
google.golang.org/protobuf/proto.MarshalOptions.marshal({{}, 0x28?, 0x0, 0x0}, {0x0, 0x0, 0x0}, {0x74909f8, 0xc01d6cc330})
	google.golang.org/[email protected]/proto/encode.go:166 +0x27b
google.golang.org/protobuf/proto.MarshalOptions.MarshalAppend({{}, 0x60?, 0x3e?, 0x4c?}, {0x0, 0x0, 0x0}, {0x7410c60?, 0xc01d6cc330?})
	google.golang.org/[email protected]/proto/encode.go:125 +0x79
github.com/golang/protobuf/proto.marshalAppend({0x0, 0x0, 0x0}, {0x7f836b82f3d0?, 0xc019f38150?}, 0x0?)
	github.com/golang/[email protected]/proto/wire.go:40 +0xa5
github.com/golang/protobuf/proto.Marshal(...)
	github.com/golang/[email protected]/proto/wire.go:23
google.golang.org/grpc/encoding/proto.codec.Marshal({}, {0x64c3e60, 0xc019f38150})
	google.golang.org/[email protected]/encoding/proto/proto.go:45 +0x4e
google.golang.org/grpc.encode({0x7f836bc31170?, 0xb312e00?}, {0x64c3e60?, 0xc019f38150?})
	google.golang.org/[email protected]/rpc_util.go:594 +0x44
google.golang.org/grpc.prepareMsg({0x64c3e60?, 0xc019f38150?}, {0x7f836bc31170?, 0xb312e00?}, {0x0, 0x0}, {0x744f500, 0xc000150aa0})
	google.golang.org/[email protected]/stream.go:1692 +0xd2
google.golang.org/grpc.(*clientStream).SendMsg(0xc0192d3200, {0x64c3e60?, 0xc019f38150})
	google.golang.org/[email protected]/stream.go:830 +0xfd
google.golang.org/grpc.invoke({0x745e5f0?, 0xc01bacc840?}, {0x694300b?, 0x4?}, {0x64c3e60, 0xc019f38150}, {0x64c3fa0, 0xc01c067aa0}, 0x0?, {0xc01db78340, ...})
	google.golang.org/[email protected]/call.go:70 +0xa8
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryClientInterceptor.func1({0x745e5f0, 0xc01bacc780}, {0x694300b, 0x3f}, {0x64c3e60, 0xc019f38150}, {0x64c3fa0, 0xc01c067aa0}, 0xc0002da380, 0x6a6f038, ...)
	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]/interceptor.go:105 +0x3e4
google.golang.org/grpc.(*ClientConn).Invoke(0xc0002da380?, {0x745e5f0?, 0xc01bacc780?}, {0x694300b?, 0x3f?}, {0x64c3e60?, 0xc019f38150?}, {0x64c3fa0?, 0xc01c067aa0?}, {0xc00081d0d0, ...})
	google.golang.org/[email protected]/call.go:35 +0x223
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.(*metricsServiceClient).Export(0xc000012fa8, {0x745e5f0, 0xc01bacc780}, 0xc001318750?, {0xc00081d0d0, 0x1, 0x1})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:272 +0xc9
go.opentelemetry.io/collector/pdata/pmetric/pmetricotlp.(*grpcClient).Export(0x49cd20?, {0x745e5f0?, 0xc01bacc780?}, {0xc01bacc750?}, {0xc00081d0d0?, 0x0?, 0x0?})
	go.opentelemetry.io/collector/[email protected]/pmetric/pmetricotlp/grpc.go:50 +0x30
go.opentelemetry.io/collector/exporter/otlpexporter.(*exporter).pushMetrics(0xc001193c20, {0x745e5b8?, 0xc01605af60?}, {0x745e5f0?})
	go.opentelemetry.io/collector/exporter/[email protected]/otlp.go:111 +0x69
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsRequest).Export(0x745e5f0?, {0x745e5b8?, 0xc01605af60?})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/metrics.go:65 +0x34
go.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send(0xc0008e0fd0, {0x747d640, 0xc01a46b4d0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/common.go:203 +0x96
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send(0xc001195170, {0x747d640, 0xc01a46b4d0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:388 +0x58d
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send(0xc000d33290, {0x747d640, 0xc01a46b4d0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/metrics.go:133 +0x88
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1({0x747d640, 0xc01a46b4d0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:206 +0x39
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1()
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/internal/bounded_memory_queue.go:61 +0xb6
created by go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/internal/bounded_memory_queue.go:56 +0x45

Steps to reproduce

I've only noticed this in the logs recently while trying to understand why I'm seeing duplicate logs being published via OTLP, so I do not have a reproduction or know what part of my configuration could be causing this.

What version did you use?
0.63.1

What config did you use?

exporters:
  logging:
    loglevel: warn
  otlp/newrelic:
    endpoint: ${NEW_RELIC_ENDPOINT}
    headers:
      api-key: ${NEW_RELIC_API_KEY}
  prometheus:
    endpoint: 0.0.0.0:8787
extensions:
  health_check: {}
  memory_ballast:
    size_mib: "204"
processors:
  batch:
    send_batch_size: 1000
  cumulativetodelta:
    include:
      match_type: strict
      metrics:
      - system.network.io
      - system.disk.operations
      - system.network.dropped
      - system.network.packets
      - process.cpu.time
  filter/dropMetrics:
    metrics:
      exclude:
        match_type: strict
        metric_names:
        - k8s.pod.filesystem.available
        - k8s.pod.filesystem.capacity
        - k8s.pod.filesystem.usage
  k8sattributes:
    extract:
      metadata:
      - k8s.pod.name
      - k8s.pod.uid
      - k8s.deployment.name
      - k8s.namespace.name
    filter:
      node_from_env_var: KUBE_NODE_NAME
    passthrough: false
    pod_association:
    - sources:
      - from: resource_attribute
        name: k8s.pod.uid
  memory_limiter:
    check_interval: 5s
    limit_mib: 768
    spike_limit_mib: 256
  resource:
    attributes:
    - action: insert
      key: k8s.cluster.name
      value: uksouth
    - action: upsert
      from_attribute: service.instance.id
      key: k8s.pod.uid
    - action: upsert
      key: host.id
      value: ${KUBE_NODE_NAME}
    - action: upsert
      key: host.name
      value: ${KUBE_NODE_NAME}
  resource/serviceFromK8S:
    attributes:
    - action: upsert
      from_attribute: k8s.deployment.name
      key: service.name
    - action: upsert
      from_attribute: k8s.namespace.name
      key: service.namespace
receivers:
  filelog:
    exclude:
    - /var/log/pods/apm_opentelemetry-collector*_*/opentelemetry-collector/*.log
    include:
    - /var/log/pods/apps*/*/*.log
    - /var/log/pods/kured*/*/*.log
    include_file_name: false
    include_file_path: true
    operators:
    - id: extract_metadata_from_filepath
      parse_from: attributes["log.file.path"]
      regex: ^.*/(?P<namespace_name>[^_]+)_(?P<pod_name>[^_]+)_(?P<pod_uid>[a-f0-9-]{36})/(?P<container_name>[^._]+)/\d+.log$
      type: regex_parser
    - id: parser-containerd
      parse_from: body
      regex: ^(?P<time>[^ ^Z]+Z) (stdout|stderr) ([^ ]*) ?(?P<log>.*)$
      timestamp:
        layout: '%Y-%m-%dT%H:%M:%S.%LZ'
        parse_from: attributes.time
      type: regex_parser
    - from: attributes.log
      id: move_log_to_body
      to: body
      type: move
    - from: attributes.container_name
      id: move-container-name
      to: resource["service.name"]
      type: move
    - from: attributes.namespace_name
      id: move-namespace
      to: resource["service.namespace"]
      type: move
    - from: attributes.pod_uid
      id: move-pod-uid-to-service-instance
      to: resource["service.instance.id"]
      type: move
    - id: parse-serilog-json
      if: body matches "^{.*@m.*}$"
      parse_from: body
      scope_name:
        parse_from: attributes.SourceContext
      severity:
        parse_from: attributes.@l
      timestamp:
        layout: ms
        layout_type: epoch
        parse_from: attributes.@t
      trace:
        span_id:
          parse_from: attributes.SpanId
        trace_id:
          parse_from: attributes.TraceId
      type: json_parser
    - from: attributes.@m
      id: move-message
      if: attributes["@m"] != nil
      to: body
      type: move
    start_at: beginning
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: "true"
      memory:
        metrics:
          system.memory.utilization:
            enabled: "true"
      network: null
  k8s_events: {}
  kubeletstats:
    auth_type: serviceAccount
    collection_interval: 30s
    endpoint: https://${KUBE_NODE_NAME}:10250
    insecure_skip_verify: true
    metric_groups:
    - pod
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      scrape_configs:
      - job_name: k8s
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - action: keep
          regex: "true"
          source_labels:
          - __meta_kubernetes_pod_annotation_prometheus_io_scrape
        - regex: (.*:)\d+;(\d+)
          replacement: $1$2
          source_labels:
          - __address__
          - __meta_kubernetes_pod_annotation_prometheus_io_port
          target_label: __address__
service:
  extensions:
  - health_check
  - memory_ballast
  pipelines:
    logs:
      exporters:
      - otlp/newrelic
      processors:
      - memory_limiter
      - batch
      - resource
      - k8sattributes
      - resource/serviceFromK8S
      receivers:
      - filelog
      - k8s_events
    metrics:
      exporters:
      - otlp/newrelic
      - prometheus
      processors:
      - memory_limiter
      - filter/dropMetrics
      - batch
      - resource
      - k8sattributes
      - resource/serviceFromK8S
      - cumulativetodelta
      receivers:
      - otlp
      - hostmetrics
      - kubeletstats
      - prometheus
    traces:
      exporters:
      - otlp/newrelic
      processors:
      - memory_limiter
      - batch
      - resource
      receivers:
      - otlp
  telemetry:
    metrics:
      address: 0.0.0.0:8888

Environment
Helm chart: 0.38.0
Azure Kubernetes Service: v1.22.11


@barclayadam barclayadam added the bug Something isn't working label Nov 25, 2022
@codeboten
Contributor

Thanks for the report @barclayadam, this appears to be a duplicate of open-telemetry/opentelemetry-collector#6420, which was fixed in v0.64.1. Can you test with that version and see if it solves your issue?

@barclayadam
Author

@codeboten I have upgraded to Helm chart 0.40.2, which uses the docker image otel/opentelemetry-collector-contrib:0.66.0, and I still see the same issue (stack trace included again in case it differs due to the new version):

panic: runtime error: index out of range [-2]

goroutine 156 [running]:
go.opentelemetry.io/collector/pdata/internal/data/protogen/metrics/v1.encodeVarintMetrics(...)
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/metrics/v1/metrics.pb.go:3266
go.opentelemetry.io/collector/pdata/internal/data/protogen/metrics/v1.(*Metric_Sum).MarshalToSizedBuffer(0x2?, {0xc01b49d405, 0x9c, 0x9f60ee?})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/metrics/v1/metrics.pb.go:2290 +0x110
go.opentelemetry.io/collector/pdata/internal/data/protogen/metrics/v1.(*Metric_Sum).MarshalTo(0x64c1e5?, {0xc01b49d405, 0xc01b49a000?, 0xb170})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/metrics/v1/metrics.pb.go:2278 +0x47
go.opentelemetry.io/collector/pdata/internal/data/protogen/metrics/v1.(*Metric).MarshalToSizedBuffer(0xc01b3e68c0, {0xc01b49a000, 0x34a1, 0xe575})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/metrics/v1/metrics.pb.go:2226 +0xa9
go.opentelemetry.io/collector/pdata/internal/data/protogen/metrics/v1.(*ScopeMetrics).MarshalToSizedBuffer(0xc01a1fb880, {0xc01b49a000, 0x34a1, 0xe575})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/metrics/v1/metrics.pb.go:2178 +0x23c
go.opentelemetry.io/collector/pdata/internal/data/protogen/metrics/v1.(*ResourceMetrics).MarshalToSizedBuffer(0xc01b3e8420, {0xc01b49a000, 0x34a1, 0xe575})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/metrics/v1/metrics.pb.go:2124 +0x25c
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.(*ExportMetricsServiceRequest).MarshalToSizedBuffer(0xc01adef2a8, {0xc01b49a000, 0xe575, 0xe575})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:352 +0xac
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.(*ExportMetricsServiceRequest).Marshal(0xc01afdc150?)
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:332 +0x56
google.golang.org/protobuf/internal/impl.legacyMarshal({{}, {0x7612630, 0xc01afdc150}, {0x0, 0x0, 0x0}, 0x0})
	google.golang.org/[email protected]/internal/impl/legacy_message.go:402 +0xa2
google.golang.org/protobuf/proto.MarshalOptions.marshal({{}, 0x28?, 0x0, 0x0}, {0x0, 0x0, 0x0}, {0x7612630, 0xc01afdc150})
	google.golang.org/[email protected]/proto/encode.go:166 +0x27b
google.golang.org/protobuf/proto.MarshalOptions.MarshalAppend({{}, 0x0?, 0x73?, 0x5f?}, {0x0, 0x0, 0x0}, {0x758e3a0?, 0xc01afdc150?})
	google.golang.org/[email protected]/proto/encode.go:125 +0x79
github.com/golang/protobuf/proto.marshalAppend({0x0, 0x0, 0x0}, {0x7f478fc0bd38?, 0xc01adef2a8?}, 0x50?)
	github.com/golang/[email protected]/proto/wire.go:40 +0xa5
github.com/golang/protobuf/proto.Marshal(...)
	github.com/golang/[email protected]/proto/wire.go:23
google.golang.org/grpc/encoding/proto.codec.Marshal({}, {0x65f7300, 0xc01adef2a8})
	google.golang.org/[email protected]/encoding/proto/proto.go:45 +0x4e
google.golang.org/grpc.encode({0x7f47904ea270?, 0xb5312a8?}, {0x65f7300?, 0xc01adef2a8?})
	google.golang.org/[email protected]/rpc_util.go:595 +0x44
google.golang.org/grpc.prepareMsg({0x65f7300?, 0xc01adef2a8?}, {0x7f47904ea270?, 0xb5312a8?}, {0x0, 0x0}, {0x75cdf50, 0xc0000fab40})
	google.golang.org/[email protected]/stream.go:1708 +0xd2
google.golang.org/grpc.(*clientStream).SendMsg(0xc01be00000, {0x65f7300?, 0xc01adef2a8})
	google.golang.org/[email protected]/stream.go:846 +0xfd
google.golang.org/grpc.invoke({0x75dea30?, 0xc01c55c480?}, {0x6a97018?, 0x4?}, {0x65f7300, 0xc01adef2a8}, {0x65f7440, 0xc01affa198}, 0x0?, {0xc01b2ea040, ...})
	google.golang.org/[email protected]/call.go:70 +0xa8
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryClientInterceptor.func1({0x75dea30, 0xc01c55c3c0}, {0x6a97018, 0x3f}, {0x65f7300, 0xc01adef2a8}, {0x65f7440, 0xc01affa198}, 0xc00057a700, 0x6bc5920, ...)
	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]/interceptor.go:105 +0x3e4
google.golang.org/grpc.(*ClientConn).Invoke(0xc00057a700?, {0x75dea30?, 0xc01c55c3c0?}, {0x6a97018?, 0x3f?}, {0x65f7300?, 0xc01adef2a8?}, {0x65f7440?, 0xc01affa198?}, {0xc00093c520, ...})
	google.golang.org/[email protected]/call.go:35 +0x223
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1.(*metricsServiceClient).Export(0xc0000139e0, {0x75dea30, 0xc01c55c3c0}, 0xc00164a4e0?, {0xc00093c520, 0x1, 0x1})
	go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:272 +0xc9
go.opentelemetry.io/collector/pdata/pmetric/pmetricotlp.(*grpcClient).Export(0x49cd40?, {0x75dea30?, 0xc01c55c3c0?}, {0xc01c55c390?}, {0xc00093c520?, 0x0?, 0x0?})
	go.opentelemetry.io/collector/[email protected]/pmetric/pmetricotlp/grpc.go:47 +0x30
go.opentelemetry.io/collector/exporter/otlpexporter.(*exporter).pushMetrics(0xc000d68000, {0x75de9f8?, 0xc01b2ec060?}, {0x75dea30?})
	go.opentelemetry.io/collector/exporter/[email protected]/otlp.go:104 +0x69
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsRequest).Export(0x75dea30?, {0x75de9f8?, 0xc01b2ec060?})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/metrics.go:64 +0x34
go.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send(0xc000d501c0, {0x75fe998, 0xc01be53bc0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/common.go:207 +0x96
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send(0xc000c6a2d0, {0x75fe998, 0xc01be53bc0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:387 +0x58d
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send(0xc000d58138, {0x75fe998, 0xc01be53bc0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/metrics.go:135 +0x88
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1({0x75fe998, 0xc01be53bc0})
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:205 +0x39
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1()
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/internal/bounded_memory_queue.go:61 +0xb6
created by go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers
	go.opentelemetry.io/[email protected]/exporter/exporterhelper/internal/bounded_memory_queue.go:56 +0x45

@bogdandrutu
Member

@codeboten this time the culprit seems to be a metrics component (before it happened on traces). My guess based on the config is the prometheus exporter (have not checked yet).
