[Metric] use Float64Counter to get wrong metric data #914

Closed

gunsluo opened this issue Jul 7, 2020 · 5 comments
gunsluo commented Jul 7, 2020

Describe the bug
Using Float64Counter produces wrong metric data.

The code:

meterPusher = push.New(
    simple.NewWithExactDistribution(),
    exporter,
    push.WithPeriod(30*time.Second),
    //push.WithTimeout(10*time.Second),
)
meterProvider = meterPusher.Provider()
meter := meterProvider.Meter("name")

accountReadCounter := meter.NewFloat64Counter("account.read", metric.WithDescription("record number of reading account"))

// ...

accountReadCounter.Add(ctx, 1)
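
For context, a rough self-contained sketch of the same setup is below. The import paths, the export.Exporter parameter type, the error-returning NewFloat64Counter, and the no-argument Start/Stop are assumptions about the 0.7.0 API, and construction of the OTLP exporter itself is omitted.

package main

import (
    "context"
    "time"

    "go.opentelemetry.io/otel/api/metric"
    export "go.opentelemetry.io/otel/sdk/export/metric"
    "go.opentelemetry.io/otel/sdk/metric/controller/push"
    "go.opentelemetry.io/otel/sdk/metric/selector/simple"
)

// run wires the push controller and the counter together; the OTLP exporter
// is assumed to be constructed elsewhere and passed in.
func run(ctx context.Context, exporter export.Exporter) error {
    meterPusher := push.New(
        simple.NewWithExactDistribution(),
        exporter,
        push.WithPeriod(30*time.Second),
    )
    meterPusher.Start()
    defer meterPusher.Stop()

    meter := meterPusher.Provider().Meter("name")

    accountReadCounter, err := meter.NewFloat64Counter(
        "account.read",
        metric.WithDescription("record number of reading account"),
    )
    if err != nil {
        return err
    }

    // Each Add should only ever move the exported cumulative value upward.
    accountReadCounter.Add(ctx, 1)
    return nil
}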

The counter metric's value should not decrease.

[screenshot of the exported counter values]

What did you expect to see?
The value of the counter keeps increasing over time.

What version did you use?
Version:
collector: 0.5.0
otel: 0.7.0

What config did you use?
collector Config:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:55679"

processors:
  batch:
  queued_retry:

extensions:
  health_check: {}

exporters:
  jaeger:
    endpoint: "jaeger:14250"
    insecure: true
  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: example
    const_labels:
      label1: test
  logging:
    loglevel: debug

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, queued_retry]
      exporters: [jaeger]

    metrics:
      receivers: [otlp]
      exporters: [prometheus, logging]

Environment
OS: macOS
Compiler: go version go1.14.2 darwin/amd64

Additional context
It works fine with collector 0.3.0 and otel 0.6.0.

gunsluo commented Jul 7, 2020

Hi, @nilebox

I found that the SDK resets the value of the counter when a record's updateCount does not equal its collectedCount, and collectedCount is never modified anywhere in the code, so the counter's value gets reset.

https://github.com/open-telemetry/opentelemetry-go/blob/master/sdk/metric/sdk.go#L376

func (m *Accumulator) checkpointRecord(r *record) int {

Is it a bug?
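
For reference, a simplified illustration of the compare-and-checkpoint pattern described above; this is an illustrative sketch, not the actual sdk.go source:

package main

import (
    "fmt"
    "sync/atomic"
)

// record is a simplified stand-in for the SDK's per-instrument record:
// updateCount is bumped on every Add(), collectedCount remembers the
// updateCount seen at the previous collection.
type record struct {
    updateCount    int64
    collectedCount int64
}

// add simulates an instrument update.
func (r *record) add() {
    atomic.AddInt64(&r.updateCount, 1)
}

// checkpoint reports whether anything new has to be exported. If
// collectedCount were never advanced, every collection pass would see
// "new" data and re-checkpoint (and reset) the aggregator.
func (r *record) checkpoint() bool {
    mods := atomic.LoadInt64(&r.updateCount)
    if mods == r.collectedCount {
        return false // nothing recorded since the last collection
    }
    // ... snapshot / reset the aggregator here ...
    r.collectedCount = mods
    return true
}

func main() {
    r := &record{}
    r.add()
    fmt.Println(r.checkpoint()) // true: one update since the last collection
    fmt.Println(r.checkpoint()) // false: no further updates
}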

Aneurysm9 commented:

@jmacd is this related to #887/#903?

jmacd commented Jul 8, 2020

I am a bit confused. From the report, it looks more like version 0.6 is still being used. The 0.7 release added metrics processor logic to sum the individual counts, and the problems in #887 and #903 would have produced different odd behavior. Is there any chance this is a 0.6 otel library with a 0.4 collector? Otherwise, it's an unusual report and I would like to continue investigating.
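
For reference, one way to confirm which otel release the application is actually built against is to inspect its go.mod or run, inside the application module:

    go list -m go.opentelemetry.io/otel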

jmacd commented Jul 8, 2020

This is a case of open-telemetry/opentelemetry-collector#1255. The OTLP-to-Prometheus functionality is broken in the 0.4 collector release; we will disable the OTLP receiver in the 0.5 release and get this fixed in 0.6, I hope.

jmacd commented Jul 16, 2020

Closing this as it's covered in collector issue 1255.

@jmacd jmacd closed this as completed Jul 16, 2020
@pellared pellared added this to the untracked milestone Nov 8, 2024