
[exporter/awsemf] Exporter still ignores the first batch of a metric sent to CloudWatch #1991

Closed
mizzzto opened this issue Apr 25, 2023 · 3 comments
Labels
cloudwatch (CloudWatch related issues), metrics (Metrics related issue)

Comments

mizzzto commented Apr 25, 2023

Describe the bug
This is a follow-up to #1653

After PR open-telemetry/opentelemetry-collector-contrib#17988 was merged and released, I started using the new retain_initial_value_of_delta_metric config option, but it did not change the exported metrics: the first batch was still ignored and my counters showed 0 instead of the true value.
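To make the reported behaviour concrete, here is a small illustrative sketch (not the awsemf exporter's actual code; publishDeltas is a hypothetical name): an exporter that keeps per-series state may use the first delta batch of a series only to initialise that state, publishing 0 for it unless a retain option is set.

```javascript
// Illustrative sketch of the symptom in this issue, NOT the exporter's real code.
// An exporter tracking per-series state can swallow the first delta batch.
function publishDeltas(batches, retainInitialValue = false) {
  let seenSeries = false;
  const reported = [];
  for (const delta of batches) {
    if (!seenSeries && !retainInitialValue) {
      reported.push(0); // first batch only initialises the series state
    } else {
      reported.push(delta);
    }
    seenSeries = true;
  }
  return reported;
}

// Two batches of two increments each, as in the reproduction script:
console.log(publishDeltas([2, 2]));       // [ 0, 2 ] -> first batch lost
console.log(publishDeltas([2, 2], true)); // [ 2, 2 ] -> both batches kept
```

The expectation in this issue is that retain_initial_value_of_delta_metric switches the exporter from the first behaviour to the second.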

Steps to reproduce
I copied the steps from #1653:

  1. The code is checked in at https://github.com/mircohaug/otel-awsemf-reproduction-ignored-first-batch
  2. Open otel-agent-config.yaml and populate it with the config below (note the new retain_initial_value_of_delta_metric option):
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  awsemf:
    resource_to_telemetry_conversion:
      enabled: true
    log_group_name: "emfbug-reproduction-embedded-metrics-otel"
    log_stream_name: "otel-stream"
    namespace: "your-metric-namespace"
    retain_initial_value_of_delta_metric: true
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [awsemf]
  3. Authenticate your shell against an AWS account with the default profile. (Or change the value of the AWS_PROFILE env var in step 5.)
  4. Create the log group emfbug-reproduction-embedded-metrics-otel by running aws logs create-log-group --log-group-name emfbug-reproduction-embedded-metrics-otel
  5. Start the otel agent by running
docker run -d --rm -p 4317:4317 \
-e AWS_REGION=eu-central-1 \
-e AWS_PROFILE=default \
-v ~/.aws:/root/.aws \
-v "$(pwd)/otel-agent-config.yaml":/otel-local-config.yaml \
--name awscollector \
public.ecr.aws/aws-observability/aws-otel-collector:latest \
--config otel-local-config.yaml;
  6. Run yarn install
  7. Create metrics by running yarn start
    i. In the code we create a new OTEL counter and add 1 to it four times in total, split into two batches of two increments each. Between the batches there is a wait time to allow the OTEL agent to flush the values to AWS.
  8. Run this Logs Insights query (fields counter_name, @timestamp) on the log group emfbug-reproduction-embedded-metrics-otel to see the published EMF metrics, and compare the actual and expected results.
  9. Clean up by running docker rm -f awscollector and aws logs delete-log-group --log-group-name emfbug-reproduction-embedded-metrics-otel

What did you expect to see?
We expect the log group to contain two entries, each with a value of 2: one entry per batch.

What did you see instead?
We only get one entry with a value of 2; the first batch has its value set to 0.

Additional context

  • The behaviour persists over multiple runs of the script.
  • It also happens for each new combination of counter name and attributes.
  • To rule out a faulty implementation in the OTEL framework, we also added a ConsoleMetricExporter alongside the exporter that sends the metrics to the OTEL agent. This exporter prints the correct values to the console.
  • In addition, we added a file exporter to the OTEL agent pipeline. It also shows the correct values.
  • Small waits between the batches lead to all four increments ending up in the same batch, and we see no value in CloudWatch whatsoever.
  • (Attached screenshot: faulty_metrics)
bryan-aguilar (Contributor) commented

> create metrics by running yarn start
> i. In the code we create a new OTEL counter. We add 1 to the counter four times in total. We split these four increments into two batches with two increments each. Between these batches there is a wait time to allow the OTEL agent to flush the values to AWS.

Is the link valid here?

Can you add the logging exporter with detailed verbosity to the collector config? Then post the collector logs.
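A sketch of what that could look like in the otel-agent-config.yaml from the reproduction (a hypothetical extension of the config above; the logging exporter with verbosity: detailed is as documented for collector releases of that era, and field names may differ in newer versions):

```yaml
exporters:
  awsemf:
    # ... existing awsemf settings from the repro config ...
  logging:
    verbosity: detailed
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [awsemf, logging]
```

With the container started as in the repro steps, the collector's output can then be read with docker logs awscollector.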


mizzzto commented Apr 25, 2023

> create metrics by running yarn start
> i. In the code we create a new OTEL counter. We add 1 to the counter four times in total. We split these four increments into two batches with two increments each. Between these batches there is a wait time to allow the OTEL agent to flush the values to AWS.
>
> Is the link valid here?

Sorry, it's fixed now.

> Can you add the logging exporter with detailed verbosity to the collector config? Then post the collector logs.

I'm not very experienced with this, so can you explain what you mean? How do I add the logging exporter, and where would those logs end up?


mizzzto commented May 4, 2023

It turns out the way I initialise my application was missing another delta-related config option (on the OTLPExporter), so this issue is probably not valid and can be closed.
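For anyone hitting the same symptom: the issue doesn't name the exact option, but in OpenTelemetry JS the relevant SDK setting is the exporter's aggregation temporality preference. A hedged sketch, assuming @opentelemetry/sdk-metrics and @opentelemetry/exporter-metrics-otlp-grpc at their 2023-era APIs (this is not the exact code from this issue):

```javascript
// Hypothetical sketch: configure the app's OTLP metric exporter to emit
// DELTA temporality so the collector's awsemf exporter receives true deltas.
const {
  AggregationTemporality,
  MeterProvider,
  PeriodicExportingMetricReader,
} = require('@opentelemetry/sdk-metrics');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-grpc');

const exporter = new OTLPMetricExporter({
  // The kind of option the author describes as having been missing:
  temporalityPreference: AggregationTemporality.DELTA,
});

const meterProvider = new MeterProvider();
meterProvider.addMetricReader(
  new PeriodicExportingMetricReader({ exporter, exportIntervalMillis: 5000 })
);
```

Without a delta preference, the SDK defaults to cumulative temporality, which changes what the awsemf exporter sees for the first batch.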

@mizzzto mizzzto closed this as completed May 4, 2023