
Merge metrics from different resources export #31654

Closed
changliu-wk opened this issue Mar 8, 2024 · 3 comments
Labels: question (Further information is requested)

@changliu-wk
Component(s)

processor/metricstransform

Is your feature request related to a problem? Please describe.

When using Gunicorn as a Python server, it forks many workers, and each worker emits metrics asynchronously.
If worker.pid is put in the resource attributes, the metrics have very high cardinality.
I tried running an OTel sidecar for this server and using the batch processor first, with the timeout set to 1 min (a sketch of this sidecar pipeline is shown below).
Within that 1-minute interval, I receive each worker's own metrics separately.
If I send those metrics directly to New Relic/Mimir, I get out-of-order errors.
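A minimal sketch of the sidecar pipeline described above, assuming an OTLP receiver and exporter (the endpoint and component names are placeholders, not the actual setup):

receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:
    timeout: 60s                              # collect one minute of metrics from all Gunicorn workers

exporters:
  otlphttp:
    endpoint: https://backend.example.com     # placeholder for the New Relic/Mimir endpoint

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]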
Below is the data model:
Let's say we have two workers, and we received the following twice, with different start timestamps and timestamps.

Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(services)
ScopeMetrics #0
Metric #0
Descriptor:
     -> Name: http.server.duration
StartTimestamp: 2024-03-07 21:52:31.483597198 +0000 UTC
Timestamp: 2024-03-07 21:53:29.399488457 +0000 UTC
...Some Data Point...
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(services)
ScopeMetrics #0
Metric #0
Descriptor:
     -> Name: http.server.duration
StartTimestamp: 2024-03-07 21:52:31.406403762 +0000 UTC
Timestamp: 2024-03-07 21:53:29.44117394 +0000 UTC
...Some Data Point...

Describe the solution you'd like

I would like to use the metricstransform processor's combine action to merge those data points under one metric named http.server.duration (roughly as sketched below).
I tried aggregate_label_values with the metricstransform processor, but it doesn't work, because it only applies to data points of the same metric.
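A rough sketch of the kind of transform intended here, assuming the combine action (the regexp and new_name are placeholders):

processors:
  metricstransform:
    transforms:
      - include: ^http\.server\.duration$
        match_type: regexp
        action: combine
        new_name: http.server.duration        # merge all matched metrics into one metric with this name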

Describe alternatives you've considered

No response

Additional context

No response

@changliu-wk changliu-wk added enhancement New feature or request needs triage New item requiring triage labels Mar 8, 2024
@github-actions github-actions bot added the processor/metricstransform Metrics Transform processor label Mar 8, 2024

github-actions bot commented Mar 8, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@crobert-1 (Member)

I believe this is explicitly disallowed as a result of the collector's design and the OTLP data model. Here's another issue for context, and here's the OTLP data model document that outlines the design.

If you simply want to modify resource attribute values, such as service.name in your example, you can use the resource or transform processor (a minimal sketch follows).
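For instance, with the resource processor (the value shown is just an illustration):

processors:
  resource:
    attributes:
      - key: service.name
        value: services          # illustrative value
        action: upsert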

@crobert-1 crobert-1 added question Further information is requested and removed enhancement New feature or request processor/metricstransform Metrics Transform processor needs triage New item requiring triage labels Mar 12, 2024

yuri-rs commented Mar 16, 2024

I was able to aggregate metrics from different resources by chaining the batch -> groupbyattrs -> metricstransform/aggregate_labels processors (a sketch is below).
I think the same approach will work for this metric merge.
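A minimal sketch of that chain, assuming an OTLP receiver/exporter and the http.server.duration metric from above (the empty label_set and the sum aggregation are assumptions):

processors:
  batch:
    timeout: 60s
  groupbyattrs: {}                             # with no keys, data points from identical resources are regrouped together
  metricstransform/aggregate_labels:
    transforms:
      - include: http.server.duration
        action: update
        operations:
          - action: aggregate_labels
            label_set: []                      # keep no labels, i.e. merge all data points of the metric
            aggregation_type: sum

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch, groupbyattrs, metricstransform/aggregate_labels]
      exporters: [otlphttp]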
