Is your feature request related to a problem? Please describe.
When using Gunicorn as a Python server, it forks many workers, and each worker emits metrics asynchronously.
If worker.pid is put in the resource attributes, the metrics have very high cardinality.
I tried giving this server an OTel sidecar that applies a batch processor first, with the timeout set to 1 min.
Within that 1-minute interval, I receive each worker's exclusive metrics.
If I send those metrics directly to New Relic/Mimir, I get out-of-order errors.
Below is the data model:
Let's say we have two workers, and we received the following twice, with different start timestamps and timestamps.
Resource SchemaURL:
Resource attributes:
-> service.name: Str(services)
ScopeMetrics #0
Metric #0
Descriptor:
-> Name: http.server.duration
StartTimestamp: 2024-03-07 21:52:31.483597198 +0000 UTC
Timestamp: 2024-03-07 21:53:29.399488457 +0000 UTC
...Some Data Point...
Resource SchemaURL:
Resource attributes:
-> service.name: Str(services)
ScopeMetrics #0
Metric #0
Descriptor:
-> Name: http.server.duration
StartTimestamp: 2024-03-07 21:52:31.406403762 +0000 UTC
Timestamp: 2024-03-07 21:53:29.44117394 +0000 UTC
...Some Data Point...
Describe the solution you'd like
I would like to use metricstransform's combine action to merge those data points under one metric named http.server.duration.
I tried the aggregate_label_values operation of the metricstransform processor, but it doesn't work, because it only operates on data points of the same metric.
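For reference, this is roughly the combine transform I have in mind (a sketch only; the regexp is an assumption, and combine currently only merges metrics within a single resource, which is exactly the limitation this request is about):

```yaml
processors:
  metricstransform:
    transforms:
      # Match the metric by regexp and combine all matches into one metric.
      - include: ^http\.server\.duration$
        match_type: regexp
        action: combine
        new_name: http.server.duration
```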
Describe alternatives you've considered
No response
Additional context
No response
I believe this is explicitly disallowed as a result of the collector's design and the OTLP data model. Here's another issue for context. Also, here's the OTLP data model document that outlines the design.
If you simply want to modify resource attribute values, such as service.name in your example, you can use the resource or transform processor.
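For example, a minimal resource processor sketch that drops the high-cardinality worker.pid resource attribute (assuming worker.pid is the attribute name used):

```yaml
processors:
  resource:
    attributes:
      # Delete the per-worker attribute so workers share one resource identity.
      - key: worker.pid
        action: delete
```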
I was able to aggregate metrics across different resources by chaining the batch -> groupbyattrs -> metricstransform/aggregate_labels processors.
I think the same approach will work for merging metrics.
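A sketch of that processor chain (the timeout, label_set, and aggregation_type values here are assumptions to be adapted; groupbyattrs with no keys compacts records that share identical resource attributes):

```yaml
processors:
  batch:
    timeout: 60s
  groupbyattrs: {}   # no keys: merge records whose resource attributes are identical
  metricstransform/aggregate_labels:
    transforms:
      - include: http.server.duration
        action: update
        operations:
          - action: aggregate_labels
            label_set: [http.method, http.status_code]  # labels to keep (assumed)
            aggregation_type: sum

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch, groupbyattrs, metricstransform/aggregate_labels]
      exporters: [otlp]
```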
Component(s)
processor/metricstransform