spanmetricsprocessor doesn't prune histograms when metric cache is pruned #27080
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Took a crack at a PR #27083
Hello @nijave, I can confirm the issue in my environment; you can see it in the metrics exposed. There are two details about your issue that don't match the environment I have: spanmetricsconnector and v0.85.0. Could you consider using those in your work?
My current config:

receivers:
otlp:
protocols:
grpc:
exporters:
prometheus:
endpoint: 0.0.0.0:8889
metric_expiration: 60s
connectors:
spanmetrics:
histogram:
unit: "ms"
explicit:
buckets: []
metrics_flush_interval: 15s
dimensions:
- name: build_name
- name: build_number
exclude_dimensions:
- span.kind
dimensions_cache_size: 100
processors:
batch:
attributes/spanmetrics:
actions:
- action: extract
key: host.name
pattern: ^(?P<kubernetes_cluster>.+)-jenkins-(?P<organization>tantofaz|whatever-org)-(?P<build_name>.+)-(?P<build_number>[0-9]+)(?P<build_id>(?:-[^-]+){2}|--.*?)$
filter/spanmetrics:
error_mode: ignore
metrics:
metric:
- 'resource.attributes["service.name"] != "jenkins"'
service:
pipelines:
traces:
receivers: [otlp]
processors: [attributes/spanmetrics, batch]
exporters: [spanmetrics]
metrics:
receivers: [spanmetrics]
processors: [filter/spanmetrics]
exporters: [prometheus]
This issue should be considered critical. Based on this article, Prometheus only supports cumulative metrics, so I cannot use delta metrics to avoid the issue. If there is a workaround, please let me know; AFAIK none exists, which is what makes this critical.
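For what it's worth, the spanmetrics connector does expose an `aggregation_temporality` setting; a minimal sketch of switching it to delta is below, though as noted above this only helps with backends that accept deltas, not the Prometheus exporter:

```yaml
connectors:
  spanmetrics:
    # Delta temporality resets state each flush, sidestepping unbounded
    # cumulative series - but Prometheus only ingests cumulative data.
    aggregation_temporality: "AGGREGATION_TEMPORALITY_DELTA"
```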
Prune histograms when entries are evicted from the dimension cache

**Description:** Prunes histograms when the dimension cache is pruned. This prevents metric series from growing indefinitely.

**Link to tracking Issue:** #27080

**Testing:** I modified the existing test to check `histograms` length instead of dimensions cache length. This required simulating ticks to hit the exportMetrics function.

Co-authored-by: Sean Marciniak <[email protected]>
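The testing note above mentions simulating ticks to reach exportMetrics. A common way to make such a flush loop deterministic in a test is to inject the tick channel instead of using a real time.Ticker; the sketch below illustrates that pattern with illustrative names (`exporter`, `exportLoop`), not the processor's actual types:

```go
package main

import (
	"fmt"
	"time"
)

// exporter stands in for the processor under test: it holds per-series
// histogram state and flushes it whenever a tick arrives.
type exporter struct {
	histograms map[string]int
}

// exportLoop runs exportMetrics once per received tick. In production
// the channel would come from a time.Ticker; a test feeds it by hand so
// the flush happens deterministically.
func (e *exporter) exportLoop(ticks <-chan time.Time, done chan<- struct{}) {
	for range ticks {
		e.exportMetrics()
	}
	close(done)
}

func (e *exporter) exportMetrics() {
	fmt.Printf("exporting %d histogram series\n", len(e.histograms))
}

func main() {
	e := &exporter{histograms: map[string]int{"svc-a": 1, "svc-b": 1}}
	ticks := make(chan time.Time)
	done := make(chan struct{})
	go e.exportLoop(ticks, done)

	ticks <- time.Now() // simulate one tick, as the modified test does
	close(ticks)
	<-done
}
```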
Component(s)
processor/spanmetrics
What happened?
Description
The spanmetrics processor doesn't drop old histograms.
Graphs in grafana/agent#5271
Steps to Reproduce
Leave the collector running a while and watch the exported metric count grow indefinitely.
Expected Result
Metric series should be pruned if they haven't been updated in a while.
Actual Result
The metric series dimension cache is pruned, but the histograms are not.
Collector version
v0.80.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
Configuration is automatically generated by Grafana Agent. See https://github.com/grafana/agent/blob/main/pkg/traces/config.go#L647
Log output
Additional context
It looks like the `histograms` map should have been pruned/LRU'd in addition to `metricsKeyToDimensions` (#2179). I think #17306 (comment) is the same/similar issue, but it's closed, so I figured I'd collect everything into a bug report.
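The fix this implies is to tie the lifetime of each histogram to its entry in the dimensions cache. A minimal sketch of that technique, using hashicorp/golang-lru's eviction callback and simplified stand-in state rather than the actual spanmetrics code:

```go
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru"
)

func main() {
	// histograms stands in for the connector's per-series state, keyed
	// the same way as the dimensions cache. (Illustrative names only.)
	histograms := map[string][]float64{}

	// NewWithEvict lets the dimensions cache delete the matching
	// histogram whenever it evicts a key, so the two structures cannot
	// drift apart and series stop growing without bound.
	cache, err := lru.NewWithEvict(2, func(key, _ interface{}) {
		delete(histograms, key.(string))
	})
	if err != nil {
		panic(err)
	}

	for _, k := range []string{"svc-a", "svc-b", "svc-c"} {
		cache.Add(k, struct{}{})
		histograms[k] = append(histograms[k], 1.0)
	}

	// Capacity is 2, so "svc-a" was evicted and its histogram removed.
	fmt.Println(len(histograms)) // 2
}
```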