
Send only when metrics differ from t-1 #22100

Closed
garry-cairns opened this issue May 19, 2023 · 8 comments

@garry-cairns
Contributor

Component(s)

connector/spanmetrics

Is your feature request related to a problem? Please describe.

Persistent back ends waste storage when collectors periodically send spanmetrics that have not changed since the last export.

Additionally, there's a reported "bug" (not really a bug if it's an instance of what I describe) for which I've suggested a workaround: in setups where multiple hosts may run the same service at different times, the collectors on those hosts can produce odd-looking metrics. My fix above works for this, but at the expense of additional storage for persistent back ends such as InfluxDB. Only sending metrics when they differ from t-1 would resolve this too.

Describe the solution you'd like

If I have spanmetrics with a count of 1 at t, and at t+1 the count is still 1 (cumulative) or the delta is 0, those metrics should not be exported.
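
A minimal sketch of that rule, using simplified stand-in types rather than the connector's actual pdata types (the names `shouldExport`, `seriesKey` and `lastSent` are invented for illustration):

```go
package main

import "fmt"

// Temporality mirrors the two aggregation temporalities the connector supports.
type Temporality int

const (
	Delta Temporality = iota
	Cumulative
)

// dataPoint is a simplified stand-in for a spanmetrics count data point.
type dataPoint struct {
	seriesKey string // e.g. service name + span name + status code
	count     int64
}

// shouldExport applies the proposed rule: drop zero deltas, and drop
// cumulative points whose count matches the value sent at t-1.
// lastSent holds one integer per unique series for the cumulative case.
func shouldExport(dp dataPoint, temp Temporality, lastSent map[string]int64) bool {
	switch temp {
	case Delta:
		return dp.count != 0 // a delta of 0 adds no information
	case Cumulative:
		prev, seen := lastSent[dp.seriesKey]
		if seen && prev == dp.count {
			return false // unchanged since the last export, skip it
		}
		lastSent[dp.seriesKey] = dp.count
		return true
	}
	return true
}

func main() {
	lastSent := map[string]int64{}
	fmt.Println(shouldExport(dataPoint{"svc-a|GET /users|OK", 1}, Cumulative, lastSent)) // true: first observation at t
	fmt.Println(shouldExport(dataPoint{"svc-a|GET /users|OK", 1}, Cumulative, lastSent)) // false: still 1 at t+1
	fmt.Println(shouldExport(dataPoint{"svc-a|GET /users|OK", 0}, Delta, lastSent))      // false: delta of 0
}
```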

Describe alternatives you've considered

The alternative is to solve this at the exporter, or even the storage layer, but that means maintaining n solutions instead of one.

Additional context

No response

@garry-cairns garry-cairns added the enhancement and needs triage labels on May 19, 2023
@github-actions
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@albertteoh
Contributor

If I understand correctly, Prometheus components can mark a metric as stale if no new data points are received for a configured period of time, and handle this accordingly when writing to, or responding to scrapes from, Prometheus.

I'm wondering whether, if we implement this enhancement, we would send the wrong signal to metrics consumers that a metric is stale (i.e. a failed scrape), when in fact it's still an "active" metric, just unchanged. In particular, some Prometheus functions explicitly handle stale values: https://www.robustperception.io/staleness-and-promql/.

@garry-cairns
Contributor Author

@albertteoh would you consider making the behaviour configurable? Prometheus is stateless in a lot of configurations, but other backends aren't, and I think persistent backends in particular might benefit from this.
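
Purely as a sketch of what "configurable" could look like, it might be a single boolean on the connector's config struct; the field and option name below are hypothetical, not an existing spanmetrics setting:

```go
// Sketch only: this field does not exist in the spanmetrics connector today.
type Config struct {
	// ExcludeUnchangedMetrics, when true, would suppress exporting data points
	// whose cumulative value is unchanged since the previous export, or whose
	// delta is zero.
	ExcludeUnchangedMetrics bool `mapstructure:"exclude_unchanged_metrics"`
}
```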

@albertteoh
Contributor

Yup, I think making this configurable helps mitigate any potential negative impact on both consumers of the metrics and the collector itself (increased memory pressure and CPU load).

However, I'd like to be convinced that this will be useful to the majority of users of this component before taking on the effort of introducing it into the codebase, which I think would not be trivial, particularly if it's configurable.

@kovrus do you have any thoughts on this?

@garry-cairns
Contributor Author

garry-cairns commented May 26, 2023

@albertteoh makes sense. It's also worth mentioning that part of the rationale for making spanmetrics a connector was that the processor was thought to be too tightly coupled to the Prometheus implementation. At the risk of solutionizing, I think memory pressure could be kept down: it would be effectively nil for DELTA (if the delta is 0, don't send) and one integer per unique service name at any given time for CUMULATIVE, which I can't imagine ever becoming pathological.

Happy to send an initial proposed implementation for review if that's helpful, though I've not written much Go before so it might not be worth your while. Let me know your preference.

@albertteoh
Contributor

A PR would be much appreciated.

@djaglowski djaglowski removed the needs triage label on Jun 28, 2023
@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Aug 28, 2023
@github-actions
Contributor

This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions github-actions bot closed this as not planned (stale) on Oct 27, 2023