Support for metrics aggregation #190

Open
andresmargalef opened this issue Jan 30, 2024 · 3 comments

Comments

@andresmargalef

I see no aggregation support. In some clients, like this one in Go, an in-memory aggregator is convenient because it can reduce UDP traffic: for example, multiple increments/counts of the same metric can be reduced to a single UDP send.
We do the same thing in other languages, and the performance boost for applications with heavy metric usage is significant.
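To make the idea concrete, here is a minimal, hypothetical sketch (not the Go client's or Cadence's actual API): repeated increments of the same key are summed locally, and only the totals are turned into statsd lines at flush time, so thousands of increments can go out in a single UDP payload.

```rust
use std::collections::HashMap;

/// Hypothetical in-memory counter aggregator (illustration only).
struct CounterAggregator {
    counts: HashMap<String, i64>,
}

impl CounterAggregator {
    fn new() -> Self {
        Self { counts: HashMap::new() }
    }

    /// Record an increment locally; nothing is written to the network yet.
    fn incr(&mut self, key: &str, value: i64) {
        *self.counts.entry(key.to_string()).or_insert(0) += value;
    }

    /// Drain the aggregated counts into statsd lines, one per metric,
    /// ready to be sent in a single UDP write.
    fn flush(&mut self) -> Vec<String> {
        self.counts
            .drain()
            .map(|(key, total)| format!("{}:{}|c", key, total))
            .collect()
    }
}

fn main() {
    let mut agg = CounterAggregator::new();
    // 10,000 increments of the same counter...
    for _ in 0..10_000 {
        agg.incr("requests.handled", 1);
    }
    // ...collapse to a single "requests.handled:10000|c" line at flush time.
    println!("{:?}", agg.flush());
}
```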

@56quarters
Owner

Cadence has a buffered backend that combines multiple metrics into a single UDP (or UNIX socket) write. Will this work for your use case?
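For reference, the buffered setup looks roughly like the following sketch (based on my reading of the Cadence docs; check the crate documentation for the exact signatures). Metrics are buffered and written to the socket in batches rather than one datagram per metric, and the queuing sink keeps metric calls from blocking on I/O:

```rust
use std::net::UdpSocket;

use cadence::prelude::*;
use cadence::{BufferedUdpMetricSink, QueuingMetricSink, StatsdClient, DEFAULT_PORT};

fn main() {
    // Bind a local UDP socket for sending metrics.
    let socket = UdpSocket::bind("0.0.0.0:0").unwrap();
    socket.set_nonblocking(true).unwrap();

    // Buffer multiple metrics and flush them as fewer, larger UDP writes.
    // Hostname is a placeholder for this example.
    let host = ("metrics.example.com", DEFAULT_PORT);
    let buffered = BufferedUdpMetricSink::from(host, socket).unwrap();
    let queuing = QueuingMetricSink::from(buffered);
    let client = StatsdClient::from_sink("my.prefix", queuing);

    // These calls are buffered; they do not each produce a UDP packet,
    // but each metric is still emitted individually (no aggregation).
    client.incr("requests.handled").unwrap();
    client.count("bytes.sent", 512_i64).unwrap();
}
```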

@andresmargalef
Author

andresmargalef commented Feb 15, 2024

Sadly not. We have a lot of metrics. At work we had a similar problem using envoyproxy with a sink to a Datadog agent: every time a request arrived, several histogram metrics were written to UDP, even with buffering. We changed this "push to UDP" model to a "pull based" one where every 5 seconds we pull aggregated metrics, including histograms, from envoyproxy.
These are the CPU and latency reductions:

[screenshots: CPU usage and latency reduction graphs]
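For context, the shape of what this model looks like is roughly the following (a simplified, hypothetical sketch of aggregate-then-emit-on-an-interval, not envoyproxy's or the Datadog agent's actual implementation): the request path only updates in-memory state, and a background task drains it every 5 seconds.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

// Hypothetical aggregated state shared between request handlers and a flusher.
#[derive(Default)]
struct Aggregated {
    counters: HashMap<String, i64>,
    // Histograms kept as raw samples here for simplicity; a real implementation
    // would use a fixed-size summary (buckets or a sketch) instead.
    histograms: HashMap<String, Vec<u64>>,
}

fn main() {
    let state = Arc::new(Mutex::new(Aggregated::default()));

    // Flusher: every 5 seconds, take the accumulated metrics and emit them
    // in one batch (printed here; a real flusher would write to a sink).
    let flush_state = Arc::clone(&state);
    thread::spawn(move || loop {
        thread::sleep(Duration::from_secs(5));
        let mut agg = flush_state.lock().unwrap();
        for (key, total) in agg.counters.drain() {
            println!("{}:{}|c", key, total);
        }
        for (key, samples) in agg.histograms.drain() {
            println!("{}: {} samples", key, samples.len());
        }
    });

    // Request path: record locally, no network I/O per request.
    for latency_ms in [12u64, 9, 15, 11] {
        let mut agg = state.lock().unwrap();
        *agg.counters.entry("requests.handled".into()).or_insert(0) += 1;
        agg.histograms
            .entry("request.latency_ms".into())
            .or_default()
            .push(latency_ms);
    }

    // Keep the example alive long enough for one flush cycle.
    thread::sleep(Duration::from_secs(6));
}
```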

@56quarters
Owner

Thanks for the explanation. That's part of why I use Prometheus metrics for things these days.

This seems like it'd probably involve a pretty big re-architecture of Cadence and a change in behavior. Because of that, I'm unlikely to work on it myself.
