I don't see any aggregation here. In some clients, like this one in Go, an in-memory aggregator is convenient because it can reduce UDP traffic: for example, multiple increments/counts of the same metric can be collapsed into a single UDP send.
We do the same thing in other languages, and the performance boost for applications with heavy metric usage is significant.
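For counters, the idea is roughly this: increments only touch an in-memory map, and a background flush turns whatever accumulated into one datagram per metric. A minimal sketch in Go (the `Aggregator` type and method names are invented for illustration, not taken from any particular client library):

```go
package main

import (
	"fmt"
	"net"
	"sync"
	"time"
)

// Aggregator collapses many increments to the same metric into a single
// statsd line per flush interval.
type Aggregator struct {
	mu     sync.Mutex
	counts map[string]int64 // metric name -> accumulated delta
	conn   net.Conn
}

func NewAggregator(addr string, flushEvery time.Duration) (*Aggregator, error) {
	conn, err := net.Dial("udp", addr)
	if err != nil {
		return nil, err
	}
	a := &Aggregator{counts: make(map[string]int64), conn: conn}
	go func() {
		for range time.Tick(flushEvery) {
			a.flush()
		}
	}()
	return a, nil
}

// Incr only touches the in-memory map; no UDP traffic happens here.
func (a *Aggregator) Incr(name string, delta int64) {
	a.mu.Lock()
	a.counts[name] += delta
	a.mu.Unlock()
}

// flush sends one "name:value|c" datagram per metric, no matter how many
// Incr calls happened since the last flush.
func (a *Aggregator) flush() {
	a.mu.Lock()
	pending := a.counts
	a.counts = make(map[string]int64)
	a.mu.Unlock()

	for name, value := range pending {
		fmt.Fprintf(a.conn, "%s:%d|c", name, value)
	}
}

func main() {
	agg, err := NewAggregator("127.0.0.1:8125", time.Second)
	if err != nil {
		panic(err)
	}
	// Ten increments in the hot path become a single "hits:10|c" datagram.
	for i := 0; i < 10; i++ {
		agg.Incr("hits", 1)
	}
	time.Sleep(2 * time.Second) // let one flush happen
}
```

The same idea extends to gauges (keep the last value) and timers/histograms (keep a buffer or summary per flush window), which is where the UDP savings really add up.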
Sadly not. We have a lot of metrics. At work we had a similar problem using envoyproxy with a sink to a Datadog agent: every time a request arrived, several histogram metrics went out over UDP using buffering. We changed this push-to-UDP model to a pull-based one, where every 5 seconds we pull aggregated metrics (including histograms) from envoyproxy.
This change reduced both CPU usage and latency.
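The shape of that pull-based setup is roughly the sketch below: the hot path only updates in-process state, and a 5-second loop pulls an aggregated snapshot and forwards it in one pass. `MetricsSource`, `Snapshot`, and `staticSource` are invented names for this illustration, not Envoy or Datadog APIs:

```go
package main

import (
	"fmt"
	"time"
)

// MetricsSource is anything that can hand back already-aggregated values,
// e.g. an in-process registry, or a stats endpoint scraped over HTTP.
type MetricsSource interface {
	Snapshot() map[string]float64
}

// staticSource is a stand-in implementation so the sketch runs on its own.
type staticSource map[string]float64

func (s staticSource) Snapshot() map[string]float64 { return s }

// pullLoop replaces "one datagram per request" with "one aggregated read
// every interval", which is where the CPU and traffic savings come from.
func pullLoop(src MetricsSource, interval time.Duration, ship func(string, float64)) {
	for range time.Tick(interval) {
		for name, value := range src.Snapshot() {
			ship(name, value)
		}
	}
}

func main() {
	src := staticSource{"requests_total": 1234, "latency_p99_ms": 87}
	pullLoop(src, 5*time.Second, func(name string, value float64) {
		fmt.Printf("forwarding %s=%v\n", name, value) // would go to the agent
	})
}
```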
Thanks for the explanation. That's part of why I use Prometheus metrics for things these days.
This seems like it'd probably involve a pretty big re-architecture of Cadence and a change in behavior. Because of that, I'm unlikely to work on it myself.