Support prefixed glob matches #481
Comments
I think this would be a worthwhile feature, although I am not sure how to do it with the efficient state machine. If anyone wants to tackle this, please do! Meanwhile, unless you have very high throughput, I expect ~10 regex matches to work. Instead of deploying the exporter like an agent, consider deploying it as a sidecar. That way each instance of the exporter only needs to handle the metrics from one instance of the application. Additionally, you get per-instance metrics "for free": from the outside it stops mattering whether there are two processes communicating over some protocol, or one natively instrumented one.
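As an illustration, a minimal sketch of that sidecar pattern as a Kubernetes pod; the application image and mapping-file path are placeholders, and the ports assume graphite_exporter's defaults:

```yaml
# Sketch: one graphite_exporter per application instance, as a sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: spark-worker
spec:
  containers:
    - name: spark
      image: my-spark-image              # hypothetical app image; its GraphiteSink
                                         # would send to localhost:9109
    - name: graphite-exporter
      image: prom/graphite-exporter
      args:
        - --graphite.mapping-config=/etc/graphite_exporter/mapping.yml
      ports:
        - containerPort: 9108            # Prometheus scrape endpoint
```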
The data source is Spark, so I suspect the total number of series is fairly high. Do you know offhand if graphite_exporter already has a metric I could track for match rate or processing time?
Good point. I'm doing that already, which should limit the number of series depending on the number of local Spark processes (which varies in proportion to VM size).
It does not, but it really ought to 😉
I'm transitioning from https://docs.victoriametrics.com/vmagent.html to https://github.com/prometheus/graphite_exporter and ran into a case where I'm using prefixed globs to distinguish among a few possible metric prefixes.
Here's a simple example match:
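A hypothetical mapping of this shape, with `spark_*` as the prefixed glob component and made-up metric names:

```yaml
mappings:
  - match: 'spark_*.driver.jvm.heap.used'   # 'spark_*' mixes a literal prefix
    name: 'jvm_heap_used_bytes'             # with a wildcard in one component
    labels:
      app: '$1'                             # '$1' captures the first glob '*'
```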
This will fail, because the glob mapper doesn't support components that mix a literal prefix with a wildcard.
I think I could maybe switch to regex matches, but I'll need 10 of them, so I'm a little concerned about the possible overhead vs. glob matching.
It'd be great if the statsd exporter mapper could handle prefixed globs, or if you suspect 10 regex matches are no big deal, let me know :)
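For comparison, a regex-based version of the hypothetical mapping above; the mapper does accept `match_type: regex`, and the names here are still placeholders:

```yaml
mappings:
  - match: 'spark_([^.]+)\.driver\.jvm\.heap\.used'
    match_type: regex
    name: 'jvm_heap_used_bytes'
    labels:
      app: '$1'                 # first regex capture group
```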