Document Metrics Processor APIs and SDK requirements #1116
Hi @parallelstream. There is already a metrics Processor concept, although it is not completely documented. @jkwatson and @bogdandrutu have been working with me to resolve terminology here -- I am under the impression that there is already something similar to a Processor component in the Java SDK, but the API is not quite like the one you gave. In the OTel-Go SDK, I believe your stated goal can be realized by implementing one of the existing APIs.

As a requirement for the OpenTelemetry specification, this issue should remain open until we clarify: (1) which exporter APIs are available for processing metrics
Yeah, re distributed context -- this is spot on what we're doing. We store common labels in the Baggage and want a standard way of writing custom logic to extract them and attach them as KV tags to the metrics and spans we produce. With spans we do it in a SpanProcessor, so something similar for metrics was envisaged.
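To make the pattern concrete, here is a minimal sketch of "extract common labels from baggage and attach them as KV tags". All names (`BaggageLabelEnricher`, `enrich`, the plain `Map` types standing in for Baggage and Labels) are hypothetical illustrations, not the OpenTelemetry API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch (not the real OTel API): copy selected baggage entries
// onto a telemetry item's labels, the way a SpanProcessor hook might.
class BaggageLabelEnricher {
    private final Set<String> keysToCopy;

    BaggageLabelEnricher(Set<String> keysToCopy) {
        this.keysToCopy = keysToCopy;
    }

    // 'baggage' stands in for the distributed context; 'labels' for the
    // labels of the span or metric being produced.
    Map<String, String> enrich(Map<String, String> baggage, Map<String, String> labels) {
        Map<String, String> out = new HashMap<>(labels);
        for (String key : keysToCopy) {
            String value = baggage.get(key);
            if (value != null) {
                out.put(key, value);
            }
        }
        return out;
    }
}
```

The same enricher could back both a SpanProcessor and a metrics-side hook, which is the symmetry this comment is asking for.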
👋 @jmacd
I'm interested in working on terminology a bit. We've been discussing an existing metrics Processor that is similar in ways to the same term in Tracing. The thing you're introducing in open-telemetry/opentelemetry-go#1271 is something we need, but I was initially confused because of the term.

There has been a bit of discussion about sampling in the metrics API, as part of a discussion about raw metric values and exemplars. In the closed draft PR linked above I modified the metric Accumulator to allow for reducing label dimensions before building label sets. The connection to sampling is that if you want to sample raw metric events at full dimensionality but not accumulate them at full dimensionality, you need to capture these dimensions before they're accumulated. I'd like to find a better term for processing that happens during the early stages of the accumulator as in open-telemetry/opentelemetry-go#1023. Your application here injects distributed context label values, also during the early stages of the accumulator. I see these applications as very similar.

How does the term "AccumulatorProcessor" sound? It's a lot of letters, but it helps distinguish this from the current Processor concept, which is post-accumulator.
The linked-to PR in Go looks great. It's a short and sweet interface for enriching metric events with labels from the context. My only question is whether we should adopt the term "Processor" for this new interface and return the existing Processor component to using a term such as "Batcher" or "Integrator", both of which we have used in the past. I'm looking for others' input.
Hi @jmacd, thanks for the pointers. After reading the above I see the confusion and generally agree with your assessment -- we need to get the terminology right. AccumulatorProcessor seems reasonable to me. It needs to expose the same methods I outlined and needs to be called right after the record() method itself, before the metric gets into the accumulator. So if I just modify the name of the interface, the rest stays intact:
If we have this extension point, then the baggage-based label population which Hanshuo shows becomes just an application-level implementation of the AccumulatorProcessor and might not even live in the OTel repo; it can be something we keep for ourselves (though we can consider making it part of standard OTel if people find it useful). Implementation-wise, I agree it needs to go before building the label set for an instrument -- I thought the bind() method of AbstractSynchronousInstrument could be a good place, per my PR?
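A minimal sketch of what such an extension point might look like, with the baggage-based label population as one application-level implementation. The interface name comes from the discussion above, but the method signature and all other names are illustrative assumptions, not the actual proposal:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a hook invoked right after record(), before the
// measurement reaches the accumulator. Signature is an assumption.
interface AccumulatorProcessor {
    // Return the label set the accumulator should aggregate under.
    Map<String, String> onRecord(double value, Map<String, String> labels,
                                 Map<String, String> baggage);
}

// Application-level implementation: inject a baggage entry as a metric label.
class BaggageInjectingProcessor implements AccumulatorProcessor {
    private final String key;

    BaggageInjectingProcessor(String key) { this.key = key; }

    @Override
    public Map<String, String> onRecord(double value, Map<String, String> labels,
                                        Map<String, String> baggage) {
        Map<String, String> out = new HashMap<>(labels);
        String v = baggage.get(key);
        if (v != null) {
            out.put(key, v);
        }
        return out;
    }
}
```

Because the hook only transforms labels, implementations like this one can live entirely outside the SDK.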
@parallelstream I'm very supportive of this idea. As far as terminology goes, @bogdandrutu implied he thinks we should use the name "Processor" for the API you are describing, which sits between the API and the Accumulator, and we should adopt another term for what is today called "Processor", which sits between the Accumulator and the Exporter (e.g., #1198). Prior terms we've used for the component between the Accumulator and the Exporter: "Batcher", "Integrator". As far as the interface itself (independent of name), I agree with the following:
I want to challenge you to find more than one application of the API as a starting point, and I hinted at one above. See the note in the draft PR, which I had developed only as a proof-of-concept for label reduction. Label reduction can be performed in either of these locations: between the API and the Accumulator, or between the Accumulator and the Exporter, i.e., before or after aggregation of an interval. OTLP has support for RawValue exemplars, and at the time this was added we also discussed support for a …
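Label reduction itself is a small transformation, which is why it can run on either side of the accumulator. A sketch under assumed names (nothing here is the real proof-of-concept code): project a full label set down to an allowed subset of keys.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of label-dimension reduction. Run between the API and the
// Accumulator, it reduces what gets aggregated; run between the Accumulator
// and the Exporter, it merges already-aggregated series instead.
class LabelReducer {
    private final Set<String> keepKeys;

    LabelReducer(Set<String> keepKeys) {
        this.keepKeys = keepKeys;
    }

    Map<String, String> reduce(Map<String, String> labels) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : labels.entrySet()) {
            if (keepKeys.contains(e.getKey())) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }
}
```

The trade-off described in the comment: reducing before accumulation loses the full-dimensionality raw events unless they are sampled first, which is the connection to exemplars.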
@bogdandrutu I have a related question. In the past you have mentioned that, because of the cost of extracting correlations from baggage, you have seen the work of processing labels moved out of the critical path into a queue of metric events. Apparently this is being proposed in OTel-Java (@jkwatson, @carlosalberto), and I wonder if we should be talking about any notions of metric event queuing at the specification level. If you follow the discussion above (#1116 (comment)), it shows that we can replace a stream of metric-API events with a stream of sampled metric-API events using a hypothetical …
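The queuing idea mentioned here can be sketched as follows. All names are hypothetical; the point is only that the hot path does a constant-time enqueue of the raw event plus its baggage, and the expensive correlation extraction happens on the consumer side:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: move label processing off the critical path by queuing raw
// metric events; a consumer later extracts baggage-derived labels.
class QueuedRecorder {
    static class MetricEvent {
        final double value;
        final Map<String, String> baggage;

        MetricEvent(double value, Map<String, String> baggage) {
            this.value = value;
            this.baggage = baggage;
        }
    }

    private final BlockingQueue<MetricEvent> queue = new ArrayBlockingQueue<>(1024);

    // Hot path: enqueue only; no label extraction here. Returns false if the
    // bounded queue is full (i.e., the event is dropped rather than blocking).
    boolean record(double value, Map<String, String> baggage) {
        return queue.offer(new MetricEvent(value, baggage));
    }

    // Consumer side, off the hot path: drain events and do the expensive work.
    MetricEvent poll() {
        return queue.poll();
    }
}
```

A bounded queue forces an explicit drop/block policy, which is exactly the kind of behavior a specification-level treatment of metric event queuing would need to pin down.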
FYI, I don't know of any proposal to do this in otel-java. Can you point me at an issue or conversation I might have missed? |
This makes a ton of sense to me. I would be happy with either, but as a user "Processor" is more natural and understandable.
Sure thing. I can think of a few:
Not at this stage. We are adopting a different approach: we export all data as raw from the SDK and do cardinality reduction, as well as roll-ups and throttling, in a streaming processing pipeline.
@jmacd, do you want me to go ahead and create a PR for the proposed changes, along the lines of the example PR here: https://github.com/parallelstream/opentelemetry-java/pull/3/commits? Do we have enough understanding now about what we are trying to achieve?
The Metrics SDK has changed a lot and the processor that takes raw measurements will not be part of the scope for the initial stable release. |
(moving from open-telemetry/opentelemetry-java#1810)
Is your feature request related to a problem? Please describe.
Similar to SpanProcessor, we need a hook for metrics which would allow users to intercept metrics lifecycle events with their own logic.
Describe the solution you'd like
Describe alternatives you've considered
The main problem I am trying to address is being able to add certain labels to metrics depending on Context (i.e., dynamically). One alternative would be to just do that and make it a standalone feature, aka "sticky labels"; I had a simple PR for this here. However, a MetricsProcessor is a more generic solution which can address other use cases.
As for the implementation, instead of using immutable Labels we can pass a ReadWriteLabels object; this would follow the same pattern as SpanProcessor.
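A sketch of that pattern -- the interface name `ReadWriteLabels` comes from the comment above, but the methods and the backing implementation are assumptions, loosely analogous to the ReadWriteSpan handed to a SpanProcessor:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a mutable labels view, so a processor can add or override labels
// in place instead of rebuilding an immutable Labels object.
interface ReadWriteLabels {
    String get(String key);
    void put(String key, String value);
}

class MapBackedLabels implements ReadWriteLabels {
    private final Map<String, String> map = new HashMap<>();

    public String get(String key) {
        return map.get(key);
    }

    public void put(String key, String value) {
        map.put(key, value);
    }
}
```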
Additional context
There's a problem with this approach: statically bound instruments will not be able to dynamically add labels during a record operation. Since we only intercept onLabelsBound(), if one statically binds an instrument and then uses it to record multiple measurements with different contexts, only the first bind call will be intercepted.
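The limitation can be shown with a toy model (all names hypothetical): the context is captured once at bind() time, so later record() calls reuse those labels even if the ambient baggage has since changed.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the bound-instrument problem: the labels hook runs once,
// with the context present at bind time; record() never re-reads context.
class BoundCounter {
    private final Map<String, String> boundLabels;
    private double sum;

    // Stands in for bind(): onLabelsBound() is intercepted here, exactly once.
    BoundCounter(Map<String, String> labels, Map<String, String> baggageAtBind) {
        Map<String, String> merged = new HashMap<>(labels);
        merged.putAll(baggageAtBind); // context captured only at bind time
        this.boundLabels = merged;
    }

    // Later context changes are invisible to this call.
    void record(double value) {
        sum += value;
    }

    Map<String, String> labels() {
        return boundLabels;
    }

    double sum() {
        return sum;
    }
}
```

Every measurement recorded through the bound instrument carries the bind-time labels, which is exactly the behavior the comment flags as a problem for dynamic, context-derived labels.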
Example PR
https://github.com/parallelstream/opentelemetry-java/pull/3/commits