This repository has been archived by the owner on Dec 6, 2024. It is now read-only.
Update 0003-measure-metric-type to match current spec #61
Merged: bogdandrutu merged 11 commits into open-telemetry:master from jmacd:jmacd/metric-round3 on Nov 16, 2019.
Commits
cb6a996  Propose approval / start round 3 of discussion
06f9656  Use Monotonic
4f59e21  Updates to use Monotonic/NonMonotonic, and NonNegtive/Signed
d871653  Merge branch 'master' into jmacd/metric-round3 (bogdandrutu)
202cf54  Upstream
2841cfd  Update 0003
8b13a7f  Minor
5f07994  Revert
fce5747  Revert
08a8ab1  Merge branch 'master' into jmacd/metric-round3 (bogdandrutu)
d4e097b  Merge branch 'master' into jmacd/metric-round3 (bogdandrutu)
Conversations
I need to better understand the difference between recommended vs. required keys. It feels to me that the whole metrics API diverged a bit more than expected with the `LabelSet` and got pushed more towards statsd than I feel comfortable with (this is mostly coming from the Go implementation that I looked at).

Allowing users to call getHandlers and add/set/record with a `LabelSet` that is larger than the "required keys" removes the optimization where this operation can be transformed into a simple atomic increment for the most common case (default aggregation using recommended/required keys).
As an example, take the recommended key `queue_topic` (which has 3 possible values `value_1`, `value_2`, `value_3`). If the user calls `add` with a `LabelSet` that includes both `queue_topic` and `queue_name`, then I need to do an expensive operation to extract the `queue_topic` from the `LabelSet`, then look up the Handle, then update the atomic variable (even if the `LabelSet` includes only the `queue_topic`, I need to look up the value and compare strings). This will definitely be extra overhead for the common case.

I would like to confirm, as I mentioned before, that Prometheus-like performance can be achieved for the "required/recommended keys" and default aggregation with our current API, while allowing more extensibility and custom aggregations.
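A minimal, self-contained Go sketch of the example above, using made-up stand-in types (`LabelSet`, `counter`) rather than the real OpenTelemetry API. It shows the extra string/projection work a direct `Add` pays when the `LabelSet` carries labels beyond the recommended key:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// LabelSet is a stand-in for the API concept: key/value pairs attached to
// a metric event. Not the real OpenTelemetry type.
type LabelSet map[string]string

// counter keeps one atomic cell per value of its single recommended key.
type counter struct {
	recommendedKey string
	cells          map[string]*int64 // not goroutine-safe; kept simple for illustration
}

// Add must first project the LabelSet onto the recommended key (string
// lookup and map work) before it reaches the cheap atomic increment.
func (c *counter) Add(ls LabelSet, value int64) {
	key := c.recommendedKey + "=" + ls[c.recommendedKey]
	cell, ok := c.cells[key]
	if !ok {
		cell = new(int64)
		c.cells[key] = cell
	}
	atomic.AddInt64(cell, value)
}

func main() {
	c := &counter{recommendedKey: "queue_topic", cells: map[string]*int64{}}
	// The LabelSet carries queue_name in addition to the recommended key,
	// so every Add pays the projection cost before the atomic increment.
	c.Add(LabelSet{"queue_topic": "value_1", "queue_name": "orders"}, 1)
	c.Add(LabelSet{"queue_topic": "value_1", "queue_name": "billing"}, 1)
	fmt.Println(*c.cells["queue_topic=value_1"]) // 2
}
```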
This comment probably applies to the whole `LabelSet` concept. I feel that the current API spec is focused on the assumption that the entire `LabelSet` will just be serialized on the wire, which is different from what I think an optimized metrics implementation should do, which is to pre-aggregate metrics and drop all the extra labels that are not necessary.

Anyway, I don't want to block this PR with this comment, but I do want to have this documented and addressed before we move forward with the `LabelSet` idea (which I do like from the perspective of having an implementation performant for statsd, but I do not like that it adds unnecessary overhead for something like Prometheus).
If you add/set/record with a handle that has more labels than the recommended labels for the instrument in question, you still get an atomic operation (assuming the aggregation supports it--some measure-aggregations do not). The atomic operation updates a record that is bound to the pair of (descriptor, labelset). There is no additional computation in the instrumentation code path.
On the collect/export code path, the implementation "groups" all the records for a particular descriptor, meaning it folds results from several label sets into one aggregate result. So you do have additional cost, but it is not in the fast path, it's in the export path. There's a possibility to compute and cache a mapping from (Descriptor, LabelSet) to ordinal recommended-key positions, which would mostly alleviate the computation here. It's a simple matter of running through the label sets, mapping them onto the unique, ordered recommended-key labels, and then merging aggregates. [I haven't implemented this cache, it's a NOTE in the code.]
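A rough sketch of that export-path grouping, using the same illustrative stand-in types as the earlier sketch (`record` and `groupByRecommended` are made up for the example; this is not the SDK's actual exporter code):

```go
package main

import (
	"fmt"
	"strings"
)

// LabelSet and record are illustrative stand-ins, not OpenTelemetry types.
type LabelSet map[string]string

type record struct {
	labels LabelSet
	sum    int64
}

// groupByRecommended runs on the collect/export path: it folds records from
// several label sets into one aggregate per combination of recommended keys,
// leaving the instrumentation fast path untouched.
func groupByRecommended(recommended []string, records []record) map[string]int64 {
	out := map[string]int64{}
	for _, r := range records {
		parts := make([]string, 0, len(recommended))
		for _, k := range recommended {
			parts = append(parts, k+"="+r.labels[k])
		}
		out[strings.Join(parts, ",")] += r.sum
	}
	return out
}

func main() {
	recs := []record{
		{LabelSet{"queue_topic": "value_1", "queue_name": "orders"}, 3},
		{LabelSet{"queue_topic": "value_1", "queue_name": "billing"}, 2},
		{LabelSet{"queue_topic": "value_2", "queue_name": "orders"}, 1},
	}
	fmt.Println(groupByRecommended([]string{"queue_topic"}, recs))
	// map[queue_topic=value_1:5 queue_topic=value_2:1]
}
```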
Now consider the benefit. You suggested a `LabelSet` that includes both `queue_topic` (the recommended key) and `queue_name` (an additional label). During the collect/export pass, the implementation will merge records from different label sets according to their value for `queue_topic`, but we have new information that could be very useful to the user. We're able to derive, for each of the aggregate results, the distribution of `queue_name`. This could be used to automatically ensure that a unique exemplar was supplied for each distinct `queue_name`, for example. This is a huge benefit, IMO.
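To illustrate that point, the grouping sketch above can be extended so that each `queue_topic` aggregate also records which `queue_name` values contributed to it (again illustrative code only, not the real implementation):

```go
package main

import "fmt"

type LabelSet map[string]string

type record struct {
	labels LabelSet
	sum    int64
}

// groupStats is one aggregate per queue_topic value, plus the distribution
// of the additional queue_name label folded into it.
type groupStats struct {
	sum        int64
	queueNames map[string]int64
}

func groupWithDistribution(records []record) map[string]*groupStats {
	out := map[string]*groupStats{}
	for _, r := range records {
		topic := r.labels["queue_topic"]
		g, ok := out[topic]
		if !ok {
			g = &groupStats{queueNames: map[string]int64{}}
			out[topic] = g
		}
		g.sum += r.sum
		g.queueNames[r.labels["queue_name"]] += r.sum
	}
	return out
}

func main() {
	recs := []record{
		{LabelSet{"queue_topic": "value_1", "queue_name": "orders"}, 3},
		{LabelSet{"queue_topic": "value_1", "queue_name": "billing"}, 2},
	}
	g := groupWithDistribution(recs)["value_1"]
	fmt.Println(g.sum, g.queueNames) // 5 map[billing:2 orders:3]
}
```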
Note that the cost of a handle `Add()` in the Go benchmarks stands at ~15ns (regardless of how many labels there are, compared with the recommended labels), whereas a direct call to `Add()` with a label set stands at ~200ns. So using a handle is an order of magnitude cheaper than a direct call (also note that a direct call is much cheaper than constructing a label set, for realistic label sets).

It remains to be seen how much the impact of additional CPU at collection will be, but we cannot let the future metrics world be constrained by Prometheus, which has a very limited view of how we can use labels.
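The shape of such a benchmark, sketched against the toy counter used above rather than the real SDK (the handle pre-binds the (instrument, LabelSet) pair so the hot path is a bare atomic add; this file would be a `_test.go` file):

```go
package metrics // in a _test.go file; illustrative only

import (
	"sync/atomic"
	"testing"
)

type LabelSet map[string]string

type counter struct{ cells map[string]*int64 }

// handle pre-binds a (counter, LabelSet) pair to its atomic cell.
type handle struct{ cell *int64 }

func (h handle) Add(v int64) { atomic.AddInt64(h.cell, v) }

// lookup stands in for the per-call LabelSet work a direct Add must do.
func (c *counter) lookup(ls LabelSet) *int64 {
	key := "queue_topic=" + ls["queue_topic"]
	cell, ok := c.cells[key]
	if !ok {
		cell = new(int64)
		c.cells[key] = cell
	}
	return cell
}

func (c *counter) Add(ls LabelSet, v int64) { atomic.AddInt64(c.lookup(ls), v) }

func (c *counter) GetHandle(ls LabelSet) handle { return handle{c.lookup(ls)} }

func BenchmarkHandleAdd(b *testing.B) {
	c := &counter{cells: map[string]*int64{}}
	h := c.GetHandle(LabelSet{"queue_topic": "value_1", "queue_name": "orders"})
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		h.Add(1) // bound handle: just the atomic increment
	}
}

func BenchmarkDirectAdd(b *testing.B) {
	c := &counter{cells: map[string]*int64{}}
	ls := LabelSet{"queue_topic": "value_1", "queue_name": "orders"}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		c.Add(ls, 1) // pays the label projection/lookup on every call
	}
}
```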
Can I resolve this?
I need to see an implementation (e2e) before convincing myself that we are not making a wrong API decision. But I think we should close this for the moment; we have documented the concern and will revisit once we have an e2e implementation.
Maybe just open an issue with the conversation to track this.
Here's an e2e implementation. There are two `export.Batcher` implementations: one groups by recommended keys, the other groups by all keys in the label set.

open-telemetry/opentelemetry-go#265
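The shape of those two strategies, reduced to an illustrative sketch (the `batcher` interface below is a made-up stand-in for the real `export.Batcher`, which has a richer API):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type LabelSet map[string]string

type record struct {
	labels LabelSet
	sum    int64
}

// batcher stands in for the grouping policy chosen at export time.
type batcher interface {
	groupKey(recommended []string, ls LabelSet) string
}

// byRecommended groups by the instrument's recommended keys only.
type byRecommended struct{}

func (byRecommended) groupKey(recommended []string, ls LabelSet) string {
	parts := make([]string, 0, len(recommended))
	for _, k := range recommended {
		parts = append(parts, k+"="+ls[k])
	}
	return strings.Join(parts, ",")
}

// byAllKeys keeps every label in the set as part of the group key.
type byAllKeys struct{}

func (byAllKeys) groupKey(_ []string, ls LabelSet) string {
	parts := make([]string, 0, len(ls))
	for k, v := range ls {
		parts = append(parts, k+"="+v)
	}
	sort.Strings(parts)
	return strings.Join(parts, ",")
}

func export(b batcher, recommended []string, records []record) map[string]int64 {
	out := map[string]int64{}
	for _, r := range records {
		out[b.groupKey(recommended, r.labels)] += r.sum
	}
	return out
}

func main() {
	recs := []record{
		{LabelSet{"queue_topic": "value_1", "queue_name": "orders"}, 3},
		{LabelSet{"queue_topic": "value_1", "queue_name": "billing"}, 2},
	}
	fmt.Println(export(byRecommended{}, []string{"queue_topic"}, recs)) // 1 group
	fmt.Println(export(byAllKeys{}, []string{"queue_topic"}, recs))     // 2 groups
}
```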
@bogdandrutu I'm not sure I could summarize the issue you intend to track. Is it that LabelSets permit additional labels and you're worried about performance?