Add head sampling section + elaborate tail sampling #2463
Conversation
This is definitely an improvement! A few nits, but otherwise I like it.
Co-authored-by: Kent Quirk <[email protected]>
I'd avoid capitalizing "head sampling" and "tail sampling" (unless the context asks for it like a heading or the start of a sentence).
as possible. A decision to sample or drop a span or trace is not made by
inspecting the trace as a whole.

For example, the most common form of head sampling is Consistent Probability
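The key property of head sampling described above (the decision never inspects the trace as a whole) can be sketched in a few lines. This is a hypothetical illustration of a trace-ID-ratio sampler, not the OpenTelemetry SDK implementation: hashing only the trace ID means every service in a distributed system reaches the same keep/drop decision for a given trace, with no coordination.

```python
def should_sample(trace_id: int, ratio: float) -> bool:
    """Deterministic head-sampling decision based only on the trace ID.

    The low 56 bits of the trace ID are compared against a threshold
    derived from the sampling ratio, so roughly `ratio` of all traces
    are kept. The trace contents are never consulted.
    """
    bound = round(ratio * (1 << 56))
    return (trace_id & ((1 << 56) - 1)) < bound

# The same trace ID always yields the same decision, so the choice can
# be made once, at the "head" (root span) of the trace.
```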
Can we link to the spec of Consistent Probability Sampling here?
Doneski!
Non-blocking comment, great addition :)
/easycla
Hmmm, after I hit the button I wondered if the head sampling bit should not use the Consistent Probability sampler terminology, since technically SDKs don't implement this yet. But, like, the concept is still the same from the perspective of an end user: deterministically taking a fraction of traces is what both do, but the CPS propagates metadata that tells you about the total population the sample comes from. This makes the CPS superior because it lets a telemetry backend re-weight counts so that any aggregation you derive from tracing data (like the total # of requests) is actually correct w.r.t. the total population compared to the sample.

Anyways, I'm really not too worried at the moment, as it's easy enough to change. Frankly, this is the direction all the head samplers should be heading anyways.
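The re-weighting described above can be made concrete with a small, hypothetical illustration (the tuple shape and numbers here are invented for the example): if each kept trace carries the probability `p` it was sampled with, a backend can estimate the true total by weighting each sampled trace as `1/p`.

```python
# Each kept trace records the probability with which it was sampled,
# as propagated by a Consistent Probability sampler.
sampled = [
    ("trace-a", 0.25),  # 1 in 4 such traces was kept
    ("trace-b", 0.25),
    ("trace-c", 0.50),  # 1 in 2 such traces was kept
]

# Re-weight: each sampled trace stands in for 1/p traces in the
# total population, so the estimated request count is sum(1/p).
estimated_total = sum(1 / p for _, p in sampled)
# 4 + 4 + 2 = 10 requests estimated from only 3 sampled traces
```

This is why the comment calls the CPS "superior": a plain ratio sampler without the propagated probability metadata cannot recover population-level counts from the sample alone.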
LGTM
Fixes #2457
This still leaves our sampling docs in a very, very incomplete state. But in the name of incremental progress, here's something.
Preview: https://deploy-preview-2463--opentelemetry.netlify.app/docs/concepts/sampling/