
New counter metric showing number of sampled/not sampled spans #30482

Closed
ArthurSens opened this issue Jan 12, 2024 · 3 comments · Fixed by #30485
Labels: enhancement (New feature or request), processor/tailsampling (Tail sampling processor)

Comments

@ArthurSens
Member

Component(s)

processor/tailsampling

Is your feature request related to a problem? Please describe.

I was debugging the input/output rate of spans of a few collectors and was super confused about the numbers I was seeing.

The metric `rate(otelcol_receiver_accepted_spans[5m])` was showing around 12k spans/second, and `rate(otelcol_processor_tail_sampling_count_traces_sampled{policy="mypolicy", sampled="false"}[5m])` was showing around 1k/second. With no failures anywhere in the pipeline, I expected `rate(otelcol_exporter_sent_spans[5m])` to show an output rate of around 11k spans/second; however, it was showing only 6k spans/second.

@matej-g helped me with this and explained that the metric `otelcol_processor_tail_sampling_count_traces_sampled` actually counts traces, not spans, which its name in fact suggests. That explains the gap: each not-sampled trace contains multiple spans, so dropping 1k traces/second can easily account for the ~6k spans/second missing at the exporter.

Describe the solution you'd like

It would be useful to have a metric showing the number of sampled/not sampled spans, so that the pipeline's input/output span rates can be verified as expected.
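For illustration, here is a minimal sketch of what such a counter could look like, using the OpenCensus stats API that collector processors relied on at the time. The metric name `count_spans_sampled`, the `sampled` tag, and the helper names are assumptions for this sketch, not necessarily what #30485 ended up implementing:

```go
package tailsamplingprocessor

import (
	"context"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

// Hypothetical span-level counter; where the existing count_traces_sampled
// metric increments once per trace, this one adds the trace's span count.
var (
	tagSampledKey = tag.MustNewKey("sampled")

	statCountSpansSampled = stats.Int64(
		"count_spans_sampled",
		"Count of spans that were sampled or not",
		stats.UnitDimensionless)
)

// countSpansView exposes the measure as a cumulative sum, which the collector
// would surface as otelcol_processor_tail_sampling_count_spans_sampled.
func countSpansView() *view.View {
	return &view.View{
		Name:        statCountSpansSampled.Name(),
		Measure:     statCountSpansSampled,
		Description: statCountSpansSampled.Description(),
		TagKeys:     []tag.Key{tagSampledKey},
		Aggregation: view.Sum(),
	}
}

// recordSpanCount adds a trace's span count under the decision outcome.
func recordSpanCount(ctx context.Context, spanCount int64, sampled bool) {
	decision := "false"
	if sampled {
		decision = "true"
	}
	_ = stats.RecordWithTags(ctx,
		[]tag.Mutator{tag.Upsert(tagSampledKey, decision)},
		statCountSpansSampled.M(spanCount))
}
```

With a metric like that, the sanity check from the problem description becomes straightforward: `rate(otelcol_receiver_accepted_spans[5m])` should roughly equal `rate(otelcol_exporter_sent_spans[5m])` plus the rate of spans recorded with `sampled="false"`.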

Describe alternatives you've considered

No response

Additional context

No response

@ArthurSens ArthurSens added enhancement New feature or request needs triage New item requiring triage labels Jan 12, 2024
@github-actions github-actions bot added the processor/tailsampling Tail sampling processor label Jan 12, 2024
@github-actions
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@ArthurSens
Member Author

The same applies to the metric `otelcol_processor_tail_sampling_sampling_trace_dropped_too_early`: if it increases, I have no way of knowing how many spans were dropped.
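The same pattern as in the earlier sketch could cover this case too, by recording a trace's span count at the moment it is evicted before a decision; the measure name here is again hypothetical:

```go
package tailsamplingprocessor

import (
	"context"

	"go.opencensus.io/stats"
)

// Hypothetical companion counter: spans belonging to traces that were
// dropped before any sampling policy evaluated them.
var statCountSpansDroppedTooEarly = stats.Int64(
	"count_spans_dropped_too_early",
	"Count of spans in traces dropped before a sampling decision was made",
	stats.UnitDimensionless)

// recordEarlyDrop records how many spans a trace carried when it was
// evicted from the in-memory buffer before a decision could be made.
func recordEarlyDrop(ctx context.Context, spanCount int64) {
	stats.Record(ctx, statCountSpansDroppedTooEarly.M(spanCount))
}
```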

@jpkrohling
Member

I agree -- the only concern is about current users: what's the performance impact of adding this new counter? Can we wrap it in a feature flag, so that users affected by it can turn it off? If we hear no complaints for a couple of releases, we can remove the feature flag.
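As a sketch of what such a flag could look like, the collector's `featuregate` package supports registering a gate; the gate ID below is hypothetical, and `recordSpanCount` refers to the earlier sketch. A beta-stage gate is enabled by default, so users who see a performance regression could disable it with `--feature-gates=-processor.tailsampling.countSpansSampled`:

```go
package tailsamplingprocessor

import (
	"context"

	"go.opentelemetry.io/collector/featuregate"
)

// Hypothetical gate ID; registered once at package load time.
var countSpansGate = featuregate.GlobalRegistry().MustRegister(
	"processor.tailsampling.countSpansSampled",
	featuregate.StageBeta, // enabled by default, can be switched off
	featuregate.WithRegisterDescription("Reports per-span sampled/not-sampled counters in the tail sampling processor."),
	featuregate.WithRegisterReferenceURL("https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/30482"),
)

// recordSpanCountIfEnabled wraps the recording helper from the earlier sketch
// so the extra counter can be turned off without a rebuild.
func recordSpanCountIfEnabled(ctx context.Context, spanCount int64, sampled bool) {
	if !countSpansGate.IsEnabled() {
		return
	}
	recordSpanCount(ctx, spanCount, sampled)
}
```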

jpkrohling pushed a commit that referenced this issue Feb 16, 2024
…0485)

**Description:** Add metrics to measure sampled/not sampled spans.

**Link to tracking Issue:** Fixes #30482

**Testing:** None

**Documentation:** None

---------

Signed-off-by: Arthur Silva Sens <[email protected]>
XinRanZhAWS pushed a commit to XinRanZhAWS/opentelemetry-collector-contrib that referenced this issue Mar 13, 2024
…en-telemetry#30485)
