Multi-policy sampling #2113
If you are restricting to power-of-two sampling rates, as in consistent sampling, the probability that some profiler is active is given by the maximum activation probability over all policies that contain the given profiler. Therefore, in your example, the basic profiler is activated with a probability of max(0.1, 0.01) = 0.1, giving weight w = 10.
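For concreteness, here is a minimal Go sketch of that rule (the `Policy` type and names are hypothetical, not an OTel or OTEP API): a single consistency level r is drawn per trace, a policy with rate 2^-k fires iff r >= k, so a profiler fires exactly when the highest-probability policy containing it does, with adjusted count 2^k.

```go
package main

import (
	"fmt"
	"math/bits"
	"math/rand"
)

// Policy is a hypothetical shape for illustration: it samples with
// probability 2^-K and, when it fires, activates the listed profilers.
type Policy struct {
	Name      string
	K         int // sampling probability = 2^-K
	Profilers []string
}

// consistencyLevel draws r such that P(r >= k) = 2^-k, using the number
// of leading zeros of a uniform 64-bit value (a construction similar to
// the one in the consistent-sampling OTEPs).
func consistencyLevel(rng *rand.Rand) int {
	return bits.LeadingZeros64(rng.Uint64() | 1) // |1 bounds r at 63
}

// activeProfilers: a profiler fires iff r clears the smallest K (i.e. the
// largest probability) among the policies that contain it; its
// adjusted_count is the inverse of that probability, 2^K.
func activeProfilers(policies []Policy, r int) map[string]uint64 {
	minK := map[string]int{}
	for _, p := range policies {
		for _, prof := range p.Profilers {
			if k, seen := minK[prof]; !seen || p.K < k {
				minK[prof] = p.K
			}
		}
	}
	active := map[string]uint64{}
	for prof, k := range minK {
		if r >= k {
			active[prof] = uint64(1) << k // weight = 2^k
		}
	}
	return active
}

func main() {
	rng := rand.New(rand.NewSource(1))
	policies := []Policy{
		{Name: "metrics", K: 3, Profilers: []string{"basic"}},     // rate 1/8
		{Name: "cpu", K: 6, Profilers: []string{"basic", "cpu"}},  // rate 1/64
	}
	r := consistencyLevel(rng)
	fmt.Println("r =", r, "active:", activeProfilers(policies, r))
}
```

Because both decisions share the same r, the CPU profiler can only be active when the basic profiler is too, which is what makes the max-probability weighting consistent.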
@oertl can I pick your brain on another somewhat similar scenario? Suppose I want to have different sampling rates per endpoint of a service. Since the endpoint partitions all possible requests into disjoint sets, there's no issue with using a different sampling probability for each endpoint.
@yurishkuro If I understand correctly, you are thinking of a case where the individual spans of a trace do not have the same sampling probability. In this case, the trace may be incomplete (or partially sampled) and a weight for the whole trace cannot simply be determined. The weighting of the trace ultimately depends on the quantity of interest extracted from the trace. This is exactly what my paper is about: https://arxiv.org/abs/2107.07703. The nice thing about consistent sampling is that it avoids the combinatorial explosion you mentioned.
@oertl sorry, I didn't explain well. I was still thinking of traditional sampling that happens in one place (at the root) and is then propagated with the trace, so it is consistent. But I want the probability of that decision to be based on attributes of the trace. The classic example of this is different sampling rates by endpoint. Or by user type: free user vs. premium user. Basically, when the values of the attribute are independent, there is no issue. But in the case of a trace that matches several attributes at once (e.g. a premium user hitting the checkout endpoint), the number of attribute combinations, and therefore of distinct sampling rates, explodes combinatorially.
@yurishkuro I do not see a problem when taking the max probability. As long as each trace is finally weighted according to the inverse of its chosen sampling probability, the estimate should be unbiased. Here is an example for clarification: assume sampling probabilities p_{catalog} = 1/2, p_{checkout} = 1/4, and p_{prime} = 1/8, and that the following traces with the corresponding attributes have been sampled:

- {catalog, checkout}: sampled with probability max(1/2, 1/4) = 1/2, weight 2
- {checkout}: sampled with probability 1/4, weight 4
- {checkout, prime}: sampled with probability max(1/4, 1/8) = 1/4, weight 4

The estimate of the number of traces with attribute {checkout} would be 2 + 4 + 4 = 10.
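A minimal sketch of that estimate, assuming the trace attribute sets listed above: the weight of each sampled trace is the inverse of the maximum sampling probability among its attributes, and the weights of traces containing {checkout} are summed.

```go
package main

import "fmt"

func main() {
	// Per-attribute sampling probabilities from the example above.
	probs := map[string]float64{"catalog": 0.5, "checkout": 0.25, "prime": 0.125}

	// Attribute sets of the sampled traces (the assumed list above).
	sampled := [][]string{
		{"catalog", "checkout"},
		{"checkout"},
		{"checkout", "prime"},
	}

	estimate := 0.0
	for _, attrs := range sampled {
		pMax, hasCheckout := 0.0, false
		for _, a := range attrs {
			if probs[a] > pMax {
				pMax = probs[a]
			}
			if a == "checkout" {
				hasCheckout = true
			}
		}
		if hasCheckout {
			estimate += 1 / pMax // weight = inverse of the chosen (max) probability
		}
	}
	fmt.Println(estimate) // prints 10: 2 + 4 + 4
}
```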
The recent OTEPs from @jmacd showed how consistent sampling can be applied to multiple nodes using different sampling rates. I have another use case that is similar, but not the same. Our system supports sampling policies that can express not only the desired sampling rate, but also the desired verbosity, e.g. one policy may request the trace to collect CPU metrics, another may ask for function-level profiling (lots of internal spans). And those policies can sometimes be attached to the same node in the architecture. Right now we're evaluating them independently (with independent coin flips).
This raises two problems:
Problem 1. The `adjusted_count` or "weight" is not a property of the whole span, but of a specific profiler. E.g. the same span can represent basic metrics like latency/QPS/errors sampled 1-in-10, and also CPU metrics sampled 1-in-100, so the weights for these two sets of data will be 10 and 100 respectively.

Problem 2. How do we use the consistent sampling approach and assign the correct weights to each profiler? For example, assume we have two policies active at the same node that activate different profilers with different probabilities:

- Policy A activates the basic profiler with probability 0.1.
- Policy B activates the basic profiler and the CPU profiler with probability 0.01.

The weight for the CPU profiler is unambiguous, w=100, but what about the weight for the basic profiler? Is it w = 1/(0.1 + 0.01 - 0.1*0.01) ≈ 9.17?
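For comparison, a small sketch of the two weighting schemes under discussion, using the 0.1/0.01 probabilities from the example above (the inclusion-exclusion term applies to the current independent coin flips; the max applies under consistent sampling):

```go
package main

import "fmt"

func main() {
	// Policy probabilities from the example; both policies activate
	// the basic profiler.
	p1, p2 := 0.1, 0.01

	// Independent coin flips (current behavior): the basic profiler is
	// active when either policy fires, by inclusion-exclusion.
	pIndep := p1 + p2 - p1*p2 // = 0.109
	fmt.Printf("independent flips: p = %.3f, weight = %.2f\n", pIndep, 1/pIndep)

	// Consistent sampling: one shared random value per trace, so the
	// basic profiler fires exactly when the highest-probability policy
	// containing it does.
	pMax := p1 // max(0.1, 0.01)
	fmt.Printf("consistent:        p = %.3f, weight = %.2f\n", pMax, 1/pMax)
}
```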
@jmacd @oertl - curious to hear your thoughts