[exporterhelper] Introduce batching functionality #8685
Conversation
Codecov Report

Attention: Patch coverage is

@@           Coverage Diff            @@
##             main    #8685      +/-   ##
==========================================
+ Coverage   90.92%   90.95%   +0.02%
==========================================
  Files         348      350       +2
  Lines       18401    18576     +175
==========================================
+ Hits        16732    16895     +163
- Misses       1346     1354       +8
- Partials      323      327       +4

View full report in Codecov by Sentry.
func (b *Batcher) mergeRequests(req1 *request, req2 *request) (*request, error) {
    r, err := b.mergeFunc(req1.Context(), req1.Request, req2.Request)
Trying to understand the behavior of all this a bit.

I wonder if it would be valuable to merge the req1 and req2 contexts into a single context. As it stands, behavior may not be deterministic when considering cancelable contexts or contexts with deadlines.

It would be worth thinking about how a batch should behave in case one of the contexts expires, and also what happens with the different spans related to these contexts.

The queue sender creates a noCancellationContext; maybe this could do the same, in case the queue is not used before the batch.
@michalpristas, this is a good question. In the current implementation (which is similar to the current batch processor), we pass only the context from the request that is currently being added to the batch and is blocking the current Consume call. We can ignore the context associated with the existing batch because all the other requests are already accumulated in the batch and are no longer active.

But if the queue is not enabled, we probably want to block all the incoming requests until a batch is ready, as discussed in #8762. In that case, we probably need some kind of context merging. That option is not implemented yet.
The blocking behavior is implemented now.

I looked into merging the contexts of the requests that make up a batch. There was a proposal to implement this capability natively in Go (golang/go#36503), but it was declined. We could manually merge the contexts, but it would require spawning another goroutine specifically to handle the context cancellations. I think that is overkill for this task, and it's ok to just take the context from the first request, assuming it has the shortest timeout. I updated the code accordingly. Let me know WDYT.
This looks good for a first version, especially considering we are marking it as experimental. I think we need to see it used in components and see how it fares in real life, so I am approving.
This change introduces new experimental batching functionality to the exporter helper. The batch sender is fully concurrent and synchronous. It's set after the queue sender, which, if enabled, introduces the asynchronous behavior and ensures no data loss with the permanent queue.
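The sender ordering described above can be sketched as a chain of wrappers, where the queue sender (if enabled) sits in front of the batch sender. The types below are hypothetical simplifications for illustration, not the exporterhelper's actual sender interfaces.

```go
package main

import "fmt"

// sender is a hypothetical simplification of an exporterhelper sender chain:
// each sender does its work and delegates to the next one.
type sender interface {
	send(item string) error
}

// exportSender is the end of the chain, where data leaves the process.
type exportSender struct{}

func (exportSender) send(item string) error {
	fmt.Println("exported:", item)
	return nil
}

// batchSender sits after the queue sender. It is synchronous: the caller
// blocks until its data has been handed off downstream (batch accumulation
// is elided here for brevity).
type batchSender struct{ next sender }

func (b batchSender) send(item string) error { return b.next.send(item) }

// queueSender introduces the asynchronous behavior: in the real helper,
// items are enqueued and consumed by background workers, which also gives
// the no-data-loss guarantee with a persistent queue.
type queueSender struct{ next sender }

func (q queueSender) send(item string) error { return q.next.send(item) }

func main() {
	chain := queueSender{next: batchSender{next: exportSender{}}}
	_ = chain.send("span batch") // prints "exported: span batch"
}
```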
Follow-up TODO list:
Updates #8122