Support sending spans in batches #6

Closed
mjbryant opened this issue Sep 21, 2016 · 5 comments

Comments

@mjbryant

The Zipkin Kafka collector supports receiving batches of spans instead of single spans as of openzipkin/zipkin#995. However, asking py_zipkin users to implement batching entirely in their own transport_handler seems like too much of a burden on their part.

Maybe ZipkinLoggingContext could take a batch_size param that defaults to something large but sane (20?); then, in log_spans, we could call transport_handler with a list of thrift-encoded spans instead of one span at a time.
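For illustration, a minimal sketch of that interface change (the helper name, the batch_size default, and the assumption that transport_handler accepts a list are mine, not the existing py_zipkin API):

```python
# Hypothetical sketch: hand the transport chunks of thrift-encoded spans
# instead of calling it once per span. `batch_size` and a list-accepting
# transport_handler are assumptions here, not current py_zipkin behavior.
def send_in_batches(encoded_spans, transport_handler, batch_size=20):
    for i in range(0, len(encoded_spans), batch_size):
        # Each call sends up to batch_size encoded spans.
        transport_handler(encoded_spans[i:i + batch_size])
```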

@kaisen, thoughts?

@dmitry-prokopchenkov
Contributor

@kaisen, @bplotnick What about this? AFAIK the Zipkin server can process spans in batches. At the least we could modify log_spans to send the root span and self.log_handler.client_spans together. What do you think? I could give this a try.

@dmitry-prokopchenkov
Contributor

@kaisen, @bplotnick Any thoughts on this?

@bplotnick
Contributor

@dmitry-prokopchenkov Go for it!

You could make log_span just write to a queue and flush when it hits a configurable batch size (with a sane default). Then at the end of log_spans, you flush the queue to write the remaining spans.
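A rough sketch of that queue-and-flush shape (class and method names are made up for illustration and aren't the actual py_zipkin internals):

```python
class SpanBatcher(object):
    """Collects encoded spans and flushes them to the transport in batches."""

    def __init__(self, transport_handler, batch_size=20):
        self.transport_handler = transport_handler
        self.batch_size = batch_size
        self._queue = []

    def add(self, encoded_span):
        # Called from log_span; flush automatically once the batch fills up.
        self._queue.append(encoded_span)
        if len(self._queue) >= self.batch_size:
            self.flush()

    def flush(self):
        # Called once more at the end of log_spans to send any remaining spans.
        if self._queue:
            self.transport_handler(self._queue)
            self._queue = []
```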

@dmitry-prokopchenkov
Contributor

@bplotnick ok, thanks!

@dmitry-prokopchenkov
Contributor

@bplotnick just a reminder, this can be closed
