General performance figures and optimisations #214
Comments
I don't know too much about the benchmark from the gist, but it appears to measure consumer performance exclusively. From the description you provided, it sounds like you are measuring end-to-end throughput, however. Either way, I would expect increasing
Presumably there are some start-up costs associated with these tests as well that we may be accounting for. I would try disabling
I did a similar test a while back and saw a similar differential: https://gist.github.com/mhowlett/e9491aad29817aeda6003c3404874b35 The primary reason to go with the Confluent client is reliability. librdkafka is very widely used and tested, and this Go client leverages it to provide the core functionality (i.e. all the bits that are most likely to be buggy). It's not that hard to write a Kafka client, but the interaction with the cluster is quite involved, and it is hard to write one that handles all the error scenarios well. Update: actually, produce throughput was similar; you should check out that gist.
Maybe time to update the benchmarks with the librdkafka 1.0 release?
We've noticed the latency between produce time and consume time from Kafka using the confluent-kafka-go client to be high, greater than 5 seconds. We also suspect the library's configuration, since it batches and optimises internally in a way that doesn't suit our low-latency requirements. Having benchmarks with documented configurations (for high throughput, low latency, and reliability) would be helpful. @edenhill
The librdkafka docs are a good starting point: https://github.com/edenhill/librdkafka/blob/master/INTRODUCTION.md#performance
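To make the trade-off concrete, here is a minimal sketch of the two tuning directions, using librdkafka property names as they are passed through confluent-kafka-go's `ConfigMap`. The broker address and the specific values are illustrative assumptions, not recommendations from this thread.

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// Illustrative configurations only: the broker address and the exact values
// are assumptions, not settings reported in this issue.
func lowLatencyConfig() *kafka.ConfigMap {
	return &kafka.ConfigMap{
		"bootstrap.servers":      "localhost:9092", // assumed broker address
		"queue.buffering.max.ms": 1,                // send batches almost immediately (librdkafka's "linger")
		"socket.nagle.disable":   true,             // avoid Nagle-induced delays on small messages
	}
}

func highThroughputConfig() *kafka.ConfigMap {
	return &kafka.ConfigMap{
		"bootstrap.servers":      "localhost:9092",
		"queue.buffering.max.ms": 50,    // let batches fill before sending
		"batch.num.messages":     10000, // larger produce batches
		"compression.codec":      "lz4", // trade CPU for network throughput
	}
}

func main() {
	// Pick the configuration that matches the workload.
	p, err := kafka.NewProducer(lowLatencyConfig())
	if err != nil {
		panic(err)
	}
	defer p.Close()
	fmt.Println("producer created:", p)
	_ = highThroughputConfig()
}
```

Which direction matters more depends on the workload; `queue.buffering.max.ms` (how long librdkafka waits for a batch to fill before sending it) is usually the first knob to adjust, and on the consumer side `fetch.wait.max.ms` plays the analogous role for end-to-end latency.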
General question on performance figures for confluent-kafka-go and how it compares to sarama. I am running Go benchmarks locally using the channel-based Producer/Consumer and am getting around 2x worse than sarama.

Settings for consumer:
Settings for producer:
The `queue.buffering.max.ms` setting had the largest effect, dropping a 20s write for 100k events to 1s. How can I improve this? I am not able to reproduce this gist's results:
https://gist.github.com/savaki/a19dcc1e72cb5d621118fbee1db4e61f
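For illustration, here is a minimal sketch of a channel-based produce loop with `queue.buffering.max.ms` set explicitly, the setting reported above as having the largest effect. The broker address, topic name, and message count are assumptions; this is not the benchmark code from the issue or the gist.

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers":      "localhost:9092", // assumed broker address
		"queue.buffering.max.ms": 1,                // flush batches after 1 ms instead of letting them sit
	})
	if err != nil {
		panic(err)
	}
	defer p.Close()

	const numMessages = 100000
	topic := "benchmark" // assumed topic name

	// Drain delivery reports concurrently so librdkafka's internal queues keep moving.
	done := make(chan struct{})
	go func() {
		defer close(done)
		delivered := 0
		for e := range p.Events() {
			if m, ok := e.(*kafka.Message); ok {
				if m.TopicPartition.Error != nil {
					fmt.Println("delivery failed:", m.TopicPartition.Error)
				}
				delivered++
				if delivered == numMessages {
					return
				}
			}
		}
	}()

	// Channel-based produce: messages are queued and batched by librdkafka.
	for i := 0; i < numMessages; i++ {
		p.ProduceChannel() <- &kafka.Message{
			TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
			Value:          []byte(fmt.Sprintf("msg-%d", i)),
		}
	}

	<-done
}
```

The `ConfigMap` keys are passed straight through to librdkafka, so the tuning guidance linked earlier applies unchanged to this Go client.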
Checklist
Please provide the following information:
- confluent-kafka-go and librdkafka version (`LibraryVersion()`): v0.11.4