How to improve Kafka producer throughput? #137
I would say that there are several solutions to this:
hi, wizzat, `for p in pool:`
Setting the `maxsize` on the multiprocessing `Queue` would involve a code change to kafka-python to accept an async queue size and pass it into the `Queue` constructor (producer:122).
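For what it's worth, the backpressure such a change would buy comes from the blocking semantics of a bounded queue. A standard-library-only sketch (the `maxsize=2` is an arbitrary illustration, not a value from kafka-python):

```python
from multiprocessing import Queue
import queue  # multiprocessing.Queue raises queue.Full, not its own exception

# A bounded queue refuses new items once maxsize entries are waiting,
# which caps the memory the buffering process can consume. A blocking
# put() would instead stall the producer until the consumer drains.
q = Queue(maxsize=2)
q.put("msg-1")
q.put("msg-2")

try:
    q.put("msg-3", block=False)  # queue is full
    overflowed = False
except queue.Full:
    overflowed = True

print(overflowed)  # True: the third message was rejected, not buffered
```

With an unbounded queue (the current default), every `put()` succeeds immediately and the buffer simply grows, which matches the OOM behavior described in this issue.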
Thank you.
hi, all
When I use the sync producer, messages are sent slowly.
When I use the async producer to handle lots of messages, it works faster. I see that Python spawns another process (to buffer and flush?), and the memory it uses keeps growing. Eventually the system OOM killer kills one of the processes, then another.
I guess this is because my message generation rate is higher than Kafka's throughput. Am I right? If so, how can I improve it? Thanks.
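For readers hitting this today: newer kafka-python releases ship a `KafkaProducer` class whose `buffer_memory` and `max_block_ms` settings bound the in-memory buffer and make `send()` block when it fills, instead of growing until the OOM killer fires, while `batch_size`, `linger_ms`, and `compression_type` trade a little latency for throughput. A hedged sketch (the broker address and the specific values are placeholders; the producer itself is left commented out since constructing it needs a reachable broker):

```python
# Throughput-oriented settings for kafka-python's KafkaProducer.
# All values below are illustrative, not recommendations.
config = dict(
    bootstrap_servers="localhost:9092",  # placeholder broker address
    batch_size=64 * 1024,       # pack up to 64 KiB of messages per batch
    linger_ms=50,               # wait up to 50 ms for a batch to fill
    compression_type="gzip",    # fewer bytes on the wire
    acks=1,                     # leader-only acks, faster than acks="all"
    buffer_memory=32 * 1024 * 1024,  # cap the send buffer at 32 MiB
    max_block_ms=60000,         # send() blocks when the buffer is full,
                                # rather than letting memory grow unbounded
)

# from kafka import KafkaProducer
# producer = KafkaProducer(**config)
# producer.send("my-topic", b"payload")
```

The key point for this issue is `buffer_memory` plus `max_block_ms`: together they turn "producer outruns the broker" from an OOM into backpressure on the caller.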