
Add setting to disable limit on kafka_num_consumers #40670

Merged (3 commits) on Aug 29, 2022

Conversation

@Avogar (Member) commented on Aug 26, 2022

Changelog category (leave one):

  • New Feature

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Add setting to disable limit on kafka_num_consumers. Closes #40331

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

@robot-ch-test-poll1 robot-ch-test-poll1 added the pr-feature Pull request with new product feature label Aug 26, 2022
@evillique evillique self-assigned this Aug 26, 2022
@alesapin alesapin merged commit 7ce0afc into ClickHouse:master Aug 29, 2022
@alesapin alesapin added the pr-must-backport Pull request should be backported intentionally. Use this label with great care! label Aug 31, 2022
Avogar added a commit that referenced this pull request Sep 1, 2022
Backport #40670 to 22.8: Add setting to disable limit on kafka_num_consumers
Avogar added a commit that referenced this pull request Sep 1, 2022
Backport #40670 to 22.7: Add setting to disable limit on kafka_num_consumers
@robot-clickhouse robot-clickhouse added the pr-backports-created Backport PRs are successfully created, it won't be processed by CI script anymore label Sep 1, 2022
Avogar added a commit that referenced this pull request Sep 1, 2022
Backport #40670 to 22.6: Add setting to disable limit on kafka_num_consumers
Avogar added a commit that referenced this pull request Sep 8, 2022
Backport #40670 to 22.3: Add setting to disable limit on kafka_num_consumers
@@ -532,6 +532,7 @@ static constexpr UInt64 operator""_GiB(unsigned long long value)
M(UInt64, max_entries_for_hash_table_stats, 10'000, "How many entries hash table statistics collected during aggregation is allowed to have", 0) \
M(UInt64, max_size_to_preallocate_for_aggregation, 10'000'000, "For how many elements it is allowed to preallocate space in all hash tables in total before aggregation", 0) \
\
M(Bool, kafka_disable_num_consumers_limit, false, "Disable limit on kafka_num_consumers that depends on the number of available CPU cores", 0) \
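The diff above registers a new Bool setting, `kafka_disable_num_consumers_limit` (default `false`), which disables the check that caps `kafka_num_consumers` at the number of available CPU cores. A hypothetical usage sketch follows; the table, broker, topic, and group names are invented for illustration, and the exact placement of the setting (session-level `SET` vs. attached to the DDL) may differ from what the merged code expects:

```sql
-- Disable the CPU-core-based cap on kafka_num_consumers
-- (setting name and default taken from the diff above).
SET kafka_disable_num_consumers_limit = 1;

CREATE TABLE queue            -- hypothetical table name
(
    key UInt64,
    value String
)
ENGINE = Kafka
SETTINGS
    kafka_broker_list = 'localhost:9092',  -- hypothetical broker
    kafka_topic_list  = 'events',          -- hypothetical topic
    kafka_group_name  = 'clickhouse_events',
    kafka_format      = 'JSONEachRow',
    kafka_num_consumers = 32;  -- may now exceed the server's core count
```

With the setting left at its default (`false`), declaring more consumers than available cores is still rejected, preserving the previous behavior.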
A Collaborator commented on this line:
Is it worth adding an extra setting just for this? Maybe simply logging a message with error (or similar) severity would be enough?

Labels

  • pr-backports-created — Backport PRs are successfully created, it won't be processed by CI script anymore
  • pr-feature — Pull request with new product feature
  • pr-must-backport — Pull request should be backported intentionally. Use this label with great care!
Development

Successfully merging this pull request may close these issues.

Setting to disable or override limit on kafka_num_consumers
6 participants