Client request hangs when producing a message of 10M size #763
Comments
The maximum message size on Confluent Cloud for basic and standard clusters is 8 MB; for dedicated clusters it is 20 MB. https://docs.confluent.io/cloud/current/clusters/cluster-types.html#cloud-cluster-types However, the client should not hang; you should get an error. Feel free to paste debug logs, that would be interesting.
This is the client.log from running against the dedicated cluster.
The hang no longer occurs after setting message.max.bytes to 104857600, so the root cause seems to be that message.max.bytes on the client side is smaller than on the server side.
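For reference, a minimal sketch of that client-side setting, assuming the v1.x import path and a placeholder bootstrap server; the 104857600 value is the one mentioned above:

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// message.max.bytes caps the size of messages the client will attempt to
	// send; raising it client-side (here to 100 MB, matching the comment
	// above) avoids the local "Message size too large" rejection, but the
	// broker/cluster limit still applies.
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092", // placeholder address
		"message.max.bytes": 104857600,
	})
	if err != nil {
		fmt.Printf("failed to create producer: %v\n", err)
		return
	}
	defer p.Close()
}
```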
p.Produce returns an error value, which you need to check, in addition to possibly returning an error via the delivery channel. Since librdkafka can determine the "message too large" error immediately, it is exposed via the return value, not the delivery channel. (IMO an improved API would only use one mechanism to indicate all errors.) Relevant: golang/go#20803. Marking this as an enhancement, as the produce example does not use the error value returned and may exhibit the same issue.
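A sketch of checking that return value, assuming the v1.x import path, a placeholder broker address, and a hypothetical topic name:

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092", // placeholder address
	})
	if err != nil {
		fmt.Printf("failed to create producer: %v\n", err)
		return
	}
	defer p.Close()

	topic := "test"                     // hypothetical topic name
	value := make([]byte, 10*1024*1024) // ~10 MB payload

	deliveryChan := make(chan kafka.Event, 1)

	// Errors librdkafka can detect locally (such as "Message size too large")
	// are returned here immediately; only messages that were actually
	// enqueued produce a report on the delivery channel.
	if err := p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          value,
	}, deliveryChan); err != nil {
		fmt.Printf("produce failed immediately: %v\n", err)
		return
	}

	// Wait for the broker-side delivery report.
	e := <-deliveryChan
	if m, ok := e.(*kafka.Message); ok && m.TopicPartition.Error != nil {
		fmt.Printf("delivery failed: %v\n", m.TopicPartition.Error)
	}
}
```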
@mhowlett it works well; I get a "Message size too large" error when checking the p.Produce return value.
Description
I tested producing messages of different sizes; the request hangs when the message size reaches 10 MB.
How to reproduce
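A minimal sketch of the pattern that can reproduce the hang, assuming a placeholder broker address and a hypothetical topic name; the Produce return value is intentionally ignored, so a message that librdkafka rejects locally never yields a delivery report and the receive on the delivery channel blocks:

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// Assumes the client-side message.max.bytes is smaller than the payload
	// below (librdkafka's default is 1000000 bytes).
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092", // placeholder address
	})
	if err != nil {
		fmt.Printf("failed to create producer: %v\n", err)
		return
	}
	defer p.Close()

	topic := "test"                     // hypothetical topic name
	value := make([]byte, 10*1024*1024) // ~10 MB payload

	deliveryChan := make(chan kafka.Event, 1)

	// The return value is ignored here; because the oversized message is
	// rejected locally and never enqueued, no delivery report is emitted.
	_ = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          value,
	}, deliveryChan)

	// This receive blocks indefinitely, which is the hang described above.
	<-deliveryChan
}
```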
Checklist
Please provide the following information:
- confluent-kafka-go and librdkafka version (LibraryVersion()): v1.8.2
- Client configuration: ConfigMap{...}
- Client logs (with "debug": ".." as necessary)