Description
I was reproducing this with code in Rust using rust-rdkafka, but that wrapper doesn't do much when reading committed offsets beyond calling librdkafka itself. Because rust-rdkafka checks that the metadata is a valid UTF-8 string, it panics with errors like:
Metadata is not UTF-8: Utf8Error { valid_up_to: 3, error_len: Some(1) }
whenever librdkafka starts returning "random" data.
I also verified, by implementing OffsetFetch and OffsetCommit directly in Rust, that it's not an issue on the Kafka side - with the pure Rust implementation I couldn't reproduce reading invalid data.
How to reproduce
Use the byte array [10, 20, 0, 30, 40] as the commit metadata and commit it for any partition. Then read the committed offsets via rd_kafka_committed - in some cases the metadata after the \0 byte differs from what was written.
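To make the steps concrete, here is a minimal sketch against the librdkafka C API. The broker address localhost:9092, topic test, partition 0, offset 1 and group id metadata-test are illustrative assumptions rather than values from this report, and most error handling is omitted (build with something like gcc repro.c -lrdkafka):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <librdkafka/rdkafka.h>

int main(void) {
        char errstr[512];
        const unsigned char meta[] = {10, 20, 0, 30, 40};

        /* Assumed setup: local broker, existing topic "test", group
         * "metadata-test". */
        rd_kafka_conf_t *conf = rd_kafka_conf_new();
        rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                          errstr, sizeof(errstr));
        rd_kafka_conf_set(conf, "group.id", "metadata-test",
                          errstr, sizeof(errstr));

        rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf,
                                      errstr, sizeof(errstr));
        if (!rk) {
                fprintf(stderr, "rd_kafka_new failed: %s\n", errstr);
                return 1;
        }

        /* Commit offset 1 for test/0 with 5 bytes of metadata that contain
         * a NUL byte in the middle. */
        rd_kafka_topic_partition_list_t *offsets =
                rd_kafka_topic_partition_list_new(1);
        rd_kafka_topic_partition_t *p =
                rd_kafka_topic_partition_list_add(offsets, "test", 0);
        p->offset = 1;
        p->metadata = malloc(sizeof(meta)); /* freed by list_destroy() */
        memcpy(p->metadata, meta, sizeof(meta));
        p->metadata_size = sizeof(meta);

        rd_kafka_resp_err_t err = rd_kafka_commit(rk, offsets, 0 /* sync */);
        fprintf(stderr, "commit: %s\n", rd_kafka_err2str(err));
        rd_kafka_topic_partition_list_destroy(offsets);

        /* Read the committed offset back and hex-dump the returned metadata. */
        rd_kafka_topic_partition_list_t *fetched =
                rd_kafka_topic_partition_list_new(1);
        rd_kafka_topic_partition_list_add(fetched, "test", 0);
        err = rd_kafka_committed(rk, fetched, 10000 /* timeout ms */);
        fprintf(stderr, "committed: %s\n", rd_kafka_err2str(err));

        const rd_kafka_topic_partition_t *f = &fetched->elems[0];
        fprintf(stderr, "metadata_size=%zu, bytes:", f->metadata_size);
        for (size_t i = 0; i < f->metadata_size; i++)
                fprintf(stderr, " %02x",
                        ((const unsigned char *)f->metadata)[i]);
        fprintf(stderr, " (written: 0a 14 00 1e 28)\n");

        rd_kafka_topic_partition_list_destroy(fetched);
        rd_kafka_destroy(rk);
        return 0;
}

If the bug is hit, the bytes printed after 00 differ from 1e 28 on some reads, even though both calls report success.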
In other tests I've conducted, the same written metadata came back as different "random" responses on different reads.
rust-rdkafka used librdkafka 2.3.0 - fede1024/rust-rdkafka@87105bc.
Checklist
IMPORTANT: We will close issues where the checklist has not been completed.
Please provide the following information:
librdkafka version (release number or git tag): 2.3.0 (via rust-rdkafka, fede1024/rust-rdkafka@87105bc)
Apache Kafka version: 3.7.0
librdkafka client configuration: this is all I set:
Operating system: 32-Ubuntu SMP Mon Jan 9 12:28:07 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Provide logs (with debug=.. as necessary) from librdkafka: Nothing is logged and everything seems to be working just fine (see the debug-output sketch after this checklist).
Provide broker log excerpts: Can't do it, but there are no errors / warnings on the broker side. Also, as said above, I've confirmed it isn't an issue purely on the Kafka side.
Critical issue
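For reference, here is a small sketch of how librdkafka debug output could be enabled while reproducing; the chosen debug contexts are an assumption for illustration, not something stated above:

#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Build a client configuration with verbose debug output enabled; the
 * default logger writes these lines to stderr. */
static rd_kafka_conf_t *make_debug_conf(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();
        if (rd_kafka_conf_set(conf, "debug", "cgrp,protocol,broker",
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)
                fprintf(stderr, "debug config failed: %s\n", errstr);
        return conf;
}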