content-type not evaluated on every send request #804
-
strimzi-kafka-bridge version: 0.25
It seems that a request posted to the bridge does indeed result in a message containing the base64-decoded key and value.
-
Hi @lowerorbit, when you create a consumer, you specify the embedded format (json vs binary). Any attempt to consume using a different embedded format should return an error. Can you please provide detailed steps to reproduce what you are doing, from the consumer creation (and how it's done) through to the consuming operations? Thanks!
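For reference, the embedded format is pinned when the consumer is created. A minimal sketch of what that creation request looks like, with a hypothetical bridge address and group/consumer names (the body is built here without actually sending it):

```python
import json

BRIDGE = "http://localhost:8080"  # hypothetical bridge address


def consumer_creation_request(group, name, fmt):
    """Build the POST request that creates a bridge consumer.

    `fmt` is the embedded format ("json" or "binary"), fixed at
    creation time; later fetches must match it or the bridge
    replies with an error.
    """
    return {
        "method": "POST",
        "url": f"{BRIDGE}/consumers/{group}",
        "headers": {"Content-Type": "application/vnd.kafka.v2+json"},
        "body": json.dumps({"name": name, "format": fmt}),
    }


req = consumer_creation_request("my-group", "my-consumer", "binary")
```

Subsequent record fetches for this consumer would then use the matching `application/vnd.kafka.binary.v2+json` Accept header.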
-
Hi @ppatierno, The setup is as follows:
Tests done: Given the test.json file from above containing a base64-encoded key/value pair, two POST requests were made to the producer endpoint /topics/BRIDGE-TEST1.
Consuming the topic via
The kafka-bridge pod output:
Based on the Content-Type in the request, I would expect the second request to produce the decoded values for the record's key and value. Cross-checking with a non-Istio setup on my workstation works as expected, so this might be Istio-specific behaviour. But why? The Content-Type header is not modified by Istio, as visible in the pod's output. BR
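For anyone reproducing this: the two producer requests differ only in the Content-Type header and in whether the record body is base64. A sketch of the two payload shapes, using hypothetical key/value data:

```python
import base64
import json

key_bytes, value_bytes = b"my-key", b"my-value"  # hypothetical test data

# Binary embedded format: key/value must be base64 strings, sent with
# Content-Type: application/vnd.kafka.binary.v2+json. The bridge decodes
# them before producing the raw bytes to Kafka.
binary_body = json.dumps({"records": [{
    "key": base64.b64encode(key_bytes).decode(),
    "value": base64.b64encode(value_bytes).decode(),
}]})

# JSON embedded format: plain JSON values, sent with
# Content-Type: application/vnd.kafka.json.v2+json. No base64 involved.
json_body = json.dumps({"records": [{"key": "my-key", "value": "my-value"}]})
```

Sending the base64-encoded body with the json content type (or vice versa) is exactly the mismatch being discussed here.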
-
I have a clue ... When the producer HTTP client keeps the connection alive, the bridge assumes it is the same producer as before (so it reuses the same underlying Kafka producer internally) and keeps using the same embedded format for the value. That means the message goes through the same internal message converter, so it is always base64-decoded or never, depending on which content type comes first and how the producer was created (which is why you see the behaviour invert when you reverse the order). A new producer is created only when the request comes from a newly created HTTP connection.
Istio is keeping the producer connection alive after sending the message, which is not happening with the cURL command. I was able to reproduce your behaviour by keeping or not keeping the connection alive when sending the two messages one after the other.
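The effect described above can be sketched with a toy model (this is not the bridge's actual code): the converter is chosen from the first request seen on a kept-alive connection and then reused, so the second request's content type is ignored.

```python
import base64


class BridgeProducerEndpoint:
    """Toy model of the reported behaviour: one instance per HTTP
    connection; the message converter is picked from the FIRST
    request's content type and then reused for the connection's
    lifetime."""

    def __init__(self):
        self._converter = None  # created lazily on first send

    def send(self, content_type, value):
        if self._converter is None:  # first request wins
            if "binary" in content_type:
                self._converter = base64.b64decode  # decode base64 payloads
            else:
                self._converter = lambda v: v  # json format: pass through
        return self._converter(value)


# Same kept-alive connection: the second request's content type is ignored.
conn = BridgeProducerEndpoint()
first = conn.send("application/vnd.kafka.binary.v2+json", "aGVsbG8=")  # b"hello"
second = conn.send("application/vnd.kafka.json.v2+json", "aGVsbG8=")   # still decoded

# A new connection (as with cURL, which closes between requests)
# re-evaluates the content type.
fresh = BridgeProducerEndpoint()
third = fresh.send("application/vnd.kafka.json.v2+json", "aGVsbG8=")   # passed through
```

Closing the connection between requests (or forcing `Connection: close`) makes each request get a fresh producer, which matches the cURL behaviour observed above.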