Server throws exception while producing with low-level api #1013
Comments
Hmm, I don't see anything wrong with your code. Perhaps kafka requires |
Thanks for your response! More information: the Value of each Record is encoded with msgpack, and the batch codec is set to None. Is this OK? I'll try it tomorrow. Also, I've tried setting the OffsetDelta of every record, but this led to another problem: it seems that the consumer's offset is wrong. Sarama consumer's log:
The consumer keeps shutting itself down because its requested offset is always greater than the server's offset. Weird. Broker log:
|
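For reference, a minimal sketch of the setup described above: values encoded with msgpack, batch codec set to None, and OffsetDelta set on every record. This is an illustration, not the author's code, and the msgpack library (github.com/vmihailenco/msgpack) is an assumption; the thread does not name one.

package example

import (
	"time"

	"github.com/Shopify/sarama"
	"github.com/vmihailenco/msgpack" // assumed encoder; not named in the thread
)

// buildBatch sketches the described layout: msgpack-encoded values,
// no batch compression, and a per-record OffsetDelta.
func buildBatch(payloads []interface{}) (*sarama.RecordBatch, error) {
	batch := &sarama.RecordBatch{
		Version:        2,
		ProducerID:     -1,
		FirstTimestamp: time.Now(),
		Codec:          sarama.CompressionNone,
	}
	for i, p := range payloads {
		value, err := msgpack.Marshal(p)
		if err != nil {
			return nil, err
		}
		batch.Records = append(batch.Records, &sarama.Record{
			OffsetDelta: int64(i), // position of this record within the batch
			Value:       value,
		})
	}
	return batch, nil
}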
That is weird. It looks like you are using consumer groups - are you using https://github.com/bsm/sarama-cluster ? They would have a better idea how to look into the consumer offset issue. |
Yes, I'm using sarama-cluster. I'll give it a try. |
I've tested on kafka 0.11 and 1.0:
So for now, I avoid this problem by compressing the records and not setting the OffsetDelta. And I've reported this issue to sarama-cluster; waiting for their response. Thanks! |
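For reference, a minimal sketch of the workaround described above (a snappy-compressed batch with OffsetDelta left at its zero value); it mirrors the shape of the test program later in the thread and is not the author's exact code.

package example

import (
	"time"

	"github.com/Shopify/sarama"
)

// workaroundRequest sketches the workaround: compress the batch and do not
// set OffsetDelta on the records.
func workaroundRequest(topic string) *sarama.ProduceRequest {
	req := &sarama.ProduceRequest{
		RequiredAcks: sarama.WaitForAll,
		Timeout:      10 * 1000, // milliseconds
		Version:      3,
	}
	req.AddBatch(topic, 0, &sarama.RecordBatch{
		Version:        2,
		ProducerID:     -1,
		FirstTimestamp: time.Now(),
		Codec:          sarama.CompressionSnappy, // batch-level compression
		Records: []*sarama.Record{
			{Value: []byte("DATA-0001")}, // OffsetDelta intentionally left unset
			{Value: []byte("DATA-0002")},
		},
	})
	return req
}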
1.14.0 is from November; it doesn't have patches for
In my tests I've been using Sarama from master as the producer with the 1.0.0 API, and a very old Sarama with 0.8.0 (the oldest supported version, which is the default) as the consumer. |
@bobrik I've tried the latest master, but I don't think this has anything to do with the version or the OffsetDelta patches. I produce messages with the low-level API, not the default sarama producer. I don't know whether this is a sarama issue or a sarama-cluster issue: |
And I found something. If OffsetDelta is set:
If OffsetDelta is not set:
test.go:

package main

import (
	"fmt"
	"time"

	"github.com/Shopify/sarama"
	snappy "github.com/eapache/go-xerial-snappy"
)

func run(addrs []string, topic string) error {
	config := sarama.NewConfig()
	config.Version = sarama.V0_11_0_0
	config.Consumer.Return.Errors = true

	client, err := sarama.NewClient(addrs, config)
	if err != nil {
		return err
	}
	defer client.Close()

	// Talk to the partition leader directly so we can send a low-level ProduceRequest.
	broker, err := client.Leader(topic, 0)
	if err != nil {
		return err
	}
	defer broker.Close()

	req := &sarama.ProduceRequest{
		RequiredAcks: sarama.WaitForAll,
		Timeout:      10 * 1000, // milliseconds
		Version:      3,         // v3 produce requests carry record batches
	}
	// One batch with two records; the record values are additionally
	// snappy-encoded at the application level.
	req.AddBatch(topic, 0, &sarama.RecordBatch{
		FirstTimestamp: time.Now(),
		Version:        2,
		ProducerID:     -1,
		Codec:          sarama.CompressionSnappy,
		Records: []*sarama.Record{
			{OffsetDelta: 0, Value: snappy.Encode([]byte("DATA-0001"))},
			{OffsetDelta: 1, Value: snappy.Encode([]byte("DATA-0002"))},
		},
	})
	if _, err := broker.Produce(req); err != nil {
		return err
	}

	// Consume the partition from the beginning to check what the broker stored.
	consumer, err := sarama.NewConsumerFromClient(client)
	if err != nil {
		return err
	}
	defer consumer.Close()

	pc, err := consumer.ConsumePartition(topic, 0, sarama.OffsetOldest)
	if err != nil {
		return err
	}
	defer pc.Close()

	go func() {
		for err := range pc.Errors() {
			fmt.Println("ERR", err)
		}
	}()

	for msg := range pc.Messages() {
		value, err := snappy.Decode(msg.Value)
		if err != nil {
			return err
		}
		fmt.Printf("MSG: %s-%d/%d %q\n", msg.Topic, msg.Partition, msg.Offset, value)
	}
	return nil
}

func main() {
	if err := run([]string{"192.168.6.151:9092"}, "topic-x"); err != nil {
		fmt.Println("FATAL", err)
	}
}
|
I think I found the reason. If OffsetDelta is set and the offset is n, then when consuming, messages are returned starting from offset n+1. But if OffsetDelta is not set, messages are returned starting from offset n. |
That makes sense, since the offset delta is added to the base offset. What I don’t understand is why the delta needs to be set at all for uncompressed messages. |
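As an illustration of that arithmetic, here is a minimal sketch (not from the thread; the base offset value is hypothetical) showing that a record with OffsetDelta d in a batch stored at base offset n ends up at offset n+d:

package main

import (
	"fmt"

	"github.com/Shopify/sarama"
)

func main() {
	// Two records with sequential deltas, as in the test program above.
	records := []*sarama.Record{
		{OffsetDelta: 0, Value: []byte("DATA-0001")},
		{OffsetDelta: 1, Value: []byte("DATA-0002")},
	}
	baseOffset := int64(42) // hypothetical base offset assigned by the broker
	for _, r := range records {
		fmt.Println("record stored at offset", baseOffset+r.OffsetDelta)
	}
}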
@imjustfly Could you try with #1026? It seems like it might be related. |
I think on reflection this is basically the same issue as #1032, so I'll roll it up there. |
Versions
Sarama Version: 1.14.0
Kafka Version: 1.0
Go Version: 1.8
Configuration
default configuration
Logs
Problem Description
When I send a produce request with a batch of records, the broker API returns no error, but the server reports exceptions as above (something wrong with the offset), and the consumer gets a KError (ErrUnknown). But if there is only one record in the batch, everything seems OK.
This is my producer code:
Did I miss something?
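A minimal sketch of the failing case described above (a single v3 produce request whose record batch holds more than one record), modeled on the test program in the comments; this is an illustration, not the author's exact code.

package example

import (
	"time"

	"github.com/Shopify/sarama"
)

// produceTwoRecords sketches the failing case: a record batch with two records.
// With a single record the same request reportedly works.
func produceTwoRecords(broker *sarama.Broker, topic string) error {
	req := &sarama.ProduceRequest{
		RequiredAcks: sarama.WaitForAll,
		Timeout:      10 * 1000, // milliseconds
		Version:      3,
	}
	req.AddBatch(topic, 0, &sarama.RecordBatch{
		Version:        2,
		ProducerID:     -1,
		FirstTimestamp: time.Now(),
		Records: []*sarama.Record{
			{Value: []byte("DATA-0001")},
			{Value: []byte("DATA-0002")},
		},
	})
	_, err := broker.Produce(req)
	return err
}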