[Kafka] When using Kafka, how do you guarantee messages are not lost? #8
Comments
The general reasoning for Kafka: an MQ has at least three components, the Producer, the Broker, and the Consumer. The producer sends messages to the broker, and the consumer consumes messages stored on the broker. Messages can therefore be lost in any of three places: producer to broker, broker to consumer, and broker storage itself.

Producer-side loss

Scenario 1: Kafka sends messages asynchronously, so calling the send API does not guarantee the message actually reaches the broker. If the process exits at that point, the message can be lost.

Solution 1: Control the time interval and batch size with which the producer submits messages to the broker:
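As a minimal sketch of that first solution, the two standard producer settings that bound how long a message sits in the client before being flushed to the broker are `linger.ms` and `batch.size`. The values below are illustrative, not recommendations, and the class name is hypothetical; only `java.util.Properties` is used so the sketch stands on its own without a broker:

```java
import java.util.Properties;

public class ProducerBatchConfig {
    // Standard Kafka producer config keys; values here are illustrative.
    static Properties batchProps() {
        Properties p = new Properties();
        p.put("linger.ms", "5");       // flush a batch after at most 5 ms
        p.put("batch.size", "16384");  // ...or once roughly 16 KB has accumulated
        return p;
    }

    public static void main(String[] args) {
        System.out.println(batchProps().getProperty("linger.ms"));
    }
}
```

On top of this, before a planned process exit the application should drain the client (the Kafka producer exposes `flush()`/`close()` for this), since the async send buffer is exactly where messages are lost when the process dies.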
Solution 2: Configure a retry policy so that failed sends are retried. Note that retries can introduce duplicate messages.
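A sketch of the retry-related settings, assuming a Kafka client version that supports idempotent producers (which lets the broker deduplicate retried sends, addressing the duplication caveat above). Again only stdlib `Properties`, illustrative values, hypothetical class name:

```java
import java.util.Properties;

public class ProducerRetryConfig {
    // Standard Kafka producer config keys; values are illustrative.
    static Properties retryProps() {
        Properties p = new Properties();
        p.put("acks", "all");                 // wait for all in-sync replicas to ack
        p.put("retries", "3");                // retry transient send failures
        p.put("enable.idempotence", "true");  // broker dedupes retried sends
        return p;
    }

    public static void main(String[] args) {
        System.out.println(retryProps().getProperty("enable.idempotence"));
    }
}
```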
Broker-side loss

Scenario 1: The topic has no replicas configured.

Solution 1: Configure multiple replicas for the topic:

Consumer-side loss

Scenario 1: With auto-committed offsets, a message can be marked as consumed as soon as it is pulled, even if processing then fails.

Solution 1: Disable auto-commit.
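On the consumer side, the setting behind "disable auto-commit" is `enable.auto.commit=false`; the application then commits offsets itself only after processing succeeds. A minimal sketch (the group id is a made-up example; only stdlib `Properties` is used):

```java
import java.util.Properties;

public class ConsumerCommitConfig {
    // Standard Kafka consumer config keys; values are illustrative.
    static Properties consumerProps() {
        Properties p = new Properties();
        p.put("enable.auto.commit", "false"); // commit only after processing succeeds
        p.put("group.id", "demo-group");      // hypothetical consumer group name
        return p;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("enable.auto.commit"));
    }
}
```

For the broker side, replication is a topic-level property (e.g. a replication factor greater than 1 when creating the topic, typically paired with a `min.insync.replicas` setting on the broker) rather than a client config, so it is set at topic-creation time rather than in the consumer's properties.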
emmmmm,
For Kafka, setting acks=all significantly reduces throughput; see the benchmark data here for reference.
As the title says.