Docker images used:
dyrnq/canal-adapter   1.1.6-hotfix-1-jdk8   c2e120706e2b   3 weeks ago    919MB
canal/canal-admin     latest                7f191cda3a3e   2 months ago   2GB
canal/canal-server    latest                9bf4fbb74f65   2 months ago   1.69GB
1. All components are on the latest versions.
2. canal-server already runs in HA mode as configured.
3. canal-adapter also achieves HA mode when it uses tcp instead of MQ.
4. When canal-adapter uses rabbitMQ, every adapter instance pulls data from RabbitMQ. There is no way to have a single node pull from RabbitMQ the way tcp mode does, so the order in which data is consumed can be broken. I tried configuring ZooKeeper in both possible places and it had no effect; please fix this (see the sketch after my configuration below for the single-active-consumer behaviour I mean).
5. My adapter configuration is as follows:
server:
  port: 7555
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  canalServerHost:
  # tcp kafka rocketMQ rabbitMQ
  mode: rabbitMQ
  flatMessage: true
  zookeeperHosts: 172.16.61.127:2181,172.16.61.127:2182,172.16.61.127:2183
  syncBatchSize: 1000
  retries: 3
  timeout:
  accessKey:
  secretKey:
  consumerProperties:
    # canal tcp consumer
    # canal.tcp.server.host:
    # canal.tcp.zookeeper.hosts: 172.16.61.127:2181,172.16.61.127:2182,172.16.61.127:2183
    # canal.tcp.batch.size: 500
    # canal.tcp.username: xxx
    # canal.tcp.password: xxxxx
    # rabbitMQ consumer
    rabbitmq.host: 172.16.61.127:5672
    rabbitmq.virtual.host: /
    rabbitmq.username: admin
    rabbitmq.password: admin
  canalAdapters:
  - instance: canal_queue # canal instance Name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: logger
      - name: rdb
        key: mysql1
        properties:
          jdbc.driverClassName: com.mysql.jdbc.Driver
          jdbc.url: jdbc:mysql://172.16.61.127:33999/uppcloudtest?useUnicode=true&useSSL=false
          jdbc.username: root
          jdbc.password: Conlin360
          druid.stat.enable: false
          druid.stat.slowSqlMillis: 1000
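
To illustrate what I mean by single-point consumption: RabbitMQ itself has a single-active-consumer feature (RabbitMQ 3.8+) where all instances can attach to the queue but the broker delivers to only one of them at a time, which preserves ordering. The minimal sketch below uses the plain RabbitMQ Java client, not canal-adapter's API; the queue name canal_queue and the connection settings are copied from my config, and the x-single-active-consumer queue argument is an assumption about a possible fix, not something canal-adapter exposes today.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class SingleActiveConsumerSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("172.16.61.127");
        factory.setPort(5672);
        factory.setVirtualHost("/");
        factory.setUsername("admin");
        factory.setPassword("admin");

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Declare the queue with the single-active-consumer argument (RabbitMQ 3.8+).
        // Every adapter node can run this same code; the broker picks one active
        // consumer and keeps the rest as standbys, so delivery order is preserved.
        Map<String, Object> queueArgs = new HashMap<>();
        queueArgs.put("x-single-active-consumer", true);
        channel.queueDeclare("canal_queue", true, false, false, queueArgs);

        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            // A real adapter would apply the flat message to the target database here.
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println("received: " + body);
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        // Manual ack so an unprocessed message is redelivered if the active node dies.
        channel.basicConsume("canal_queue", false, onDeliver, consumerTag -> { });
    }
}

If several nodes run this, only one of them receives messages at a time; when it disconnects, the broker promotes one of the standbys. That is exactly the tcp-mode HA behaviour I would like rabbitMQ mode to have.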