KAFKA-9337 simplify mm2 initial configs #7872
Conversation
# regex which defines which topics get replicated, e.g. "foo-.*"
A->B.topics = .*

B->A.enabled = true
Looks like these two lines also enable replication from B to A (in addition to the existing replication from A to B).
Maybe we could make this configuration more straightforward by replicating only from A to B? (The previous version of this configuration had a similar setup.)
What do you think?
It's a lot more common to have bidirectional replication. I think having A->B and B->A is more illustrative of a real configuration, without being much more complicated.
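For reference, a minimal sketch of the bidirectional section under discussion (the foo-.* / bar-.* topic patterns are illustrative placeholders, not values taken from this diff):

```
# enable and configure individual replication flows
A->B.enabled = true
A->B.topics = foo-.*
B->A.enabled = true
B->A.topics = bar-.*
```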
lgtm, thx
# Comma-separated list of host:port pairs for each cluster,
# e.g. "A_host:9092, A_host2:9092, A_host3:9092"
A.bootstrap.servers = localhost:9092
B.bootstrap.servers = localhost:10092
What about keeping the host/port pairs as they were, instead of using localhost for both?
This seems better. Will revert it.
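For illustration, the reverted lines might look something like this, assuming distinct placeholder hostnames (A_host comes from the file's own comment; B_host is invented here for symmetry):

```
A.bootstrap.servers = A_host:9092, A_host2:9092, A_host3:9092
B.bootstrap.servers = B_host:9092, B_host2:9092, B_host3:9092
```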
@@ -18,18 +18,40 @@
# Run with ./bin/connect-mirror-maker.sh connect-mirror-maker.properties

# specify any number of cluster aliases
clusters = A, B, C
clusters = A, B
nit: trailing space
acked
Perhaps surprisingly, it's possible to run MM2 with a single broker if you do something like this:

```
# two cluster aliases
clusters = A, B

# ... but they are the same broker!
A.bootstrap.servers = localhost:9092
B.bootstrap.servers = localhost:9092

# enable and configure individual replication flows
A->B.enabled = true
A->B.topics = foo-.*
B->A.enabled = true
B->A.topics = bar-.*

# customize as needed
sync.topic.acls.enabled = false
replication.factor = 1
offset.storage.replication.factor = 1
config.storage.replication.factor = 1
status.storage.replication.factor = 1
offset-syncs.topic.replication.factor = 1
checkpoints.topic.replication.factor = 1
heartbeats.topic.replication.factor = 1
replication.policy.separator = _
emit.heartbeats.interval.seconds = 5
```

That would make it very easy to get MM2 running out-of-the-box:

```
./bin/zookeeper-server-start.sh config/zookeeper.properties
./bin/kafka-server-start.sh config/server.properties
./bin/connect-mirror-maker.sh config/connect-mirror-maker.properties
```

The downside is that it's sorta confusing. WDYT?
IMHO, I think this will confuse someone who is trying to use MM2 for the first time, as it's not very intuitive.

On a different note, shouldn't we specifically check whether clusters A and B are the same, as a config sanity check? This would ensure users don't replicate back to the same source cluster. The situation becomes more likely with more than 2 clusters, i.e. (A, B, C). Thoughts?
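As a rough illustration of the kind of check being proposed (a hypothetical external pre-flight script, not anything MM2 ships with), one could flag cluster aliases that share a bootstrap.servers value before starting MM2:

```
# print any bootstrap.servers value shared by more than one cluster alias
grep -E '^[A-Za-z0-9_-]+\.bootstrap\.servers' connect-mirror-maker.properties \
  | cut -d'=' -f2 | tr -d ' ' | sort | uniq -d
```

A non-empty result would indicate two aliases pointing at the same cluster, which per the discussion below should probably be a warning rather than a hard error.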
I also think using the same cluster for source and destination is confusing, hence my comment earlier to keep the different hostnames.
Thanks for the changes, LGTM
A sanity check might make sense, but I'm not sure it should be considered an error if you set up MM2 to use a single cluster with multiple aliases like that. It's hard to imagine a use-case where it would make sense, but it does work!
Hello Kafka Developers,

I hope this message finds you well. I am currently using Kafka Mirror Maker and have set up a monitoring alert with Prometheus Alertmanager. Here is the alert rule I'm using:

```yaml
- alert: NodeKafkaMirrorMakerServiceDown
  expr: node_systemd_unit_state{name="kafka-mirror-maker.service", state="active"} == 0
  for: 1m
  labels:
    severity: error
    project: my-project
  annotations:
    summary: "Kafka Mirror Maker service is down"
    instance: "{{ $labels.instance }}"
    value: "{{ $value }}"
```

I would like to know if this is considered a good practice for monitoring the Kafka Mirror Maker service. Any feedback or suggestions would be greatly appreciated. Thank you for your time and assistance.

Best regards,
Asker
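One note on form (general Prometheus usage, not something raised in this thread): a bare `- alert:` entry is only a fragment; in an actual rule file it has to sit under a `groups` entry, along these lines (the group name is made up here):

```yaml
groups:
  - name: kafka-mirror-maker
    rules:
      - alert: NodeKafkaMirrorMakerServiceDown
        expr: node_systemd_unit_state{name="kafka-mirror-maker.service", state="active"} == 0
        for: 1m
```

With that wrapper in place, `promtool check rules <file>` can validate the rule before it is deployed.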