
KAFKA-9337 simplify mm2 initial configs #7872

Merged: 1 commit merged into apache:trunk on Jan 7, 2020

Conversation

@cryptoe (Contributor) commented Dec 27, 2019

More detailed description of your change,
if necessary. The PR title and PR message become
the squashed commit message, so use a separate
comment to ping reviewers.

Summary of testing strategy (including rationale)
for the feature or bug fix. Unit and/or integration
tests are expected for any behaviour change and
system tests should be considered for larger changes.

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

# regex which defines which topics get replicated, e.g. "foo-.*"
A->B.topics = .*

B->A.enabled = true
@rpozarickij commented Jan 1, 2020

Looks like these two lines also enable replication from B to A (in addition to the existing replication from A to B).
Maybe we could make this configuration more straightforward by replicating only from A to B? (This configuration previously had a similar setup.)
What do you think?

Contributor:

It's a lot more common to have bidirectional replication. I think having A->B and B->A is more illustrative of a real configuration, without being much more complicated.
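
For illustration only, a minimal sketch of the bidirectional setup being described might look like this (the topic patterns are placeholders, not the exact values proposed in the PR):

# active/active: each flow is enabled and scoped separately
A->B.enabled = true
A->B.topics = .*
B->A.enabled = true
B->A.topics = .*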

@ryannedolan (Contributor) left a comment:

lgtm, thx

# comma-separated list of host:port pairs for each cluster,
# e.g. "A_host:9092, A_host2:9092, A_host3:9092"
A.bootstrap.servers = localhost:9092
B.bootstrap.servers = localhost:10092
Member:

What about keeping the host/port as they were instead of using localhost for both?

@cryptoe (Contributor, Author):

This seems better. Will revert it.
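
For example, the reverted lines could use distinct placeholder hostnames per cluster (the exact values here are illustrative, not taken from the merged file):

# comma-separated list of host:port pairs for each cluster
A.bootstrap.servers = A_host1:9092, A_host2:9092, A_host3:9092
B.bootstrap.servers = B_host1:9092, B_host2:9092, B_host3:9092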

@@ -18,18 +18,40 @@
# Run with ./bin/connect-mirror-maker.sh connect-mirror-maker.properties

# specify any number of cluster aliases
-clusters = A, B, C
+clusters = A, B
Member:

nit: trailing space

@cryptoe (Contributor, Author):

acked

@ryannedolan (Contributor) commented Jan 6, 2020

Perhaps surprisingly, it's possible to run MM2 with a single broker if you do something like this:

# two cluster aliases
clusters = A, B

# ... but they are the same broker!
A.bootstrap.servers = localhost:9092
B.bootstrap.servers = localhost:9092

# enable and configure individual replication flows
A->B.enabled = true
A->B.topics = foo-.*
B->A.enabled = true
B->A.topics = bar-.*

# customize as needed
sync.topic.acls.enabled = false
replication.factor = 1
offset.storage.replication.factor = 1
config.storage.replication.factor = 1
status.storage.replication.factor = 1
offset-syncs.topic.replication.factor = 1
checkpoints.topic.replication.factor = 1
heartbeats.topic.replication.factor = 1
# replication.policy.separator = _
# emit.heartbeats.interval.seconds = 5

That would make it very easy to get MM2 running out-of-the-box:

./bin/zookeeper-server-start.sh config/zookeeper.properties
./bin/kafka-server-start.sh config/server.properties
./bin/connect-mirror-maker.sh config/connect-mirror-maker.properties

The downside is that it's sorta confusing. WDYT?
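
If someone tries the single-broker setup above, a quick sanity check (a sketch only, assuming the default replication policy and an illustrative topic name foo-test that matches A->B.topics) is to produce to a source topic and consume from the remote topic it maps to:

# produce a few records to a topic matched by A->B.topics
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic foo-test

# with the default replication policy they should be mirrored to the remote topic A.foo-test
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic A.foo-test --from-beginning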

@cryptoe (Contributor, Author) commented Jan 7, 2020 via email

@mimaison (Member) commented Jan 7, 2020

I also think using the same cluster for source and destination is confusing, hence my comment earlier to keep the different hostnames.

@mimaison (Member) left a comment:

Thanks for the changes, LGTM

@mimaison merged commit b298886 into apache:trunk on Jan 7, 2020
@ryannedolan (Contributor)

> This will ensure users don't replicate to the same source cluster?

A sanity check might make sense, but I'm not sure it should be considered an error if you set up MM2 to use a single cluster with multiple aliases like that. It's hard to imagine a use-case where it would make sense, but it does work!

@Gakhramanzode

Hello Kafka Developers,

I hope this message finds you well. I am currently using Kafka Mirror Maker and have set up a monitoring alert with Prometheus Alertmanager. Here is the alert rule I'm using:

- alert: NodeKafkaMirrorMakerServiceDown
  expr: node_systemd_unit_state{name="kafka-mirror-maker.service", state="active"} == 0
  for: 1m
  labels:
    severity: error
    project: my-project
  annotations:
    summary: "Kafka Mirror Maker service is down"
    instance: "{{ $labels.instance }}"
    value: "{{ $value }}"

I would like to know if this is considered a good practice for monitoring the Kafka Mirror Maker service. Any feedback or suggestions would be greatly appreciated.

Thank you for your time and assistance.

Best regards, Asker

@cryptoe @ryannedolan @mimaison @rpozarickij @halorgium
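
One possible refinement to the alert rule above (a sketch only, using the same node_exporter systemd collector; the alert name and labels are hypothetical) is to also fire when the unit enters the failed state, which catches crashes that systemd does not restart:

- alert: NodeKafkaMirrorMakerServiceFailed
  expr: node_systemd_unit_state{name="kafka-mirror-maker.service", state="failed"} == 1
  for: 1m
  labels:
    severity: error
    project: my-project
  annotations:
    summary: "Kafka Mirror Maker service is in the failed state"
    instance: "{{ $labels.instance }}"
    value: "{{ $value }}"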
