
Update to use all urls in the connector configuration when they are configured #1813

Merged
7 commits merged into main on Oct 5, 2023

Conversation

@aindriu-aiven (Contributor) commented on Sep 27, 2023:

We currently allow users to configure multiple URLs for Kafka Connect; however, the Cluster API did not support using more than one of them, which prevented multi-cluster Connect deployments from working efficiently.

What kind of change does this PR introduce?

  • Bug fix
  • New feature
  • Refactor
  • Docs update
  • CI update

What is the current behavior?

Currently, when a connector environment is configured with multiple URLs, e.g.
"server:8086,server2:9986", this URL information is not used by Klaw to cycle through the different URLs and ensure that when one is unavailable the others are used.

What is the new behavior?

The Cluster API now cycles through the configured URLs and stops at the first successful connection.
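
As a rough sketch of that behavior (not the exact Klaw code: deleteConnector and the method name are hypothetical, while getEnvironment, errMsgResponse, ApiResponse.SUCCESS, and CLUSTER_API_ERR_3 come from the diff excerpts further down), the failover loop looks like this:

    import java.util.HashSet;
    import java.util.Set;
    import org.springframework.web.client.RestClientException;

    protected ApiResponse deleteConnectorWithFailover(ClusterConnectorRequest req) {
      Set<String> errMsgResponse = new HashSet<>();
      // Try each configured Connect URL in order; the first success wins.
      for (String envUrl : getEnvironment(req.getEnv())) {
        try {
          deleteConnector(envUrl, req); // hypothetical per-URL HTTP call
          return ApiResponse.SUCCESS;   // stop at the first working cluster
        } catch (RestClientException ex) {
          log.error("Error in deleting connector ", ex);
          errMsgResponse.add(envUrl);   // record the failure and try the next URL
        }
      }
      return ApiResponse.notOk(CLUSTER_API_ERR_3); // every URL failed
    }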

Other information:

No other information at this time.

Please check if the PR fulfills these requirements

  • The commit message follows our guidelines
  • Tests for the changes have been added (for bug fixes / features)

    log.error("Error in deleting connector ", ex);
    return ApiResponse.notOk(CLUSTER_API_ERR_3);
    Set<String> errMsgResponse = new HashSet<>();
    for (String envUrl : getEnvironment(clusterConnectorRequest.getEnv())) {
Contributor:
When there are multiple Connect clusters (which means distributed mode), we should only check the first one. If that fails, go to the next one, and so on. Internally, each Connect cluster is configured with the same Kafka bootstrap servers.

Contributor:

We could write a utility method for all the endpoints that checks whether the first Connect cluster is available and then tries updating the connectors, along the lines of the sketch below.
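
For example, such a utility could take the list of URLs and a per-URL action (a sketch only; runOnFirstAvailableCluster and ConnectClusterAction are hypothetical names, not from this PR):

    import java.util.List;

    @FunctionalInterface
    interface ConnectClusterAction {
      ApiResponse run(String envUrl) throws Exception;
    }

    static ApiResponse runOnFirstAvailableCluster(
        List<String> envUrls, ConnectClusterAction action) {
      for (String envUrl : envUrls) {
        try {
          return action.run(envUrl); // first reachable cluster handles the request
        } catch (Exception ex) {
          // cluster unreachable or returned an HTTP error; try the next URL
        }
      }
      return ApiResponse.notOk(CLUSTER_API_ERR_3); // no cluster was reachable
    }

Each endpoint would then pass its own action, e.g. runOnFirstAvailableCluster(urls, url -> deleteConnector(url, request)).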

@aindriu-aiven (Contributor, Author):

Yes, exactly. The loop returns a value if the call is successful; if it hits an HTTP error, the exception is caught by the try-catch and the loop moves on to the next URL. If all URLs are exhausted, it returns a NotOk status.

    restTemplate.exchange( // call head reconstructed for context; the excerpt began below
        url,
        HttpMethod.DELETE,
        request,
        new ParameterizedTypeReference<>() {});
    return ApiResponse.SUCCESS;
Contributor:

Right, this return will take care of it.

@muralibasani (Contributor) left a review:

LGTM, thanks

@aindriu-aiven merged commit 6c9a7c1 into main on Oct 5, 2023
@aindriu-aiven deleted the issue-provision-multi-cluster-connect branch on Oct 5, 2023 at 07:58