Update to use all urls in the connector configuration when they are configured #1813
Conversation
…onfigured Signed-off-by: Aindriu Lavelle <[email protected]>
…ven-Open/klaw into issue-provision-multi-cluster-connect
cluster-api/src/main/java/io/aiven/klaw/clusterapi/services/KafkaConnectService.java
```java
log.error("Error in deleting connector ", ex);
return ApiResponse.notOk(CLUSTER_API_ERR_3);
Set<String> errMsgResponse = new HashSet<>();
for (String envUrl : getEnvironment(clusterConnectorRequest.getEnv())) {
```
When there are multiple Connect clusters (which means distributed mode), we should only check with the first one. If that fails, go to the next one, and so on. Internally, each Connect cluster will configure the same Kafka bootstrap servers.
We could write a utility method for all the endpoints to check if the first Connect cluster is available, then try updating connectors.
Yes, exactly. The loop returns a value if it is successful; if a call returns an HTTP error, it is caught by the try-catch and the loop moves on to the next URL. If all URLs are exhausted, it returns a NotOk status.
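For illustration, a minimal sketch of that fallback pattern, assuming a Spring RestTemplate-style client. `getEnvironment`, `ApiResponse`, and `CLUSTER_API_ERR_3` appear in the diff above; `tryDeleteConnector` is a hypothetical helper standing in for the actual HTTP call:

```java
// Sketch of the fallback loop discussed above, not the exact PR code.
// Assumes getEnvironment(env) returns the configured Connect worker URLs
// in order, and tryDeleteConnector (hypothetical) issues the HTTP DELETE.
protected ApiResponse deleteConnectorWithFailover(ClusterConnectorRequest req) {
  Set<String> errMsgResponse = new HashSet<>();
  for (String envUrl : getEnvironment(req.getEnv())) {
    try {
      tryDeleteConnector(envUrl, req); // may throw a RestClientException
      return ApiResponse.SUCCESS;      // first reachable URL wins, loop ends
    } catch (RestClientException ex) {
      log.error("Error in deleting connector ", ex);
      errMsgResponse.add(ex.getMessage()); // record the failure, try next URL
    }
  }
  return ApiResponse.notOk(CLUSTER_API_ERR_3); // every configured URL failed
}
```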
```java
HttpMethod.DELETE,
request,
new ParameterizedTypeReference<>() {});
return ApiResponse.SUCCESS;
```
Right, this return will take care of that.
Signed-off-by: Aindriu Lavelle <[email protected]>
LGTM, thanks
We currently allow users to configure multiple URLs for Kafka Connect; however, we did not support the use of multiple URLs in the cluster API. This prevented multi-cluster Connect deployments from working as intended.
What kind of change does this PR introduce?
What is the current behavior?
Currently, when a connector environment is configured with multiple URLs, e.g. "server:8086,server2:9986", Klaw does not use this information to cycle through the different URLs and ensure that, when one is unavailable, the others are used.
What is the new behavior?
The cluster API will cycle through the configured URLs and stop at the first successful connection.
Other information:
No other information at this time.
Please check if the PR fulfills these requirements