Using AWS ELB addresses for outside listeners #136
I haven't tried using load balancers, but as you've verified the generated config, I see no reason why node addresses and external addresses would be handled differently. The only problem I've observed with wrong addresses being returned at bootstrap was fixed in 4c202f4. Oh, I noticed now that the port is
Thanks! That did the trick. For posterity: I ran into one more problem though. The ELB DNS names used for OUTSIDE_HOST are longer than the limit Kubernetes places on label values, so the pods were also not getting the important broker id label. I fixed this by setting OUTSIDE_HOST and OUTSIDE_PORT as annotations instead of labels [1]. After that, I was able to connect to Kafka from outside AWS and could produce and consume messages. [1]
@shrinandj I was unaware of the limit on label values. It makes sense. Care to review #137?
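The label-vs-annotation issue above can be sketched as follows. The hostname is a made-up ELB DNS name (real ones follow the same shape and are routinely this long), and the `kubectl` commands are shown as comments for illustration:

```shell
#!/bin/sh
# Kubernetes caps label VALUES at 63 characters, while annotation values
# have no such length limit -- which is why the ELB DNS name had to be
# stored as an annotation instead of a label.

# Hypothetical ELB hostname; real ones are often well over 63 characters.
OUTSIDE_HOST="internal-a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4-1234567890.us-east-1.elb.amazonaws.com"

echo "hostname length: ${#OUTSIDE_HOST}"

# Rejected by the API server (label values must be at most 63 characters):
#   kubectl label pod kafka-0 outside-host="$OUTSIDE_HOST"
# Accepted, because annotation values are not length-limited:
#   kubectl annotate pod kafka-0 outside-host="$OUTSIDE_HOST"
```

Anything that previously selected pods by the OUTSIDE_HOST label has to read the annotation from the pod spec instead, since annotations cannot be used in selectors.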
I want to configure Kafka on my Kubernetes cluster such that it is accessible from outside. I cannot use a `NodePort` and the VM's IP address.

Instead, I configured one `Service` of type `LoadBalancer` for each broker and modified `init.sh` to use each ELB's external address. I then created the ConfigMaps and started the Kafka StatefulSets. I can see that the `/etc/kafka/server.properties` file gets populated with the correct DNS entry for the OUTSIDE host. However, the broker hostnames received outside the K8s cluster are internal cluster object names. As a result, the brokers are not accessible from outside the K8s cluster.

Are there other changes to be made for each broker's ELB address (OUTSIDE address) to show up?
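For reference, a per-broker `LoadBalancer` Service as described above might look like the following sketch. The names, labels, and port are assumptions for illustration, not the repo's actual manifests; the selector relies on a per-broker label (here `kafka-broker-id`) so each Service targets exactly one pod:

```yaml
# Hypothetical Service for broker 0; one such Service is created per broker.
apiVersion: v1
kind: Service
metadata:
  name: outside-0
spec:
  type: LoadBalancer
  selector:
    app: kafka
    kafka-broker-id: "0"   # assumed per-pod label; must match one broker pod
  ports:
  - port: 9094             # assumed outside listener port
    targetPort: 9094
```

On its own this only provisions the ELB; the broker must also advertise the ELB hostname in its outside listener (what `init.sh` writes into `server.properties`), or clients will still receive the internal cluster names at bootstrap.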