
Outside (i.e. cluster-external) access through per-broker Service #78

Merged
merged 7 commits into master
Nov 7, 2017

Conversation

solsson
Contributor

@solsson solsson commented Oct 15, 2017

This is my interpretation of a baseline for different types of clusters, from #13.

The new pod label is low-risk, as are the Services. The new directives in server.properties need more testing; they might destabilize regular inside access.

I've tested this only on minikube so far:

BOOTSTRAP=$(minikube ip):32400,$(minikube ip):32401,$(minikube ip):32402
docker run --rm -t solsson/kafkacat -C -b $BOOTSTRAP -t test-basic-with-kafkacat -o -10

Outside+minikube is interesting as a local development setup for Kafka services. Production clusters, on the other hand, will probably always need a tailored advertised.listeners (maybe through the host lookup command in the init config) and listener.security.protocol.map.
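For reference, a production broker config of the kind described above might end up with server.properties entries along these lines. This is a hypothetical sketch, not the PR's actual output: the OUTSIDE listener name, the 9093/32400 ports, and the placeholder host are illustrative, and the advertised host would come from the init script's environment-specific host lookup.

```properties
# Internal listener for in-cluster clients, external listener for the per-broker Service.
listeners=PLAINTEXT://:9092,OUTSIDE://:9093
# Advertised addresses: in-cluster DNS name inside, node address + NodePort outside.
advertised.listeners=PLAINTEXT://kafka-0.broker.kafka.svc.cluster.local:9092,OUTSIDE://<node-ip>:32400
# Both listeners plaintext in this sketch; production would likely differ.
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,OUTSIDE:PLAINTEXT
inter.broker.listener.name=PLAINTEXT
```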

@solsson solsson changed the title Exemplify outside (i.e. cluster-external) access through per-broker Service Outside (i.e. cluster-external) access through per-broker Service Oct 15, 2017
@solsson
Contributor Author

solsson commented Oct 16, 2017

@comdw in response to #13 (comment), feel free to test with this PR. It works on minikube, and quite possibly also on GKE with outside meaning TCP access to the nodes' internal IPs. In particular I'm interested in how to adapt the init commands to different environments.

@comdw

comdw commented Oct 16, 2017

@solsson - tested this and it works fine for me.

@yacut

yacut commented Oct 24, 2017

@solsson works for me. Thanks!

@solsson solsson mentioned this pull request Oct 25, 2017
@codyAnivive

@solsson works great on a bare-metal deployment as well

@solsson solsson added this to the v3.0 milestone Nov 7, 2017
@solsson
Contributor Author

solsson commented Nov 7, 2017

I'm also using this now. Thanks for the great discussion in #13.

@solsson solsson merged commit de0919b into master Nov 7, 2017
solsson added a commit to Yolean/kafka-topic-client that referenced this pull request Nov 28, 2017
solsson added a commit to Yolean/kafka-topic-client that referenced this pull request Nov 28, 2017
solsson added a commit that referenced this pull request Jan 8, 2018
@StevenACoffman
Contributor

I wrote this to test the outside services:


function join_by { local IFS="$1"; shift; echo "$*"; }

BOOTSTRAP_ARRAY=()
POD_NAME_ARRAY=("kafka-0" "kafka-1" "kafka-2")
for POD_NAME in "${POD_NAME_ARRAY[@]}"
do
  BOOTSTRAP_IP=$(kubectl get pods ${POD_NAME} -n kafka -o jsonpath='{.metadata.labels.kafka-listener-outside-host}')
  BOOTSTRAP_PORT=$(kubectl get pods ${POD_NAME} -n kafka -o jsonpath='{.metadata.labels.kafka-listener-outside-port}')
  BOOTSTRAP_ARRAY+=("$BOOTSTRAP_IP:$BOOTSTRAP_PORT")
done

BOOTSTRAP=$(join_by , "${BOOTSTRAP_ARRAY[@]}")
docker run --rm -t solsson/kafkacat -C -b $BOOTSTRAP -t test-basic-with-kafkacat -o -10

@solsson
Contributor Author

solsson commented Jan 8, 2018

^ Nice example of how to use the new labels.

@solsson
Contributor Author

solsson commented Jan 8, 2018

Based on the above I found a horrible :) one-liner relying on bootstrapping, to test #120: docker run --rm solsson/kafkacat -C -b $(kubectl -n kafka get pod kafka-0 -o go-template --template '{{index .metadata.labels "kafka-listener-outside-host"}}:{{index .metadata.labels "kafka-listener-outside-port"}}') -t heapster-metrics

@StevenACoffman
Contributor

StevenACoffman commented Jan 8, 2018

That is either terrible, or wonderful. I guess both. 😄

@mtbbiker

mtbbiker commented Jul 27, 2018

Just an idea that I have managed to use to get the "outside" Service to connect to the pod:

kind: Service
apiVersion: v1
metadata:
  name: outside-0
  namespace: kafka
spec:
  selector:
    app: kafka
    statefulset.kubernetes.io/pod-name: kafka-0
  ports:
  - protocol: TCP
    targetPort: 9093
    port: 9093

On previous versions of K8s I also used @solsson's initContainer idea, but on 1.11.1 I couldn't get it to create the labels. So I got it to work by adding statefulset.kubernetes.io/pod-name: kafka-0 to the selector section of the spec.
Add one such Service per broker, incrementing the pod name for kafka-1, kafka-2, etc.
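The per-broker Services for the remaining pods can be generated with a small loop, a sketch along the lines of the manifest above (the outside-N names, the kafka namespace, and port 9093 follow that example):

```shell
# Sketch: emit one per-broker Service manifest per broker pod, selecting each
# pod via the statefulset.kubernetes.io/pod-name label as suggested above.
emit_services() {
  for i in 0 1 2; do
    cat <<EOF
---
kind: Service
apiVersion: v1
metadata:
  name: outside-$i
  namespace: kafka
spec:
  selector:
    app: kafka
    statefulset.kubernetes.io/pod-name: kafka-$i
  ports:
  - protocol: TCP
    targetPort: 9093
    port: 9093
EOF
  done
}

# Usage: pipe the generated manifests into kubectl.
emit_services  # | kubectl apply -f -
```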

@solsson
Contributor Author

solsson commented Jul 27, 2018

@mtbbiker Can you please create an issue for the errors with k8s 1.11.1, with the output from kubectl -n kafka logs kafka-0 -c init-config?

@mtbbiker

@solsson As requested, the logs for the zookeeper deployment:

kubectl -n kafka logs pzoo-0 -c init-config
1
+ '[' -z '' ']'
+ ID_OFFSET=1
+ export ZOOKEEPER_SERVER_ID=1
+ ZOOKEEPER_SERVER_ID=1
+ echo 1
+ tee /var/lib/zookeeper/data/myid
+ sed -i 's/server\.1\=[a-z0-9.-]*/server.1=0.0.0.0/' /etc/kafka/zookeeper.properties
sed: can't read /etc/kafka/zookeeper.properties: No such file or directory

I am busy with a new deployment and will log a separate issue.

@mtbbiker

@solsson I found an issue in the ConfigMap YAML file; I have Zookeeper working now.

6 participants