WIP: Add k8s integration tests to CI #3561
base: master
Conversation
The current issue appears to be that the results are not able to be written back to the result collector, so hopefully an easy fix.

TODO:
* fix result collection
* make test messages clear
* add Kubernetes integration testing to Travis CI
* make a branch to convert the bash scripts to Python
@@ -19,7 +19,7 @@
 heron.class.state.manager: org.apache.heron.statemgr.zookeeper.curator.CuratorStateManager

 # local state manager connection string
-heron.statemgr.connection.string: <zookeeper_host:zookeeper_port>
+heron.statemgr.connection.string: 127.0.0.1:2181
Why is this set to 127.0.0.1 instead of a Kubernetes service name like zookeeper:2181? If it's set like this only for testing reasons, would it be better to keep the placeholder and change it to 127.0.0.1:2181 in the Kubernetes portion of the test script?
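To illustrate the suggestion, the substitution could live in the Kubernetes portion of the test script, roughly like the sketch below (the statemgr.yaml file name is an assumption, not a path taken from this PR):

```sh
# Hypothetical step in the Kubernetes portion of the test script: swap the
# committed placeholder for the port-forwarded address at test time, so the
# config file keeps its placeholder. The target file name is an assumption.
sed -i 's|<zookeeper_host:zookeeper_port>|127.0.0.1:2181|' \
  ~/.heron/conf/kubernetes/statemgr.yaml
```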
On the host machine Zookeeper is available on 127.0.0.1:2181 due to a kubectl port-forward. That helped get the integration test runner further, but I take it this same config is also consumed within the cluster, where it should point to zookeeper:2181?
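For reference, the port-forward in question looks roughly like this (a sketch; the zookeeper service name is assumed from the minikube manifests used by the test script):

```sh
# Expose the in-cluster Zookeeper service on the host at 127.0.0.1:2181.
# The service name "zookeeper" is an assumption based on the minikube manifests.
kubectl port-forward svc/zookeeper 2181:2181 &
```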
Yeah, I think it was a placeholder for someone configuring Heron to run in Kubernetes. But each of our K8s yamls actually overrides the setting as a -D parameter passed to the heron-apiserver command, so I don't think your code change would impact anything beyond it no longer being immediately apparent that this is a placeholder. I wonder what part of the test scripts needs direct Zookeeper access.
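For illustration, the override described here would sit on the apiserver launch command roughly as sketched below; apart from the heron.statemgr.connection.string key discussed in this thread, the flags and values are assumptions rather than a copy of the shipped deploy yamls:

```sh
# Sketch of the kind of -D override described above. Only the
# heron.statemgr.connection.string key comes from this PR; the other flags
# and values are assumptions.
heron-apiserver \
  --base-template kubernetes \
  --cluster kubernetes \
  -D heron.statemgr.connection.string=zookeeper:2181
```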
Do we plan to do any more work on this PR?
Attempt to resolve the merge conflicts with changes from
I'm aiming to get the integration tests to also run against a Kubernetes cluster; current rough progress for this is in ./scripts/travis/k8s.sh. I'm sharing this as a draft to ask for pointers.

So far the script ensures there is a test image (debian10 isn't working due to the TLS issue), then creates a kind cluster using the ./deploy/kubernetes/minikube/*.yaml manifests, which it waits for before starting the integration tests. The cluster looks OK on the surface: the services appear healthy, and topologies can be created and their pods run (ignoring, for the moment, cases where resource requests are too high).

The problems I'm seeing at the moment are that topology-test-runner instance state results are not written back to the HTTP collector on the host, which is accessible from the executors via the host URL passed to the test script, and at http://127.0.0.1:8080 on the host. The topology structure is also not updated for topology-test-runner and test-runner. Zookeeper within the cluster is accessible on zookeeper:2181, and on 127.0.0.1:2181 from the host.

I suspect this is down to misconfiguration, maybe something like needing separate ~/.heron/conf/kubernetes/*.yaml copies, one for within the cluster and one for the host.

Once that's worked out, I'll clean up and refactor the ./scripts/travis/*.sh test scripts and make a proper PR.
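For readers following along, the flow described above corresponds roughly to the sketch below; this is not the contents of ./scripts/travis/k8s.sh, and the cluster name and wait parameters are assumptions:

```sh
# Rough sketch of the flow described above, not a copy of ./scripts/travis/k8s.sh.
# The cluster name and wait parameters are assumptions.

# create a local cluster with kind
kind create cluster --name heron-test

# deploy Heron using the minikube manifests and wait for the pods to come up
kubectl apply -f ./deploy/kubernetes/minikube/
kubectl wait --for=condition=Ready pod --all --timeout=300s

# make Zookeeper reachable on the host at 127.0.0.1:2181
kubectl port-forward svc/zookeeper 2181:2181 &

# the integration tests then run against the cluster, with the HTTP result
# collector expected to be reachable at http://127.0.0.1:8080 on the host
```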