e2e: use helm to install out-of-tree cloud-provider-azure #2209
@@ -29,6 +29,7 @@ kind.kubeconfig
 minikube.kubeconfig
 kubeconfig
 !kubeconfig/
+*.kubeconfig

 # ssh keys
 .ssh*
@@ -2,6 +2,7 @@
 envsubst_cmd = "./hack/tools/bin/envsubst"
 kubectl_cmd = "./hack/tools/bin/kubectl"
+helm_cmd = "./hack/tools/bin/helm"
 tools_bin = "./hack/tools/bin"

 #Add tools to path
@@ -331,9 +332,13 @@ def deploy_worker_templates(template, substitutions):
         yaml = yaml.replace("${" + substitution + "}", value)

     yaml = yaml.replace('"', '\\"') # add escape character to double quotes in yaml
+    flavor_name = os.path.basename(flavor)
+    flavor_cmd = "RANDOM=$(bash -c 'echo $RANDOM'); CLUSTER_NAME=" + flavor.replace("windows", "win") + "-$RANDOM; make generate-flavors; echo \"" + yaml + "\" > ./.tiltbuild/" + flavor + "; cat ./.tiltbuild/" + flavor + " | " + envsubst_cmd + " | " + kubectl_cmd + " apply -f - && echo \"Cluster \'$CLUSTER_NAME\' created, don't forget to delete\""
+    if "external-cloud-provider" in flavor_name:
+        flavor_cmd += "; until " + kubectl_cmd + " get secret ${CLUSTER_NAME}-kubeconfig > /dev/null 2>&1; do sleep 5; done; " + kubectl_cmd + " get secret ${CLUSTER_NAME}-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./${CLUSTER_NAME}.kubeconfig; chmod 600 ./${CLUSTER_NAME}.kubeconfig; until " + kubectl_cmd + " --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes > /dev/null 2>&1; do sleep 5; done; " + helm_cmd + " --kubeconfig ./${CLUSTER_NAME}.kubeconfig install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME}"
     local_resource(
-        name = os.path.basename(flavor),
-        cmd = "RANDOM=$(bash -c 'echo $RANDOM'); CLUSTER_NAME=" + flavor.replace("windows", "win") + "-$RANDOM; make generate-flavors; echo \"" + yaml + "\" > ./.tiltbuild/" + flavor + "; cat ./.tiltbuild/" + flavor + " | " + envsubst_cmd + " | " + kubectl_cmd + " apply -f - && echo \"Cluster \'$CLUSTER_NAME\' created, don't forget to delete\"",
+        name = flavor_name,
+        cmd = flavor_cmd,
         auto_init = False,
         trigger_mode = TRIGGER_MODE_MANUAL,
         labels = ["flavors"],
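For readability, the shell appended to `flavor_cmd` for the `external-cloud-provider` flavor corresponds roughly to the following sequence. This is a sketch: it uses plain `kubectl`/`helm` in place of the `./hack/tools/bin` wrappers referenced in the Tiltfile, and assumes `CLUSTER_NAME` is already set as in the command above.

```bash
# Wait for Cluster API to publish the workload cluster's kubeconfig secret.
until kubectl get secret "${CLUSTER_NAME}-kubeconfig" > /dev/null 2>&1; do sleep 5; done

# Extract the kubeconfig and restrict its permissions.
kubectl get secret "${CLUSTER_NAME}-kubeconfig" -o jsonpath='{.data.value}' | base64 --decode > "./${CLUSTER_NAME}.kubeconfig"
chmod 600 "./${CLUSTER_NAME}.kubeconfig"

# Wait until the workload cluster's API server answers.
until kubectl --kubeconfig="./${CLUSTER_NAME}.kubeconfig" get nodes > /dev/null 2>&1; do sleep 5; done

# Install the cloud-provider-azure chart into the workload cluster.
helm --kubeconfig "./${CLUSTER_NAME}.kubeconfig" install \
  --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo \
  cloud-provider-azure --generate-name --set "infra.clusterName=${CLUSTER_NAME}"
```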
@@ -35,13 +35,56 @@ export CLUSTER_TEMPLATE="test/dev/cluster-template-custom-builds.yaml"

 ## Testing the out-of-tree cloud provider

-To test changes made to the [Azure cloud provider](https://github.com/kubernetes-sigs/cloud-provider-azure), first build and push images for cloud-controller-manager and/or cloud-node-manager from the root of the cloud-provider-azure repo.
+To test changes made to the [Azure cloud provider](https://github.com/kubernetes-sigs/cloud-provider-azure), first build and push images for cloud-controller-manager and/or cloud-node-manager from the branch of the cloud-provider-azure repo that contains the desired changes. Based on the registry, image name, and image tag produced by your custom image build and push, set the appropriate environment variables below:
> @CecileRobertMichon FYI a new set of documentation changes since your most recent review
>
> @nilo19 @lzhecheng could you please review this part?
>
> lgtm
>
> lgtm. Thanks!

```bash
$ export IMAGE_REGISTRY=docker.io/myusername
$ export CCM_IMAGE_NAME=azure-cloud-controller-manager
$ export CNM_IMAGE_NAME=azure-node-controller-manager
$ export IMAGE_TAG=canary
```
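As a rough illustration (assuming the chart composes each image reference as `<imageRepository>/<imageName>:<imageTag>`, matching the `--set` flags used later), the values above would yield image references like these, which should match the images you built and pushed:

```bash
# Illustration only: the full image references implied by the variables above.
echo "${IMAGE_REGISTRY}/${CCM_IMAGE_NAME}:${IMAGE_TAG}"  # docker.io/myusername/azure-cloud-controller-manager:canary
echo "${IMAGE_REGISTRY}/${CNM_IMAGE_NAME}:${IMAGE_TAG}"  # docker.io/myusername/azure-node-controller-manager:canary
```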

Then, use the `external-cloud-provider` flavor to create a cluster:

```bash
-AZURE_CLOUD_CONTROLLER_MANAGER_IMG=myrepo/my-ccm:v0.0.1 \
-AZURE_CLOUD_NODE_MANAGER_IMG=myrepo/my-cnm:v0.0.1 \
-CLUSTER_TEMPLATE=cluster-template-external-cloud-provider.yaml \
+$ export CLUSTER_NAME=my-cluster
+$ CLUSTER_TEMPLATE=cluster-template-external-cloud-provider.yaml \
 make create-workload-cluster
```
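If you need to fetch the workload cluster kubeconfig manually, one option is a sketch like the following. It assumes `clusterctl` is installed and your management cluster context is active; the file path is illustrative.

```bash
# Export the workload cluster kubeconfig via clusterctl.
clusterctl get kubeconfig "${CLUSTER_NAME}" > "./${CLUSTER_NAME}.kubeconfig"
chmod 600 "./${CLUSTER_NAME}.kubeconfig"

# Point subsequent kubectl/helm invocations at the workload cluster.
export KUBECONFIG="$(pwd)/${CLUSTER_NAME}.kubeconfig"
```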

Once your cluster deploys, you should receive the kubeconfig for the workload cluster. Set your `KUBECONFIG` environment variable to point to that kubeconfig file, then use the official cloud-provider-azure Helm chart to deploy the cloud-provider-azure components using your custom-built images:

```bash
$ helm install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} \
  --set cloudControllerManager.imageRepository="${IMAGE_REGISTRY}" \
  --set cloudNodeManager.imageRepository="${IMAGE_REGISTRY}" \
  --set cloudControllerManager.imageName="${CCM_IMAGE_NAME}" \
  --set cloudNodeManager.imageName="${CNM_IMAGE_NAME}" \
  --set cloudControllerManager.imageTag="${IMAGE_TAG}" \
  --set cloudNodeManager.imageTag="${IMAGE_TAG}"
```

The helm command above assumes that you want to test custom images of both cloud-controller-manager and cloud-node-manager. If you only wish to test one component, omit the other component's flags from the example above to produce the desired `helm install` command (for example, if you wish to test only a custom cloud-controller-manager image, omit the three `--set cloudNodeManager...` arguments).
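As a sketch (using the same placeholder values exported earlier), that cloud-controller-manager-only command would look like:

```bash
$ helm install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} \
  --set cloudControllerManager.imageRepository="${IMAGE_REGISTRY}" \
  --set cloudControllerManager.imageName="${CCM_IMAGE_NAME}" \
  --set cloudControllerManager.imageTag="${IMAGE_TAG}"
```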

Once you have installed the components via Helm, you should see the relevant pods running in your test cluster under the `kube-system` namespace. To iterate on this test cluster, you may manually edit the `cloud-controller-manager` Deployment and/or the `cloud-node-manager` DaemonSet delivered via `helm install`, as sketched below.
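A rough sketch of that manual path (resource names as given above; this assumes `KUBECONFIG` points at the workload cluster):

```bash
# Check that the chart's components came up in kube-system.
kubectl -n kube-system get pods | grep -E 'cloud-controller-manager|cloud-node-manager'

# Edit the components in place (for example, the image field) to iterate
# without producing a new chart release.
kubectl -n kube-system edit deployment cloud-controller-manager
kubectl -n kube-system edit daemonset cloud-node-manager
```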

Alternatively, you may issue follow-up `helm` commands with each test iteration. For example:

```bash
$ export IMAGE_TAG=canaryv2
$ helm upgrade --install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} \
  --set cloudControllerManager.imageRepository="${IMAGE_REGISTRY}" \
  --set cloudNodeManager.imageRepository="${IMAGE_REGISTRY}" \
  --set cloudControllerManager.imageName="${CCM_IMAGE_NAME}" \
  --set cloudNodeManager.imageName="${CNM_IMAGE_NAME}" \
  --set cloudControllerManager.imageTag="${IMAGE_TAG}" \
  --set cloudNodeManager.imageTag="${IMAGE_TAG}"
$ export IMAGE_TAG=canaryv3
$ helm upgrade --install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} \
  --set cloudControllerManager.imageRepository="${IMAGE_REGISTRY}" \
  --set cloudNodeManager.imageRepository="${IMAGE_REGISTRY}" \
  --set cloudControllerManager.imageName="${CCM_IMAGE_NAME}" \
  --set cloudNodeManager.imageName="${CNM_IMAGE_NAME}" \
  --set cloudControllerManager.imageTag="${IMAGE_TAG}" \
  --set cloudNodeManager.imageTag="${IMAGE_TAG}"
```

Each successive `helm upgrade --install` command will release a new version of the chart, replacing the Deployment and/or DaemonSet image configuration (and thus the pods running in the cluster) with the new image version built and pushed for each test iteration.
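To confirm an iteration took effect, one possible check (a sketch; it assumes the cloud-controller-manager container is the first container in the Deployment's pod spec and that `KUBECONFIG` still targets the workload cluster) is to list releases and read back the image reference:

```bash
# List chart releases on the workload cluster, then inspect the image now in use.
helm list
kubectl -n kube-system get deployment cloud-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```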

Unifying these installs together made it a lot easier to standardize dep installs across the various jobs and scripts. Should I go ahead and add the remaining set of tools in hack/tools/bin to this list for best practice going forward (despite the fact that not every invocation of `make install-tools` will require every single binary)? cc @CecileRobertMichon @jsturtevant

I'm a weak -1 on this: even if it improves the Makefile a little, it would make running, say, `make go-test` a lot slower the first time and install a bunch of binaries that aren't needed. I slightly prefer the "just in time" install approach.

Cool, yeah, that's the tradeoff. Any concerns w/ this new `install-tools` target here? The practical problem is we have > 1 (actually like 3 or 4) places in > 1 file that need the above (or at least 4 of 5) installed, and during this effort I encountered how easy it is to miss a dep when adding a new one if the only option is to copy/paste those individual tool install commands around.

I'm also mildly opposed to this change if it adds any noticeable time to testing targets. Could we maybe have the new target install just the tools that are actually shared, or is this still a maintenance headache?

Only `test-e2e-run` and `tilt-up` have been updated to include this new target, so I think that's a good trade-off (those are flows that are already sort of heavyweight and shouldn't slow down CI overall).