Add auto-deploy custom manifest support #2991
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: datachi7d

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Welcome @datachi7d!
Hi @datachi7d. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Found a few problems with a simple pod example:

```console
$ cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
customManifests:
- inline.yaml: |
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.25.3) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✗ Installing custom manifests 📃
ERROR: failed to create cluster: failed to add default storage class: customManifest[0][inline.yaml]: command "docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -" failed with error: exit status 1
```

Firstly, the error message isn't too helpful here, and secondly it appears that in this case it is failing to create the pod because the default service account is not present yet. I will take a look into moving the action to after the control-plane readiness wait.
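For context, here is a minimal Go sketch of how the apply step could carry kubectl's own output in the error; the function and names are illustrative, not kind's actual internals:

```go
package deploy

import (
	"bytes"
	"fmt"
	"os/exec"
)

// applyManifest pipes a manifest into kubectl on the control-plane node and,
// on failure, wraps the error with kubectl's combined output so the user can
// see why the apply failed (e.g. a missing default service account).
func applyManifest(nodeName string, manifest []byte) error {
	cmd := exec.Command(
		"docker", "exec", "--privileged", "-i", nodeName,
		"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "apply", "-f", "-",
	)
	cmd.Stdin = bytes.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("error deploying manifest: %w\nCommand Output: %s", err, out)
	}
	return nil
}
```

This mirrors the improved message shown in the next comment, which includes a "Command Output:" section.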
Looks like the improved error message now includes the command output, showing the underlying failure:

```console
$ cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
customManifests:
- inline.yaml: |
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.25.3) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✗ Installing custom manifests 📃
ERROR: failed to create cluster: failed to deploy manifest: customManifest[0][inline.yaml]: error deploying manifest: command "docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -" failed with error: exit status 1
Command Output: Error from server (Forbidden): error when creating "STDIN": pods "nginx" is forbidden: error looking up service account default/default: serviceaccount "default" not found
```

To resolve the service account problem I moved the action so that it runs after the control-plane readiness wait:

```console
$ cat <<EOF | kind create cluster --wait 30s --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
customManifests:
- inline.yaml: |
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.25.3) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Waiting ≤ 30s for control-plane = Ready ⏳
• Ready after 18s 💚
✓ Installing custom manifests 📃
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          20s
```
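As an aside, the readiness race could also be handled by polling for the precondition itself rather than the whole control plane. A hypothetical Go sketch (this is not what the PR does; the PR reorders the action and relies on --wait):

```go
package deploy

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls until the "default" ServiceAccount
// exists in the default namespace, which is the precondition the first
// failed run above tripped over.
func waitForDefaultServiceAccount(nodeName string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(
			"docker", "exec", nodeName,
			"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf",
			"get", "serviceaccount", "default", "--namespace=default",
		)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out after %s waiting for the default service account", timeout)
}
```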
```go
} else {
	// read file in
	var manifest []byte
	if manifest, err = os.ReadFile(t); os.IsNotExist(err) {
```
This is going to pick up files relative to the cwd. Should it be relative to the cluster configuration YAML's path (when not given via stdin)?
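A minimal sketch of what this suggestion could look like (the configDir parameter is hypothetical; kind would need to thread the config file's location through to this point):

```go
package deploy

import "path/filepath"

// resolveManifestPath interprets a relative manifest path against the
// directory of the cluster config file rather than the process cwd.
// configDir is empty when the config was provided via stdin, in which
// case the cwd-relative behaviour is kept.
func resolveManifestPath(configDir, manifestPath string) string {
	if filepath.IsAbs(manifestPath) || configDir == "" {
		return manifestPath
	}
	return filepath.Join(configDir, manifestPath)
}
```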
I'm sorry, but per our contributing guide, features should be discussed in an issue first: https://kind.sigs.k8s.io/docs/contributing/getting-started/#4-reaching-out

Initial feedback: this is a feature that can grow complex very quickly, yet simple versions can be easily implemented outside of kind. It is unclear that we should build this in. For example, users will then want to support various forms of templating, dependency ordering, error handling, etc. But you can already do this by applying the manifests yourself once kind create cluster returns.
The Kubernetes ecosystem has a plethora of tools whose sole purpose is managing manifests.
Agree with Ben. I'd like kind to be decoupled from the provisioning and keep doing what it does best: providing a Kubernetes cluster.
Yes, I can see how this would complicate kind, or add feature bloat, especially compared to the user just creating a shell script that uses kubectl apply. @BenTheElder - I can create an issue for posterity?
This adds the ability to do a one-shot deploy of manifests on startup of a kind cluster, for example setting up ingress NGINX from the user guide as part of the cluster config.

The customManifests field supports local files, http/s URLs, and inline YAML (see the sketch below). The functionality is similar to auto-deploying manifests in k3s or k0s; however, those appear to use controllers that continuously monitor the manifests for changes and reconcile them, whereas this is a one-shot apply at cluster creation.
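For illustration, a config sketch: the inline form matches the examples in this conversation, while the file and URL entry forms are my assumption of how they might look, not confirmed by this PR:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
customManifests:
# assumed forms for local file and URL entries; only the inline
# form appears in the examples above
- ./manifests/my-app.yaml
- https://example.com/deploy.yaml
- inline.yaml: |
    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo
```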
I have seen #253, which relies on kubeadm to deploy/manage "addons", but as far as I can tell there has been no progress in this area. So this is one way to deploy custom manifests on startup; alternative implementations could be:
- kubeadm supports managing/deploying addons via YAML manifests

Looking forward to any feedback on this :)