Cluster Created with kind Fails to Mount containerd HostPath #83
I've noticed that if I manually edit the generated clusterctl cluster config before applying it and remove the type field and the unix:// prefix from the HostMount paths, I can get past this issue.

Before:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: KubemarkMachineTemplate
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: kube-node-mgmt
  name: kube-node-mgmt-kubemark-md-0
  namespace: default
spec:
  template:
    spec:
      extraMounts:
      - containerPath: unix:///run/containerd/containerd.sock
        hostPath: unix:///run/containerd/containerd.sock
        name: containerd-sock
        type: Socket

After:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: KubemarkMachineTemplate
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: kube-node-mgmt
  name: kube-node-mgmt-kubemark-md-0
  namespace: default
spec:
  template:
    spec:
      extraMounts:
      - containerPath: /run/containerd/containerd.sock
        hostPath: /run/containerd/containerd.sock
        name: containerd-sock

This is a separate issue, but I also found that I was not able to use the Kubernetes version I first tried; my previous commands ended up with ErrImagePull. Instead, I needed to find a version that was supported by both kind and kubemark by cross-referencing the kind and kubemark image repositories.
I ended up settling on a version supported by both.
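A rough way to script that edit instead of doing it by hand (sketch only; the sed expressions here are illustrative, not taken from my actual workflow, so check them against your generated manifest):

# drop the type: Socket line and strip the unix:// prefix from the containerd mount paths
$ clusterctl generate cluster kube-node-mgmt --infrastructure kubemark --flavor capd \
    --kubernetes-version 1.25.3 --control-plane-machine-count=1 --worker-machine-count=4 \
  | sed -e '/type: Socket/d' \
        -e 's#containerPath: unix:///run#containerPath: /run#' \
        -e 's#hostPath: unix:///run#hostPath: /run#' \
  | kubectl apply -f -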
interesting, i have not hit this yet, but perhaps we need to update those templates for the current versions?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I know that this is at least still a problem for me. I currently run all of the manifests that clusterctl generates through a manual edit before applying them. This should be a super simple fix; I'm guessing that we just remove the line from here? https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/blob/main/templates/cluster-template-capd.yaml#L120 However, it would be good to know that someone else is able to confirm this problem and that it isn't just something different about my local environment before changing something like this.
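If the line in question is the type: Socket entry, the template's extraMounts would presumably end up looking like this (the Before example above minus that one line):

extraMounts:
- containerPath: unix:///run/containerd/containerd.sock
  hostPath: unix:///run/containerd/containerd.sock
  name: containerd-sock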
/remove-lifecycle stale
@aauren i'm assuming that this is still an issue for you? i have been updating the templates.
Yup. Still an issue for me. I'm very open to the idea that something is just off about my setup. However, if it is superfluous and you're willing to remove it, that would help me a lot also.
i'll give it a try without the type field.
So I can tell you the process that I have been using to use kubemark:
# in clusterctl.yaml:
providers:
  - name: "kubemark"
    url: "https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/releases/v0.5.0/infrastructure-components.yaml"
    type: "InfrastructureProvider"
$ cat kubemark.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kubemark
nodes:
- role: control-plane
  # The below adds a mount for passing the docker socket into the containers
  extraMounts:
  - containerPath: /var/run/docker.sock
    hostPath: /var/run/docker.sock
- role: worker
  # The below adds a mount for passing the docker socket into the containers
  extraMounts:
  - containerPath: /var/run/docker.sock
    hostPath: /var/run/docker.sock
$ kind create cluster --config kubemark.yaml
Creating cluster "kubemark" ...
✓ Ensuring node image (kindest/node:v1.26.3) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kubemark"
You can now use your cluster with:
kubectl cluster-info --context kind-kubemark
Thanks for using kind! 😊
$ export CLUSTER_TOPOLOGY=true
$ clusterctl init --infrastructure kubemark,docker
Fetching providers
Installing cert-manager Version="v1.11.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.4.1" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.4.1" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.4.1" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-kubemark" Version="v0.5.0" TargetNamespace="capk-system"
Installing Provider="infrastructure-docker" Version="v1.4.1" TargetNamespace="capd-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
$ export SERVICE_CIDR=["172.17.0.0/16"]
$ export POD_CIDR=["192.168.122.0/24"]
$ clusterctl generate cluster kube-node-mgmt --infrastructure kubemark --flavor capd --kubernetes-version 1.25.3 --control-plane-machine-count=1 --worker-machine-count=4 | kubectl apply -f-
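After applying, the kubemark pods never come up; checks along these lines (illustrative commands, not a transcript from my environment) show them stuck:

$ kubectl get pods -n default
# the kubemark pods sit in ContainerCreating
$ kubectl describe pod <kubemark-pod-name> -n default
# the Events section shows the FailedMount for the containerd-sock volume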
thanks @aauren, i will try to reproduce from your instructions.
i've been working on reproducing this, and i do get a similar result when trying things as you have them listed here. but when i remove the
fwiw, i'm using capi 1.4.6 and kubernetes 1.25.3
ok, i think i've found the root cause here. for me, it's not the type field. i modified my cluster yaml to contain this for the KubemarkMachineTemplate:
could you try out that configuration @aauren?
i think this is fixed in the 0.6.0 release, but i'm hitting a different issue there now
i've created #97 to capture the followup work here.
@aauren please give the 0.6.0 release a try.
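If it helps, trying it should just be a matter of bumping the provider entry in clusterctl.yaml (the URL below assumes the same release-asset layout as the v0.5.0 entry earlier in the thread):

providers:
  - name: "kubemark"
    url: "https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/releases/v0.6.0/infrastructure-components.yaml"
    type: "InfrastructureProvider"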
Hey @elmiko! Sorry that it took me so long to get around to testing this one. I can confirm that the 0.6.0 release fixes this for me. Thanks for fixing this up for me! Cheers!
great to hear @aauren!
What steps did you take and what happened:
When creating a kubemark cluster using kind and capd, the kubemark pods stay in ContainerCreating status with an error in the description saying that they have FailedMount.
Going into the kubelet container in docker shows that the file exists and is a socket:
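A check along these lines confirms it (the node container name follows kind's <cluster>-<role> naming, so it may differ in your setup):

$ docker exec -it kubemark-worker stat -c '%F' /run/containerd/containerd.sock
# stat should report "socket" here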
Steps to Reproduce:
- Add the kubemark provider to clusterctl.yaml
- Create a kind cluster, initialize clusterctl with the kubemark provider, and generate and apply the cluster manifests
- Watch the kubemark pods in the default namespace stop at ContainerCreating
What did you expect to happen:
I expected the kubemark / CAPD cluster to come up and the pods to enter the Running state.
Anything else you would like to add:
I tried using minikube instead of kind to create the cluster and ran into the same issue with the containerd socket not mounting.
I was originally using Kubernetes 1.23.X to test against, but found the original issue where CAPD was switched to use the unix:/// style socket specification in the HostMount, and it mentioned problems with 1.24.X versions of k8s, so I switched to 1.26.3. But no matter what I try I can't seem to get past this error: kubernetes-sigs/cluster-api#6155
I'm using Docker version: 23.0.1
Environment:
- cluster-api-provider-kubemark version: v0.5.0
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release): Ubuntu 22.04.2
/kind bug
[One or more /area label. See https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/labels?q=area for the list of labels]