minikube start fails with btrfs #12569
/kind support
I have a very similar situation on Fedora, with two machines: a desktop and a notebook. Minikube works perfectly on the desktop, which I configured a couple of weeks ago. Today I tried to configure it on my laptop and it fails to start. When I called
while on my working machine I don't see any restarts of kubelet. The kernel versions are the same; the differences that come to mind are: the good machine has the nvidia runtime for docker enabled and its hard drive is not encrypted, while the non-working laptop does not have the nvidia runtime and its hard drive is encrypted. The good machine has had the btrfs driver enabled since day one and that has never changed; I tried both btrfs and overlay2 on the broken machine and it didn't help. Logs, just in case.
Basically, what fixed the issue for me was reinstalling the OS with ext4 as the filesystem instead of btrfs, which I previously had. @slabko it doesn't matter which driver you specify for docker; from what I understand, it's your filesystem that matters here. If you don't want to reinstall, this fix supposedly resolves the issue: #7923 (comment). Oh, and by the way, the encryption might be the issue here (I also encountered this problem on an encrypted btrfs filesystem), because when I tested btrfs on VMs without encryption, minikube worked without any problems. So for me it's either ext4, or btrfs without encryption, to make it work.
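If you're not sure which case applies to your machine, a quick way to check which filesystem backs your root device (these are standard Linux tools; the values in the comments are only examples, not guaranteed output):

```shell
# Print the filesystem type backing "/" -- btrfs vs. ext4 is what matters in this thread
findmnt -n -o FSTYPE /

# Same check via coreutils; note that ext4 is reported as "ext2/ext3" by stat
stat -f -c %T /
```

If the first command prints "btrfs" (especially on an encrypted disk), you are likely hitting the situation described above.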
@dabljues Thank you very much for your help, the following command actually solved my problem:
In the meantime I also tried kind; it seems to work nicely with btrfs and encryption as well.
Yeah, btrfs is a known issue with minikube: #7923. I'm glad there's a workaround!
Thank you |
Kubernetes 1.23 will support |
@slabko Just to mention, your command lacks a
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
No problem with kernel 5.15.32 and minikube v1.25.2 anymore
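For anyone landing here later, the comparison implied by this comment can be sketched as below. The 5.15.32 threshold is just the kernel version this commenter reported as working, not a documented fix boundary:

```python
import platform
import re

def kernel_tuple(release: str) -> tuple:
    """Parse a '5.15.32-arch1-1' style release string into a comparable (5, 15, 32)."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    return tuple(int(x) for x in m.groups()) if m else (0, 0, 0)

REPORTED_GOOD = (5, 15, 32)  # kernel version reported working in this thread
current = kernel_tuple(platform.release())
print(f"running kernel {current}; reported-good {REPORTED_GOOD}; "
      f"at or above: {current >= REPORTED_GOOD}")
```

Tuple comparison in Python is lexicographic, so (5, 14, 7) correctly sorts below (5, 15, 32).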
Based on @HerHde's comment, this seems to be resolved, so I'm going to close this issue. |
Steps to reproduce the issue:
Run minikube start

(Run minikube logs --file=logs.txt and drag and drop the log file into this issue.)

Full output of failed command (minikube start):
😄 minikube v1.23.2 on Arch
✨ Automatically selected the docker driver. Other choices: ssh, none
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.22.2 preload ...
> preloaded-images-k8s-v13-v1...: 511.84 MiB / 511.84 MiB 100.00% 78.83 Mi
🔥 Creating docker container (CPUs=2, Memory=7800MB) ...
🐳 Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
stderr:
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
So now I'll list the things I've tried:

minikube start with older k8s versions like 1.21.0, 1.20.0 etc.

Then, I went and installed everything on a virtual machine, same OS configuration etc. There I just installed docker, enabled its service, ran minikube start and boom, everything worked (same version of docker, same filesystem, same minikube version etc.). I didn't need to install kubelet nor kubectl. The only thing that was different was the kernel - I have 5.14.7, on the VM I had 5.13.9. So I even downgraded the kernel, rebooted, and restarted the docker service (so I could see the downgraded kernel under docker info). The same thing happens.

I don't know if this matters (because I've tested, and you don't need kubelet installed to run minikube start), but after I installed kubelet, I was checking journalctl frequently, and I spotted this error occurring all the time:

This happens on minikube stdout when I have kubelet.service enabled, but don't start it (or I stop it, basically).

I've been debugging this for 8 straight hours now, searched all the issues and SO questions, with no luck, at least for me (though many of them have been open for a long time). Do any of you guys know what may be happening here? If there's anything that I forgot to attach here - some logs, system info - I can provide it.