KIC drivers: mount more Linux folders into the container for non-standard /lib/modules/ #8370
Looks like
I believe I have the same error. I am also using a proxy, but all of that is configured in the environment (and also with docker-compose), so I should not have to pass anything. It seems minikube downloaded k8s just fine. I did
Running with Since my system has both uppercase and lowercase variants defined for maximum compatibility...
...I needed to update both...
...which looks good (the ❗ error has now disappeared)...
...but I still got the same error! @sharifelgamal Ubuntu 18.04
So I read starting-a-cluster, which says that I still have to pass something, and tried
Can you please try starting up without the It shouldn't be necessary, as minikube will program it automatically based on your environment. If you are still having issues, can you please share the output of
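As an aside, since minikube picks up proxy settings from the environment automatically, a quick way to confirm exactly which proxy variables the current shell exports (both the lowercase and uppercase conventions mentioned above) is a small bash loop like this; the variable names are the conventional pairs, not anything minikube-specific:

```shell
# Print every proxy-related variable the shell currently has set.
# ${!var:-<unset>} uses bash indirect expansion: the value of the
# variable whose *name* is stored in $var, or "<unset>" if empty.
for var in http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY; do
  printf '%s=%s\n' "$var" "${!var:-<unset>}"
done
```

This makes it easy to spot a mismatch where, say, only the uppercase variant was updated.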
Hi, I am facing the same issue.
Also, I would like to see the output from your system of:
This sounds to me like one of two types of problems:
It'd be nice to sort out which one it is.
FWIW, the Ubuntu seems to lack the
Kubeadm has a hardcoded list of paths to search for the kernel configuration file: There are two possible locations where it could have found it, but neither is mapped into KIC:
The only config that we have under The
To really fix this issue, it should mount the /boot directory as well. Most likely only the needed files, e.g. /boot/config-5.4.0-48-generic.
The use of "FATAL" here is also misleading; apparently kubeadm can cope with the missing config just fine...
For instance, in our own OS (minikube.iso), we don't have either of these directories available at runtime. But we did start with IKCONFIG (8e457d4), which exposes /proc/config.gz
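The lookup described above can be approximated with a short bash function. The path list here illustrates the idea rather than copying kubeadm's exact hardcoded list, and the `prefix` parameter is purely an illustrative addition so the search root can be redirected for testing:

```shell
# find_kernel_config [PREFIX]: look for the running kernel's config file
# in the usual locations. PREFIX is "" on a real system; it is a
# parameter here only so the function can be exercised against a fake root.
find_kernel_config() {
  local prefix="${1:-}"
  local kver
  kver="$(uname -r)"
  local cfg
  for cfg in "$prefix/proc/config.gz" "$prefix/boot/config-$kver"; do
    if [ -r "$cfg" ]; then
      echo "$cfg"
      return 0
    fi
  done
  # Inside the KIC container this is the step that fails: the host's
  # /lib/modules/<kver> is not mounted, so modprobe cannot find 'configs'
  # (the module that would populate /proc/config.gz).
  return 1
}
```

On a host where neither path is mounted into the container, the function returns non-zero, which matches the `modprobe: FATAL: Module configs not found` warning in the log below.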
@afbjorklund I think we could add these folders to the KIC container if it is on Linux. Since on macOS and Windows we only deal with docker-machine's VM, we could add a case so that only on Linux we mount the extra folders.
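The suggestion above can be sketched in Go (minikube's language). The function and field names here are illustrative, not minikube's actual API; the point is only that the extra bind mounts are computed conditionally, because on macOS and Windows the Docker daemon runs inside its own VM whose /lib/modules does not correspond to the outer host:

```go
package main

import "fmt"

// extraKernelMounts returns host directories that could be bind-mounted
// into the KIC container so kubeadm can locate kernel modules and the
// kernel config. hostOS is passed in (normally runtime.GOOS) purely to
// keep the function testable on any platform.
func extraKernelMounts(hostOS string) []string {
	// On darwin/windows the daemon lives in docker-machine's VM, so
	// mounting the outer host's directories would be wrong; skip them.
	if hostOS != "linux" {
		return nil
	}
	return []string{"/lib/modules", "/boot"}
}

func main() {
	fmt.Println(extraKernelMounts("linux"))  // [/lib/modules /boot]
	fmt.Println(extraKernelMounts("darwin")) // []
}
```

Mounting only the needed files (e.g. the running kernel's config under /boot) rather than whole directories, as suggested earlier in the thread, would be a narrower variant of the same idea.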
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Is there a specific OS that has this problem?
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with
Send feedback to sig-contributor-experience at kubernetes/community.
@okhwan do you mind sharing what OS and what Linux version you were using, so maybe we could add an integration test for this?
@ilya-zuyev could the failure to load the module config be related to our containerd
@medyagh I'm facing an issue similar to the one described here on Fedora 37 Workstation Edition. Seems like a regression - this workaround has worked for me. |
Steps to reproduce the issue:
export http_proxy="{MY Proxy addr}"
export https_proxy="{MY Proxy addr}"
export no_proxy="localhost,127.0.0.1,192.168.99.0/24,10.96.0.0/12,192.168.39.0/24"
minikube start --docker-env http_proxy=$http_proxy --docker-env https_proxy=$https_proxy --docker-env no_proxy=$no_proxy
Full output of failed command:
I0604 19:52:34.906603 30956 logs.go:117] Gathering logs for kube-apiserver [d02e5236b2cf] ...
I0604 19:52:34.906623 30956 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 d02e5236b2cf"
I0604 19:52:34.951583 30956 logs.go:117] Gathering logs for etcd [fcbe7846e192] ...
I0604 19:52:34.951603 30956 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fcbe7846e192"
I0604 19:52:34.993177 30956 logs.go:117] Gathering logs for kube-scheduler [8e201d6ef5b7] ...
I0604 19:52:34.993196 30956 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 8e201d6ef5b7"
I0604 19:52:35.032671 30956 logs.go:117] Gathering logs for describe nodes ...
I0604 19:52:35.032693 30956 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0604 19:52:39.438305 30956 ssh_runner.go:188] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (4.405567143s)
I0604 19:52:39.438433 30956 logs.go:117] Gathering logs for container status ...
I0604 19:52:39.438470 30956 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0604 19:52:39.485743 30956 out.go:201] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.3.0-53-generic
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.509685 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.
stderr:
W0604 10:50:06.310736 11552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.3.0-53-generic\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0604 10:50:07.762778 11552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0604 10:50:07.763580 11552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher