minikube fails to launch #17736

Closed
gdoaks1 opened this issue Dec 5, 2023 · 16 comments
Labels
kind/support: Categorizes issue or PR as a support question.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

gdoaks1 commented Dec 5, 2023

What Happened?

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows

Loading personal and system profiles took 1719ms.
PS C:\Users\GDoaks1> history
PS C:\Users\GDoaks1> minikube start --nodes 2 -p multinode-demo

  • [multinode-demo] minikube v1.27.1 on Microsoft Windows 11 Enterprise 10.0.22000 Build 22000
    ! Specified Kubernetes version 1.25.6 is newer than the newest supported version: v1.25.2. Use minikube config defaults kubernetes-version for details.
  • Using the docker driver based on existing profile
  • Starting control plane node multinode-demo in cluster multinode-demo
  • Pulling base image ...
  • Restarting existing docker container for "multinode-demo" ...
    ! This container is having trouble accessing https://registry.k8s.io
  • To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
  • Preparing Kubernetes v1.25.6 on Docker 20.10.18 ...
    • kubelet.cgroup-driver=systemd
      X Unable to load cached images: loading cached images: CreateFile C:\Users\GDoaks1\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.9.3: The system cannot find the path specified.
      ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
      ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
      stdout:
      [init] Using Kubernetes version: v1.25.6
      [preflight] Running pre-flight checks
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
W1205 22:07:44.170132 10430 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-apiserver:v1.25.6: output: time="2023-12-05T22:07:44Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-controller-manager:v1.25.6: output: time="2023-12-05T22:07:45Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-scheduler:v1.25.6: output: time="2023-12-05T22:07:46Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-proxy:v1.25.6: output: time="2023-12-05T22:07:46Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/pause:3.8: output: time="2023-12-05T22:07:47Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/coredns/coredns:v1.9.3: output: time="2023-12-05T22:07:48Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
W1205 22:07:50.387299 11829 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-apiserver:v1.25.6: output: time="2023-12-05T22:07:51Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-controller-manager:v1.25.6: output: time="2023-12-05T22:07:51Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-scheduler:v1.25.6: output: time="2023-12-05T22:07:52Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-proxy:v1.25.6: output: time="2023-12-05T22:07:53Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/pause:3.8: output: time="2023-12-05T22:07:54Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/coredns/coredns:v1.9.3: output: time="2023-12-05T22:07:54Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

X Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
W1205 22:07:50.387299 11829 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-apiserver:v1.25.6: output: time="2023-12-05T22:07:51Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-controller-manager:v1.25.6: output: time="2023-12-05T22:07:51Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-scheduler:v1.25.6: output: time="2023-12-05T22:07:52Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-proxy:v1.25.6: output: time="2023-12-05T22:07:53Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/pause:3.8: output: time="2023-12-05T22:07:54Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.k8s.io/coredns/coredns:v1.9.3: output: time="2023-12-05T22:07:54Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

PS C:\Users\GDoaks1> ^I

Attach the log file

PS C:\Users\GDoaks1> minikube logs --file=logs.txt

  • Profile "minikube" not found. Run "minikube profile list" to view all profiles.
    To start a cluster, run: "minikube start"
    PS C:\Users\GDoaks1>
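
Note: the profile error above is presumably because minikube logs defaults to the "minikube" profile; passing the profile used at start should collect the right logs (a sketch):

    minikube logs --file=logs.txt -p multinode-demo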

Operating System

Windows

Driver

Docker

@kundan2707 (Contributor)

@gdoaks1 Please check whether the SSL certificate is valid.
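
A quick way to see which certificate the registry actually presents is to open a TLS connection from PowerShell via .NET (a sketch; if Issuer names a corporate proxy CA rather than a public CA, the connection is being intercepted):

    # Connect to the registry and print the certificate it serves
    $tcp = New-Object Net.Sockets.TcpClient('registry.k8s.io', 443)
    $ssl = New-Object Net.Security.SslStream($tcp.GetStream())
    $ssl.AuthenticateAsClient('registry.k8s.io')
    $ssl.RemoteCertificate | Format-List Subject, Issuer
    $tcp.Close()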

@kundan2707 (Contributor)

/kind support

k8s-ci-robot added the kind/support (Categorizes issue or PR as a support question.) label Dec 5, 2023
gdoaks1 (Author) commented Dec 5, 2023

It's a fresh install, so the cert should be new. I executed minikube delete --purge --all prior to the start.

pnasrat (Contributor) commented Dec 6, 2023

@gdoaks1 Please note the documentation on this error: https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/#x509-certificate-signed-by-unknown-authority

The error "Error response from daemon: Get \"https://registry.k8s.io/v2/\": x509: certificate signed by unknown authority" is not related to minikube-generated certificates, but is most likely due to a corporate proxy on your end, so you may need to add the PEM file from your corporate IT or extract it from Chrome as in #3613 (comment).

Can you do this and see if it resolves your issue?
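
For reference, the fix described in the linked docs looks roughly like this (a sketch; my-ca.pem is a placeholder for the PEM file from your corporate IT):

    # Copy the corporate root CA into minikube's certs directory,
    # then recreate the cluster so the certificate gets embedded
    copy my-ca.pem $HOME\.minikube\certs\
    minikube delete --all
    minikube start --embed-certs --nodes 2 -p multinode-demo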

/triage needs-information

k8s-ci-robot added the triage/needs-information (Indicates an issue needs more information in order to work on it.) label Dec 6, 2023
gdoaks1 (Author) commented Dec 6, 2023

Now I'm getting:
PS C:\Users\GDoaks1> minikube start --nodes 2 -p multinode-demo

  • [multinode-demo] minikube v1.27.1 on Microsoft Windows 11 Enterprise 10.0.22000 Build 22000
  • Automatically selected the docker driver
  • Using Docker Desktop driver with root privileges
  • Starting control plane node multinode-demo in cluster multinode-demo
  • Pulling base image ...
  • Creating docker container (CPUs=2, Memory=3950MB) ...
    ! This container is having trouble accessing https://registry.k8s.io
  • To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
  • Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
    • Generating certificates and keys ...
    • Booting up control plane ...
      ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
      stdout:
      [init] Using Kubernetes version: v1.25.2
      [preflight] Running pre-flight checks
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      [certs] Using certificateDir folder "/var/lib/minikube/certs"
      [certs] Using existing ca certificate authority
      [certs] Using existing apiserver certificate and key on disk
      [certs] Generating "apiserver-kubelet-client" certificate and key
      [certs] Generating "front-proxy-ca" certificate and key
      [certs] Generating "front-proxy-client" certificate and key
      [certs] Generating "etcd/ca" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [localhost multinode-demo] and IPs [192.168.49.2 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-demo] and IPs [192.168.49.2 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      [certs] Generating "sa" key and public key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1206 17:14:06.937450 1219 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

  • Generating certificates and keys ...
  • Booting up control plane ...

X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1206 17:18:12.397348 3601 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1206 17:18:12.397348 3601 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

PS C:\Users\GDoaks1>

pnasrat (Contributor) commented Dec 6, 2023

@gdoaks1 As linked in the new error, see https://minikube.sigs.k8s.io/docs/reference/networking/proxy/

If you have a corporate proxy, please ensure you are running minikube with the appropriate environment variables set:

set HTTP_PROXY=http://<proxy hostname:port>
set HTTPS_PROXY=https://<proxy hostname:port>
set NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24

minikube start

Can you try stopping, then starting with those environment variables correctly set (changing <proxy hostname:port> for your environment, e.g. set HTTP_PROXY=http://proxy.example.com:3128)?
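
Note that set is cmd.exe syntax; since your transcript shows PowerShell, the equivalent there would be roughly (same placeholder host/port):

    # Set proxy variables for the current PowerShell session, then start minikube
    $env:HTTP_PROXY  = 'http://proxy.example.com:3128'
    $env:HTTPS_PROXY = 'https://proxy.example.com:3128'
    $env:NO_PROXY    = 'localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24'
    minikube start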

gdoaks1 (Author) commented Dec 8, 2023

This is on a company PC, but I'm using home internet with no VPN... there shouldn't be a proxy here.

gdoaks1 (Author) commented Dec 8, 2023

Or what value should I assign?

pnasrat (Contributor) commented Dec 8, 2023

Unfortunately I do not have access to any Windows systems to help diagnose local network configuration, but here are some things you can check to identify where the issue might be (most likely it is the setup of your corporate machine):

  1. Can regular Docker containers access the internet? E.g., check that docker run --rm curlimages/curl:8.5.0 -LI https://registry.k8s.io/ returns an HTTP 200 response.
  2. Check your proxy configuration in the Windows GUI: select the Start button, then Settings > Network & internet > Proxy. See "Use a proxy server in Windows" for more; note this may be configured in a number of ways (see the sketch after this list).
  3. If you use Chrome or Edge, you can visit chrome://net-internals/#proxy or possibly edge://net-internals/#proxy to view what the browser thinks it is using as a proxy.
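
For item 2, the effective proxy can also be queried from a terminal (a sketch; netsh shows the machine-wide WinHTTP proxy, while the registry key holds the per-user WinINET settings the GUI edits):

    netsh winhttp show proxy
    Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings' |
        Select-Object ProxyEnable, ProxyServer, AutoConfigURL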

gdoaks1 (Author) commented Dec 11, 2023

PS C:\Users\GDoaks1> docker run --rm curlimages/curl:8.5.0 -LI https://registry.k8s.io/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
PS C:\Users\GDoaks1>

gdoaks1 (Author) commented Dec 11, 2023

But with the -V option it works:

PS C:\Users\GDoaks1> curl -V

cmdlet Invoke-WebRequest at command pipeline position 1
Supply values for the following parameters:
Uri: https://registry.k8s.io/
VERBOSE: GET with 0-byte payload
VERBOSE: received -1-byte response of content type text/html; charset=utf-8

StatusCode : 200
StatusDescription : OK
Content :
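
Note that in Windows PowerShell curl is an alias for Invoke-WebRequest, which validates TLS against the Windows certificate store, whereas the curl inside the container uses its own CA bundle. That would explain the host request succeeding while the container request fails. A sketch comparing the two paths:

    # Host: goes through .NET/SChannel and trusts the Windows cert store
    Invoke-WebRequest -Uri https://registry.k8s.io/ -Method Head

    # Container: uses curl's bundled CAs, which lack a corporate root CA
    docker run --rm curlimages/curl:8.5.0 -LI https://registry.k8s.io/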

gdoaks1 (Author) commented Dec 11, 2023

Adding proxy readout

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale.) label Mar 10, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 9, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) May 9, 2024
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
