
Does the latest version introduce unexpected bug? #14817

Closed
bitpeng opened this issue Aug 19, 2022 · 5 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

bitpeng commented Aug 19, 2022

What Happened?

I have tried hard but failed to install k8s with minikube on CentOS 7.9. Can anybody kindly help? Thanks.

minikube version

$ minikube version
minikube version: v1.26.1
commit: 62e108c

lsb release

$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.9.2009 (Core)
Release: 7.9.2009
Codename: Core

docker version

$ docker version
Client: Docker Engine - Community
Version: 20.10.17
API version: 1.41
Go version: go1.17.11
Git commit: 100c701
Built: Mon Jun 6 23:05:12 2022
OS/Arch: linux/amd64
Context: default
Experimental: true

Server: Docker Engine - Community
Engine:
Version: 20.10.17
API version: 1.41 (minimum version 1.12)
Go version: go1.17.11
Git commit: a89b842
Built: Mon Jun 6 23:03:33 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.7
GitCommit: 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
runc:
Version: 1.1.3
GitCommit: v1.1.3-0-g6724737
docker-init:
Version: 0.19.0
GitCommit: de40ad0

kernel info

$ uname -a
Linux dev-nmg-huhehaote4-devtest-214.in.ctcdn.cn 3.10.0-1160.36.2.el7.x86_64 #1 SMP Wed Jul 21 11:57:15 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Below is the error output.

Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0819 04:13:19.246159 13005 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/3.10.0-1160.36.2.el7.x86_64\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0819 12:17:20.780852 58289 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start

There is a bridge-nf-call-iptables file on my machine

The warning [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist appears even though I created the file manually; the warning message still shows up.
$ more /proc/sys/net/bridge/bridge-nf-call-iptables
1
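The /proc/sys/net/bridge/* entries are created by the kernel only once the br_netfilter module is loaded; they cannot be created by hand under /proc. A minimal sketch of the conventional CentOS 7 setup (the file names under /etc/modules-load.d and /etc/sysctl.d are the usual convention, not taken from this thread):

```shell
# Load the bridge netfilter module now, and register it to load on every boot
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf

# Let iptables see bridged traffic, as kubeadm's preflight check expects
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
```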

More detailed output with --alsologtostderr, which I have redirected to the file:

alsologtostderr_output.log

Attach the log file

This is the minikube logs command output:
minikube_log_cmd_output.txt

minikube status

$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

Operating System

Other

Driver

Docker

@bitpeng bitpeng changed the title minikube start failed in centos7.9 Does the latest version introduce unexpected bug? Aug 19, 2022
bitpeng (Author) commented Aug 19, 2022

Installed k8s successfully with minikube v1.26.0-beta.1, so does the latest version introduce an unexpected bug?

I had no idea how to solve the issue, so I simply tried another minikube version. Surprisingly, I installed the k8s cluster successfully without much effort. Does the latest version introduce an unexpected bug?

$ minikube start --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers

  • minikube v1.26.0-beta.1 on Centos 7.9.2009 (amd64)

  • Automatically selected the docker driver. Other choices: ssh, none

  • Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers

  • Using Docker driver with the root privilege

  • Starting control plane node minikube in cluster minikube

  • Pulling base image ...

  • minikube 1.26.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.1

  • To disable this notice, run: 'minikube config set WantUpdateNotification false'

    registry.cn-hangzhou.aliyun...: 381.13 MiB / 381.13 MiB 100.00% 10.73 Mi

  • Creating docker container (CPUs=2, Memory=2200MB) ...

    kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    kubeadm: 43.12 MiB / 43.12 MiB [-------------] 100.00% 19.94 MiB p/s 2.4s
    kubelet: 118.77 MiB / 118.77 MiB [-----------] 100.00% 19.45 MiB p/s 6.3s

    • Generating certificates and keys ...
    • Booting up control plane ...
    • Configuring RBAC rules ...
  • Verifying Kubernetes components...

    • Using image registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
  • Enabled addons: default-storageclass, storage-provisioner

  • Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ps68060 commented Aug 24, 2022

I installed 1.26.1 on 19-08-2022 and minikube would start, but due to issues with Docker I re-installed it, and now I get a very similar error to the one above. This is on Windows 10.

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.

spowelljr (Member) commented

Thanks for reporting your issue. Just checking whether you deleted the instance (minikube delete) and tried starting fresh. Looking at the log, it says "Using the docker driver based on existing profile", which indicates that it's trying to restart an existing instance.

@spowelljr spowelljr added the kind/support Categorizes issue or PR as a support question. label Aug 29, 2022
medyagh (Member) commented Aug 31, 2022

@bitpeng would you kindly try with 1.26.0 and see if that works?
Make sure to run "minikube delete --all --purge" beforehand.

I would like to find out in which version the regression started.
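The fresh-start-and-bisect suggestion can be sketched as the following commands, assuming the v1.26.0 binary has been installed from the minikube releases page first; the aliyun --image-repository flag is taken from the reporter's original command:

```shell
# Wipe all minikube profiles and cached state so the retry starts clean
minikube delete --all --purge

# Retry with the older binary; if this succeeds where v1.26.1 fails,
# the regression landed between v1.26.0 and v1.26.1
minikube start --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
```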

klaases (Contributor) commented Oct 10, 2022

Hi @bitpeng – is this issue still occurring? Are additional details available? If so, please feel free to re-open the issue by commenting with /reopen. This issue will be closed as additional information was unavailable and some time has passed.

Additional information that may be helpful:

  • Whether the issue occurs with the latest minikube release

  • The exact minikube start command line used

  • Attach the full output of minikube logs, run minikube logs --file=logs.txt to create a log file

Thank you for sharing your experience!

@klaases klaases closed this as completed Oct 10, 2022