minikube start --image-mirror-country=cn failed #15270

Closed
Whitroom opened this issue Nov 2, 2022 · 6 comments
Labels
l/zh-CN Issues in or relating to Chinese

Comments


Whitroom commented Nov 2, 2022

Hi!

I have just started learning Kubernetes and tried to use minikube to set up a simple test environment. These problems occurred during initialization. When I first checked the logs, I found that some images could not be found on the Aliyun mirror, for example:

daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5: Error: No such image: registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5

I therefore pre-pulled the required images from this mirror source, but running the command again produced the same failure, only with somewhat fewer errors in the logs than at the beginning.
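
For reference, the pre-pull was done roughly like this (the image names and tags are the ones that appear in the logs below; the exact list I used may have differed slightly):

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5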

Command to reproduce it

minikube start --image-mirror-country=cn

Full output of the failed command


😄 Microsoft Windows 11 Home 10.0.22623 Build 22623 上的 minikube v1.27.0
❗ Kubernetes 1.25.0 has a known issue with resolv.conf. minikube is using a workaround that should work for most use cases.
❗ For more information, see: kubernetes/kubernetes#112135
✨ 自动选择 docker 驱动
🎉 minikube 1.27.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.27.1
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'

❗ 您所在位置的已知存储库都无法访问。正在将 registry.cn-hangzhou.aliyuncs.com/google_containers 用作后备存储库。
✅ 正在使用镜像存储库 registry.cn-hangzhou.aliyuncs.com/google_containers
📌 Using Docker Desktop driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
❗ This container is having trouble accessing https://registry.cn-hangzhou.aliyuncs.com/google_containers
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
> kubelet.sha256: 64 B / 64 B [-------------------------] 100.00% ? p/s 0s
> kubectl.sha256: 64 B / 64 B [-------------------------] 100.00% ? p/s 0s
> kubeadm.sha256: 64 B / 64 B [-------------------------] 100.00% ? p/s 0s
> kubeadm: 41.76 MiB / 41.76 MiB [--------------] 100.00% 2.20 MiB p/s 19s
> kubectl: 42.92 MiB / 42.92 MiB [--------------] 100.00% 2.09 MiB p/s 21s
> kubelet: 108.93 MiB / 108.93 MiB [------------] 100.00% 2.12 MiB p/s 52s

▪ Generating certificates and keys ...
▪ Booting up control plane ...

💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1102 10:50:37.383260 1935 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

▪ Generating certificates and keys ...
▪ Booting up control plane ...

💣 开启 cluster 时出错: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1102 10:54:42.214388 4671 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose
│ │
│ Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue. │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1102 10:54:42.214388 4671 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡 建议:检查 'journalctl -xeu kubelet' 的输出,尝试启动 minikube 时添加参数 --extra-config=kubelet.cgroup-driver=systemd
🍿 Related issue: #4172
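
The output above suggests checking 'journalctl -xeu kubelet' and retrying with --extra-config=kubelet.cgroup-driver=systemd. If I understand the hint correctly, the retry would look something like this (not yet verified on my side):

minikube delete
minikube start --image-mirror-country=cn --extra-config=kubelet.cgroup-driver=systemd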

Output of the minikube logs command


stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
❗ unable to fetch logs for: describe nodes
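
If more kubelet detail would help, I assume it can be collected from inside the node with something like the commands the output recommends:

minikube ssh -- sudo systemctl status kubelet
minikube ssh -- sudo journalctl -xeu kubelet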

Operating system version used
Microsoft Windows 11 Home 10.0.22623 Build 22623
Docker version:

Client:
Cloud integration: v1.0.28
Version: 20.10.17
API version: 1.41
Go version: go1.17.11
Git commit: 100c701
Built: Mon Jun 6 23:09:02 2022
OS/Arch: windows/amd64
Context: default
Experimental: true

Server: Docker Desktop 4.11.0 (83626)
Engine:
Version: 20.10.17
API version: 1.41 (minimum version 1.12)
Go version: go1.17.11
Git commit: a89b842
Built: Mon Jun 6 23:01:23 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.6
GitCommit: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc:
Version: 1.1.2
GitCommit: v1.1.2-0-ga916309
docker-init:
Version: 0.19.0
GitCommit: de40ad0

minikube version:
minikube version: v1.27.0
commit: 4243041
Hoping to get some help from the experts here!

Whitroom added the l/zh-CN (Issues in or relating to Chinese) label on Nov 2, 2022
Whitroom commented Nov 2, 2022

Content of the file produced by minikube logs --file='':

==> Audit <==
|---------|---------------------------|----------|----------------|---------|---------------------|----------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------|----------|----------------|---------|---------------------|----------|
| start | --image-mirror-country=cn | minikube | WHITROOM\10620 | v1.27.0 | 02 Nov 22 18:48 CST | |
|---------|---------------------------|----------|----------------|---------|---------------------|----------|

==> Last Start <==
Log file created at: 2022/11/02 18:48:42
Running on machine: Whitroom
Binary: Built with gc go1.19.1 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1102 18:48:42.859720 47284 out.go:296] Setting OutFile to fd 96 ...
I1102 18:48:42.869896 47284 out.go:309] Setting ErrFile to fd 100...
W1102 18:48:42.892030 47284 root.go:310] Error reading config file at C:\Users\10620.minikube\config\config.json: open C:\Users\10620.minikube\config\config.json: The system cannot find the path specified.
I1102 18:48:42.900280 47284 out.go:303] Setting JSON to false
I1102 18:48:42.914243 47284 start.go:115] hostinfo: {"hostname":"Whitroom","uptime":11580,"bootTime":1667374542,"procs":301,"os":"windows","platform":"Microsoft Windows 11 Home","platformFamily":"Standalone Workstation","platformVersion":"10.0.22623 Build 22623","kernelVersion":"10.0.22623 Build 22623","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"3576888c-e24f-47b0-945a-1e1c3ac44173"}
W1102 18:48:42.914243 47284 start.go:123] gopshost.Virtualization returned error: not implemented yet
I1102 18:48:42.916530 47284 out.go:177] 😄 Microsoft Windows 11 Home 10.0.22623 Build 22623 上的 minikube v1.27.0
I1102 18:48:42.918196 47284 notify.go:214] Checking for updates...
W1102 18:48:42.918746 47284 preload.go:295] Failed to list preload files: open C:\Users\10620.minikube\cache\preloaded-tarball: The system cannot find the file specified.
W1102 18:48:42.918746 47284 out.go:239] ❗ Kubernetes 1.25.0 has a known issue with resolv.conf. minikube is using a workaround that should work for most use cases.
W1102 18:48:42.919290 47284 out.go:239] ❗ For more information, see: kubernetes/kubernetes#112135
I1102 18:48:42.919290 47284 driver.go:365] Setting default libvirt URI to qemu:///system
I1102 18:48:42.919290 47284 global.go:111] Querying for installed drivers using PATH=C:\Program Files\Microsoft\jdk-17.0.2.8-hotspot\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C:\WINDOWS\System32\OpenSSH;C:\Program Files\nodejs;C:\Program Files (x86)\Tencent\微信web开发者工具\dll;C:\Program Files\Docker\Docker\resources\bin;C:\ProgramData\DockerDesktop\version-bin;C:\Program Files\AMD\AMDuProf\bin;C:\Program Files\Go\bin;C:\Program Files\dotnet;C:\Program Files\gs\gs10.00.0\bin;C:\Program Files\Kubernetes\Minikube;C:\Program Files\MySQL\MySQL Shell 8.0\bin;C:\Users\10620\AppData\Local\Programs\Python\Python310\Scripts;C:\Users\10620\AppData\Local\Programs\Python\Python310;C:\Users\10620\AppData\Local\Microsoft\WindowsApps;C:\Users\10620\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\10620\AppData\Roaming\npm;C:\Program Files\Git\bin;C:\Msys64\mingw64\bin;C:\Flutter\bin;C:\Go\bin;C:\platform-tools;C:\FFmpeg\bin;C:\Gradle\bin;C:\apache-maven\bin;C:\Program Files (x86)\Tencent\QQGameTempest\Hall.57821;C:\Users\10620\go\bin;C:\Users\10620.dotnet\tools
I1102 18:48:43.092135 47284 docker.go:137] docker version: linux-20.10.17
I1102 18:48:43.095082 47284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1102 18:48:43.646424 47284 info.go:265] docker info: {ID:MTN4:UJVP:JW4Q:XH7J:SESZ:2I2H:BZTF:T4NX:HAOW:ZMVN:MOBH:CC3F Containers:5 ContainersRunning:0 ContainersPaused:0 ContainersStopped:5 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:53 SystemTime:2022-11-02 10:48:43.224866465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:5.15.68.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:7997632512 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I1102 18:48:43.646931 47284 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I1102 18:48:44.578848 47284 global.go:119] hyperv default: true priority: 8, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:Hyper-V requires Administrator privileges Reason: Fix:Right-click the PowerShell icon and select Run as Administrator to open PowerShell in elevated mode. Doc: Version:}
I1102 18:48:44.587936 47284 global.go:119] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in %!P(MISSING)ATH%!R(MISSING)eason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I1102 18:48:44.596603 47284 global.go:119] qemu2 default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "qemu-system-x86_64": executable file not found in %!P(MISSING)ATH%!R(MISSING)eason: Fix:Install qemu-system Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/qemu/ Version:}
I1102 18:48:44.596603 47284 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I1102 18:48:44.615579 47284 global.go:119] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:}
I1102 18:48:44.624114 47284 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in %!P(MISSING)ATH%!R(MISSING)eason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:}
I1102 18:48:44.624654 47284 driver.go:300] not recommending "ssh" due to default: false
I1102 18:48:44.624726 47284 driver.go:295] not recommending "hyperv" due to health: Hyper-V requires Administrator privileges
I1102 18:48:44.624726 47284 driver.go:335] Picked: docker
I1102 18:48:44.624726 47284 driver.go:336] Alternatives: [ssh]
I1102 18:48:44.624726 47284 driver.go:337] Rejects: [hyperv podman qemu2 virtualbox vmware]
I1102 18:48:44.626320 47284 out.go:177] ✨ 自动选择 docker 驱动
I1102 18:48:44.627398 47284 start.go:284] selected driver: docker
I1102 18:48:44.627398 47284 start.go:808] validating driver "docker" against
I1102 18:48:44.627925 47284 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I1102 18:48:44.632617 47284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1102 18:48:45.124536 47284 info.go:265] docker info: {ID:MTN4:UJVP:JW4Q:XH7J:SESZ:2I2H:BZTF:T4NX:HAOW:ZMVN:MOBH:CC3F Containers:5 ContainersRunning:0 ContainersPaused:0 ContainersStopped:5 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:53 SystemTime:2022-11-02 10:48:44.763971265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:5.15.68.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:7997632512 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I1102 18:48:45.125110 47284 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I1102 18:48:45.173692 47284 start_flags.go:377] Using suggested 3900MB memory alloc based on sys=15741MB, container=7627MB
I1102 18:48:45.174306 47284 start.go:878] selecting image repository for country cn ...
I1102 18:48:45.191330 47284 lock.go:35] WriteFile acquiring C:\Users\10620.minikube\last_update_check: {Name:mkfe24c29fdcf8106bc9a34b9f87dea2eadf4b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:48:45.192866 47284 out.go:177] 🎉 minikube 1.27.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.27.1
I1102 18:48:45.192866 47284 out.go:177] 💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'

W1102 18:48:56.440874 47284 out.go:239] ❗ 您所在位置的已知存储库都无法访问。正在将 registry.cn-hangzhou.aliyuncs.com/google_containers 用作后备存储库。
I1102 18:48:56.444969 47284 out.go:177] ✅ 正在使用镜像存储库 registry.cn-hangzhou.aliyuncs.com/google_containers
I1102 18:48:56.446642 47284 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I1102 18:48:56.447824 47284 out.go:177] 📌 Using Docker Desktop driver with root privileges
I1102 18:48:56.448572 47284 cni.go:95] Creating CNI manager for ""
I1102 18:48:56.448572 47284 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1102 18:48:56.448572 47284 start_flags.go:310] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\10620:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I1102 18:48:56.450240 47284 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I1102 18:48:56.451881 47284 cache.go:120] Beginning downloading kic base image for docker with docker
I1102 18:48:56.452424 47284 out.go:177] 🚜 Pulling base image ...
I1102 18:48:56.454586 47284 image.go:75] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
I1102 18:48:56.455116 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver_v1.25.0
I1102 18:48:56.455207 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager_v1.25.0
I1102 18:48:56.455207 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner:v5 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner_v5
I1102 18:48:56.455207 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd:3.5.4-0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd_3.5.4-0
I1102 18:48:56.455207 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause:3.8 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause_3.8
I1102 18:48:56.455207 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy_v1.25.0
I1102 18:48:56.455207 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler_v1.25.0
I1102 18:48:56.455207 47284 profile.go:148] Saving config to C:\Users\10620.minikube\profiles\minikube\config.json ...
I1102 18:48:56.455207 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns:v1.9.3 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns_v1.9.3
I1102 18:48:56.455207 47284 lock.go:35] WriteFile acquiring C:\Users\10620.minikube\profiles\minikube\config.json: {Name:mk38b37295ee7d0ad40e65441fe3e9cd9394208e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:48:56.626339 47284 cache.go:107] acquiring lock: {Name:mk3521acf93b489ad9d422de4e93ecd95005034d Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1102 18:48:56.626339 47284 cache.go:107] acquiring lock: {Name:mk84e36c14373b09a9f44c3bb614934faad36383 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1102 18:48:56.626339 47284 cache.go:107] acquiring lock: {Name:mkf4ba1dee819d7c374279cca20dbe9db782dcd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1102 18:48:56.626339 47284 cache.go:107] acquiring lock: {Name:mkb4fdcf10fe7d505dde92e50989f47a6617d4ab Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1102 18:48:56.626339 47284 cache.go:107] acquiring lock: {Name:mk75340c0c892ea879581e532bd831c690bd0bac Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1102 18:48:56.626339 47284 cache.go:107] acquiring lock: {Name:mk9f769d53606fe3d061e213f7568ffd2061114a Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1102 18:48:56.626339 47284 cache.go:107] acquiring lock: {Name:mk018ab6e883a21fba2fc1ac58c1b0dbd8c06bcf Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1102 18:48:56.626339 47284 cache.go:107] acquiring lock: {Name:mkf2b70c683aaef286e7611bca1b366a46d45b55 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1102 18:48:56.629616 47284 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
I1102 18:48:56.630167 47284 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0
I1102 18:48:56.630167 47284 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0
I1102 18:48:56.630167 47284 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
I1102 18:48:56.630167 47284 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0
I1102 18:48:56.630723 47284 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0
I1102 18:48:56.630811 47284 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0
I1102 18:48:56.630811 47284 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
I1102 18:48:56.637409 47284 image.go:173] found registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 locally: &{ref:{Repository:{Registry:{insecure:false registry:registry.cn-hangzhou.aliyuncs.com} repository:google_containers/pause} tag:3.8 original:registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8} opener:0xc00019e070 tarballImage: id:0xc00008c980 once:{done:0 m:{state:0 sema:0}} err:}
I1102 18:48:56.637409 47284 cache.go:161] opening: \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause_3.8
I1102 18:48:56.647898 47284 image.go:173] found registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:registry.cn-hangzhou.aliyuncs.com} repository:google_containers/kube-proxy} tag:v1.25.0 original:registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0} opener:0xc000e82000 tarballImage: id:0xc0006f2380 once:{done:0 m:{state:0 sema:0}} err:}
I1102 18:48:56.647898 47284 cache.go:161] opening: \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy_v1.25.0
I1102 18:48:56.661668 47284 image.go:173] found registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 locally: &{ref:{Repository:{Registry:{insecure:false registry:registry.cn-hangzhou.aliyuncs.com} repository:google_containers/etcd} tag:3.5.4-0 original:registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0} opener:0xc000b46000 tarballImage: id:0xc000e0a100 once:{done:0 m:{state:0 sema:0}} err:}
I1102 18:48:56.661668 47284 cache.go:161] opening: \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd_3.5.4-0
I1102 18:48:56.670591 47284 image.go:173] found registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3 locally: &{ref:{Repository:{Registry:{insecure:false registry:registry.cn-hangzhou.aliyuncs.com} repository:google_containers/coredns} tag:v1.9.3 original:registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3} opener:0xc000e24000 tarballImage: id:0xc000a06080 once:{done:0 m:{state:0 sema:0}} err:}
I1102 18:48:56.671110 47284 cache.go:161] opening: \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns_v1.9.3
I1102 18:48:56.683711 47284 image.go:173] found registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:registry.cn-hangzhou.aliyuncs.com} repository:google_containers/kube-scheduler} tag:v1.25.0 original:registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0} opener:0xc000f680e0 tarballImage: id:0xc00008d560 once:{done:0 m:{state:0 sema:0}} err:}
I1102 18:48:56.683711 47284 cache.go:161] opening: \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler_v1.25.0
I1102 18:48:56.692654 47284 image.go:173] found registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:registry.cn-hangzhou.aliyuncs.com} repository:google_containers/kube-apiserver} tag:v1.25.0 original:registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0} opener:0xc000f68070 tarballImage: id:0xc00008ca80 once:{done:0 m:{state:0 sema:0}} err:}
I1102 18:48:56.692654 47284 cache.go:161] opening: \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver_v1.25.0
I1102 18:48:56.702117 47284 image.go:79] Found registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
I1102 18:48:56.702117 47284 cache.go:142] registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
I1102 18:48:56.702117 47284 cache.go:208] Successfully downloaded all kic artifacts
I1102 18:48:56.702374 47284 start.go:364] acquiring machines lock for minikube: {Name:mk5adb2005a3434c4180d65af384ea0700ff3924 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1102 18:48:56.702374 47284 start.go:368] acquired machines lock for "minikube" in 0s
I1102 18:48:56.702374 47284 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\10620:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1102 18:48:56.702885 47284 start.go:125] createHost starting for "" (driver="docker")
I1102 18:48:56.705021 47284 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
I1102 18:48:56.706223 47284 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I1102 18:48:56.706223 47284 client.go:168] LocalClient.Create starting
I1102 18:48:56.707342 47284 main.go:134] libmachine: Creating CA: C:\Users\10620.minikube\certs\ca.pem
I1102 18:48:56.710189 47284 cache.go:156] \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause_3.8 exists
I1102 18:48:56.710189 47284 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8" -> "C:\Users\10620\.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause_3.8" took 254.9825ms
I1102 18:48:56.710189 47284 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause_3.8 succeeded
I1102 18:48:56.712926 47284 image.go:173] found registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 locally: &{ref:{Repository:{Registry:{insecure:false registry:registry.cn-hangzhou.aliyuncs.com} repository:google_containers/storage-provisioner} tag:v5 original:registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5} opener:0xc000f68150 tarballImage: id:0xc00008d720 once:{done:0 m:{state:0 sema:0}} err:}
I1102 18:48:56.713000 47284 cache.go:161] opening: \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner_v5
I1102 18:48:56.721951 47284 image.go:173] found registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:registry.cn-hangzhou.aliyuncs.com} repository:google_containers/kube-controller-manager} tag:v1.25.0 original:registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0} opener:0xc000b460e0 tarballImage: id:0xc000e0a300 once:{done:0 m:{state:0 sema:0}} err:}
I1102 18:48:56.721951 47284 cache.go:161] opening: \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager_v1.25.0
I1102 18:48:56.785770 47284 main.go:134] libmachine: Creating client certificate: C:\Users\10620.minikube\certs\cert.pem
I1102 18:48:57.083910 47284 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1102 18:48:57.398938 47284 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1102 18:48:57.403584 47284 network_create.go:272] running [docker network inspect minikube] to gather additional debugging logs...
I1102 18:48:57.403584 47284 cli_runner.go:164] Run: docker network inspect minikube
W1102 18:48:57.709164 47284 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I1102 18:48:57.709164 47284 network_create.go:275] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I1102 18:48:57.709164 47284 network_create.go:277] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I1102 18:48:57.714662 47284 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1102 18:48:58.111291 47284 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000e84068] misses:0}
I1102 18:48:58.111291 47284 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1102 18:48:58.111291 47284 network_create.go:115] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1102 18:48:58.116260 47284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I1102 18:48:58.513019 47284 network_create.go:99] docker network minikube 192.168.49.0/24 created
I1102 18:48:58.513019 47284 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I1102 18:48:58.522529 47284 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1102 18:48:58.850997 47284 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I1102 18:48:59.235778 47284 oci.go:103] Successfully created a docker volume minikube
I1102 18:48:59.240281 47284 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
I1102 18:48:59.411977 47284 cache.go:156] \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner_v5 exists
I1102 18:48:59.411977 47284 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5" -> "C:\Users\10620\.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner_v5" took 2.9567702s
I1102 18:48:59.411977 47284 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner_v5 succeeded
I1102 18:49:01.940687 47284 cli_runner.go:217] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib: (2.7004062s)
I1102 18:49:01.940687 47284 oci.go:107] Successfully prepared a docker volume minikube
I1102 18:49:01.940687 47284 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
W1102 18:49:02.113728 47284 preload.go:115] https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube-preloaded-volume-tarballs/v18/v1.25.0/preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4 status code: 404
I1102 18:49:02.122356 47284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1102 18:49:03.358640 47284 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2362841s)
I1102 18:49:03.358640 47284 info.go:265] docker info: {ID:MTN4:UJVP:JW4Q:XH7J:SESZ:2I2H:BZTF:T4NX:HAOW:ZMVN:MOBH:CC3F Containers:5 ContainersRunning:0 ContainersPaused:0 ContainersStopped:5 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:62 SystemTime:2022-11-02 10:49:02.604789905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:5.15.68.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:7997632512 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I1102 18:49:03.364250 47284 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1102 18:49:04.409161 47284 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.040242s)
I1102 18:49:04.414805 47284 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=3900mb --memory-swap=3900mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
I1102 18:49:04.780984 47284 cache.go:156] \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns_v1.9.3 exists
I1102 18:49:04.781498 47284 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3" -> "C:\Users\10620\.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns_v1.9.3" took 8.3262918s
I1102 18:49:04.781645 47284 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns_v1.9.3 succeeded
I1102 18:49:05.777251 47284 cache.go:156] \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler_v1.25.0 exists
I1102 18:49:05.777768 47284 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0" -> "C:\Users\10620\.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler_v1.25.0" took 9.3225612s
I1102 18:49:05.777768 47284 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler_v1.25.0 succeeded
I1102 18:49:06.100624 47284 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=3900mb --memory-swap=3900mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c: (1.6858183s)
I1102 18:49:06.107275 47284 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I1102 18:49:06.510913 47284 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I1102 18:49:06.924005 47284 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I1102 18:49:07.362825 47284 cache.go:156] \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy_v1.25.0 exists
I1102 18:49:07.362825 47284 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0" -> "C:\Users\10620\.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy_v1.25.0" took 10.9076186s
I1102 18:49:07.362825 47284 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy_v1.25.0 succeeded
I1102 18:49:07.464190 47284 oci.go:144] the created container "minikube" has a running status.
I1102 18:49:07.464190 47284 kic.go:210] Creating ssh key for kic: C:\Users\10620.minikube\machines\minikube\id_rsa...
I1102 18:49:07.863365 47284 kic_runner.go:191] docker (temp): C:\Users\10620.minikube\machines\minikube\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1102 18:49:08.266301 47284 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I1102 18:49:08.542116 47284 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1102 18:49:08.542116 47284 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I1102 18:49:09.070017 47284 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\10620.minikube\machines\minikube\id_rsa...
I1102 18:49:09.542035 47284 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I1102 18:49:09.730540 47284 machine.go:88] provisioning docker machine ...
I1102 18:49:09.730540 47284 ubuntu.go:169] provisioning hostname "minikube"
I1102 18:49:09.733872 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1102 18:49:09.933337 47284 main.go:134] libmachine: Using SSH client type: native
I1102 18:49:09.933930 47284 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x74a860] 0x74d7e0 [] 0s} 127.0.0.1 54558 }
I1102 18:49:09.933930 47284 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1102 18:49:10.104836 47284 main.go:134] libmachine: SSH cmd err, output: : minikube

I1102 18:49:10.108129 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1102 18:49:10.279240 47284 main.go:134] libmachine: Using SSH client type: native
I1102 18:49:10.279787 47284 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x74a860] 0x74d7e0 [] 0s} 127.0.0.1 54558 }
I1102 18:49:10.279787 47284 main.go:134] libmachine: About to run SSH command:

	if ! grep -xq '.*\sminikube' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
		else 
			echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
		fi
	fi

I1102 18:49:10.421938 47284 main.go:134] libmachine: SSH cmd err, output: :
I1102 18:49:10.421938 47284 ubuntu.go:175] set auth options {CertDir:C:\Users\10620.minikube CaCertPath:C:\Users\10620.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\10620.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\10620.minikube\machines\server.pem ServerKeyPath:C:\Users\10620.minikube\machines\server-key.pem ClientKeyPath:C:\Users\10620.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\10620.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\10620.minikube}
I1102 18:49:10.421938 47284 ubuntu.go:177] setting up certificates
I1102 18:49:10.421938 47284 provision.go:83] configureAuth start
I1102 18:49:10.424671 47284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1102 18:49:10.597754 47284 provision.go:138] copyHostCerts
I1102 18:49:10.598298 47284 exec_runner.go:151] cp: C:\Users\10620.minikube\certs\ca.pem --> C:\Users\10620.minikube/ca.pem (1074 bytes)
I1102 18:49:10.598833 47284 exec_runner.go:151] cp: C:\Users\10620.minikube\certs\cert.pem --> C:\Users\10620.minikube/cert.pem (1119 bytes)
I1102 18:49:10.599910 47284 exec_runner.go:151] cp: C:\Users\10620.minikube\certs\key.pem --> C:\Users\10620.minikube/key.pem (1679 bytes)
I1102 18:49:10.600432 47284 provision.go:112] generating server cert: C:\Users\10620.minikube\machines\server.pem ca-key=C:\Users\10620.minikube\certs\ca.pem private-key=C:\Users\10620.minikube\certs\ca-key.pem org=10620.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I1102 18:49:10.720462 47284 provision.go:172] copyRemoteCerts
I1102 18:49:10.730550 47284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1102 18:49:10.733093 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1102 18:49:10.895264 47284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54558 SSHKeyPath:C:\Users\10620.minikube\machines\minikube\id_rsa Username:docker}
I1102 18:49:11.015224 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
I1102 18:49:11.063686 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1102 18:49:11.113036 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\certs\ca.pem --> /etc/docker/ca.pem (1074 bytes)
I1102 18:49:11.160120 47284 provision.go:86] duration metric: configureAuth took 738.1817ms
I1102 18:49:11.160120 47284 ubuntu.go:193] setting minikube options for container-runtime
I1102 18:49:11.160666 47284 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.0
I1102 18:49:11.165008 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1102 18:49:11.405963 47284 main.go:134] libmachine: Using SSH client type: native
I1102 18:49:11.406486 47284 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x74a860] 0x74d7e0 [] 0s} 127.0.0.1 54558 }
I1102 18:49:11.406486 47284 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1102 18:49:11.562641 47284 main.go:134] libmachine: SSH cmd err, output: : overlay

I1102 18:49:11.562641 47284 ubuntu.go:71] root file system type: overlay
I1102 18:49:11.562641 47284 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1102 18:49:11.565919 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1102 18:49:11.632189 47284 cache.go:156] \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager_v1.25.0 exists
I1102 18:49:11.632189 47284 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0" -> "C:\Users\10620\.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager_v1.25.0" took 15.176982s
I1102 18:49:11.632189 47284 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager_v1.25.0 succeeded
I1102 18:49:11.787866 47284 main.go:134] libmachine: Using SSH client type: native
I1102 18:49:11.787866 47284 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x74a860] 0x74d7e0 [] 0s} 127.0.0.1 54558 }
I1102 18:49:11.788384 47284 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1102 18:49:11.955033 47284 main.go:134] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1102 18:49:11.958308 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1102 18:49:12.136896 47284 main.go:134] libmachine: Using SSH client type: native
I1102 18:49:12.136896 47284 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x74a860] 0x74d7e0 [] 0s} 127.0.0.1 54558 }
I1102 18:49:12.136896 47284 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1102 18:49:12.351161 47284 cache.go:156] \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver_v1.25.0 exists
I1102 18:49:12.351673 47284 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0" -> "C:\Users\10620\.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver_v1.25.0" took 15.896466s
I1102 18:49:12.351673 47284 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver_v1.25.0 succeeded
I1102 18:49:12.916103 47284 main.go:134] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2022-06-06 23:01:03.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-11-02 10:49:11.942880675 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I1102 18:49:12.916103 47284 machine.go:91] provisioned docker machine in 3.1855627s
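(Side note: once the node is up, the rewritten unit can be double-checked from the host to confirm the ExecStart override actually landed; a minimal check, assuming the default profile name "minikube":

minikube ssh -- sudo systemctl cat docker.service
minikube ssh -- sudo systemctl is-active docker
)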
I1102 18:49:12.916103 47284 client.go:171] LocalClient.Create took 16.2098805s
I1102 18:49:12.916103 47284 start.go:167] duration metric: libmachine.API.Create for "minikube" took 16.2098805s
I1102 18:49:12.916103 47284 start.go:300] post-start starting for "minikube" (driver="docker")
I1102 18:49:12.916103 47284 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1102 18:49:12.924746 47284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1102 18:49:12.927462 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1102 18:49:13.124680 47284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54558 SSHKeyPath:C:\Users\10620.minikube\machines\minikube\id_rsa Username:docker}
I1102 18:49:13.239295 47284 ssh_runner.go:195] Run: cat /etc/os-release
I1102 18:49:13.244093 47284 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1102 18:49:13.244093 47284 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1102 18:49:13.244093 47284 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1102 18:49:13.244093 47284 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1102 18:49:13.244093 47284 filesync.go:126] Scanning C:\Users\10620.minikube\addons for local assets ...
I1102 18:49:13.244636 47284 filesync.go:126] Scanning C:\Users\10620.minikube\files for local assets ...
I1102 18:49:13.244636 47284 start.go:303] post-start completed in 328.5326ms
I1102 18:49:13.248976 47284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1102 18:49:13.401426 47284 profile.go:148] Saving config to C:\Users\10620.minikube\profiles\minikube\config.json ...
I1102 18:49:13.410477 47284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1102 18:49:13.411514 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1102 18:49:13.617989 47284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54558 SSHKeyPath:C:\Users\10620.minikube\machines\minikube\id_rsa Username:docker}
I1102 18:49:13.745710 47284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1102 18:49:13.755060 47284 start.go:128] duration metric: createHost completed in 17.052175s
I1102 18:49:13.755060 47284 start.go:83] releasing machines lock for "minikube", held for 17.0526858s
I1102 18:49:13.759092 47284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1102 18:49:13.990427 47284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.cn-hangzhou.aliyuncs.com/google_containers/
I1102 18:49:13.993059 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1102 18:49:14.169192 47284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54558 SSHKeyPath:C:\Users\10620.minikube\machines\minikube\id_rsa Username:docker}
I1102 18:49:22.179526 47284 cache.go:156] \?\Volume{b0bb5e4f-06ad-4aed-b7ee-341472607145}\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd_3.5.4-0 exists
I1102 18:49:22.180513 47284 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0" -> "C:\Users\10620\.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd_3.5.4-0" took 25.7253064s
I1102 18:49:22.180513 47284 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd_3.5.4-0 succeeded
I1102 18:49:22.180513 47284 cache.go:87] Successfully saved all images to host disk.
I1102 18:49:22.189054 47284 ssh_runner.go:195] Run: systemctl --version
I1102 18:49:25.369557 47284 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.cn-hangzhou.aliyuncs.com/google_containers/: (11.37913s)
I1102 18:49:25.369557 47284 ssh_runner.go:235] Completed: systemctl --version: (3.1805025s)
W1102 18:49:25.369557 47284 start.go:735] [curl -sS -m 2 https://registry.cn-hangzhou.aliyuncs.com/google_containers/] failed: curl -sS -m 2 https://registry.cn-hangzhou.aliyuncs.com/google_containers/: Process exited with status 28
stdout:

stderr:
curl: (28) Resolving timed out after 2000 milliseconds
W1102 18:49:25.369557 47284 out.go:239] ❗ This container is having trouble accessing https://registry.cn-hangzhou.aliyuncs.com/google_containers
W1102 18:49:25.370119 47284 out.go:239] 💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
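(The probe above fails with curl error 28, "Resolving timed out", i.e. DNS resolution inside the node never completes; the Aliyun registry itself may be fine. A way to narrow it down, assuming the node container is named "minikube" and that getent is present in the kicbase image:

docker exec -it minikube cat /etc/resolv.conf
docker exec -it minikube getent hosts registry.cn-hangzhou.aliyuncs.com
docker exec -it minikube curl -sS -m 10 https://registry.cn-hangzhou.aliyuncs.com/google_containers/

If getent also hangs, the likely culprit is the Docker Desktop/WSL2 DNS forwarder, 192.168.65.2 per the dig output below, rather than the mirror.)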
I1102 18:49:25.378330 47284 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1102 18:49:25.394149 47284 cruntime.go:273] skipping containerd shutdown because we are bound to it
I1102 18:49:25.402243 47284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1102 18:49:25.420204 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1102 18:49:25.455360 47284 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1102 18:49:25.574841 47284 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1102 18:49:25.684876 47284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1102 18:49:25.783863 47284 ssh_runner.go:195] Run: sudo systemctl restart docker
I1102 18:49:26.052490 47284 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1102 18:49:26.157949 47284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1102 18:49:26.287105 47284 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I1102 18:49:26.305560 47284 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1102 18:49:26.314255 47284 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1102 18:49:26.320409 47284 start.go:471] Will wait 60s for crictl version
I1102 18:49:26.328915 47284 ssh_runner.go:195] Run: sudo crictl version
I1102 18:49:26.462293 47284 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I1102 18:49:26.464953 47284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1102 18:49:26.519177 47284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1102 18:49:26.558481 47284 out.go:204] 🐳 Preparing Kubernetes v1.25.0 on Docker 20.10.17 ...
I1102 18:49:26.561139 47284 cli_runner.go:164] Run: docker exec -t minikube dig +short host.docker.internal
I1102 18:49:26.857348 47284 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I1102 18:49:26.866614 47284 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I1102 18:49:26.872152 47284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1102 18:49:26.891040 47284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1102 18:49:27.072455 47284 preload.go:132] Checking if preload exists for k8s version v1.25.0 and runtime docker
I1102 18:49:27.075671 47284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1102 18:49:27.109791 47284 docker.go:611] Got preloaded images:
I1102 18:49:27.110308 47284 docker.go:617] registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0 wasn't preloaded
I1102 18:49:27.110308 47284 cache_images.go:88] LoadImages start: [registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3 registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5]
I1102 18:49:27.122559 47284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0
I1102 18:49:27.132891 47284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0
I1102 18:49:27.137401 47284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0
I1102 18:49:27.139370 47284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
I1102 18:49:27.144060 47284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
I1102 18:49:27.145248 47284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0
I1102 18:49:27.159743 47284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0
I1102 18:49:27.171658 47284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
I1102 18:49:27.225000 47284 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0" does not exist at hash "sha256:bef2cf3115095379b5af3e6c0fb4b0e6a8ef7a144aa2907bd0a3125e9d2e203e" in container runtime
I1102 18:49:27.225000 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler_v1.25.0
I1102 18:49:27.225000 47284 docker.go:292] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0
I1102 18:49:27.229126 47284 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0
I1102 18:49:27.320836 47284 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0" does not exist at hash "sha256:4d2edfd10d3e3f4395b70652848e2a1efd5bd0bc38e9bc360d4ee5c51afacfe5" in container runtime
I1102 18:49:27.320836 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver_v1.25.0
I1102 18:49:27.321384 47284 docker.go:292] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0
I1102 18:49:27.326683 47284 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0
I1102 18:49:27.328451 47284 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0" does not exist at hash "sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
I1102 18:49:27.328451 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd:3.5.4-0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd_3.5.4-0
I1102 18:49:27.328451 47284 docker.go:292] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0
I1102 18:49:27.332341 47284 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0
I1102 18:49:27.420847 47284 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8" does not exist at hash "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
I1102 18:49:27.420847 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause:3.8 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause_3.8
I1102 18:49:27.420847 47284 docker.go:292] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
I1102 18:49:27.420847 47284 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5" does not exist at hash "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1102 18:49:27.420847 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner:v5 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner_v5
I1102 18:49:27.420847 47284 docker.go:292] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
I1102 18:49:27.425607 47284 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
I1102 18:49:27.425607 47284 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0" does not exist at hash "sha256:58a9a0c6d96f2b956afdc831504e6796c23f5f90a7b5341393b762d9ba96f2f6" in container runtime
I1102 18:49:27.425607 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy_v1.25.0
I1102 18:49:27.425607 47284 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0" does not exist at hash "sha256:1a54c86c03a673d4e046b9f64854c713512d39a0136aef76a4a450d5ad51273e" in container runtime
I1102 18:49:27.425607 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager:v1.25.0 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager_v1.25.0
I1102 18:49:27.425607 47284 docker.go:292] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0
I1102 18:49:27.425607 47284 docker.go:292] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0
I1102 18:49:27.425607 47284 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
I1102 18:49:27.430750 47284 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0
I1102 18:49:27.430750 47284 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0
I1102 18:49:27.436533 47284 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3" does not exist at hash "sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
I1102 18:49:27.437071 47284 localpath.go:146] windows sanitize: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns:v1.9.3 -> C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns_v1.9.3
I1102 18:49:27.437071 47284 docker.go:292] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
I1102 18:49:27.441131 47284 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
I1102 18:49:27.523444 47284 cache_images.go:286] Loading image from: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler_v1.25.0
I1102 18:49:27.536759 47284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.25.0
I1102 18:49:27.620637 47284 cache_images.go:286] Loading image from: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver_v1.25.0
I1102 18:49:27.622840 47284 cache_images.go:286] Loading image from: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd_3.5.4-0
I1102 18:49:27.640374 47284 cache_images.go:286] Loading image from: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner_v5
I1102 18:49:27.640374 47284 cache_images.go:286] Loading image from: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause_3.8
I1102 18:49:27.640933 47284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.25.0
I1102 18:49:27.642592 47284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.4-0
I1102 18:49:27.653602 47284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.8
I1102 18:49:27.654202 47284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
I1102 18:49:27.720406 47284 cache_images.go:286] Loading image from: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager_v1.25.0
I1102 18:49:27.720406 47284 cache_images.go:286] Loading image from: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy_v1.25.0
I1102 18:49:27.720406 47284 cache_images.go:286] Loading image from: C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns_v1.9.3
I1102 18:49:27.720406 47284 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.25.0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.0': No such file or directory
I1102 18:49:27.720406 47284 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.25.0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.0': No such file or directory
I1102 18:49:27.720406 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler_v1.25.0 --> /var/lib/minikube/images/kube-scheduler_v1.25.0 (18173440 bytes)
I1102 18:49:27.720406 47284 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
I1102 18:49:27.720406 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver_v1.25.0 --> /var/lib/minikube/images/kube-apiserver_v1.25.0 (40231424 bytes)
I1102 18:49:27.720406 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (115895808 bytes)
I1102 18:49:27.720406 47284 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I1102 18:49:27.720406 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (10569216 bytes)
I1102 18:49:27.720406 47284 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.8: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
I1102 18:49:27.720406 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause_3.8 --> /var/lib/minikube/images/pause_3.8 (338944 bytes)
I1102 18:49:27.740017 47284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.9.3
I1102 18:49:27.742934 47284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.25.0
I1102 18:49:27.744576 47284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.25.0
I1102 18:49:27.811895 47284 docker.go:259] Loading image: /var/lib/minikube/images/pause_3.8
I1102 18:49:27.811895 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.8 | docker load"
I1102 18:49:27.824168 47284 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
I1102 18:49:27.824168 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (17011712 bytes)
I1102 18:49:27.834886 47284 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.25.0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.0': No such file or directory
I1102 18:49:27.835472 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy_v1.25.0 --> /var/lib/minikube/images/kube-proxy_v1.25.0 (23014912 bytes)
I1102 18:49:27.858271 47284 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.25.0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.0': No such file or directory
I1102 18:49:27.858271 47284 ssh_runner.go:362] scp C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager_v1.25.0 --> /var/lib/minikube/images/kube-controller-manager_v1.25.0 (36726272 bytes)
I1102 18:49:28.194539 47284 cache_images.go:315] Transferred and loaded C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\pause_3.8 from cache
I1102 18:49:29.301822 47284 docker.go:259] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1102 18:49:29.301822 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
I1102 18:49:30.351115 47284 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.0492932s)
I1102 18:49:30.351115 47284 cache_images.go:315] Transferred and loaded C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\storage-provisioner_v5 from cache
I1102 18:49:30.661914 47284 docker.go:259] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.0
I1102 18:49:30.661914 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.25.0 | docker load"
I1102 18:49:32.360395 47284 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.25.0 | docker load": (1.6984808s)
I1102 18:49:32.360948 47284 cache_images.go:315] Transferred and loaded C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-scheduler_v1.25.0 from cache
I1102 18:49:32.360948 47284 docker.go:259] Loading image: /var/lib/minikube/images/coredns_v1.9.3
I1102 18:49:32.360948 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.9.3 | docker load"
I1102 18:49:33.566855 47284 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.9.3 | docker load": (1.2053614s)
I1102 18:49:33.566855 47284 cache_images.go:315] Transferred and loaded C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\coredns_v1.9.3 from cache
I1102 18:49:33.566906 47284 docker.go:259] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.0
I1102 18:49:33.566906 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.25.0 | docker load"
I1102 18:49:34.993463 47284 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.25.0 | docker load": (1.4265576s)
I1102 18:49:34.993463 47284 cache_images.go:315] Transferred and loaded C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-proxy_v1.25.0 from cache
I1102 18:49:34.993463 47284 docker.go:259] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.0
I1102 18:49:34.993463 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.25.0 | docker load"
I1102 18:49:36.278936 47284 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.25.0 | docker load": (1.285473s)
I1102 18:49:36.278936 47284 cache_images.go:315] Transferred and loaded C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-apiserver_v1.25.0 from cache
I1102 18:49:36.278936 47284 docker.go:259] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.0
I1102 18:49:36.278936 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.25.0 | docker load"
I1102 18:49:37.458552 47284 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.25.0 | docker load": (1.1796157s)
I1102 18:49:37.458552 47284 cache_images.go:315] Transferred and loaded C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\kube-controller-manager_v1.25.0 from cache
I1102 18:49:37.458552 47284 docker.go:259] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
I1102 18:49:37.458552 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.4-0 | docker load"
I1102 18:49:40.856269 47284 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.4-0 | docker load": (3.3977171s)
I1102 18:49:40.856269 47284 cache_images.go:315] Transferred and loaded C:\Users\10620.minikube\cache\images\amd64\registry.cn-hangzhou.aliyuncs.com\google_containers\etcd_3.5.4-0 from cache
I1102 18:49:40.856796 47284 cache_images.go:123] Successfully loaded all cached images
I1102 18:49:40.856796 47284 cache_images.go:92] LoadImages completed in 13.7464873s
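(All eight images load from the local cache here, so the registry timeout above does not block the image side of this run. For an image the mirror lacks, e.g. storage-provisioner:v5, it should also be possible to side-load it by hand; a sketch, assuming the image can be pulled or retagged locally under the exact name minikube expects:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
minikube image load registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
)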
I1102 18:49:40.859519 47284 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1102 18:49:40.945794 47284 cni.go:95] Creating CNI manager for ""
I1102 18:49:40.945794 47284 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1102 18:49:40.945794 47284 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1102 18:49:40.945794 47284 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.25.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:true}
I1102 18:49:40.945794 47284 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
resolvConf: /etc/kubelet-resolv.conf
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
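(This generated config is what later gets written to /var/tmp/minikube/kubeadm.yaml on the node, see the scp to kubeadm.yaml.new further down, so it can be dumped and compared against what kubeadm actually consumed:

minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
)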

I1102 18:49:40.945794 47284 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 --runtime-request-timeout=15m

[Install]
config:
{KubernetesVersion:v1.25.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
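(Since it is kubeadm init that eventually fails, the kubelet unit configured here is the first thing worth inspecting after a failed run; presumably something like:

minikube ssh -- sudo systemctl status kubelet
minikube ssh -- sudo journalctl -u kubelet --no-pager | tail -n 50
)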
I1102 18:49:40.954544 47284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.0
I1102 18:49:40.969856 47284 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.25.0: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.25.0': No such file or directory

Initiating transfer...
I1102 18:49:40.980272 47284 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.25.0
I1102 18:49:41.001270 47284 download.go:101] Downloading: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.0/bin/linux/amd64/kubelet?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\10620.minikube\cache\linux\amd64\v1.25.0/kubelet
I1102 18:49:41.001270 47284 download.go:101] Downloading: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.0/bin/linux/amd64/kubectl?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\10620.minikube\cache\linux\amd64\v1.25.0/kubectl
I1102 18:49:41.001270 47284 download.go:101] Downloading: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.0/bin/linux/amd64/kubeadm?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\10620.minikube\cache\linux\amd64\v1.25.0/kubeadm
I1102 18:50:01.042026 47284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.0/kubeadm
I1102 18:50:01.050255 47284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.0/kubeadm: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.25.0/kubeadm': No such file or directory
I1102 18:50:01.050255 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\cache\linux\amd64\v1.25.0/kubeadm --> /var/lib/minikube/binaries/v1.25.0/kubeadm (43790336 bytes)
I1102 18:50:02.615469 47284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.0/kubectl
I1102 18:50:02.623825 47284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.0/kubectl: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.25.0/kubectl': No such file or directory
I1102 18:50:02.624372 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\cache\linux\amd64\v1.25.0/kubectl --> /var/lib/minikube/binaries/v1.25.0/kubectl (45002752 bytes)
I1102 18:50:33.628353 47284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1102 18:50:33.663944 47284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.0/kubelet
I1102 18:50:33.669500 47284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.0/kubelet: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.25.0/kubelet': No such file or directory
I1102 18:50:33.669500 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\cache\linux\amd64\v1.25.0/kubelet --> /var/lib/minikube/binaries/v1.25.0/kubelet (114220984 bytes)
I1102 18:50:35.922559 47284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1102 18:50:35.937255 47284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (560 bytes)
I1102 18:50:35.964519 47284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1102 18:50:35.992872 47284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2137 bytes)
I1102 18:50:36.029408 47284 ssh_runner.go:195] Run: sudo cp /etc/resolv.conf /etc/kubelet-resolv.conf
I1102 18:50:36.052734 47284 ssh_runner.go:195] Run: sudo sed -i -e "s/^search .$//" /etc/kubelet-resolv.conf
I1102 18:50:36.077626 47284 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1102 18:50:36.083665 47284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1102 18:50:36.101667 47284 certs.go:54] Setting up C:\Users\10620\.minikube\profiles\minikube for IP: 192.168.49.2
I1102 18:50:36.101667 47284 certs.go:187] generating minikubeCA CA: C:\Users\10620\.minikube\ca.key
I1102 18:50:36.177881 47284 crypto.go:156] Writing cert to C:\Users\10620\.minikube\ca.crt ...
I1102 18:50:36.177881 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\ca.crt: {Name:mke2fecbebb284dcfef3d019c3580e4d9dd894d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.182890 47284 crypto.go:164] Writing key to C:\Users\10620\.minikube\ca.key ...
I1102 18:50:36.182890 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\ca.key: {Name:mkd780cae02a7faf67c6f12663f93266df18ff9f Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.183408 47284 certs.go:187] generating proxyClientCA CA: C:\Users\10620\.minikube\proxy-client-ca.key
I1102 18:50:36.413326 47284 crypto.go:156] Writing cert to C:\Users\10620\.minikube\proxy-client-ca.crt ...
I1102 18:50:36.413326 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\proxy-client-ca.crt: {Name:mkee9aeee8da53d6575a803e01ead9f4457921d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.413326 47284 crypto.go:164] Writing key to C:\Users\10620\.minikube\proxy-client-ca.key ...
I1102 18:50:36.413326 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\proxy-client-ca.key: {Name:mk4ed052d4ff9fc645ae91f432544d8147899a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.413326 47284 certs.go:302] generating minikube-user signed cert: C:\Users\10620\.minikube\profiles\minikube\client.key
I1102 18:50:36.413326 47284 crypto.go:68] Generating cert C:\Users\10620\.minikube\profiles\minikube\client.crt with IP's: []
I1102 18:50:36.445218 47284 crypto.go:156] Writing cert to C:\Users\10620\.minikube\profiles\minikube\client.crt ...
I1102 18:50:36.445218 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\profiles\minikube\client.crt: {Name:mk9b10f8880c1c5286ad75a32fc0c640f4deb1eb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.454261 47284 crypto.go:164] Writing key to C:\Users\10620\.minikube\profiles\minikube\client.key ...
I1102 18:50:36.454261 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\profiles\minikube\client.key: {Name:mkc0268a7761937f03e82c2ca9d85bffae62b12c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.454261 47284 certs.go:302] generating minikube signed cert: C:\Users\10620\.minikube\profiles\minikube\apiserver.key.dd3b5fb2
I1102 18:50:36.454261 47284 crypto.go:68] Generating cert C:\Users\10620\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1102 18:50:36.672062 47284 crypto.go:156] Writing cert to C:\Users\10620\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 ...
I1102 18:50:36.672062 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2: {Name:mked9493391485f2b738abc7385d6f399a6294d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.672062 47284 crypto.go:164] Writing key to C:\Users\10620\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 ...
I1102 18:50:36.672062 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\profiles\minikube\apiserver.key.dd3b5fb2: {Name:mk3f6bcc315c0378d50d4e2fff9bc7d157085975 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.672062 47284 certs.go:320] copying C:\Users\10620\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 -> C:\Users\10620\.minikube\profiles\minikube\apiserver.crt
I1102 18:50:36.683236 47284 certs.go:324] copying C:\Users\10620\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 -> C:\Users\10620\.minikube\profiles\minikube\apiserver.key
I1102 18:50:36.683236 47284 certs.go:302] generating aggregator signed cert: C:\Users\10620\.minikube\profiles\minikube\proxy-client.key
I1102 18:50:36.683236 47284 crypto.go:68] Generating cert C:\Users\10620\.minikube\profiles\minikube\proxy-client.crt with IP's: []
I1102 18:50:36.797176 47284 crypto.go:156] Writing cert to C:\Users\10620\.minikube\profiles\minikube\proxy-client.crt ...
I1102 18:50:36.797176 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\profiles\minikube\proxy-client.crt: {Name:mk20a7a51f96308bc2d761ff3af980472f2d84a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.806707 47284 crypto.go:164] Writing key to C:\Users\10620\.minikube\profiles\minikube\proxy-client.key ...
I1102 18:50:36.806707 47284 lock.go:35] WriteFile acquiring C:\Users\10620\.minikube\profiles\minikube\proxy-client.key: {Name:mk2b055021b8d7bab1f503bc55863dbb8b5cabb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1102 18:50:36.810826 47284 certs.go:388] found cert: C:\Users\10620\.minikube\certs\C:\Users\10620\.minikube\certs\ca-key.pem (1675 bytes)
I1102 18:50:36.810826 47284 certs.go:388] found cert: C:\Users\10620\.minikube\certs\C:\Users\10620\.minikube\certs\ca.pem (1074 bytes)
I1102 18:50:36.810826 47284 certs.go:388] found cert: C:\Users\10620\.minikube\certs\C:\Users\10620\.minikube\certs\cert.pem (1119 bytes)
I1102 18:50:36.810826 47284 certs.go:388] found cert: C:\Users\10620\.minikube\certs\C:\Users\10620\.minikube\certs\key.pem (1679 bytes)
I1102 18:50:36.810826 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1102 18:50:36.849338 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1102 18:50:36.881422 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1102 18:50:36.916931 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1102 18:50:36.950843 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1102 18:50:36.984064 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1102 18:50:37.017559 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1102 18:50:37.051320 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1102 18:50:37.083939 47284 ssh_runner.go:362] scp C:\Users\10620\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1102 18:50:37.117300 47284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1102 18:50:37.147386 47284 ssh_runner.go:195] Run: openssl version
I1102 18:50:37.161657 47284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1102 18:50:37.184288 47284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1102 18:50:37.190840 47284 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 2 10:50 /usr/share/ca-certificates/minikubeCA.pem
I1102 18:50:37.197465 47284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1102 18:50:37.213196 47284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1102 18:50:37.227813 47284 kubeadm.go:396] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\10620:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I1102 18:50:37.231978 47284 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I1102 18:50:37.278922 47284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1102 18:50:37.302090 47284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1102 18:50:37.317980 47284 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1102 18:50:37.327644 47284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1102 18:50:37.342972 47284 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1102 18:50:37.342972 47284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1102 18:50:37.384075 47284 kubeadm.go:317] W1102 10:50:37.383260 1935 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I1102 18:50:37.422170 47284 kubeadm.go:317] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I1102 18:50:37.496807 47284 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1102 18:54:39.138060 47284 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1102 18:54:39.138060 47284 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1102 18:54:39.142751 47284 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
I1102 18:54:39.142751 47284 kubeadm.go:317] [preflight] Running pre-flight checks
I1102 18:54:39.142751 47284 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1102 18:54:39.142751 47284 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1102 18:54:39.143288 47284 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1102 18:54:39.143288 47284 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1102 18:54:39.152573 47284 out.go:204] ▪ Generating certificates and keys ...
I1102 18:54:39.153102 47284 kubeadm.go:317] [certs] Using existing ca certificate authority
I1102 18:54:39.153102 47284 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1102 18:54:39.153102 47284 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I1102 18:54:39.153102 47284 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I1102 18:54:39.153102 47284 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I1102 18:54:39.153102 47284 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I1102 18:54:39.153655 47284 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I1102 18:54:39.153655 47284 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I1102 18:54:39.153655 47284 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I1102 18:54:39.153655 47284 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I1102 18:54:39.154198 47284 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I1102 18:54:39.154198 47284 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I1102 18:54:39.154198 47284 kubeadm.go:317] [certs] Generating "sa" key and public key
I1102 18:54:39.154198 47284 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1102 18:54:39.154198 47284 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1102 18:54:39.154198 47284 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1102 18:54:39.154198 47284 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1102 18:54:39.154198 47284 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1102 18:54:39.154736 47284 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1102 18:54:39.154736 47284 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1102 18:54:39.154736 47284 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1102 18:54:39.154736 47284 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1102 18:54:39.155818 47284 out.go:204] ▪ Booting up control plane ...
I1102 18:54:39.156358 47284 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1102 18:54:39.156358 47284 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1102 18:54:39.156358 47284 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1102 18:54:39.156358 47284 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1102 18:54:39.156897 47284 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1102 18:54:39.156897 47284 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I1102 18:54:39.156897 47284 kubeadm.go:317]
I1102 18:54:39.156897 47284 kubeadm.go:317] Unfortunately, an error has occurred:
I1102 18:54:39.156897 47284 kubeadm.go:317] timed out waiting for the condition
I1102 18:54:39.156897 47284 kubeadm.go:317]
I1102 18:54:39.156897 47284 kubeadm.go:317] This error is likely caused by:
I1102 18:54:39.156897 47284 kubeadm.go:317] - The kubelet is not running
I1102 18:54:39.156897 47284 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1102 18:54:39.156897 47284 kubeadm.go:317]
I1102 18:54:39.156897 47284 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1102 18:54:39.156897 47284 kubeadm.go:317] - 'systemctl status kubelet'
I1102 18:54:39.157421 47284 kubeadm.go:317] - 'journalctl -xeu kubelet'
I1102 18:54:39.157421 47284 kubeadm.go:317]
I1102 18:54:39.157421 47284 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1102 18:54:39.157421 47284 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1102 18:54:39.157421 47284 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
I1102 18:54:39.157421 47284 kubeadm.go:317] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
I1102 18:54:39.157421 47284 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I1102 18:54:39.157950 47284 kubeadm.go:317] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
W1102 18:54:39.157950 47284 out.go:239] 💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1102 10:50:37.383260 1935 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
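With the docker driver, the commands kubeadm suggests above have to run inside the minikube node container rather than on the Windows host; they can be wrapped with minikube ssh (a sketch, using exactly the endpoints shown in the log):

minikube ssh -- sudo systemctl status kubelet
minikube ssh -- sudo journalctl -xeu kubelet
minikube ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a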

I1102 18:54:39.158483 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I1102 18:54:42.128902 47284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.970419s)
I1102 18:54:42.137423 47284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1102 18:54:42.153025 47284 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1102 18:54:42.160985 47284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1102 18:54:42.176246 47284 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1102 18:54:42.176246 47284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1102 18:54:42.215675 47284 kubeadm.go:317] W1102 10:54:42.214388 4671 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I1102 18:54:42.249140 47284 kubeadm.go:317] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I1102 18:54:42.325472 47284 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1102 18:58:43.060189 47284 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1102 18:58:43.060189 47284 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1102 18:58:43.065153 47284 kubeadm.go:317] [init] Using Kubernetes version: v1.25.0
I1102 18:58:43.065153 47284 kubeadm.go:317] [preflight] Running pre-flight checks
I1102 18:58:43.065153 47284 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1102 18:58:43.065153 47284 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1102 18:58:43.065153 47284 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1102 18:58:43.065153 47284 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1102 18:58:43.066788 47284 out.go:204] ▪ Generating certificates and keys ...
I1102 18:58:43.066788 47284 kubeadm.go:317] [certs] Using existing ca certificate authority
I1102 18:58:43.067322 47284 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1102 18:58:43.067322 47284 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1102 18:58:43.067322 47284 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
I1102 18:58:43.067322 47284 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
I1102 18:58:43.067322 47284 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
I1102 18:58:43.067322 47284 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
I1102 18:58:43.067322 47284 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
I1102 18:58:43.067322 47284 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1102 18:58:43.067322 47284 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1102 18:58:43.067866 47284 kubeadm.go:317] [certs] Using the existing "sa" key
I1102 18:58:43.067866 47284 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1102 18:58:43.067866 47284 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1102 18:58:43.067866 47284 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1102 18:58:43.067866 47284 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1102 18:58:43.067866 47284 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1102 18:58:43.068409 47284 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1102 18:58:43.068409 47284 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1102 18:58:43.068409 47284 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1102 18:58:43.068409 47284 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1102 18:58:43.069483 47284 out.go:204] ▪ Booting up control plane ...
I1102 18:58:43.070015 47284 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1102 18:58:43.070015 47284 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1102 18:58:43.070015 47284 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1102 18:58:43.070545 47284 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1102 18:58:43.070545 47284 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1102 18:58:43.070545 47284 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I1102 18:58:43.070545 47284 kubeadm.go:317]
I1102 18:58:43.070545 47284 kubeadm.go:317] Unfortunately, an error has occurred:
I1102 18:58:43.070545 47284 kubeadm.go:317] timed out waiting for the condition
I1102 18:58:43.070545 47284 kubeadm.go:317]
I1102 18:58:43.070545 47284 kubeadm.go:317] This error is likely caused by:
I1102 18:58:43.070545 47284 kubeadm.go:317] - The kubelet is not running
I1102 18:58:43.071077 47284 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1102 18:58:43.071077 47284 kubeadm.go:317]
I1102 18:58:43.071077 47284 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1102 18:58:43.071077 47284 kubeadm.go:317] - 'systemctl status kubelet'
I1102 18:58:43.071077 47284 kubeadm.go:317] - 'journalctl -xeu kubelet'
I1102 18:58:43.071077 47284 kubeadm.go:317]
I1102 18:58:43.071077 47284 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1102 18:58:43.071077 47284 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1102 18:58:43.071077 47284 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
I1102 18:58:43.071621 47284 kubeadm.go:317] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
I1102 18:58:43.071621 47284 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I1102 18:58:43.071621 47284 kubeadm.go:317] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
I1102 18:58:43.071621 47284 kubeadm.go:398] StartCluster complete in 8m5.8438076s
I1102 18:58:43.071621 47284 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1102 18:58:43.080272 47284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1102 18:58:43.108798 47284 cri.go:87] found id: ""
I1102 18:58:43.108798 47284 logs.go:274] 0 containers: []
W1102 18:58:43.108798 47284 logs.go:276] No container was found matching "kube-apiserver"
I1102 18:58:43.108798 47284 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1102 18:58:43.117334 47284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1102 18:58:43.148292 47284 cri.go:87] found id: ""
I1102 18:58:43.148292 47284 logs.go:274] 0 containers: []
W1102 18:58:43.148292 47284 logs.go:276] No container was found matching "etcd"
I1102 18:58:43.148292 47284 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1102 18:58:43.156843 47284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1102 18:58:43.185854 47284 cri.go:87] found id: ""
I1102 18:58:43.185854 47284 logs.go:274] 0 containers: []
W1102 18:58:43.185854 47284 logs.go:276] No container was found matching "coredns"
I1102 18:58:43.185854 47284 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1102 18:58:43.194489 47284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1102 18:58:43.225749 47284 cri.go:87] found id: ""
I1102 18:58:43.225749 47284 logs.go:274] 0 containers: []
W1102 18:58:43.225749 47284 logs.go:276] No container was found matching "kube-scheduler"
I1102 18:58:43.225749 47284 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1102 18:58:43.234282 47284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1102 18:58:43.263420 47284 cri.go:87] found id: ""
I1102 18:58:43.263420 47284 logs.go:274] 0 containers: []
W1102 18:58:43.263420 47284 logs.go:276] No container was found matching "kube-proxy"
I1102 18:58:43.263420 47284 cri.go:52] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I1102 18:58:43.271972 47284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1102 18:58:43.301608 47284 cri.go:87] found id: ""
I1102 18:58:43.301608 47284 logs.go:274] 0 containers: []
W1102 18:58:43.301608 47284 logs.go:276] No container was found matching "kubernetes-dashboard"
I1102 18:58:43.301608 47284 cri.go:52] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I1102 18:58:43.309735 47284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1102 18:58:43.339290 47284 cri.go:87] found id: ""
I1102 18:58:43.339290 47284 logs.go:274] 0 containers: []
W1102 18:58:43.339290 47284 logs.go:276] No container was found matching "storage-provisioner"
I1102 18:58:43.339290 47284 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1102 18:58:43.347819 47284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1102 18:58:43.379295 47284 cri.go:87] found id: ""
I1102 18:58:43.379295 47284 logs.go:274] 0 containers: []
W1102 18:58:43.379295 47284 logs.go:276] No container was found matching "kube-controller-manager"
I1102 18:58:43.379295 47284 logs.go:123] Gathering logs for describe nodes ...
I1102 18:58:43.379295 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1102 18:58:43.442324 47284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I1102 18:58:43.442324 47284 logs.go:123] Gathering logs for Docker ...
I1102 18:58:43.442324 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1102 18:58:43.487871 47284 logs.go:123] Gathering logs for container status ...
I1102 18:58:43.487871 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1102 18:58:43.519738 47284 logs.go:123] Gathering logs for kubelet ...
I1102 18:58:43.519738 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1102 18:58:43.572703 47284 logs.go:123] Gathering logs for dmesg ...
I1102 18:58:43.572703 47284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
W1102 18:58:43.589188 47284 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1102 10:54:42.214388 4671 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1102 18:58:43.589188 47284 out.go:239]
W1102 18:58:43.589710 47284 out.go:239] 💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1102 10:54:42.214388 4671 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W1102 18:58:43.590756 47284 out.go:239]
W1102 18:58:43.591850 47284 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I1102 18:58:43.595038 47284 out.go:177]
W1102 18:58:43.596609 47284 out.go:239] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W1102 10:54:42.214388 4671 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W1102 18:58:43.597131 47284 out.go:239] 💡 Suggestion: Check the output of 'journalctl -xeu kubelet', and try starting minikube again with the extra flag --extra-config=kubelet.cgroup-driver=systemd
W1102 18:58:43.597131 47284 out.go:239] 🍿 Related issue: #4172
I1102 18:58:43.598703 47284 out.go:177]
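Following the 💡 suggestion printed above, the retry I plan to attempt looks like this (a sketch; minikube delete first clears the half-initialized profile before starting over):

minikube delete
minikube start --image-mirror-country=cn --extra-config=kubelet.cgroup-driver=systemd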

==> Docker <==
-- Logs begin at Wed 2022-11-02 10:49:06 UTC, end at Wed 2022-11-02 11:00:58 UTC. --
Nov 02 10:58:08 minikube dockerd[728]: time="2022-11-02T10:58:08.519866223Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:08 minikube dockerd[728]: time="2022-11-02T10:58:08.519955621Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:08 minikube dockerd[728]: time="2022-11-02T10:58:08.532514425Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:13 minikube dockerd[728]: time="2022-11-02T10:58:13.517759107Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:58:13 minikube dockerd[728]: time="2022-11-02T10:58:13.517846563Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:58:13 minikube dockerd[728]: time="2022-11-02T10:58:13.522449193Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:58:14 minikube dockerd[728]: time="2022-11-02T10:58:14.515747176Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:14 minikube dockerd[728]: time="2022-11-02T10:58:14.515805196Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:14 minikube dockerd[728]: time="2022-11-02T10:58:14.528034082Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:18 minikube dockerd[728]: time="2022-11-02T10:58:18.529249939Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:18 minikube dockerd[728]: time="2022-11-02T10:58:18.529333247Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:18 minikube dockerd[728]: time="2022-11-02T10:58:18.535116967Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:37 minikube dockerd[728]: time="2022-11-02T10:58:37.511155276Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:37 minikube dockerd[728]: time="2022-11-02T10:58:37.511205040Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:37 minikube dockerd[728]: time="2022-11-02T10:58:37.515486360Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:39 minikube dockerd[728]: time="2022-11-02T10:58:39.496503906Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:39 minikube dockerd[728]: time="2022-11-02T10:58:39.496577836Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:39 minikube dockerd[728]: time="2022-11-02T10:58:39.500400706Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:41 minikube dockerd[728]: time="2022-11-02T10:58:41.499564464Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:41 minikube dockerd[728]: time="2022-11-02T10:58:41.499633415Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:41 minikube dockerd[728]: time="2022-11-02T10:58:41.504975026Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:58:46 minikube dockerd[728]: time="2022-11-02T10:58:46.510952548Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:58:46 minikube dockerd[728]: time="2022-11-02T10:58:46.511049041Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:58:46 minikube dockerd[728]: time="2022-11-02T10:58:46.516612382Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:59:07 minikube dockerd[728]: time="2022-11-02T10:59:07.509058006Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:07 minikube dockerd[728]: time="2022-11-02T10:59:07.509161382Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:07 minikube dockerd[728]: time="2022-11-02T10:59:07.518121638Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:07 minikube dockerd[728]: time="2022-11-02T10:59:07.518188845Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:07 minikube dockerd[728]: time="2022-11-02T10:59:07.521537110Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:07 minikube dockerd[728]: time="2022-11-02T10:59:07.524433507Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:09 minikube dockerd[728]: time="2022-11-02T10:59:09.493745971Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:09 minikube dockerd[728]: time="2022-11-02T10:59:09.493808039Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:09 minikube dockerd[728]: time="2022-11-02T10:59:09.497951834Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:12 minikube dockerd[728]: time="2022-11-02T10:59:12.505366763Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:12 minikube dockerd[728]: time="2022-11-02T10:59:12.505423340Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:12 minikube dockerd[728]: time="2022-11-02T10:59:12.510554228Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:34 minikube dockerd[728]: time="2022-11-02T10:59:34.506830374Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:59:34 minikube dockerd[728]: time="2022-11-02T10:59:34.506909845Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:59:34 minikube dockerd[728]: time="2022-11-02T10:59:34.519133286Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:59:35 minikube dockerd[728]: time="2022-11-02T10:59:35.495785576Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:35 minikube dockerd[728]: time="2022-11-02T10:59:35.495929600Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:35 minikube dockerd[728]: time="2022-11-02T10:59:35.500789181Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:36 minikube dockerd[728]: time="2022-11-02T10:59:36.492115558Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:36 minikube dockerd[728]: time="2022-11-02T10:59:36.492231809Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:36 minikube dockerd[728]: time="2022-11-02T10:59:36.504985769Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 10:59:41 minikube dockerd[728]: time="2022-11-02T10:59:41.522082548Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:59:41 minikube dockerd[728]: time="2022-11-02T10:59:41.522160627Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 10:59:41 minikube dockerd[728]: time="2022-11-02T10:59:41.535015903Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 02 11:00:31 minikube dockerd[728]: time="2022-11-02T11:00:31.495358946Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": context deadline exceeded"
Nov 02 11:00:31 minikube dockerd[728]: time="2022-11-02T11:00:31.495431513Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": context deadline exceeded"
Nov 02 11:00:31 minikube dockerd[728]: time="2022-11-02T11:00:31.506998563Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": context deadline exceeded"
Nov 02 11:00:47 minikube dockerd[728]: time="2022-11-02T11:00:47.616364939Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 11:00:47 minikube dockerd[728]: time="2022-11-02T11:00:47.616466440Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 11:00:47 minikube dockerd[728]: time="2022-11-02T11:00:47.621178823Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 11:00:47 minikube dockerd[728]: time="2022-11-02T11:00:47.631584174Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 11:00:47 minikube dockerd[728]: time="2022-11-02T11:00:47.631637625Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 11:00:47 minikube dockerd[728]: time="2022-11-02T11:00:47.635267569Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 11:00:47 minikube dockerd[728]: time="2022-11-02T11:00:47.644169791Z" level=warning msg="Error getting v2 registry: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 11:00:47 minikube dockerd[728]: time="2022-11-02T11:00:47.644257046Z" level=info msg="Attempting next endpoint for pull after error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 11:00:47 minikube dockerd[728]: time="2022-11-02T11:00:47.648351996Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID

==> describe nodes <==

==> dmesg <==
[ +0.000641] FS-Cache: N-cookie d=000000005396aabd{9P.session} n=000000001ce19253
[ +0.001000] FS-Cache: N-key=[10] '34323934393337343531'
[ +1.082942] WSL (1) ERROR: ConfigApplyWindowsLibPath:2474: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000005] failed 2
[ +0.018739] WSL (1) WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +0.080636] Exception:
[ +0.000010] Operation canceled @p9io.cpp:258 (AcceptAsync)

[ +0.075710] WSL (1) ERROR: ConfigApplyWindowsLibPath:2474: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000007] failed 2
[ +0.016960] WSL (1) WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +0.396493] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001459] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001290] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.009453] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2
[ +0.835252] WSL (2) ERROR: UtilCreateProcessAndWait:661: /bin/mount failed with 2
[ +0.002232] WSL (1) ERROR: UtilCreateProcessAndWait:683: /bin/mount failed with status 0xff00

[ +0.004610] WSL (1) ERROR: ConfigMountFsTab:2526: Processing fstab with mount -a failed.
[ +0.003865] WSL (1) ERROR: ConfigApplyWindowsLibPath:2474: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000006] failed 2
[ +0.019572] WSL (1) WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +0.250060] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.003875] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.002617] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.002045] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2
[ +0.923775] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001304] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001015] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001225] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2
[Nov 2 08:37] Exception:
[ +0.000005] Operation canceled @p9io.cpp:258 (AcceptAsync)

[ +0.135727] Exception:
[ +0.000005] Operation canceled @p9io.cpp:258 (AcceptAsync)

[ +0.396475] WSL (1) ERROR: ConfigApplyWindowsLibPath:2474: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000005] failed 2
[ +0.005746] WSL (1) WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +0.079973] Exception:
[ +0.000008] Operation canceled @p9io.cpp:258 (AcceptAsync)

[ +0.323007] WSL (1) ERROR: ConfigApplyWindowsLibPath:2474: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000007] failed 2
[ +0.005037] WSL (1) WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +0.128410] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.000889] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.000957] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001281] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2
[ +0.312067] WSL (2) ERROR: UtilCreateProcessAndWait:661: /bin/mount failed with 2
[ +0.001643] WSL (1) ERROR: UtilCreateProcessAndWait:683: /bin/mount failed with status 0xff00

[ +0.001634] WSL (1) ERROR: ConfigMountFsTab:2526: Processing fstab with mount -a failed.
[ +0.001384] WSL (1) ERROR: ConfigApplyWindowsLibPath:2474: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000006] failed 2
[ +0.011718] WSL (1) WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +0.345886] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001441] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001099] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001507] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2

==> kernel <==
11:00:58 up 3:24, 0 users, load average: 0.10, 0.09, 0.10
Linux minikube 5.15.68.1-microsoft-standard-WSL2 #1 SMP Mon Sep 19 19:14:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"

==> kubelet <==
-- Logs begin at Wed 2022-11-02 10:49:06 UTC, end at Wed 2022-11-02 11:00:58 UTC. --
Nov 02 11:00:53 minikube kubelet[4815]: E1102 11:00:53.330688 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:53 minikube kubelet[4815]: E1102 11:00:53.431616 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:53 minikube kubelet[4815]: E1102 11:00:53.532550 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:53 minikube kubelet[4815]: E1102 11:00:53.585419 4815 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node "minikube" not found"
Nov 02 11:00:53 minikube kubelet[4815]: E1102 11:00:53.633114 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:53 minikube kubelet[4815]: E1102 11:00:53.733724 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:53 minikube kubelet[4815]: I1102 11:00:53.746493 4815 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Nov 02 11:00:53 minikube kubelet[4815]: E1102 11:00:53.747303 4815 kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"
Nov 02 11:00:53 minikube kubelet[4815]: E1102 11:00:53.834511 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:53 minikube kubelet[4815]: E1102 11:00:53.935385 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.036261 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.136744 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.237666 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.338821 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.439912 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.540496 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.641352 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.741564 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.841742 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:54 minikube kubelet[4815]: E1102 11:00:54.942920 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.043184 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.143897 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.244317 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.345505 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.445949 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.485177 4815 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1723bf2c2e04f988", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2022, time.November, 2, 10, 54, 43, 528759688, time.Local), LastTimestamp:time.Date(2022, time.November, 2, 10, 54, 43, 528759688, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping)
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.546211 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.646480 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.746733 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.847715 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:55 minikube kubelet[4815]: E1102 11:00:55.948303 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.049918 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.150252 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.251211 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.351507 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.452472 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.546920 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.647381 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.747926 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.849034 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:56 minikube kubelet[4815]: E1102 11:00:56.950029 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.051000 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.151180 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.252373 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.353110 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.453912 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.554508 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.655265 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.756367 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.857404 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:57 minikube kubelet[4815]: E1102 11:00:57.958632 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:58 minikube kubelet[4815]: E1102 11:00:58.059432 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:58 minikube kubelet[4815]: E1102 11:00:58.159975 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:58 minikube kubelet[4815]: E1102 11:00:58.260799 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:58 minikube kubelet[4815]: E1102 11:00:58.361807 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:58 minikube kubelet[4815]: E1102 11:00:58.463166 4815 kubelet.go:2448] "Error getting node" err="node "minikube" not found"
Nov 02 11:00:58 minikube kubelet[4815]: E1102 11:00:58.500656 4815 remote_runtime.go:233] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.6": Error response from daemon: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 02 11:00:58 minikube kubelet[4815]: E1102 11:00:58.500729 4815 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.6": Error response from daemon: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" pod="kube-system/kube-apiserver-minikube"
Nov 02 11:00:58 minikube kubelet[4815]: E1102 11:00:58.500750 4815 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.6": Error response from daemon: Get "https://k8s.gcr.io/v2/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" pod="kube-system/kube-apiserver-minikube"
Nov 02 11:00:58 minikube kubelet[4815]: E1102 11:00:58.500817 4815 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "kube-apiserver-minikube_kube-system(1968b8b643c9c0b434c0c1bc5a0e5d87)" with CreatePodSandboxError: "Failed to create sandbox for pod \"kube-apiserver-minikube_kube-system(1968b8b643c9c0b434c0c1bc5a0e5d87)\": rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\\\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"" pod="kube-system/kube-apiserver-minikube" podUID=1968b8b643c9c0b434c0c1bc5a0e5d87

@Whitroom (Author) commented Nov 7, 2022

After some debugging on my own, the problem is solved: just pin the Kubernetes version to 1.23.0. The start command is as follows:

minikube start --image-mirror-country=cn --kubernetes-version=1.23.0
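
Once the cluster comes up, it can be sanity-checked with the usual commands (a minimal check, nothing specific to this fix):

# Verify the node registered and the control-plane pods are healthy
minikube status                   # host, kubelet and apiserver should report Running
kubectl get nodes                 # the minikube node should be Ready
kubectl get pods -n kube-system   # control-plane pods should all be Running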

Whitroom closed this as completed Nov 7, 2022
@Whitroom (Author) commented Nov 7, 2022

The Kubernetes images provided by the Aliyun mirror only support up to 1.23.0.
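
A quick way to check whether the mirror carries the images for a given version is to try pulling one of the control-plane tags directly; the tags below are illustrative, not an authoritative list of what is mirrored:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.0   # should succeed per the comment above
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0   # may fail if the tag was never mirrored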

@brillience commented:

Has anyone else run into a problem like this?
❌ Exiting due to INET_DOWNLOAD_TIMEOUT: updating control plane: downloading binaries: downloading kubeadm: download failed: https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubeadm.sha256 Dst:/Users/zhangxiaobo/.minikube/cache/linux/arm64/v1.18.1/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x1053b6fe0 0x1053b6fe0 0x1053b6fe0 0x1053b6fe0 0x1053b6fe0 0x1053b6fe0 0x1053b6fe0] Decompressors:map[bz2:0x1053b6fe0 gz:0x1053b6fe0 tar:0x1053b6fe0 tar.bz2:0x1053b6fe0 tar.gz:0x1053b6fe0 tar.xz:0x1053b6fe0 tar.zst:0x1053b6fe0 tbz2:0x1053b6fe0 tgz:0x1053b6fe0 txz:0x1053b6fe0 tzst:0x1053b6fe0 xz:0x1053b6fe0 zip:0x1053b6fe0 zst:0x1053b6fe0] Getters:map[file:0x14000a3dc20 http:0x14000ea26e0 https:0x14000ea2730] Dir:false ProgressListener:0x105372b80 Insecure:false DisableSymlinks:false Options:[0x1036147b0]}: invalid checksum: Error downloading checksum file: Get "https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubeadm.sha256": dial tcp 172.217.160.112:443: i/o timeout

@Whitroom (Author) commented:

storage.googleapis.com is a Google-hosted source, which is not reachable from networks inside mainland China.
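
Two possible workarounds, following the proxy doc that minikube itself points to (https://minikube.sigs.k8s.io/docs/reference/networking/proxy/): route the downloads through a proxy, or point minikube at an alternate binary mirror. Recent releases expose a --binary-mirror flag for the latter; the proxy address and mirror URL below are placeholders, not real endpoints:

# Option 1: route minikube's downloads through a proxy (placeholder address)
export HTTP_PROXY=http://<proxy-host>:<port>
export HTTPS_PROXY=http://<proxy-host>:<port>
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24
minikube start --image-mirror-country=cn --kubernetes-version=1.23.0

# Option 2: fetch the kubeadm/kubelet/kubectl binaries from an alternate mirror (placeholder URL)
minikube start --image-mirror-country=cn --kubernetes-version=1.23.0 --binary-mirror <mirror-url>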

@Mazhenglong commented:

After some debugging on my own, the problem is solved: just pin the Kubernetes version to 1.23.0. The start command is as follows:

minikube start --image-mirror-country=cn --kubernetes-version=1.23.0

It is working for me!
