
minikube image load mysql:8.0.34 failed #18949

Closed
zonghaishang opened this issue May 23, 2024 · 4 comments
Labels
l/zh-CN Issues in or relating to Chinese lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


zonghaishang commented May 23, 2024

Commands needed to reproduce the problem

Full output of the failed command

yiji@yiji-m1 minimesh-linux-arm64-1.28-beta1 % docker pull mysql:8.0.34 --platform=linux/arm64
8.0.34: Pulling from library/mysql
Digest: sha256:59ae56ac42c6c09aebd9a7bb8d9f07a5fdf836f956e050946f85bbfa29d0a080
Status: Image is up to date for mysql:8.0.34
docker.io/library/mysql:8.0.34

What's Next?
  View a summary of image vulnerabilities and recommendations → docker scout quickview mysql:8.0.34
yiji@yiji-m1 minimesh-linux-arm64-1.28-beta1 % minikube image load mysql:8.0.34

❌  Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "/Users/yiji/.minikube/cache/images/arm64/mysql_8.0.34": write: unable to calculate manifest: blob sha256:1c15fc073ee3ff02a63276a86f095c1f26a980503b6bca8a7e2082f86c550b92 not found

╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                        │
│    😿  If the above advice does not help, please let us know:                                                          │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                        │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
│    Please also attach the following file to the GitHub issue:                                                          │
│    - /var/folders/3k/3zk8s7pd56x3mtnbpzlk1jxr0000gp/T/minikube_image_7e4a11d786a85613f23a0a3795392ff714d83979_0.log    │
│                                                                                                                        │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
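A possible workaround (not confirmed in this thread): the error points at an unreadable blob in minikube's local image cache under `~/.minikube/cache/images/`, so removing the stale cache entry and retrying the load, or bypassing the cache by loading from a `docker save` tarball, may sidestep the failure. A minimal sketch; the cache path layout is taken from the error message above, and the destructive commands are left commented out as assumptions to adapt to your setup:

```shell
#!/usr/bin/env bash
# Sketch of a cache-reset workaround for "unable to calculate manifest: blob ... not found".

IMAGE="mysql:8.0.34"
ARCH="arm64"

# minikube caches loaded images as files under ~/.minikube/cache/images/<arch>/,
# with ':' in the image reference replaced by '_' (as seen in the error above).
CACHE_FILE="$HOME/.minikube/cache/images/$ARCH/${IMAGE//:/_}"
echo "stale cache entry (if any): $CACHE_FILE"

# 1) Remove the stale cache entry, then retry the load:
# rm -f "$CACHE_FILE"
# minikube image load "$IMAGE"

# 2) Or bypass the cache entirely by loading from a saved tarball:
# docker save "$IMAGE" -o mysql.tar
# minikube image load mysql.tar
```

If the blob mismatch comes from a multi-arch manifest, re-pulling with an explicit `--platform=linux/arm64` before the `docker save` step (as in the pull shown above) may also be worth trying.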

Output of the `minikube logs` command
logs.txt


* 
* ==> Audit <==
* |---------|---------------------------------------------------------------------------|----------|------|---------|---------------------|---------------------|
| Command |                                   Args                                    | Profile  | User | Version |     Start Time      |      End Time       |
|---------|---------------------------------------------------------------------------|----------|------|---------|---------------------|---------------------|
| image   | load prom/prometheus:v2.47.1                                              | minikube | yiji | v1.29.0 | 22 May 24 20:21 CST |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 22 May 24 20:25 CST | 22 May 24 20:25 CST |
| start   | --apiserver-ips=127.0.0.1 --apiserver-ips=30.249.128.230                  | minikube | yiji | v1.29.0 | 22 May 24 20:25 CST | 22 May 24 20:25 CST |
|         | --apiserver-names=127.0.0.1 --apiserver-names=30.249.128.230              |          |      |         |                     |                     |
|         | --embed-certs=true --driver=docker --mount=true                           |          |      |         |                     |                     |
|         | --mount-string=/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data |          |      |         |                     |                     |
|         | --image-mirror-country=cn --listen-address=0.0.0.0 --ports=8443:8443      |          |      |         |                     |                     |
|         | --static-ip=192.168.200.200 --addons=metrics-server                       |          |      |         |                     |                     |
|         | --kubernetes-version=v1.23.8 --memory=8g --cpus=max --disk-size=60g -o    |          |      |         |                     |                     |
|         | json                                                                      |          |      |         |                     |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 22 May 24 20:25 CST | 22 May 24 20:26 CST |
| image   | load prom/prometheus:v2.47.1                                              | minikube | yiji | v1.29.0 | 22 May 24 20:26 CST | 22 May 24 20:26 CST |
| image   | load                                                                      | minikube | yiji | v1.29.0 | 22 May 24 20:26 CST | 22 May 24 20:27 CST |
|         | zonghaishang/delve:v1.20.1                                                |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 22 May 24 20:27 CST | 22 May 24 20:28 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-dubbo-server -t dubbo-server:v1                             |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 22 May 24 20:27 CST | 22 May 24 20:28 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-mysql -t mini-mysql:v1                                      |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 22 May 24 20:27 CST | 22 May 24 20:32 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-meshserver -t meshserver:v1                                 |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 22 May 24 20:28 CST | 22 May 24 20:29 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-operator -t operator:v1                                     |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 22 May 24 20:28 CST | 22 May 24 20:29 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-mosn -t mosn:v1                                             |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 22 May 24 20:29 CST | 22 May 24 20:30 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-nacos -t nacos:v1                                           |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 22 May 24 20:29 CST | 22 May 24 20:30 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-prometheus -t prometheus:v1                                 |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 22 May 24 20:30 CST | 22 May 24 20:30 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-dubbo-client -t dubbo-client:v1                             |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:08 CST |                     |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-meshserver -t meshserver:v1                                 |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:08 CST | 23 May 24 09:09 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-meshserver -t meshserver:v1                                 |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:08 CST |                     |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-mysql -t mini-mysql:v1                                      |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:08 CST | 23 May 24 09:09 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-mysql -t mini-mysql:v1                                      |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:09 CST | 23 May 24 09:09 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-operator -t operator:v1                                     |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:09 CST |                     |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-mosn -t mosn:v1                                             |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:09 CST |                     |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-nacos -t nacos:v1                                           |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:09 CST | 23 May 24 09:10 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-prometheus -t prometheus:v1                                 |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:10 CST | 23 May 24 09:11 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-dubbo-server -t dubbo-server:v1                             |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:10 CST | 23 May 24 09:11 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.27-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-dubbo-client -t dubbo-client:v1                             |          |      |         |                     |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 23 May 24 09:38 CST | 23 May 24 09:39 CST |
| start   | --apiserver-ips=127.0.0.1 --apiserver-ips=30.249.128.230                  | minikube | yiji | v1.29.0 | 23 May 24 09:39 CST | 23 May 24 09:39 CST |
|         | --apiserver-names=127.0.0.1 --apiserver-names=30.249.128.230              |          |      |         |                     |                     |
|         | --embed-certs=true --driver=docker --mount=true                           |          |      |         |                     |                     |
|         | --mount-string=/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data |          |      |         |                     |                     |
|         | --image-mirror-country=cn --listen-address=0.0.0.0 --ports=8443:8443      |          |      |         |                     |                     |
|         | --static-ip=192.168.200.200 --addons=metrics-server                       |          |      |         |                     |                     |
|         | --kubernetes-version=v1.23.8 --memory=max --cpus=max --disk-size=60g -o   |          |      |         |                     |                     |
|         | json                                                                      |          |      |         |                     |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 09:39 CST | 23 May 24 09:39 CST |
| image   | load prom/prometheus:v2.47.1                                              | minikube | yiji | v1.29.0 | 23 May 24 09:39 CST | 23 May 24 09:40 CST |
| image   | load                                                                      | minikube | yiji | v1.29.0 | 23 May 24 09:40 CST | 23 May 24 09:41 CST |
|         | zonghaishang/delve:v1.20.1                                                |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:41 CST | 23 May 24 09:42 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.28-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-dubbo-server -t dubbo-server:v1                             |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:41 CST | 23 May 24 09:45 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.28-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-meshserver -t meshserver:v1                                 |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:41 CST | 23 May 24 09:42 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.28-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-mysql -t mini-mysql:v1                                      |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:42 CST | 23 May 24 09:42 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.28-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-operator -t operator:v1                                     |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:42 CST | 23 May 24 09:42 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.28-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-mosn -t mosn:v1                                             |          |      |         |                     |                     |
| image   | build                                                                     | minikube | yiji | v1.29.0 | 23 May 24 09:42 CST | 23 May 24 09:43 CST |
|         | /Users/yiji/tools/minimesh-linux-arm64-1.28-beta1                         |          |      |         |                     |                     |
|         | -f Dockerfile-nacos -t nacos:v1                                           |          |      |         |                     |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 23 May 24 09:58 CST | 23 May 24 09:58 CST |
| start   | --apiserver-ips=127.0.0.1 --apiserver-ips=30.249.128.230                  | minikube | yiji | v1.29.0 | 23 May 24 09:58 CST | 23 May 24 09:59 CST |
|         | --apiserver-names=127.0.0.1 --apiserver-names=30.249.128.230              |          |      |         |                     |                     |
|         | --embed-certs=true --driver=docker --mount=true                           |          |      |         |                     |                     |
|         | --mount-string=/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data |          |      |         |                     |                     |
|         | --image-mirror-country=cn --listen-address=0.0.0.0 --ports=8443:8443      |          |      |         |                     |                     |
|         | --static-ip=192.168.200.200 --addons=metrics-server                       |          |      |         |                     |                     |
|         | --kubernetes-version=v1.23.8 --memory=max --cpus=max --disk-size=60g -o   |          |      |         |                     |                     |
|         | json                                                                      |          |      |         |                     |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 09:59 CST |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 23 May 24 10:14 CST | 23 May 24 10:14 CST |
| start   | --apiserver-ips=127.0.0.1 --apiserver-ips=30.249.128.230                  | minikube | yiji | v1.29.0 | 23 May 24 10:14 CST | 23 May 24 10:15 CST |
|         | --apiserver-names=127.0.0.1 --apiserver-names=30.249.128.230              |          |      |         |                     |                     |
|         | --embed-certs=true --driver=docker --mount=true                           |          |      |         |                     |                     |
|         | --mount-string=/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data |          |      |         |                     |                     |
|         | --image-mirror-country=cn --listen-address=0.0.0.0 --ports=8443:8443      |          |      |         |                     |                     |
|         | --static-ip=192.168.200.200 --addons=metrics-server                       |          |      |         |                     |                     |
|         | --kubernetes-version=v1.23.8 --memory=max --cpus=max --disk-size=60g -o   |          |      |         |                     |                     |
|         | json                                                                      |          |      |         |                     |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 10:15 CST |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 23 May 24 10:18 CST | 23 May 24 10:18 CST |
| start   | --apiserver-ips=127.0.0.1 --apiserver-ips=30.249.128.230                  | minikube | yiji | v1.29.0 | 23 May 24 10:18 CST | 23 May 24 10:19 CST |
|         | --apiserver-names=127.0.0.1 --apiserver-names=30.249.128.230              |          |      |         |                     |                     |
|         | --embed-certs=true --driver=docker --mount=true                           |          |      |         |                     |                     |
|         | --mount-string=/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data |          |      |         |                     |                     |
|         | --image-mirror-country=cn --listen-address=0.0.0.0 --ports=8443:8443      |          |      |         |                     |                     |
|         | --static-ip=192.168.200.200 --addons=metrics-server                       |          |      |         |                     |                     |
|         | --kubernetes-version=v1.23.8 --memory=max --cpus=max --disk-size=60g -o   |          |      |         |                     |                     |
|         | json                                                                      |          |      |         |                     |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 10:19 CST |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 23 May 24 10:29 CST | 23 May 24 10:29 CST |
| start   | --apiserver-ips=127.0.0.1 --apiserver-ips=30.249.128.230                  | minikube | yiji | v1.29.0 | 23 May 24 10:29 CST | 23 May 24 10:29 CST |
|         | --apiserver-names=127.0.0.1 --apiserver-names=30.249.128.230              |          |      |         |                     |                     |
|         | --embed-certs=true --driver=docker --mount=true                           |          |      |         |                     |                     |
|         | --mount-string=/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data |          |      |         |                     |                     |
|         | --image-mirror-country=cn --listen-address=0.0.0.0 --ports=8443:8443      |          |      |         |                     |                     |
|         | --static-ip=192.168.200.200 --addons=metrics-server                       |          |      |         |                     |                     |
|         | --kubernetes-version=v1.23.8 --memory=max --cpus=max --disk-size=60g -o   |          |      |         |                     |                     |
|         | json                                                                      |          |      |         |                     |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 10:29 CST |                     |
| image   | load mysql:mysql:8.0.34                                                   | minikube | yiji | v1.29.0 | 23 May 24 10:46 CST |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 10:46 CST |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 23 May 24 11:08 CST |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 23 May 24 11:12 CST |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 23 May 24 11:12 CST | 23 May 24 11:12 CST |
| start   | --apiserver-ips=127.0.0.1 --apiserver-ips=30.249.128.230                  | minikube | yiji | v1.29.0 | 23 May 24 11:12 CST |                     |
|         | --apiserver-names=127.0.0.1 --apiserver-names=30.249.128.230              |          |      |         |                     |                     |
|         | --embed-certs=true --driver=docker --mount=true                           |          |      |         |                     |                     |
|         | --mount-string=/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data |          |      |         |                     |                     |
|         | --image-mirror-country=cn --listen-address=0.0.0.0 --ports=8443:8443      |          |      |         |                     |                     |
|         | --static-ip=192.168.200.200 --addons=metrics-server                       |          |      |         |                     |                     |
|         | --kubernetes-version=v1.23.8 --memory=max --cpus=max --disk-size=60g -o   |          |      |         |                     |                     |
|         | json                                                                      |          |      |         |                     |                     |
| delete  | -o json                                                                   | minikube | yiji | v1.29.0 | 23 May 24 11:12 CST | 23 May 24 11:12 CST |
| start   | --apiserver-ips=127.0.0.1 --apiserver-ips=30.249.128.230                  | minikube | yiji | v1.29.0 | 23 May 24 11:12 CST | 23 May 24 11:12 CST |
|         | --apiserver-names=127.0.0.1 --apiserver-names=30.249.128.230              |          |      |         |                     |                     |
|         | --embed-certs=true --driver=docker --mount=true                           |          |      |         |                     |                     |
|         | --mount-string=/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data |          |      |         |                     |                     |
|         | --image-mirror-country=cn --listen-address=0.0.0.0 --ports=8443:8443      |          |      |         |                     |                     |
|         | --static-ip=192.168.200.200 --addons=metrics-server                       |          |      |         |                     |                     |
|         | --kubernetes-version=v1.23.8 --memory=max --cpus=max --disk-size=60g -o   |          |      |         |                     |                     |
|         | json                                                                      |          |      |         |                     |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 11:12 CST |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 11:13 CST |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 11:14 CST |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 11:15 CST |                     |
| image   | load mysql:8.0.34                                                         | minikube | yiji | v1.29.0 | 23 May 24 11:17 CST |                     |
|---------|---------------------------------------------------------------------------|----------|------|---------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2024/05/23 11:12:30
Running on machine: yiji-m1
Binary: Built with gc go1.19.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0523 11:12:30.950238   30955 out.go:296] Setting OutFile to fd 1 ...
I0523 11:12:30.950357   30955 out.go:348] isatty.IsTerminal(1) = false
I0523 11:12:30.950358   30955 out.go:309] Setting ErrFile to fd 2...
I0523 11:12:30.950361   30955 out.go:348] isatty.IsTerminal(2) = false
I0523 11:12:30.950447   30955 root.go:334] Updating PATH: /Users/yiji/.minikube/bin
I0523 11:12:30.950864   30955 out.go:303] Setting JSON to true
I0523 11:12:30.989503   30955 start.go:125] hostinfo: {"hostname":"yiji-m1.local","uptime":5839,"bootTime":1716428111,"procs":326,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"beae6b46-0fbc-5c7d-afb0-06496839392a"}
W0523 11:12:30.989592   30955 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0523 11:12:30.995099   30955 out.go:97] Darwin 13.1 (arm64) 上的 minikube v1.29.0
W0523 11:12:30.995488   30955 preload.go:295] Failed to list preload files: open /Users/yiji/.minikube/cache/preloaded-tarball: no such file or directory
I0523 11:12:30.995790   30955 notify.go:220] Checking for updates...
I0523 11:12:30.996527   30955 driver.go:365] Setting default libvirt URI to qemu:///system
I0523 11:12:31.065978   30955 docker.go:141] docker version: linux-26.1.1:Docker Desktop 4.30.0 (149282)
I0523 11:12:31.066129   30955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0523 11:12:31.404949   30955 info.go:266] docker info: {ID:cfc601b5-4321-4334-8066-96aea69d734f Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:24 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:77 SystemTime:2024-05-23 03:12:31.381329549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:6.6.26-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:8326602752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/yiji/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:true ServerVersion:26.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} 
RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/yiji/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0-desktop.1] map[Err:plugin candidate "buildx.zip" did not match "^[a-z][a-z0-9]*$" Name:buildx.zip Path:/Users/yiji/.docker/cli-plugins/docker-buildx.zip] map[Name:compose Path:/Users/yiji/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0-desktop.2] map[Name:debug Path:/Users/yiji/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.29] map[Name:dev Path:/Users/yiji/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/yiji/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/yiji/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. 
Version:v1.0.4] map[Name:init Path:/Users/yiji/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/yiji/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/yiji/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.8.0]] Warnings:<nil>}}
I0523 11:12:31.410111   30955 out.go:97] Using the docker driver based on user configuration
I0523 11:12:31.410344   30955 start.go:296] selected driver: docker
I0523 11:12:31.410359   30955 start.go:857] validating driver "docker" against <nil>
I0523 11:12:31.410365   30955 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0523 11:12:31.421916   30955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0523 11:12:31.531316   30955 info.go:266] docker info: {ID:cfc601b5-4321-4334-8066-96aea69d734f Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:24 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:77 SystemTime:2024-05-23 03:12:31.51327159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:6.6.26-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:8326602752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/yiji/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:true ServerVersion:26.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} 
RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/yiji/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0-desktop.1] map[Err:plugin candidate "buildx.zip" did not match "^[a-z][a-z0-9]*$" Name:buildx.zip Path:/Users/yiji/.docker/cli-plugins/docker-buildx.zip] map[Name:compose Path:/Users/yiji/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0-desktop.2] map[Name:debug Path:/Users/yiji/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.29] map[Name:dev Path:/Users/yiji/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/yiji/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/yiji/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. 
Version:v1.0.4] map[Name:init Path:/Users/yiji/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/yiji/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/yiji/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.8.0]] Warnings:<nil>}}
I0523 11:12:31.531472   30955 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
I0523 11:12:31.532360   30955 start.go:927] selecting image repository for country cn ...
I0523 11:12:32.049458   30955 out.go:169] Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
I0523 11:12:32.054803   30955 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
I0523 11:12:32.061788   30955 out.go:169] Using Docker Desktop driver with root privileges
I0523 11:12:32.068935   30955 cni.go:84] Creating CNI manager for ""
I0523 11:12:32.069341   30955 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0523 11:12:32.069361   30955 start_flags.go:319] config:
{Name:minikube KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:7940 CPUs:8 DiskSize:61440 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.8 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[127.0.0.1 30.249.128.230] APIServerIPs:[127.0.0.1 30.249.128.230] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[8443:8443] ListenAddress:0.0.0.0 Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:true MountString:/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: 
SocketVMnetClientPath: SocketVMnetPath: StaticIP:192.168.200.200}
I0523 11:12:32.075520   30955 out.go:97] Starting control plane node minikube in cluster minikube
I0523 11:12:32.075620   30955 cache.go:120] Beginning downloading kic base image for docker with docker
I0523 11:12:32.079504   30955 out.go:97] Pulling base image ...
I0523 11:12:32.080632   30955 image.go:77] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
I0523 11:12:32.084023   30955 profile.go:148] Saving config to /Users/yiji/.minikube/profiles/minikube/config.json ...
I0523 11:12:32.084174   30955 lock.go:35] WriteFile acquiring /Users/yiji/.minikube/profiles/minikube/config.json: {Name:mkca8eaba949c25c204614ee503385854d5843a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:32.084549   30955 cache.go:107] acquiring lock: {Name:mke9ce6e78e90554ec704b93cee851e6fe37542e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0523 11:12:32.084577   30955 cache.go:107] acquiring lock: {Name:mk36f366c22f1431593ed0acf435338eb26c1b85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0523 11:12:32.084681   30955 cache.go:107] acquiring lock: {Name:mk3cb309a248545637fe9ffe0577cecf45ec3d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0523 11:12:32.085081   30955 cache.go:107] acquiring lock: {Name:mke10f4543a1dafd0a5d03b1c4c9fee654f96e5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0523 11:12:32.085250   30955 cache.go:107] acquiring lock: {Name:mk9eab9ab54bbd86667af6e8a215e3a5a94f777f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0523 11:12:32.085315   30955 cache.go:107] acquiring lock: {Name:mk065e3e75f85a11db17e895167700d52bb6c7f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0523 11:12:32.085247   30955 cache.go:107] acquiring lock: {Name:mkf8e0b9a109e566434f98e62aff9ecbacc95eea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0523 11:12:32.085415   30955 cache.go:107] acquiring lock: {Name:mk49737549d49ceb1844b19e0fee213b6844d518 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0523 11:12:32.085562   30955 cache.go:115] /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.8 exists
I0523 11:12:32.085670   30955 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8" -> "/Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.8" took 1.049917ms
I0523 11:12:32.085684   30955 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8 -> /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.8 succeeded
I0523 11:12:32.085677   30955 cache.go:115] /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.8 exists
I0523 11:12:32.085681   30955 cache.go:115] /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v5 exists
I0523 11:12:32.085956   30955 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5" -> "/Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v5" took 1.44525ms
I0523 11:12:32.085967   30955 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 -> /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v5 succeeded
I0523 11:12:32.085892   30955 cache.go:115] /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.8 exists
I0523 11:12:32.086047   30955 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8" -> "/Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.8" took 1.283417ms
I0523 11:12:32.086022   30955 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8" -> "/Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.8" took 1.177375ms
I0523 11:12:32.086056   30955 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8 -> /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.8 succeeded
I0523 11:12:32.086068   30955 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8 -> /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.8 succeeded
I0523 11:12:32.086107   30955 cache.go:115] /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_v1.8.6 exists
I0523 11:12:32.086137   30955 cache.go:115] /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6 exists
I0523 11:12:32.086145   30955 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6" -> "/Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_v1.8.6" took 1.037167ms
I0523 11:12:32.086163   30955 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6 -> /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_v1.8.6 succeeded
I0523 11:12:32.086175   30955 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" -> "/Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6" took 1.1385ms
I0523 11:12:32.086188   30955 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 -> /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6 succeeded
I0523 11:12:32.086242   30955 cache.go:115] /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.8 exists
I0523 11:12:32.086244   30955 cache.go:115] /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0 exists
I0523 11:12:32.086264   30955 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8" -> "/Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.8" took 1.364167ms
I0523 11:12:32.086271   30955 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8 -> /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.8 succeeded
I0523 11:12:32.086266   30955 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0" -> "/Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0" took 1.129875ms
I0523 11:12:32.086275   30955 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 -> /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0 succeeded
I0523 11:12:32.086286   30955 cache.go:87] Successfully saved all images to host disk.
I0523 11:12:32.162604   30955 cache.go:148] Downloading registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
I0523 11:12:32.162810   30955 image.go:61] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local cache directory
I0523 11:12:32.162884   30955 image.go:119] Writing registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
I0523 11:12:32.250313   30955 cache.go:167] failed to download registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15, will try fallback image if available: getting remote image: GET https://registry.cn-hangzhou.aliyuncs.com/v2/google_containers/kicbase/manifests/sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15: MANIFEST_UNKNOWN: manifest unknown; map[Name:google_containers/kicbase Revision:sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15]
I0523 11:12:32.250331   30955 image.go:77] Checking for docker.io/kicbase/stable:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
I0523 11:12:32.315739   30955 image.go:81] Found docker.io/kicbase/stable:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
I0523 11:12:32.315755   30955 cache.go:143] docker.io/kicbase/stable:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
W0523 11:12:32.319577   30955 out.go:272] minikube was unable to download registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.37, but successfully downloaded docker.io/kicbase/stable:v0.0.37 as a fallback image
I0523 11:12:32.319598   30955 cache.go:193] Successfully downloaded all kic artifacts
I0523 11:12:32.319823   30955 start.go:364] acquiring machines lock for minikube: {Name:mka60560878273ead321a0763f413b40b16ae274 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0523 11:12:32.319940   30955 start.go:368] acquired machines lock for "minikube" in 105.667µs
I0523 11:12:32.319960   30955 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:docker.io/kicbase/stable:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:7940 CPUs:8 DiskSize:61440 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.8 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[127.0.0.1 30.249.128.230] APIServerIPs:[127.0.0.1 30.249.128.230] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.8 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[8443:8443] ListenAddress:0.0.0.0 Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:true MountString:/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data Mount9PVersion:9p2000.L MountGID:docker 
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:192.168.200.200} &{Name: IP: Port:8443 KubernetesVersion:v1.23.8 ContainerRuntime:docker ControlPlane:true Worker:true}
I0523 11:12:32.320005   30955 start.go:125] createHost starting for "" (driver="docker")
I0523 11:12:32.326436   30955 out.go:97] Creating docker container (CPUs=8, Memory=7940MB) ...
I0523 11:12:32.326592   30955 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0523 11:12:32.326614   30955 client.go:168] LocalClient.Create starting
I0523 11:12:32.326732   30955 main.go:141] libmachine: Reading certificate data from /Users/yiji/.minikube/certs/ca.pem
I0523 11:12:32.327903   30955 main.go:141] libmachine: Decoding PEM data...
I0523 11:12:32.327913   30955 main.go:141] libmachine: Parsing certificate...
I0523 11:12:32.328160   30955 main.go:141] libmachine: Reading certificate data from /Users/yiji/.minikube/certs/cert.pem
I0523 11:12:32.328325   30955 main.go:141] libmachine: Decoding PEM data...
I0523 11:12:32.328331   30955 main.go:141] libmachine: Parsing certificate...
I0523 11:12:32.328741   30955 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0523 11:12:32.373659   30955 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0523 11:12:32.373766   30955 network_create.go:281] running [docker network inspect minikube] to gather additional debugging logs...
I0523 11:12:32.373778   30955 cli_runner.go:164] Run: docker network inspect minikube
W0523 11:12:32.418934   30955 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0523 11:12:32.418955   30955 network_create.go:284] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error response from daemon: network minikube not found
I0523 11:12:32.418961   30955 network_create.go:286] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network minikube not found

** /stderr **
I0523 11:12:32.419058   30955 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0523 11:12:32.469350   30955 network.go:206] using free private subnet 192.168.200.0/24: &{IP:192.168.200.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.200.0/24 Gateway:192.168.200.1 ClientMin:192.168.200.2 ClientMax:192.168.200.254 Broadcast:192.168.200.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x14001317480}
I0523 11:12:32.469371   30955 network_create.go:123] attempt to create docker network minikube 192.168.200.0/24 with gateway 192.168.200.1 and MTU of 65535 ...
I0523 11:12:32.469433   30955 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.200.0/24 --gateway=192.168.200.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0523 11:12:32.545089   30955 network_create.go:107] docker network minikube 192.168.200.0/24 created
I0523 11:12:32.550746   30955 out.go:97] minikube is not meant for production use. You are opening non-local traffic
W0523 11:12:32.554809   30955 out.go:272] Listening to 0.0.0.0. This is not recommended and can cause a security vulnerability. Use at your own risk
I0523 11:12:32.554940   30955 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0523 11:12:32.598542   30955 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0523 11:12:32.651376   30955 oci.go:103] Successfully created a docker volume minikube
I0523 11:12:32.651504   30955 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var docker.io/kicbase/stable:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
I0523 11:12:33.383627   30955 oci.go:107] Successfully prepared a docker volume minikube
I0523 11:12:33.383677   30955 preload.go:132] Checking if preload exists for k8s version v1.23.8 and runtime docker
W0523 11:12:33.473585   30955 preload.go:115] https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube-preloaded-volume-tarballs/v18/v1.23.8/preloaded-images-k8s-v18-v1.23.8-docker-overlay2-arm64.tar.lz4 status code: 404
I0523 11:12:33.473782   30955 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0523 11:12:33.571325   30955 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.200.200 --volume minikube:/var --security-opt apparmor=unconfined --memory=7940mb --memory-swap=7940mb --cpus=8 -e container=docker --expose 8443 -p 8443:8443 --volume=/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data --publish=0.0.0.0::8443 --publish=0.0.0.0::22 --publish=0.0.0.0::2376 --publish=0.0.0.0::5000 --publish=0.0.0.0::32443 docker.io/kicbase/stable:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
I0523 11:12:33.760184   30955 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0523 11:12:33.802536   30955 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 11:12:33.846004   30955 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0523 11:12:33.926504   30955 oci.go:144] the created container "minikube" has a running status.
I0523 11:12:33.926531   30955 kic.go:221] Creating ssh key for kic: /Users/yiji/.minikube/machines/minikube/id_rsa...
I0523 11:12:33.984846   30955 kic_runner.go:191] docker (temp): /Users/yiji/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0523 11:12:34.049441   30955 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 11:12:34.120400   30955 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0523 11:12:34.120444   30955 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0523 11:12:34.187974   30955 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 11:12:34.226504   30955 machine.go:88] provisioning docker machine ...
I0523 11:12:34.226545   30955 ubuntu.go:169] provisioning hostname "minikube"
I0523 11:12:34.226687   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:34.268697   30955 main.go:141] libmachine: Using SSH client type: native
I0523 11:12:34.268875   30955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025a1d30] 0x1025a47d0 <nil>  [] 0s} 127.0.0.1 52261 <nil> <nil>}
I0523 11:12:34.268883   30955 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0523 11:12:34.399946   30955 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0523 11:12:34.400061   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:34.465810   30955 main.go:141] libmachine: Using SSH client type: native
I0523 11:12:34.466009   30955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025a1d30] 0x1025a47d0 <nil>  [] 0s} 127.0.0.1 52261 <nil> <nil>}
I0523 11:12:34.466016   30955 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0523 11:12:34.581955   30955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0523 11:12:34.581972   30955 ubuntu.go:175] set auth options {CertDir:/Users/yiji/.minikube CaCertPath:/Users/yiji/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/yiji/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/yiji/.minikube/machines/server.pem ServerKeyPath:/Users/yiji/.minikube/machines/server-key.pem ClientKeyPath:/Users/yiji/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/yiji/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/yiji/.minikube}
I0523 11:12:34.581986   30955 ubuntu.go:177] setting up certificates
I0523 11:12:34.581990   30955 provision.go:83] configureAuth start
I0523 11:12:34.582094   30955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0523 11:12:34.630375   30955 provision.go:138] copyHostCerts
I0523 11:12:34.630497   30955 exec_runner.go:144] found /Users/yiji/.minikube/ca.pem, removing ...
I0523 11:12:34.630500   30955 exec_runner.go:207] rm: /Users/yiji/.minikube/ca.pem
I0523 11:12:34.630609   30955 exec_runner.go:151] cp: /Users/yiji/.minikube/certs/ca.pem --> /Users/yiji/.minikube/ca.pem (1070 bytes)
I0523 11:12:34.631787   30955 exec_runner.go:144] found /Users/yiji/.minikube/cert.pem, removing ...
I0523 11:12:34.631790   30955 exec_runner.go:207] rm: /Users/yiji/.minikube/cert.pem
I0523 11:12:34.631895   30955 exec_runner.go:151] cp: /Users/yiji/.minikube/certs/cert.pem --> /Users/yiji/.minikube/cert.pem (1115 bytes)
I0523 11:12:34.632038   30955 exec_runner.go:144] found /Users/yiji/.minikube/key.pem, removing ...
I0523 11:12:34.632040   30955 exec_runner.go:207] rm: /Users/yiji/.minikube/key.pem
I0523 11:12:34.632089   30955 exec_runner.go:151] cp: /Users/yiji/.minikube/certs/key.pem --> /Users/yiji/.minikube/key.pem (1679 bytes)
I0523 11:12:34.632403   30955 provision.go:112] generating server cert: /Users/yiji/.minikube/machines/server.pem ca-key=/Users/yiji/.minikube/certs/ca.pem private-key=/Users/yiji/.minikube/certs/ca-key.pem org=yiji.minikube san=[192.168.200.200 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0523 11:12:34.721979   30955 provision.go:172] copyRemoteCerts
I0523 11:12:34.722228   30955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0523 11:12:34.722273   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:34.764999   30955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52261 SSHKeyPath:/Users/yiji/.minikube/machines/minikube/id_rsa Username:docker}
I0523 11:12:34.849360   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0523 11:12:34.860716   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0523 11:12:34.869718   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0523 11:12:34.877876   30955 provision.go:86] duration metric: configureAuth took 295.874666ms
I0523 11:12:34.877884   30955 ubuntu.go:193] setting minikube options for container-runtime
I0523 11:12:34.878040   30955 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.8
I0523 11:12:34.878113   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:34.942789   30955 main.go:141] libmachine: Using SSH client type: native
I0523 11:12:34.942943   30955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025a1d30] 0x1025a47d0 <nil>  [] 0s} 127.0.0.1 52261 <nil> <nil>}
I0523 11:12:34.942950   30955 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0523 11:12:35.061822   30955 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay

I0523 11:12:35.061834   30955 ubuntu.go:71] root file system type: overlay
I0523 11:12:35.062153   30955 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0523 11:12:35.062346   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:35.127948   30955 main.go:141] libmachine: Using SSH client type: native
I0523 11:12:35.128092   30955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025a1d30] 0x1025a47d0 <nil>  [] 0s} 127.0.0.1 52261 <nil> <nil>}
I0523 11:12:35.128135   30955 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0523 11:12:35.266266   30955 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0523 11:12:35.266421   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:35.356763   30955 main.go:141] libmachine: Using SSH client type: native
I0523 11:12:35.356943   30955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025a1d30] 0x1025a47d0 <nil>  [] 0s} 127.0.0.1 52261 <nil> <nil>}
I0523 11:12:35.356951   30955 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0523 11:12:35.782668   30955 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:31:11.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2024-05-23 03:12:35.263243009 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0523 11:12:35.782683   30955 machine.go:91] provisioned docker machine in 1.55615375s
I0523 11:12:35.782687   30955 client.go:171] LocalClient.Create took 3.456055417s
I0523 11:12:35.782699   30955 start.go:167] duration metric: libmachine.API.Create for "minikube" took 3.456091834s
I0523 11:12:35.782701   30955 start.go:300] post-start starting for "minikube" (driver="docker")
I0523 11:12:35.782704   30955 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0523 11:12:35.782842   30955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0523 11:12:35.782906   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:35.839553   30955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52261 SSHKeyPath:/Users/yiji/.minikube/machines/minikube/id_rsa Username:docker}
I0523 11:12:35.926841   30955 ssh_runner.go:195] Run: cat /etc/os-release
I0523 11:12:35.929215   30955 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0523 11:12:35.929233   30955 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0523 11:12:35.929240   30955 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0523 11:12:35.929244   30955 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0523 11:12:35.929250   30955 filesync.go:126] Scanning /Users/yiji/.minikube/addons for local assets ...
I0523 11:12:35.929588   30955 filesync.go:126] Scanning /Users/yiji/.minikube/files for local assets ...
I0523 11:12:35.929788   30955 filesync.go:149] local asset: /Users/yiji/.minikube/files/etc/hosts -> hosts in /etc
I0523 11:12:35.929827   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/files/etc/hosts --> /etc/hosts (40 bytes)
I0523 11:12:35.942217   30955 start.go:303] post-start completed in 159.504792ms
I0523 11:12:35.942831   30955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0523 11:12:35.993986   30955 profile.go:148] Saving config to /Users/yiji/.minikube/profiles/minikube/config.json ...
I0523 11:12:35.994824   30955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0523 11:12:35.994878   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:36.036396   30955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52261 SSHKeyPath:/Users/yiji/.minikube/machines/minikube/id_rsa Username:docker}
I0523 11:12:36.124572   30955 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0523 11:12:36.128990   30955 start.go:128] duration metric: createHost completed in 3.808964334s
I0523 11:12:36.128997   30955 start.go:83] releasing machines lock for "minikube", held for 3.809036334s
I0523 11:12:36.129091   30955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0523 11:12:36.183862   30955 ssh_runner.go:195] Run: cat /version.json
I0523 11:12:36.183939   30955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.cn-hangzhou.aliyuncs.com/google_containers/
I0523 11:12:36.183945   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:36.184744   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:36.249233   30955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52261 SSHKeyPath:/Users/yiji/.minikube/machines/minikube/id_rsa Username:docker}
I0523 11:12:36.249235   30955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52261 SSHKeyPath:/Users/yiji/.minikube/machines/minikube/id_rsa Username:docker}
I0523 11:12:36.508225   30955 ssh_runner.go:195] Run: systemctl --version
I0523 11:12:36.517168   30955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0523 11:12:36.524614   30955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0523 11:12:36.550803   30955 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0523 11:12:36.551221   30955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0523 11:12:36.564911   30955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0523 11:12:36.572722   30955 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0523 11:12:36.572755   30955 start.go:483] detecting cgroup driver to use...
I0523 11:12:36.572792   30955 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0523 11:12:36.573268   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0523 11:12:36.585267   30955 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"|' /etc/containerd/config.toml"
I0523 11:12:36.592204   30955 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0523 11:12:36.598874   30955 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0523 11:12:36.599143   30955 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0523 11:12:36.606491   30955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0523 11:12:36.612176   30955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0523 11:12:36.617204   30955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0523 11:12:36.621854   30955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0523 11:12:36.626087   30955 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0523 11:12:36.630339   30955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0523 11:12:36.634275   30955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0523 11:12:36.637894   30955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0523 11:12:36.669043   30955 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0523 11:12:36.714959   30955 start.go:483] detecting cgroup driver to use...
I0523 11:12:36.714986   30955 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0523 11:12:36.715196   30955 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0523 11:12:36.722022   30955 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0523 11:12:36.722216   30955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0523 11:12:36.728587   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0523 11:12:36.735850   30955 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0523 11:12:36.770604   30955 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0523 11:12:36.812576   30955 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0523 11:12:36.812602   30955 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0523 11:12:36.822795   30955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0523 11:12:36.859419   30955 ssh_runner.go:195] Run: sudo systemctl restart docker
I0523 11:12:36.962416   30955 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0523 11:12:37.030930   30955 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0523 11:12:37.054936   30955 out.go:97] Preparing Kubernetes v1.23.8 on Docker 20.10.23 ...
I0523 11:12:37.055097   30955 cli_runner.go:164] Run: docker exec -t minikube dig +short host.docker.internal
I0523 11:12:37.142981   30955 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0523 11:12:37.143166   30955 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
I0523 11:12:37.145156   30955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0523 11:12:37.150267   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0523 11:12:37.190556   30955 preload.go:132] Checking if preload exists for k8s version v1.23.8 and runtime docker
I0523 11:12:37.190620   30955 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0523 11:12:37.208298   30955 docker.go:630] Got preloaded images: 
I0523 11:12:37.208308   30955 docker.go:636] registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8 wasn't preloaded
I0523 11:12:37.208311   30955 cache_images.go:88] LoadImages start: [registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6 registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5]
I0523 11:12:37.212521   30955 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8
I0523 11:12:37.212527   30955 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8
I0523 11:12:37.212554   30955 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
I0523 11:12:37.212619   30955 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
I0523 11:12:37.212749   30955 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8
I0523 11:12:37.212830   30955 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
I0523 11:12:37.212847   30955 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8
I0523 11:12:37.212984   30955 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
I0523 11:12:37.217901   30955 image.go:177] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6: Error: No such image: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
I0523 11:12:37.218124   30955 image.go:177] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0: Error: No such image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
I0523 11:12:37.219017   30955 image.go:177] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8: Error: No such image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8
I0523 11:12:37.219259   30955 image.go:177] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8: Error: No such image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8
I0523 11:12:37.219267   30955 image.go:177] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6: Error: No such image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
I0523 11:12:37.219296   30955 image.go:177] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8: Error: No such image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8
I0523 11:12:37.219299   30955 image.go:177] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8: Error: No such image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8
I0523 11:12:37.224984   30955 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5" does not exist at hash "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6" in container runtime
I0523 11:12:37.225027   30955 docker.go:306] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
I0523 11:12:37.225095   30955 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
I0523 11:12:37.236010   30955 cache_images.go:286] Loading image from: /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v5
I0523 11:12:37.236504   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
I0523 11:12:37.238401   30955 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I0523 11:12:37.238427   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
I0523 11:12:37.375354   30955 docker.go:273] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0523 11:12:37.375390   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
I0523 11:12:37.563795   30955 cache_images.go:315] Transferred and loaded /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v5 from cache
I0523 11:12:37.701409   30955 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
I0523 11:12:37.713689   30955 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
I0523 11:12:37.716746   30955 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" does not exist at hash "7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e" in container runtime
I0523 11:12:37.716771   30955 docker.go:306] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
I0523 11:12:37.716844   30955 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
I0523 11:12:37.729023   30955 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0" does not exist at hash "1040f7790951c9d14469b9c1fb94f8e6212b17ad124055e4a5c8456ee8ef5d7e" in container runtime
I0523 11:12:37.729057   30955 docker.go:306] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
I0523 11:12:37.729143   30955 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
I0523 11:12:37.737506   30955 cache_images.go:286] Loading image from: /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6
I0523 11:12:37.737731   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.6
I0523 11:12:37.742101   30955 cache_images.go:286] Loading image from: /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0
I0523 11:12:37.742112   30955 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.6: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.6': No such file or directory
I0523 11:12:37.742131   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6 --> /var/lib/minikube/images/pause_3.6 (252416 bytes)
I0523 11:12:37.742238   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.1-0
I0523 11:12:37.745736   30955 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.1-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.1-0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.5.1-0': No such file or directory
I0523 11:12:37.745755   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0 --> /var/lib/minikube/images/etcd_3.5.1-0 (59557888 bytes)
I0523 11:12:37.755450   30955 docker.go:273] Loading image: /var/lib/minikube/images/pause_3.6
I0523 11:12:37.755479   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.6 | docker load"
I0523 11:12:37.857830   30955 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8
I0523 11:12:37.882119   30955 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8
I0523 11:12:37.889102   30955 cache_images.go:315] Transferred and loaded /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6 from cache
I0523 11:12:37.921228   30955 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8" does not exist at hash "a1c5e956efa93b048d5337ec3c8811c8d6f30b27c5ceb235fc270d60f1b0b3b8" in container runtime
I0523 11:12:37.921285   30955 docker.go:306] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8
I0523 11:12:37.921417   30955 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8
I0523 11:12:37.952198   30955 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8" does not exist at hash "ccc7707581070601ac61230edb2639d55414692157a7e48bcbe6e7c9629c9afe" in container runtime
I0523 11:12:37.952222   30955 docker.go:306] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8
I0523 11:12:37.952298   30955 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8
I0523 11:12:37.991357   30955 cache_images.go:286] Loading image from: /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.8
I0523 11:12:37.991721   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.23.8
I0523 11:12:38.008845   30955 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8
I0523 11:12:38.019093   30955 cache_images.go:286] Loading image from: /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.8
I0523 11:12:38.019295   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.23.8
W0523 11:12:38.041408   30955 image.go:265] image registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
I0523 11:12:38.041691   30955 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
I0523 11:12:38.066914   30955 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.23.8: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.23.8: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.23.8': No such file or directory
I0523 11:12:38.066950   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.8 --> /var/lib/minikube/images/kube-proxy_v1.23.8 (37931520 bytes)
I0523 11:12:38.083730   30955 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8" does not exist at hash "a6622f1ebe63944abf91aabc625ffc4607aeea14362b192af4c69a027d339702" in container runtime
I0523 11:12:38.083734   30955 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.23.8: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.23.8: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.23.8': No such file or directory
I0523 11:12:38.083752   30955 docker.go:306] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8
I0523 11:12:38.083769   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.8 --> /var/lib/minikube/images/kube-controller-manager_v1.23.8 (27493376 bytes)
I0523 11:12:38.083816   30955 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8
I0523 11:12:38.121259   30955 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8
I0523 11:12:38.128429   30955 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
I0523 11:12:38.128471   30955 docker.go:306] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
I0523 11:12:38.128557   30955 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
I0523 11:12:38.197207   30955 cache_images.go:286] Loading image from: /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.8
I0523 11:12:38.197586   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.23.8
I0523 11:12:38.223591   30955 cache_images.go:116] "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8" needs transfer: "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8" does not exist at hash "915f4d226acb3bc5681061d57b1dc9b77689ba1663b613180f4a7facd7d9d623" in container runtime
I0523 11:12:38.223621   30955 docker.go:306] Removing image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8
I0523 11:12:38.223702   30955 ssh_runner.go:195] Run: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8
I0523 11:12:38.232802   30955 cache_images.go:286] Loading image from: /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_v1.8.6
I0523 11:12:38.233005   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
I0523 11:12:38.270723   30955 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.23.8: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.23.8: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.23.8': No such file or directory
I0523 11:12:38.270800   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.8 --> /var/lib/minikube/images/kube-apiserver_v1.23.8 (29742592 bytes)
I0523 11:12:38.299174   30955 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
I0523 11:12:38.299218   30955 cache_images.go:286] Loading image from: /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.8
I0523 11:12:38.299222   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
I0523 11:12:38.299516   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.23.8
I0523 11:12:38.369120   30955 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.23.8: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.23.8: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.23.8': No such file or directory
I0523 11:12:38.369163   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.8 --> /var/lib/minikube/images/kube-scheduler_v1.23.8 (13765632 bytes)
I0523 11:12:39.506566   30955 docker.go:273] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I0523 11:12:39.506602   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
I0523 11:12:39.948960   30955 cache_images.go:315] Transferred and loaded /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_v1.8.6 from cache
I0523 11:12:39.949010   30955 docker.go:273] Loading image: /var/lib/minikube/images/kube-scheduler_v1.23.8
I0523 11:12:39.949051   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.23.8 | docker load"
I0523 11:12:40.526228   30955 cache_images.go:315] Transferred and loaded /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.8 from cache
I0523 11:12:40.526271   30955 docker.go:273] Loading image: /var/lib/minikube/images/etcd_3.5.1-0
I0523 11:12:40.526289   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.1-0 | docker load"
I0523 11:12:41.339322   30955 cache_images.go:315] Transferred and loaded /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0 from cache
I0523 11:12:41.339350   30955 docker.go:273] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.23.8
I0523 11:12:41.339366   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.23.8 | docker load"
I0523 11:12:41.801411   30955 cache_images.go:315] Transferred and loaded /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.8 from cache
I0523 11:12:41.801516   30955 docker.go:273] Loading image: /var/lib/minikube/images/kube-apiserver_v1.23.8
I0523 11:12:41.801530   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.23.8 | docker load"
I0523 11:12:42.494832   30955 cache_images.go:315] Transferred and loaded /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.8 from cache
I0523 11:12:42.494921   30955 docker.go:273] Loading image: /var/lib/minikube/images/kube-proxy_v1.23.8
I0523 11:12:42.494954   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.23.8 | docker load"
I0523 11:12:43.144427   30955 cache_images.go:315] Transferred and loaded /Users/yiji/.minikube/cache/images/arm64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.8 from cache
I0523 11:12:43.144524   30955 cache_images.go:123] Successfully loaded all cached images
I0523 11:12:43.144533   30955 cache_images.go:92] LoadImages completed in 5.93618475s
I0523 11:12:43.145089   30955 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0523 11:12:43.262752   30955 cni.go:84] Creating CNI manager for ""
I0523 11:12:43.262771   30955 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0523 11:12:43.263053   30955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0523 11:12:43.263080   30955 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.200.200 APIServerPort:8443 KubernetesVersion:v1.23.8 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.200.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.200.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0523 11:12:43.263460   30955 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.200
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.200.200
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.200.200"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.8
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0523 11:12:43.263666   30955 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.8/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.200.200 --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6

[Install]
 config:
{KubernetesVersion:v1.23.8 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[127.0.0.1 30.249.128.230] APIServerIPs:[127.0.0.1 30.249.128.230] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0523 11:12:43.263879   30955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.8
I0523 11:12:43.268680   30955 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.23.8: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.23.8': No such file or directory

Initiating transfer...
I0523 11:12:43.268864   30955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.23.8
I0523 11:12:43.274823   30955 binary.go:76] Not caching binary, using https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.23.8/bin/linux/arm64/kubectl?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.23.8/bin/linux/arm64/kubectl.sha256
I0523 11:12:43.274828   30955 binary.go:76] Not caching binary, using https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.23.8/bin/linux/arm64/kubelet?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.23.8/bin/linux/arm64/kubelet.sha256
I0523 11:12:43.275060   30955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0523 11:12:43.275254   30955 binary.go:76] Not caching binary, using https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.23.8/bin/linux/arm64/kubeadm?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.23.8/bin/linux/arm64/kubeadm.sha256
I0523 11:12:43.275291   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.8/kubectl
I0523 11:12:43.275624   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.8/kubeadm
I0523 11:12:43.283421   30955 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.8/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.8/kubectl: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.23.8/kubectl': No such file or directory
I0523 11:12:43.283458   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/linux/arm64/v1.23.8/kubectl --> /var/lib/minikube/binaries/v1.23.8/kubectl (46202880 bytes)
I0523 11:12:43.283518   30955 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.8/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.8/kubeadm: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.23.8/kubeadm': No such file or directory
I0523 11:12:43.283533   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/linux/arm64/v1.23.8/kubeadm --> /var/lib/minikube/binaries/v1.23.8/kubeadm (44826624 bytes)
I0523 11:12:43.283768   30955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.8/kubelet
I0523 11:12:43.286666   30955 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.8/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.8/kubelet: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.23.8/kubelet': No such file or directory
I0523 11:12:43.286760   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/cache/linux/arm64/v1.23.8/kubelet --> /var/lib/minikube/binaries/v1.23.8/kubelet (122389816 bytes)
I0523 11:12:46.500796   30955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0523 11:12:46.510885   30955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (427 bytes)
I0523 11:12:46.521423   30955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0523 11:12:46.531679   30955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
I0523 11:12:46.539939   30955 ssh_runner.go:195] Run: grep 192.168.200.200	control-plane.minikube.internal$ /etc/hosts
I0523 11:12:46.542256   30955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.200.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0523 11:12:46.555087   30955 certs.go:56] Setting up /Users/yiji/.minikube/profiles/minikube for IP: 192.168.200.200
I0523 11:12:46.555102   30955 certs.go:186] acquiring lock for shared ca certs: {Name:mkfe0855c1dd6689397175bcf6d7a137aff04be9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:46.556339   30955 certs.go:195] skipping minikubeCA CA generation: /Users/yiji/.minikube/ca.key
I0523 11:12:46.556601   30955 certs.go:195] skipping proxyClientCA CA generation: /Users/yiji/.minikube/proxy-client-ca.key
I0523 11:12:46.556673   30955 certs.go:315] generating minikube-user signed cert: /Users/yiji/.minikube/profiles/minikube/client.key
I0523 11:12:46.556685   30955 crypto.go:68] Generating cert /Users/yiji/.minikube/profiles/minikube/client.crt with IP's: []
I0523 11:12:46.637056   30955 crypto.go:156] Writing cert to /Users/yiji/.minikube/profiles/minikube/client.crt ...
I0523 11:12:46.637067   30955 lock.go:35] WriteFile acquiring /Users/yiji/.minikube/profiles/minikube/client.crt: {Name:mka9a082485fcf5df1ad0a9005d9a9a45f25ce32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:46.637369   30955 crypto.go:164] Writing key to /Users/yiji/.minikube/profiles/minikube/client.key ...
I0523 11:12:46.637382   30955 lock.go:35] WriteFile acquiring /Users/yiji/.minikube/profiles/minikube/client.key: {Name:mk5c3bc9cebb26175d675ffd2c3e3cd628246cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:46.637560   30955 certs.go:315] generating minikube signed cert: /Users/yiji/.minikube/profiles/minikube/apiserver.key.78d64015
I0523 11:12:46.637569   30955 crypto.go:68] Generating cert /Users/yiji/.minikube/profiles/minikube/apiserver.crt.78d64015 with IP's: [127.0.0.1 30.249.128.230 192.168.200.200 10.96.0.1 127.0.0.1 10.0.0.1]
I0523 11:12:46.828251   30955 crypto.go:156] Writing cert to /Users/yiji/.minikube/profiles/minikube/apiserver.crt.78d64015 ...
I0523 11:12:46.828262   30955 lock.go:35] WriteFile acquiring /Users/yiji/.minikube/profiles/minikube/apiserver.crt.78d64015: {Name:mkedc5327f968d0ca49a80da249618f239e44564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:46.828664   30955 crypto.go:164] Writing key to /Users/yiji/.minikube/profiles/minikube/apiserver.key.78d64015 ...
I0523 11:12:46.828672   30955 lock.go:35] WriteFile acquiring /Users/yiji/.minikube/profiles/minikube/apiserver.key.78d64015: {Name:mk2124899a23e210f4cd33ae04ac1d60dd9a8680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:46.828938   30955 certs.go:333] copying /Users/yiji/.minikube/profiles/minikube/apiserver.crt.78d64015 -> /Users/yiji/.minikube/profiles/minikube/apiserver.crt
I0523 11:12:46.829239   30955 certs.go:337] copying /Users/yiji/.minikube/profiles/minikube/apiserver.key.78d64015 -> /Users/yiji/.minikube/profiles/minikube/apiserver.key
I0523 11:12:46.829648   30955 certs.go:315] generating aggregator signed cert: /Users/yiji/.minikube/profiles/minikube/proxy-client.key
I0523 11:12:46.829660   30955 crypto.go:68] Generating cert /Users/yiji/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0523 11:12:46.910600   30955 crypto.go:156] Writing cert to /Users/yiji/.minikube/profiles/minikube/proxy-client.crt ...
I0523 11:12:46.910608   30955 lock.go:35] WriteFile acquiring /Users/yiji/.minikube/profiles/minikube/proxy-client.crt: {Name:mk214f9fef9adf07dae82d5a6e31cd9c87be8d02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:46.910904   30955 crypto.go:164] Writing key to /Users/yiji/.minikube/profiles/minikube/proxy-client.key ...
I0523 11:12:46.910907   30955 lock.go:35] WriteFile acquiring /Users/yiji/.minikube/profiles/minikube/proxy-client.key: {Name:mk3eb9d51b566c3fe551c1480eb7da55991a036b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:46.911394   30955 certs.go:401] found cert: /Users/yiji/.minikube/certs/Users/yiji/.minikube/certs/ca-key.pem (1675 bytes)
I0523 11:12:46.911662   30955 certs.go:401] found cert: /Users/yiji/.minikube/certs/Users/yiji/.minikube/certs/ca.pem (1070 bytes)
I0523 11:12:46.911825   30955 certs.go:401] found cert: /Users/yiji/.minikube/certs/Users/yiji/.minikube/certs/cert.pem (1115 bytes)
I0523 11:12:46.912043   30955 certs.go:401] found cert: /Users/yiji/.minikube/certs/Users/yiji/.minikube/certs/key.pem (1679 bytes)
I0523 11:12:46.912903   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1452 bytes)
I0523 11:12:46.941715   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0523 11:12:46.953074   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0523 11:12:46.965238   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0523 11:12:46.976366   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0523 11:12:46.988198   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0523 11:12:46.999546   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0523 11:12:47.010058   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0523 11:12:47.020670   30955 ssh_runner.go:362] scp /Users/yiji/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0523 11:12:47.029881   30955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0523 11:12:47.038351   30955 ssh_runner.go:195] Run: openssl version
I0523 11:12:47.043188   30955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0523 11:12:47.048017   30955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0523 11:12:47.050142   30955 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Sep 26  2023 /usr/share/ca-certificates/minikubeCA.pem
I0523 11:12:47.050191   30955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0523 11:12:47.052776   30955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0523 11:12:47.057208   30955 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:docker.io/kicbase/stable:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:7940 CPUs:8 DiskSize:61440 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.8 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[127.0.0.1 30.249.128.230] APIServerIPs:[127.0.0.1 30.249.128.230] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.200.200 Port:8443 KubernetesVersion:v1.23.8 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[8443:8443] ListenAddress:0.0.0.0 Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:true MountString:/Users/yiji/.minimesh/mysql/data:/minikube-host/mysql/data Mount9PVersion:9p2000.L MountGID:docker 
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:192.168.200.200}
I0523 11:12:47.057315   30955 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0523 11:12:47.083653   30955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0523 11:12:47.087678   30955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0523 11:12:47.091363   30955 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0523 11:12:47.091417   30955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0523 11:12:47.095175   30955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0523 11:12:47.095194   30955 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.8:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0523 11:12:47.131210   30955 kubeadm.go:322] [init] Using Kubernetes version: v1.23.8
I0523 11:12:47.131269   30955 kubeadm.go:322] [preflight] Running pre-flight checks
I0523 11:12:47.311010   30955 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0523 11:12:47.311151   30955 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0523 11:12:47.311319   30955 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0523 11:12:47.375762   30955 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0523 11:12:47.382476   30955 out.go:97] Generating certificates and keys ...
I0523 11:12:47.382694   30955 kubeadm.go:322] [certs] Using existing ca certificate authority
I0523 11:12:47.382855   30955 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0523 11:12:47.511104   30955 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0523 11:12:47.589759   30955 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0523 11:12:47.636440   30955 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0523 11:12:47.710107   30955 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0523 11:12:47.762555   30955 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0523 11:12:47.762740   30955 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.200.200 127.0.0.1 ::1]
I0523 11:12:48.013726   30955 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0523 11:12:48.013867   30955 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.200.200 127.0.0.1 ::1]
I0523 11:12:48.177999   30955 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0523 11:12:48.249072   30955 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0523 11:12:48.415487   30955 kubeadm.go:322] [certs] Generating "sa" key and public key
I0523 11:12:48.415616   30955 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0523 11:12:48.446050   30955 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0523 11:12:48.532100   30955 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0523 11:12:48.620464   30955 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0523 11:12:48.679147   30955 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0523 11:12:48.686797   30955 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0523 11:12:48.687038   30955 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0523 11:12:48.687160   30955 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0523 11:12:48.732860   30955 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0523 11:12:48.736912   30955 out.go:97] Booting up control plane ...
I0523 11:12:48.737037   30955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0523 11:12:48.737150   30955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0523 11:12:48.737222   30955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0523 11:12:48.737356   30955 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0523 11:12:48.737572   30955 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0523 11:12:53.263111   30955 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.513735 seconds
I0523 11:12:53.263292   30955 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0523 11:12:53.271078   30955 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
I0523 11:12:53.271363   30955 kubeadm.go:322] NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
I0523 11:12:53.780812   30955 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0523 11:12:53.781439   30955 kubeadm.go:322] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0523 11:12:54.297562   30955 kubeadm.go:322] [bootstrap-token] Using token: y3t1xb.trgz9veao3u8w7tz
I0523 11:12:54.306451   30955 out.go:97] Configuring RBAC rules ...
I0523 11:12:54.306649   30955 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0523 11:12:54.306747   30955 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0523 11:12:54.312980   30955 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0523 11:12:54.315125   30955 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0523 11:12:54.317274   30955 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0523 11:12:54.319598   30955 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0523 11:12:54.327142   30955 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0523 11:12:54.431386   30955 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0523 11:12:54.704650   30955 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0523 11:12:54.705326   30955 kubeadm.go:322] 
I0523 11:12:54.705408   30955 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0523 11:12:54.705414   30955 kubeadm.go:322] 
I0523 11:12:54.705548   30955 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0523 11:12:54.705554   30955 kubeadm.go:322] 
I0523 11:12:54.705591   30955 kubeadm.go:322]   mkdir -p $HOME/.kube
I0523 11:12:54.705669   30955 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0523 11:12:54.705736   30955 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0523 11:12:54.705741   30955 kubeadm.go:322] 
I0523 11:12:54.705824   30955 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0523 11:12:54.705829   30955 kubeadm.go:322] 
I0523 11:12:54.705893   30955 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
I0523 11:12:54.705900   30955 kubeadm.go:322] 
I0523 11:12:54.705970   30955 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0523 11:12:54.706079   30955 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0523 11:12:54.706173   30955 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0523 11:12:54.706177   30955 kubeadm.go:322] 
I0523 11:12:54.706289   30955 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0523 11:12:54.706418   30955 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0523 11:12:54.706423   30955 kubeadm.go:322] 
I0523 11:12:54.706535   30955 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y3t1xb.trgz9veao3u8w7tz \
I0523 11:12:54.706687   30955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:18fc904dca56514c14203ecbce5deedea59d26e5bbe06dfca9ffb34376f6214d \
I0523 11:12:54.706713   30955 kubeadm.go:322] 	--control-plane 
I0523 11:12:54.706717   30955 kubeadm.go:322] 
I0523 11:12:54.706838   30955 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0523 11:12:54.706857   30955 kubeadm.go:322] 
I0523 11:12:54.706967   30955 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y3t1xb.trgz9veao3u8w7tz \
I0523 11:12:54.707104   30955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:18fc904dca56514c14203ecbce5deedea59d26e5bbe06dfca9ffb34376f6214d 
I0523 11:12:54.707306   30955 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0523 11:12:54.707442   30955 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0523 11:12:54.707450   30955 cni.go:84] Creating CNI manager for ""
I0523 11:12:54.707479   30955 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0523 11:12:54.707536   30955 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0523 11:12:54.707761   30955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.8/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0523 11:12:54.707846   30955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.8/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=ddac20b4b34a9c8c857fc602203b6ba2679794d3 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2024_05_23T11_12_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0523 11:12:54.712885   30955 ops.go:34] apiserver oom_adj: -16
I0523 11:12:54.794336   30955 kubeadm.go:1073] duration metric: took 86.798541ms to wait for elevateKubeSystemPrivileges.
I0523 11:12:54.794367   30955 kubeadm.go:403] StartCluster complete in 7.737131375s
I0523 11:12:54.794383   30955 settings.go:142] acquiring lock: {Name:mkcf04507109bc56058a2e9e728198965071c725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:54.794723   30955 settings.go:150] Updating kubeconfig:  /Users/yiji/.kube/config
I0523 11:12:54.797381   30955 lock.go:35] WriteFile acquiring /Users/yiji/.kube/config: {Name:mk43ecbcbfaf2f14310b41daf050ab2f55a33264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0523 11:12:54.797939   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.8/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0523 11:12:54.798505   30955 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.8
I0523 11:12:54.798491   30955 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0523 11:12:54.798816   30955 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0523 11:12:54.798826   30955 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0523 11:12:54.798828   30955 addons.go:227] Setting addon storage-provisioner=true in "minikube"
W0523 11:12:54.798831   30955 addons.go:236] addon storage-provisioner should already be in state true
I0523 11:12:54.798830   30955 addons.go:65] Setting metrics-server=true in profile "minikube"
I0523 11:12:54.798845   30955 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0523 11:12:54.798854   30955 addons.go:227] Setting addon metrics-server=true in "minikube"
I0523 11:12:54.799070   30955 host.go:66] Checking if "minikube" exists ...
I0523 11:12:54.799071   30955 host.go:66] Checking if "minikube" exists ...
I0523 11:12:54.799356   30955 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 11:12:54.799451   30955 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 11:12:54.799481   30955 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 11:12:54.833304   30955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.8/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.23.8/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0523 11:12:54.944203   30955 out.go:169] Using image registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
I0523 11:12:54.941502   30955 addons.go:227] Setting addon default-storageclass=true in "minikube"
I0523 11:12:54.948327   30955 out.go:169] Using image registry.k8s.io/metrics-server/metrics-server:v0.6.2
W0523 11:12:54.948329   30955 addons.go:236] addon default-storageclass should already be in state true
I0523 11:12:54.948408   30955 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0523 11:12:54.953424   30955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2708 bytes)
I0523 11:12:54.953440   30955 host.go:66] Checking if "minikube" exists ...
I0523 11:12:54.953578   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:54.953683   30955 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0523 11:12:54.953688   30955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0523 11:12:54.953751   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:54.955218   30955 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 11:12:55.004533   30955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52261 SSHKeyPath:/Users/yiji/.minikube/machines/minikube/id_rsa Username:docker}
I0523 11:12:55.005107   30955 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0523 11:12:55.005117   30955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0523 11:12:55.005230   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 11:12:55.008522   30955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52261 SSHKeyPath:/Users/yiji/.minikube/machines/minikube/id_rsa Username:docker}
I0523 11:12:55.045477   30955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52261 SSHKeyPath:/Users/yiji/.minikube/machines/minikube/id_rsa Username:docker}
I0523 11:12:55.091788   30955 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0523 11:12:55.091799   30955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0523 11:12:55.092342   30955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.8/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0523 11:12:55.100987   30955 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0523 11:12:55.100996   30955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0523 11:12:55.108426   30955 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0523 11:12:55.108437   30955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0523 11:12:55.116015   30955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.8/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0523 11:12:55.127937   30955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.8/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0523 11:12:55.263405   30955 start.go:919] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
I0523 11:12:55.288577   30955 addons.go:457] Verifying addon metrics-server=true in "minikube"
I0523 11:12:55.295345   30955 out.go:97] Enabled addons: storage-provisioner, metrics-server, default-storageclass
I0523 11:12:55.295356   30955 addons.go:492] enable addons completed in 496.873833ms: enabled=[storage-provisioner metrics-server default-storageclass]
I0523 11:12:55.322328   30955 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0523 11:12:55.322358   30955 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.200.200 Port:8443 KubernetesVersion:v1.23.8 ContainerRuntime:docker ControlPlane:true Worker:true}
I0523 11:12:55.326379   30955 out.go:97] Verifying Kubernetes components...
I0523 11:12:55.326673   30955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0523 11:12:55.333203   30955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0523 11:12:55.383872   30955 api_server.go:51] waiting for apiserver process to appear ...
I0523 11:12:55.383944   30955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0523 11:12:55.391145   30955 api_server.go:71] duration metric: took 68.76475ms to wait for apiserver process to appear ...
I0523 11:12:55.391152   30955 api_server.go:87] waiting for apiserver healthz status ...
I0523 11:12:55.391361   30955 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:8443/healthz ...
I0523 11:12:55.396554   30955 api_server.go:278] https://127.0.0.1:8443/healthz returned 200:
ok
I0523 11:12:55.397595   30955 api_server.go:140] control plane version: v1.23.8
I0523 11:12:55.397602   30955 api_server.go:130] duration metric: took 6.447459ms to wait for apiserver health ...
I0523 11:12:55.397606   30955 system_pods.go:43] waiting for kube-system pods to appear ...
I0523 11:12:55.401474   30955 system_pods.go:59] 5 kube-system pods found
I0523 11:12:55.401483   30955 system_pods.go:61] "etcd-minikube" [424911d2-07ba-4ac0-a5a9-66cb3a462d66] Pending
I0523 11:12:55.401486   30955 system_pods.go:61] "kube-apiserver-minikube" [6c013544-5ebf-46d7-b3f3-92d64780f24c] Pending
I0523 11:12:55.401489   30955 system_pods.go:61] "kube-controller-manager-minikube" [22e3de1c-c5ac-4b61-b58c-5a5575c01183] Pending
I0523 11:12:55.401491   30955 system_pods.go:61] "kube-scheduler-minikube" [2e94f15e-b339-4df9-aaba-8ac2e2acff67] Pending
I0523 11:12:55.401493   30955 system_pods.go:61] "storage-provisioner" [2956869c-bbcc-4edf-a934-1e4c970e1c52] Pending
I0523 11:12:55.401495   30955 system_pods.go:74] duration metric: took 3.8875ms to wait for pod list to return data ...
I0523 11:12:55.401498   30955 kubeadm.go:578] duration metric: took 79.12025ms to wait for : map[apiserver:true system_pods:true] ...
I0523 11:12:55.401508   30955 node_conditions.go:102] verifying NodePressure condition ...
I0523 11:12:55.403091   30955 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
I0523 11:12:55.403100   30955 node_conditions.go:123] node cpu capacity is 8
I0523 11:12:55.403111   30955 node_conditions.go:105] duration metric: took 1.60125ms to run NodePressure ...
I0523 11:12:55.403117   30955 start.go:228] waiting for startup goroutines ...
I0523 11:12:55.403120   30955 start.go:233] waiting for cluster config update ...
I0523 11:12:55.403129   30955 start.go:240] writing updated cluster config ...
I0523 11:12:55.403554   30955 ssh_runner.go:195] Run: rm -f paused
I0523 11:12:55.560348   30955 start.go:555] kubectl: 1.24.3, cluster: 1.23.8 (minor skew: 1)
I0523 11:12:55.565208   30955 out.go:97] Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

* 
* ==> Docker <==
* -- Logs begin at Thu 2024-05-23 03:12:33 UTC, end at Thu 2024-05-23 03:19:28 UTC. --
May 23 03:12:35 minikube dockerd[382]: time="2024-05-23T03:12:35.710617259Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
May 23 03:12:35 minikube dockerd[382]: time="2024-05-23T03:12:35.712109301Z" level=info msg="Loading containers: start."
May 23 03:12:35 minikube dockerd[382]: time="2024-05-23T03:12:35.751636342Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 23 03:12:35 minikube dockerd[382]: time="2024-05-23T03:12:35.767204342Z" level=info msg="Loading containers: done."
May 23 03:12:35 minikube dockerd[382]: time="2024-05-23T03:12:35.772656217Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
May 23 03:12:35 minikube dockerd[382]: time="2024-05-23T03:12:35.772695092Z" level=info msg="Daemon has completed initialization"
May 23 03:12:35 minikube systemd[1]: Started Docker Application Container Engine.
May 23 03:12:35 minikube dockerd[382]: time="2024-05-23T03:12:35.783526509Z" level=info msg="API listen on [::]:2376"
May 23 03:12:35 minikube dockerd[382]: time="2024-05-23T03:12:35.785611967Z" level=info msg="API listen on /var/run/docker.sock"
May 23 03:12:36 minikube systemd[1]: Stopping Docker Application Container Engine...
May 23 03:12:36 minikube dockerd[382]: time="2024-05-23T03:12:36.673191760Z" level=info msg="Processing signal 'terminated'"
May 23 03:12:36 minikube dockerd[382]: time="2024-05-23T03:12:36.673571385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
May 23 03:12:36 minikube dockerd[382]: time="2024-05-23T03:12:36.673636426Z" level=info msg="Daemon shutdown complete"
May 23 03:12:36 minikube systemd[1]: docker.service: Succeeded.
May 23 03:12:36 minikube systemd[1]: Stopped Docker Application Container Engine.
May 23 03:12:36 minikube systemd[1]: Starting Docker Application Container Engine...
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.735593385Z" level=info msg="Starting up"
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.736416260Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.736428426Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.736438676Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.736443843Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.737243093Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.737254135Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.737261135Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.737264885Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.740517010Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.742177635Z" level=info msg="Loading containers: start."
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.771717010Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.791601468Z" level=info msg="Loading containers: done."
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.795918635Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.795948760Z" level=info msg="Daemon has completed initialization"
May 23 03:12:36 minikube systemd[1]: Started Docker Application Container Engine.
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.819559426Z" level=info msg="API listen on [::]:2376"
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.821659301Z" level=info msg="API listen on /var/run/docker.sock"
May 23 03:12:36 minikube systemd[1]: Stopping Docker Application Container Engine...
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.863936926Z" level=info msg="Processing signal 'terminated'"
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.864332718Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
May 23 03:12:36 minikube dockerd[626]: time="2024-05-23T03:12:36.864424760Z" level=info msg="Daemon shutdown complete"
May 23 03:12:36 minikube systemd[1]: docker.service: Succeeded.
May 23 03:12:36 minikube systemd[1]: Stopped Docker Application Container Engine.
May 23 03:12:36 minikube systemd[1]: Starting Docker Application Container Engine...
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.899372343Z" level=info msg="Starting up"
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.900877676Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.900890010Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.900906260Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.900911843Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.901650676Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.901665051Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.901673968Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.901678218Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.906382093Z" level=info msg="Loading containers: start."
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.934132968Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.944731343Z" level=info msg="Loading containers: done."
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.952415676Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.952447468Z" level=info msg="Daemon has completed initialization"
May 23 03:12:36 minikube systemd[1]: Started Docker Application Container Engine.
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.962555635Z" level=info msg="API listen on [::]:2376"
May 23 03:12:36 minikube dockerd[816]: time="2024-05-23T03:12:36.965334385Z" level=info msg="API listen on /var/run/docker.sock"
May 23 03:13:09 minikube dockerd[816]: time="2024-05-23T03:13:09.492803136Z" level=warning msg="reference for unknown type: " digest="sha256:f977ad859fb500c1302d9c3428c6271db031bb7431e7076213b676b345a88dc2" remote="registry.k8s.io/metrics-server/metrics-server@sha256:f977ad859fb500c1302d9c3428c6271db031bb7431e7076213b676b345a88dc2"
May 23 03:13:38 minikube dockerd[816]: time="2024-05-23T03:13:38.001368344Z" level=info msg="ignoring event" container=30bd5ce80e485eee447027b5d549eef7423bbfc1c7bfecf41dec2af9a4ca3dcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

* 
* ==> container status <==
* CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID
0df689185c0d7       66749159455b3                                                                                                           5 minutes ago       Running             storage-provisioner       1                   c5f399b089e1a
3bd17ddfad217       registry.k8s.io/metrics-server/metrics-server@sha256:f977ad859fb500c1302d9c3428c6271db031bb7431e7076213b676b345a88dc2   6 minutes ago       Running             metrics-server            0                   36dbb0708dd25
0c037f1ce8b94       6af7f860a8197                                                                                                           6 minutes ago       Running             coredns                   0                   940ef267459e4
36e60f38796bb       a1c5e956efa93                                                                                                           6 minutes ago       Running             kube-proxy                0                   39f3cd6eeedc9
30bd5ce80e485       66749159455b3                                                                                                           6 minutes ago       Exited              storage-provisioner       0                   c5f399b089e1a
d0fc7ff88fda7       a6622f1ebe639                                                                                                           6 minutes ago       Running             kube-apiserver            0                   6d640896b7960
dd1b503890d4e       915f4d226acb3                                                                                                           6 minutes ago       Running             kube-scheduler            0                   a625d1771c203
b44a2711c72af       ccc7707581070                                                                                                           6 minutes ago       Running             kube-controller-manager   0                   0430b1a744a7e
42dc67e3d2096       1040f7790951c                                                                                                           6 minutes ago       Running             etcd                      0                   04c2dfc72e38b

* 
* ==> coredns [0c037f1ce8b9] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
CoreDNS-1.8.6
linux/arm64, go1.17.1, 13a9191
[INFO] 127.0.0.1:41552 - 50040 "HINFO IN 5536122734382375227.4695148845966212508. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.133835666s

* 
* ==> describe nodes <==
* Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=ddac20b4b34a9c8c857fc602203b6ba2679794d3
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2024_05_23T11_12_54_0700
                    minikube.k8s.io/version=v1.29.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 23 May 2024 03:12:51 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Thu, 23 May 2024 03:19:22 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 23 May 2024 03:18:31 +0000   Thu, 23 May 2024 03:12:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 23 May 2024 03:18:31 +0000   Thu, 23 May 2024 03:12:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 23 May 2024 03:18:31 +0000   Thu, 23 May 2024 03:12:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 23 May 2024 03:18:31 +0000   Thu, 23 May 2024 03:12:54 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.200.200
  Hostname:    minikube
Capacity:
  cpu:                8
  ephemeral-storage:  61202244Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8131448Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  61202244Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8131448Ki
  pods:               110
System Info:
  Machine ID:                 befa1230ef574d39bf2d8fa114f009f8
  System UUID:                befa1230ef574d39bf2d8fa114f009f8
  Boot ID:                    643f5045-856e-44c4-b3f8-99e781c12d1d
  Kernel Version:             6.6.26-linuxkit
  OS Image:                   Ubuntu 20.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://20.10.23
  Kubelet Version:            v1.23.8
  Kube-Proxy Version:         v1.23.8
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-65c54cc984-4v9fr            100m (1%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (2%!)(MISSING)     6m21s
  kube-system                 etcd-minikube                       100m (1%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         6m34s
  kube-system                 kube-apiserver-minikube             250m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m36s
  kube-system                 kube-controller-manager-minikube    200m (2%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m35s
  kube-system                 kube-proxy-thw5z                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m22s
  kube-system                 kube-scheduler-minikube             100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m36s
  kube-system                 metrics-server-7b8544996b-7x42l     100m (1%!)(MISSING)     0 (0%!)(MISSING)      200Mi (2%!)(MISSING)       0 (0%!)(MISSING)         6m21s
  kube-system                 storage-provisioner                 0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m33s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%!)(MISSING)  0 (0%!)(MISSING)
  memory             370Mi (4%!)(MISSING)  170Mi (2%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  Starting                 6m20s                  kube-proxy  
  Normal  NodeHasSufficientMemory  6m39s (x5 over 6m39s)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m39s (x5 over 6m39s)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m39s (x4 over 6m39s)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m39s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeAllocatableEnforced  6m34s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  6m34s                  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m34s                  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m34s                  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeReady                6m34s                  kubelet     Node minikube status is now: NodeReady
  Normal  Starting                 6m34s                  kubelet     Starting kubelet.

* 
* ==> dmesg <==
* [May23 01:57] cacheinfo: Unable to detect cache hierarchy for CPU 0
[  +0.152569] netlink: 'init': attribute type 4 has an invalid length.
[  +0.031761] fakeowner: loading out-of-tree module taints kernel.
[  +0.074415] netlink: 'init': attribute type 22 has an invalid length.
[May23 01:58] systemd[827]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set

* 
* ==> etcd [42dc67e3d209] <==
* {"level":"warn","ts":1716433970.013263,"caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_UNSUPPORTED_ARCH=arm64"}
{"level":"info","ts":"2024-05-23T03:12:50.013Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.200.200:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--initial-advertise-peer-urls=https://192.168.200.200:2380","--initial-cluster=minikube=https://192.168.200.200:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.200.200:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.200.200:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2024-05-23T03:12:50.013Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.200.200:2380"]}
{"level":"info","ts":"2024-05-23T03:12:50.013Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-05-23T03:12:50.014Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.200.200:2379"]}
{"level":"info","ts":"2024-05-23T03:12:50.014Z","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.1","git-sha":"d42e8589e","go-version":"go1.16.2","go-os":"linux","go-arch":"arm64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.200.200:2380"],"listen-peer-urls":["https://192.168.200.200:2380"],"advertise-client-urls":["https://192.168.200.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.200.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.200.200:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2024-05-23T03:12:50.015Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"1.315041ms"}
{"level":"info","ts":"2024-05-23T03:12:50.020Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"f1751f15f702dfc9","cluster-id":"29101622b3a6d747"}
{"level":"info","ts":"2024-05-23T03:12:50.020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 switched to configuration voters=()"}
{"level":"info","ts":"2024-05-23T03:12:50.020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 became follower at term 0"}
{"level":"info","ts":"2024-05-23T03:12:50.020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f1751f15f702dfc9 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2024-05-23T03:12:50.020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 became follower at term 1"}
{"level":"info","ts":"2024-05-23T03:12:50.020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 switched to configuration voters=(17398846914614714313)"}
{"level":"warn","ts":"2024-05-23T03:12:50.021Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2024-05-23T03:12:50.022Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2024-05-23T03:12:50.023Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2024-05-23T03:12:50.024Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"f1751f15f702dfc9","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
{"level":"info","ts":"2024-05-23T03:12:50.024Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f1751f15f702dfc9","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2024-05-23T03:12:50.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 switched to configuration voters=(17398846914614714313)"}
{"level":"info","ts":"2024-05-23T03:12:50.026Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"29101622b3a6d747","local-member-id":"f1751f15f702dfc9","added-peer-id":"f1751f15f702dfc9","added-peer-peer-urls":["https://192.168.200.200:2380"]}
{"level":"info","ts":"2024-05-23T03:12:50.029Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-05-23T03:12:50.030Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"f1751f15f702dfc9","initial-advertise-peer-urls":["https://192.168.200.200:2380"],"listen-peer-urls":["https://192.168.200.200:2380"],"advertise-client-urls":["https://192.168.200.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.200.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-05-23T03:12:50.030Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-05-23T03:12:50.030Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.200.200:2380"}
{"level":"info","ts":"2024-05-23T03:12:50.030Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.200.200:2380"}
{"level":"info","ts":"2024-05-23T03:12:50.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 is starting a new election at term 1"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 became pre-candidate at term 1"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 received MsgPreVoteResp from f1751f15f702dfc9 at term 1"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 became candidate at term 2"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 received MsgVoteResp from f1751f15f702dfc9 at term 2"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1751f15f702dfc9 became leader at term 2"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1751f15f702dfc9 elected leader f1751f15f702dfc9 at term 2"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"f1751f15f702dfc9","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.200.200:2379]}","request-path":"/0/members/f1751f15f702dfc9/attributes","cluster-id":"29101622b3a6d747","publish-timeout":"7s"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"29101622b3a6d747","local-member-id":"f1751f15f702dfc9","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-23T03:12:50.322Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-23T03:12:50.323Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.200.200:2379"}
{"level":"info","ts":"2024-05-23T03:12:50.323Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2024-05-23T03:15:00.705Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.947584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-05-23T03:15:00.706Z","caller":"traceutil/trace.go:171","msg":"trace[786971461] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:588; }","duration":"107.327375ms","start":"2024-05-23T03:15:00.598Z","end":"2024-05-23T03:15:00.706Z","steps":["trace[786971461] 'count revisions from in-memory index tree'  (duration: 106.883666ms)"],"step_count":1}

* 
* ==> kernel <==
*  03:19:29 up  1:21,  0 users,  load average: 3.23, 2.75, 2.71
Linux minikube 6.6.26-linuxkit #1 SMP Sat Apr 27 04:13:19 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"

* 
* ==> kube-apiserver [d0fc7ff88fda] <==
* I0523 03:12:51.569037       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0523 03:12:51.569150       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0523 03:12:51.569153       1 controller.go:85] Starting OpenAPI controller
I0523 03:12:51.569166       1 naming_controller.go:291] Starting NamingConditionController
I0523 03:12:51.569181       1 establishing_controller.go:76] Starting EstablishingController
I0523 03:12:51.569199       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0523 03:12:51.569215       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0523 03:12:51.569220       1 crd_finalizer.go:266] Starting CRDFinalizer
I0523 03:12:51.569309       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0523 03:12:51.569345       1 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0523 03:12:51.570651       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0523 03:12:51.570662       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0523 03:12:51.570683       1 autoregister_controller.go:141] Starting autoregister controller
I0523 03:12:51.570687       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0523 03:12:51.570696       1 controller.go:83] Starting OpenAPI AggregationController
I0523 03:12:51.570997       1 available_controller.go:491] Starting AvailableConditionController
I0523 03:12:51.571008       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0523 03:12:51.571025       1 apf_controller.go:317] Starting API Priority and Fairness config controller
I0523 03:12:51.574403       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0523 03:12:51.574424       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0523 03:12:51.579634       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0523 03:12:51.586298       1 controller.go:611] quota admission added evaluator for: namespaces
I0523 03:12:51.658849       1 shared_informer.go:247] Caches are synced for node_authorizer 
I0523 03:12:51.670400       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I0523 03:12:51.671339       1 cache.go:39] Caches are synced for autoregister controller
I0523 03:12:51.671438       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0523 03:12:51.671622       1 apf_controller.go:322] Running API Priority and Fairness config worker
I0523 03:12:51.671625       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0523 03:12:51.675603       1 shared_informer.go:247] Caches are synced for crd-autoregister 
I0523 03:12:52.569166       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0523 03:12:52.590980       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0523 03:12:52.594384       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0523 03:12:52.594410       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0523 03:12:52.757084       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0523 03:12:52.768058       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0523 03:12:52.790719       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0523 03:12:52.792513       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.200.200]
I0523 03:12:52.792908       1 controller.go:611] quota admission added evaluator for: endpoints
I0523 03:12:52.794472       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0523 03:12:53.723099       1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0523 03:12:54.424144       1 controller.go:611] quota admission added evaluator for: deployments.apps
I0523 03:12:54.429548       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0523 03:12:54.485120       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0523 03:12:54.507592       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0523 03:12:55.286063       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.51.81]
W0523 03:12:56.279804       1 handler_proxy.go:104] no RequestInfo found in the context
E0523 03:12:56.279864       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0523 03:12:56.279878       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0523 03:13:06.913507       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0523 03:13:07.169509       1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0523 03:13:08.019728       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
W0523 03:13:08.482290       1 handler_proxy.go:104] no RequestInfo found in the context
E0523 03:13:08.482556       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0523 03:13:08.482583       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0523 03:13:23.836140       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.51.81:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.51.81:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.51.81:443: connect: connection refused
E0523 03:13:23.836529       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.51.81:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.51.81:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.51.81:443: connect: connection refused
E0523 03:13:23.843039       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.51.81:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.51.81:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.51.81:443: connect: connection refused

* 
* ==> kube-controller-manager [b44a2711c72a] <==
* I0523 03:13:06.460212       1 controllermanager.go:605] Started "csrcleaner"
I0523 03:13:06.460240       1 cleaner.go:82] Starting CSR cleaner controller
I0523 03:13:06.463278       1 shared_informer.go:240] Waiting for caches to sync for resource quota
W0523 03:13:06.468767       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
E0523 03:13:06.473447       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0523 03:13:06.475075       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0523 03:13:06.476609       1 shared_informer.go:247] Caches are synced for cronjob 
I0523 03:13:06.477070       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0523 03:13:06.478583       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
I0523 03:13:06.478613       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
I0523 03:13:06.478652       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
I0523 03:13:06.478665       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
I0523 03:13:06.488786       1 shared_informer.go:247] Caches are synced for TTL after finished 
I0523 03:13:06.496621       1 shared_informer.go:247] Caches are synced for job 
I0523 03:13:06.506631       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I0523 03:13:06.507124       1 shared_informer.go:247] Caches are synced for daemon sets 
I0523 03:13:06.510572       1 shared_informer.go:247] Caches are synced for HPA 
I0523 03:13:06.511177       1 shared_informer.go:247] Caches are synced for GC 
I0523 03:13:06.513570       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I0523 03:13:06.513735       1 shared_informer.go:247] Caches are synced for deployment 
I0523 03:13:06.515568       1 shared_informer.go:247] Caches are synced for ReplicationController 
I0523 03:13:06.529664       1 shared_informer.go:247] Caches are synced for node 
I0523 03:13:06.529686       1 range_allocator.go:173] Starting range CIDR allocator
I0523 03:13:06.529689       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0523 03:13:06.529694       1 shared_informer.go:247] Caches are synced for cidrallocator 
I0523 03:13:06.533587       1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24]
I0523 03:13:06.535616       1 shared_informer.go:247] Caches are synced for taint 
I0523 03:13:06.535726       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0523 03:13:06.535817       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
W0523 03:13:06.535992       1 node_lifecycle_controller.go:1012] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0523 03:13:06.536149       1 event.go:294] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0523 03:13:06.536528       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
I0523 03:13:06.540720       1 shared_informer.go:247] Caches are synced for PV protection 
I0523 03:13:06.542746       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
I0523 03:13:06.557057       1 shared_informer.go:247] Caches are synced for TTL 
I0523 03:13:06.557074       1 shared_informer.go:247] Caches are synced for crt configmap 
I0523 03:13:06.561911       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I0523 03:13:06.563595       1 shared_informer.go:247] Caches are synced for namespace 
I0523 03:13:06.567009       1 shared_informer.go:247] Caches are synced for service account 
I0523 03:13:06.569551       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I0523 03:13:06.635588       1 shared_informer.go:247] Caches are synced for PVC protection 
I0523 03:13:06.645596       1 shared_informer.go:247] Caches are synced for endpoint 
I0523 03:13:06.651610       1 shared_informer.go:247] Caches are synced for persistent volume 
I0523 03:13:06.662258       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I0523 03:13:06.663617       1 shared_informer.go:247] Caches are synced for attach detach 
I0523 03:13:06.663686       1 shared_informer.go:247] Caches are synced for stateful set 
I0523 03:13:06.693604       1 shared_informer.go:247] Caches are synced for ephemeral 
I0523 03:13:06.713608       1 shared_informer.go:247] Caches are synced for expand 
I0523 03:13:06.729417       1 shared_informer.go:247] Caches are synced for disruption 
I0523 03:13:06.729453       1 disruption.go:371] Sending events to api server.
I0523 03:13:06.763481       1 shared_informer.go:247] Caches are synced for resource quota 
I0523 03:13:06.770631       1 shared_informer.go:247] Caches are synced for resource quota 
I0523 03:13:06.918333       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-thw5z"
I0523 03:13:07.171398       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-65c54cc984 to 1"
I0523 03:13:07.174380       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7b8544996b to 1"
I0523 03:13:07.174467       1 shared_informer.go:247] Caches are synced for garbage collector 
I0523 03:13:07.174538       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0523 03:13:07.177119       1 shared_informer.go:247] Caches are synced for garbage collector 
I0523 03:13:07.580543       1 event.go:294] "Event occurred" object="kube-system/metrics-server-7b8544996b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7b8544996b-7x42l"
I0523 03:13:07.580584       1 event.go:294] "Event occurred" object="kube-system/coredns-65c54cc984" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-65c54cc984-4v9fr"

* 
* ==> kube-proxy [36e60f38796b] <==
* time="2024-05-23T03:13:07Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: WARNING: Module ip_vs not found in directory /lib/modules/6.6.26-linuxkit`, error: exit status 1"
I0523 03:13:08.004973       1 node.go:163] Successfully retrieved node IP: 192.168.200.200
I0523 03:13:08.005016       1 server_others.go:138] "Detected node IP" address="192.168.200.200"
I0523 03:13:08.005044       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0523 03:13:08.017075       1 server_others.go:206] "Using iptables Proxier"
I0523 03:13:08.017091       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0523 03:13:08.017095       1 server_others.go:214] "Creating dualStackProxier for iptables"
I0523 03:13:08.017104       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0523 03:13:08.017277       1 server.go:656] "Version info" version="v1.23.8"
I0523 03:13:08.017902       1 config.go:317] "Starting service config controller"
I0523 03:13:08.017935       1 shared_informer.go:240] Waiting for caches to sync for service config
I0523 03:13:08.017957       1 config.go:226] "Starting endpoint slice config controller"
I0523 03:13:08.017967       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0523 03:13:08.118625       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0523 03:13:08.118641       1 shared_informer.go:247] Caches are synced for service config 

* 
* ==> kube-scheduler [dd1b503890d4] <==
* I0523 03:12:50.245639       1 serving.go:348] Generated self-signed cert in-memory
W0523 03:12:51.579239       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0523 03:12:51.579275       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0523 03:12:51.579283       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0523 03:12:51.579288       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0523 03:12:51.585717       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.8"
I0523 03:12:51.587107       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0523 03:12:51.587146       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0523 03:12:51.587240       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0523 03:12:51.587301       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0523 03:12:51.589615       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0523 03:12:51.589668       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0523 03:12:51.589697       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0523 03:12:51.589701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0523 03:12:51.589733       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0523 03:12:51.589767       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0523 03:12:51.589796       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0523 03:12:51.589805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0523 03:12:51.589821       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0523 03:12:51.589824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0523 03:12:51.589852       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0523 03:12:51.589857       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0523 03:12:51.589631       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0523 03:12:51.589876       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0523 03:12:51.589879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0523 03:12:51.589882       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0523 03:12:51.589627       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0523 03:12:51.589930       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0523 03:12:51.589986       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0523 03:12:51.589999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0523 03:12:51.590004       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0523 03:12:51.590012       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0523 03:12:51.590048       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0523 03:12:51.590058       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0523 03:12:51.590208       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0523 03:12:51.590229       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0523 03:12:51.590291       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0523 03:12:51.590307       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0523 03:12:51.590317       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0523 03:12:51.590321       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0523 03:12:52.472952       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0523 03:12:52.473061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0523 03:12:52.609728       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0523 03:12:52.609787       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0523 03:12:52.612768       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0523 03:12:52.612811       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0523 03:12:52.640249       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0523 03:12:52.640268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0523 03:12:52.676209       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0523 03:12:52.676228       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0523 03:12:52.700096       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0523 03:12:52.700115       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0523 03:12:52.818535       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0523 03:12:52.839642       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0523 03:12:53.043393       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0523 03:12:54.587788       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

* 
* ==> kubelet <==
* -- Logs begin at Thu 2024-05-23 03:12:33 UTC, end at Thu 2024-05-23 03:19:29 UTC. --
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.617734    2787 topology_manager.go:200] "Topology Admit Handler"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.618006    2787 topology_manager.go:200] "Topology Admit Handler"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.618041    2787 topology_manager.go:200] "Topology Admit Handler"
May 23 03:12:54 minikube kubelet[2787]: E0523 03:12:54.621023    2787 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
May 23 03:12:54 minikube kubelet[2787]: E0523 03:12:54.621950    2787 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804841    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/826a0b16e23b188741f93c8dd23cb4c1-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"826a0b16e23b188741f93c8dd23cb4c1\") " pod="kube-system/kube-controller-manager-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804876    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd1fddb963b1e849b0a55584b509acd9-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"cd1fddb963b1e849b0a55584b509acd9\") " pod="kube-system/kube-apiserver-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804889    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd1fddb963b1e849b0a55584b509acd9-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cd1fddb963b1e849b0a55584b509acd9\") " pod="kube-system/kube-apiserver-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804904    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/ff15fc94621370fafd0ba9bdb7d2bdd4-etcd-certs\") pod \"etcd-minikube\" (UID: \"ff15fc94621370fafd0ba9bdb7d2bdd4\") " pod="kube-system/etcd-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804915    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ff15fc94621370fafd0ba9bdb7d2bdd4-etcd-data\") pod \"etcd-minikube\" (UID: \"ff15fc94621370fafd0ba9bdb7d2bdd4\") " pod="kube-system/etcd-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804924    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd1fddb963b1e849b0a55584b509acd9-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"cd1fddb963b1e849b0a55584b509acd9\") " pod="kube-system/kube-apiserver-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804935    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/826a0b16e23b188741f93c8dd23cb4c1-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"826a0b16e23b188741f93c8dd23cb4c1\") " pod="kube-system/kube-controller-manager-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804944    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cdf3a356ac601fe012036c0f864f2ee8-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"cdf3a356ac601fe012036c0f864f2ee8\") " pod="kube-system/kube-scheduler-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804958    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd1fddb963b1e849b0a55584b509acd9-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cd1fddb963b1e849b0a55584b509acd9\") " pod="kube-system/kube-apiserver-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804976    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/826a0b16e23b188741f93c8dd23cb4c1-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"826a0b16e23b188741f93c8dd23cb4c1\") " pod="kube-system/kube-controller-manager-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.804994    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/826a0b16e23b188741f93c8dd23cb4c1-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"826a0b16e23b188741f93c8dd23cb4c1\") " pod="kube-system/kube-controller-manager-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.805007    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/826a0b16e23b188741f93c8dd23cb4c1-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"826a0b16e23b188741f93c8dd23cb4c1\") " pod="kube-system/kube-controller-manager-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.805021    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/826a0b16e23b188741f93c8dd23cb4c1-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"826a0b16e23b188741f93c8dd23cb4c1\") " pod="kube-system/kube-controller-manager-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.805037    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd1fddb963b1e849b0a55584b509acd9-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cd1fddb963b1e849b0a55584b509acd9\") " pod="kube-system/kube-apiserver-minikube"
May 23 03:12:54 minikube kubelet[2787]: I0523 03:12:54.805056    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/826a0b16e23b188741f93c8dd23cb4c1-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"826a0b16e23b188741f93c8dd23cb4c1\") " pod="kube-system/kube-controller-manager-minikube"
May 23 03:12:55 minikube kubelet[2787]: E0523 03:12:55.100601    2787 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
May 23 03:12:55 minikube kubelet[2787]: I0523 03:12:55.496639    2787 apiserver.go:52] "Watching apiserver"
May 23 03:12:55 minikube kubelet[2787]: I0523 03:12:55.715214    2787 reconciler.go:157] "Reconciler: start to sync state"
May 23 03:12:56 minikube kubelet[2787]: E0523 03:12:56.121560    2787 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
May 23 03:12:56 minikube kubelet[2787]: E0523 03:12:56.303934    2787 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
May 23 03:12:56 minikube kubelet[2787]: E0523 03:12:56.508102    2787 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
May 23 03:12:56 minikube kubelet[2787]: I0523 03:12:56.700075    2787 request.go:665] Waited for 1.166965792s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
May 23 03:12:56 minikube kubelet[2787]: E0523 03:12:56.726958    2787 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
May 23 03:13:06 minikube kubelet[2787]: I0523 03:13:06.558444    2787 kuberuntime_manager.go:1105] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
May 23 03:13:06 minikube kubelet[2787]: I0523 03:13:06.561014    2787 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
May 23 03:13:06 minikube kubelet[2787]: I0523 03:13:06.561371    2787 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
May 23 03:13:06 minikube kubelet[2787]: I0523 03:13:06.567068    2787 topology_manager.go:200] "Topology Admit Handler"
May 23 03:13:06 minikube kubelet[2787]: I0523 03:13:06.759020    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxk9k\" (UniqueName: \"kubernetes.io/projected/2956869c-bbcc-4edf-a934-1e4c970e1c52-kube-api-access-vxk9k\") pod \"storage-provisioner\" (UID: \"2956869c-bbcc-4edf-a934-1e4c970e1c52\") " pod="kube-system/storage-provisioner"
May 23 03:13:06 minikube kubelet[2787]: I0523 03:13:06.759090    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2956869c-bbcc-4edf-a934-1e4c970e1c52-tmp\") pod \"storage-provisioner\" (UID: \"2956869c-bbcc-4edf-a934-1e4c970e1c52\") " pod="kube-system/storage-provisioner"
May 23 03:13:06 minikube kubelet[2787]: E0523 03:13:06.876465    2787 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 23 03:13:06 minikube kubelet[2787]: E0523 03:13:06.876525    2787 projected.go:199] Error preparing data for projected volume kube-api-access-vxk9k for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
May 23 03:13:06 minikube kubelet[2787]: E0523 03:13:06.876643    2787 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2956869c-bbcc-4edf-a934-1e4c970e1c52-kube-api-access-vxk9k podName:2956869c-bbcc-4edf-a934-1e4c970e1c52 nodeName:}" failed. No retries permitted until 2024-05-23 03:13:07.376589885 +0000 UTC m=+12.961829549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vxk9k" (UniqueName: "kubernetes.io/projected/2956869c-bbcc-4edf-a934-1e4c970e1c52-kube-api-access-vxk9k") pod "storage-provisioner" (UID: "2956869c-bbcc-4edf-a934-1e4c970e1c52") : configmap "kube-root-ca.crt" not found
May 23 03:13:06 minikube kubelet[2787]: I0523 03:13:06.925678    2787 topology_manager.go:200] "Topology Admit Handler"
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.066812    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/374e89c7-158b-4a0c-af48-b296431dc66d-kube-proxy\") pod \"kube-proxy-thw5z\" (UID: \"374e89c7-158b-4a0c-af48-b296431dc66d\") " pod="kube-system/kube-proxy-thw5z"
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.066857    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/374e89c7-158b-4a0c-af48-b296431dc66d-xtables-lock\") pod \"kube-proxy-thw5z\" (UID: \"374e89c7-158b-4a0c-af48-b296431dc66d\") " pod="kube-system/kube-proxy-thw5z"
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.066888    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/374e89c7-158b-4a0c-af48-b296431dc66d-lib-modules\") pod \"kube-proxy-thw5z\" (UID: \"374e89c7-158b-4a0c-af48-b296431dc66d\") " pod="kube-system/kube-proxy-thw5z"
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.066901    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcmh\" (UniqueName: \"kubernetes.io/projected/374e89c7-158b-4a0c-af48-b296431dc66d-kube-api-access-mlcmh\") pod \"kube-proxy-thw5z\" (UID: \"374e89c7-158b-4a0c-af48-b296431dc66d\") " pod="kube-system/kube-proxy-thw5z"
May 23 03:13:07 minikube kubelet[2787]: E0523 03:13:07.177125    2787 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 23 03:13:07 minikube kubelet[2787]: E0523 03:13:07.177154    2787 projected.go:199] Error preparing data for projected volume kube-api-access-mlcmh for pod kube-system/kube-proxy-thw5z: configmap "kube-root-ca.crt" not found
May 23 03:13:07 minikube kubelet[2787]: E0523 03:13:07.177206    2787 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/374e89c7-158b-4a0c-af48-b296431dc66d-kube-api-access-mlcmh podName:374e89c7-158b-4a0c-af48-b296431dc66d nodeName:}" failed. No retries permitted until 2024-05-23 03:13:07.67718876 +0000 UTC m=+13.262428257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mlcmh" (UniqueName: "kubernetes.io/projected/374e89c7-158b-4a0c-af48-b296431dc66d-kube-api-access-mlcmh") pod "kube-proxy-thw5z" (UID: "374e89c7-158b-4a0c-af48-b296431dc66d") : configmap "kube-root-ca.crt" not found
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.588826    2787 topology_manager.go:200] "Topology Admit Handler"
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.588953    2787 topology_manager.go:200] "Topology Admit Handler"
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.775863    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a769bb81-cc31-48dd-9f70-bb39a7767b83-config-volume\") pod \"coredns-65c54cc984-4v9fr\" (UID: \"a769bb81-cc31-48dd-9f70-bb39a7767b83\") " pod="kube-system/coredns-65c54cc984-4v9fr"
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.775938    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c75e2a4f-7faf-48b6-a1ac-89f00767053f-tmp-dir\") pod \"metrics-server-7b8544996b-7x42l\" (UID: \"c75e2a4f-7faf-48b6-a1ac-89f00767053f\") " pod="kube-system/metrics-server-7b8544996b-7x42l"
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.776005    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx589\" (UniqueName: \"kubernetes.io/projected/c75e2a4f-7faf-48b6-a1ac-89f00767053f-kube-api-access-jx589\") pod \"metrics-server-7b8544996b-7x42l\" (UID: \"c75e2a4f-7faf-48b6-a1ac-89f00767053f\") " pod="kube-system/metrics-server-7b8544996b-7x42l"
May 23 03:13:07 minikube kubelet[2787]: I0523 03:13:07.776106    2787 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fptq\" (UniqueName: \"kubernetes.io/projected/a769bb81-cc31-48dd-9f70-bb39a7767b83-kube-api-access-4fptq\") pod \"coredns-65c54cc984-4v9fr\" (UID: \"a769bb81-cc31-48dd-9f70-bb39a7767b83\") " pod="kube-system/coredns-65c54cc984-4v9fr"
May 23 03:13:08 minikube kubelet[2787]: I0523 03:13:08.654089    2787 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="36dbb0708dd25a786b8c1deb527a1d5623614f77f9d3dba89c1787d8ac3042e2"
May 23 03:13:08 minikube kubelet[2787]: I0523 03:13:08.654257    2787 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/metrics-server-7b8544996b-7x42l through plugin: invalid network status for"
May 23 03:13:08 minikube kubelet[2787]: I0523 03:13:08.941602    2787 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-65c54cc984-4v9fr through plugin: invalid network status for"
May 23 03:13:09 minikube kubelet[2787]: I0523 03:13:09.672239    2787 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/metrics-server-7b8544996b-7x42l through plugin: invalid network status for"
May 23 03:13:09 minikube kubelet[2787]: I0523 03:13:09.676224    2787 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-65c54cc984-4v9fr through plugin: invalid network status for"
May 23 03:13:22 minikube kubelet[2787]: I0523 03:13:22.792718    2787 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/metrics-server-7b8544996b-7x42l through plugin: invalid network status for"
May 23 03:13:39 minikube kubelet[2787]: I0523 03:13:39.007338    2787 scope.go:110] "RemoveContainer" containerID="30bd5ce80e485eee447027b5d549eef7423bbfc1c7bfecf41dec2af9a4ca3dcc"
May 23 03:17:54 minikube kubelet[2787]: W0523 03:17:54.540513    2787 sysinfo.go:203] Nodes topology is not available, providing CPU topology
May 23 03:17:54 minikube kubelet[2787]: W0523 03:17:54.541650    2787 machine.go:65] Cannot read vendor id correctly, set empty.

* 
* ==> storage-provisioner [0df689185c0d] <==
* I0523 03:13:39.080501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0523 03:13:39.092527       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0523 03:13:39.092600       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0523 03:13:39.108634       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0523 03:13:39.108737       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_239db83b-0f2d-48c3-80a1-fbfad4b44929!
I0523 03:13:39.108792       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7119a13-3e7b-4b10-854f-14204ff2253a", APIVersion:"v1", ResourceVersion:"530", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_239db83b-0f2d-48c3-80a1-fbfad4b44929 became leader
I0523 03:13:39.209900       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_239db83b-0f2d-48c3-80a1-fbfad4b44929!

* 
* ==> storage-provisioner [30bd5ce80e48] <==
* I0523 03:13:07.963099       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0523 03:13:37.973307       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout




Operating system version used

macOS 13.1, Apple M1 Pro

yiji@yiji-m1 minimesh-linux-arm64-1.28-beta1 % minikube version
minikube version: v1.29.0
commit: ddac20b
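
For reference, a commonly suggested workaround for this class of `GUEST_IMAGE_LOAD` / "blob not found" cache error is to delete the (possibly corrupt) cached tarball and retry, or to bypass minikube's image cache by loading a `docker save` archive directly. This is an untested sketch against this setup; the archive name `mysql_8.0.34.tar` is an arbitrary choice:

```shell
# Assumption: the cached image file under ~/.minikube is stale or corrupt.
# Remove it so `minikube image load` re-creates it from the local daemon.
rm -f "$HOME/.minikube/cache/images/arm64/mysql_8.0.34"
minikube image load mysql:8.0.34

# Alternative: skip the cache path and load a tar archive directly.
docker save mysql:8.0.34 -o mysql_8.0.34.tar   # export from the local Docker daemon
minikube image load mysql_8.0.34.tar           # load the archive into the cluster
```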

@zonghaishang added the l/zh-CN (Issues in or relating to Chinese) label on May 23, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale) on Aug 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label on Sep 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Oct 20, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
