
downloading error when i use 'minikube start' #17009

Closed
PetterZhukov opened this issue Aug 7, 2023 · 4 comments
Labels
l/zh-CN Issues in or relating to Chinese lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@PetterZhukov

Command required to reproduce the issue
minikube start --image-mirror-country=cn --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
--driver=docker
Full output of the failed command


😄 minikube v1.20.0 on Microsoft Windows 10 Enterprise 10.0.22621 Build 22621
✨ Using the docker driver based on user configuration
✅ Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=8, Memory=7786MB) ... E0807 11:37:15.373886 3152 kic.go:261] icacls failed applying permissions - err - [%!s()], output - [processed file: C:\Users\zhukefu1.minikube\machines\minikube\id_rsa
Successfully processed 1 files; Failed processing 0 files]

🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...

❌ Exiting due to K8S_INSTALL_FAILED: updating control plane: downloading binaries: downloading kubectl: download failed: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl.sha256 Dst:C:\Users\zhukefu1.minikube\cache\linux\v1.20.2/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x2f0c050 0x2f0c050 0x2f0c050 0x2f0c050 0x2f0c050 0x2f0c050 0x2f0c050] Decompressors:map[bz2:0x2f0c050 gz:0x2f0c050 tar.bz2:0x2f0c050 tar.gz:0x2f0c050 tar.xz:0x2f0c050 tar.zst:0x2f0c050 tbz2:0x2f0c050 tgz:0x2f0c050 txz:0x2f0c050 tzst:0x2f0c050 xz:0x2f0c050 zip:0x2f0c050 zst:0x2f0c050] Getters:map[file:0xc001018410 http:0xc000088ec0 https:0xc000089220] Dir:false ProgressListener:0x2eb7ea0 Options:[0x16d7a60]}: invalid checksum: Error downloading checksum file: bad response code: 404
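The error above says the download of kubectl's checksum file returned a 404 from the Aliyun mirror. A minimal sketch (assuming `curl` is available and the host is reachable) to confirm whether the mirror serves the `.sha256` file minikube asks for:

```shell
# Rebuild the checksum URL that minikube tried to fetch; per the error
# above, the mirror hosts the binary but the .sha256 file 404s.
BASE="https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release"
VER="v1.20.2"
URL="$BASE/$VER/bin/linux/amd64/kubectl.sha256"
# -s silent, -I headers only: the HTTP status line shows whether the
# checksum file exists on the mirror
curl -sI "$URL" | head -n1
```

If this prints a 404 status, the failure is on the mirror side rather than in the local minikube installation.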

Output of the minikube logs command


-- Logs begin at Mon 2023-08-07 03:37:14 UTC, end at Mon 2023-08-07 03:41:06 UTC. --
Aug 07 03:37:14 minikube systemd[1]: Starting Docker Application Container Engine...
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.478779397Z" level=info msg="Starting up"
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.480222360Z" level=info msg="parsed scheme: "unix"" module=grpc
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.480236219Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.480249959Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.480256664Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.481429817Z" level=info msg="parsed scheme: "unix"" module=grpc
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.481443288Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.481454885Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.481460621Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.549416812Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.549443014Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.549446870Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.549449895Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.549452745Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.549455645Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.549592910Z" level=info msg="Loading containers: start."
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.600383859Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.628674672Z" level=info msg="Loading containers: done."
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.677268859Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.677534342Z" level=info msg="Daemon has completed initialization"
Aug 07 03:37:14 minikube systemd[1]: Started Docker Application Container Engine.
Aug 07 03:37:14 minikube dockerd[220]: time="2023-08-07T03:37:14.710546117Z" level=info msg="API listen on /run/docker.sock"
Aug 07 03:37:17 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Aug 07 03:37:17 minikube systemd[1]: Stopping Docker Application Container Engine...
Aug 07 03:37:17 minikube dockerd[220]: time="2023-08-07T03:37:17.500749022Z" level=info msg="Processing signal 'terminated'"
Aug 07 03:37:17 minikube dockerd[220]: time="2023-08-07T03:37:17.501610435Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Aug 07 03:37:17 minikube dockerd[220]: time="2023-08-07T03:37:17.501794000Z" level=info msg="Daemon shutdown complete"
Aug 07 03:37:17 minikube systemd[1]: docker.service: Succeeded.
Aug 07 03:37:17 minikube systemd[1]: Stopped Docker Application Container Engine.
Aug 07 03:37:17 minikube systemd[1]: Starting Docker Application Container Engine...
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.540737243Z" level=info msg="Starting up"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.542266895Z" level=info msg="parsed scheme: "unix"" module=grpc
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.542288209Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.542301186Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.542308023Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.543398253Z" level=info msg="parsed scheme: "unix"" module=grpc
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.543420051Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.543430240Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.543436227Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.567288281Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.571553606Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.571574919Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.571578996Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.571582077Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.571584917Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.571587774Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.571718593Z" level=info msg="Loading containers: start."
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.624756381Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.649969893Z" level=info msg="Loading containers: done."
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.681093412Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.681151877Z" level=info msg="Daemon has completed initialization"
Aug 07 03:37:17 minikube systemd[1]: Started Docker Application Container Engine.
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.702813528Z" level=info msg="API listen on [::]:2376"
Aug 07 03:37:17 minikube dockerd[482]: time="2023-08-07T03:37:17.705345234Z" level=info msg="API listen on /var/run/docker.sock"

==> container status <==
time="2023-08-07T03:41:08Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

==> describe nodes <==
E0807 11:41:08.601634 3084 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
sudo: /var/lib/minikube/binaries/v1.20.2/kubectl: command not found
output: "\n** stderr ** \nsudo: /var/lib/minikube/binaries/v1.20.2/kubectl: command not found\n\n** /stderr **"
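The `kubectl: command not found` errors here follow directly from the failed download: the binary was never cached, so it was never copied into the node. One possible workaround (a sketch only; it assumes minikube's default cache layout of `~/.minikube/cache/linux/<version>/` and that the official release host is reachable) is to pre-seed the cache so minikube skips the failing mirror fetch:

```shell
# Hypothetical workaround: manually place kubectl into minikube's
# download cache so the mirror download is never attempted.
VER="v1.20.2"
CACHE="$HOME/.minikube/cache/linux/$VER"
mkdir -p "$CACHE"
# Fetch the binary from the official Kubernetes release host
# (dl.k8s.io); -f fails on HTTP errors, -L follows redirects
curl -fLo "$CACHE/kubectl" "https://dl.k8s.io/release/$VER/bin/linux/amd64/kubectl"
```

After seeding the cache, re-running `minikube start` should find the binary locally.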

==> dmesg <==
[ +0.015074] PCI: System does not support PCI
[ +0.241594] kvm: already loaded the other module
[ +1.325957] FS-Cache: Duplicate cookie detected
[ +0.000673] FS-Cache: O-cookie c=00000004 [p=00000002 fl=222 nc=0 na=1]
[ +0.000672] FS-Cache: O-cookie d=000000004dd76619{9P.session} n=000000001f8ccc5e
[ +0.000558] FS-Cache: O-key=[10] '34323934393337343536'
[ +0.000344] FS-Cache: N-cookie c=00000005 [p=00000002 fl=2 nc=0 na=1]
[ +0.000481] FS-Cache: N-cookie d=000000004dd76619{9P.session} n=00000000d0b8cbc2
[ +0.000570] FS-Cache: N-key=[10] '34323934393337343536'
[ +1.232994] 9pnet_virtio: no channels available for device drvfsa
[ +0.000595] WSL (1) WARNING: mount: waiting for virtio device drvfsa
[ +0.149876] WSL (1) ERROR: ConfigApplyWindowsLibPath:2431: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000004] failed 2
[ +0.008704] 9pnet_virtio: no channels available for device drvfsa
[ +0.000708] WSL (1) WARNING: mount: waiting for virtio device drvfsa
[ +0.113986] WSL (1) WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +0.144326] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.000825] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.000682] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001658] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2
[ +0.133449] Exception:
[ +0.000005] Operation canceled @p9io.cpp:258 (AcceptAsync)

[ +0.030103] blk_update_request: I/O error, dev sdc, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
[ +0.006681] Buffer I/O error on dev sdc, logical block 134184960, lost sync page write
[ +0.000579] JBD2: Error -5 detected when updating journal superblock for sdc-8.
[ +0.000438] Aborting journal on device sdc-8.
[ +0.000266] Buffer I/O error on dev sdc, logical block 134184960, lost sync page write
[ +0.000463] JBD2: Error -5 detected when updating journal superblock for sdc-8.
[ +0.000379] EXT4-fs error (device sdc): ext4_put_super:1188: comm Xwayland: Couldn't clean up the journal
[ +0.000601] EXT4-fs (sdc): Remounting filesystem read-only
[ +1.526551] FS-Cache: Duplicate cookie detected
[ +0.000634] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
[ +0.000487] FS-Cache: O-cookie d=000000004dd76619{9P.session} n=00000000e8ed0057
[ +0.000335] FS-Cache: O-key=[10] '34323934393337373932'
[ +0.000192] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
[ +0.000348] FS-Cache: N-cookie d=000000004dd76619{9P.session} n=00000000b59b150c
[ +0.000269] FS-Cache: N-key=[10] '34323934393337373932'
[ +0.016451] WSL (1) ERROR: ConfigApplyWindowsLibPath:2431: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000004] failed 2
[ +0.024366] WSL (1) WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +0.078342] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001217] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.000760] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.000963] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2
[ +1.152216] 9pnet_virtio: no channels available for device drvfsa
[ +0.000708] WSL (1) WARNING: mount: waiting for virtio device drvfsa
[ +0.173810] WSL (2) ERROR: UtilCreateProcessAndWait:662: /bin/mount failed with 2
[ +0.001157] WSL (1) ERROR: UtilCreateProcessAndWait:684: /bin/mount failed with status 0xff00

[ +0.000934] WSL (1) ERROR: ConfigMountFsTab:2483: Processing fstab with mount -a failed.
[ +0.008665] WSL (1) ERROR: ConfigApplyWindowsLibPath:2431: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000005] failed 2
[ +0.012005] 9pnet_virtio: no channels available for device drvfsa
[ +0.000632] WSL (1) WARNING: mount: waiting for virtio device drvfsa
[ +0.089405] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.000655] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.000896] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ +0.001073] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2
[ +0.036561] WSL (1) WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?

==> kernel <==
03:41:08 up 24 min, 0 users, load average: 0.07, 0.07, 0.02
Linux minikube 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"

==> kubelet <==
-- Logs begin at Mon 2023-08-07 03:37:14 UTC, end at Mon 2023-08-07 03:41:08 UTC. --
-- No entries --

❗ unable to fetch logs for: describe nodes

Operating system version
Windows11 22H2

@PetterZhukov PetterZhukov added the l/zh-CN Issues in or relating to Chinese label Aug 7, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 25, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 24, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Mar 26, 2024