E0629 Error caching images for kubeadm fetching remote image UNAUTHORIZED Not Authorized. #4643

Closed
flstaats opened this issue Jun 29, 2019 · 9 comments
Labels
good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

flstaats commented Jun 29, 2019

Getting an error message on minikube start that caching images for kubeadm is failing. It is not clear from the logs or the documentation how to resolve this. I am on an open internet connection in the United States. This is a fresh install of minikube, and the started k8s cluster seems to run correctly even with the caching issue. Using the -v 7 log level did not appear to provide any additional information about the caching.

E0629 07:57:04.266903 11077 start.go:403] Error caching images: Caching images for kubeadm: caching images: caching image /home/flstaats/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13: fetching remote image: UNAUTHORIZED: Not Authorized.

Verified that no image files are actually cached after the command is run:

$ tree -a .minikube/cache
.minikube/cache
├── images
│   ├── gcr.io
│   │   └── k8s-minikube
│   └── k8s.gcr.io
├── iso
│   └── minikube-v1.2.0.iso
└── v1.15.0
    ├── kubeadm
    └── kubelet

The exact command to reproduce the issue:

$ minikube start # Configured to use the kvm2 driver by default

The full output of the command that failed:

$ minikube start
😄  minikube v1.2.0 on linux (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing kvm2 VM for "minikube" ...
⌛  Waiting for SSH access ...
🐳  Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
E0629 08:14:38.107006   12154 start.go:403] Error caching images:  Caching images for kubeadm: caching images: caching image /home/flstaats/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: fetching remote image: UNAUTHORIZED: Not Authorized.
❌  Unable to load cached images: loading cached images: loading image /home/flstaats/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.15.0: stat /home/flstaats/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.15.0: no such file or directory
🔄  Relaunching Kubernetes v1.15.0 using kubeadm ... 
⌛  Verifying: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

The output of the minikube logs command:

minikube logs
==> coredns <==
.:53
2019-06-29T14:57:26.304Z [INFO] CoreDNS-1.3.1
2019-06-29T14:57:26.304Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-06-29T14:57:26.304Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843

==> dmesg <==
[Jun29 14:56] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.071035] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ +11.005946] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ +0.014667] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ +0.014360] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ +0.078431] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.174281] systemd-fstab-generator[1080]: Ignoring "noauto" for root device
[ +0.007968] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000003] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.398926] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.069241] vboxguest: loading out-of-tree module taints kernel.
[ +0.002572] vboxguest: PCI device not found, probably running on physical hardware.
[Jun29 14:57] systemd-fstab-generator[1889]: Ignoring "noauto" for root device
[ +2.587348] systemd-fstab-generator[2183]: Ignoring "noauto" for root device
[ +13.285212] kauditd_printk_skb: 110 callbacks suppressed
[ +6.762518] tee (2893): /proc/2689/oom_adj is deprecated, please use /proc/2689/oom_score_adj instead.
[ +0.699235] kauditd_printk_skb: 20 callbacks suppressed
[ +30.539765] kauditd_printk_skb: 35 callbacks suppressed
[Jun29 14:58] NFSD: Unable to end grace period: -110

==> kernel <==
15:00:03 up 3 min, 0 users, load average: 0.45, 0.23, 0.09
Linux minikube 4.15.0 #1 SMP Sun Jun 23 23:02:01 PDT 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refuIsNFOed
: == Generated kubectl prune whitelist flags: --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Endpoints --prune-whitelist core/v1/Namespace --prune-whitelist core/v1/PersistentVolumeClaim --prfind: '/etc/kubernetune-whitelist core/v1/PersistentVolumes/admission-controls': No such file or de --prune-whitelist core/v1/Pod --prune-whitelist core/v1/ReplicationController --prune-whitelist core/v1/Secretirectory
--prune-whitelist core/v1/Service --prune-whitelist batch/v1/Job --prune-whitelist error: no objects pasbatch/v1beta1/CronJob --prune-whitelist apps/v1/DaemonSet --prune-whitelist apps/v1/sed to apply
Deployment --prune-whitelerror: no objects pasist apps/v1/ReplicaSet --prune-whitelsed to apply
ist apps/v1/StatefulSeterror: no objects pas --prune-whitelist extensions/v1beta1/Ingress ==
sed to apply
INFO: == Kubernetes addon manager started at 2019-06-29T14:57:18+00:00 with ADDON_CHECK_INTERVAL_SEC=60 ==
WRN: == Failed to start /opt/namespace.yaml in namespace at 2019-06-29T14:57:19+00:00. 99 tries remaining. ==
INFO: == Default service account in the kube-system namespace has token default-token-l6wvr ==
INFO: == Entering periodical apply loop at 2019-06-29T14:57:23+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-29T14:57:23+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-29T14:57:25+00:00 ==
namespace/kube-system unchanged
INFO: == Successfully started /opt/namespace.yaml in namespace at 2019-06-29T14:57:29+00:00
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-29T14:58:23+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-29T14:58:24+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-29T14:59:24+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-29T14:59:25+00:00 ==

==> kube-apiserver <==
E0629 14:57:22.109912 1 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0629 14:57:22.109994 1 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0629 14:57:22.110073 1 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0629 14:57:22.110151 1 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0629 14:57:22.110232 1 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0629 14:57:22.110301 1 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0629 14:57:22.110358 1 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
I0629 14:57:22.110415 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0629 14:57:22.111833 1 client.go:354] parsed scheme: ""
I0629 14:57:22.111898 1 client.go:354] scheme "" not registered, fallback to default scheme
I0629 14:57:22.111971 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }]
I0629 14:57:22.112068 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
I0629 14:57:22.117020 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
I0629 14:57:22.117342 1 client.go:354] parsed scheme: ""
I0629 14:57:22.117356 1 client.go:354] scheme "" not registered, fallback to default scheme
I0629 14:57:22.117372 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }]
I0629 14:57:22.117413 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
I0629 14:57:22.124290 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
I0629 14:57:23.126047 1 secure_serving.go:116] Serving securely on [::]:8443
I0629 14:57:23.127162 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0629 14:57:23.127289 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0629 14:57:23.127649 1 controller.go:81] Starting OpenAPI AggregationController
I0629 14:57:23.127811 1 autoregister_controller.go:140] Starting autoregister controller
I0629 14:57:23.127843 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0629 14:57:23.128005 1 crd_finalizer.go:255] Starting CRDFinalizer
I0629 14:57:23.177190 1 available_controller.go:374] Starting AvailableConditionController
I0629 14:57:23.177215 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0629 14:57:23.177229 1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0629 14:57:23.177232 1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
I0629 14:57:23.177242 1 controller.go:83] Starting OpenAPI controller
I0629 14:57:23.177248 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0629 14:57:23.177260 1 naming_controller.go:288] Starting NamingConditionController
I0629 14:57:23.177276 1 establishing_controller.go:73] Starting EstablishingController
I0629 14:57:23.177287 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
E0629 14:57:23.204452 1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.105, ResourceVersion: 0, AdditionalErrorMsg:
I0629 14:57:23.228106 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0629 14:57:23.238651 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0629 14:57:23.238669 1 cache.go:39] Caches are synced for autoregister controller
I0629 14:57:23.277535 1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0629 14:57:23.277566 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0629 14:57:24.125277 1 controller.go:107] OpenAPI AggregationController: Processing item
I0629 14:57:24.125337 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0629 14:57:24.125355 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0629 14:57:24.138855 1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0629 14:57:24.616033 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0629 14:57:24.630549 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0629 14:57:24.660482 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0629 14:57:24.672339 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0629 14:57:24.677058 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0629 14:57:41.216642 1 controller.go:606] quota admission added evaluator for: endpoints

==> kube-proxy <==
W0629 14:57:25.824364 1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0629 14:57:25.863111 1 server_others.go:143] Using iptables Proxier.
W0629 14:57:25.863366 1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0629 14:57:25.863658 1 server.go:534] Version: v1.15.0
I0629 14:57:25.871565 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0629 14:57:25.871589 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0629 14:57:25.872060 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0629 14:57:25.874448 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0629 14:57:25.874484 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0629 14:57:25.874617 1 config.go:96] Starting endpoints config controller
I0629 14:57:25.874637 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0629 14:57:25.874652 1 config.go:187] Starting service config controller
I0629 14:57:25.874664 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0629 14:57:25.974783 1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0629 14:57:25.974836 1 controller_utils.go:1036] Caches are synced for service config controller

==> kube-scheduler <==
I0629 14:57:19.383833 1 serving.go:319] Generated self-signed cert in-memory
W0629 14:57:19.644243 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0629 14:57:19.644264 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0629 14:57:19.644274 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0629 14:57:19.646823 1 server.go:142] Version: v1.15.0
I0629 14:57:19.647093 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0629 14:57:19.648852 1 authorization.go:47] Authorization is disabled
W0629 14:57:19.648867 1 authentication.go:55] Authentication is disabled
I0629 14:57:19.649484 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0629 14:57:19.650248 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0629 14:57:23.208356 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0629 14:57:23.208503 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0629 14:57:23.208545 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0629 14:57:23.208612 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0629 14:57:23.208681 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0629 14:57:23.208712 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0629 14:57:23.208753 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0629 14:57:23.208799 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0629 14:57:23.212743 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0629 14:57:23.212786 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0629 14:57:25.052741 1 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
I0629 14:57:41.219954 1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sat 2019-06-29 14:56:58 UTC, end at Sat 2019-06-29 15:00:03 UTC. --
Jun 29 14:57:21 minikube kubelet[2257]: E0629 14:57:21.438849 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:21 minikube kubelet[2257]: E0629 14:57:21.539006 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:21 minikube kubelet[2257]: E0629 14:57:21.639158 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:21 minikube kubelet[2257]: E0629 14:57:21.739318 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:21 minikube kubelet[2257]: E0629 14:57:21.839470 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:21 minikube kubelet[2257]: E0629 14:57:21.939614 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.039752 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.139879 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.240013 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.340158 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.440313 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.540505 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.640687 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.740887 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.841065 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:22 minikube kubelet[2257]: E0629 14:57:22.941246 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:23 minikube kubelet[2257]: E0629 14:57:23.041421 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:23 minikube kubelet[2257]: E0629 14:57:23.141541 2257 kubelet.go:2248] node "minikube" not found
Jun 29 14:57:23 minikube kubelet[2257]: E0629 14:57:23.202753 2257 reflector.go:125] object-"kube-system"/"kube-proxy-token-zrt6s": Failed to list *v1.Secret: secrets "kube-proxy-token-zrt6s" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Jun 29 14:57:23 minikube kubelet[2257]: E0629 14:57:23.202987 2257 reflector.go:125] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Jun 29 14:57:23 minikube kubelet[2257]: E0629 14:57:23.203527 2257 reflector.go:125] object-"kube-system"/"coredns-token-xvhkx": Failed to list *v1.Secret: secrets "coredns-token-xvhkx" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Jun 29 14:57:23 minikube kubelet[2257]: E0629 14:57:23.203621 2257 reflector.go:125] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Jun 29 14:57:23 minikube kubelet[2257]: E0629 14:57:23.212554 2257 reflector.go:125] object-"kube-system"/"storage-provisioner-token-5x7kd": Failed to list *v1.Secret: secrets "storage-provisioner-token-5x7kd" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249005 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-xvhkx" (UniqueName: "kubernetes.io/secret/7e99c028-036f-4ee8-b26b-f7f003a45486-coredns-token-xvhkx") pod "coredns-5c98db65d4-z9jnv" (UID: "7e99c028-036f-4ee8-b26b-f7f003a45486")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249051 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-xvhkx" (UniqueName: "kubernetes.io/secret/93920069-9031-43d7-a375-87fd12c8285c-coredns-token-xvhkx") pod "coredns-5c98db65d4-sc66b" (UID: "93920069-9031-43d7-a375-87fd12c8285c")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249076 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/bb1fe8da-5687-4a2f-9606-a55bacc239df-tmp") pod "storage-provisioner" (UID: "bb1fe8da-5687-4a2f-9606-a55bacc239df")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249095 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-5x7kd" (UniqueName: "kubernetes.io/secret/bb1fe8da-5687-4a2f-9606-a55bacc239df-storage-provisioner-token-5x7kd") pod "storage-provisioner" (UID: "bb1fe8da-5687-4a2f-9606-a55bacc239df")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249111 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/18ab1469-cfa9-4e93-b00f-2df5740a835a-kube-proxy") pod "kube-proxy-59wbg" (UID: "18ab1469-cfa9-4e93-b00f-2df5740a835a")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249123 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/18ab1469-cfa9-4e93-b00f-2df5740a835a-lib-modules") pod "kube-proxy-59wbg" (UID: "18ab1469-cfa9-4e93-b00f-2df5740a835a")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249144 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-zrt6s" (UniqueName: "kubernetes.io/secret/18ab1469-cfa9-4e93-b00f-2df5740a835a-kube-proxy-token-zrt6s") pod "kube-proxy-59wbg" (UID: "18ab1469-cfa9-4e93-b00f-2df5740a835a")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249158 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/93920069-9031-43d7-a375-87fd12c8285c-config-volume") pod "coredns-5c98db65d4-sc66b" (UID: "93920069-9031-43d7-a375-87fd12c8285c")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249172 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e99c028-036f-4ee8-b26b-f7f003a45486-config-volume") pod "coredns-5c98db65d4-z9jnv" (UID: "7e99c028-036f-4ee8-b26b-f7f003a45486")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249184 2257 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/18ab1469-cfa9-4e93-b00f-2df5740a835a-xtables-lock") pod "kube-proxy-59wbg" (UID: "18ab1469-cfa9-4e93-b00f-2df5740a835a")
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.249191 2257 reconciler.go:150] Reconciler: start to sync state
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.598669 2257 kubelet_node_status.go:114] Node minikube was previously registered
Jun 29 14:57:23 minikube kubelet[2257]: I0629 14:57:23.598978 2257 kubelet_node_status.go:75] Successfully registered node minikube
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352094 2257 secret.go:198] Couldn't get secret kube-system/coredns-token-xvhkx: couldn't propagate object cache: timed out waiting for the condition
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352256 2257 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/7e99c028-036f-4ee8-b26b-f7f003a45486-coredns-token-xvhkx" ("7e99c028-036f-4ee8-b26b-f7f003a45486")" failed. No retries permitted until 2019-06-29 14:57:24.852217664 +0000 UTC m=+10.067121239 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "coredns-token-xvhkx" (UniqueName: "kubernetes.io/secret/7e99c028-036f-4ee8-b26b-f7f003a45486-coredns-token-xvhkx") pod "coredns-5c98db65d4-z9jnv" (UID: "7e99c028-036f-4ee8-b26b-f7f003a45486") : couldn't propagate object cache: timed out waiting for the condition"
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352292 2257 configmap.go:203] Couldn't get configMap kube-system/coredns: couldn't propagate object cache: timed out waiting for the condition
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352349 2257 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/93920069-9031-43d7-a375-87fd12c8285c-config-volume" ("93920069-9031-43d7-a375-87fd12c8285c")" failed. No retries permitted until 2019-06-29 14:57:24.852327216 +0000 UTC m=+10.067230789 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/93920069-9031-43d7-a375-87fd12c8285c-config-volume") pod "coredns-5c98db65d4-sc66b" (UID: "93920069-9031-43d7-a375-87fd12c8285c") : couldn't propagate object cache: timed out waiting for the condition"
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352445 2257 secret.go:198] Couldn't get secret kube-system/coredns-token-xvhkx: couldn't propagate object cache: timed out waiting for the condition
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352529 2257 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/93920069-9031-43d7-a375-87fd12c8285c-coredns-token-xvhkx" ("93920069-9031-43d7-a375-87fd12c8285c")" failed. No retries permitted until 2019-06-29 14:57:24.852503857 +0000 UTC m=+10.067407459 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "coredns-token-xvhkx" (UniqueName: "kubernetes.io/secret/93920069-9031-43d7-a375-87fd12c8285c-coredns-token-xvhkx") pod "coredns-5c98db65d4-sc66b" (UID: "93920069-9031-43d7-a375-87fd12c8285c") : couldn't propagate object cache: timed out waiting for the condition"
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352550 2257 configmap.go:203] Couldn't get configMap kube-system/coredns: couldn't propagate object cache: timed out waiting for the condition
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352612 2257 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/7e99c028-036f-4ee8-b26b-f7f003a45486-config-volume" ("7e99c028-036f-4ee8-b26b-f7f003a45486")" failed. No retries permitted until 2019-06-29 14:57:24.852592328 +0000 UTC m=+10.067495890 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e99c028-036f-4ee8-b26b-f7f003a45486-config-volume") pod "coredns-5c98db65d4-z9jnv" (UID: "7e99c028-036f-4ee8-b26b-f7f003a45486") : couldn't propagate object cache: timed out waiting for the condition"
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352632 2257 secret.go:198] Couldn't get secret kube-system/storage-provisioner-token-5x7kd: couldn't propagate object cache: timed out waiting for the condition
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352696 2257 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/bb1fe8da-5687-4a2f-9606-a55bacc239df-storage-provisioner-token-5x7kd" ("bb1fe8da-5687-4a2f-9606-a55bacc239df")" failed. No retries permitted until 2019-06-29 14:57:24.852676085 +0000 UTC m=+10.067579645 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "storage-provisioner-token-5x7kd" (UniqueName: "kubernetes.io/secret/bb1fe8da-5687-4a2f-9606-a55bacc239df-storage-provisioner-token-5x7kd") pod "storage-provisioner" (UID: "bb1fe8da-5687-4a2f-9606-a55bacc239df") : couldn't propagate object cache: timed out waiting for the condition"
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352716 2257 configmap.go:203] Couldn't get configMap kube-system/kube-proxy: couldn't propagate object cache: timed out waiting for the condition
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352774 2257 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/18ab1469-cfa9-4e93-b00f-2df5740a835a-kube-proxy" ("18ab1469-cfa9-4e93-b00f-2df5740a835a")" failed. No retries permitted until 2019-06-29 14:57:24.852755645 +0000 UTC m=+10.067659206 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/18ab1469-cfa9-4e93-b00f-2df5740a835a-kube-proxy") pod "kube-proxy-59wbg" (UID: "18ab1469-cfa9-4e93-b00f-2df5740a835a") : couldn't propagate object cache: timed out waiting for the condition"
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352793 2257 secret.go:198] Couldn't get secret kube-system/kube-proxy-token-zrt6s: couldn't propagate object cache: timed out waiting for the condition
Jun 29 14:57:24 minikube kubelet[2257]: E0629 14:57:24.352852 2257 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/18ab1469-cfa9-4e93-b00f-2df5740a835a-kube-proxy-token-zrt6s" ("18ab1469-cfa9-4e93-b00f-2df5740a835a")" failed. No retries permitted until 2019-06-29 14:57:24.852831988 +0000 UTC m=+10.067735549 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "kube-proxy-token-zrt6s" (UniqueName: "kubernetes.io/secret/18ab1469-cfa9-4e93-b00f-2df5740a835a-kube-proxy-token-zrt6s") pod "kube-proxy-59wbg" (UID: "18ab1469-cfa9-4e93-b00f-2df5740a835a") : couldn't propagate object cache: timed out waiting for the condition"

==> storage-provisioner <==

The operating system version:

% uname -a # Note: dell xps 13 9380 developer edition with full apt-get dist-upgrade
Linux fls 4.15.0-1043-oem #48-Ubuntu SMP Fri Jun 14 09:57:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

medyagh (Member) commented Jul 1, 2019

I am curious, are you in a network that restricts your access to docker hub?
"fetching remote image: UNAUTHORIZED: Not Authorized."

medyagh added the needs-solution-message label (issues where offering a solution for an error would be helpful) on Jul 2, 2019
flstaats (Author) commented Jul 2, 2019

My local docker can pull images just fine. Note that I log in with "credsStore": "pass", which works correctly.

$ docker login
Authenticating with existing credentials...
Login Succeeded
$ docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
alpine              latest              4d90542f0623        13 days ago         5.58MB
ubuntu              latest              4c108a37151f        2 weeks ago         64.2MB
$ docker image pull hello-world:latest
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:41a65640635299bab090f783209c1e3a3f11934cf7756b09cb2f1e02147c6ed8
Status: Downloaded newer image for hello-world:latest
$ docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
alpine              latest              4d90542f0623        13 days ago         5.58MB
ubuntu              latest              4c108a37151f        2 weeks ago         64.2MB
hello-world         latest              fce289e99eb9        6 months ago        1.84kB

medyagh (Member) commented Jul 2, 2019

Seems like you are trying to pull an image from a private registry?
I am curious if this would solve your problem:

1- First connect to minikube's docker:
eval $(minikube docker-env)
2- Try to pull your image (run in the same terminal):
docker pull library/hello-world

tstromberg added the triage/needs-information label (indicates an issue needs more information in order to work on it) on Jul 16, 2019
flstaats (Author) commented:
I figured out what was happening. During the first run minikube downloads images from docker into the ~/.minikube/cache directory. If you have your local docker configured with interactive authentication, the prompt for authentication is not exposed, so you wind up with an authentication failure. Simply removing the line '"credsStore": "pass"' from ~/.docker/config.json allowed minikube to bootstrap itself correctly.

So the issue with minikube seems to be that it does not interact correctly with docker credential managers (e.g. credsStore) when the credential manager requires user feedback.
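
For reference, a minimal sketch of the kind of ~/.docker/config.json that triggers this (the auths entry and registry URL shown here are illustrative):

$ cat ~/.docker/config.json
{
        "auths": {
                "https://index.docker.io/v1/": {}
        },
        "credsStore": "pass"
}

Deleting the "credsStore" line (and cleaning up any trailing comma it leaves behind) is the workaround described above.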

medyagh (Member) commented Jul 25, 2019

@flstaats Thank you so much for sharing the workaround you found; we definitely need to provide a better solution message!

We should detect this and advise the user to apply the workaround you did!

I would be happy to review any PR that addresses this issue!
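
As a rough illustration of the idea only (this is a sketch, not how minikube is structured today), a check along these lines before caching would catch the common case:

$ # hedged sketch: warn when ~/.docker/config.json configures a credential helper
$ if grep -q '"credsStore"' ~/.docker/config.json 2>/dev/null; then
>   echo "WARNING: docker is configured with a credential helper (credsStore);"
>   echo "caching images may fail with UNAUTHORIZED. Workaround: remove the"
>   echo "credsStore line from ~/.docker/config.json and re-run 'minikube start'."
> fi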

medyagh added the good first issue, help wanted, and priority/important-longterm labels and removed the triage/needs-information label on Jul 25, 2019
asarkar commented Sep 5, 2019

I kept getting the error "invoking docker-credential-desktop: exit status 1; output: No stored credential for https://gcr.io" from minikube, and the minikube cluster kept timing out on start. My credsStore is set to desktop, and deleting that line made no difference. The error remained after I restarted my computer, but the cluster started successfully.
I ended up uninstalling Minikube and using the Kubernetes feature offered in Docker Desktop.

tstromberg (Contributor) commented:
I believe this was fixed in #5224 - and is available to test in the v1.4 betas.

tstromberg removed the needs-solution-message label on Sep 17, 2019
tstromberg (Contributor) commented:
minikube v1.4 was released, which addresses this.

flstaats (Author) commented:
I have validated that minikube v1.4 bootstraps a new k8s environment with my ~/.docker/config.json having the value "credsStore": "pass" set. Thanks for fixing this inconvenience!
