docker: Ingress not exposed on MacOS #7332

Closed
jkornata opened this issue Mar 31, 2020 · 74 comments · Fixed by #7393 or #12089
Labels
addon/ingress co/docker-driver Issues related to kubernetes in container help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. os/macos priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. top-10-issues Top 10 support issues

Comments

@jkornata

Steps to reproduce the issue:
I can't access ingress on a fresh installation. This is on macOS, using Docker for Mac, with the Kubernetes that ships with Docker for Mac disabled.

  1. minikube start --vm-driver=docker --kubernetes-version v1.14.0
  2. minikube addons enable ingress

The issue is not affected by the Kubernetes version; it also happens on the newest one. I've tried following this guide, but it doesn't work without an ingress service. I thought that adding the service manually, as suggested here, would fix the issue, but it doesn't.
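For reference, a manually added service of the kind suggested there would look roughly like this; the service name is only illustrative, and the selector matches the app.kubernetes.io/name label visible on the ingress controller pod below, so treat this as a sketch rather than the exact manifest from the linked suggestion:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx          # illustrative name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx-ingress-controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
EOF

Even with a NodePort service like this in place, the node IP is still not reachable from the macOS host.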

kubectl get ep
NAME         ENDPOINTS         AGE
kubernetes   172.17.0.2:8443   33m
web          172.18.0.5:8080   23m

But if I try to curl 172.18.0.5:8080, it cannot connect.
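To narrow down whether this is a cluster problem or a host-to-container routing problem, the same endpoint can be checked from inside the minikube node (a quick sanity check using the standard minikube ssh command):

minikube ssh -- curl -s 172.18.0.5:8080

If that succeeds while the same curl from the macOS host fails, the endpoints themselves are fine and only the route from the host into the Docker network is missing.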

kubectl -n kube-system  describe po nginx-ingress-controller-b84556868-kh8n6 
Name:           nginx-ingress-controller-b84556868-kh8n6
Namespace:      kube-system
Priority:       0
Node:           minikube/172.17.0.2
Start Time:     Tue, 31 Mar 2020 10:35:06 +0200
Labels:         addonmanager.kubernetes.io/mode=Reconcile
                app.kubernetes.io/name=nginx-ingress-controller
                app.kubernetes.io/part-of=kube-system
                pod-template-hash=b84556868
Annotations:    prometheus.io/port: 10254
                prometheus.io/scrape: true
Status:         Running
IP:             172.18.0.4
IPs:            <none>

curl 172.18.0.4 doesn't work either.

kubectl get ing
NAME              HOSTS              ADDRESS      PORTS   AGE
example-ingress   hello-world.info   172.17.0.2   80      24m

Neither does curl 172.17.0.2 nor curl hello-world.info (with /etc/hosts modified accordingly).
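As a sanity check rather than a real fix, the ingress controller can be reached by port-forwarding its deployment to the host and sending the Host header the Ingress routes on; the local port 8080 is arbitrary, and the deployment name matches the pod name prefix above:

# /etc/hosts entry used for the test above (illustrative):
# 172.17.0.2   hello-world.info

kubectl -n kube-system port-forward deployment/nginx-ingress-controller 8080:80
# in a second terminal:
curl -H "Host: hello-world.info" http://127.0.0.1:8080/

This only shows that the controller itself answers; it does not make the ingress reachable on the node IP from the host.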

docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS                                                                           NAMES
2bb364a550d9        gcr.io/k8s-minikube/kicbase:v0.0.8   "/usr/local/bin/entr…"   40 minutes ago      Up 40 minutes       127.0.0.1:32773->22/tcp, 127.0.0.1:32772->2376/tcp, 127.0.0.1:32771->8443/tcp   minikube
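Note that the minikube container only publishes 22, 2376 and 8443 to 127.0.0.1, so there is no published port that HTTP traffic to the ingress could use from the macOS host. The mappings can also be listed directly with the standard Docker CLI:

docker port minikube

which for this setup should show only the three mappings above.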

Full output of failed command:

|-------------|--------------------------|--------------------------------|-----|
|  NAMESPACE  |           NAME           |          TARGET PORT           | URL |
|-------------|--------------------------|--------------------------------|-----|
| default     | kubernetes               | No node port                   |
| kube-system | kube-dns                 | No node port                   |
|-------------|--------------------------|--------------------------------|-----|

Full output of minikube start command used, if not already included:

😄 minikube v1.9.0 on Darwin 10.12.6
✨ Using the docker driver based on user configuration
🚜 Pulling base image ...
🔥 Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=1989MB (1989MB available) ...
🐳 Preparing Kubernetes v1.14.0 on Docker 19.03.2...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🌟 Enabling addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube".

❗ /usr/local/bin/kubectl is v1.18.0, which may be incompatible with Kubernetes v1.14.0.
💡 You can also use 'minikube kubectl -- get pods' to invoke a matching version
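For completeness, this is how the bundled, version-matched kubectl can be compared against the host one (any kubectl subcommand can follow the --):

kubectl version --short
minikube kubectl -- version --short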

Optional: Full output of minikube logs command:

==> Docker <== -- Logs begin at Tue 2020-03-31 08:29:37 UTC, end at Tue 2020-03-31 08:56:14 UTC. -- Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764070218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764087347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764102400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764116769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764132540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764146535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764191662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764209275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764224931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764239224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764676320Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764742415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764898357Z" level=info msg=serving... 
address=/var/run/docker/containerd/containerd.sock Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764918518Z" level=info msg="containerd successfully booted in 0.075847s" Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.768852318Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000932020, READY" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.772244255Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.772733171Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.773028740Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.773232914Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.773688407Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000663a00, CONNECTING" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.773693901Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.774303501Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000663a00, READY" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775500487Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775655306Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775702803Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775735963Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775832092Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000663f40, CONNECTING" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.776416332Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000663f40, READY" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.781092553Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.796947718Z" level=info msg="Loading containers: start." Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.012391294Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.112329971Z" level=info msg="Loading containers: done." Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.145329464Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2 Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.145538169Z" level=info msg="Daemon has completed initialization" Mar 31 08:29:44 minikube systemd[1]: Started Docker Application Container Engine. 
Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.200990838Z" level=info msg="API listen on /var/run/docker.sock" Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.201139377Z" level=info msg="API listen on [::]:2376" Mar 31 08:31:31 minikube dockerd[492]: time="2020-03-31T08:31:31.739954908Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f9656dbf0466d93ef18b4df2bd71f153525ccf97621f24ffea19318ad3e51657.sock debug=false pid=2061 Mar 31 08:31:31 minikube dockerd[492]: time="2020-03-31T08:31:31.767851969Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6d0c1d199ed791ac12ed903ee38af96fcaf6f6aa88827aacf1e0522fbd4bf4f6.sock debug=false pid=2065 Mar 31 08:31:31 minikube dockerd[492]: time="2020-03-31T08:31:31.772055007Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2d9cdc27b4ee56ba50d171794fea9b61119e7e5a0c205188f9ca4df157170e05.sock debug=false pid=2068 Mar 31 08:31:31 minikube dockerd[492]: time="2020-03-31T08:31:31.781207101Z" level=info msg="shim containerd-shim started" address=/containerd-shim/732f37e8c13ebed913ae0b08f53511d9bb83fbfe84b3c4f8f9267867806c4e4b.sock debug=false pid=2072 Mar 31 08:31:32 minikube dockerd[492]: time="2020-03-31T08:31:32.561984783Z" level=info msg="shim containerd-shim started" address=/containerd-shim/55ae5c97f9cc2124da21f5992d9a70a3f2c7754923206cb07403e0a7ddd60aaf.sock debug=false pid=2257 Mar 31 08:31:32 minikube dockerd[492]: time="2020-03-31T08:31:32.694220871Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1969993e8979c9e1492e8bf269ed3381e5de4fcd206b6d24932168c49ad47fa6.sock debug=false pid=2295 Mar 31 08:31:32 minikube dockerd[492]: time="2020-03-31T08:31:32.708992427Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4120649ccf3aacf8220077782d5711078e53fdd33dfd12f94b21e638d54ef4fd.sock debug=false pid=2302 Mar 31 08:31:32 minikube dockerd[492]: time="2020-03-31T08:31:32.728154316Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e0e5c370e03b6e7ab4787dd6359ba272d56a13f17130ad0140513ffc79a1f677.sock debug=false pid=2309 Mar 31 08:32:03 minikube dockerd[492]: time="2020-03-31T08:32:03.361833574Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2981a3a881af97dc083d2e2349d45f1af94116c3a445e8c9d7d2261d0eab561f.sock debug=false pid=2876 Mar 31 08:32:03 minikube dockerd[492]: time="2020-03-31T08:32:03.533736979Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fcd40646714e8511819738aa14c55f375b0ab025db1aedec5e64dc99c8929c30.sock debug=false pid=2906 Mar 31 08:32:03 minikube dockerd[492]: time="2020-03-31T08:32:03.936151887Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bb5bcfea109fcad86618deb96ce4906be121a8ba325cbf4a9a86ad847605c23e.sock debug=false pid=2958 Mar 31 08:32:04 minikube dockerd[492]: time="2020-03-31T08:32:04.066814231Z" level=info msg="shim containerd-shim started" address=/containerd-shim/efa7427bb2c173ea13bca75be0a9f54c7096c622a1a552e37d73107d997c1ba0.sock debug=false pid=2982 Mar 31 08:32:05 minikube dockerd[492]: time="2020-03-31T08:32:05.379357085Z" level=info msg="shim containerd-shim started" address=/containerd-shim/07478204604916b55f0526b99919d1924ce0b9e7d8bfbb883989c6f9f6cd8118.sock debug=false pid=3061 Mar 31 08:32:05 minikube dockerd[492]: time="2020-03-31T08:32:05.671648875Z" level=info msg="shim containerd-shim started" 
address=/containerd-shim/6b33c5a37b92afa15820bb1bf8b0c2eacbd64fd61e6355305ad8b8072dfbc781.sock debug=false pid=3094 Mar 31 08:32:05 minikube dockerd[492]: time="2020-03-31T08:32:05.738650640Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ed999c3330247aead76ae32bd6b1a431161fc7447f333b9e7d3777ccffd87eeb.sock debug=false pid=3113 Mar 31 08:32:07 minikube dockerd[492]: time="2020-03-31T08:32:07.356036160Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b76d0042d4fc4476b1849988e9512e92ee0df797ee38b34a02d831cc05b6303b.sock debug=false pid=3264 Mar 31 08:32:07 minikube dockerd[492]: time="2020-03-31T08:32:07.361647388Z" level=info msg="shim containerd-shim started" address=/containerd-shim/3ed7ecff86048f785946d4c701e90d4bc1385978017c8533fc27fd790659206e.sock debug=false pid=3265 Mar 31 08:32:27 minikube dockerd[492]: time="2020-03-31T08:32:27.075828822Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e43aba448c53331f08ec1cf1a2cc3b896cd36cf09ae377d92c6ad9cda82e031d.sock debug=false pid=3557 Mar 31 08:35:07 minikube dockerd[492]: time="2020-03-31T08:35:07.062000473Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bf1bfb3c846a13cca7780c6a6592d470b7840e06528d827b058b826211009772.sock debug=false pid=4850 Mar 31 08:35:09 minikube dockerd[492]: time="2020-03-31T08:35:09.544399020Z" level=warning msg="[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the quay.io registry NOW to avoid future disruption." Mar 31 08:36:53 minikube dockerd[492]: time="2020-03-31T08:36:53.035598147Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6c4828993809d8b74455926d52bbec30da38e86cafd759bc7414a0ff4c3b3d42.sock debug=false pid=5776 Mar 31 08:41:55 minikube dockerd[492]: time="2020-03-31T08:41:55.036141932Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f22126523d7baaa584a3e9797d19b195849472a0d973f0ae5429d94368466590.sock debug=false pid=8113 Mar 31 08:42:00 minikube dockerd[492]: time="2020-03-31T08:42:00.287239535Z" level=info msg="shim containerd-shim started" address=/containerd-shim/d5bcd9e0e2c52dae9a7cd6398f49bf0c57acba6f6b7db9f65c458b3ea52be9c8.sock debug=false pid=8213

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
49548f067e0fb gcr.io/google-samples/hello-app@sha256:c62ead5b8c15c231f9e786250b07909daf6c266d0fcddd93fea882eb722c3be4 14 minutes ago Running web 0 cc3588d4252ea
6e356d38f6644 quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7 19 minutes ago Running nginx-ingress-controller 0 0254de39b3801
fdf3890ae6cad kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555 23 minutes ago Running kindnet-cni 0 fdc9efa64e13c
5987d4d29db7b eb516548c180f 24 minutes ago Running coredns 0 a48e9875ea2d7
6a507738d34a6 eb516548c180f 24 minutes ago Running coredns 0 55124d3804fb1
31fa7a07f95ed 5cd54e388abaf 24 minutes ago Running kube-proxy 0 00fed65b89e57
791695c1a1a89 4689081edb103 24 minutes ago Running storage-provisioner 0 4e5d751c70346
b82aa41df356b 2c4adeb21b4ff 24 minutes ago Running etcd 0 09a6124253491
636cbc28b02a5 00638a24688b0 24 minutes ago Running kube-scheduler 0 59929901cfb8d
a15a83b0d226f ecf910f40d6e0 24 minutes ago Running kube-apiserver 0 1702fda9a509f
c3fe71e5fc3a8 b95b1efa0436b 24 minutes ago Running kube-controller-manager 0 d91b6fdb43251

==> coredns [5987d4d29db7] <==
.:53
2020-03-31T08:32:08.712Z [INFO] CoreDNS-1.3.1
2020-03-31T08:32:08.713Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-03-31T08:32:08.713Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669

==> coredns [6a507738d34a] <==
.:53
2020-03-31T08:32:08.711Z [INFO] CoreDNS-1.3.1
2020-03-31T08:32:08.711Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-03-31T08:32:08.711Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669

==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=8af1ea66d8a0cb7202a44a91b6dc775577868ed1
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_03_31T10_31_49_0700
minikube.k8s.io/version=v1.9.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 31 Mar 2020 08:31:43 +0000
Taints:
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.17.0.2
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2037620Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2037620Ki
pods: 110
System Info:
Machine ID: 8545c5f5c4eb42e884baacaf5fa1f5fb
System UUID: e80618a3-0f92-4608-98b0-196f69922a9e
Boot ID: 598d6f3e-313e-44ba-867d-08468399f9d3
Kernel Version: 4.19.76-linuxkit
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.14.0
Kube-Proxy Version: v1.14.0
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


default web 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14m
kube-system coredns-fb8b8dccf-bktjn 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 24m
kube-system coredns-fb8b8dccf-lbpbz 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 24m
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system kindnet-hcl42 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 24m
kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system kube-proxy-m7v6p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m
kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system nginx-ingress-controller-b84556868-kh8n6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 850m (42%) 100m (5%)
memory 190Mi (9%) 390Mi (19%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message


Normal NodeHasSufficientMemory 24m (x8 over 24m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24m (x8 over 24m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24m (x7 over 24m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Warning readOnlySysFS 24m kube-proxy, minikube CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
Normal Starting 24m kube-proxy, minikube Starting kube-proxy.

==> dmesg <==
[Mar31 07:34] tsc: Unable to calibrate against PIT
[ +0.597814] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[ +0.001924] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[ +0.005139] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[ +0.001680] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[ +0.058545] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.022298] ahci 0000:00:02.0: can't derive routing for PCI INT A
[ +0.001507] ahci 0000:00:02.0: PCI INT A: no GSI
[ +0.683851] i8042: Can't read CTR while initializing i8042
[ +0.001417] i8042: probe of i8042 failed with error -5
[ +0.006370] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[ +0.001774] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[ +0.260204] ata1.00: ATA Identify Device Log not supported
[ +0.001281] ata1.00: Security Log not supported
[ +0.002459] ata1.00: ATA Identify Device Log not supported
[ +0.001264] ata1.00: Security Log not supported
[ +0.154008] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.021992] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Mar31 07:35] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.077989] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Mar31 07:40] hrtimer: interrupt took 2316993 ns
[Mar31 07:47] tee (5973): /proc/5576/oom_adj is deprecated, please use /proc/5576/oom_score_adj instead.

==> etcd [b82aa41df356] <==
2020-03-31 08:31:34.136909 I | etcdmain: etcd Version: 3.3.10
2020-03-31 08:31:34.139611 I | etcdmain: Git SHA: 27fc7e2
2020-03-31 08:31:34.139688 I | etcdmain: Go Version: go1.10.4
2020-03-31 08:31:34.140806 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-31 08:31:34.141644 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-03-31 08:31:34.144109 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-31 08:31:34.161365 I | embed: listening for peers on https://172.17.0.2:2380
2020-03-31 08:31:34.162964 I | embed: listening for client requests on 127.0.0.1:2379
2020-03-31 08:31:34.163139 I | embed: listening for client requests on 172.17.0.2:2379
2020-03-31 08:31:34.193488 I | etcdserver: name = minikube
2020-03-31 08:31:34.194252 I | etcdserver: data dir = /var/lib/minikube/etcd
2020-03-31 08:31:34.195167 I | etcdserver: member dir = /var/lib/minikube/etcd/member
2020-03-31 08:31:34.195636 I | etcdserver: heartbeat = 100ms
2020-03-31 08:31:34.195985 I | etcdserver: election = 1000ms
2020-03-31 08:31:34.196385 I | etcdserver: snapshot count = 10000
2020-03-31 08:31:34.196656 I | etcdserver: advertise client URLs = https://172.17.0.2:2379
2020-03-31 08:31:34.197009 I | etcdserver: initial advertise peer URLs = https://172.17.0.2:2380
2020-03-31 08:31:34.197237 I | etcdserver: initial cluster = minikube=https://172.17.0.2:2380
2020-03-31 08:31:34.236216 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
2020-03-31 08:31:34.236303 I | raft: b8e14bda2255bc24 became follower at term 0
2020-03-31 08:31:34.236320 I | raft: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-03-31 08:31:34.236334 I | raft: b8e14bda2255bc24 became follower at term 1
2020-03-31 08:31:34.340367 W | auth: simple token is not cryptographically signed
2020-03-31 08:31:34.401667 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
2020-03-31 08:31:34.409456 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-03-31 08:31:34.424575 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
2020-03-31 08:31:34.442258 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-31 08:31:34.444013 I | embed: listening for metrics on http://172.17.0.2:2381
2020-03-31 08:31:34.444133 I | embed: listening for metrics on http://127.0.0.1:2381
2020-03-31 08:31:34.702254 I | raft: b8e14bda2255bc24 is starting a new election at term 1
2020-03-31 08:31:34.702335 I | raft: b8e14bda2255bc24 became candidate at term 2
2020-03-31 08:31:34.702368 I | raft: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
2020-03-31 08:31:34.702389 I | raft: b8e14bda2255bc24 became leader at term 2
2020-03-31 08:31:34.702402 I | raft: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-03-31 08:31:34.931189 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-03-31 08:31:35.006979 I | etcdserver: setting up the initial cluster version to 3.3
2020-03-31 08:31:35.060823 I | embed: ready to serve client requests
2020-03-31 08:31:35.391969 N | etcdserver/membership: set the initial cluster version to 3.3
2020-03-31 08:31:35.432869 I | etcdserver/api: enabled capabilities for version 3.3
2020-03-31 08:31:35.461278 I | embed: ready to serve client requests
2020-03-31 08:31:35.497338 I | embed: serving client requests on 127.0.0.1:2379
2020-03-31 08:31:35.498302 I | embed: serving client requests on 172.17.0.2:2379
proto: no coders for int
proto: no encoder for ValueSize int [GetProperties]
2020-03-31 08:32:26.935952 W | etcdserver: request "header:<ID:13557085228049851706 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/172.17.0.2" mod_revision:439 > success:<request_put:<key:"/registry/masterleases/172.17.0.2" value_size:65 lease:4333713191195075896 >> failure:<request_range:<key:"/registry/masterleases/172.17.0.2" > >>" with result "size:16" took too long (262.086409ms) to execute
2020-03-31 08:32:26.936285 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/kube-scheduler" " with result "range_response_count:1 size:430" took too long (178.640542ms) to execute
2020-03-31 08:36:01.834832 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/kube-controller-manager" " with result "range_response_count:1 size:448" took too long (537.346776ms) to execute
2020-03-31 08:36:01.837558 W | etcdserver: read-only range request "key:"/registry/deployments" range_end:"/registry/deploymentt" count_only:true " with result "range_response_count:0 size:7" took too long (237.353514ms) to execute
2020-03-31 08:36:50.822689 W | etcdserver: read-only range request "key:"/registry/persistentvolumeclaims" range_end:"/registry/persistentvolumeclaimt" count_only:true " with result "range_response_count:0 size:5" took too long (268.036763ms) to execute
2020-03-31 08:36:50.823106 W | etcdserver: read-only range request "key:"/registry/leases/kube-node-lease/minikube" " with result "range_response_count:1 size:289" took too long (313.963517ms) to execute
2020-03-31 08:36:52.839697 W | etcdserver: read-only range request "key:"/registry/runtimeclasses" range_end:"/registry/runtimeclasset" count_only:true " with result "range_response_count:0 size:5" took too long (521.345081ms) to execute
2020-03-31 08:41:36.476771 I | mvcc: store.index: compact 792
2020-03-31 08:41:36.485267 I | mvcc: finished scheduled compaction at 792 (took 4.328598ms)
2020-03-31 08:46:36.273524 I | mvcc: store.index: compact 1204
2020-03-31 08:46:36.277749 I | mvcc: finished scheduled compaction at 1204 (took 1.397204ms)
2020-03-31 08:51:36.069722 I | mvcc: store.index: compact 1625
2020-03-31 08:51:36.071463 I | mvcc: finished scheduled compaction at 1625 (took 836.551µs)

==> kernel <==
08:56:17 up 1:21, 0 users, load average: 0.33, 0.36, 0.53
Linux minikube 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [a15a83b0d226] <==
I0331 08:55:48.541814 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:48.542070 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:49.542325 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:49.542511 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:50.543443 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:50.543681 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:51.545228 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:51.545400 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:52.548788 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:52.549108 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:53.550212 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:53.550512 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:54.550920 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:54.559542 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:55.552142 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:55.562253 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:56.552804 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:56.563460 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:57.554372 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:57.564611 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:58.555926 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:58.565912 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:59.557787 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:59.567042 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:00.558500 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:00.567752 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:01.559200 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:01.568257 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:02.560176 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:02.568718 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:03.560969 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:03.569388 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:04.562444 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:04.570431 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:05.563591 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:05.571439 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:06.542265 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:06.551395 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:07.545431 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:07.551901 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:08.546286 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:08.552996 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:09.547546 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:09.553592 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:10.553217 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:10.554171 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:11.554591 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:11.554731 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:12.555210 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:12.555426 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:13.555827 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:13.556101 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:14.556416 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:14.556718 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:15.557116 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:15.557383 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:16.558507 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:16.558968 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:17.559695 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:17.565042 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001

==> kube-controller-manager [c3fe71e5fc3a] <==
I0331 08:32:01.282471 1 controllermanager.go:497] Started "daemonset"
W0331 08:32:01.282653 1 controllermanager.go:489] Skipping "root-ca-cert-publisher"
I0331 08:32:01.738243 1 controllermanager.go:497] Started "horizontalpodautoscaling"
I0331 08:32:01.739200 1 horizontal.go:156] Starting HPA controller
I0331 08:32:01.741221 1 controller_utils.go:1027] Waiting for caches to sync for HPA controller
I0331 08:32:01.989670 1 controllermanager.go:497] Started "tokencleaner"
W0331 08:32:01.990240 1 controllermanager.go:489] Skipping "ttl-after-finished"
E0331 08:32:01.990935 1 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0331 08:32:01.990176 1 tokencleaner.go:116] Starting token cleaner controller
I0331 08:32:01.994933 1 controller_utils.go:1027] Waiting for caches to sync for token_cleaner controller
W0331 08:32:02.083571 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0331 08:32:02.086826 1 controller_utils.go:1034] Caches are synced for bootstrap_signer controller
I0331 08:32:02.087999 1 controller_utils.go:1034] Caches are synced for deployment controller
I0331 08:32:02.089057 1 controller_utils.go:1034] Caches are synced for certificate controller
I0331 08:32:02.092192 1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
I0331 08:32:02.093670 1 controller_utils.go:1034] Caches are synced for endpoint controller
I0331 08:32:02.093757 1 controller_utils.go:1034] Caches are synced for certificate controller
I0331 08:32:02.096554 1 controller_utils.go:1034] Caches are synced for token_cleaner controller
I0331 08:32:02.132764 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"0f9b3570-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"197", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-fb8b8dccf to 2
I0331 08:32:02.135841 1 controller_utils.go:1034] Caches are synced for node controller
I0331 08:32:02.135926 1 range_allocator.go:157] Starting range CIDR allocator
I0331 08:32:02.136016 1 controller_utils.go:1027] Waiting for caches to sync for cidrallocator controller
I0331 08:32:02.139985 1 controller_utils.go:1034] Caches are synced for GC controller
I0331 08:32:02.142975 1 controller_utils.go:1034] Caches are synced for HPA controller
I0331 08:32:02.143886 1 controller_utils.go:1034] Caches are synced for TTL controller
I0331 08:32:02.153627 1 controller_utils.go:1034] Caches are synced for PV protection controller
I0331 08:32:02.156638 1 controller_utils.go:1034] Caches are synced for taint controller
I0331 08:32:02.156788 1 node_lifecycle_controller.go:1159] Initializing eviction metric for zone:
W0331 08:32:02.156892 1 node_lifecycle_controller.go:833] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0331 08:32:02.157068 1 node_lifecycle_controller.go:1059] Controller detected that zone is now in state Normal.
I0331 08:32:02.158108 1 taint_manager.go:198] Starting NoExecuteTaintManager
I0331 08:32:02.160204 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"0cf13fa1-732a-11ea-9f29-02429a45b1b2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0331 08:32:02.170846 1 controller_utils.go:1034] Caches are synced for job controller
I0331 08:32:02.173867 1 log.go:172] [INFO] signed certificate with serial number 348836518710746890614976265293012047567942960152
I0331 08:32:02.190539 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"18426705-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-lbpbz
I0331 08:32:02.221681 1 controller_utils.go:1034] Caches are synced for service account controller
I0331 08:32:02.225773 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"18426705-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-bktjn
I0331 08:32:02.236302 1 controller_utils.go:1034] Caches are synced for cidrallocator controller
I0331 08:32:02.260197 1 controller_utils.go:1034] Caches are synced for namespace controller
I0331 08:32:02.319173 1 range_allocator.go:310] Set node minikube PodCIDR to 10.244.0.0/24
I0331 08:32:02.483880 1 controller_utils.go:1034] Caches are synced for daemon sets controller
I0331 08:32:02.553898 1 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
I0331 08:32:02.561597 1 controller_utils.go:1034] Caches are synced for persistent volume controller
I0331 08:32:02.594321 1 controller_utils.go:1034] Caches are synced for attach detach controller
I0331 08:32:02.623154 1 controller_utils.go:1034] Caches are synced for stateful set controller
I0331 08:32:02.626184 1 controller_utils.go:1034] Caches are synced for expand controller
I0331 08:32:02.641836 1 controller_utils.go:1034] Caches are synced for PVC protection controller
I0331 08:32:02.675653 1 controller_utils.go:1034] Caches are synced for disruption controller
I0331 08:32:02.675749 1 disruption.go:294] Sending events to api server.
I0331 08:32:02.678210 1 controller_utils.go:1034] Caches are synced for ReplicationController controller
I0331 08:32:02.693864 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
I0331 08:32:02.724773 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"0fb881c6-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-m7v6p
I0331 08:32:02.753727 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"109cdd5b-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"240", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-hcl42
I0331 08:32:02.815582 1 controller_utils.go:1034] Caches are synced for garbage collector controller
I0331 08:32:02.815791 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0331 08:32:02.863222 1 controller_utils.go:1034] Caches are synced for resource quota controller
I0331 08:32:02.894120 1 controller_utils.go:1034] Caches are synced for garbage collector controller
E0331 08:32:03.056559 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0331 08:35:06.194335 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"nginx-ingress-controller", UID:"85f7245a-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-ingress-controller-b84556868 to 1
I0331 08:35:06.243687 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-b84556868", UID:"85f8a669-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-ingress-controller-b84556868-kh8n6

==> kube-proxy [31fa7a07f95e] <==
W0331 08:32:06.518547 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0331 08:32:06.672751 1 server_others.go:148] Using iptables Proxier.
I0331 08:32:06.675746 1 server_others.go:178] Tearing down inactive rules.
I0331 08:32:07.027370 1 server.go:555] Version: v1.14.0
I0331 08:32:07.066710 1 conntrack.go:52] Setting nf_conntrack_max to 131072
E0331 08:32:07.067346 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0331 08:32:07.067633 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0331 08:32:07.067763 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0331 08:32:07.068184 1 config.go:202] Starting service config controller
I0331 08:32:07.068371 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0331 08:32:07.089152 1 config.go:102] Starting endpoints config controller
I0331 08:32:07.089722 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0331 08:32:07.195756 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0331 08:32:07.269068 1 controller_utils.go:1034] Caches are synced for service config controller

==> kube-scheduler [636cbc28b02a] <==
I0331 08:31:35.938018 1 serving.go:319] Generated self-signed cert in-memory
W0331 08:31:36.608645 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0331 08:31:36.608726 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0331 08:31:36.608757 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0331 08:31:36.621912 1 server.go:142] Version: v1.14.0
I0331 08:31:36.625207 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0331 08:31:36.638219 1 authorization.go:47] Authorization is disabled
W0331 08:31:36.638287 1 authentication.go:55] Authentication is disabled
I0331 08:31:36.638311 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0331 08:31:36.640459 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0331 08:31:43.052618 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0331 08:31:43.053184 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0331 08:31:43.053690 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0331 08:31:43.055118 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0331 08:31:43.055202 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0331 08:31:43.055360 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0331 08:31:43.055806 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0331 08:31:43.055849 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0331 08:31:43.056810 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0331 08:31:43.070097 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0331 08:31:44.058160 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0331 08:31:44.059737 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0331 08:31:44.059875 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0331 08:31:44.069524 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0331 08:31:44.070192 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0331 08:31:44.073620 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0331 08:31:44.073938 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0331 08:31:44.074342 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0331 08:31:44.080776 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0331 08:31:44.081063 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0331 08:31:45.926301 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0331 08:31:46.026597 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0331 08:31:46.027034 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0331 08:31:46.066937 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Tue 2020-03-31 08:29:37 UTC, end at Tue 2020-03-31 08:56:19 UTC. --
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.308177 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.363793 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.419479 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.481355 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd84f838bb8, ext:999230435, loc:(*time.Location)(0x7ff88e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.543899 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd84f83b36e, ext:999240601, loc:(*time.Location)(0x7ff88e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.627373 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd84f83d361, ext:999248781, loc:(*time.Location)(0x7ff88e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.692428 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd85f75d96f, ext:1266768277, loc:(*time.Location)(0x7ff88e0)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.851375 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd85f75aebd, ext:1266757353, loc:(*time.Location)(0x7ff88e0)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:44 minikube kubelet[1618]: E0331 08:31:44.249636 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd85f75ca73, ext:1266764442, loc:(*time.Location)(0x7ff88e0)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:44 minikube kubelet[1618]: E0331 08:31:44.452700 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a4a335340", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd863f1c940, ext:1341999475, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd863f1c940, ext:1341999475, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:44 minikube kubelet[1618]: E0331 08:31:44.847316 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd86b2d1402, ext:1463325782, loc:(*time.Location)(0x7ff88e0)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:45 minikube kubelet[1618]: E0331 08:31:45.248634 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd86b2eae4a, ext:1463430769, loc:(*time.Location)(0x7ff88e0)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:45 minikube kubelet[1618]: E0331 08:31:45.655875 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd86b33e875, ext:1463773344, loc:(*time.Location)(0x7ff88e0)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:46 minikube kubelet[1618]: E0331 08:31:46.055732 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd893eb4f75, ext:2073139617, loc:(*time.Location)(0x7ff88e0)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:49 minikube kubelet[1618]: E0331 08:31:49.648591 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:31:49 minikube kubelet[1618]: E0331 08:31:49.657552 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:31:59 minikube kubelet[1618]: E0331 08:31:59.692174 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:31:59 minikube kubelet[1618]: E0331 08:31:59.692365 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.343909 1618 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.0.0/24
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.345132 1618 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.345503 1618 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
Mar 31 08:32:02 minikube kubelet[1618]: E0331 08:32:02.399773 1618 reflector.go:126] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 31 08:32:02 minikube kubelet[1618]: E0331 08:32:02.402299 1618 reflector.go:126] object-"kube-system"/"coredns-token-sflpk": Failed to list *v1.Secret: secrets "coredns-token-sflpk" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.651242 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1846dd62-732a-11ea-9f29-02429a45b1b2-config-volume") pod "coredns-fb8b8dccf-lbpbz" (UID: "1846dd62-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.663192 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/184fbeb3-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-bktjn" (UID: "184fbeb3-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.663343 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/12c200e7-732a-11ea-9f29-02429a45b1b2-tmp") pod "storage-provisioner" (UID: "12c200e7-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.663423 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/184fbeb3-732a-11ea-9f29-02429a45b1b2-config-volume") pod "coredns-fb8b8dccf-bktjn" (UID: "184fbeb3-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.666767 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/1846dd62-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-lbpbz" (UID: "1846dd62-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.682574 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-cjc6f" (UniqueName: "kubernetes.io/secret/12c200e7-732a-11ea-9f29-02429a45b1b2-storage-provisioner-token-cjc6f") pod "storage-provisioner" (UID: "12c200e7-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.791486 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/18876401-732a-11ea-9f29-02429a45b1b2-cni-cfg") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.791875 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/18876401-732a-11ea-9f29-02429a45b1b2-lib-modules") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.792173 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-n82c5" (UniqueName: "kubernetes.io/secret/18876401-732a-11ea-9f29-02429a45b1b2-kindnet-token-n82c5") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.792706 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/18876401-732a-11ea-9f29-02429a45b1b2-xtables-lock") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: E0331 08:32:02.798128 1618 reflector.go:126] object-"kube-system"/"kindnet-token-n82c5": Failed to list *v1.Secret: secrets "kindnet-token-n82c5" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.893841 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/18869b9f-732a-11ea-9f29-02429a45b1b2-lib-modules") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.895351 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/18869b9f-732a-11ea-9f29-02429a45b1b2-kube-proxy") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.896545 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/18869b9f-732a-11ea-9f29-02429a45b1b2-xtables-lock") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.900684 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-82nbp" (UniqueName: "kubernetes.io/secret/18869b9f-732a-11ea-9f29-02429a45b1b2-kube-proxy-token-82nbp") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.793791 1618 secret.go:198] Couldn't get secret kube-system/coredns-token-sflpk: couldn't propagate object cache: timed out waiting for the condition
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.793998 1618 nestedpendingoperations.go:267] Operation for ""kubernetes.io/secret/184fbeb3-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk" ("184fbeb3-732a-11ea-9f29-02429a45b1b2")" failed. No retries permitted until 2020-03-31 08:32:04.293967663 +0000 UTC m=+36.055171975 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/184fbeb3-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-bktjn" (UID: "184fbeb3-732a-11ea-9f29-02429a45b1b2") : couldn't propagate object cache: timed out waiting for the condition"
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.794879 1618 secret.go:198] Couldn't get secret kube-system/coredns-token-sflpk: couldn't propagate object cache: timed out waiting for the condition
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.794952 1618 nestedpendingoperations.go:267] Operation for ""kubernetes.io/secret/1846dd62-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk" ("1846dd62-732a-11ea-9f29-02429a45b1b2")" failed. No retries permitted until 2020-03-31 08:32:04.294926895 +0000 UTC m=+36.056131206 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/1846dd62-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-lbpbz" (UID: "1846dd62-732a-11ea-9f29-02429a45b1b2") : couldn't propagate object cache: timed out waiting for the condition"
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.900920 1618 secret.go:198] Couldn't get secret kube-system/kindnet-token-n82c5: couldn't propagate object cache: timed out waiting for the condition
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.901234 1618 nestedpendingoperations.go:267] Operation for ""kubernetes.io/secret/18876401-732a-11ea-9f29-02429a45b1b2-kindnet-token-n82c5" ("18876401-732a-11ea-9f29-02429a45b1b2")" failed. No retries permitted until 2020-03-31 08:32:04.401170675 +0000 UTC m=+36.162375074 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "kindnet-token-n82c5" (UniqueName: "kubernetes.io/secret/18876401-732a-11ea-9f29-02429a45b1b2-kindnet-token-n82c5") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2") : couldn't propagate object cache: timed out waiting for the condition"
Mar 31 08:32:04 minikube kubelet[1618]: W0331 08:32:04.418840 1618 container.go:409] Failed to create summary reader for "/system.slice/run-rfbc88cf5398744519564ad9cbf4ff678.scope": none of the resources are being tracked.
Mar 31 08:32:04 minikube kubelet[1618]: W0331 08:32:04.419588 1618 container.go:409] Failed to create summary reader for "/system.slice/run-r0435686948fa4809aafd2bfdbacf7779.scope": none of the resources are being tracked.
Mar 31 08:32:05 minikube kubelet[1618]: W0331 08:32:05.976174 1618 pod_container_deletor.go:75] Container "fdc9efa64e13c2ce2c3745c444a18be062347bf4c9dd4e17f131c14e020b9101" not found in pod's containers
Mar 31 08:32:06 minikube kubelet[1618]: W0331 08:32:06.837949 1618 pod_container_deletor.go:75] Container "55124d3804fb1e46a3df0165b6a8e99f7b1ccc3fd80da91f0645219a283f7b79" not found in pod's containers
Mar 31 08:32:06 minikube kubelet[1618]: W0331 08:32:06.868003 1618 pod_container_deletor.go:75] Container "a48e9875ea2d71897bfcb6a9d5163006cbc89e4d738c41f651c47396299b93fb" not found in pod's containers
Mar 31 08:32:08 minikube kubelet[1618]: I0331 08:32:08.373210 1618 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
Mar 31 08:32:09 minikube kubelet[1618]: E0331 08:32:09.711731 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:32:09 minikube kubelet[1618]: E0331 08:32:09.711865 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:32:19 minikube kubelet[1618]: E0331 08:32:19.816629 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:32:19 minikube kubelet[1618]: E0331 08:32:19.817086 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:35:06 minikube kubelet[1618]: I0331 08:35:06.390555 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "nginx-ingress-token-6hbxw" (UniqueName: "kubernetes.io/secret/86005fa7-732a-11ea-9f29-02429a45b1b2-nginx-ingress-token-6hbxw") pod "nginx-ingress-controller-b84556868-kh8n6" (UID: "86005fa7-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:35:07 minikube kubelet[1618]: W0331 08:35:07.441583 1618 pod_container_deletor.go:75] Container "0254de39b3801b1cdce25aea2b15a6cf57f9d4c13e50b84459be2a1b197f73aa" not found in pod's containers
Mar 31 08:41:53 minikube kubelet[1618]: E0331 08:41:53.448884 1618 reflector.go:126] object-"default"/"default-token-jp22c": Failed to list *v1.Secret: secrets "default-token-jp22c" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "minikube" and this object
Mar 31 08:41:53 minikube kubelet[1618]: I0331 08:41:53.535630 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jp22c" (UniqueName: "kubernetes.io/secret/78b61bb0-732b-11ea-9f29-02429a45b1b2-default-token-jp22c") pod "web" (UID: "78b61bb0-732b-11ea-9f29-02429a45b1b2")
Mar 31 08:41:55 minikube kubelet[1618]: W0331 08:41:55.682086 1618 pod_container_deletor.go:75] Container "cc3588d4252ea6a8587eecc630d55d513d07e8630a4f8eb3bbffb6ed7c4bc995" not found in pod's containers
Mar 31 08:52:32 minikube kubelet[1618]: W0331 08:52:32.579484 1618 reflector.go:289] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: too old resource version: 325 (1077)

==> storage-provisioner [791695c1a1a8] <==

@tstromberg tstromberg changed the title Ingress not exposed on MacOS docker: Ingress not exposed on MacOS Mar 31, 2020
@tstromberg
Contributor

I suspect something may be missing to forward the port with the docker driver. I don't know if this is a documentation issue or an implementation issue. @medyagh - can you comment?

Do you mind trying to see if it works properly with --driver=hyperkit?
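For example, something roughly like this should tell us quickly (this is just a sketch — it assumes the hyperkit driver is already installed, and the Host header matches the example ingress from this issue):

$ minikube delete
$ minikube start --driver=hyperkit
$ minikube addons enable ingress
$ curl -H "Host: hello-world.info" http://$(minikube ip)/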

@tstromberg tstromberg added addon/ingress co/docker-driver Issues related to kubernetes in container triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Mar 31, 2020
@jkornata
Author

Works just fine with --driver=hyperkit

@medyagh
Member

medyagh commented Apr 2, 2020

The ingress addon is currently not supported with the docker driver on macOS. This is due to the limitation of the docker bridge network on Mac.
There is a workaround that we have implemented for the core minikube tasks such as tunnel and service.

We could add the same workaround for the ingress addon with the docker driver on Mac and Windows.
That said, I will mark this as a bug to fix.

Sorry that you faced this issue. The LEAST we could do is not allow the user to enable this addon with the docker driver on macOS for now, until it is fixed.

@jkornata
I will make a PR to fix this bug
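In the meantime the existing workaround still works for services, for example (using the service name from this issue):

$ minikube service web --url

or, in a separate terminal:

$ minikube tunnel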

@medyagh medyagh added kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Apr 2, 2020
@medyagh
Member

medyagh commented Apr 2, 2020

cc: @josedonizetti

@jkornata
Author

jkornata commented Apr 3, 2020

Thank you @medyagh

@metacubed

@medyagh, could you please re-open this until the defect is fixed?

@maximus1108

@medyagh +1

@Asarew
Contributor

Asarew commented May 19, 2020

This issue is referenced in the CLI output when trying to enable the ingress addon, yet the status is closed? It would probably be better to reopen it, @medyagh.

@afbjorklund
Collaborator

I think the bot heard it wrong; the comment said not to close this bug.

@afbjorklund afbjorklund reopened this May 19, 2020
@oconnelc

oconnelc commented May 31, 2020

I've been trying to enable ingress on Windows 10. When I try, I get the following error:

$ minikube addons enable ingress
* Due to docker networking limitations on windows, ingress addon is not supported for this driver.
Alternatively to use this addon you can use a vm-based driver:

        'minikube start --vm=true'

To track the update on this work in progress feature please check:
https://github.com/kubernetes/minikube/issues/7332

I believe this error message was introduced as part of fix #7393, which redirects to this issue. Is this the correct ticket? If so, why does the ticket only refer to MacOS? If not, what is the correct ticket?

I'm sorry if this comment doesn't have anything to do with this ticket, but I reached a dead end with this error and wanted to make sure I'm tracking it correctly.

@sharifelgamal
Collaborator

Yes, this error message will show up for the docker driver on both MacOS and Windows, since this ticket applies to both. This is still an outstanding bug we need to address.

@medyagh
Member

medyagh commented Jul 5, 2020

@oconnelc have you tried the suggestion that minikube gave?
'minikube start --vm=true'

@medyagh medyagh added this to the v1.13.0-candidate milestone Jul 6, 2020
@astuppad

astuppad commented Jul 18, 2020

This issue still exists. If you want a small workaround:

I suggest you install VirtualBox and run the command
minikube addons enable ingress

If you get the below error on macOS:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E0911 13:34:45.394430 41676 start.go:174] Error starting host: Error
creating host: Error executing step: Creating VM.
: Error setting up host only network on machine start: The host-only
adapter we just created is not visible. This is a well known
VirtualBox bug. You might want to uninstall it and reinstall at least
version 5.0.12 that is is supposed to fix this issue.

Then try the following steps:
System Preferences -> Security & Privacy -> Allow -> allow the software vendor (in this case Oracle)
Restart
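In short, roughly this (assuming VirtualBox is already installed):

$ minikube start --driver=virtualbox
$ minikube addons enable ingress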

@medyagh medyagh modified the milestones: v1.13.0, v1.14.0-candidate Jul 27, 2020
@sharifelgamal
Collaborator

Our next release should be at the end of August.

@nsourov

nsourov commented Sep 3, 2021

Our next release should be at the end of August.

Any update regarding this issue?

@sharifelgamal
Collaborator

Release is underway right now, 1.23.0 will be released today.

@mdhume

mdhume commented Sep 28, 2021

@sharifelgamal @zhan9san We ran into an issue due to this change. We were running K8s 1.17.4 using minikube version 1.23.0.
When we run minikube tunnel we see the following error:

E0927 17:11:25.166351   27424 ssh_tunnel.go:82] error listing ingresses: the server could not find the requested resource

I believe the reason is that in K8s 1.17 Ingress is only present in v1beta1 - https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#ingresslist-v1beta1-networking-k8s-io

The PR tries to list the ingress resources using the v1 apiVersion, which is not present in versions prior to K8s 1.19.
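A quick way to check which API group/version serves Ingress on a given cluster (on 1.17 I would expect ingresses to be listed under networking.k8s.io/v1beta1, on 1.19+ under v1):

$ kubectl api-resources --api-group=networking.k8s.io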

@zhan9san
Contributor

Hi @mdhume
Sorry for the inconvenience.

Would it be possible to upgrade the k8s cluster? Supporting backward compatibility would introduce more logic.

@LibertyBeta

Is minikube tunnel still required on mac?

@sharifelgamal
Collaborator

For the docker driver, yes.

@mdhume

mdhume commented Sep 29, 2021

@zhan9san unfortunately we won't be able to, since that is the version we are currently running. One option could be to revert to the previous behavior, i.e. disable ingress support, if the detected K8s version is prior to 1.19.

@zhan9san
Contributor

How about adding an option like

minikube tunnel --service-only

or something else to set up tunnels for 'service' only?

@mdhume

mdhume commented Sep 30, 2021

@zhan9san that would work too 👍

@zhan9san
Contributor

@sharifelgamal

To follow the convention of the existing flags, I'd like to implement the following commands.

minikube tunnel --ingress would create tunnels for both service and ingress.

while

minikube tunnel would create tunnels for services only.
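Roughly, the proposed usage (not implemented yet) would look like:

$ minikube tunnel            # tunnels for services only
$ minikube tunnel --ingress  # tunnels for services and ingress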

But this would have an impact on ingress for non-Mac systems.

Do you have any concern?

@mnahinkhan

What helped me:

minikube start --driver=virtualbox

(Since hyperkit has issues accessing the internet for me)

bradbeck added a commit to buildsec/frsca that referenced this issue Feb 17, 2022
The referenced issue (kubernetes/minikube#7332)
appears to have been resolved by kubernetes/minikube#12089
kodiakhq bot added a commit to buildsec/frsca that referenced this issue Feb 28, 2022
The referenced issue (kubernetes/minikube#7332)
appears to have been resolved by kubernetes/minikube#12089

@ProbStub

ProbStub commented Jun 16, 2022

Here is a workaround. It looks ugly but it works.

This is (at the time of posting) still the only way to make it work on Apple Silicon (M1, 2020) using:

  • Darwin 21.3.0 (MacOS 12.2.1)
  • minikube 1.25.2
  • docker 20.10.16

Is there a specific reason the workaround cannot be incorporated into master?

To date, the Apple Silicon virtualization drivers are still limited, so working with docker is rather useful, and this workaround literally saved my day.

@nikunjg

nikunjg commented Jun 17, 2022

What is the workaround for using ingress with minikube on the docker driver on M1 macOS?

@ProbStub

What is the workaround for using ingress with minikube on the docker driver on M1 macOS?

The one described by @zhan9san above

@Yesyoor

Yesyoor commented Jun 20, 2022

Oh, I have been trying to expose a service's NodePort to the host machine running minikube (macOS), and now I see this open issue. Well, is there a workaround? Reaching the minikube cluster with a client from outside of it really is the most basic thing to try, isn't it? I really wonder how this can be, but maybe I am missing the point of why someone would set up a cluster without having access to it.

@michelesr
Contributor

michelesr commented Jun 23, 2022

I was able to get ingress and ingress-dns exposed properly on minikube with the docker driver by using docker-mac-net-connect
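For reference, installation is via Homebrew; as I recall from the project's README it is roughly the following (check the README for the exact tap name):

$ brew install chipmk/tap/docker-mac-net-connect
$ sudo brew services start chipmk/tap/docker-mac-net-connect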

@Yesyoor

Yesyoor commented Jun 23, 2022

@michelesr I am not using ingress but a regular NodePort. It is only possible using the VirtualBox driver on Intel-based Macs.

@michelesr
Contributor

@michelesr I am not using ingress but a regular NodePort. It is only possible using the VirtualBox driver on Intel-based Macs.

That would work with the tool I linked. It basically allows you to reach docker containers using their IP addresses, just like you would on a Linux machine, and so it makes the minikube IP reachable from the host and your NodePorts accessible.
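For example, once it is running you should be able to do something like this from the host (the NodePort 30080 is just an example):

$ minikube ip
$ curl http://$(minikube ip):30080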

@Yesyoor

Yesyoor commented Jun 24, 2022

@michelesr I tried it out already and unfortunately it didn't work for me either. Still, thank you very much for trying to help.

@rahil-p
Contributor

rahil-p commented Jul 18, 2022

I was able to get ingress and ingress-dns exposed properly on minikube with docker driver by using docker-mac-net-connect

@michelesr Thanks for sharing - that tool is incredibly useful. It's the only way I've been able to get ingress-dns to work on a Mac with an ARM64 chip.

@flibustier7seas

If this issue is closed, why does the documentation say that ingress doesn't work for Docker on Windows?

The ingress, and ingress-dns addons are currently only supported on Linux. See #7332

@mtx2d

mtx2d commented Aug 17, 2024


Hi @rahil-p, do you mind sharing the steps you took?
