add FAQ to website how to expose custom ports on docker driver #6584

Open
irizzant opened this issue Feb 11, 2020 · 17 comments
Labels
addon/ingress, help wanted, kind/documentation, kind/feature, lifecycle/frozen, priority/backlog

Comments

@irizzant

Hello,
as far as I can see, after enabling the Nginx ingress controller addon there is no service created for the nginx deployment in the kube-system namespace, nor is anything listening on ports 80/443 in the minikube VM, so workloads are unreachable from outside the cluster.

The exact command to reproduce the issue:
Start minikube:
minikube start \
  --memory 12000 \
  --cpus 8 \
  --bootstrapper=kubeadm \
  --extra-config=kubelet.authentication-token-webhook=true \
  --extra-config=kubelet.authorization-mode=Webhook \
  --extra-config=scheduler.address=0.0.0.0 \
  --extra-config=controller-manager.address=0.0.0.0 \
  --extra-config=apiserver.authorization-mode=Node,RBAC \
  --insecure-registry=maven-repo.sdb.it:18081 \
  --insecure-registry=maven-repo.sdb.it:18080 \
  --network-plugin=cni \
  --enable-default-cni

Enable nginx controller addon:
minikube addons enable ingress

Inspect services:
kubectl get svc -n kube-system

You'll see there are no services for Nginx.

The full output of the command that failed:


kubectl get svc -n kube-system

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
kube-dns                   ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP                       21m
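
Since the addon creates no service of its own, one possible workaround (a sketch, not part of the original report) is to create a NodePort service for the controller yourself. The selector labels below are an assumption; verify them against `kubectl get pods -n kube-system --show-labels` before applying.

```yaml
# Hypothetical NodePort service for the addon's ingress controller pod.
# The selector labels are assumed - check your actual pod labels first.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx-ingress-controller
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```

After `kubectl apply -f`, the controller should be reachable at `$(minikube ip):30080` / `:30443`. Note that NodePorts must fall in the apiserver's default 30000-32767 range, so you cannot pin them to 80/443 directly without changing `--service-node-port-range`.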

From inside minikube VM:
minikube ssh
sudo netstat -ln | grep 80

tcp        0      0 192.168.39.192:2380     0.0.0.0:*               LISTEN      
unix  2      [ ACC ]     STREAM     LISTENING      32839 @/containerd-shim/moby/217fb70dbe0cda110904a352a83802730b3fa6b6197237eff0650a7da28c5a0c/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING      38020 @/containerd-shim/moby/a1f8eab0a0cf9df09703fa879012240144b8b1d07c9bdbeb22347df20925d9e2/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING      37336 @/containerd-shim/moby/4367c59cd396a0ba23b3c97ebd496c28018008487196efd65b076be02921cd26/shim.sock@
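
For completeness, once the controller does accept traffic on the VM, a workload such as the nginx deployment in these logs would be published with an Ingress along these lines. This is illustrative only; the host, service name, and port are assumptions, not taken from this report.

```yaml
# Illustrative Ingress for an nginx service in the default namespace.
# networking.k8s.io/v1beta1 is the API group available on k8s 1.17.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  namespace: default
spec:
  rules:
    - host: nginx.example.test
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx
              servicePort: 80
```

Pointing `nginx.example.test` at the output of `minikube ip` (e.g. via /etc/hosts) would then let you curl the workload, but only once something on the VM is actually listening on the exposed ports.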

The output of the minikube logs command:


==> Docker <==
-- Logs begin at Tue 2020-02-11 12:07:39 UTC, end at Tue 2020-02-11 12:32:58 UTC. --
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.439778469Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.439893802Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.439917307Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.439934085Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.439957769Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.439975320Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.440530491Z" level=info msg="Loading containers: start."
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.638041321Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.742529699Z" level=info msg="Loading containers: done."
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.870352526Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
Feb 11 12:07:47 minikube dockerd[2184]: time="2020-02-11T12:07:47.870839932Z" level=info msg="Daemon has completed initialization"
Feb 11 12:07:48 minikube dockerd[2184]: time="2020-02-11T12:07:48.005483715Z" level=info msg="API listen on /var/run/docker.sock"
Feb 11 12:07:48 minikube dockerd[2184]: time="2020-02-11T12:07:48.005539415Z" level=info msg="API listen on [::]:2376"
Feb 11 12:07:48 minikube systemd[1]: Started Docker Application Container Engine.
Feb 11 12:08:27 minikube dockerd[2184]: time="2020-02-11T12:08:27.166542705Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6a28f2f79518c53ef99be11a9e1dce56c4e313b66e6ad0a98b3fda95e9b13133/shim.sock" debug=false pid=3910
Feb 11 12:08:27 minikube dockerd[2184]: time="2020-02-11T12:08:27.170443693Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/217fb70dbe0cda110904a352a83802730b3fa6b6197237eff0650a7da28c5a0c/shim.sock" debug=false pid=3911
Feb 11 12:08:27 minikube dockerd[2184]: time="2020-02-11T12:08:27.233563865Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b0f9bab79487c0a0e16bb40da26e1d843cd767a1ad5d07e8f1e8796f05fa1e05/shim.sock" debug=false pid=3941
Feb 11 12:08:27 minikube dockerd[2184]: time="2020-02-11T12:08:27.238481115Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/191fea29c8b45a35446aaab168d6e43965492a8dca93d3f8f739817e0684a748/shim.sock" debug=false pid=3943
Feb 11 12:08:27 minikube dockerd[2184]: time="2020-02-11T12:08:27.506050793Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5190c4c17a79e1357048affb0e1385939084a813d7f94085c5a3095ead2be4ee/shim.sock" debug=false pid=4102
Feb 11 12:08:27 minikube dockerd[2184]: time="2020-02-11T12:08:27.508858614Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a17dd36fab79374188454889c50dbcc6d6f5044df6f8e9d6d5231200a51ab693/shim.sock" debug=false pid=4109
Feb 11 12:08:27 minikube dockerd[2184]: time="2020-02-11T12:08:27.527743655Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/506e2a827b24fd0bfefda5bc90efc2b75597efff423f300a468e4c952a5c1e9c/shim.sock" debug=false pid=4136
Feb 11 12:08:27 minikube dockerd[2184]: time="2020-02-11T12:08:27.545655349Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bcd358df4aeb2b172bd56d75b93ab939c790e707aef82f4a1f193f86e1b678c1/shim.sock" debug=false pid=4154
Feb 11 12:08:41 minikube dockerd[2184]: time="2020-02-11T12:08:41.161242608Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a1f8eab0a0cf9df09703fa879012240144b8b1d07c9bdbeb22347df20925d9e2/shim.sock" debug=false pid=4921
Feb 11 12:08:41 minikube dockerd[2184]: time="2020-02-11T12:08:41.563443592Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/324e1bc9bc95e864f02537e23eb2ef0b276405400bce2ccdee2a1d00a85534fd/shim.sock" debug=false pid=4971
Feb 11 12:08:41 minikube dockerd[2184]: time="2020-02-11T12:08:41.620688279Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4367c59cd396a0ba23b3c97ebd496c28018008487196efd65b076be02921cd26/shim.sock" debug=false pid=4998
Feb 11 12:08:41 minikube dockerd[2184]: time="2020-02-11T12:08:41.871627276Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6554e012bc5ef1841bae7581ceae4df757af471b794287d363fd1536b2b4b1a0/shim.sock" debug=false pid=5078
Feb 11 12:08:42 minikube dockerd[2184]: time="2020-02-11T12:08:42.838300187Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/866ec33dbe463436a35a0ee3dae8c5a9a654f937385dbe65938a842deb2c34bb/shim.sock" debug=false pid=5185
Feb 11 12:08:42 minikube dockerd[2184]: time="2020-02-11T12:08:42.845013307Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/023c44164191b39a86bd490ca1ae47f932713aba91956372af64541b4c051648/shim.sock" debug=false pid=5195
Feb 11 12:08:43 minikube dockerd[2184]: time="2020-02-11T12:08:43.165998715Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/905375fce66f9a1220d56c0d77035110010276238d884090c9513295d8284090/shim.sock" debug=false pid=5379
Feb 11 12:08:43 minikube dockerd[2184]: time="2020-02-11T12:08:43.169955089Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9de3d490fcdd4bae8ea84d74eb771c6c332f90e6f1bb9533f0720ad201c6cead/shim.sock" debug=false pid=5391
Feb 11 12:09:11 minikube dockerd[2184]: time="2020-02-11T12:09:11.929783921Z" level=info msg="shim reaped" id=324e1bc9bc95e864f02537e23eb2ef0b276405400bce2ccdee2a1d00a85534fd
Feb 11 12:09:11 minikube dockerd[2184]: time="2020-02-11T12:09:11.940017657Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 11 12:09:11 minikube dockerd[2184]: time="2020-02-11T12:09:11.940237371Z" level=warning msg="324e1bc9bc95e864f02537e23eb2ef0b276405400bce2ccdee2a1d00a85534fd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/324e1bc9bc95e864f02537e23eb2ef0b276405400bce2ccdee2a1d00a85534fd/mounts/shm, flags: 0x2: no such file or directory"
Feb 11 12:09:12 minikube dockerd[2184]: time="2020-02-11T12:09:12.271599383Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/46616b07c1484ba569494ec16a3be022ae64127076fbdaf4a0fec5f5b6642c02/shim.sock" debug=false pid=5717
Feb 11 12:09:13 minikube dockerd[2184]: time="2020-02-11T12:09:13.096587574Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bfe7a7ab7e5aa6af7cf5cac5e2d5d95639b448b57ea4d1f67a88c304ad647e27/shim.sock" debug=false pid=5795
Feb 11 12:09:38 minikube dockerd[2184]: time="2020-02-11T12:09:38.434141681Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ccdd1e5dfe53319a90b761b53268eca9f13f773eb45ed8dd5d22e61fae827867/shim.sock" debug=false pid=6099
Feb 11 12:09:43 minikube dockerd[2184]: time="2020-02-11T12:09:43.190475335Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
Feb 11 12:09:43 minikube dockerd[2184]: time="2020-02-11T12:09:43.828980735Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1ebfefcb875b52f77e5f6bcba715bda348a39582a5391373e84fc5850e87af87/shim.sock" debug=false pid=6308
Feb 11 12:09:44 minikube dockerd[2184]: time="2020-02-11T12:09:44.515704201Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Feb 11 12:09:44 minikube dockerd[2184]: time="2020-02-11T12:09:44.515938638Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
Feb 11 12:09:57 minikube dockerd[2184]: time="2020-02-11T12:09:57.907028386Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
Feb 11 12:09:59 minikube dockerd[2184]: time="2020-02-11T12:09:59.256862651Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Feb 11 12:09:59 minikube dockerd[2184]: time="2020-02-11T12:09:59.256976353Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
Feb 11 12:10:23 minikube dockerd[2184]: time="2020-02-11T12:10:23.659074961Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
Feb 11 12:10:24 minikube dockerd[2184]: time="2020-02-11T12:10:24.984771513Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Feb 11 12:10:24 minikube dockerd[2184]: time="2020-02-11T12:10:24.984831841Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
Feb 11 12:11:19 minikube dockerd[2184]: time="2020-02-11T12:11:19.578515402Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown"
Feb 11 12:11:20 minikube dockerd[2184]: time="2020-02-11T12:11:20.713436077Z" level=info msg="shim reaped" id=ccdd1e5dfe53319a90b761b53268eca9f13f773eb45ed8dd5d22e61fae827867
Feb 11 12:11:20 minikube dockerd[2184]: time="2020-02-11T12:11:20.723759737Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 11 12:11:20 minikube dockerd[2184]: time="2020-02-11T12:11:20.933655937Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Feb 11 12:11:20 minikube dockerd[2184]: time="2020-02-11T12:11:20.933769458Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
Feb 11 12:11:29 minikube dockerd[2184]: time="2020-02-11T12:11:29.572214936Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dca71f4df68a4709587186bc0ad6e6bb72019627b017683e0692385028ff6fab/shim.sock" debug=false pid=7237
Feb 11 12:11:39 minikube dockerd[2184]: time="2020-02-11T12:11:39.637877577Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/244012be2bf7dd70d0726e2870c2580bb9f3a8f67d7938dd415a594b690748c8/shim.sock" debug=false pid=7422
Feb 11 12:11:48 minikube dockerd[2184]: time="2020-02-11T12:11:48.367868383Z" level=info msg="shim reaped" id=244012be2bf7dd70d0726e2870c2580bb9f3a8f67d7938dd415a594b690748c8
Feb 11 12:11:48 minikube dockerd[2184]: time="2020-02-11T12:11:48.378137957Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 11 12:11:48 minikube dockerd[2184]: time="2020-02-11T12:11:48.378251070Z" level=warning msg="244012be2bf7dd70d0726e2870c2580bb9f3a8f67d7938dd415a594b690748c8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/244012be2bf7dd70d0726e2870c2580bb9f3a8f67d7938dd415a594b690748c8/mounts/shm, flags: 0x2: no such file or directory"
Feb 11 12:11:48 minikube dockerd[2184]: time="2020-02-11T12:11:48.880153953Z" level=info msg="shim reaped" id=dca71f4df68a4709587186bc0ad6e6bb72019627b017683e0692385028ff6fab
Feb 11 12:11:48 minikube dockerd[2184]: time="2020-02-11T12:11:48.890511419Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 11 12:11:59 minikube dockerd[2184]: time="2020-02-11T12:11:59.987506155Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c11c9fc924552fa8346d2f717e8ab1bdec687098b01aaa6e5c2a697eadbb4395/shim.sock" debug=false pid=7660
Feb 11 12:12:00 minikube dockerd[2184]: time="2020-02-11T12:12:00.666304757Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/81df4af5918bf5dbfb5030bad2b404857c6b39609b54863a7a5309e8f1eb0aa1/shim.sock" debug=false pid=7758

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
81df4af5918bf nginx@sha256:62f787b94e5faddb79f96c84ac0877aaf28fb325bfc3601b9c0934d4c107ba94 20 minutes ago Running nginx 0 c11c9fc924552
1ebfefcb875b5 quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7 23 minutes ago Running nginx-ingress-controller 0 bfe7a7ab7e5aa
46616b07c1484 4689081edb103 23 minutes ago Running storage-provisioner 1 a1f8eab0a0cf9
905375fce66f9 70f311871ae12 24 minutes ago Running coredns 0 866ec33dbe463
9de3d490fcdd4 70f311871ae12 24 minutes ago Running coredns 0 023c44164191b
6554e012bc5ef cba2a99699bdf 24 minutes ago Running kube-proxy 0 4367c59cd396a
324e1bc9bc95e 4689081edb103 24 minutes ago Exited storage-provisioner 0 a1f8eab0a0cf9
bcd358df4aeb2 da5fd66c4068c 24 minutes ago Running kube-controller-manager 0 b0f9bab79487c
506e2a827b24f f52d4c527ef2f 24 minutes ago Running kube-scheduler 0 191fea29c8b45
5190c4c17a79e 303ce5db0e90d 24 minutes ago Running etcd 0 6a28f2f79518c
a17dd36fab793 41ef50a5f06a7 24 minutes ago Running kube-apiserver 0 217fb70dbe0cd

==> coredns [905375fce66f] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> coredns [9de3d490fcdd] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> dmesg <==
[Feb11 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.031248] #2
[ +0.001011] #3
[ +0.001003] #4
[ +0.000990] #5
[ +0.000995] #6
[ +0.000999] #7
[ +0.011095] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +2.017695] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.953443] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.003654] systemd-fstab-generator[1229]: Ignoring "noauto" for root device
[ +0.002714] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.766640] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +1.561154] vboxguest: loading out-of-tree module taints kernel.
[ +0.007306] vboxguest: PCI device not found, probably running on physical hardware.
[ +4.733128] systemd-fstab-generator[2151]: Ignoring "noauto" for root device
[ +0.258335] systemd-fstab-generator[2168]: Ignoring "noauto" for root device
[Feb11 12:08] systemd-fstab-generator[3110]: Ignoring "noauto" for root device
[ +0.584080] systemd-fstab-generator[3358]: Ignoring "noauto" for root device
[ +5.668565] kauditd_printk_skb: 65 callbacks suppressed
[ +10.058989] systemd-fstab-generator[4604]: Ignoring "noauto" for root device
[ +8.980186] kauditd_printk_skb: 32 callbacks suppressed
[ +5.493749] kauditd_printk_skb: 74 callbacks suppressed
[Feb11 12:09] kauditd_printk_skb: 5 callbacks suppressed
[ +3.209910] NFSD: Unable to end grace period: -110
[Feb11 12:11] kauditd_printk_skb: 5 callbacks suppressed
[ +27.901342] kauditd_printk_skb: 5 callbacks suppressed
[ +11.541434] kauditd_printk_skb: 2 callbacks suppressed
[Feb11 12:12] kauditd_printk_skb: 5 callbacks suppressed
[Feb11 12:14] kauditd_printk_skb: 2 callbacks suppressed
[Feb11 12:16] kauditd_printk_skb: 2 callbacks suppressed

==> kernel <==
12:32:58 up 25 min, 0 users, load average: 0.35, 0.44, 0.41
Linux minikube 4.19.88 #1 SMP Tue Feb 4 22:25:03 PST 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.8"

==> kube-apiserver [a17dd36fab79] <==
I0211 12:08:29.044018 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0211 12:08:30.263442 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0211 12:08:30.263516 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0211 12:08:30.263630 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0211 12:08:30.263923 1 secure_serving.go:178] Serving securely on [::]:8443
I0211 12:08:30.264035 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0211 12:08:30.264173 1 autoregister_controller.go:140] Starting autoregister controller
I0211 12:08:30.264195 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0211 12:08:30.264232 1 available_controller.go:386] Starting AvailableConditionController
I0211 12:08:30.264248 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0211 12:08:30.264263 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0211 12:08:30.264281 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0211 12:08:30.264458 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0211 12:08:30.264873 1 crd_finalizer.go:263] Starting CRDFinalizer
I0211 12:08:30.266571 1 controller.go:81] Starting OpenAPI AggregationController
I0211 12:08:30.264475 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0211 12:08:30.268386 1 controller.go:85] Starting OpenAPI controller
I0211 12:08:30.268679 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0211 12:08:30.269013 1 naming_controller.go:288] Starting NamingConditionController
I0211 12:08:30.269362 1 establishing_controller.go:73] Starting EstablishingController
I0211 12:08:30.269654 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0211 12:08:30.269882 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0211 12:08:30.276952 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0211 12:08:30.277020 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0211 12:08:30.277176 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0211 12:08:30.277304 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E0211 12:08:30.280701 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.192, ResourceVersion: 0, AdditionalErrorMsg:
I0211 12:08:30.364404 1 cache.go:39] Caches are synced for autoregister controller
I0211 12:08:30.364429 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0211 12:08:30.364414 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0211 12:08:30.367765 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I0211 12:08:30.377176 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0211 12:08:31.263755 1 controller.go:107] OpenAPI AggregationController: Processing item
I0211 12:08:31.263864 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0211 12:08:31.263904 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0211 12:08:31.278789 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0211 12:08:31.309439 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0211 12:08:31.309475 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0211 12:08:31.819265 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0211 12:08:31.846210 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0211 12:08:31.972507 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.192]
I0211 12:08:31.973510 1 controller.go:606] quota admission added evaluator for: endpoints
I0211 12:08:32.446579 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0211 12:08:33.102922 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0211 12:08:33.113816 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0211 12:08:33.378593 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0211 12:08:40.560674 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0211 12:08:41.075717 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0211 12:09:38.287967 1 trace.go:116] Trace[485547310]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.2 (linux/amd64) kubernetes/59603c6/leader-election,client:127.0.0.1 (started: 2020-02-11 12:09:37.549609892 +0000 UTC m=+69.927163870) (total time: 738.326689ms):
Trace[485547310]: [738.291398ms] [738.271979ms] About to write a response
I0211 12:09:38.287967 1 trace.go:116] Trace[1086596218]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.2 (linux/amd64) kubernetes/59603c6/leader-election,client:127.0.0.1 (started: 2020-02-11 12:09:37.499918122 +0000 UTC m=+69.877472100) (total time: 788.016687ms):
Trace[1086596218]: [787.976581ms] [787.966828ms] About to write a response
I0211 12:09:43.576912 1 trace.go:116] Trace[1332019903]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.2 (linux/amd64) kubernetes/59603c6/leader-election,client:127.0.0.1 (started: 2020-02-11 12:09:42.399579459 +0000 UTC m=+74.777133523) (total time: 1.17723569s):
Trace[1332019903]: [1.177121567s] [1.177077019s] About to write a response
I0211 12:09:43.579465 1 trace.go:116] Trace[1981024970]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.2 (linux/amd64) kubernetes/59603c6/leader-election,client:127.0.0.1 (started: 2020-02-11 12:09:42.375856873 +0000 UTC m=+74.753410939) (total time: 1.203537707s):
Trace[1981024970]: [1.20344492s] [1.203405951s] About to write a response
I0211 12:14:03.242657 1 controller.go:606] quota admission added evaluator for: ingresses.extensions
I0211 12:15:23.595545 1 trace.go:116] Trace[2083614210]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-02-11 12:15:21.988485413 +0000 UTC m=+414.366039482) (total time: 1.606980128s):
Trace[2083614210]: [1.606926555s] [1.601713947s] Transaction committed
E0211 12:30:57.624077 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-controller-manager [bcd358df4aeb] <==
I0211 12:08:39.993075 1 controllermanager.go:533] Started "replicationcontroller"
I0211 12:08:39.993441 1 replica_set.go:180] Starting replicationcontroller controller
I0211 12:08:39.993609 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
I0211 12:08:40.243370 1 controllermanager.go:533] Started "serviceaccount"
I0211 12:08:40.243594 1 serviceaccounts_controller.go:116] Starting service account controller
I0211 12:08:40.243620 1 shared_informer.go:197] Waiting for caches to sync for service account
I0211 12:08:40.493323 1 controllermanager.go:533] Started "tokencleaner"
I0211 12:08:40.493425 1 tokencleaner.go:117] Starting token cleaner controller
I0211 12:08:40.493457 1 shared_informer.go:197] Waiting for caches to sync for token_cleaner
I0211 12:08:40.493475 1 shared_informer.go:204] Caches are synced for token_cleaner
W0211 12:08:40.493427 1 controllermanager.go:525] Skipping "nodeipam"
I0211 12:08:40.494347 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0211 12:08:40.508408 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0211 12:08:40.515826 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0211 12:08:40.539290 1 shared_informer.go:204] Caches are synced for expand
I0211 12:08:40.543145 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0211 12:08:40.546065 1 shared_informer.go:204] Caches are synced for PVC protection
I0211 12:08:40.546558 1 shared_informer.go:204] Caches are synced for deployment
I0211 12:08:40.548957 1 shared_informer.go:204] Caches are synced for endpoint
I0211 12:08:40.567253 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"826404d3-a618-4a69-b0af-9fa604d71666", APIVersion:"apps/v1", ResourceVersion:"178", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I0211 12:08:40.581286 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"6eaea2a3-ec0b-445f-a209-b328141e6b6a", APIVersion:"apps/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-rl6mp
I0211 12:08:40.589362 1 shared_informer.go:204] Caches are synced for HPA
I0211 12:08:40.591640 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0211 12:08:40.592986 1 shared_informer.go:204] Caches are synced for TTL
I0211 12:08:40.593739 1 shared_informer.go:204] Caches are synced for PV protection
I0211 12:08:40.593838 1 shared_informer.go:204] Caches are synced for GC
I0211 12:08:40.595002 1 shared_informer.go:204] Caches are synced for persistent volume
I0211 12:08:40.599238 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"6eaea2a3-ec0b-445f-a209-b328141e6b6a", APIVersion:"apps/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-62mf9
I0211 12:08:40.610325 1 shared_informer.go:204] Caches are synced for job
I0211 12:08:40.645024 1 shared_informer.go:204] Caches are synced for taint
I0211 12:08:40.645138 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0211 12:08:40.645198 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0211 12:08:40.645227 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0211 12:08:40.645199 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0211 12:08:40.645386 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"6e857b16-46b1-4c0b-baba-9842c53f9baf", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0211 12:08:40.743067 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0211 12:08:40.808840 1 shared_informer.go:204] Caches are synced for namespace
I0211 12:08:40.844321 1 shared_informer.go:204] Caches are synced for service account
I0211 12:08:40.943373 1 shared_informer.go:204] Caches are synced for disruption
I0211 12:08:40.943450 1 disruption.go:338] Sending events to api server.
I0211 12:08:40.951971 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I0211 12:08:40.993647 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I0211 12:08:40.994041 1 shared_informer.go:204] Caches are synced for ReplicationController
I0211 12:08:41.052252 1 shared_informer.go:204] Caches are synced for resource quota
I0211 12:08:41.066704 1 shared_informer.go:204] Caches are synced for daemon sets
I0211 12:08:41.093296 1 shared_informer.go:204] Caches are synced for stateful set
I0211 12:08:41.094983 1 shared_informer.go:204] Caches are synced for resource quota
I0211 12:08:41.098486 1 shared_informer.go:204] Caches are synced for garbage collector
I0211 12:08:41.098581 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0211 12:08:41.106072 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"7cf866c9-341a-4d2e-97db-c6fffbd32fa2", APIVersion:"apps/v1", ResourceVersion:"183", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-g8j8k
I0211 12:08:41.110166 1 shared_informer.go:204] Caches are synced for garbage collector
I0211 12:08:41.146532 1 shared_informer.go:204] Caches are synced for attach detach
I0211 12:09:12.578608 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"nginx-ingress-controller", UID:"d43d5d6e-5322-4080-8bc1-a3964e698ef2", APIVersion:"apps/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-ingress-controller-6fc5bcc8c9 to 1
I0211 12:09:12.586702 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6fc5bcc8c9", UID:"e59e0472-0172-4e93-9fae-0f36d4a298ce", APIVersion:"apps/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-ingress-controller-6fc5bcc8c9-s98l7
I0211 12:09:35.802368 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"nginx", UID:"5b6e8070-9133-49fb-8bfe-64a6b752d79b", APIVersion:"apps/v1", ResourceVersion:"548", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6d49bdb944 to 1
I0211 12:09:35.815925 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-6d49bdb944", UID:"9a6e6803-1004-4f54-92a4-3608e3546a69", APIVersion:"apps/v1", ResourceVersion:"549", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6d49bdb944-x9h4k
I0211 12:11:28.856702 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"nginx", UID:"3a7f9034-9d1a-4063-a769-226aa7200cf6", APIVersion:"apps/v1", ResourceVersion:"851", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9ff8f9b57 to 1
I0211 12:11:28.878153 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-9ff8f9b57", UID:"b1a6c887-6d30-422d-99b1-8d05b36b6dff", APIVersion:"apps/v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9ff8f9b57-6s75w
I0211 12:11:59.317561 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"nginx", UID:"f185f087-a812-4250-9c93-5e6a23544f39", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6db489d4b7 to 1
I0211 12:11:59.339394 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-6db489d4b7", UID:"c17f667d-5e4d-46da-a274-d18f3efbc817", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6db489d4b7-hcx56

==> kube-proxy [6554e012bc5e] <==
W0211 12:08:42.002173 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0211 12:08:42.007226 1 node.go:135] Successfully retrieved node IP: 192.168.39.192
I0211 12:08:42.007255 1 server_others.go:145] Using iptables Proxier.
W0211 12:08:42.007356 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0211 12:08:42.007515 1 server.go:571] Version: v1.17.2
I0211 12:08:42.007811 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0211 12:08:42.007863 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0211 12:08:42.007904 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0211 12:08:42.008046 1 config.go:313] Starting service config controller
I0211 12:08:42.008063 1 shared_informer.go:197] Waiting for caches to sync for service config
I0211 12:08:42.008145 1 config.go:131] Starting endpoints config controller
I0211 12:08:42.008154 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0211 12:08:42.108599 1 shared_informer.go:204] Caches are synced for service config
I0211 12:08:42.108982 1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-scheduler [506e2a827b24] <==
I0211 12:08:28.131898 1 serving.go:312] Generated self-signed cert in-memory
W0211 12:08:28.444867 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0211 12:08:28.444913 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0211 12:08:30.310724 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0211 12:08:30.310753 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0211 12:08:30.310760 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W0211 12:08:30.310767 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W0211 12:08:30.319266 1 authorization.go:47] Authorization is disabled
W0211 12:08:30.319286 1 authentication.go:92] Authentication is disabled
I0211 12:08:30.319292 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0211 12:08:30.321347 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0211 12:08:30.321389 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0211 12:08:30.321551 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0211 12:08:30.321610 1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0211 12:08:30.322403 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0211 12:08:30.322620 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0211 12:08:30.322811 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0211 12:08:30.322912 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0211 12:08:30.322931 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0211 12:08:30.323149 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0211 12:08:30.323197 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0211 12:08:30.323218 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0211 12:08:30.323278 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0211 12:08:30.323312 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0211 12:08:30.323150 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0211 12:08:30.323578 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0211 12:08:31.325996 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0211 12:08:31.326202 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0211 12:08:31.328403 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0211 12:08:31.328807 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0211 12:08:31.329511 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0211 12:08:31.332566 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0211 12:08:31.333055 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0211 12:08:31.334421 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0211 12:08:31.335467 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0211 12:08:31.336050 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0211 12:08:31.337246 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0211 12:08:31.337910 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0211 12:08:32.422045 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0211 12:08:32.423272 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0211 12:08:32.451404 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Tue 2020-02-11 12:07:39 UTC, end at Tue 2020-02-11 12:32:58 UTC. --
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.560607 4620 policy_none.go:43] [cpumanager] none policy: Start
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.561742 4620 plugin_manager.go:114] Starting Kubelet Plugin Manager
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.802676 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/dd452b3c05b1ae46b3d84e5520001699-etcd-data") pod "etcd-minikube" (UID: "dd452b3c05b1ae46b3d84e5520001699")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.803281 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/5793c1919022d74d2bfdc548699e9655-ca-certs") pod "kube-apiserver-minikube" (UID: "5793c1919022d74d2bfdc548699e9655")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.803713 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/5793c1919022d74d2bfdc548699e9655-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "5793c1919022d74d2bfdc548699e9655")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.804019 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-ca-certs") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.804411 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-kubeconfig") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.804860 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.805198 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/5793c1919022d74d2bfdc548699e9655-k8s-certs") pod "kube-apiserver-minikube" (UID: "5793c1919022d74d2bfdc548699e9655")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.805462 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.805723 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-k8s-certs") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.805972 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6f8ac8bb34ead7a44fc149bb6b78615a-kubeconfig") pod "kube-scheduler-minikube" (UID: "6f8ac8bb34ead7a44fc149bb6b78615a")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.806277 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/dd452b3c05b1ae46b3d84e5520001699-etcd-certs") pod "etcd-minikube" (UID: "dd452b3c05b1ae46b3d84e5520001699")
Feb 11 12:08:36 minikube kubelet[4620]: I0211 12:08:36.806468 4620 reconciler.go:156] Reconciler: start to sync state
Feb 11 12:08:40 minikube kubelet[4620]: I0211 12:08:40.728156 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/d7119315-52d3-47ed-a92a-bc041815ca39-tmp") pod "storage-provisioner" (UID: "d7119315-52d3-47ed-a92a-bc041815ca39")
Feb 11 12:08:40 minikube kubelet[4620]: I0211 12:08:40.729666 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-dtknk" (UniqueName: "kubernetes.io/secret/d7119315-52d3-47ed-a92a-bc041815ca39-storage-provisioner-token-dtknk") pod "storage-provisioner" (UID: "d7119315-52d3-47ed-a92a-bc041815ca39")
Feb 11 12:08:41 minikube kubelet[4620]: I0211 12:08:41.235115 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aaf21fb2-17d9-479a-a531-1635e11b127a-lib-modules") pod "kube-proxy-g8j8k" (UID: "aaf21fb2-17d9-479a-a531-1635e11b127a")
Feb 11 12:08:41 minikube kubelet[4620]: I0211 12:08:41.235166 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/aaf21fb2-17d9-479a-a531-1635e11b127a-kube-proxy") pod "kube-proxy-g8j8k" (UID: "aaf21fb2-17d9-479a-a531-1635e11b127a")
Feb 11 12:08:41 minikube kubelet[4620]: I0211 12:08:41.235187 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aaf21fb2-17d9-479a-a531-1635e11b127a-xtables-lock") pod "kube-proxy-g8j8k" (UID: "aaf21fb2-17d9-479a-a531-1635e11b127a")
Feb 11 12:08:41 minikube kubelet[4620]: I0211 12:08:41.235254 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-bjdl5" (UniqueName: "kubernetes.io/secret/aaf21fb2-17d9-479a-a531-1635e11b127a-kube-proxy-token-bjdl5") pod "kube-proxy-g8j8k" (UID: "aaf21fb2-17d9-479a-a531-1635e11b127a")
Feb 11 12:08:42 minikube kubelet[4620]: I0211 12:08:42.441711 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/087062c4-347c-4db6-8e5d-5c06c76cf35f-config-volume") pod "coredns-6955765f44-rl6mp" (UID: "087062c4-347c-4db6-8e5d-5c06c76cf35f")
Feb 11 12:08:42 minikube kubelet[4620]: I0211 12:08:42.441855 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-72dk2" (UniqueName: "kubernetes.io/secret/b04b85fb-8622-41e9-b66d-b4a9884af594-coredns-token-72dk2") pod "coredns-6955765f44-62mf9" (UID: "b04b85fb-8622-41e9-b66d-b4a9884af594")
Feb 11 12:08:42 minikube kubelet[4620]: I0211 12:08:42.442027 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-72dk2" (UniqueName: "kubernetes.io/secret/087062c4-347c-4db6-8e5d-5c06c76cf35f-coredns-token-72dk2") pod "coredns-6955765f44-rl6mp" (UID: "087062c4-347c-4db6-8e5d-5c06c76cf35f")
Feb 11 12:08:42 minikube kubelet[4620]: I0211 12:08:42.442407 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b04b85fb-8622-41e9-b66d-b4a9884af594-config-volume") pod "coredns-6955765f44-62mf9" (UID: "b04b85fb-8622-41e9-b66d-b4a9884af594")
Feb 11 12:09:12 minikube kubelet[4620]: I0211 12:09:12.723882 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "nginx-ingress-token-b59ts" (UniqueName: "kubernetes.io/secret/7df8fc25-7ea9-4ffb-a855-748cdbbd6aa0-nginx-ingress-token-b59ts") pod "nginx-ingress-controller-6fc5bcc8c9-s98l7" (UID: "7df8fc25-7ea9-4ffb-a855-748cdbbd6aa0")
Feb 11 12:09:13 minikube kubelet[4620]: W0211 12:09:13.402275 4620 pod_container_deletor.go:75] Container "bfe7a7ab7e5aa6af7cf5cac5e2d5d95639b448b57ea4d1f67a88c304ad647e27" not found in pod's containers
Feb 11 12:09:35 minikube kubelet[4620]: I0211 12:09:35.996430 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-96sfz" (UniqueName: "kubernetes.io/secret/bb1bd770-3a64-4e84-b376-a804420479f2-default-token-96sfz") pod "nginx-6d49bdb944-x9h4k" (UID: "bb1bd770-3a64-4e84-b376-a804420479f2")
Feb 11 12:09:37 minikube kubelet[4620]: E0211 12:09:37.570659 4620 kuberuntime_manager.go:940] PodSandboxStatus of sandbox "ccdd1e5dfe53319a90b761b53268eca9f13f773eb45ed8dd5d22e61fae827867" for pod "nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)" error: rpc error: code = Unknown desc = Error: No such container: ccdd1e5dfe53319a90b761b53268eca9f13f773eb45ed8dd5d22e61fae827867
Feb 11 12:09:38 minikube kubelet[4620]: W0211 12:09:38.776265 4620 pod_container_deletor.go:75] Container "ccdd1e5dfe53319a90b761b53268eca9f13f773eb45ed8dd5d22e61fae827867" not found in pod's containers
Feb 11 12:09:44 minikube kubelet[4620]: E0211 12:09:44.518736 4620 remote_image.go:113] PullImage "ngingx:latest" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:09:44 minikube kubelet[4620]: E0211 12:09:44.518945 4620 kuberuntime_image.go:50] Pull image "ngingx:latest" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:09:44 minikube kubelet[4620]: E0211 12:09:44.519294 4620 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:09:44 minikube kubelet[4620]: E0211 12:09:44.519486 4620 pod_workers.go:191] Error syncing pod bb1bd770-3a64-4e84-b376-a804420479f2 ("nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)"), skipping: failed to "StartContainer" for "nginx" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
Feb 11 12:09:45 minikube kubelet[4620]: E0211 12:09:45.002458 4620 pod_workers.go:191] Error syncing pod bb1bd770-3a64-4e84-b376-a804420479f2 ("nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)"), skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image "ngingx""
Feb 11 12:09:59 minikube kubelet[4620]: E0211 12:09:59.258411 4620 remote_image.go:113] PullImage "ngingx:latest" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:09:59 minikube kubelet[4620]: E0211 12:09:59.258509 4620 kuberuntime_image.go:50] Pull image "ngingx:latest" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:09:59 minikube kubelet[4620]: E0211 12:09:59.258689 4620 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:09:59 minikube kubelet[4620]: E0211 12:09:59.258784 4620 pod_workers.go:191] Error syncing pod bb1bd770-3a64-4e84-b376-a804420479f2 ("nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)"), skipping: failed to "StartContainer" for "nginx" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
Feb 11 12:10:10 minikube kubelet[4620]: E0211 12:10:10.448175 4620 pod_workers.go:191] Error syncing pod bb1bd770-3a64-4e84-b376-a804420479f2 ("nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)"), skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image "ngingx""
Feb 11 12:10:24 minikube kubelet[4620]: E0211 12:10:24.985540 4620 remote_image.go:113] PullImage "ngingx:latest" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:10:24 minikube kubelet[4620]: E0211 12:10:24.985599 4620 kuberuntime_image.go:50] Pull image "ngingx:latest" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:10:24 minikube kubelet[4620]: E0211 12:10:24.985702 4620 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:10:24 minikube kubelet[4620]: E0211 12:10:24.985740 4620 pod_workers.go:191] Error syncing pod bb1bd770-3a64-4e84-b376-a804420479f2 ("nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)"), skipping: failed to "StartContainer" for "nginx" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
Feb 11 12:10:36 minikube kubelet[4620]: E0211 12:10:36.453519 4620 pod_workers.go:191] Error syncing pod bb1bd770-3a64-4e84-b376-a804420479f2 ("nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)"), skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image "ngingx""
Feb 11 12:10:51 minikube kubelet[4620]: E0211 12:10:51.452519 4620 pod_workers.go:191] Error syncing pod bb1bd770-3a64-4e84-b376-a804420479f2 ("nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)"), skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image "ngingx""
Feb 11 12:11:03 minikube kubelet[4620]: E0211 12:11:03.452164 4620 pod_workers.go:191] Error syncing pod bb1bd770-3a64-4e84-b376-a804420479f2 ("nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)"), skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image "ngingx""
Feb 11 12:11:19 minikube kubelet[4620]: I0211 12:11:19.274922 4620 reconciler.go:183] operationExecutor.UnmountVolume started for volume "default-token-96sfz" (UniqueName: "kubernetes.io/secret/bb1bd770-3a64-4e84-b376-a804420479f2-default-token-96sfz") pod "bb1bd770-3a64-4e84-b376-a804420479f2" (UID: "bb1bd770-3a64-4e84-b376-a804420479f2")
Feb 11 12:11:19 minikube kubelet[4620]: I0211 12:11:19.297986 4620 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb1bd770-3a64-4e84-b376-a804420479f2-default-token-96sfz" (OuterVolumeSpecName: "default-token-96sfz") pod "bb1bd770-3a64-4e84-b376-a804420479f2" (UID: "bb1bd770-3a64-4e84-b376-a804420479f2"). InnerVolumeSpecName "default-token-96sfz". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 11 12:11:19 minikube kubelet[4620]: I0211 12:11:19.376039 4620 reconciler.go:303] Volume detached for volume "default-token-96sfz" (UniqueName: "kubernetes.io/secret/bb1bd770-3a64-4e84-b376-a804420479f2-default-token-96sfz") on node "minikube" DevicePath ""
Feb 11 12:11:20 minikube kubelet[4620]: E0211 12:11:20.935328 4620 remote_image.go:113] PullImage "ngingx:latest" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:11:20 minikube kubelet[4620]: E0211 12:11:20.935471 4620 kuberuntime_image.go:50] Pull image "ngingx:latest" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:11:20 minikube kubelet[4620]: E0211 12:11:20.935625 4620 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 11 12:11:20 minikube kubelet[4620]: E0211 12:11:20.935691 4620 pod_workers.go:191] Error syncing pod bb1bd770-3a64-4e84-b376-a804420479f2 ("nginx-6d49bdb944-x9h4k_default(bb1bd770-3a64-4e84-b376-a804420479f2)"), skipping: failed to "StartContainer" for "nginx" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngingx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
Feb 11 12:11:29 minikube kubelet[4620]: I0211 12:11:29.034894 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-96sfz" (UniqueName: "kubernetes.io/secret/450317d1-a3ad-4e03-a2e0-23aeb4e2a853-default-token-96sfz") pod "nginx-9ff8f9b57-6s75w" (UID: "450317d1-a3ad-4e03-a2e0-23aeb4e2a853")
Feb 11 12:11:29 minikube kubelet[4620]: W0211 12:11:29.827458 4620 pod_container_deletor.go:75] Container "dca71f4df68a4709587186bc0ad6e6bb72019627b017683e0692385028ff6fab" not found in pod's containers
Feb 11 12:11:49 minikube kubelet[4620]: E0211 12:11:49.173818 4620 remote_runtime.go:295] ContainerStatus "244012be2bf7dd70d0726e2870c2580bb9f3a8f67d7938dd415a594b690748c8" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 244012be2bf7dd70d0726e2870c2580bb9f3a8f67d7938dd415a594b690748c8
Feb 11 12:11:49 minikube kubelet[4620]: I0211 12:11:49.227520 4620 reconciler.go:183] operationExecutor.UnmountVolume started for volume "default-token-96sfz" (UniqueName: "kubernetes.io/secret/450317d1-a3ad-4e03-a2e0-23aeb4e2a853-default-token-96sfz") pod "450317d1-a3ad-4e03-a2e0-23aeb4e2a853" (UID: "450317d1-a3ad-4e03-a2e0-23aeb4e2a853")
Feb 11 12:11:49 minikube kubelet[4620]: I0211 12:11:49.243420 4620 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450317d1-a3ad-4e03-a2e0-23aeb4e2a853-default-token-96sfz" (OuterVolumeSpecName: "default-token-96sfz") pod "450317d1-a3ad-4e03-a2e0-23aeb4e2a853" (UID: "450317d1-a3ad-4e03-a2e0-23aeb4e2a853"). InnerVolumeSpecName "default-token-96sfz". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 11 12:11:49 minikube kubelet[4620]: I0211 12:11:49.328485 4620 reconciler.go:303] Volume detached for volume "default-token-96sfz" (UniqueName: "kubernetes.io/secret/450317d1-a3ad-4e03-a2e0-23aeb4e2a853-default-token-96sfz") on node "minikube" DevicePath ""
Feb 11 12:11:59 minikube kubelet[4620]: I0211 12:11:59.476915 4620 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-96sfz" (UniqueName: "kubernetes.io/secret/a7cb8b06-600c-472e-8ea1-ebdb2ffd52e1-default-token-96sfz") pod "nginx-6db489d4b7-hcx56" (UID: "a7cb8b06-600c-472e-8ea1-ebdb2ffd52e1")

==> storage-provisioner [324e1bc9bc95] <==
F0211 12:09:11.841068 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout

==> storage-provisioner [46616b07c148] <==

The operating system version:
Kubuntu 18.04.4 LTS

@tstromberg
Contributor

I see that the ingress nginx is running:

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
81df4af5918bf nginx@sha256:62f787b94e5faddb79f96c84ac0877aaf28fb325bfc3601b9c0934d4c107ba94 20 minutes ago Running nginx 0 c11c9fc924552
1ebfefcb875b5 quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7 23 minutes ago Running nginx-ingress-controller 0 bfe7a7ab7e5aa

Is the service supposed to be running by default?

Were you following the instructions from https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/ ?

@tstromberg tstromberg changed the title Nginx ingress controller addon not working Nginx ingress controller addon: no service running Feb 26, 2020
@tstromberg tstromberg added addon/ingress triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Feb 26, 2020
@irizzant
Author

irizzant commented Feb 26, 2020

@tstromberg The Nginx controller is running, which is fine and is the expected behaviour.

What is missing is the Nginx Kubernetes Service of type NodePort, which allows external requests to be routed to the Nginx controller.
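For illustration, a Service like the one described could be created by hand. This is only a sketch: the Service name and the label selector below are assumptions, so the controller pod's actual labels should be checked first.

```shell
# Sketch: a NodePort Service routing external traffic to the ingress controller.
# The selector is an assumption -- verify the controller pod's labels first:
#   kubectl get pods -n kube-system --show-labels
kubectl apply -n kube-system -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx-ingress-controller   # assumption, check labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
EOF
```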

@boonen

boonen commented Mar 23, 2020

@irizzant This command solved the issue for me:

kubectl expose deployment nginx-ingress-controller --target-port=80 --type=NodePort -n kube-system
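As a follow-up, a sketch of how to find the NodePort that Kubernetes assigned to the Service created by the command above and reach the controller through it (the jsonpath expression assumes the HTTP port is the first entry in the Service spec):

```shell
# Look up the assigned NodePort (a port in the 30000-32767 range by default).
# Assumes the HTTP port is the first ports[] entry of the Service.
NODE_PORT=$(kubectl get svc nginx-ingress-controller -n kube-system \
  -o jsonpath='{.spec.ports[0].nodePort}')

# Reach the ingress controller via the minikube VM's IP.
curl "http://$(minikube ip):${NODE_PORT}"
```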

@irizzant
Author

irizzant commented Mar 24, 2020

@boonen thank you for your answer. I know that manually creating the Service fixes the problem.
The Service should be created automatically when the ingress controller addon is enabled, though.

@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. and removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Apr 15, 2020
@tstromberg
Contributor

If someone wants to contribute the NodePort config to https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress - it sounds like it should resolve this issue nicely.

@tstromberg tstromberg changed the title Nginx ingress controller addon: no service running ingress addon should include NodePort Apr 22, 2020
@medyagh medyagh added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. kind/feature Categorizes issue or PR as related to a new feature. and removed kind/bug Categorizes issue or PR as related to a bug. labels Apr 22, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 21, 2020
@irizzant
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 21, 2020
@medyagh medyagh added priority/backlog Higher priority than priority/awaiting-more-evidence. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Jul 29, 2020
@arielmoraes

arielmoraes commented Aug 29, 2020

For me, the command addons enable ingress added a service named nginx-ingress-controller-admission with type ClusterIP. The strange thing is that I can access port 443 from the host without the service being configured as NodePort. That allowed me to create iptables rules to access the cluster from my LAN.

Edit
Only port 443 was exposed; there is some magic happening, I can sense.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 27, 2020
@irizzant
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 27, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 25, 2021
@irizzant
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 25, 2021
@medyagh
Member

medyagh commented Apr 14, 2021

For me, the command addons enable ingress added a service named nginx-ingress-controller-admission with type ClusterIP. The strange thing is that I can access port 443 from the host without the service being configured as NodePort. That allowed me to create iptables rules to access the cluster from my LAN.

Edit
Only port 443 was exposed; there is some magic happening, I can sense.

@arielmoraes by default minikube will only expose a few ports; if you want to expose more ports, you can use this flag:

      --ports=[]: List of ports that should be exposed (docker and podman driver only)

I suggest we add this as a FAQ to our website
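A sketch of how the flag could be used, assuming docker-style `-p` publish syntax (`[host-ip:]host-port:container-port`); with the docker driver, ports must be listed when the cluster is created and cannot be added to a running cluster without recreating it:

```shell
# Sketch: publish ports 80 and 443 to the host at cluster creation time
# (docker and podman drivers only).
minikube start --driver=docker --ports=80:80 --ports=443:443

# Equivalently, bound to loopback only, as a comma-separated list:
# minikube start --driver=docker --ports=127.0.0.1:80:80,127.0.0.1:443:443
```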

@medyagh medyagh changed the title ingress addon should include NodePort add FAQ to website how to expose custom ports on docker driver Apr 14, 2021
@medyagh medyagh added the kind/documentation Categorizes issue or PR as related to documentation. label Apr 14, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 13, 2021
@irizzant
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 12, 2021
@irizzant
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 12, 2021
@spowelljr spowelljr added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Nov 3, 2021