
Docker build hangs on apt-key add command #6018

Closed
mvgijssel opened this issue Dec 5, 2019 · 19 comments
Labels
co/runtime/docker: Issues specific to a docker runtime
kind/support: Categorizes issue or PR as a support question.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

mvgijssel commented Dec 5, 2019

For some reason Minikube is unable to build the following Dockerfile:

FROM ruby:2.4.5-jessie
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -

It hangs for 10+ minutes on the RUN statement. SSH'ing into the machine and running

minikube ssh
top -o %CPU

shows that apt-config shell APT_DIR Dir is consuming 100% of one CPU core.
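
To check whether the hang is tied to docker build itself, the same command from the Dockerfile can also be run as a one-off container against the minikube daemon (a minimal sketch reusing the image and URL from above):

eval $(minikube docker-env)
docker run --rm -it ruby:2.4.5-jessie bash -c 'wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -'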

Building the same Dockerfile in Docker for Mac finishes after ~10 seconds 😅.

The exact command to reproduce the issue:

eval $(minikube docker-env)
docker build --tag test -f Dockerfile.test tmp
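
For the Docker for Mac comparison mentioned above, the shell can be pointed back at the local daemon before rebuilding (sketch; assumes this minikube version supports the --unset flag of docker-env):

eval $(minikube docker-env --unset)
docker build --tag test -f Dockerfile.test tmp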

The full output of the command that failed:

❯ docker build --tag test -f Dockerfile.test tmp
[+] Building 131.5s (4/5)
 => [internal] load .dockerignore                                                                                                                                                  0.0s
 => => transferring context: 2B                                                                                                                                                    0.0s
 => [internal] load build definition from Dockerfile.test                                                                                                                          0.0s
 => => transferring dockerfile: 160B                                                                                                                                               0.0s
 => [internal] load metadata for docker.io/library/ruby:2.4.5-jessie                                                                                                               0.0s
 => CACHED [1/2] FROM docker.io/library/ruby:2.4.5-jessie                                                                                                                          0.0s
 => [2/2] RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -                                                                               548.7s
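
The [+] Building output above indicates the BuildKit builder is in use; as a sketch (assuming the docker CLI honors the DOCKER_BUILDKIT environment variable), the legacy builder could be tried to see whether the slowdown is BuildKit-specific:

eval $(minikube docker-env)
DOCKER_BUILDKIT=0 docker build --tag test -f Dockerfile.test tmp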

The output of the minikube logs command:


==> Docker <==
-- Logs begin at Thu 2019-12-05 12:44:31 UTC, end at Thu 2019-12-05 12:54:11 UTC. --
Dec 05 12:44:47 development dockerd[5120]: time="2019-12-05T12:44:47.708056543Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Dec 05 12:44:47 development dockerd[5120]: time="2019-12-05T12:44:47.739956748Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Dec 05 12:44:47 development dockerd[5120]: time="2019-12-05T12:44:47.915435670Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Dec 05 12:44:47 development dockerd[5120]: time="2019-12-05T12:44:47.915469737Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Dec 05 12:44:47 development dockerd[5120]: time="2019-12-05T12:44:47.915477700Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Dec 05 12:44:47 development dockerd[5120]: time="2019-12-05T12:44:47.915482343Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Dec 05 12:44:47 development dockerd[5120]: time="2019-12-05T12:44:47.915487017Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Dec 05 12:44:47 development dockerd[5120]: time="2019-12-05T12:44:47.915491379Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Dec 05 12:44:47 development dockerd[5120]: time="2019-12-05T12:44:47.915711005Z" level=info msg="Loading containers: start."
Dec 05 12:44:48 development dockerd[5120]: time="2019-12-05T12:44:48.081313544Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 05 12:44:48 development dockerd[5120]: time="2019-12-05T12:44:48.117579198Z" level=info msg="Loading containers: done."
Dec 05 12:44:48 development dockerd[5120]: time="2019-12-05T12:44:48.166360148Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
Dec 05 12:44:48 development dockerd[5120]: time="2019-12-05T12:44:48.166892133Z" level=info msg="Daemon has completed initialization"
Dec 05 12:44:48 development dockerd[5120]: time="2019-12-05T12:44:48.196045103Z" level=info msg="API listen on /var/run/docker.sock"
Dec 05 12:44:48 development dockerd[5120]: time="2019-12-05T12:44:48.196058890Z" level=info msg="API listen on [::]:2376"
Dec 05 12:44:48 development systemd[1]: Started Docker Application Container Engine.
Dec 05 12:45:15 development dockerd[5120]: time="2019-12-05T12:45:15.817744337Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85ca1c3578115b91d4d2ceca23900aaf60e442c0c80a694f655b73ff57958bf1/shim.sock" debug=false pid=6499
Dec 05 12:45:15 development dockerd[5120]: time="2019-12-05T12:45:15.819034981Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c4769f241eaac216242aa8235ee684b6205977748c1d7167cbedf102291b1e80/shim.sock" debug=false pid=6503
Dec 05 12:45:15 development dockerd[5120]: time="2019-12-05T12:45:15.845783570Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ed6e159fe61287c5f9084f1943495d70d436f83dba45675a6ebbde8a5d7aa8ce/shim.sock" debug=false pid=6536
Dec 05 12:45:15 development dockerd[5120]: time="2019-12-05T12:45:15.964567522Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0a9fc7d0f1751b09d2525036dcb1850dfcb8caeca8872a2f13b34ed1b195061a/shim.sock" debug=false pid=6571
Dec 05 12:45:15 development dockerd[5120]: time="2019-12-05T12:45:15.977707008Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a1c237c0ccd0c7c9eeecf355a8ffc6508429072f311e96b645a9f4f2fd676c78/shim.sock" debug=false pid=6592
Dec 05 12:45:16 development dockerd[5120]: time="2019-12-05T12:45:16.291208859Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8ae6092946dfa96e38bb74cc8079617bc9345ba56550feeee0a0e99d40a58b13/shim.sock" debug=false pid=6773
Dec 05 12:45:16 development dockerd[5120]: time="2019-12-05T12:45:16.295685842Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0f1f51e87243e57ace8955d2501f14e905723289749a1abbe9795dac755d2d3a/shim.sock" debug=false pid=6789
Dec 05 12:45:16 development dockerd[5120]: time="2019-12-05T12:45:16.296711459Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/47682076abebdf6c6f6df2590598aca77912f1744e45e71fa980e4e6c163bf99/shim.sock" debug=false pid=6790
Dec 05 12:45:16 development dockerd[5120]: time="2019-12-05T12:45:16.297765253Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2635c7184c31bda796f52f0ccb6a59c6dca998deaa2e6f58f978bc7e67bd52f9/shim.sock" debug=false pid=6794
Dec 05 12:45:16 development dockerd[5120]: time="2019-12-05T12:45:16.315096788Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c36f5378a1b87d7fe80f647e6a139f637c4ceaccc2eb07a5f1018c1e6b82e960/shim.sock" debug=false pid=6850
Dec 05 12:45:21 development dockerd[5120]: time="2019-12-05T12:45:21.718842185Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c7fbb0634b303b455a3a264336e4192f5d92449d6bdfb946a9c109e9a7c3efcb/shim.sock" debug=false pid=7484
Dec 05 12:45:21 development dockerd[5120]: time="2019-12-05T12:45:21.728681763Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/870d789b8d1f4dfbfcd588fb28a73f76c106a08217c0457431f5c333c9bc5f2e/shim.sock" debug=false pid=7494
Dec 05 12:45:21 development dockerd[5120]: time="2019-12-05T12:45:21.781380891Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/11a14528716739b3d9a45a50df3a08acb9025d48e562bfafefec75bff3616692/shim.sock" debug=false pid=7525
Dec 05 12:45:21 development dockerd[5120]: time="2019-12-05T12:45:21.877279687Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2f825d31868f22be426984cf1a22ef85708563176264e14f28f06f58f93a26a5/shim.sock" debug=false pid=7561
Dec 05 12:45:21 development dockerd[5120]: time="2019-12-05T12:45:21.930670293Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1ce7ab3aa409a217b54adee61f49b8b09a4a3c01894d61fa25ac866b76124152/shim.sock" debug=false pid=7580
Dec 05 12:45:21 development dockerd[5120]: time="2019-12-05T12:45:21.931972142Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e8992e1c5c8dd33e23be094d5b2cc0ad334dc364ac5fe08d89944115c8f1f4f5/shim.sock" debug=false pid=7586
Dec 05 12:45:23 development dockerd[5120]: time="2019-12-05T12:45:23.023267238Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d68791efff234f5aac3b6f2c968c111b697ccd24551b80f49339147769ea3bc8/shim.sock" debug=false pid=7828
Dec 05 12:45:23 development dockerd[5120]: time="2019-12-05T12:45:23.049545775Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/edbb424ad0fc6c678a1e8c6b3b33461ab41b1213683cbe62f2257c920ee68115/shim.sock" debug=false pid=7866
Dec 05 12:45:23 development dockerd[5120]: time="2019-12-05T12:45:23.144109428Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/91aee06c8fbf3e9b432e2f57864b8e19b33a8e502a16c5c79d3c54517f3ae73d/shim.sock" debug=false pid=7964
Dec 05 12:45:23 development dockerd[5120]: time="2019-12-05T12:45:23.232561119Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ae491c71db38f59a84303079746e5bb786560b6460f148019f12bc224902c144/shim.sock" debug=false pid=8040
Dec 05 12:45:23 development dockerd[5120]: time="2019-12-05T12:45:23.251713951Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/94a394904f6c8e3e85f84370a5a6905afa12b21187f6eb4f8f1088c9fc624423/shim.sock" debug=false pid=8056
Dec 05 12:45:23 development dockerd[5120]: time="2019-12-05T12:45:23.275747438Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5cdd5a2f2515acbe9d04425bc961276a84d215d1a0f215e352f317f072e002ec/shim.sock" debug=false pid=8080
Dec 05 12:45:36 development dockerd[5120]: time="2019-12-05T12:45:36.172039476Z" level=info msg="parsed scheme: """ module=grpc
Dec 05 12:45:36 development dockerd[5120]: time="2019-12-05T12:45:36.172094858Z" level=info msg="scheme "" not registered, fallback to default scheme" module=grpc
Dec 05 12:45:36 development dockerd[5120]: time="2019-12-05T12:45:36.172120953Z" level=info msg="ccResolverWrapper: sending update to cc: {[{ 0 }] }" module=grpc
Dec 05 12:45:36 development dockerd[5120]: time="2019-12-05T12:45:36.172133546Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Dec 05 12:45:44 development dockerd[5120]: time="2019-12-05T12:45:44.732225009Z" level=warning msg="grpc: addrConn.createTransport failed to connect to { 0 }. Err :connection error: desc = "transport: Error while dialing only one connection allowed". Reconnecting..." module=grpc
Dec 05 12:45:53 development dockerd[5120]: time="2019-12-05T12:45:53.685597143Z" level=info msg="shim reaped" id=d68791efff234f5aac3b6f2c968c111b697ccd24551b80f49339147769ea3bc8
Dec 05 12:45:53 development dockerd[5120]: time="2019-12-05T12:45:53.696066866Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 05 12:45:53 development dockerd[5120]: time="2019-12-05T12:45:53.696202166Z" level=warning msg="d68791efff234f5aac3b6f2c968c111b697ccd24551b80f49339147769ea3bc8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d68791efff234f5aac3b6f2c968c111b697ccd24551b80f49339147769ea3bc8/mounts/shm, flags: 0x2: no such file or directory"
Dec 05 12:46:08 development dockerd[5120]: time="2019-12-05T12:46:08.101081701Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7684a0b5cb3b5bde9f5b4cfcac1adabdcbb78dc5e4f7ccd6dc4144ec15bb099e/shim.sock" debug=false pid=9335
Dec 05 12:47:25 development dockerd[5120]: time="2019-12-05T12:47:25.156405999Z" level=info msg="parsed scheme: """ module=grpc
Dec 05 12:47:25 development dockerd[5120]: time="2019-12-05T12:47:25.156454708Z" level=info msg="scheme "" not registered, fallback to default scheme" module=grpc
Dec 05 12:47:25 development dockerd[5120]: time="2019-12-05T12:47:25.156474300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{ 0 }] }" module=grpc
Dec 05 12:47:25 development dockerd[5120]: time="2019-12-05T12:47:25.156482571Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Dec 05 12:47:32 development dockerd[5120]: time="2019-12-05T12:47:32.851185772Z" level=warning msg="grpc: addrConn.createTransport failed to connect to { 0 }. Err :connection error: desc = "transport: Error while dialing only one connection allowed". Reconnecting..." module=grpc
Dec 05 12:48:01 development dockerd[5120]: time="2019-12-05T12:48:01.859546266Z" level=info msg="parsed scheme: """ module=grpc
Dec 05 12:48:01 development dockerd[5120]: time="2019-12-05T12:48:01.859593465Z" level=info msg="scheme "" not registered, fallback to default scheme" module=grpc
Dec 05 12:48:01 development dockerd[5120]: time="2019-12-05T12:48:01.859613747Z" level=info msg="ccResolverWrapper: sending update to cc: {[{ 0 }] }" module=grpc
Dec 05 12:48:01 development dockerd[5120]: time="2019-12-05T12:48:01.859622269Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Dec 05 12:49:00 development dockerd[5120]: time="2019-12-05T12:49:00.409104917Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8bd7e3fbeb53f64ca862be5663dd0f8beeb52abb0906e9bb6f6f413fbd645cc7/shim.sock" debug=false pid=12593
Dec 05 12:54:05 development dockerd[5120]: time="2019-12-05T12:54:05.182825748Z" level=info msg="shim reaped" id=8bd7e3fbeb53f64ca862be5663dd0f8beeb52abb0906e9bb6f6f413fbd645cc7
Dec 05 12:54:05 development dockerd[5120]: time="2019-12-05T12:54:05.192955477Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 05 12:54:05 development dockerd[5120]: time="2019-12-05T12:54:05.258957007Z" level=warning msg="8bd7e3fbeb53f64ca862be5663dd0f8beeb52abb0906e9bb6f6f413fbd645cc7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8bd7e3fbeb53f64ca862be5663dd0f8beeb52abb0906e9bb6f6f413fbd645cc7/mounts/shm, flags: 0x2: no such file or directory"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
7684a0b5cb3b5 4689081edb103 8 minutes ago Running storage-provisioner 2 870d789b8d1f4
5cdd5a2f2515a c2a449c9f8344 8 minutes ago Running registry 1 e8992e1c5c8dd
94a394904f6c8 60dc18151daf8 8 minutes ago Running registry-proxy 1 2f825d31868f2
ae491c71db38f 70f311871ae12 8 minutes ago Running coredns 1 11a1452871673
91aee06c8fbf3 70f311871ae12 8 minutes ago Running coredns 1 1ce7ab3aa409a
edbb424ad0fc6 22243b9b56e72 8 minutes ago Running kube-proxy 1 c7fbb0634b303
d68791efff234 4689081edb103 8 minutes ago Exited storage-provisioner 1 870d789b8d1f4
c36f5378a1b87 82be4be24fb6b 8 minutes ago Running kube-scheduler 1 a1c237c0ccd0c
2635c7184c31b f691d6df3b823 8 minutes ago Running kube-controller-manager 1 0a9fc7d0f1751
8ae6092946dfa 44ebc8208c5da 8 minutes ago Running kube-apiserver 1 ed6e159fe6128
47682076abebd 303ce5db0e90d 8 minutes ago Running etcd 1 c4769f241eaac
0f1f51e87243e bd12a212f9dcb 8 minutes ago Running kube-addon-manager 1 85ca1c3578115
6c5b63693fcb0 70f311871ae12 About an hour ago Exited coredns 0 35d2a57173a45
defc35ec54189 gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da 20 hours ago Exited registry-proxy 0 3b946debea9fe
462eab8d11fda registry.hub.docker.com/library/registry@sha256:5eaafa2318aa0c4c52f95077c2a68bed0b13f6d2b464835723d4de1484052299 20 hours ago Exited registry 0 30474e27dbe27
8a6f55576b5b6 70f311871ae12 20 hours ago Exited coredns 0 613287d5fd25f
c4dff44ab5195 22243b9b56e72 20 hours ago Exited kube-proxy 0 a9baf46498339
bdb0b95bd3266 k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 20 hours ago Exited kube-addon-manager 0 5491c816137c7
21128a5c51562 f691d6df3b823 20 hours ago Exited kube-controller-manager 0 77aa0eaf4d814
52476de3d24ed 82be4be24fb6b 20 hours ago Exited kube-scheduler 0 80ea25f3239f0
7d01a6ad09d12 303ce5db0e90d 20 hours ago Exited etcd 0 36cb23998f749
755c9793968a5 44ebc8208c5da 20 hours ago Exited kube-apiserver 0 c3025018c3ebf

==> coredns ["6c5b63693fcb"] <==
.:53
E1205 12:43:40.091813 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091914 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
E1205 12:43:40.091813 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091813 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091914 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091914 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091914 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091958 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091958 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091958 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

==> coredns ["8a6f55576b5b"] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
E1205 12:43:40.091271 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091314 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
E1205 12:43:40.091271 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091314 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091314 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091314 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091290 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091290 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1205 12:43:40.091290 1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

==> coredns ["91aee06c8fbf"] <==
.:53
E1205 12:45:53.516658 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516748 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
Trace[2115581934]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-05 12:45:23.512129852 +0000 UTC m=+0.147295767) (total time: 30.004389118s):
Trace[2115581934]: [30.004389118s] [30.004389118s] END
E1205 12:45:53.516658 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516658 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516658 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1205 12:45:53.516594 1 trace.go:82] Trace[51800909]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-05 12:45:23.512079575 +0000 UTC m=+0.147245467) (total time: 30.004349923s):
Trace[51800909]: [30.004349923s] [30.004349923s] END
E1205 12:45:53.516748 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516748 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516748 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1205 12:45:53.516852 1 trace.go:82] Trace[1524753788]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-05 12:45:23.512082897 +0000 UTC m=+0.147248804) (total time: 30.004703112s):
Trace[1524753788]: [30.004703112s] [30.004703112s] END
E1205 12:45:53.517014 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.517014 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.517014 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> coredns ["ae491c71db38"] <==
E1205 12:45:53.516639 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516664 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516735 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I1205 12:45:53.516577 1 trace.go:82] Trace[180589227]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-05 12:45:23.512115115 +0000 UTC m=+0.071956340) (total time: 30.004223473s):
Trace[180589227]: [30.004223473s] [30.004223473s] END
E1205 12:45:53.516639 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516639 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516639 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1205 12:45:53.516610 1 trace.go:82] Trace[280936051]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-05 12:45:23.512106664 +0000 UTC m=+0.071947880) (total time: 30.004463073s):
Trace[280936051]: [30.004463073s] [30.004463073s] END
E1205 12:45:53.516664 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516664 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516664 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1205 12:45:53.516711 1 trace.go:82] Trace[2139292546]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-05 12:45:23.512518628 +0000 UTC m=+0.072359863) (total time: 30.004161325s):
Trace[2139292546]: [30.004161325s] [30.004161325s] END
E1205 12:45:53.516735 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516735 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1205 12:45:53.516735 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> dmesg <==
[Dec 5 12:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.638183] core: CPUID marked event: 'cpu cycles' unavailable
[ +0.000000] core: CPUID marked event: 'instructions' unavailable
[ +0.000001] core: CPUID marked event: 'bus cycles' unavailable
[ +0.000000] core: CPUID marked event: 'cache references' unavailable
[ +0.000001] core: CPUID marked event: 'cache misses' unavailable
[ +0.000000] core: CPUID marked event: 'branch instructions' unavailable
[ +0.000001] core: CPUID marked event: 'branch misses' unavailable
[ +0.005076] #2
[ +0.001420] #3
[ +0.001859] #4
[ +0.001268] #5
[ +0.002171] #6
[ +0.002357] #7
[ +0.004135] #9
[ +0.001785] #10
[ +0.001869] #11
[ +0.012590] pmd_set_huge: Cannot satisfy [mem 0xf0000000-0xf0200000] with a huge-page mapping due to MTRR override.
[ +20.235244] sd 0:0:0:0: [sda] Assuming drive cache: write through
[ +0.112281] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +5.629072] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.015870] systemd-fstab-generator[2792]: Ignoring "noauto" for root device
[ +0.002596] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.829825] vboxguest: loading out-of-tree module taints kernel.
[ +0.003089] vboxguest: PCI device not found, probably running on physical hardware.
[ +0.285949] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +8.612075] systemd-fstab-generator[5018]: Ignoring "noauto" for root device
[ +10.585496] systemd-fstab-generator[5641]: Ignoring "noauto" for root device
[Dec 5 12:45] kauditd_printk_skb: 110 callbacks suppressed
[ +6.682240] kauditd_printk_skb: 29 callbacks suppressed
[ +15.792288] kauditd_printk_skb: 59 callbacks suppressed
[ +7.277080] kauditd_printk_skb: 5 callbacks suppressed
[Dec 5 12:46] NFSD: Unable to end grace period: -110
[Dec 5 12:47] kauditd_printk_skb: 2 callbacks suppressed

==> kernel <==
12:54:11 up 10 min, 0 users, load average: 1.53, 1.10, 0.55
Linux development 4.19.81 #1 SMP Tue Nov 26 10:19:39 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"

==> kube-addon-manager ["0f1f51e87243"] <==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-05T12:53:43+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:53:44+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-05T12:53:47+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:53:49+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-05T12:53:52+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:53:53+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-05T12:53:57+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:53:59+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-05T12:54:02+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:54:03+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-05T12:54:07+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
daemonset.apps/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:54:09+00:00 ==

==> kube-addon-manager ["bdb0b95bd326"] <==
error: no objects passed to apply
error when retrieving current configuration of:
Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet"
Name: "registry-proxy", Namespace: "kube-system"
Object: &{map["spec":map["selector":map["matchLabels":map["kubernetes.io/minikube-addons":"registry" "registry-proxy":"true"]] "template":map["metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addonsI:"registryN "FO:re=giReryopncilyn"g: wtrtuhe a]d]o "-smpec":map["containers":[map["name":"registry-proxy" "ports":[map["containerPort":'P' "hostPort":'\u1388' "name":"registry"]] "env":[map["name":"REGISTRY_HOST" "value":"registry.kube-system.svc.cluster.local"] map["name":"REGISTRY_PORT" "value":"80"]] "image":"gcr.io/google_containers/kube-registry-proxy:0.4" "imagePullPolicy":"IfNotPresent"]]]]] "apiVersion":"apps/v1" "kind":"DaemonSet" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addons":"registry"] "name":"registry-proxy" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]}
from server for: "/etc/kubernetes/addons/registry-proxy.yaml": Get https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/registry-proxy: dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=replicationcontrollers", GroupVersionKind: "/v1, Kind=ReplicationController"
Name: "registry", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"ReplicationController" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addons":"registry"] "name":"registry" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["replicas":'\x01' "selector":map["kubernetes.io/minikube-addons":"registry"] "template":map["metadata":map["labels":map["actual-registry":"true" "addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addons":"registry"]] "spec":map["containers":[map["name":"registry" "ports":[map["containerPort":'\u1388' "protocol":"TCP"]] "env":[map["name":"REGISTRY_STORAGE_DELETE_ENABLED" "value":"true"]] "image":"registry.hub.docker.com/library/registry:2.6.1" "imagePullPolicy":"IfNotPresent"]]]]]]}
from server for: "/etc/kubernetes/addons/registry-rc.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/replicationcontrollers/registry: dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=services", GroupVersionKind: "/v1, Kind=Service"
Name: "registry", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"Service" "metadata":map["namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addons":"registry"] "name":"registry"] "spec":map["ports":[map["port":'P' "targetPort":'\u1388']] "selector":map["actual-registry":"true" "kubernetes.io/minikube-addons":"registry"] "type":"ClusterIP"]]}
from server for: "/etc/kubernetes/addons/registry-svc.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/services/registry: dial tcp 127.0.0.1:8443: connect: connection refused
INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:43:43+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-05T12:43:48+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
The connection to the server localhost:8443 was refused - did you specify the right host or port?
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["metadata":map["namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner"] "apiVersion":"v1" "kind":"ServiceAccount"]}
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"Pod" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "name":"storage-provisioner" "namespace":"kube-system"] "spec":map["containers":[map["command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1" "imagePullPolicy":"IfNotPresent" "name":"storage-provisioner" "volumeMounts":[map["mountPath":"/tmp" "name":"tmp"]]]] "hostNetwork":%!q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]]]}
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused
error: no objects passed to apply
error when retrieving current configuration of:
Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet"
Name: "registry-proxy", Namespace: "kube-system"
Object: &{map["kind":"DaemonSet" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addons":"registry"] "name":"registry-proxy" "namespace":"kube-system"] "spec":map["selector":map["matchLabels":map["kubernetes.io/minikube-addons":"registry" "registry-proxy":"true"]] "template":map["metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addons":"registry" "registry-proxy":"true"]] "spec":map["containers":[map["env":[map["value":"registry.kube-system.svc.cluster.local" "name":"REGISTRY_HOST"] map["name":"REGISTRY_PORT" "value":"80"]] "image":"gcr.io/google_containers/kube-registry-proxy:0.4" "imagePullPolicy":"IfNotPresent" "name":"registry-proxy" "ports":[map["hostPort":'\u1388' "name":"registry" "containerPort":'P']]]]]]] "apiVersion":"apps/v1"]}
from server for: "/etc/kubernetes/addons/registry-proxy.yaml": Get https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/registry-proxy: dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=replicationcontrollers", GroupVersionKind: "/v1, Kind=ReplicationController"
Name: "registry", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"ReplicationController" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addons":"registry"] "name":"registry" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["replicas":'\x01' "selector":map["kubernetes.io/minikube-addons":"registry"] "template":map["spec":map["containers":[map["env":[map["value":"true" "name":"REGISTRY_STORAGE_DELETE_ENABLED"]] "image":"registry.hub.docker.com/library/registry:2.6.1" "imagePullPolicy":"IfNotPresent" "name":"registry" "ports":[map["containerPort":'\u1388' "protocol":"TCP"]]]]] "metadata":map["labels":map["actual-registry":"true" "addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addons":"registry"]]]]]}
from server for: "/etc/kubernetes/addons/registry-rc.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/replicationcontrollers/registry: dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=services", GroupVersionKind: "/v1, Kind=Service"
Name: "registry", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"Service" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "kubernetes.io/minikube-addons":"registry"] "name":"registry" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["ports":[map["targetPort":'\u1388' "port":'P']] "selector":map["actual-registry":"true" "kubernetes.io/minikube-addons":"registry"] "type":"ClusterIP"]]}
INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:43:48+00:00 ==
from server for: "/etc/kubernetes/addons/registry-svc.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/services/registry: dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"ServiceAccount" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]}
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"Pod" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["containers":[map["command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1" "imagePullPolicy":"IfNotPresent" "name":"storage-provisioner" "volumeMounts":[map["mountPath":"/tmp" "name":"tmp"]]]] "hostNetwork":%!q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]]]}
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-apiserver ["755c9793968a"] <==
I1205 12:43:40.083569 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1205 12:43:40.083657 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.083660 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.083687 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.083908 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.084018 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.084406 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.084438 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.084440 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.084804 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.084837 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1205 12:43:40.084884 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.084918 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.084920 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.085103 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.085158 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.085278 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.085448 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.085498 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.085636 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.085707 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.085796 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.085821 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.085845 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.086082 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.086613 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1205 12:43:40.086652 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.086663 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.086668 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.086706 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.086982 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.086990 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.087281 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.087338 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.087538 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
W1205 12:43:40.087593 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.087668 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 12:43:40.087904 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 12:43:40.088282 1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController
I1205 12:43:40.088288 1 controller.go:180] Shutting down kubernetes service endpoint reconciler
I1205 12:43:40.088307 1 customresource_discovery_controller.go:219] Shutting down DiscoveryController
I1205 12:43:40.088315 1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I1205 12:43:40.088323 1 establishing_controller.go:84] Shutting down EstablishingController
I1205 12:43:40.088338 1 available_controller.go:398] Shutting down AvailableConditionController
I1205 12:43:40.088344 1 crd_finalizer.go:275] Shutting down CRDFinalizer
I1205 12:43:40.088364 1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I1205 12:43:40.088285 1 naming_controller.go:299] Shutting down NamingConditionController
I1205 12:43:40.088379 1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
I1205 12:43:40.088391 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I1205 12:43:40.088338 1 autoregister_controller.go:164] Shutting down autoregister controller
I1205 12:43:40.088310 1 controller.go:122] Shutting down OpenAPI controller
I1205 12:43:40.088482 1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1205 12:43:40.088489 1 tlsconfig.go:234] Shutting down DynamicServingCertificateController
I1205 12:43:40.088482 1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1205 12:43:40.088491 1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I1205 12:43:40.088482 1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1205 12:43:40.088498 1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1205 12:43:40.089606 1 controller.go:87] Shutting down OpenAPI AggregationController
I1205 12:43:40.090362 1 secure_serving.go:222] Stopped listening on [::]:8443
E1205 12:43:40.091344 1 controller.go:183] rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field

==> kube-apiserver ["8ae6092946df"] <==
I1205 12:45:18.675730 1 client.go:361] parsed scheme: "endpoint"
I1205 12:45:18.675765 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1205 12:45:18.681865 1 client.go:361] parsed scheme: "endpoint"
I1205 12:45:18.681899 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W1205 12:45:18.886364 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W1205 12:45:18.893017 1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W1205 12:45:18.900180 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1205 12:45:18.912908 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1205 12:45:18.915375 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1205 12:45:18.929095 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1205 12:45:18.983377 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1205 12:45:18.983425 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1205 12:45:19.020631 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1205 12:45:19.020667 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1205 12:45:19.023975 1 client.go:361] parsed scheme: "endpoint"
I1205 12:45:19.024044 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1205 12:45:19.031816 1 client.go:361] parsed scheme: "endpoint"
I1205 12:45:19.031855 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1205 12:45:21.311132 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1205 12:45:21.311134 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1205 12:45:21.311424 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I1205 12:45:21.311717 1 secure_serving.go:178] Serving securely on [::]:8443
I1205 12:45:21.312316 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1205 12:45:21.312327 1 establishing_controller.go:73] Starting EstablishingController
I1205 12:45:21.312346 1 naming_controller.go:288] Starting NamingConditionController
I1205 12:45:21.312383 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1205 12:45:21.312391 1 controller.go:85] Starting OpenAPI controller
I1205 12:45:21.312436 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I1205 12:45:21.312450 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I1205 12:45:21.312553 1 controller.go:81] Starting OpenAPI AggregationController
I1205 12:45:21.313211 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I1205 12:45:21.313260 1 crd_finalizer.go:263] Starting CRDFinalizer
I1205 12:45:21.313306 1 autoregister_controller.go:140] Starting autoregister controller
I1205 12:45:21.313321 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1205 12:45:21.313354 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1205 12:45:21.315118 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1205 12:45:21.315133 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1205 12:45:21.315164 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1205 12:45:21.315184 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1205 12:45:21.330595 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1205 12:45:21.331548 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1205 12:45:21.332714 1 available_controller.go:386] Starting AvailableConditionController
I1205 12:45:21.332727 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
E1205 12:45:21.336849 1 controller.go:151] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1205 12:45:21.340393 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1205 12:45:21.412678 1 shared_informer.go:204] Caches are synced for crd-autoregister
I1205 12:45:21.413528 1 cache.go:39] Caches are synced for autoregister controller
I1205 12:45:21.415291 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I1205 12:45:21.432119 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1205 12:45:21.432926 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1205 12:45:22.311087 1 controller.go:107] OpenAPI AggregationController: Processing item
I1205 12:45:22.311118 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1205 12:45:22.311128 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1205 12:45:22.318025 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1205 12:45:23.928936 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1205 12:45:23.945314 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1205 12:45:24.006930 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1205 12:45:24.019860 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1205 12:45:24.026034 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1205 12:45:37.236863 1 controller.go:606] quota admission added evaluator for: endpoints

==> kube-controller-manager ["21128a5c5156"] <==
I1204 16:41:13.589387 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"0a8f990e-598f-4c14-9af3-d9b8bdf0f365", APIVersion:"apps/v1", ResourceVersion:"343", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-jvwpv
I1204 16:41:13.605270 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"registry", UID:"d0726384-f4fc-487e-a006-195e0f570b0e", APIVersion:"v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-74q4h
I1205 11:47:46.892446 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"05de388e-30e2-4c4a-bdc6-ebe812ba21f0", APIVersion:"apps/v1", ResourceVersion:"72233", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7f85fdfc6b to 1
I1205 11:47:46.896172 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7f85fdfc6b", UID:"8e3c5ef5-5c4b-472c-b6a8-3218b9609425", APIVersion:"apps/v1", ResourceVersion:"72234", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7f85fdfc6b-mv5gf
I1205 11:47:46.900156 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"05de388e-30e2-4c4a-bdc6-ebe812ba21f0", APIVersion:"apps/v1", ResourceVersion:"72233", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-6955765f44 to 1
I1205 11:47:46.905346 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"a717e31b-5a76-4003-b4dc-6028d7a5505b", APIVersion:"apps/v1", ResourceVersion:"72240", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-6955765f44-xcgcm
I1205 11:47:46.912757 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"05de388e-30e2-4c4a-bdc6-ebe812ba21f0", APIVersion:"apps/v1", ResourceVersion:"72235", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7f85fdfc6b to 2
I1205 11:47:46.916226 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7f85fdfc6b", UID:"8e3c5ef5-5c4b-472c-b6a8-3218b9609425", APIVersion:"apps/v1", ResourceVersion:"72251", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7f85fdfc6b-pltmx
I1205 11:47:46.947335 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"05de388e-30e2-4c4a-bdc6-ebe812ba21f0", APIVersion:"apps/v1", ResourceVersion:"72262", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-7f85fdfc6b to 0
I1205 11:47:46.951932 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7f85fdfc6b", UID:"8e3c5ef5-5c4b-472c-b6a8-3218b9609425", APIVersion:"apps/v1", ResourceVersion:"72265", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-7f85fdfc6b-mv5gf
I1205 11:47:46.954629 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7f85fdfc6b", UID:"8e3c5ef5-5c4b-472c-b6a8-3218b9609425", APIVersion:"apps/v1", ResourceVersion:"72265", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-7f85fdfc6b-pltmx
I1205 11:47:46.954738 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"05de388e-30e2-4c4a-bdc6-ebe812ba21f0", APIVersion:"apps/v1", ResourceVersion:"72264", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I1205 11:47:46.959104 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"a717e31b-5a76-4003-b4dc-6028d7a5505b", APIVersion:"apps/v1", ResourceVersion:"72270", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-d4g9g
I1205 11:48:05.770097 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1205 11:48:15.772601 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
E1205 12:43:40.089224 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://localhost:8443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=72382&timeout=7m36s&timeoutSeconds=456&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://localhost:8443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=344&timeout=8m51s&timeoutSeconds=531&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodSecurityPolicy: Get https://localhost:8443/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=5m53s&timeoutSeconds=353&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090285 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:8443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=9m9s&timeoutSeconds=549&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://localhost:8443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=78580&timeout=8m12s&timeoutSeconds=492&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090294 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:8443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=9m58s&timeoutSeconds=598&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090288 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=8m46s&timeoutSeconds=526&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://localhost:8443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=362&timeout=5m9s&timeoutSeconds=309&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090230 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://localhost:8443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=1&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090244 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://localhost:8443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090295 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://localhost:8443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=1&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090338 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:8443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=1&timeout=9m2s&timeoutSeconds=542&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090337 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=79681&timeout=6m12s&timeoutSeconds=372&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090338 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=467&timeout=5m26s&timeoutSeconds=326&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090343 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=332&timeout=9m6s&timeoutSeconds=546&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090392 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090406 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=45&timeout=8m13s&timeoutSeconds=493&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090603 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m18s&timeoutSeconds=558&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090646 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=8m44s&timeoutSeconds=524&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090683 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://localhost:8443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=72232&timeout=7m3s&timeoutSeconds=423&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090724 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=5&timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090741 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=72390&timeout=8m7s&timeoutSeconds=487&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090759 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.VolumeAttachment: Get https://localhost:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=1&timeout=9m38s&timeoutSeconds=578&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090767 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=5m18s&timeoutSeconds=318&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090786 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://localhost:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=1&timeout=9m29s&timeoutSeconds=569&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090792 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://localhost:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=148&timeout=5m59s&timeoutSeconds=359&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090810 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://localhost:8443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=72378&timeout=7m56s&timeoutSeconds=476&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090822 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=329&timeout=5m13s&timeoutSeconds=313&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090839 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=331&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090845 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=188&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090847 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=189&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090860 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090883 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://localhost:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090921 1 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:8443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=1&timeout=6m35s&timeoutSeconds=395&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090953 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=356&timeout=5m31s&timeoutSeconds=331&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090966 1 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:8443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=42&timeout=9m30s&timeoutSeconds=570&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090990 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CertificateSigningRequest: Get https://localhost:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=1&timeout=8m52s&timeoutSeconds=532&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.091001 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=79415&timeout=9m9s&timeoutSeconds=549&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.091005 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:8443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=1&timeout=8m8s&timeoutSeconds=488&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.091022 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=72381&timeout=7m59s&timeoutSeconds=479&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.091022 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://localhost:8443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=9m35s&timeoutSeconds=575&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.091024 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://localhost:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=1&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.091041 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=7m43s&timeoutSeconds=463&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.091298 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://localhost:8443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=79683&timeout=6m31s&timeoutSeconds=391&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.091346 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://localhost:8443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=361&timeout=5m37s&timeoutSeconds=337&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-controller-manager ["2635c7184c31"] <==
I1205 12:45:40.358263 1 shared_informer.go:197] Waiting for caches to sync for PV protection
I1205 12:45:40.508344 1 controllermanager.go:533] Started "endpoint"
I1205 12:45:40.508401 1 endpoints_controller.go:181] Starting endpoint controller
I1205 12:45:40.508407 1 shared_informer.go:197] Waiting for caches to sync for endpoint
I1205 12:45:40.658281 1 controllermanager.go:533] Started "tokencleaner"
I1205 12:45:40.658366 1 tokencleaner.go:117] Starting token cleaner controller
I1205 12:45:40.658376 1 shared_informer.go:197] Waiting for caches to sync for token_cleaner
I1205 12:45:40.658382 1 shared_informer.go:204] Caches are synced for token_cleaner
I1205 12:45:40.810934 1 controllermanager.go:533] Started "clusterrole-aggregation"
I1205 12:45:40.811037 1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I1205 12:45:40.811046 1 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
I1205 12:45:40.959686 1 controllermanager.go:533] Started "podgc"
I1205 12:45:40.960005 1 gc_controller.go:88] Starting GC controller
I1205 12:45:40.960027 1 shared_informer.go:197] Waiting for caches to sync for GC
I1205 12:45:41.108521 1 controllermanager.go:533] Started "csrapproving"
W1205 12:45:41.108565 1 controllermanager.go:525] Skipping "nodeipam"
I1205 12:45:41.108578 1 core.go:242] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W1205 12:45:41.108582 1 controllermanager.go:525] Skipping "route"
I1205 12:45:41.108546 1 certificate_controller.go:118] Starting certificate controller "csrapproving"
I1205 12:45:41.108613 1 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving
I1205 12:45:41.259198 1 controllermanager.go:533] Started "attachdetach"
I1205 12:45:41.259384 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1205 12:45:41.259602 1 attach_detach_controller.go:342] Starting attach detach controller
I1205 12:45:41.259608 1 shared_informer.go:197] Waiting for caches to sync for attach detach
W1205 12:45:41.267497 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1205 12:45:41.308958 1 shared_informer.go:204] Caches are synced for HPA
I1205 12:45:41.310055 1 shared_informer.go:204] Caches are synced for PVC protection
I1205 12:45:41.311207 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I1205 12:45:41.311266 1 shared_informer.go:204] Caches are synced for taint
I1205 12:45:41.311439 1 taint_manager.go:186] Starting NoExecuteTaintManager
I1205 12:45:41.311543 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W1205 12:45:41.311776 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1205 12:45:41.311833 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"fea930d1-a27f-45ed-93f6-fc868f9a6484", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1205 12:45:41.311932 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I1205 12:45:41.329820 1 shared_informer.go:204] Caches are synced for expand
I1205 12:45:41.333697 1 shared_informer.go:204] Caches are synced for namespace
I1205 12:45:41.336757 1 shared_informer.go:204] Caches are synced for job
I1205 12:45:41.339570 1 shared_informer.go:204] Caches are synced for ReplicaSet
I1205 12:45:41.347754 1 shared_informer.go:204] Caches are synced for TTL
I1205 12:45:41.348208 1 shared_informer.go:204] Caches are synced for service account
I1205 12:45:41.356994 1 shared_informer.go:204] Caches are synced for daemon sets
I1205 12:45:41.357005 1 shared_informer.go:204] Caches are synced for ReplicationController
I1205 12:45:41.358500 1 shared_informer.go:204] Caches are synced for PV protection
I1205 12:45:41.358770 1 shared_informer.go:204] Caches are synced for persistent volume
I1205 12:45:41.359040 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I1205 12:45:41.359835 1 shared_informer.go:204] Caches are synced for attach detach
I1205 12:45:41.360246 1 shared_informer.go:204] Caches are synced for GC
I1205 12:45:41.363838 1 shared_informer.go:204] Caches are synced for deployment
I1205 12:45:41.408593 1 shared_informer.go:204] Caches are synced for endpoint
I1205 12:45:41.459086 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1205 12:45:41.473585 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I1205 12:45:41.508830 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I1205 12:45:41.523252 1 shared_informer.go:204] Caches are synced for stateful set
I1205 12:45:41.708624 1 shared_informer.go:204] Caches are synced for disruption
I1205 12:45:41.708655 1 disruption.go:338] Sending events to api server.
I1205 12:45:41.718886 1 shared_informer.go:204] Caches are synced for resource quota
I1205 12:45:41.759707 1 shared_informer.go:204] Caches are synced for resource quota
I1205 12:45:41.859424 1 shared_informer.go:204] Caches are synced for garbage collector
I1205 12:45:41.863887 1 shared_informer.go:204] Caches are synced for garbage collector
I1205 12:45:41.863920 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage

==> kube-proxy ["c4dff44ab519"] <==
W1204 16:41:26.010957 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I1204 16:41:26.019684 1 node.go:135] Successfully retrieved node IP: 192.168.46.142
I1204 16:41:26.019728 1 server_others.go:145] Using iptables Proxier.
W1204 16:41:26.019810 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1204 16:41:26.020144 1 server.go:571] Version: v1.17.0-rc.1
I1204 16:41:26.020676 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 393216
I1204 16:41:26.020750 1 conntrack.go:52] Setting nf_conntrack_max to 393216
I1204 16:41:26.021665 1 conntrack.go:83] Setting conntrack hashsize to 98304
I1204 16:41:26.036319 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1204 16:41:26.036463 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1204 16:41:26.036661 1 config.go:131] Starting endpoints config controller
I1204 16:41:26.036699 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1204 16:41:26.036727 1 config.go:313] Starting service config controller
I1204 16:41:26.036739 1 shared_informer.go:197] Waiting for caches to sync for service config
I1204 16:41:26.136868 1 shared_informer.go:204] Caches are synced for service config
I1204 16:41:26.137453 1 shared_informer.go:204] Caches are synced for endpoints config
E1205 12:43:40.092229 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=356&timeout=6m13s&timeoutSeconds=373&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.092776 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=79681&timeout=6m49s&timeoutSeconds=409&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-proxy ["edbb424ad0fc"] <==
W1205 12:45:23.495549 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I1205 12:45:23.518591 1 node.go:135] Successfully retrieved node IP: 192.168.46.142
I1205 12:45:23.518634 1 server_others.go:145] Using iptables Proxier.
W1205 12:45:23.518768 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1205 12:45:23.518988 1 server.go:571] Version: v1.17.0-rc.1
I1205 12:45:23.520778 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 393216
I1205 12:45:23.520816 1 conntrack.go:52] Setting nf_conntrack_max to 393216
I1205 12:45:23.521137 1 conntrack.go:83] Setting conntrack hashsize to 98304
I1205 12:45:23.529958 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1205 12:45:23.530008 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1205 12:45:23.530952 1 config.go:313] Starting service config controller
I1205 12:45:23.531026 1 shared_informer.go:197] Waiting for caches to sync for service config
I1205 12:45:23.531126 1 config.go:131] Starting endpoints config controller
I1205 12:45:23.531156 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1205 12:45:23.631521 1 shared_informer.go:204] Caches are synced for endpoints config
I1205 12:45:23.632298 1 shared_informer.go:204] Caches are synced for service config

==> kube-scheduler ["52476de3d24e"] <==
I1204 16:40:58.172653 1 serving.go:312] Generated self-signed cert in-memory
W1204 16:40:58.753688 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1204 16:40:58.753878 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1204 16:41:01.787130 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1204 16:41:01.787159 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1204 16:41:01.787167 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W1204 16:41:01.787172 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W1204 16:41:01.802768 1 authorization.go:47] Authorization is disabled
W1204 16:41:01.802794 1 authentication.go:92] Authentication is disabled
I1204 16:41:01.802807 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1204 16:41:01.804589 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 16:41:01.804615 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 16:41:01.804936 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I1204 16:41:01.805012 1 tlsconfig.go:219] Starting DynamicServingCertificateController
E1204 16:41:01.810624 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1204 16:41:01.820668 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1204 16:41:01.823634 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1204 16:41:01.823687 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1204 16:41:01.823748 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1204 16:41:01.823776 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1204 16:41:01.823800 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1204 16:41:01.823881 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1204 16:41:01.823858 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1204 16:41:01.824924 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1204 16:41:01.825696 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1204 16:41:01.827113 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1204 16:41:02.811995 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1204 16:41:02.821900 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1204 16:41:02.825134 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1204 16:41:02.826814 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1204 16:41:02.828002 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1204 16:41:02.828922 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1204 16:41:02.830590 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1204 16:41:02.831457 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1204 16:41:02.832560 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1204 16:41:02.834096 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1204 16:41:02.835094 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1204 16:41:02.836442 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I1204 16:41:03.905376 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 16:41:03.906013 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I1204 16:41:03.914190 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E1204 16:41:12.708677 1 factory.go:494] pod is already present in the activeQ
E1205 12:43:40.090087 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=467&timeout=7m59s&timeoutSeconds=479&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090262 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090339 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=8m16s&timeoutSeconds=496&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090418 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=356&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090452 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=5&timeout=8m42s&timeoutSeconds=522&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090508 1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=72232&timeout=5m35s&timeoutSeconds=335&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090662 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=332&timeout=8m25s&timeoutSeconds=505&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090743 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=79415&timeout=5m13s&timeoutSeconds=313&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090804 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=72381&timeout=9m32s&timeoutSeconds=572&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.090854 1 reflector.go:320] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=72390&timeoutSeconds=409&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1205 12:43:40.091706 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m1s&timeoutSeconds=541&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-scheduler ["c36f5378a1b8"] <==
I1205 12:45:17.169311 1 serving.go:312] Generated self-signed cert in-memory
W1205 12:45:17.555887 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1205 12:45:17.555976 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1205 12:45:21.334464 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1205 12:45:21.334500 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1205 12:45:21.334509 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W1205 12:45:21.334514 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W1205 12:45:21.359516 1 authorization.go:47] Authorization is disabled
W1205 12:45:21.359627 1 authentication.go:92] Authentication is disabled
I1205 12:45:21.359666 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1205 12:45:21.363880 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1205 12:45:21.363908 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1205 12:45:21.364235 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I1205 12:45:21.365841 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I1205 12:45:21.464209 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1205 12:45:21.467017 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I1205 12:45:39.832092 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Thu 2019-12-05 12:44:31 UTC, end at Thu 2019-12-05 12:54:12 UTC. --
Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.418599 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.519116 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.619343 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.719606 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.820906 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.921340 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.021706 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.122402 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.223414 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.324104 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.424979 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.525522 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.626449 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.727560 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.828864 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.930480 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.030694 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.132448 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.232780 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.333483 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.434375 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.535380 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.636131 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.736782 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.837026 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.938087 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:21 development kubelet[5769]: E1205 12:45:21.039551 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:21 development kubelet[5769]: E1205 12:45:21.139758 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:21 development kubelet[5769]: E1205 12:45:21.240277 5769 kubelet.go:2263] node "minikube" not found
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447641 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-tqjjp" (UniqueName: "kubernetes.io/secret/9248326c-1b3b-4b00-bc8d-7aa89849f5af-coredns-token-tqjjp") pod "coredns-6955765f44-tw2dt" (UID: "9248326c-1b3b-4b00-bc8d-7aa89849f5af")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447690 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ca0eea27-d79c-4d67-ab0d-0cf0e53efcb1-config-volume") pod "coredns-6955765f44-d4g9g" (UID: "ca0eea27-d79c-4d67-ab0d-0cf0e53efcb1")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447707 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/60580c40-e4a2-4c84-a34f-706423195a72-xtables-lock") pod "kube-proxy-z67hv" (UID: "60580c40-e4a2-4c84-a34f-706423195a72")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447723 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-489wd" (UniqueName: "kubernetes.io/secret/60580c40-e4a2-4c84-a34f-706423195a72-kube-proxy-token-489wd") pod "kube-proxy-z67hv" (UID: "60580c40-e4a2-4c84-a34f-706423195a72")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447775 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-z9rwj" (UniqueName: "kubernetes.io/secret/31207f1b-e2e1-4759-9055-147c13c015af-storage-provisioner-token-z9rwj") pod "storage-provisioner" (UID: "31207f1b-e2e1-4759-9055-147c13c015af")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447817 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-tqjjp" (UniqueName: "kubernetes.io/secret/ca0eea27-d79c-4d67-ab0d-0cf0e53efcb1-coredns-token-tqjjp") pod "coredns-6955765f44-d4g9g" (UID: "ca0eea27-d79c-4d67-ab0d-0cf0e53efcb1")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447836 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/60580c40-e4a2-4c84-a34f-706423195a72-kube-proxy") pod "kube-proxy-z67hv" (UID: "60580c40-e4a2-4c84-a34f-706423195a72")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447851 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/60580c40-e4a2-4c84-a34f-706423195a72-lib-modules") pod "kube-proxy-z67hv" (UID: "60580c40-e4a2-4c84-a34f-706423195a72")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447887 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-zfhq5" (UniqueName: "kubernetes.io/secret/b2ae733d-5b4d-4696-8ae8-adc628af8e56-default-token-zfhq5") pod "registry-74q4h" (UID: "b2ae733d-5b4d-4696-8ae8-adc628af8e56")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447923 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9248326c-1b3b-4b00-bc8d-7aa89849f5af-config-volume") pod "coredns-6955765f44-tw2dt" (UID: "9248326c-1b3b-4b00-bc8d-7aa89849f5af")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447938 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-zfhq5" (UniqueName: "kubernetes.io/secret/010e32f8-64f8-4fa5-a087-a6e29befb230-default-token-zfhq5") pod "registry-proxy-jvwpv" (UID: "010e32f8-64f8-4fa5-a087-a6e29befb230")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447974 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/31207f1b-e2e1-4759-9055-147c13c015af-tmp") pod "storage-provisioner" (UID: "31207f1b-e2e1-4759-9055-147c13c015af")
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.448070 5769 reconciler.go:156] Reconciler: start to sync state
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.744572 5769 kubelet_node_status.go:112] Node minikube was previously registered
Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.744728 5769 kubelet_node_status.go:73] Successfully registered node minikube
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.095122 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-d4g9g through plugin: invalid network status for
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.097996 5769 pod_container_deletor.go:75] Container "1ce7ab3aa409a217b54adee61f49b8b09a4a3c01894d61fa25ac866b76124152" not found in pod's containers
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.161432 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-tw2dt through plugin: invalid network status for
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.165463 5769 pod_container_deletor.go:75] Container "11a14528716739b3d9a45a50df3a08acb9025d48e562bfafefec75bff3616692" not found in pod's containers
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.177689 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-proxy-jvwpv through plugin: invalid network status for
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.187125 5769 pod_container_deletor.go:75] Container "2f825d31868f22be426984cf1a22ef85708563176264e14f28f06f58f93a26a5" not found in pod's containers
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.200903 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-74q4h through plugin: invalid network status for
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.211262 5769 pod_container_deletor.go:75] Container "870d789b8d1f4dfbfcd588fb28a73f76c106a08217c0457431f5c333c9bc5f2e" not found in pod's containers
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.216585 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-74q4h through plugin: invalid network status for
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.225638 5769 pod_container_deletor.go:75] Container "e8992e1c5c8dd33e23be094d5b2cc0ad334dc364ac5fe08d89944115c8f1f4f5" not found in pod's containers
Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.239531 5769 pod_container_deletor.go:75] Container "c7fbb0634b303b455a3a264336e4192f5d92449d6bdfb946a9c109e9a7c3efcb" not found in pod's containers
Dec 05 12:45:24 development kubelet[5769]: W1205 12:45:24.249718 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-tw2dt through plugin: invalid network status for
Dec 05 12:45:24 development kubelet[5769]: W1205 12:45:24.257887 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-proxy-jvwpv through plugin: invalid network status for
Dec 05 12:45:24 development kubelet[5769]: W1205 12:45:24.273402 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-d4g9g through plugin: invalid network status for
Dec 05 12:45:24 development kubelet[5769]: W1205 12:45:24.290181 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-74q4h through plugin: invalid network status for
Dec 05 12:45:54 development kubelet[5769]: E1205 12:45:54.622700 5769 pod_workers.go:191] Error syncing pod 31207f1b-e2e1-4759-9055-147c13c015af ("storage-provisioner_kube-system(31207f1b-e2e1-4759-9055-147c13c015af)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(31207f1b-e2e1-4759-9055-147c13c015af)"

==> storage-provisioner ["7684a0b5cb3b"] <==

==> storage-provisioner ["d68791efff23"] <==
F1205 12:45:53.616669 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout

The operating system version:
macOS Catalina 10.15.1

minikube version

minikube version: v1.6.0-beta.1
commit: a7a5d1e981c85e97f35be38ff5cb8f510a570eea-dirty

vmrun --help

vmrun version 1.17.0 build-15018442

docker (minikube) info

Client:
 Debug Mode: false

Server:
 Containers: 44
  Running: 22
  Paused: 0
  Stopped: 22
 Images: 24
 Server Version: 19.03.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version:
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.81
 Operating System: Buildroot 2019.02.7
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 15.66GiB
 Name: development
 ID: WWII:LKCO:Q32Y:XZL5:FALW:M4XN:HAO6:NWU7:BNRL:JN6I:3NY5:RUWK
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
  provider=vmware
 Experimental: false
 Insecure Registries:
  10.96.0.0/12
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

docker (for mac) info

Client:
 Debug Mode: false

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 1
 Server Version: 19.03.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.9.184-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 1.952GiB
 Name: docker-desktop
 ID: 7WZI:OULF:7HCR:5XXT:6ICA:GB4V:XRZF:FB3Q:MUPL:J2K2:7GBZ:4ROZ
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 31
  Goroutines: 51
  System Time: 2019-12-05T13:00:42.572521705Z
  EventsListeners: 2
 HTTP Proxy: gateway.docker.internal:3128
 HTTPS Proxy: gateway.docker.internal:3129
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
@mvgijssel changed the title from "Docker build/run hangs on apt-key add command" to "Docker build hangs on apt-key add command" on Dec 5, 2019
@medyagh
Member

medyagh commented Dec 5, 2019

@mvgijssel I am curious: is there a reason you build the Dockerfile inside minikube, and not use minikube docker-env or the registry addon?

@medyagh added the triage/needs-information label on Dec 5, 2019
@mvgijssel
Author

mvgijssel commented Dec 6, 2019

Hi @medyagh,

Sorry that I didn't make this clear, but I'm using the local docker cli with the minikube docker host:

eval $(minikube docker-env)
docker build --tag test -f Dockerfile.test tmp

Updated description accordingly!

@tstromberg added the kind/support label on Dec 19, 2019
@sharifelgamal
Collaborator

Running the command with strace would help figure out why the apt-key command is consuming an entire CPU core.
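
One way to get strace onto apt-key itself rather than onto wget (a sketch only; the piped form from the original Dockerfile traces wget, not apt-key, the file paths here are arbitrary, and ptrace may be blocked depending on the builder's seccomp settings):

RUN apt-get update && apt-get install -y strace
# Download the key first, then trace apt-key add directly; -f follows the
# apt-config/gpg children and -o writes the trace to a file inside the layer.
RUN wget --quiet -O /tmp/ACCC4CF8.asc https://www.postgresql.org/media/keys/ACCC4CF8.asc \
 && strace -f -o /tmp/apt-key.trace apt-key add /tmp/ACCC4CF8.asc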

@priyawadhwa

Hey @mvgijssel could you try running the command with strace and provide the output here?

@mvgijssel
Author

Sorry for the late reply!

Updated the Dockerfile

FROM ruby:2.4.5-jessie
RUN apt-get update && apt-get install -y strace
RUN strace wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -

With docker buildkit

Run the build

export DOCKER_BUILDKIT=1
docker build --tag test -f Dockerfile.test .

Output:


[+] Building 272.7s (4/6)
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile.test 0.0s
[+] Building 272.9s (4/6)
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile.test 0.0s
[+] Building 6039.6s (4/6)
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile.test 0.0s
=> => transferring dockerfile: 216B 0.0s
=> [internal] load metadata for docker.io/library/ruby:2.4.5-jessie 0.0s
=> CACHED [1/3] FROM docker.io/library/ruby:2.4.5-jessie 0.0s
=> [2/3] RUN apt-get update && apt-get install -y strace 6039.5s

Which hangs while trying to install strace 🙄.

Without docker buildkit

Run the build

export DOCKER_BUILDKIT=0
docker build --tag test -f Dockerfile.test .

Output:


Sending build context to Docker daemon 1.583kB
Step 1/3 : FROM ruby:2.4.5-jessie
---> 1b89e730692c
Step 2/3 : RUN apt-get update && apt-get install -y strace
---> Running in 25545b5c80ca
Ign http://deb.debian.org jessie InRelease
Get:1 http://deb.debian.org jessie Release.gpg [1652 B]
Get:2 http://deb.debian.org jessie Release [77.3 kB]
Get:3 http://security.debian.org jessie/updates InRelease [44.9 kB]
Get:4 http://deb.debian.org jessie/main amd64 Packages [9098 kB]
Get:5 http://security.debian.org jessie/updates/main amd64 Packages [948 kB]
Fetched 10.2 MB in 19s (515 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
strace
0 upgraded, 1 newly installed, 0 to remove and 94 not upgraded.
Need to get 274 kB of archives.
After this operation, 1014 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian/ jessie/main strace amd64 4.9-2 [274 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 274 kB in 0s (2263 kB/s)
Selecting previously unselected package strace.
(Reading database ... 21225 files and directories currently installed.)
Preparing to unpack .../strace_4.9-2_amd64.deb ...
Unpacking strace (4.9-2) ...
Setting up strace (4.9-2) ...
Removing intermediate container 25545b5c80ca
---> 4ae718bff02a
Step 3/3 : RUN strace wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
---> Running in ce7a22be63cf
execve(0x7ffec5c04040, [0x7ffec5c053d0], [/* 0 vars */]) = 0
brk(0) = 0x14b1000
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cd5000
access(0x7f4800ad42d0, R_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800ad1ce9, O_RDONLY|O_CLOEXEC) = 3
fstat(3, {...}) = 0
mmap(NULL, 32252, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f4800ccd000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd1ef6, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee1060, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 2298680, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4800885000
mprotect(0x7f48008b5000, 2097152, PROT_NONE) = 0
mmap(0x7f4800ab5000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x30000) = 0x7f4800ab5000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd2fca, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee1030, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 3300992, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f480055f000
mprotect(0x7f4800678000, 2097152, PROT_NONE) = 0
mmap(0x7f4800878000, 49152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x119000) = 0x7f4800878000
mmap(0x7f4800884000, 3712, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f4800884000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd01fa, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee1000, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccc000
mmap(NULL, 2204200, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4800344000
mprotect(0x7f480035e000, 2093056, PROT_NONE) = 0
mmap(0x7f480055d000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x19000) = 0x7f480055d000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd1594, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0fd0, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 2557792, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f48000d3000
mprotect(0x7f48000d6000, 2093056, PROT_NONE) = 0
mmap(0x7f48002d5000, 454656, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f48002d5000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd297e, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0fa0, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 2306528, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47ffe9f000
mprotect(0x7f47ffed1000, 2097152, PROT_NONE) = 0
mmap(0x7f48000d1000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x32000) = 0x7f48000d1000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd06d9, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0f70, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccb000
mmap(NULL, 2113952, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47ffc9a000
mprotect(0x7f47ffc9e000, 2093056, PROT_NONE) = 0
mmap(0x7f47ffe9d000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f47ffe9d000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd42bb, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0f40, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 3844640, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47ff8ef000
mprotect(0x7f47ffa90000, 2097152, PROT_NONE) = 0
mmap(0x7f47ffc90000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1a1000) = 0x7f47ffc90000
mmap(0x7f47ffc96000, 14880, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f47ffc96000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd1528, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0cd0, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 2381264, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47ff6a9000
mprotect(0x7f47ff6e5000, 2093056, PROT_NONE) = 0
mmap(0x7f47ff8e4000, 45056, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3b000) = 0x7f47ff8e4000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd0aec, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0ca0, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cca000
mmap(NULL, 2176392, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47ff495000
mprotect(0x7f47ff4a6000, 2097152, PROT_NONE) = 0
mmap(0x7f47ff6a6000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x11000) = 0x7f47ff6a6000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd2b81, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0c70, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 2287128, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47ff266000
mprotect(0x7f47ff294000, 2093056, PROT_NONE) = 0
mmap(0x7f47ff493000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2d000) = 0x7f47ff493000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd306e, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0c40, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 2632576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47fefe3000
mprotect(0x7f47ff064000, 2097152, PROT_NONE) = 0
mmap(0x7f47ff264000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x81000) = 0x7f47ff264000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd29b4, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0c10, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cc9000
mmap(NULL, 3660656, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47fec65000
mprotect(0x7f47fedcd000, 2097152, PROT_NONE) = 0
mmap(0x7f47fefcd000, 73728, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x168000) = 0x7f47fefcd000
mmap(0x7f47fefdf000, 15216, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f47fefdf000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd372b, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0bb0, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 2131624, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47fea5c000
mprotect(0x7f47fea63000, 2097152, PROT_NONE) = 0
mmap(0x7f47fec63000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f47fec63000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd3b5e, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0b80, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 2109712, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47fe858000
mprotect(0x7f47fe85b000, 2093056, PROT_NONE) = 0
mmap(0x7f47fea5a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f47fea5a000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd1562, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0b50, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cc8000
mmap(NULL, 2213008, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47fe63b000
mprotect(0x7f47fe653000, 2093056, PROT_NONE) = 0
mmap(0x7f47fe852000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17000) = 0x7f47fe852000
mmap(0x7f47fe854000, 13456, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f47fe854000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd2b46, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0b20, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 25608208, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47fcdce000
mprotect(0x7f47fe43a000, 2093056, PROT_NONE) = 0
mmap(0x7f47fe639000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x166b000) = 0x7f47fe639000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd0fa1, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0af0, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 3188384, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47fcac3000
mprotect(0x7f47fcbaf000, 2097152, PROT_NONE) = 0
mmap(0x7f47fcdaf000, 40960, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xec000) = 0x7f47fcdaf000
mmap(0x7f47fcdb9000, 83616, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f47fcdb9000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd21e7, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0ac0, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cc7000
mmap(NULL, 3146072, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47fc7c2000
mprotect(0x7f47fc8c2000, 2093056, PROT_NONE) = 0
mmap(0x7f47fcac1000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xff000) = 0x7f47fcac1000
close(3) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cd3485, O_RDONLY|O_CLOEXEC) = 3
read(3, 0x7ffc34ee0a90, 832) = 832
fstat(3, {...}) = 0
mmap(NULL, 2185952, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f47fc5ac000
mprotect(0x7f47fc5c2000, 2093056, PROT_NONE) = 0
mmap(0x7f47fc7c1000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x15000) = 0x7f47fc7c1000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cc6000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cc5000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cc3000
arch_prctl(ARCH_SET_FS, 0x7f4800cc3780) = 0
mprotect(0x7f47ffc90000, 16384, PROT_READ) = 0
mprotect(0x7f47fcac1000, 4096, PROT_READ) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cc2000
mprotect(0x7f47fcdaf000, 32768, PROT_READ) = 0
mprotect(0x7f47fe639000, 4096, PROT_READ) = 0
mprotect(0x7f47fe852000, 4096, PROT_READ) = 0
mprotect(0x7f47fea5a000, 4096, PROT_READ) = 0
mprotect(0x7f47fec63000, 4096, PROT_READ) = 0
mprotect(0x7f47fefcd000, 69632, PROT_READ) = 0
mprotect(0x7f47ff264000, 4096, PROT_READ) = 0
mprotect(0x7f4800ab5000, 4096, PROT_READ) = 0
mprotect(0x7f47ff493000, 4096, PROT_READ) = 0
mprotect(0x7f47ff6a6000, 8192, PROT_READ) = 0
mprotect(0x7f47ff8e4000, 36864, PROT_READ) = 0
mprotect(0x7f47ffe9d000, 4096, PROT_READ) = 0
mprotect(0x7f48000d1000, 4096, PROT_READ) = 0
mprotect(0x7f48002d5000, 4096, PROT_READ) = 0
mprotect(0x7f480055d000, 4096, PROT_READ) = 0
mprotect(0x7f4800878000, 40960, PROT_READ) = 0
mprotect(0x662000, 4096, PROT_READ) = 0
mprotect(0x7f4800cd7000, 4096, PROT_READ) = 0
munmap(0x7f4800ccd000, 32252) = 0
set_tid_address(0x7f4800cc3a50) = 13
set_robust_list(0x7f4800cc3a60, 24) = 0
rt_sigaction(SIGRTMIN, {...}, NULL, 8) = 0
rt_sigaction(SIGRT_1, {...}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, 0x7ffc34ee2728, NULL, 8) = 0
getrlimit(RLIMIT_STACK, 0x7ffc34ee2710) = 0
futex(0x7f47ff8ee580, FUTEX_WAKE_PRIVATE, 2147483647) = 0
brk(0) = 0x14b1000
brk(0x14d2000) = 0x14d2000
open(0x7f4800658380, O_RDONLY) = 3
fcntl(3, F_GETFD) = 0
fcntl(3, F_SETFD, FD_CLOEXEC) = 0
fstat(3, {...}) = 0
getrusage(0x1 /* RUSAGE_??? */, {...}) = 0
read(3, 0x7ffc34ee2680, 32) = 32
read(3, 0x7ffc34ee2680, 32) = 32
read(3, 0x7ffc34ee2670, 8) = 8
clock_getres(CLOCK_MONOTONIC, {...}) = 0
clock_getres(CLOCK_MONOTONIC, {...}) = 0
stat(0x451919, {...}) = 0
open(0x451919, O_RDONLY) = 4
fstat(4, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800cd4000
read(4, 0x7f4800cd4000, 4096) = 4096
read(4, 0x7f4800cd4000, 4096) = 716
read(4, 0x7f4800cd4000, 4096) = 0
close(4) = 0
munmap(0x7f4800cd4000, 4096) = 0
stat(0x14b1ab0, 0x7ffc34ee2570) = -1 ENOENT (No such file or directory)
ioctl(2, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffc34ee25a0) = -1 ENOTTY (Inappropriate ioctl for device)
rt_sigaction(SIGHUP, {...}, {...}, 8) = 0
rt_sigaction(SIGHUP, {...}, {...}, 8) = 0
rt_sigaction(SIGUSR1, {...}, {...}, 8) = 0
rt_sigaction(SIGPIPE, {...}, {...}, 8) = 0
rt_sigaction(SIGWINCH, {...}, {...}, 8) = 0
open(0x7f47ffa55a50, O_RDONLY) = 4
fstat(4, {...}) = 0
mmap(NULL, 26258, PROT_READ, MAP_SHARED, 4, 0) = 0x7f4800cce000
close(4) = 0
futex(0x7f47ffc958f8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
stat(0x14b2130, 0x7ffc34ee21d0) = -1 ENOENT (No such file or directory)
open(0x7f47ffa538fd, O_RDONLY|O_CLOEXEC) = 4
fstat(4, {...}) = 0
fstat(4, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccd000
read(4, 0x7f4800ccd000, 4096) = 118
lseek(4, -62, SEEK_CUR) = 56
read(4, 0x7f4800ccd000, 4096) = 62
close(4) = 0
munmap(0x7f4800ccd000, 4096) = 0
fstat(3, {...}) = 0
open(0x7f480064d810, O_RDONLY) = 4
fstat(4, {...}) = 0
fstat(4, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccd000
lseek(4, 0, SEEK_CUR) = 0
mmap(NULL, 241664, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800c87000
read(4, 0x7f4800c87010, 237568) = 237568
read(4, 0x7f4800ccd000, 4096) = 478
read(4, 0x7f4800ccd000, 4096) = 0
close(4) = 0
munmap(0x7f4800ccd000, 4096) = 0
brk(0x14f3000) = 0x14f3000
brk(0x1514000) = 0x1514000
brk(0x1535000) = 0x1535000
brk(0x1556000) = 0x1556000
brk(0x1577000) = 0x1577000
brk(0x1598000) = 0x1598000
brk(0x15b9000) = 0x15b9000
brk(0x15da000) = 0x15da000
brk(0x15fb000) = 0x15fb000
brk(0x161c000) = 0x161c000
brk(0x163d000) = 0x163d000
brk(0x165e000) = 0x165e000
brk(0x167f000) = 0x167f000
brk(0x16a0000) = 0x16a0000
brk(0x16c1000) = 0x16c1000
brk(0x16e2000) = 0x16e2000
brk(0x1703000) = 0x1703000
brk(0x1724000) = 0x1724000
brk(0x1745000) = 0x1745000
brk(0x1766000) = 0x1766000
brk(0x1787000) = 0x1787000
brk(0x17a8000) = 0x17a8000
brk(0x17c9000) = 0x17c9000
brk(0x17ea000) = 0x17ea000
brk(0x180b000) = 0x180b000
brk(0x182c000) = 0x182c000
brk(0x184d000) = 0x184d000
brk(0x186e000) = 0x186e000
brk(0x188f000) = 0x188f000
brk(0x18b0000) = 0x18b0000
brk(0x18d1000) = 0x18d1000
brk(0x18f2000) = 0x18f2000
munmap(0x7f4800c87000, 241664) = 0
stat(0x7ffc34ee1c00, 0x7ffc34ee1c70) = -1 ENOENT (No such file or directory)
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 4
connect(4, {...}, 110) = -1 ENOENT (No such file or directory)
close(4) = 0
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 4
connect(4, {...}, 110) = -1 ENOENT (No such file or directory)
close(4) = 0
open(0x7f47ffa55325, O_RDONLY|O_CLOEXEC) = 4
fstat(4, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccd000
read(4, 0x7f4800ccd000, 4096) = 497
read(4, 0x7f4800ccd000, 4096) = 0
close(4) = 0
munmap(0x7f4800ccd000, 4096) = 0
open(0x7f47ffa5521d, O_RDONLY|O_CLOEXEC) = 4
fstat(4, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccd000
read(4, 0x7f4800ccd000, 4096) = 9
read(4, 0x7f4800ccd000, 4096) = 0
close(4) = 0
munmap(0x7f4800ccd000, 4096) = 0
futex(0x7f47ffc98bc4, FUTEX_WAKE_PRIVATE, 2147483647) = 0
open(0x7f47ffa551d6, O_RDONLY|O_CLOEXEC) = 4
fstat(4, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccd000
read(4, 0x7f4800ccd000, 4096) = 589
read(4, 0x7f4800ccd000, 4096) = 0
close(4) = 0
munmap(0x7f4800ccd000, 4096) = 0
uname({...}) = 0
open(0x7f4800ad1ce9, O_RDONLY|O_CLOEXEC) = 4
fstat(4, {...}) = 0
mmap(NULL, 32252, PROT_READ, MAP_PRIVATE, 4, 0) = 0x7f4800cba000
close(4) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cbed45, O_RDONLY|O_CLOEXEC) = 4
read(4, 0x7ffc34ee0b60, 832) = 832
fstat(4, {...}) = 0
mmap(NULL, 2144392, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x7f47fc3a0000
mprotect(0x7f47fc3ab000, 2093056, PROT_NONE) = 0
mmap(0x7f47fc5aa000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0xa000) = 0x7f47fc5aa000
close(4) = 0
mprotect(0x7f47fc5aa000, 4096, PROT_READ) = 0
munmap(0x7f4800cba000, 32252) = 0
open(0x7f47fc3a8bd1, O_RDONLY|O_CLOEXEC) = 4
fstat(4, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccd000
read(4, 0x7f4800ccd000, 4096) = 174
read(4, 0x7f4800ccd000, 4096) = 0
close(4) = 0
munmap(0x7f4800ccd000, 4096) = 0
open(0x7f4800ad1ce9, O_RDONLY|O_CLOEXEC) = 4
fstat(4, {...}) = 0
mmap(NULL, 32252, PROT_READ, MAP_PRIVATE, 4, 0) = 0x7f4800cba000
close(4) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cbedb7, O_RDONLY|O_CLOEXEC) = 4
read(4, 0x7ffc34ee0b70, 832) = 832
fstat(4, {...}) = 0
mmap(NULL, 2117896, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x7f47fc19a000
mprotect(0x7f47fc19f000, 2093056, PROT_NONE) = 0
mmap(0x7f47fc39e000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x4000) = 0x7f47fc39e000
close(4) = 0
access(0x7f4800ad17b3, F_OK) = -1 ENOENT (No such file or directory)
open(0x7f4800cbe413, O_RDONLY|O_CLOEXEC) = 4
read(4, 0x7ffc34ee08d0, 832) = 832
fstat(4, {...}) = 0
mmap(NULL, 2189928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x7f47fbf83000
mprotect(0x7f47fbf97000, 2093056, PROT_NONE) = 0
mmap(0x7f47fc196000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x13000) = 0x7f47fc196000
mmap(0x7f47fc198000, 6760, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f47fc198000
close(4) = 0
mprotect(0x7f47fc196000, 4096, PROT_READ) = 0
mprotect(0x7f47fc39e000, 4096, PROT_READ) = 0
munmap(0x7f4800cba000, 32252) = 0
stat(0x7f47ffa551d6, {...}) = 0
open(0x7f47ffa551d6, O_RDONLY|O_CLOEXEC) = 4
fstat(4, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccd000
read(4, 0x7f4800ccd000, 4096) = 589
read(4, 0x7f4800ccd000, 4096) = 0
close(4) = 0
munmap(0x7f4800ccd000, 4096) = 0
uname({...}) = 0
socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 4
connect(4, {...}, 16) = 0
poll([?] 0x7ffc34edfd40, 1, 0) = 1
sendmmsg(4, {0x7ffc34edfd80, 0x7ffc34edfdc0}, 2, MSG_NOSIGNAL) = 2
poll([?] 0x7ffc34edfd40, 1, 5000) = 1
ioctl(4, FIONREAD, 0x7ffc34edfd3c) = 0
recvfrom(4, 0x7ffc34ee0b40, 2048, 0, {...}, [?]) = 110
poll([?] 0x7ffc34edfd40, 1, 4888) = 1
ioctl(4, FIONREAD, 0x7ffc34ee1374) = 0
recvfrom(4, 0x18ddf20, 65536, 0, {...}, [?]) = 146
close(4) = 0
open(0x7f47ffa53e3f, O_RDONLY|O_CLOEXEC) = 4
fstat(4, {...}) = 0
fstat(4, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccd000
read(4, 0x7f4800ccd000, 4096) = 2584
read(4, 0x7f4800ccd000, 4096) = 0
close(4) = 0
munmap(0x7f4800ccd000, 4096) = 0
futex(0x7f47ffc96f64, FUTEX_WAKE_PRIVATE, 2147483647) = 0
socket(PF_NETLINK, SOCK_RAW, NETLINK_ROUTE) = 4
bind(4, {...}, 12) = 0
getsockname(4, 0x7ffc34ee1830, 0x7ffc34ee182c) = 0
sendto(4, 0x7ffc34ee1780, 20, 0, {...}, 12) = 20
recvmsg(4, 0x7ffc34ee17a0, 0) = 164
recvmsg(4, 0x7ffc34ee17a0, 0) = 20
close(4) = 0
socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4
connect(4, {...}, 16) = 0
getsockname(4, 0x7ffc34ee1888, 0x7ffc34ee1a80) = 0
connect(4, {...}, 16) = 0
connect(4, {...}, 16) = 0
getsockname(4, 0x7ffc34ee18b8, 0x7ffc34ee1a80) = 0
connect(4, {...}, 16) = 0
connect(4, {...}, 16) = 0
getsockname(4, 0x7ffc34ee18e8, 0x7ffc34ee1a80) = 0
close(4) = 0
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 4
connect(4, {...}, 28) = -1 EADDRNOTAVAIL (Cannot assign requested address)
connect(4, {...}, 16) = 0
connect(4, {...}, 28) = -1 EADDRNOTAVAIL (Cannot assign requested address)
connect(4, {...}, 16) = 0
connect(4, {...}, 28) = -1 EADDRNOTAVAIL (Cannot assign requested address)
close(4) = 0
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 4
connect(4, {...}, 16) = 0
writev(4, [?] 0x7ffc34ee1900, 1) = 231
recvfrom(4, 0x18ddf70, 5, 0, NULL, NULL) = 5
recvfrom(4, 0x18e2920, 93, 0, NULL, NULL) = 93
recvfrom(4, 0x18ddf70, 5, 0, NULL, NULL) = 5
recvfrom(4, 0x18e2920, 2911, 0, NULL, NULL) = 2911
recvfrom(4, 0x18df2e0, 5, 0, NULL, NULL) = 5
recvfrom(4, 0x18e3c90, 621, 0, NULL, NULL) = 621
recvfrom(4, 0x18df2e0, 5, 0, NULL, NULL) = 5
recvfrom(4, 0x18e3c90, 4, 0, NULL, NULL) = 4
getrusage(0x1 /* RUSAGE_??? */, {...}) = 0
writev(4, [?] 0x7ffc34ee1970, 3) = 158
recvfrom(4, 0x18df2e0, 5, 0, NULL, NULL) = 5
recvfrom(4, 0x18e3c90, 1, 0, NULL, NULL) = 1
recvfrom(4, 0x18df2e0, 5, 0, NULL, NULL) = 5
recvfrom(4, 0x18e3c90, 40, 0, NULL, NULL) = 40
select(5, NULL, [?], NULL, {...}) = 1 (left {...})
writev(4, [?] 0x7ffc34ee18b0, 1) = 168
select(5, [?], NULL, NULL, {...}) = 1 (left {...})
select(5, [?], NULL, NULL, {...}) = 1 (left {...})
fcntl(4, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0
select(5, [?], NULL, NULL, {...}) = 1 (left {...})
recvfrom(4, 0x18ddf70, 5, 0, NULL, NULL) = 5
recvfrom(4, 0x18e2920, 5221, 0, NULL, NULL) = 5221
fcntl(4, F_SETFL, O_RDWR) = 0
stat(0x14b2130, 0x7ffc34ee1ca0) = -1 ENOENT (No such file or directory)
open(0x14d1f40, O_RDONLY|O_NOFOLLOW) = -1 ENOENT (No such file or directory)
fstat(1, {...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4800ccd000
write(1, 0x7f4800ccd000, 126) = 126
fcntl(4, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0
fcntl(4, F_SETFL, O_RDWR) = 0
write(1, 0x7f4800ccd000, 4096) = 4096
write(1, 0x7f4800ccd000, 590) = 590
stat(0x7f47ffa538fd, {...}) = 0
close(1) = 0
munmap(0x7f4800ccd000, 4096) = 0
exit_group(0) = ?
+++ exited with 0 +++
OK
Removing intermediate container ce7a22be63cf
---> 610e7580af93
Successfully built 610e7580af93
Successfully tagged test:latest

Which seems to work fine.

@mvgijssel
Author

Tried another approach to get strace working in docker build with buildkit enabled. First created the Dockerfile.base_test file:

FROM ruby:2.4.5-jessie
RUN apt-get update && apt-get install -y strace

And built that image without buildkit

export DOCKER_BUILDKIT=0
docker build --tag base_test -f Dockerfile.base_test .

Next created another Dockerfile, Dockerfile.test

FROM base_test
RUN strace wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -

and built that with buildkit enabled:

docker build --tag test -f Dockerfile.test .

which hangs at the same apt-config shell APT_DIR Dir as in the initial issue:

[+] Building 204.5s (4/5)
 => [internal] load .dockerignore                                                                                                                                                                                                      0.0s
 => => transferring context: 2B                                                                                                                                                                                                        0.0s
 => [internal] load build definition from Dockerfile.test2                                                                                                                                                                             0.0s
 => => transferring dockerfile: 42B                                                                                                                                                                                                    0.0s
 => [internal] load metadata for docker.io/library/base_test:latest                                                                                                                                                                    0.0s
 => CACHED [1/2] FROM docker.io/library/base_test                                                                                                                                                                                      0.0s
 => [2/2] RUN strace wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -                                                                                                                            204.4s
 => => # strace: test_ptrace_setoptions_for_all: PTRACE_TRACEME doesn't work: Operation not permitted
 => => # strace: test_ptrace_setoptions_for_all: unexpected exit status 1

@afbjorklund
Collaborator

So the problem only occurs when using BuildKit (buildx), not with the default docker build?

@afbjorklund
Collaborator

afbjorklund commented Mar 18, 2020

The wget seems fine, but apt-key is doing something strange when running in the container.

For some reason it is trying to close every file descriptor under the sun, way over the usual...

[pid    21] fcntl(3, F_SETFD, FD_CLOEXEC) = 0
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(4, F_SETFD, FD_CLOEXEC) = 0
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(5, F_SETFD, FD_CLOEXEC) = 0
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(6, F_SETFD, FD_CLOEXEC) = -1 EBADF (Bad file descriptor)
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(7, F_SETFD, FD_CLOEXEC) = -1 EBADF (Bad file descriptor)
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(8, F_SETFD, FD_CLOEXEC) = -1 EBADF (Bad file descriptor)
...

https://packages.debian.org/jessie/apt

https://salsa.debian.org/apt-team/apt/-/blob/debian/jessie/apt-pkg/contrib/fileutl.cc

      // Close all of our FDs - just in case
      for (int K = 3; K != sysconf(_SC_OPEN_MAX); K++)
      {
	 if(KeepFDs.find(K) == KeepFDs.end())
	    fcntl(K,F_SETFD,FD_CLOEXEC);
      }
   }

Unfortunately, it never checks the return value of sysconf, for example whether it is -1.

Or if it returns a really big number. Say for instance 1048576, like in Docker...
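
A quick way to see what that unchecked sysconf(_SC_OPEN_MAX) actually returns inside a build step (a sketch; getconf comes with libc-bin, so it should already be present in the jessie image):

FROM ruby:2.4.5-jessie
# getconf OPEN_MAX prints sysconf(_SC_OPEN_MAX), the upper bound of apt's loop;
# ulimit -n shows the same soft nofile limit from the shell's point of view
RUN getconf OPEN_MAX && ulimit -n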

@afbjorklund
Collaborator

afbjorklund commented Mar 18, 2020

Takes 30 seconds on mine, as compared with 1 second if I add a ulimit -n 1024; before it.

But the "hang" above seems more like someone managed to set it back to "infinity" again ?

@afbjorklund
Collaborator

This bug is also fixed if you upgrade from Debian oldoldstable (jessie) to stable (buster).

Then it uses /proc/self/fd in apt, rather than looping over every possible file descriptor...
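
The difference between the two strategies, illustrated in shell rather than apt's C++ (purely a sketch of the idea, not apt's actual code):

# old apt (jessie): walk every possible descriptor number up to the nofile limit,
# open or not; under BuildKit's limit that is roughly a billion iterations
for (( K = 3; K < $(ulimit -n); K++ )); do : ; done

# newer apt: only descriptors that actually exist appear in /proc/self/fd,
# so this loop runs a handful of times regardless of the limit
for fd in /proc/self/fd/*; do : ; done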

@afbjorklund
Collaborator

The root cause is a difference in ulimit between the regular docker build and buildkit:

docker build (default builder):

time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        8192
coredump(blocks)     unlimited
memory(kbytes)       unlimited
locked memory(kbytes) 65536
process              unlimited
nofiles              1048576
vmemory(kbytes)      unlimited
locks                unlimited

docker build with BuildKit:

time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        8192
coredump(blocks)     unlimited
memory(kbytes)       unlimited
locked memory(kbytes) 65536
process              unlimited
nofiles              1073741816
vmemory(kbytes)      unlimited
locks                unlimited

So it is looping over 1024x more file descriptors than the regular docker build.
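
The two numbers are easy to reproduce by building the same one-line check under each builder (a sketch, reusing the getconf/ulimit check from above; the file name is arbitrary):

printf 'FROM ruby:2.4.5-jessie\nRUN getconf OPEN_MAX && ulimit -n\n' > Dockerfile.limits

# classic builder: should report the 1048576 from the first dump above
DOCKER_BUILDKIT=0 docker build --no-cache -f Dockerfile.limits .

# BuildKit: should report 1073741816; plain progress keeps the RUN output visible
DOCKER_BUILDKIT=1 docker build --no-cache --progress=plain -f Dockerfile.limits .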

@afbjorklund added the co/runtime/docker label on Mar 18, 2020
@mvgijssel
Author

Wow, thanks for the detailed answer! Very helpful ❤️. This gives me something to work with whenever I encounter this behaviour, so closing this.

@GolubevV

GolubevV commented May 12, 2020

Also hit the same behaviour, using skaffold + minikube + buildkit.
Had the same long-running yum install on a centos7 image.

As far as I understand, the only real fix for now is to explicitly set the ulimit inside the Dockerfile, which is far from perfect, as in the example here.

So it's not clear why the ticket was closed - are there any plans to address the issue within minikube?
Or is it expected to be somehow fixed within buildkit?

@chanseokoh

Assuming this issue was solely due to the apt behavior as diagnosed in this issue thread:

The latest apt fixed this issue by closing only those file descriptors that the (child-)process actually owns. The extremely inefficient code path that blindly and unnecessarily tries to close every conceivable file descriptor (i.e., attempting to close a whopping 1073741816 descriptors, where 1073741816 is the limit set by BuildKit) is now only the last resort when /proc/self/fd is not available. (I can't really imagine a system where /proc/self/fd is not available.)

The issue only occurred when using BuildKit, because BuildKit sets the ulimit to 1073741816, whereas the number is 1048576 without BuildKit; for example, if closing 1048576 fds took 1 second, then closing 1073741816 would take 1024 seconds (~17 minutes). And I guess this seems to matter a lot with minikube only because it's much slower to use the internal Docker in minikube.

So personally I'd put most of the blame on the old apt behavior, as setting a higher fd limit shouldn't be a problem in general. But you could argue BuildKit may not need to set the number that high; whether it is the intended config, I don't know. You could still argue BuildKit or minikube could lower the number to mitigate this particular issue, but I don't really think they have to. The issue is fixed in the latest apt.

@iconoeugen

This behaviour is observable in multiple situations and not only with the apt-key add command.

I have added a more detailed explanation with a solution to overcome this situation here: moby/moby#40398 (comment)

@chanseokoh

This behaviour is observable in multiple situations

I looked at your link with the detailed explanation, and it is exactly what has been discussed in this thread. If the "behavior" you are talking about is the ulimit of 1073741816, then yes, BuildKit sets it to 1073741816. And as I said, there's nothing inherently wrong with them setting 1073741816 as their default; it may be their intention (or not).

@GolubevV

Please note that the behaviour was also observed when running yum install, so it is not Debian-specific.

@chanseokoh

@GolubevV I suggest you file a bug against yum saying "yum install takes too much time (or hangs) when nofile ulimit is set to a very high number (e.g., 1073741816)". I think the most likely cause for your symptom (assuming the symptom goes away when setting the nofile ulimit to a low value) is a bug in yum.

@afbjorklund
Collaborator

This was fixed in rpm-software-management/rpm@5e6f05c for rpm (as used by yum)
