Load balancer is waiting for logs that already exist. #592

Closed
ybettan opened this issue May 8, 2021 · 12 comments · Fixed by #637

ybettan commented May 8, 2021

What did you do

  • How was the cluster created?

    • k3d cluster create --trace
  • What did you do afterwards?
    Nothing

What did you expect to happen

The k3d command to finish without error.

Screenshots or terminal output

 $ k3d cluster create --trace
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:19.03.15 OSType:linux OS:Red Hat Enterprise Linux 8.2 (Ootpa) Arch:x86_64 CgroupVersion: CgroupDriver:cgroupfs Filesystem:xfs}
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.20.6-k3s1
network: ""
options:
  k3d:
    disablehostipinjection: false
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    timeout: 0s
    wait: true
  k3s:
    extraagentargs: []
    extraserverargs: []
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    serversmemory: ""
registries:
  config: ""
  create: false
  use: []
servers: 1
subnet: ""
token: ""
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  labels: []
  ports: []
  volumes: []
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind: APIVersion:} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.20.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Labels:[] Options:{K3dOptions:{Wait:true Timeout:0s Disa
bleLoadbalancer:false DisableImageVolume:false NoRollback:false PrepDisableHostIPInjection:false NodeHookActions:[]} K3sOptions:{ExtraServerArgs:[] ExtraAgentArgs:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:tru
e} Runtime:{GPURequest: ServersMemory: AgentsMemory:}} Env:[] Registries:{Use:[] Create:false Config:}}
==========================
DEBU[0000] Port Exposure Mapping didn't specify hostPort, choosing one randomly...
DEBU[0000] Got free port for Port Exposure: '44271'
TRAC[0000] VolumeFilterMap: map[]
TRAC[0000] PortFilterMap: map[]
TRAC[0000] LabelFilterMap: map[]
TRAC[0000] EnvFilterMap: map[]
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind: APIVersion:} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP:0.0.0.0 HostPort:44271} Image:docker.io/rancher/k3s:v1.20.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Labels:[] Options:{K3dOptions:{Wait:true Tim
eout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false PrepDisableHostIPInjection:false NodeHookActions:[]} K3sOptions:{ExtraServerArgs:[] ExtraAgentArgs:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurren
tContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory:}} Env:[] Registries:{Use:[] Create:false Config:}}
==========================
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3s-default Network:{Name: ID: External:false IPAM:{IPPrefix:invalid IP prefix IPsUsed:[] Managed:false}} Token: Nodes:[0xc00041a840] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc0004009c0
ServerLoadBalancer:0xc00041a6e0 ImageVolume:} ClusterCreateOpts:{PrepDisableHostIPInjection:false DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false K3sServerArgs:[] K3sAgentArgs:[] GPURequest: ServersMemory:
AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] ===== Processed Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3s-default Network:{Name: ID: External:false IPAM:{IPPrefix:invalid IP prefix IPsUsed:[] Managed:false}} Token: Nodes:[0xc00041a840] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc0004009c0
ServerLoadBalancer:0xc00041a6e0 ImageVolume:} ClusterCreateOpts:{PrepDisableHostIPInjection:false DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false K3sServerArgs:[] K3sAgentArgs:[] GPURequest: ServersMemory:
AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default' (a84a27c62f0fba729e4927d28083d9e9b1d81d9912453404f9524d53ac0ab69c)
INFO[0000] Created volume 'k3d-k3s-default-images'
TRAC[0000] Using Registries: []
TRAC[0000]
===== Creating Cluster =====

Runtime:
{}

Cluster:
&{Name:k3s-default Network:{Name:k3d-k3s-default ID:a84a27c62f0fba729e4927d28083d9e9b1d81d9912453404f9524d53ac0ab69c External:false IPAM:{IPPrefix:172.28.0.0/16 IPsUsed:[] Managed:false}} Token: Nodes:[0xc00041a840] InitNode:<nil> External
Datastore:<nil> KubeAPI:0xc0004009c0 ServerLoadBalancer:0xc00041a6e0 ImageVolume:}

ClusterCreatOpts:
&{PrepDisableHostIPInjection:false DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false K3sServerArgs:[] K3sAgentArgs:[] GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d k3d.cluster
.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:a84a27c62f0fba729e4927d28083d9e9b1d81d9912453404f9524d53ac0ab69c k3d.cluster.network.iprange:172.28.0.0/16] G
lobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}}

============================

INFO[0001] Creating node 'k3d-k3s-default-server-0'
TRAC[0001] Creating node from spec
&{Name:k3d-k3s-default-server-0 Role:server Image:docker.io/rancher/k3s:v1.20.6-k3s1 Volumes:[k3d-k3s-default-images:/k3d/images] Env:[K3S_TOKEN=NbqBnzEqvtlzcesauAdS] Cmd:[] Args:[] Ports:map[] Restart:true Created: Labels:map[app:k3d k3d.
cluster:k3s-default k3d.cluster.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:a84a27c62f0fba729e4927d28083d9e9b1d81d9912453404f9524d53ac0ab69c k3d.cluster.n
etwork.iprange:172.28.0.0/16 k3d.cluster.token:NbqBnzEqvtlzcesauAdS k3d.cluster.url:https://k3d-k3s-default-server-0:6443] Networks:[k3d-k3s-default] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc0004009c0} AgentOpts:{} GPURequest: Mem
ory: State:{Running:false Status: Started:} IP:{IP:invalid IP Static:false}}
DEBU[0001] DockerHost:
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-k3s-default-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=NbqBnzEqvtlzcesauAdS K3S_KUBECONFIG_
OUTPUT=/output/kubeconfig.yaml] Cmd:[server --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3s:v1.20.6-k3s1 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app
:k3d k3d.cluster:k3s-default k3d.cluster.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:a84a27c62f0fba729e4927d28083d9e9b1d81d9912453404f9524d53ac0ab69c k3d.
cluster.network.iprange:172.28.0.0/16 k3d.cluster.token:NbqBnzEqvtlzcesauAdS k3d.cluster.url:https://k3d-k3s-default-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:44271 k3d.vers
ion:v4.4.3] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-k3s-default-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount
:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false Readonl
yRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightD
evice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[]
KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[
] Init:0xc00019fc4f} NetworkingConfig:{EndpointsConfig:map[k3d-k3s-default:0xc0003a2000]}}
DEBU[0001] Created container k3d-k3s-default-server-0 (ID: 76e90842af0e570c58543f645df6d1641e93805a3646d66c73a657d0a0e4a5ad)
DEBU[0001] Created node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
TRAC[0001] Creating node from spec
&{Name:k3d-k3s-default-serverlb Role:loadbalancer Image:docker.io/rancher/k3d-proxy:v4.4.3 Volumes:[] Env:[SERVERS=k3d-k3s-default-server-0 PORTS=6443 WORKER_PROCESSES=1] Cmd:[] Args:[] Ports:map[6443:[{HostIP:0.0.0.0 HostPort:44271}]] Res
tart:true Created: Labels:map[app:k3d k3d.cluster:k3s-default k3d.cluster.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:a84a27c62f0fba729e4927d28083d9e9b1d8
1d9912453404f9524d53ac0ab69c k3d.cluster.network.iprange:172.28.0.0/16 k3d.cluster.token:NbqBnzEqvtlzcesauAdS k3d.cluster.url:https://k3d-k3s-default-server-0:6443] Networks:[k3d-k3s-default] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:
<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:invalid IP Static:false}}
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-k3s-default-serverlb Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[6443:{}] Tty:false OpenStdin:false StdinOnce:false Env:[SERVERS=k3d-k3s-default-server-0 PORTS=
6443 WORKER_PROCESSES=1 K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-proxy:v4.4.3 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[]
 Labels:map[app:k3d k3d.cluster:k3s-default k3d.cluster.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:a84a27c62f0fba729e4927d28083d9e9b1d81d9912453404f9524d
53ac0ab69c k3d.cluster.network.iprange:172.28.0.0/16 k3d.cluster.token:NbqBnzEqvtlzcesauAdS k3d.cluster.url:https://k3d-k3s-default-server-0:6443 k3d.role:loadbalancer k3d.version:v4.4.3] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:
{Binds:[] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[6443:[{HostIP:0.0.0.0 HostPort:44271}]] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[]
 CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/ru
n: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] Blkio
DeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 Memory
Swap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc0000fa32f} NetworkingConfig:{EndpointsConfig:map[
k3d-k3s-default:0xc0005c4000]}}
DEBU[0001] Created container k3d-k3s-default-serverlb (ID: 0b0b7f834d0fa4f59d8fe0eb6bf9200d73cfde1048ec9b8f911d497cc3d6268e)
DEBU[0001] Created loadbalancer 'k3d-k3s-default-serverlb'
INFO[0001] Starting cluster 'k3s-default'
TRAC[0001] Servers before sort:
TRAC[0001] Server 0 - k3d-k3s-default-server-0
TRAC[0001] Servers after sort:
TRAC[0001] Server 0 - k3d-k3s-default-server-0
INFO[0001] Starting servers...
DEBU[0001] Node k3d-k3s-default-server-0 Start Time: 2021-05-08 20:21:26.111778842 +0000 UTC m=+1.237250395
TRAC[0001] Starting node 'k3d-k3s-default-server-0'
INFO[0001] Starting Node 'k3d-k3s-default-server-0'
DEBU[0001] Waiting for node k3d-k3s-default-server-0 to get ready (Log: 'k3s is up and running')
TRAC[0001] NodeWaitForLogMessage: Node 'k3d-k3s-default-server-0' waiting for log message 'k3s is up and running' since '2021-05-08 20:21:26.350310007 +0000 UTC'
DEBU[0005] Finished waiting for log message 'k3s is up and running' from node 'k3d-k3s-default-server-0'
INFO[0005] Starting agents...
INFO[0005] Starting helpers...
DEBU[0005] Node k3d-k3s-default-serverlb Start Time: 2021-05-08 20:21:30.393313428 +0000 UTC m=+5.518784977
TRAC[0005] Starting node 'k3d-k3s-default-serverlb'
INFO[0005] Starting Node 'k3d-k3s-default-serverlb'
DEBU[0005] Waiting for node k3d-k3s-default-serverlb to get ready (Log: 'start worker processes')
TRAC[0005] NodeWaitForLogMessage: Node 'k3d-k3s-default-serverlb' waiting for log message 'start worker processes' since '2021-05-08 20:21:30.664256482 +0000 UTC'

Which OS & Architecture

$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.2 (Ootpa)

Which version of k3d

$ k3d version
k3d version v4.4.3
k3s version v1.20.6-k3s1 (default)

Which version of docker

$ docker version
Client: Docker Engine - Community
 Version:           20.10.6
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        370c289
 Built:             Fri Apr  9 22:44:36 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.15
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       99e3ed8919
  Built:            Sat Jan 30 03:15:19 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
$ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
  scan: Docker Scan (Docker Inc.)

Server:
 Containers: 8
  Running: 2
  Paused: 0
  Stopped: 6
 Images: 48
 Server Version: 19.03.15
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.18.0-193.el8.x86_64
 Operating System: Red Hat Enterprise Linux 8.2 (Ootpa)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 62.61GiB
 Name: f13-h26-b02-5039ms.rdu2.scalelab.redhat.com
 ID: 5SUV:MRLB:E2BH:QJS5:EBRX:AO55:T65R:P6GH:AWIJ:3IUI:ME3S:QU2I
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: ybettan
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

It seems like k3d is waiting for logs that already exist (see the sketch after the logs below):

$ docker logs k3d-k3s-default-serverlb
2021-05-08T20:21:30Z k3d-k3s-default-serverlb confd[7]: INFO Backend set to env
2021-05-08T20:21:30Z k3d-k3s-default-serverlb confd[7]: INFO Starting confd
2021-05-08T20:21:30Z k3d-k3s-default-serverlb confd[7]: INFO Backend source(s) set to
2021-05-08T20:21:30Z k3d-k3s-default-serverlb confd[7]: INFO /etc/nginx/nginx.conf has md5sum 9c9065b8e74f4b01f6eed5a7af1141b6 should be ab00b91435ed084fe99763a1cc04db57
2021-05-08T20:21:30Z k3d-k3s-default-serverlb confd[7]: INFO Target config /etc/nginx/nginx.conf out of sync
2021-05-08T20:21:30Z k3d-k3s-default-serverlb confd[7]: INFO Target config /etc/nginx/nginx.conf has been updated
===== Initial nginx configuration =====
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1025;
}

stream {

  #######
  # TCP #
  #######
  upstream server_nodes_6443 {
    server k3d-k3s-default-server-0:6443 max_fails=1 fail_timeout=10s;
  }

  server {
    listen        6443;
    proxy_pass    server_nodes_6443;
    proxy_timeout 600;
    proxy_connect_timeout 2s;
  }

  #######
  # UDP #
  #######
}
=======================================
2021/05/08 20:21:30 [notice] 13#13: using the "epoll" event method
2021/05/08 20:21:30 [notice] 13#13: nginx/1.19.10
2021/05/08 20:21:30 [notice] 13#13: built by gcc 10.2.1 20201203 (Alpine 10.2.1_pre1)
2021/05/08 20:21:30 [notice] 13#13: OS: Linux 4.18.0-193.el8.x86_64
2021/05/08 20:21:30 [notice] 13#13: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/05/08 20:21:30 [notice] 13#13: start worker processes
2021/05/08 20:21:30 [notice] 13#13: start worker process 14
2021/05/08 20:21:30 [notice] 13#13: start worker process 15
2021/05/08 20:21:30 [notice] 13#13: start worker process 16
2021/05/08 20:21:30 [notice] 13#13: start worker process 17
2021/05/08 20:21:30 [notice] 13#13: start worker process 18
2021/05/08 20:21:30 [notice] 13#13: start worker process 19
2021/05/08 20:21:30 [notice] 13#13: start worker process 20
2021/05/08 20:21:30 [notice] 13#13: start worker process 21
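
For illustration only, here is a minimal, self-contained sketch of a "wait for log message since <node start time>" check, using the timestamps from the output above. This is not k3d's actual implementation; the function name and the timestamp-prefixed log format (as produced by docker logs --timestamps) are assumptions. It shows why "the log already exists" and "k3d keeps waiting" are not contradictory: nginx printed the message at second precision (20:21:30), while the trace waits for it since 20:21:30.664..., so a strict "since" filter never matches.

package main

import (
	"bufio"
	"fmt"
	"strings"
	"time"
)

// waitForLogMessage reports whether `message` appears on a log line whose
// timestamp is not older than `since`. A k3d-style waiter would re-run a
// check like this (or follow the stream) until it succeeds or times out.
func waitForLogMessage(logs, message string, since time.Time) bool {
	scanner := bufio.NewScanner(strings.NewReader(logs))
	for scanner.Scan() {
		// Assume each line is prefixed with an RFC3339 timestamp.
		parts := strings.SplitN(scanner.Text(), " ", 2)
		if len(parts) != 2 {
			continue
		}
		ts, err := time.Parse(time.RFC3339, parts[0])
		if err != nil {
			continue
		}
		if ts.Before(since) {
			continue // the line exists, but is discarded as "too old"
		}
		if strings.Contains(parts[1], message) {
			return true
		}
	}
	return false
}

func main() {
	// The nginx line from the serverlb logs above, at second precision.
	logs := "2021-05-08T20:21:30Z 13#13: start worker processes\n"

	// Start time recorded with nanosecond precision, a fraction of a
	// second *after* the log line's timestamp.
	since := time.Date(2021, 5, 8, 20, 21, 30, 664256482, time.UTC)

	// Prints false: the waiter would keep waiting for a message that is already there.
	fmt.Println(waitForLogMessage(logs, "start worker processes", since))
}

This only illustrates how a "since" filter can skip an existing line; the actual cause in k3d may be different.
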
@ybettan ybettan added the bug Something isn't working label May 8, 2021
@ybettan ybettan changed the title [BUG] Load balancer is waiting for logs that already exist. Load balancer is waiting for logs that already exist. May 8, 2021
iwilltry42 (Member) commented

Hi @ybettan, thanks for opening this issue!
This indeed looks weird.
Is the server container running? Usually one only sees this if the server is restarting constantly and the loadbalancer cannot connect 🤔
Are you running on a remote docker daemon (e.g. ssh connection)?

On a side note: which cgroup version is your system running? It doesn't seem to be shown in the docker info 🤔
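
For reference, one quick way to check: on a pure cgroup v2 (unified) host the file /sys/fs/cgroup/cgroup.controllers exists, while on v1 or hybrid setups it does not. A minimal sketch of that check (not part of k3d, just an illustration):

package main

import (
	"fmt"
	"os"
)

func main() {
	// On a unified cgroup v2 hierarchy, /sys/fs/cgroup is a cgroup2 mount
	// and exposes cgroup.controllers; on v1 (or hybrid) hosts it does not.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 or hybrid")
	}
}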

@iwilltry42 iwilltry42 added this to the Backlog milestone May 10, 2021
@iwilltry42 iwilltry42 self-assigned this May 10, 2021

ybettan commented May 11, 2021

Is the server container running?

Thank you for your answer @iwilltry42

If you are talking about the docker daemon, then yes, it is running:

$ systemctl is-active docker
active

Usually one only sees this if the server is restarting constantly and the loadbalancer cannot connect

I don't think this is the case here.

Are you running on a remote docker daemon (e.g. ssh connection)?

I am SSHing into that machine; k3d and the docker daemon are running on the same machine.

which cgroup version is your system running?

After following https://unix.stackexchange.com/questions/471476/how-do-i-check-cgroup-v2-is-installed-on-my-machine, seems like I have V2.

Maybe some extra info will help: when I reboot the machine, the first attempt to create the cluster usually works, but kubectl then gets stuck on any command until I kill the process. After I delete the created cluster and try to create it again, the error persists.

iwilltry42 (Member) commented

If you are talking about the docker daemon, then yes, it is running:

I actually meant the containers, which you can check via docker ps -a.
Then you can check the logs of the server container via docker logs k3d-k3s-default-server-0.

seems like I have V2.

Then you would need to run export K3D_FIX_CGROUPV2=true before doing anything with k3d so that it (k3s inside docker) works on cgroups v2.


ybettan commented May 12, 2021

I actually meant the containers, which you can check via docker ps -a.
Then you can check the logs of the server container via docker logs k3d-k3s-default-server-0.

Yes, it runs and the logs exist.

Then you would need to run export K3D_FIX_CGROUPV2=true before doing anything with k3d

That didn't work; same issue. The first installation works, but kubectl isn't responding (the process gets stuck until I kill it). After deleting the cluster and recreating it, I get the original issue: it gets stuck on the load balancer even though the logs already exist in the container.

iwilltry42 (Member) commented

Yes, it runs and the logs exist.

Can you please paste them here or in a Gist/pastebin?


ybettan commented May 18, 2021

$ export K3D_FIX_CGROUPV2=true
$
$ k3d cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default' (051a5188ad52a0c6f74a44fad94d72dc973598aad7d4a1f3544b3ef4fc8af6ff)
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0002] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0002] Starting cluster 'k3s-default'
INFO[0002] Starting servers...
INFO[0002] Starting Node 'k3d-k3s-default-server-0'
INFO[0008] Starting agents...
INFO[0008] Starting helpers...
INFO[0008] Starting Node 'k3d-k3s-default-serverlb'
...
(stuck)

$ docker logs k3d-k3s-default-server-0
time="2021-05-18T09:16:58.384962227Z" level=info msg="Starting k3s v1.20.6+k3s1 (8d043282)"
time="2021-05-18T09:16:58.388277933Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-05-18T09:16:58.388317637Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-05-18T09:16:58.389932412Z" level=info msg="Database tables and indexes are up to date"
time="2021-05-18T09:16:58.390884061Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-05-18T09:16:58.402153965Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.402751410Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.403293372Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.403791412Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.404358678Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.404867615Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.405486091Z" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.406225986Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.407216731Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"                                                                                                                                                        time="2021-05-18T09:16:58.408114102Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.408671648Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"                                                                                                                                                                     time="2021-05-18T09:16:58.409575031Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"
time="2021-05-18T09:16:58.560672208Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:58 +0000 UTC"                                                                                                                                                                        time="2021-05-18T09:16:58.560988269Z" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.96.2:192.168.96.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.defa
ult listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=4103A790209DDB05C6EDB463C4BC449F1A1DBFA5]"                                                                                                                                                 time="2021-05-18T09:16:58.565626972Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/s
erver/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --request
header-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.                                                                                                                                                                                                                                                                          I0518 09:16:58.566775       6 server.go:659] external host was not specified, using 192.168.96.2
I0518 09:16:58.567019       6 server.go:196] Version: v1.20.6+k3s1                                                                                                                                                                                                                                                                                                           I0518 09:16:58.949967       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0518 09:16:58.950307       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.                                                  I0518 09:16:58.950149       6 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0518 09:16:58.951488       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.              I0518 09:16:58.951505       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0518 09:16:58.968013       6 instance.go:289] Using reconciler: lease                                                                                                                                                                                                                                                                                                       I0518 09:16:58.997929       6 rest.go:131] the default service ipfamily for this cluster is: IPv4
W0518 09:16:59.267158       6 genericapiserver.go:425] Skipping API batch/v2alpha1 because it has no resources.                                                                                                                                                                                                                                                              W0518 09:16:59.280332       6 genericapiserver.go:425] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0518 09:16:59.292155       6 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.                                                                                                                                                                                                                                                        W0518 09:16:59.300151       6 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0518 09:16:59.304588       6 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.                                                                                                                                                                                                                                                  W0518 09:16:59.311527       6 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0518 09:16:59.315050       6 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.                                                                                                                                                                                                                                       W0518 09:16:59.320427       6 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W0518 09:16:59.320446       6 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.                                                                                                                                                                                                                                                                I0518 09:16:59.332724       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0518 09:16:59.332743       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.                                                  time="2021-05-18T09:16:59.345349591Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-05-18T09:16:59.345366663Z" level=info msg="Waiting for API server to become available"                                                                                                                                                                                                                                                                            time="2021-05-18T09:16:59.345704719Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false -
-controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-05-18T09:16:59.346559191Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"                                                                                                                                                                                                                                                          time="2021-05-18T09:16:59.346682122Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.96.2:6443 -t ${NODE_TOKEN}"
time="2021-05-18T09:16:59.347568592Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"                                                                                                                                                                                                                                                                              time="2021-05-18T09:16:59.347731953Z" level=info msg="Run: k3s kubectl"
time="2021-05-18T09:16:59.347892344Z" level=info msg="Module overlay was already loaded"                                                                                                                                                                                                                                                                                     time="2021-05-18T09:16:59.347921563Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-05-18T09:16:59.347936896Z" level=info msg="Module br_netfilter was already loaded"                                                                                                                                                                                                                                                                                time="2021-05-18T09:16:59.348702833Z" level=warning msg="Failed to start iptable_nat module"
time="2021-05-18T09:16:59.348823615Z" level=warning msg="Failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory"                                                                                                                                                              time="2021-05-18T09:16:59.348837106Z" level=warning msg="Failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-ip6tables: open /proc/sys/net/bridge/bridge-nf-call-ip6tables: no such file or directory"
time="2021-05-18T09:16:59.373904047Z" level=info msg="Cluster-Http-Server 2021/05/18 09:16:59 http: TLS handshake error from 127.0.0.1:38926: remote error: tls: bad certificate"                                                                                                                                                                                            time="2021-05-18T09:16:59.377356590Z" level=info msg="Cluster-Http-Server 2021/05/18 09:16:59 http: TLS handshake error from 127.0.0.1:38932: remote error: tls: bad certificate"
time="2021-05-18T09:16:59.387438331Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:59 +0000 UTC"                                                                                                                                                         time="2021-05-18T09:16:59.389884222Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1621329418: notBefore=2021-05-18 09:16:58 +0000 UTC notAfter=2022-05-18 09:16:59 +0000 UTC"
time="2021-05-18T09:16:59.411144118Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"                                                                                                                                                                                                                                            time="2021-05-18T09:16:59.411369189Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-05-18T09:17:00.412744029Z" level=info msg="Containerd is now running"
time="2021-05-18T09:17:00.426370889Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-05-18T09:17:00.430169873Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-05-18T09:17:00.431388944Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-05-18T09:17:00.431455099Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni
-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=fa
lse --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/v
ar/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-05-18T09:17:00.431997683Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
I0518 09:17:00.435743       6 server.go:412] Version: v1.20.6+k3s1
W0518 09:17:00.439863       6 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0518 09:17:00.440314       6 proxier.go:651] Failed to read file /lib/modules/4.18.0-193.el8.x86_64/modules.builtin with error open /lib/modules/4.18.0-193.el8.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
time="2021-05-18T09:17:00.444401774Z" level=info msg="Waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found"
W0518 09:17:00.446241       6 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0518 09:17:00.450022       6 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0518 09:17:00.461051       6 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0518 09:17:00.462379       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
W0518 09:17:00.469181       6 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0518 09:17:00.472383       6 proxier.go:661] Failed to load kernel module nf_conntrack_ipv4 with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E0518 09:17:00.478724       6 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I0518 09:17:01.475388       6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0518 09:17:01.475428       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0518 09:17:01.475861       6 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0518 09:17:01.476193       6 secure_serving.go:197] Serving securely on 127.0.0.1:6444
I0518 09:17:01.476301       6 tlsconfig.go:240] Starting DynamicServingCertificateController
I0518 09:17:01.476687       6 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
I0518 09:17:01.476704       6 available_controller.go:475] Starting AvailableConditionController
I0518 09:17:01.476707       6 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0518 09:17:01.476717       6 controller.go:83] Starting OpenAPI AggregationController
I0518 09:17:01.477238       6 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0518 09:17:01.477259       6 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0518 09:17:01.477274       6 autoregister_controller.go:141] Starting autoregister controller
I0518 09:17:01.477277       6 cache.go:32] Waiting for caches to sync for autoregister controller
I0518 09:17:01.477334       6 apf_controller.go:261] Starting API Priority and Fairness config controller
I0518 09:17:01.477783       6 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0518 09:17:01.477790       6 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0518 09:17:01.479432       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0518 09:17:01.479466       6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0518 09:17:01.479894       6 customresource_discovery_controller.go:209] Starting DiscoveryController
I0518 09:17:01.480054       6 controller.go:86] Starting OpenAPI controller
I0518 09:17:01.480077       6 naming_controller.go:291] Starting NamingConditionController
I0518 09:17:01.480091       6 establishing_controller.go:76] Starting EstablishingController
I0518 09:17:01.480102       6 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0518 09:17:01.480114       6 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0518 09:17:01.480125       6 crd_finalizer.go:266] Starting CRDFinalizer
I0518 09:17:01.482644       6 crdregistration_controller.go:111] Starting crd-autoregister controller
I0518 09:17:01.482658       6 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
time="2021-05-18T09:17:01.503454203Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
I0518 09:17:01.550450       6 shared_informer.go:247] Caches are synced for node_authorizer
E0518 09:17:01.575843       6 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I0518 09:17:01.576805       6 cache.go:39] Caches are synced for AvailableConditionController controller
I0518 09:17:01.577352       6 cache.go:39] Caches are synced for autoregister controller
I0518 09:17:01.577645       6 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0518 09:17:01.577728       6 apf_controller.go:266] Running API Priority and Fairness config worker
I0518 09:17:01.577815       6 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0518 09:17:01.582694       6 shared_informer.go:247] Caches are synced for crd-autoregister
I0518 09:17:01.583771       6 controller.go:609] quota admission added evaluator for: namespaces
E0518 09:17:01.617130       6 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time
E0518 09:17:01.617676       6 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.96.2, ResourceVersion: 0, AdditionalErrorMsg:
time="2021-05-18T09:17:02.449520898Z" level=info msg="Waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found"
I0518 09:17:02.475363       6 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0518 09:17:02.475382       6 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0518 09:17:02.483646       6 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0518 09:17:02.486720       6 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0518 09:17:02.486736       6 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0518 09:17:03.198435       6 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0518 09:17:03.225211       6 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0518 09:17:03.351657       6 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.96.2]
I0518 09:17:03.352482       6 controller.go:609] quota admission added evaluator for: endpoints
I0518 09:17:03.356058       6 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
time="2021-05-18T09:17:03.504302300Z" level=info msg="Kube API server is now running"
time="2021-05-18T09:17:03.504328886Z" level=info msg="k3s is up and running"
Flag --address has been deprecated, see --bind-address instead.
I0518 09:17:03.508417       6 controllermanager.go:176] Version: v1.20.6+k3s1
I0518 09:17:03.508637       6 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
time="2021-05-18T09:17:03.511550779Z" level=info msg="Creating CRD addons.k3s.cattle.io"
time="2021-05-18T09:17:03.519681885Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
W0518 09:17:03.520169       6 authorization.go:47] Authorization is disabled
W0518 09:17:03.520181       6 authentication.go:40] Authentication is disabled
I0518 09:17:03.520189       6 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
time="2021-05-18T09:17:03.522158722Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
time="2021-05-18T09:17:03.527325378Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
E0518 09:17:03.630538       6 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" not found
time="2021-05-18T09:17:04.030611936Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2021-05-18T09:17:04.030633592Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
time="2021-05-18T09:17:04.454106623Z" level=info msg="Waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found"
time="2021-05-18T09:17:04.533002952Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
time="2021-05-18T09:17:04.533024576Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
time="2021-05-18T09:17:05.035912885Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
time="2021-05-18T09:17:05.043640866Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"
time="2021-05-18T09:17:05.043943442Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
time="2021-05-18T09:17:05.044082148Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2021-05-18T09:17:05.044224892Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
time="2021-05-18T09:17:05.044318100Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
time="2021-05-18T09:17:05.044408757Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
time="2021-05-18T09:17:05.044508290Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
time="2021-05-18T09:17:05.044598048Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
time="2021-05-18T09:17:05.044693646Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
time="2021-05-18T09:17:05.044787854Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
time="2021-05-18T09:17:05.044908380Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
time="2021-05-18T09:17:05.044997496Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
time="2021-05-18T09:17:05.045092037Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
time="2021-05-18T09:17:05.147069527Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2021-05-18T09:17:05.151709566Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2021-05-18T09:17:05.159241573Z" level=info msg="Cluster dns configmap has been set successfully"
I0518 09:17:05.173379       6 request.go:655] Throttling request took 1.049009777s, request: GET:https://127.0.0.1:6444/apis/scheduling.k8s.io/v1beta1?timeout=32s
I0518 09:17:05.175131       6 controller.go:609] quota admission added evaluator for: addons.k3s.cattle.io
I0518 09:17:05.186670       6 controller.go:609] quota admission added evaluator for: serviceaccounts
I0518 09:17:05.221595       6 controller.go:609] quota admission added evaluator for: deployments.apps
time="2021-05-18T09:17:05.248510091Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
time="2021-05-18T09:17:05.248543623Z" level=info msg="Starting /v1, Kind=Node controller"
time="2021-05-18T09:17:05.248560328Z" level=info msg="Starting /v1, Kind=Service controller"
time="2021-05-18T09:17:05.248526355Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2021-05-18T09:17:05.248562211Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2021-05-18T09:17:05.248533829Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2021-05-18T09:17:05.248538802Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2021-05-18T09:17:05.430029198Z" level=info msg="Stopped tunnel to 127.0.0.1:6443"
time="2021-05-18T09:17:05.430087044Z" level=info msg="Connecting to proxy" url="wss://192.168.96.2:6443/v1-k3s/connect"
time="2021-05-18T09:17:05.430104779Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-05-18T09:17:05.430233024Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
time="2021-05-18T09:17:05.432232198Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
I0518 09:17:05.459414       6 controller.go:609] quota admission added evaluator for: helmcharts.helm.cattle.io
W0518 09:17:05.470963       6 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0518 09:17:05.471333       6 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0518 09:17:05.471658       6 container_manager_linux.go:287] container manager verified user specified cgroup-root exists: []
I0518 09:17:05.471672       6 container_manager_linux.go:292] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
I0518 09:17:05.471742       6 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
I0518 09:17:05.471755       6 container_manager_linux.go:323] [topologymanager] Initializing Topology Manager with none policy and container-level scope
I0518 09:17:05.471761       6 container_manager_linux.go:328] Creating device plugin manager: true
I0518 09:17:05.472054       6 kubelet.go:265] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
I0518 09:17:05.472079       6 kubelet.go:276] Watching apiserver
I0518 09:17:05.472129       6 kubelet.go:453] Kubelet client is not nil
I0518 09:17:05.492619       6 controller.go:609] quota admission added evaluator for: jobs.batch
I0518 09:17:05.493565       6 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4-k3s1, apiVersion: v1alpha2
W0518 09:17:05.493777       6 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0518 09:17:05.494317       6 server.go:1177] Started kubelet
I0518 09:17:05.497143       6 server.go:148] Starting to listen on 0.0.0.0:10250
I0518 09:17:05.497599       6 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I0518 09:17:05.497892       6 server.go:410] Adding debug handlers to kubelet server.
E0518 09:17:05.498464       6 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0518 09:17:05.498483       6 kubelet.go:1296] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I0518 09:17:05.500400       6 volume_manager.go:271] Starting Kubelet Volume Manager
I0518 09:17:05.500477       6 desired_state_of_world_populator.go:142] Desired state populator starts to run
E0518 09:17:05.515169       6 nodelease.go:49] failed to get node "k3d-k3s-default-server-0" when trying to set owner ref to the node lease: nodes "k3d-k3s-default-server-0" not found
I0518 09:17:05.546092       6 cpu_manager.go:193] [cpumanager] starting with none policy
I0518 09:17:05.546109       6 cpu_manager.go:194] [cpumanager] reconciling every 10s
I0518 09:17:05.546133       6 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0518 09:17:05.547260       6 policy_none.go:43] [cpumanager] none policy: Start
W0518 09:17:05.552061       6 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
I0518 09:17:05.552588       6 plugin_manager.go:114] Starting Kubelet Plugin Manager
E0518 09:17:05.553213       6 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "k3d-k3s-default-server-0" not found
I0518 09:17:05.602021       6 kubelet_node_status.go:71] Attempting to register node k3d-k3s-default-server-0
E0518 09:17:05.604046       6 kubelet.go:2268] node "k3d-k3s-default-server-0" not found
I0518 09:17:05.685748       6 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
I0518 09:17:05.685776       6 status_manager.go:158] Starting to sync pod status with apiserver
I0518 09:17:05.685795       6 kubelet.go:1833] Starting kubelet main sync loop.
E0518 09:17:05.685838       6 kubelet.go:1857] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful
E0518 09:17:05.704277       6 kubelet.go:2268] node "k3d-k3s-default-server-0" not found
time="2021-05-18T09:17:05.773344593Z" level=info msg="Starting /v1, Kind=Secret controller"
I0518 09:17:05.803572       6 reconciler.go:157] Reconciler: start to sync state
E0518 09:17:05.804770       6 kubelet.go:2268] node "k3d-k3s-default-server-0" not found
E0518 09:17:05.905151       6 kubelet.go:2268] node "k3d-k3s-default-server-0" not found
time="2021-05-18T09:17:05.950425309Z" level=info msg="Active TLS secret k3s-serving (ver=263) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.96.2:192.168.96.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:k
ubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=4103A790209DDB05C6EDB463C4BC449F1A1DBFA5]"
I0518 09:17:05.955184       6 kubelet_node_status.go:74] Successfully registered node k3d-k3s-default-server-0
I0518 09:17:06.076693       6 shared_informer.go:240] Waiting for caches to sync for tokens
I0518 09:17:06.085334       6 controllermanager.go:554] Started "clusterrole-aggregation"
I0518 09:17:06.085473       6 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0518 09:17:06.085482       6 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I0518 09:17:06.092956       6 controllermanager.go:554] Started "pvc-protection"
I0518 09:17:06.093082       6 pvc_protection_controller.go:110] Starting PVC protection controller
I0518 09:17:06.093091       6 shared_informer.go:240] Waiting for caches to sync for PVC protection
I0518 09:17:06.101648       6 controllermanager.go:554] Started "pv-protection"
W0518 09:17:06.101666       6 controllermanager.go:546] Skipping "ttl-after-finished"
I0518 09:17:06.101746       6 pv_protection_controller.go:83] Starting PV protection controller
I0518 09:17:06.101756       6 shared_informer.go:240] Waiting for caches to sync for PV protection
I0518 09:17:06.108370       6 controllermanager.go:554] Started "replicationcontroller"
I0518 09:17:06.108508       6 replica_set.go:182] Starting replicationcontroller controller
I0518 09:17:06.108517       6 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I0518 09:17:06.118520       6 controllermanager.go:554] Started "cronjob"
I0518 09:17:06.118614       6 cronjob_controller.go:96] Starting CronJob Manager
I0518 09:17:06.126255       6 controllermanager.go:554] Started "persistentvolume-expander"
I0518 09:17:06.126363       6 expand_controller.go:310] Starting expand controller
I0518 09:17:06.126372       6 shared_informer.go:240] Waiting for caches to sync for expand
I0518 09:17:06.134636       6 controllermanager.go:554] Started "statefulset"
I0518 09:17:06.134773       6 stateful_set.go:146] Starting stateful set controller
I0518 09:17:06.134781       6 shared_informer.go:240] Waiting for caches to sync for stateful set
I0518 09:17:06.141832       6 node_ipam_controller.go:91] Sending events to api server.
I0518 09:17:06.176930       6 shared_informer.go:247] Caches are synced for tokens
time="2021-05-18T09:17:06.351605369Z" level=info msg="Updated coredns node hosts entry [192.168.96.2 k3d-k3s-default-server-0]"
time="2021-05-18T09:17:06.457361941Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
time="2021-05-18T09:17:06.509908149Z" level=info msg="Running cloud-controller-manager with args --profiling=false"
I0518 09:17:06.516285       6 controllermanager.go:141] Version: v1.20.6+k3s1
time="2021-05-18T09:17:06.953058661Z" level=info msg="Control-plane role label has been set successfully on node: k3d-k3s-default-server-0"
W0518 09:17:06.964736       6 handler_proxy.go:102] no RequestInfo found in the context
E0518 09:17:06.964785       6 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0518 09:17:06.964799       6 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0518 09:17:07.802943       6 node.go:172] Successfully retrieved node IP: 192.168.96.2
I0518 09:17:07.802972       6 server_others.go:143] kube-proxy node IP is an IPv4 address (192.168.96.2), assume IPv4 operation
I0518 09:17:07.803946       6 server_others.go:186] Using iptables Proxier.
I0518 09:17:07.804128       6 server.go:650] Version: v1.20.6+k3s1
I0518 09:17:07.804445       6 conntrack.go:52] Setting nf_conntrack_max to 262144
I0518 09:17:07.804512       6 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0518 09:17:07.804545       6 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0518 09:17:07.804738       6 config.go:315] Starting service config controller
I0518 09:17:07.804746       6 shared_informer.go:240] Waiting for caches to sync for service config
I0518 09:17:07.805151       6 config.go:224] Starting endpoint slice config controller
I0518 09:17:07.805158       6 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0518 09:17:07.904860       6 shared_informer.go:247] Caches are synced for service config
I0518 09:17:07.905260       6 shared_informer.go:247] Caches are synced for endpoint slice config
time="2021-05-18T09:17:08.462107544Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
E0518 09:17:09.230721       6 controllermanager.go:391] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0518 09:17:09.232038       6 node_controller.go:115] Sending events to api server.
I0518 09:17:09.232106       6 controllermanager.go:258] Started "cloud-node"
I0518 09:17:09.232147       6 node_controller.go:154] Waiting for informer caches to sync
I0518 09:17:09.233171       6 node_lifecycle_controller.go:77] Sending events to api server
I0518 09:17:09.233198       6 controllermanager.go:258] Started "cloud-node-lifecycle"
I0518 09:17:09.332329       6 node_controller.go:390] Initializing node k3d-k3s-default-server-0 with cloud provider
time="2021-05-18T09:17:09.332381968Z" level=info msg="Couldn't find node internal ip label on node k3d-k3s-default-server-0"
time="2021-05-18T09:17:09.332429432Z" level=info msg="Couldn't find node hostname label on node k3d-k3s-default-server-0"
time="2021-05-18T09:17:09.335188206Z" level=info msg="Couldn't find node internal ip label on node k3d-k3s-default-server-0"
time="2021-05-18T09:17:09.335210346Z" level=info msg="Couldn't find node hostname label on node k3d-k3s-default-server-0"
I0518 09:17:09.337664       6 node_controller.go:454] Successfully initialized node k3d-k3s-default-server-0 with cloud provider
I0518 09:17:09.337865       6 event.go:291] "Event occurred" object="k3d-k3s-default-server-0" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
time="2021-05-18T09:17:10.469600113Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
time="2021-05-18T09:17:12.475127841Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
time="2021-05-18T09:17:14.485372935Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
I0518 09:17:15.528391       6 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
I0518 09:17:16.148232       6 range_allocator.go:82] Sending events to api server.
I0518 09:17:16.148353       6 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses.
I0518 09:17:16.148362       6 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
I0518 09:17:16.148386       6 controllermanager.go:554] Started "nodeipam"
I0518 09:17:16.148452       6 node_ipam_controller.go:159] Starting ipam controller
I0518 09:17:16.148462       6 shared_informer.go:240] Waiting for caches to sync for node
I0518 09:17:16.154626       6 controllermanager.go:554] Started "persistentvolume-binder"
W0518 09:17:16.154642       6 controllermanager.go:533] "route" is disabled
I0518 09:17:16.154732       6 pv_controller_base.go:307] Starting persistent volume controller
I0518 09:17:16.154739       6 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0518 09:17:16.162148       6 controllermanager.go:554] Started "endpointslice"
W0518 09:17:16.162164       6 controllermanager.go:533] "bootstrapsigner" is disabled
W0518 09:17:16.162169       6 controllermanager.go:533] "service" is disabled
I0518 09:17:16.162252       6 endpointslice_controller.go:237] Starting endpoint slice controller
I0518 09:17:16.162258       6 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0518 09:17:16.167404       6 controllermanager.go:554] Started "csrcleaner"
I0518 09:17:16.167479       6 cleaner.go:82] Starting CSR cleaner controller
I0518 09:17:16.169694       6 node_lifecycle_controller.go:380] Sending events to api server.
I0518 09:17:16.169811       6 taint_manager.go:163] Sending events to api server.
I0518 09:17:16.169852       6 node_lifecycle_controller.go:508] Controller will reconcile labels.
I0518 09:17:16.169869       6 controllermanager.go:554] Started "nodelifecycle"
W0518 09:17:16.169874       6 controllermanager.go:533] "cloud-node-lifecycle" is disabled
I0518 09:17:16.169914       6 node_lifecycle_controller.go:542] Starting node controller
I0518 09:17:16.169922       6 shared_informer.go:240] Waiting for caches to sync for taint
I0518 09:17:16.177932       6 controllermanager.go:554] Started "root-ca-cert-publisher"
I0518 09:17:16.178033       6 publisher.go:98] Starting root CA certificate configmap publisher
I0518 09:17:16.178040       6 shared_informer.go:240] Waiting for caches to sync for crt configmap
I0518 09:17:16.184784       6 controllermanager.go:554] Started "endpoint"
I0518 09:17:16.184900       6 endpoints_controller.go:184] Starting endpoint controller
I0518 09:17:16.184907       6 shared_informer.go:240] Waiting for caches to sync for endpoint
I0518 09:17:16.192143       6 controllermanager.go:554] Started "daemonset"
I0518 09:17:16.192253       6 daemon_controller.go:285] Starting daemon sets controller
I0518 09:17:16.192262       6 shared_informer.go:240] Waiting for caches to sync for daemon sets
I0518 09:17:16.197843       6 controllermanager.go:554] Started "replicaset"
I0518 09:17:16.197946       6 replica_set.go:182] Starting replicaset controller
I0518 09:17:16.197954       6 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0518 09:17:16.204240       6 controllermanager.go:554] Started "attachdetach"
I0518 09:17:16.204339       6 attach_detach_controller.go:328] Starting attach detach controller
I0518 09:17:16.204346       6 shared_informer.go:240] Waiting for caches to sync for attach detach
I0518 09:17:16.210143       6 controllermanager.go:554] Started "podgc"
I0518 09:17:16.210234       6 gc_controller.go:89] Starting GC controller
I0518 09:17:16.210241       6 shared_informer.go:240] Waiting for caches to sync for GC
I0518 09:17:16.295163       6 controllermanager.go:554] Started "ttl"
W0518 09:17:16.295185       6 controllermanager.go:533] "tokencleaner" is disabled
I0518 09:17:16.295225       6 ttl_controller.go:121] Starting TTL controller
I0518 09:17:16.295231       6 shared_informer.go:240] Waiting for caches to sync for TTL
I0518 09:17:16.345035       6 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
I0518 09:17:16.345053       6 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0518 09:17:16.345088       6 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I0518 09:17:16.345323       6 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
I0518 09:17:16.345330       6 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0518 09:17:16.345348       6 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I0518 09:17:16.345499       6 controllermanager.go:554] Started "csrsigning"
I0518 09:17:16.345538       6 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
I0518 09:17:16.345541       6 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
I0518 09:17:16.345545       6 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0518 09:17:16.345550       6 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0518 09:17:16.345559       6 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I0518 09:17:16.345567       6 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
time="2021-05-18T09:17:16.489009751Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
E0518 09:17:17.148716       6 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0518 09:17:17.148760       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
W0518 09:17:17.148779       6 shared_informer.go:494] resyncPeriod 12h58m57.936817417s is smaller than resyncCheckPeriod 14h8m12.603370856s and the informer has already started. Changing it to 14h8m12.603370856s
I0518 09:17:17.148830       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
I0518 09:17:17.148857       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0518 09:17:17.148884       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
I0518 09:17:17.148910       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0518 09:17:17.148947       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
I0518 09:17:17.148979       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
I0518 09:17:17.148995       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
I0518 09:17:17.149010       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
I0518 09:17:17.149029       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0518 09:17:17.149055       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I0518 09:17:17.149090       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
I0518 09:17:17.149145       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
I0518 09:17:17.149162       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0518 09:17:17.149181       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0518 09:17:17.149196       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
I0518 09:17:17.149220       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0518 09:17:17.149243       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0518 09:17:17.149264       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I0518 09:17:17.149284       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
I0518 09:17:17.149315       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
I0518 09:17:17.149342       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
I0518 09:17:17.149359       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0518 09:17:17.149375       6 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0518 09:17:17.149394       6 controllermanager.go:554] Started "resourcequota"
I0518 09:17:17.149408       6 resource_quota_controller.go:273] Starting resource quota controller
I0518 09:17:17.149416       6 shared_informer.go:240] Waiting for caches to sync for resource quota
I0518 09:17:17.149430       6 resource_quota_monitor.go:304] QuotaMonitor running
E0518 09:17:17.167367       6 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0518 09:17:17.167435       6 controllermanager.go:554] Started "namespace"
I0518 09:17:17.167567       6 namespace_controller.go:200] Starting namespace controller
I0518 09:17:17.167577       6 shared_informer.go:240] Waiting for caches to sync for namespace
I0518 09:17:17.177365       6 garbagecollector.go:142] Starting garbage collector controller
I0518 09:17:17.177381       6 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0518 09:17:17.177397       6 graph_builder.go:289] GraphBuilder running
I0518 09:17:17.177504       6 controllermanager.go:554] Started "garbagecollector"
I0518 09:17:17.294847       6 controllermanager.go:554] Started "deployment"
I0518 09:17:17.294882       6 deployment_controller.go:153] Starting deployment controller
I0518 09:17:17.294887       6 shared_informer.go:240] Waiting for caches to sync for deployment
I0518 09:17:17.344275       6 controllermanager.go:554] Started "csrapproving"
I0518 09:17:17.344314       6 certificate_controller.go:118] Starting certificate controller "csrapproving"
I0518 09:17:17.344322       6 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
I0518 09:17:17.495649       6 controllermanager.go:554] Started "endpointslicemirroring"
I0518 09:17:17.495683       6 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I0518 09:17:17.495688       6 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
I0518 09:17:17.662127       6 controllermanager.go:554] Started "serviceaccount"
I0518 09:17:17.662170       6 serviceaccounts_controller.go:117] Starting service account controller
I0518 09:17:17.662179       6 shared_informer.go:240] Waiting for caches to sync for service account
I0518 09:17:17.796394       6 controllermanager.go:554] Started "job"
I0518 09:17:17.796437       6 job_controller.go:148] Starting job controller
I0518 09:17:17.796444       6 shared_informer.go:240] Waiting for caches to sync for job
I0518 09:17:18.094182       6 controllermanager.go:554] Started "horizontalpodautoscaling"
I0518 09:17:18.094225       6 horizontal.go:169] Starting HPA controller
I0518 09:17:18.094232       6 shared_informer.go:240] Waiting for caches to sync for HPA
I0518 09:17:18.295543       6 controllermanager.go:554] Started "disruption"
W0518 09:17:18.295559       6 controllermanager.go:546] Skipping "ephemeral-volume"
I0518 09:17:18.295745       6 disruption.go:331] Starting disruption controller
I0518 09:17:18.295754       6 shared_informer.go:240] Waiting for caches to sync for disruption
I0518 09:17:18.297732       6 request.go:655] Throttling request took 1.047070305s, request: GET:https://127.0.0.1:6444/apis/scheduling.k8s.io/v1?timeout=32s
W0518 09:17:18.309128       6 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3d-k3s-default-server-0" does not exist
I0518 09:17:18.311608       6 shared_informer.go:247] Caches are synced for GC
I0518 09:17:18.312145       6 shared_informer.go:247] Caches are synced for ReplicationController
I0518 09:17:18.327219       6 shared_informer.go:247] Caches are synced for expand
I0518 09:17:18.344358       6 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0518 09:17:18.345203       6 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0518 09:17:18.345865       6 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0518 09:17:18.345877       6 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0518 09:17:18.345888       6 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0518 09:17:18.348555       6 shared_informer.go:247] Caches are synced for node
I0518 09:17:18.348571       6 range_allocator.go:172] Starting range CIDR allocator
I0518 09:17:18.348574       6 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0518 09:17:18.348577       6 shared_informer.go:247] Caches are synced for cidrallocator
I0518 09:17:18.353100       6 range_allocator.go:373] Set node k3d-k3s-default-server-0 PodCIDR to [10.42.0.0/24]
I0518 09:17:18.362307       6 shared_informer.go:247] Caches are synced for endpoint_slice
I0518 09:17:18.369952       6 shared_informer.go:247] Caches are synced for taint
I0518 09:17:18.370018       6 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0518 09:17:18.370053       6 node_lifecycle_controller.go:1044] Missing timestamp for Node k3d-k3s-default-server-0. Assuming now as a timestamp.
I0518 09:17:18.370092       6 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
I0518 09:17:18.370335       6 taint_manager.go:187] Starting NoExecuteTaintManager
I0518 09:17:18.370521       6 event.go:291] "Event occurred" object="k3d-k3s-default-server-0" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node k3d-k3s-default-server-0 event: Registered Node k3d-k3s-default-server-0 in Controller"
I0518 09:17:18.378112       6 shared_informer.go:247] Caches are synced for crt configmap
I0518 09:17:18.385054       6 shared_informer.go:247] Caches are synced for endpoint
I0518 09:17:18.386037       6 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0518 09:17:18.393118       6 shared_informer.go:247] Caches are synced for PVC protection
I0518 09:17:18.394269       6 shared_informer.go:247] Caches are synced for HPA
I0518 09:17:18.394919       6 shared_informer.go:247] Caches are synced for deployment
I0518 09:17:18.395323       6 shared_informer.go:247] Caches are synced for TTL
I0518 09:17:18.395757       6 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0518 09:17:18.395811       6 shared_informer.go:247] Caches are synced for disruption
I0518 09:17:18.395817       6 disruption.go:339] Sending events to api server.
I0518 09:17:18.396469       6 shared_informer.go:247] Caches are synced for job
I0518 09:17:18.398013       6 shared_informer.go:247] Caches are synced for ReplicaSet
I0518 09:17:18.402182       6 shared_informer.go:247] Caches are synced for PV protection
I0518 09:17:18.404380       6 shared_informer.go:247] Caches are synced for attach detach
I0518 09:17:18.444219       6 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.42.0.0/24
I0518 09:17:18.445937       6 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.0.0/24
I0518 09:17:18.455113       6 shared_informer.go:247] Caches are synced for persistent volume
time="2021-05-18T09:17:18.491659926Z" level=info msg="Node CIDR assigned for: k3d-k3s-default-server-0"
I0518 09:17:18.491786       6 flannel.go:92] Determining IP address of default interface
I0518 09:17:18.492003       6 flannel.go:105] Using interface with name eth0 and address 192.168.96.2
I0518 09:17:18.494194       6 kube.go:117] Waiting 10m0s for node controller to sync
I0518 09:17:18.494228       6 kube.go:300] Starting kube subnet manager
time="2021-05-18T09:17:18.502214544Z" level=info msg="labels have been set successfully on node: k3d-k3s-default-server-0"
I0518 09:17:18.549524       6 shared_informer.go:247] Caches are synced for resource quota
I0518 09:17:18.562287       6 shared_informer.go:247] Caches are synced for service account
I0518 09:17:18.568035       6 shared_informer.go:247] Caches are synced for namespace
I0518 09:17:18.592354       6 shared_informer.go:247] Caches are synced for daemon sets
I0518 09:17:18.615225       6 network_policy_controller.go:138] Starting network policy controller
I0518 09:17:18.630513       6 network_policy_controller.go:145] Starting network policy controller full sync goroutine
I0518 09:17:18.635714       6 shared_informer.go:247] Caches are synced for stateful set
E0518 09:17:18.802324       6 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0518 09:17:18.803047       6 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0518 09:17:18.810420       6 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0518 09:17:18.847512       6 controller.go:609] quota admission added evaluator for: replicasets.apps
I0518 09:17:18.849053       6 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-86cbb8457f to 1"
I0518 09:17:18.849845       6 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-854c77959c to 1"
I0518 09:17:18.850085       6 event.go:291] "Event occurred" object="kube-system/local-path-provisioner" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-5ff76fc89d to 1"
I0518 09:17:19.000974       6 event.go:291] "Event occurred" object="kube-system/helm-install-traefik" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-n754d"
I0518 09:17:19.004127       6 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0518 09:17:19.006400       6 controller.go:609] quota admission added evaluator for: events.events.k8s.io
I0518 09:17:19.144771       6 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "values" (UniqueName: "kubernetes.io/configmap/7603922a-aaef-4577-ae9b-d6f1c45a2a86-values") pod "helm-install-traefik-n754d" (UID: "7603922a-aaef-4577-ae9b-d6f1c45a2a86")
I0518 09:17:19.144799       6 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "content" (UniqueName: "kubernetes.io/configmap/7603922a-aaef-4577-ae9b-d6f1c45a2a86-content") pod "helm-install-traefik-n754d" (UID: "7603922a-aaef-4577-ae9b-d6f1c45a2a86")
I0518 09:17:19.144819       6 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "helm-traefik-token-s66fq" (UniqueName: "kubernetes.io/secret/7603922a-aaef-4577-ae9b-d6f1c45a2a86-helm-traefik-token-s66fq") pod "helm-install-traefik-n754d" (UID: "7603922a-aaef-4577-ae9b-d6f1c45a2a86")
I0518 09:17:19.199219       6 event.go:291] "Event occurred" object="kube-system/local-path-provisioner-5ff76fc89d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-5ff76fc89d-9krml"
I0518 09:17:19.199402       6 event.go:291] "Event occurred" object="kube-system/coredns-854c77959c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-854c77959c-972lp"
I0518 09:17:19.201546       6 event.go:291] "Event occurred" object="kube-system/metrics-server-86cbb8457f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-86cbb8457f-7gpqw"
I0518 09:17:19.206031       6 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0518 09:17:19.206141       6 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0518 09:17:19.223462       6 topology_manager.go:187] [topologymanager] Topology Admit Handler
E0518 09:17:19.248708       6 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0518 09:17:19.248783       6 shared_informer.go:240] Waiting for caches to sync for resource quota
I0518 09:17:19.248804       6 shared_informer.go:247] Caches are synced for resource quota
I0518 09:17:19.345123       6 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-path-provisioner-service-account-token-d5bzc" (UniqueName: "kubernetes.io/secret/115ba17b-70ac-4ef9-a400-2c6eb3ee4e67-local-path-provisioner-service-account-token-d5bzc") pod "local-path-provisioner-5ff76fc89d-9krml" (UID: "115ba17b-70ac-4ef
9-a400-2c6eb3ee4e67")
I0518 09:17:19.345164       6 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-9cj9h" (UniqueName: "kubernetes.io/secret/fd3ed426-2e16-4940-835a-d846e3cc42d3-metrics-server-token-9cj9h") pod "metrics-server-86cbb8457f-7gpqw" (UID: "fd3ed426-2e16-4940-835a-d846e3cc42d3")
I0518 09:17:19.345187       6 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/11f7885d-4eb1-4694-8574-07b60266c2c1-config-volume") pod "coredns-854c77959c-972lp" (UID: "11f7885d-4eb1-4694-8574-07b60266c2c1")
I0518 09:17:19.345208       6 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/115ba17b-70ac-4ef9-a400-2c6eb3ee4e67-config-volume") pod "local-path-provisioner-5ff76fc89d-9krml" (UID: "115ba17b-70ac-4ef9-a400-2c6eb3ee4e67")
I0518 09:17:19.345321       6 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/fd3ed426-2e16-4940-835a-d846e3cc42d3-tmp-dir") pod "metrics-server-86cbb8457f-7gpqw" (UID: "fd3ed426-2e16-4940-835a-d846e3cc42d3")
I0518 09:17:19.345364       6 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-59zrj" (UniqueName: "kubernetes.io/secret/11f7885d-4eb1-4694-8574-07b60266c2c1-coredns-token-59zrj") pod "coredns-854c77959c-972lp" (UID: "11f7885d-4eb1-4694-8574-07b60266c2c1")
I0518 09:17:19.494314       6 kube.go:124] Node controller sync successful
I0518 09:17:19.494382       6 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0518 09:17:19.505648       6 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env
I0518 09:17:19.505662       6 flannel.go:82] Running backend.
I0518 09:17:19.505667       6 vxlan_network.go:60] watching for new subnet leases
I0518 09:17:19.507734       6 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0518 09:17:19.507748       6 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0518 09:17:19.510274       6 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0518 09:17:19.510278       6 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0518 09:17:19.510285       6 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0518 09:17:19.511209       6 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0518 09:17:19.511981       6 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0518 09:17:19.512887       6 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0518 09:17:19.513588       6 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0518 09:17:19.513694       6 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0518 09:17:19.515860       6 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0518 09:17:19.516320       6 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0518 09:17:19.518740       6 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0518 09:17:19.520066       6 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
W0518 09:17:19.709586       6 handler_proxy.go:102] no RequestInfo found in the context
E0518 09:17:19.709655       6 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0518 09:17:19.709668       6 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0518 09:17:19.898308       6 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0518 09:17:19.904607       6 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0518 09:17:20.552069       6 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0518 09:17:20.554502       6 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0518 09:17:20.577470       6 shared_informer.go:247] Caches are synced for garbage collector
I0518 09:17:20.577489       6 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0518 09:17:20.654674       6 shared_informer.go:247] Caches are synced for garbage collector
E0518 09:17:47.458496       6 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 54.161.109.204:443: i/o timeout
E0518 09:17:47.458546       6 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 54.161.109.204:443: i/o timeout
E0518 09:17:47.458562       6 kuberuntime_manager.go:755] createPodSandbox for pod "local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 54.161.109.204:443: i/o timeout
E0518 09:17:47.458609       6 pod_workers.go:191] Error syncing pod 115ba17b-70ac-4ef9-a400-2c6eb3ee4e67 ("local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head \"https://registry-1.docker.io/v2/rancher/pause/manifests/3.1\": dial tcp 54.161.109.204:443: i/o timeout"
E0518 09:17:49.850232       6 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0518 09:17:52.404894       6 request.go:655] Throttling request took 1.048997448s, request: GET:https://127.0.0.1:6444/apis/extensions/v1beta1?timeout=32s
W0518 09:17:53.355542       6 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0518 09:17:57.497804       6 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 54.161.109.204:443: connect: no route to host
E0518 09:17:57.497844       6 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "helm-install-traefik-n754d_kube-system(7603922a-aaef-4577-ae9b-d6f1c45a2a86)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 54.161.109.204:443: connect: no route to host
E0518 09:17:57.497857       6 kuberuntime_manager.go:755] createPodSandbox for pod "helm-install-traefik-n754d_kube-system(7603922a-aaef-4577-ae9b-d6f1c45a2a86)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 54.161.109.204:443: connect: no route to host
E0518 09:17:57.497904       6 pod_workers.go:191] Error syncing pod 7603922a-aaef-4577-ae9b-d6f1c45a2a86 ("helm-install-traefik-n754d_kube-system(7603922a-aaef-4577-ae9b-d6f1c45a2a86)"), skipping: failed to "CreatePodSandbox" for "helm-install-traefik-n754d_kube-system(7603922a-aaef-4577-ae9b-d6f1c45a2a86)" with CreatePodSandboxError: "CreatePodSandbox for pod \"helm-install-traefik-n754d_kube-system(7603922a-aaef-4577-ae9b-d6f1c45a2a86)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head \"https://registry-1.docker.io/v2/rancher/pause/manifests/3.1\": dial tcp 54.161.109.204:443: connect: no route to host"
E0518 09:18:02.558279       6 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 52.55.168.20:443: connect: no route to host
E0518 09:18:02.558322       6 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-854c77959c-972lp_kube-system(11f7885d-4eb1-4694-8574-07b60266c2c1)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 52.55.168.20:443: connect: no route to host
E0518 09:18:02.558334       6 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-854c77959c-972lp_kube-system(11f7885d-4eb1-4694-8574-07b60266c2c1)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 52.55.168.20:443: connect: no route to host
E0518 09:18:02.558373       6 pod_workers.go:191] Error syncing pod 11f7885d-4eb1-4694-8574-07b60266c2c1 ("coredns-854c77959c-972lp_kube-system(11f7885d-4eb1-4694-8574-07b60266c2c1)"), skipping: failed to "CreatePodSandbox" for "coredns-854c77959c-972lp_kube-system(11f7885d-4eb1-4694-8574-07b60266c2c1)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-854c77959c-972lp_kube-system(11f7885d-4eb1-4694-8574-07b60266c2c1)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head \"https://registry-1.docker.io/v2/rancher/pause/manifests/3.1\": dial tcp 52.55.168.20:443: connect: no route to host"
E0518 09:18:08.426298       6 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 34.197.211.151:443: i/o timeout
E0518 09:18:08.426333       6 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "metrics-server-86cbb8457f-7gpqw_kube-system(fd3ed426-2e16-4940-835a-d846e3cc42d3)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 34.197.211.151:443: i/o timeout
E0518 09:18:08.426342       6 kuberuntime_manager.go:755] createPodSandbox for pod "metrics-server-86cbb8457f-7gpqw_kube-system(fd3ed426-2e16-4940-835a-d846e3cc42d3)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 34.197.211.151:443: i/o timeout
E0518 09:18:08.426379       6 pod_workers.go:191] Error syncing pod fd3ed426-2e16-4940-835a-d846e3cc42d3 ("metrics-server-86cbb8457f-7gpqw_kube-system(fd3ed426-2e16-4940-835a-d846e3cc42d3)"), skipping: failed to "CreatePodSandbox" for "metrics-server-86cbb8457f-7gpqw_kube-system(fd3ed426-2e16-4940-835a-d846e3cc42d3)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-86cbb8457f-7gpqw_kube-system(fd3ed426-2e16-4940-835a-d846e3cc42d3)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head \"https://registry-1.docker.io/v2/rancher/pause/manifests/3.1\": dial tcp 34.197.211.151:443: i/o timeout"
W0518 09:18:19.709912       6 handler_proxy.go:102] no RequestInfo found in the context
E0518 09:18:19.709975       6 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0518 09:18:19.709984       6 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0518 09:18:20.450999       6 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0518 09:18:25.105826       6 request.go:655] Throttling request took 1.049082826s, request: GET:https://127.0.0.1:6444/apis/node.k8s.io/v1beta1?timeout=32s
W0518 09:18:26.056505       6 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0518 09:18:33.404878       6 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 54.236.165.68:443: connect: no route to host
E0518 09:18:33.404919       6 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 54.236.165.68:443: connect: no route to host
E0518 09:18:33.404934       6 kuberuntime_manager.go:755] createPodSandbox for pod "local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/pause/manifests/3.1": dial tcp 54.236.165.68:443: connect: no route to host
E0518 09:18:33.404988       6 pod_workers.go:191] Error syncing pod 115ba17b-70ac-4ef9-a400-2c6eb3ee4e67 ("local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-5ff76fc89d-9krml_kube-system(115ba17b-70ac-4ef9-a400-2c6eb3ee4e67)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head \"https://registry-1.docker.io/v2/rancher/pause/manifests/3.1\": dial tcp 54.236.165.68:443: connect: no route to host"

DNS issue?

@iwilltry42
Member

Yeah, it does look like some networking issue on your end there 🤔
Are you maybe running behind a corporate proxy or with a specific DNS config?
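For reference, a quick way to narrow down whether this is a DNS/connectivity problem might be to check whether registry-1.docker.io is reachable both from the host and from a container on the cluster's Docker network (the network name below assumes k3d's default naming for the default cluster):

# from the host: can Docker Hub be resolved and reached at all?
nslookup registry-1.docker.io
curl -sI https://registry-1.docker.io/v2/ | head -n 1
# from a throwaway container on the k3d network, which is roughly what
# containerd inside the server node has to do when pulling the pause image:
docker run --rm --network k3d-k3s-default alpine:3 nslookup registry-1.docker.io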

@ybettan
Author

ybettan commented May 23, 2021

I believe I do; I am running RHEL 8 on provisioned VMs at Red Hat.
I will do some homework and come back to you if the issue persists.

Thank you for your support.
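
If a corporate proxy does turn out to be the culprit, passing the proxy settings through to the node containers might help, for example via k3d's --env flag. The proxy address below is a placeholder and the exact flag syntax can vary between k3d versions (see k3d cluster create --help):

k3d cluster create \
  --env "HTTP_PROXY=http://proxy.example.com:3128@server[0]" \
  --env "HTTPS_PROXY=http://proxy.example.com:3128@server[0]" \
  --env "NO_PROXY=localhost,127.0.0.1,10.42.0.0/16,10.43.0.0/16@server[0]"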

@iwilltry42
Member

Hey @ybettan, do you have any update on this issue? 👍

@ybettan
Author

ybettan commented Jun 14, 2021

Hey @iwilltry42 , I haven't really had the time to continue investigating it.

It works on my Mac but not on the Linux server I use for work, so I'm sticking with minikube until I can find some time to debug it.

Also, I've seen that multiple issues have been opened mentioning this one, so I assume there is a real bug here.

@iwilltry42
Member

Hey @ybettan, this seems to be the only related issue: #621

Any chance you have a VM image that I can use to reproduce your failing system?

@iwilltry42
Member

iwilltry42 commented Jun 14, 2021

So I created a debug release to at least try to get a clue about what's happening here.
It would be great if you could give it a try: https://drive.google.com/drive/folders/1JLlq6IUUn3OV_Mm7sQMjx8kg7epW-wxv?usp=sharing
Download the release matching your setup and try it with the following command:
K3D_LOG_NODE_WAIT_LOGS=loadbalancer k3d cluster create debug --trace

This should generate logs where the loadbalancer logs are included in the output, to ensure that there's no clock skew involved (though, to be safe, I also dropped subsecond precision from the wait command).
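
As a quick manual check of the same thing, it can help to compare the loadbalancer's log timestamps (as Docker records them) against the host clock while the cluster is starting; a rough sketch, assuming k3d's default container naming for a cluster called debug:

# stream the serverlb logs with Docker-side timestamps
docker logs --timestamps --follow k3d-debug-serverlb
# in another terminal, compare against the host clock (UTC)
date -u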
