
[BUG] openyurt v1.2.0 - pool-coordinator not working because of missing images and invalid certificates #1182

Closed
batthebee opened this issue Jan 31, 2023 · 27 comments
Labels
kind/bug

Comments

@batthebee
Contributor

What happened:

When I deploy the OpenYurt Helm chart as described, the pool-coordinator does not start because the images google_containers/etcd and google_containers/kube-apiserver do not exist on Docker Hub.

After replacing the images with the following:

poolCoordinator:
  apiserverImage:
    registry: registry.k8s.io
    repository: kube-apiserver
    tag: v1.22.17
  etcdImage:
    registry: quay.io
    repository: coreos/etcd
    tag: v3.5.0
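
For reference, a minimal sketch of applying such an override (the release name, the chart reference openyurt/openyurt, and the values file name are placeholders and may differ in your setup):

helm upgrade --install openyurt openyurt/openyurt \
  --namespace kube-system \
  -f my-values.yaml   # my-values.yaml holds the poolCoordinator overrides above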

With these images, the pool-coordinator runs into CrashLoopBackOff because the certificates are not correct:

kube-apiserver

I0131 17:10:46.948544       1 server.go:553] external host was not specified, using 192.168.88.248
I0131 17:10:46.949264       1 server.go:161] Version: v1.22.17
I0131 17:10:48.087533       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0131 17:10:48.089422       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0131 17:10:48.089444       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0131 17:10:48.091339       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0131 17:10:48.091359       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0131 17:10:48.255284       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:12379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for 10.110.120.6, 10.110.120.6, not 127.0.0.1". Reconnecting...
W0131 17:10:49.102574       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:12379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for 10.110.120.6, 10.110.120.6, not 127.0.0.1". Reconnecting...
W0131 17:10:49.272927       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:12379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for 10.110.120.6, 10.110.120.6, not 127.0.0.1". Reconnecting...

etcd

{"level":"info","ts":"2023-01-31T17:10:47.354Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://0.0.0.0:12379","--listen-client-urls=https://0.0.0.0:12379","--cert-file=/etc/kubernetes/pki/etcd-server.crt","--client-cert-auth=true","--max-txn-ops=102400","--data-dir=/var/lib/etcd","--max-request-bytes=100000000","--key-file=/etc/kubernetes/pki/etcd-server.key","--listen-metrics-urls=http://0.0.0.0:12381","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/ca.crt"]}
{"level":"info","ts":"2023-01-31T17:10:47.354Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2023-01-31T17:10:47.355Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:12379"]}
{"level":"info","ts":"2023-01-31T17:10:47.356Z","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.0","git-sha":"946a5a6f2","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":false,"name":"default","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://0.0.0.0:12379"],"listen-client-urls":["https://0.0.0.0:12379"],"listen-metrics-urls":["http://0.0.0.0:12381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"default=http://localhost:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"warn","ts":"2023-01-31T17:10:47.356Z","caller":"etcdserver/server.go:342","msg":"exceeded recommended request limit","max-request-bytes":100000000,"max-request-size":"100 MB","recommended-request-bytes":10485760,"recommended-request-size":"10 MB"}
{"level":"warn","ts":1675185047.3563454,"caller":"fileutil/fileutil.go:57","msg":"check file permission","error":"directory \"/var/lib/etcd\" exist, but the permission is \"dtrwxrwxrwx\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"}
{"level":"info","ts":"2023-01-31T17:10:47.356Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"282.354µs"}
{"level":"info","ts":"2023-01-31T17:10:47.460Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"8e9e05c52164694d","cluster-id":"cdf818194e3a8c32"}
{"level":"info","ts":"2023-01-31T17:10:47.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=()"}
{"level":"info","ts":"2023-01-31T17:10:47.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 0"}
{"level":"info","ts":"2023-01-31T17:10:47.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2023-01-31T17:10:47.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 1"}
{"level":"info","ts":"2023-01-31T17:10:47.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"}
{"level":"warn","ts":"2023-01-31T17:10:47.460Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2023-01-31T17:10:47.549Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2023-01-31T17:10:47.550Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2023-01-31T17:10:47.550Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"8e9e05c52164694d","local-server-version":"3.5.0","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-31T17:10:47.550Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8e9e05c52164694d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2023-01-31T17:10:47.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"}
{"level":"info","ts":"2023-01-31T17:10:47.552Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2023-01-31T17:10:47.555Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/pki/etcd-server.crt, key = /etc/kubernetes/pki/etcd-server.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-31T17:10:47.649Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":"2023-01-31T17:10:47.650Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"127.0.0.1:2380"}
{"level":"info","ts":"2023-01-31T17:10:47.649Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8e9e05c52164694d","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://0.0.0.0:12379"],"listen-client-urls":["https://0.0.0.0:12379"],"listen-metrics-urls":["http://0.0.0.0:12381"]}
{"level":"info","ts":"2023-01-31T17:10:47.649Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://0.0.0.0:12381"}
{"level":"info","ts":"2023-01-31T17:10:48.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d is starting a new election at term 1"}
{"level":"info","ts":"2023-01-31T17:10:48.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became pre-candidate at term 1"}
{"level":"info","ts":"2023-01-31T17:10:48.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 1"}
{"level":"info","ts":"2023-01-31T17:10:48.162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became candidate at term 2"}
{"level":"info","ts":"2023-01-31T17:10:48.162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2"}
{"level":"info","ts":"2023-01-31T17:10:48.162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became leader at term 2"}
{"level":"info","ts":"2023-01-31T17:10:48.162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2"}
{"level":"info","ts":"2023-01-31T17:10:48.162Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-31T17:10:48.162Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-31T17:10:48.162Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:default ClientURLs:[https://0.0.0.0:12379]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-31T17:10:48.163Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-31T17:10:48.163Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-31T17:10:48.163Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-31T17:10:48.163Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-31T17:10:48.163Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-31T17:10:48.166Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"[::]:12379"}
{"level":"warn","ts":"2023-01-31T17:10:48.349Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39106","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:10:48.355Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39122","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/01/31 17:10:48 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-01-31T17:10:49.102Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39138","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:10:49.272Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39152","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:10:49.457Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39156","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/01/31 17:10:49 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-01-31T17:10:50.117Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39162","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:10:50.999Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39176","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/01/31 17:10:50 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-01-31T17:10:51.083Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39182","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:10:51.756Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39190","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:10:53.344Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39206","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/01/31 17:10:53 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-01-31T17:10:53.909Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:48360","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:10:53.975Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:48370","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:10:57.254Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:48384","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/01/31 17:10:57 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-01-31T17:10:57.386Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:48398","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:10:57.876Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:48408","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:11:03.072Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:48418","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/01/31 17:11:03 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-01-31T17:11:03.752Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:48430","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:11:03.773Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41384","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:11:10.232Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41392","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:11:11.227Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41404","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:11:11.246Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41406","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:11:12.241Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41416","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:11:13.022Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41428","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:11:13.776Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:46976","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-01-31T17:11:14.594Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:46990","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/01/31 17:11:14 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...

How to reproduce it (as minimally and precisely as possible):

Deploy OpenYurt by following the Manually Setup instructions and add an edge worker node.

Environment:

  • OpenYurt version: 1.2.0
  • Kubernetes version (use kubectl version): 1.22.17
  • OS (e.g: cat /etc/os-release): ubuntu 20.04
  • Kernel (e.g. uname -a): 5.4.0-137-generic
  • Install tools: manual/ansible

others

/kind bug

@batthebee added the kind/bug label Jan 31, 2023
@batthebee changed the title from "[BUG] openyurt v1.22 - pool-coordinator not working because of missing images and invalid certificates" to "[BUG] openyurt v1.2.0 - pool-coordinator not working because of missing images and invalid certificates" Jan 31, 2023
@Congrool
Member

Congrool commented Feb 1, 2023

Could you please post the secret kube-system/pool-coordinator-dynamic-certs?

@batthebee
Contributor Author

batthebee commented Feb 1, 2023

@Congrool Here is the output:

k get secret -n kube-system pool-coordinator-dynamic-certs -o yaml
apiVersion: v1
data:
  apiserver.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV0ekNDQTUrZ0F3SUJBZ0lJWFp3ZE0wbldUTDB3RFFZSktvWklodmNOQVFFTEJRQXdKREVpTUNBR0ExVUUKQXhNWmIzQmxibmwxY25RNmNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2pBZ0Z3MHlNekF4TXpFeE5ETXhORGRhR0E4eQpNVEl6TURFd056RTJNRGd3TTFvd1VqRWlNQ0FHQTFVRUNoTVpiM0JsYm5sMWNuUTZjRzl2YkMxamIyOXlaR2x1CllYUnZjakVzTUNvR0ExVUVBeE1qYjNCbGJubDFjblE2Y0c5dmJDMWpiMjl5WkdsdVlYUnZjanBoY0dselpYSjIKWlhJd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzbWxYOE5nc0ZjN0l5R2l3MApEMnJxSlNDMFNNRDlaK0VkUnJjdVR4ckR0cVBwYVJoc2ZZekhIRHhDMFE0VHNEM2wwN3NxYldLVXdwRUdmR3FuCkRtZC9KdE9yQ05GVkFNczJZd2o4L1p6aCtQcmdlUkRia1lQWVpSbWhoaXNCeWgwb1pUblhQOUM4a1hLcU9nQkwKTDJKZEpBbHhpNkNRa0RCZlQ2dXBrQ3dia2d3dDYvc3kzYXludnRPN2RNYTFvMW9uY0g3S1RrOVBYa2t4TXBrTwp5c0NpWE0vYitwZGN6NThuczVtRCtJaUJHUllpeUFuRjRJZkZwTWg1OGlGWTVHems4VTlYandKeDhmOXdibWQ0CnVvcFQ3end4Uk9ud2JibWxNNElBNHd3Vnd4aVk4aGtvR29hcWVQRkhlMlNKYzNjTWlFVVJma21sMHlHdGQwU0cKWkRIUEFnTUJBQUdqZ2dHN01JSUJ0ekFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQgpCUVVIQXdFd0h3WURWUjBqQkJnd0ZvQVU2dlBuZnUvdDFYVkpmRDE4QjlLUy9pa2VWVlF3Z2dGdEJnTlZIUkVFCmdnRmtNSUlCWUlJYWNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2kxaGNHbHpaWEoyWlhLQ0puQnZiMnd0WTI5dmNtUnAKYm1GMGIzSXRZWEJwYzJWeWRtVnlMbXQxWW1VdGMzbHpkR1Z0Z2lwd2IyOXNMV052YjNKa2FXNWhkRzl5TFdGdwphWE5sY25abGNpNXJkV0psTFhONWMzUmxiUzV6ZG1PQ09IQnZiMnd0WTI5dmNtUnBibUYwYjNJdFlYQnBjMlZ5CmRtVnlMbXQxWW1VdGMzbHpkR1Z0TG5OMll5NWpiSFZ6ZEdWeUxteHZZMkZzZ2hwd2IyOXNMV052YjNKa2FXNWgKZEc5eUxXRndhWE5sY25abGNvSW1jRzl2YkMxamIyOXlaR2x1WVhSdmNpMWhjR2x6WlhKMlpYSXVhM1ZpWlMxegplWE4wWlcyQ0tuQnZiMnd0WTI5dmNtUnBibUYwYjNJdFlYQnBjMlZ5ZG1WeUxtdDFZbVV0YzNsemRHVnRMbk4yClk0STRjRzl2YkMxamIyOXlaR2x1WVhSdmNpMWhjR2x6WlhKMlpYSXVhM1ZpWlMxemVYTjBaVzB1YzNaakxtTnMKZFhOMFpYSXViRzlqWVd5SEJBcHA5NkNIQkFwcDk2QXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTExKWjhPWAphbC9YVEVIZ2ZHOWtxVFZFSGtpbDJGQjVZMk5BSk1XZVRyc0hPRk5oZmoxems2cmdQdnVRRkt2ZFVlMUZBWEYvCnFuMTVKQnh0SFpKRlJjV2NhcWt6a2RNZG1wM3B5aWdwelNCQW95aGhOMzBCSC95YjByWXlycjZMSVk0TWlMelYKazNpN2FkNUtlcEpRd3JucGVrQ3NLZXZ3M0ZqS3ZadCtEQzRDZklSalNXdWVoYk9DcGV3VW94WXM1U1NLdVlxQgp4bWJxbFVXTE1WMnNPQWhpTzBTSWxocnFQWSt6aGgwOVArTEJHWTF4VEJPUCtWb09CZTNrKzJtS01KbmtoUzRnCnI4L2U3QWpISy95aE1NZW1jc05YSXkzdGV6bFBFUWtCUHlzclJyanFOdUtzRGtrYkVYc0hxeGtMY2hKZWVvVXkKMG1tRHNCNkwrUk1vaFBFPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  apiserver.key: ****
  etcd-server.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVpakNDQTNLZ0F3SUJBZ0lJVVlGUVIyWHBBWWN3RFFZSktvWklodmNOQVFFTEJRQXdKREVpTUNBR0ExVUUKQXhNWmIzQmxibmwxY25RNmNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2pBZ0Z3MHlNekF4TXpFeE5ETXhORGRhR0E4eQpNVEl6TURFd056RTJNRGd3TkZvd1RURWlNQ0FHQTFVRUNoTVpiM0JsYm5sMWNuUTZjRzl2YkMxamIyOXlaR2x1CllYUnZjakVuTUNVR0ExVUVBeE1lYjNCbGJubDFjblE2Y0c5dmJDMWpiMjl5WkdsdVlYUnZjanBsZEdOa01JSUIKSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTQ4cmZ0ODVqRk9IR2M4VnB3aXhnUWVFawpsa2FxYWlIVi9MTnJ5Q0tEUVRwS2NrQmVEdHBIUHRJODg5Q2Z2bzBmUnFBcE5oQjdWSThtMEF6NGs4Nk9FT2YwCnA1Nml6RStMMjRNOFREOVh5UmdTd3psQmJlcmRRTmFMbTY1dWVsYmlZY29GbmdKa1BjeSt6NHRNdDMzM3RXbzkKVjNYVzk2NU83S01jM0ZIbHhhVE51QWI0NExCSDIwOFF1dGNVR2tncG1OTDFEUHZVZVAvUlBWYnNYbFlUaGt5eApsaFlsZUo0Um9ucU0zQnhqQlJxdDlLMDRqeXc1aC9Bb0k0K1Z6NFlyWTB0VTFUZ2J4YnJyZFJ2SXNNUXhKM0NzCmVuOFRWMU8yaVlRS3RCRjdIZDFTTEIxc3pLdElvN29CalVNeDZaRXBHQU9idVZudU9BeE90czNkaFZIYXF3SUQKQVFBQm80SUJrekNDQVk4d0RnWURWUjBQQVFIL0JBUURBZ1dnTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQgpNQjhHQTFVZEl3UVlNQmFBRk9yejUzN3Y3ZFYxU1h3OWZBZlNrdjRwSGxWVU1JSUJSUVlEVlIwUkJJSUJQRENDCkFUaUNGWEJ2YjJ3dFkyOXZjbVJwYm1GMGIzSXRaWFJqWklJaGNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2kxbGRHTmsKTG10MVltVXRjM2x6ZEdWdGdpVndiMjlzTFdOdmIzSmthVzVoZEc5eUxXVjBZMlF1YTNWaVpTMXplWE4wWlcwdQpjM1pqZ2pOd2IyOXNMV052YjNKa2FXNWhkRzl5TFdWMFkyUXVhM1ZpWlMxemVYTjBaVzB1YzNaakxtTnNkWE4wClpYSXViRzlqWVd5Q0ZYQnZiMnd0WTI5dmNtUnBibUYwYjNJdFpYUmpaSUloY0c5dmJDMWpiMjl5WkdsdVlYUnYKY2kxbGRHTmtMbXQxWW1VdGMzbHpkR1Z0Z2lWd2IyOXNMV052YjNKa2FXNWhkRzl5TFdWMFkyUXVhM1ZpWlMxegplWE4wWlcwdWMzWmpnak53YjI5c0xXTnZiM0prYVc1aGRHOXlMV1YwWTJRdWEzVmlaUzF6ZVhOMFpXMHVjM1pqCkxtTnNkWE4wWlhJdWJHOWpZV3lIQkFwdWVBYUhCQXB1ZUFZd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKaGEKU2hqa01JZ1luY2QyYXVwdkI3NG9rUzhQeEZKaHZmdGg0NkR3TVlYbjFLUG9qVC95ZHgyUUs0WVhST3lmM3Y0Vgp2R1dNQWxzL0pqRTZxTWFuNHBaTlRQMEFOaU9QZWJCYTR6SmRIeThZaEYreThpS0pmWWN2aEsrbWppUDVOdEtKCkVUMWlacENEeWhNdnRxQ1g2NWl1a2ZKQXV1MVU1cXRxT0grMFBuVEdWcXVIWDZlSXhuZzVKRTVLbUlJeXI0MjEKNnNSNThGYmxVTkdMRUZxZjliL2hrVEVuZjY0R1F1ajlwQ002ZThjTVZBWFZuSnhMd0w5b0RXUDBYdVFXUEhtRAp3T3Y1eDJVSlUrdFFKcllEQi9iUkRSTnlpa1locWtKSFJseUVVeEZ1SkNFVFZnQWVzYW9iZFZhbkRnb0RLQkt2Cm15ZzRTVEdYaUV4aVZRdGZrVjA9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  etcd-server.key: ****
kind: Secret
metadata:
  creationTimestamp: "2023-01-31T14:31:49Z"
  name: pool-coordinator-dynamic-certs
  namespace: kube-system
  resourceVersion: "27368"
  uid: 69b41b58-f64a-453f-80f7-ba187822cf2e
type: Opaque

@Congrool
Member

Congrool commented Feb 1, 2023

It seems that the etcd server cert does not contain the IP 127.0.0.1, while the apiserver uses that address to connect to it. Here's the decoded etcd server cert:

Common Name: openyurt:pool-coordinator:etcd
Issuing Certificate: openyurt:pool-coordinator
...
IP Address:10.110.120.6, IP Address:10.110.120.6
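
For anyone hitting this, a quick way to inspect the SANs stored in that secret (a sketch; the jsonpath key assumes the secret layout shown above):

kubectl get secret -n kube-system pool-coordinator-dynamic-certs \
  -o jsonpath='{.data.etcd-server\.crt}' | base64 -d \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'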

What is the version of yurt-controller-manager? Could you check it with the command

kubectl logs -n kube-system yurt-controller-manager-*** 2>&1 | head

From my point of view, I think you previously used an old ycm which created this cert without 127.0.0.1. Thus, when the new ycm is deployed, it skips recreating the cert if it finds that the secret already exists. So a quick solution: delete the deployment of yurt-controller-manager and also delete the secret pool-coordinator-dynamic-certs, then re-deploy it.
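
A minimal sketch of that workaround (resource names as above; how you re-deploy depends on your setup):

kubectl delete deployment -n kube-system yurt-controller-manager
kubectl delete secret -n kube-system pool-coordinator-dynamic-certs
# re-deploy yurt-controller-manager (e.g. via helm upgrade) so it regenerates the dynamic certs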

@luc99hen PTAL, do you have any suggestions?

@luc99hen
Member

luc99hen commented Feb 1, 2023

I agree. 127.0.0.1 was not added until PR #1122. Please make sure you have the latest yurt-controller-manager image.

@batthebee
Contributor Author

batthebee commented Feb 1, 2023

I use the v1.2.0 tag everywhere:

yurthub

yurthub version: projectinfo.Info{GitVersion:"v1.2.0", GitCommit:"33adccc", BuildDate:"2023-01-30T12:05:56Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{"v1.2.0"}}

yurtcontroller-manager

yurtcontroller-manager version: projectinfo.Info{GitVersion:"v1.2.0", GitCommit:"33adccc", BuildDate:"2023-01-30T12:10:52Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{"v1.2.0"}}

The secret is created by yurt-controller-manager:

k get po -n kube-system yurt-controller-manager-5984b45ff8-9sdgx -o wide
NAME                                       READY   STATUS    RESTARTS        AGE   IP             NODE                  NOMINATED NODE   READINESS GATES
yurt-controller-manager-5984b45ff8-9sdgx   1/1     Running   1 (11m ago)   12m   143.42.26.28   kubeadm-openyurt-w1   <none>           <none>
k get secret -n kube-system | grep pool
pool-coordinator-ca-certs                        Opaque                                2      12m
pool-coordinator-dynamic-certs                   Opaque                                4      12m
pool-coordinator-monitoring-kubeconfig           Opaque                                1      12m
pool-coordinator-static-certs                    Opaque                                8      12m
pool-coordinator-yurthub-certs                   Opaque                                5      12m

Same situation, no 127.0.0.1:

k get secret -n kube-system pool-coordinator-dynamic-certs -o yaml
apiVersion: v1
data:
  apiserver.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV0ekNDQTUrZ0F3SUJBZ0lJTlBoUDFXbkN4LzB3RFFZSktvWklodmNOQVFFTEJRQXdKREVpTUNBR0ExVUUKQXhNWmIzQmxibmwxY25RNmNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2pBZ0Z3MHlNekF5TURFeU1UTXpNVGxhR0E4eQpNVEl6TURFd09ESXhNemswTjFvd1VqRWlNQ0FHQTFVRUNoTVpiM0JsYm5sMWNuUTZjRzl2YkMxamIyOXlaR2x1CllYUnZjakVzTUNvR0ExVUVBeE1qYjNCbGJubDFjblE2Y0c5dmJDMWpiMjl5WkdsdVlYUnZjanBoY0dselpYSjIKWlhJd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURoc1NMVGcxK0NhQ0txdzFhSApEN0dhK2tFOGdkeGNtUTZDVHpabEN2czBKOVRvSGZRZWlxREZrdGhrN2Exd2dJTVlsVzRPdUhKWnV4Z0JwVlUvCkZJUS9kbm5rUjdKTUZvL3ZuYTNPVXFHdnlqRGJZanpXR0ZPYnFPY2F1dnBFM3BXOVN6L3lOazRwcUFXQnJ5US8KRkhwTDh2UTFZTWNCZWw3U0pPcUJ6cTA4elRKM1V5WmkwdTFVQTI0WE1Sa2RBODBwMzREQVBZaGZCOE10SEdGYgo5WnlLdWhudFpMekxKWTBZcHF6T3MxNXQ3Y1ZIazJQSW50c2U2TXhDelJvSE9lbUV1N0gzUHI5L2dmeHhKc2J1CjgvQmU5b0wzVlVhVW16R2x2dUVCc0poMityL3labW9veWZCcE4rbW9mdEVhekJzYWo4dXhNK1hJQWQ4QTQ1OSsKUWNNQkFnTUJBQUdqZ2dHN01JSUJ0ekFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQgpCUVVIQXdFd0h3WURWUjBqQkJnd0ZvQVVpTXU2SlpLc2hCbDhJNDZHekxMRkl4aW9wOEV3Z2dGdEJnTlZIUkVFCmdnRmtNSUlCWUlJYWNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2kxaGNHbHpaWEoyWlhLQ0puQnZiMnd0WTI5dmNtUnAKYm1GMGIzSXRZWEJwYzJWeWRtVnlMbXQxWW1VdGMzbHpkR1Z0Z2lwd2IyOXNMV052YjNKa2FXNWhkRzl5TFdGdwphWE5sY25abGNpNXJkV0psTFhONWMzUmxiUzV6ZG1PQ09IQnZiMnd0WTI5dmNtUnBibUYwYjNJdFlYQnBjMlZ5CmRtVnlMbXQxWW1VdGMzbHpkR1Z0TG5OMll5NWpiSFZ6ZEdWeUxteHZZMkZzZ2hwd2IyOXNMV052YjNKa2FXNWgKZEc5eUxXRndhWE5sY25abGNvSW1jRzl2YkMxamIyOXlaR2x1WVhSdmNpMWhjR2x6WlhKMlpYSXVhM1ZpWlMxegplWE4wWlcyQ0tuQnZiMnd0WTI5dmNtUnBibUYwYjNJdFlYQnBjMlZ5ZG1WeUxtdDFZbVV0YzNsemRHVnRMbk4yClk0STRjRzl2YkMxamIyOXlaR2x1WVhSdmNpMWhjR2x6WlhKMlpYSXVhM1ZpWlMxemVYTjBaVzB1YzNaakxtTnMKZFhOMFpYSXViRzlqWVd5SEJBcHFIUGVIQkFwcUhQY3dEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSGpjbFp2ZgpydlZhZ3hSR0Y1UDlVZzRWQ1R1dmhKVGxSZHBkNEpGU0QzSlNJT3huZzQ4TytqdFl4ZVMyYmh3M0VKM3dkWXFECnpNNGtmQW1nUjBoZjVSZTFXeDh4QjdVVHFnNkpmMkZWZ2JYM1I1UEFCUVpBR0d5UFk3MFBxWVh0ZS9RdGZlVngKUHRxNjB6VWdINFUremtsdFNabTNyMkphbUFhcUxQNlc1M2Y3OWYxTm1ncTZ4ejBZbWtLbjQzb0hDNUhsb2V1dgpMeTNTWHlydzBYYVBZMXJLZFJJTElTbmxpMkFEME5FdkxnUHBnbWZ1RE9HNng3anYzaTkwbkZONitBT2N1dTFGCmpqa3M5NURMWjhrdzFxL0E3ekFubG9iN212WGI5alFKcTBITGlmNmZlOXUzTjdISFNyelpJYk5tZkpzbXVtaXQKcEt0OURXODdlYzQyWmNRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  apiserver.key: ****
  etcd-server.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVpakNDQTNLZ0F3SUJBZ0lJZmhoUi9MM25oWWt3RFFZSktvWklodmNOQVFFTEJRQXdKREVpTUNBR0ExVUUKQXhNWmIzQmxibmwxY25RNmNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2pBZ0Z3MHlNekF5TURFeU1UTXpNVGxhR0E4eQpNVEl6TURFd09ESXhNemswT0Zvd1RURWlNQ0FHQTFVRUNoTVpiM0JsYm5sMWNuUTZjRzl2YkMxamIyOXlaR2x1CllYUnZjakVuTUNVR0ExVUVBeE1lYjNCbGJubDFjblE2Y0c5dmJDMWpiMjl5WkdsdVlYUnZjanBsZEdOa01JSUIKSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBOclVybW9hQmIzZG1KZ2VzTEtkbldiRgpFZVB4a2hTalhOdmFPSUR5RzVNcHBVeXpZdmoveWVNTE9BcVlSdmxGdzFZeVg2SS94VzRlN2VPYzJkOUg3MVdVCk5oVzhpOFYxQk1odlRyR282TDRGdk1uNk90WWlaekhLbGpHNklnZ1MwTUEzYUJETXJwQWx4WFlQYWVSUWdNOFMKV1I2OTZTOUJ5VGpTZVlkS2ZpSVlWb0N3b0tVRlNudUNKZXFOZkRTYncvYkUvanhRMWdFRmRIWk9XcnVIa2ZnYgpTVjFzbWZtTjkvZThiQ1U4OERoTHJ0aUxjanIyVDljdjRybm1sdEJ0VWhXeXg1d21FOE84dkhadkVlZFF2WkJ2ClZoUEtwRzFFRElVclNYdVliUlpmVGhFYlg2WVJJdFNWa0VVeU93djArQnd2N0d3c0V5RStIdTFySXJWQnBRSUQKQVFBQm80SUJrekNDQVk4d0RnWURWUjBQQVFIL0JBUURBZ1dnTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQgpNQjhHQTFVZEl3UVlNQmFBRklqTHVpV1NySVFaZkNPT2hzeXl4U01ZcUtmQk1JSUJSUVlEVlIwUkJJSUJQRENDCkFUaUNGWEJ2YjJ3dFkyOXZjbVJwYm1GMGIzSXRaWFJqWklJaGNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2kxbGRHTmsKTG10MVltVXRjM2x6ZEdWdGdpVndiMjlzTFdOdmIzSmthVzVoZEc5eUxXVjBZMlF1YTNWaVpTMXplWE4wWlcwdQpjM1pqZ2pOd2IyOXNMV052YjNKa2FXNWhkRzl5TFdWMFkyUXVhM1ZpWlMxemVYTjBaVzB1YzNaakxtTnNkWE4wClpYSXViRzlqWVd5Q0ZYQnZiMnd0WTI5dmNtUnBibUYwYjNJdFpYUmpaSUloY0c5dmJDMWpiMjl5WkdsdVlYUnYKY2kxbGRHTmtMbXQxWW1VdGMzbHpkR1Z0Z2lWd2IyOXNMV052YjNKa2FXNWhkRzl5TFdWMFkyUXVhM1ZpWlMxegplWE4wWlcwdWMzWmpnak53YjI5c0xXTnZiM0prYVc1aGRHOXlMV1YwWTJRdWEzVmlaUzF6ZVhOMFpXMHVjM1pqCkxtTnNkWE4wWlhJdWJHOWpZV3lIQkFwc0lQQ0hCQXBzSVBBd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGTmwKc2JmS0IxbzYvbGYwN2NwbWpFTG0vRGM1aE9tejM0VlZCdDRSRlRZbGtHS0V2S3NNQUtZSWtjd1dxdXNEdFpoagphTEVpcHJOYmRhK2VZcW5LcjZ4SE9xa3BPd29zVjRCd3ozZGtWWnozZDh6Z3lxWUk2ZzNSaXVvN3BmSmtRMjh2Cnc4STNhS2xhQjdmcXRISEVrV2xLSlIySk5CVDFyUzVIQWN6dHVjY1hkcG9RcUlaN0J6cmZZM0VNZlFOaXc2VEoKT2pQZy95cDV5OTZWNjk4T2cvNUc3cW9IWWttZEtFVkl0cVQ2UmdqVWx5RFBuWE1EMEV2eldUWDRJRnJ3VnRidApIaGVXbHYrK2Y4NFVKK2lmY0pqaUxZUWJEQmJJcmNvaHBIMWg0b1BlRUxWOFJjdk5rYUhuR3ROZ083N1l6RW1tCk16eUkxUFhjMHJSdkNmQURiWmc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  etcd-server.key:  ****
kind: Secret
metadata:
  creationTimestamp: "2023-02-01T21:33:21Z"
  name: pool-coordinator-dynamic-certs
  namespace: kube-system
  resourceVersion: "2694"
  uid: 466fb0aa-95e6-40aa-9cb3-e61dae54540f
type: Opaque

I did a node restart, so it uses the existing secret:

k logs -n kube-system yurt-controller-manager-5984b45ff8-9sdgx
yurtcontroller-manager version: projectinfo.Info{GitVersion:"v1.2.0", GitCommit:"33adccc", BuildDate:"2023-01-30T12:10:52Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{"v1.2.0"}}
W0201 21:39:26.805886       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0201 21:39:26.820888       1 leaderelection.go:248] attempting to acquire leader lease kube-system/yurt-controller-manager...
I0201 21:39:42.349704       1 event.go:282] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"yurt-controller-manager", UID:"ea177c96-6dfb-41b9-956b-ad8422cad4a8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"2638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubeadm-openyurt-w1_e68e9b3b-bb32-4094-a5c2-240eed6474c8 became leader
I0201 21:39:42.350305       1 leaderelection.go:258] successfully acquired lease kube-system/yurt-controller-manager
I0201 21:39:42.372211       1 controllermanager.go:373] Started "poolcoordinatorcertmanager"
I0201 21:39:42.373525       1 controllermanager.go:373] Started "poolcoordinator"
I0201 21:39:42.374974       1 poolcoordinator_cert_manager.go:220] Starting poolcoordinatorCertManager controller
I0201 21:39:42.389275       1 poolcoordinator_cert_manager.go:368] CA already exist in secret, reuse it
I0201 21:39:42.392040       1 csrapprover.go:125] v1.CertificateSigningRequest is supported.
I0201 21:39:42.397899       1 controllermanager.go:373] Started "yurtcsrapprover"
I0201 21:39:42.398925       1 csrapprover.go:185] starting the crsapprover
I0201 21:39:42.403087       1 controllermanager.go:373] Started "daemonpodupdater"
I0201 21:39:42.403141       1 daemon_pod_updater_controller.go:215] Starting daemonPodUpdater controller
I0201 21:39:42.404086       1 poolcoordinator_cert_manager.go:313] cert apiserver-etcd-client not change, reuse it
I0201 21:39:42.413296       1 poolcoordinator_cert_manager.go:313] cert pool-coordinator-yurthub-client not change, reuse it
I0201 21:39:42.419354       1 servicetopology.go:297] v1.EndpointSlice is supported.
I0201 21:39:42.419493       1 controllermanager.go:373] Started "servicetopologycontroller"
I0201 21:39:42.420614       1 controllermanager.go:373] Started "podbinding"
I0201 21:39:42.430487       1 servicetopology.go:93] starting the service topology controller
I0201 21:39:42.489657       1 csrapprover.go:174] csr(csr-mbmnr) is not yurt-csr
I0201 21:39:42.489761       1 csrapprover.go:174] csr(csr-bm8w5) is not yurt-csr
I0201 21:39:42.489952       1 csrapprover.go:174] csr(csr-ghs7g) is not yurt-csr
I0201 21:39:42.490082       1 csrapprover.go:174] csr(csr-g77xc) is not yurt-csr
I0201 21:39:42.490125       1 csrapprover.go:174] csr(csr-6ft4d) is not yurt-csr
I0201 21:39:42.490170       1 csrapprover.go:174] csr(csr-cdlmq) is not yurt-csr
I0201 21:39:42.531033       1 pod_binding_controller.go:274] start pod binding workers
I0201 21:39:42.531149       1 servicetopology.go:99] sync service topology controller succeed
I0201 21:39:42.575545       1 poolcoordinator_controller.go:223] start node taint workers
I0201 21:39:43.426549       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0201 21:39:43.426617       1 poolcoordinator_cert_manager.go:306] cert apiserver IP has changed
I0201 21:39:44.441708       1 util.go:62] pool-coordinator-etcd service is ready for poolcoordinator_cert_manager
I0201 21:39:44.441950       1 poolcoordinator_cert_manager.go:306] cert etcd-server IP has changed
I0201 21:39:45.460757       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0201 21:39:45.460838       1 poolcoordinator_cert_manager.go:306] cert kubeconfig IP has changed
I0201 21:39:46.479969       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0201 21:39:46.480044       1 poolcoordinator_cert_manager.go:306] cert admin.conf IP has changed
I0201 21:39:47.715376       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0201 21:39:47.750655       1 certificate.go:393] successfully write apiserver cert/key into pool-coordinator-dynamic-certs
I0201 21:39:48.886565       1 util.go:62] pool-coordinator-etcd service is ready for poolcoordinator_cert_manager
I0201 21:39:48.922822       1 certificate.go:393] successfully write etcd-server cert/key into pool-coordinator-dynamic-certs
I0201 21:39:50.179241       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0201 21:39:50.211561       1 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-monitoring-kubeconfig
I0201 21:39:51.623396       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0201 21:39:51.660040       1 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-static-certs
W0201 21:39:51.661273       1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/tmp/yurt-controller-manager_poolcoordinator-apiserver-client-current.pem", ("", "") or ("/tmp", "/tmp"), will regenerate it
I0201 21:39:51.661512       1 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0201 21:39:51.661839       1 certificate_manager.go:446] kubernetes.io/kube-apiserver-client: Rotating certificates
I0201 21:39:51.671056       1 csrapprover.go:168] non-approved and non-denied csr, enqueue: csr-8p25x
I0201 21:39:51.688995       1 csrapprover.go:282] successfully approve yurt-csr(csr-8p25x)
I0201 21:39:52.755951       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-01 21:34:51 +0000 UTC, rotation deadline is 2023-10-19 15:37:39.605823813 +0000 UTC
I0201 21:39:52.756086       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 6233h57m46.849744514s for next certificate rotation
I0201 21:39:53.757118       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-01 21:34:51 +0000 UTC, rotation deadline is 2023-12-11 19:33:02.562444636 +0000 UTC
I0201 21:39:53.757175       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 7509h53m8.805274568s for next certificate rotation
I0201 21:39:56.662426       1 certificate.go:309] yurt-controller-manager_poolcoordinator certificate signed successfully
I0201 21:39:56.731058       1 certificate.go:357] successfully write apiserver-kubelet-client cert/key pair into pool-coordinator-static-certs
W0201 21:39:56.731268       1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/tmp/yurthub-current.pem", ("", "") or ("/tmp", "/tmp"), will regenerate it
I0201 21:39:56.731288       1 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0201 21:39:56.731366       1 certificate_manager.go:446] kubernetes.io/kube-apiserver-client: Rotating certificates
I0201 21:39:56.741852       1 csrapprover.go:168] non-approved and non-denied csr, enqueue: csr-gcb7g
I0201 21:39:56.756910       1 csrapprover.go:282] successfully approve yurt-csr(csr-gcb7g)
I0201 21:39:57.776928       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-01 21:34:56 +0000 UTC, rotation deadline is 2023-12-07 18:03:00.066118681 +0000 UTC
I0201 21:39:57.777023       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 7412h23m2.289103329s for next certificate rotation
I0201 21:39:58.777618       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-01 21:34:56 +0000 UTC, rotation deadline is 2023-11-12 12:45:58.181170244 +0000 UTC
I0201 21:39:58.778087       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 6807h5m59.403093244s for next certificate rotation
I0201 21:40:01.733463       1 certificate.go:309] yurt-controller-manager_poolcoordinator certificate signed successfully
I0201 21:40:01.792956       1 certificate.go:357] successfully write node-lease-proxy-client cert/key pair into pool-coordinator-yurthub-certs
I0201 21:40:01.855719       1 certificate.go:393] successfully write ca cert/key into pool-coordinator-static-certs
I0201 21:40:01.888241       1 certificate.go:393] successfully write ca cert/key into pool-coordinator-yurthub-certs
I0201 21:40:02.168209       1 certificate.go:438] successfully write key pair into secret pool-coordinator-static-certs
I0201 21:40:24.774068       1 pod_binding_controller.go:132] pod(kube-system/pool-coordinator-edge-wnfrg-6f6599575c-hqb59) tolerations should be handled for pod update
I0201 21:40:24.774569       1 pod_binding_controller.go:197] pod(kube-system/pool-coordinator-edge-wnfrg-6f6599575c-hqb59) => toleratesNodeNotReady=false, toleratesNodeUnreachable=false, tolerationSeconds=<nil>
I0201 21:41:34.091635       1 pod_binding_controller.go:132] pod(kube-system/pool-coordinator-edge-wnfrg-6f6599575c-hqb59) tolerations should be handled for pod update
I0201 21:41:34.091755       1 pod_binding_controller.go:197] pod(kube-system/pool-coordinator-edge-wnfrg-6f6599575c-hqb59) => toleratesNodeNotReady=false, toleratesNodeUnreachable=false, tolerationSeconds=<nil>
I0201 21:41:34.467399       1 pod_binding_controller.go:132] pod(kube-system/pool-coordinator-edge-wnfrg-6f6599575c-hqb59) tolerations should be handled for pod update

After deleting the pod and the secret:

k logs -n kube-system yurt-controller-manager-5984b45ff8-x6lgq -f
yurtcontroller-manager version: projectinfo.Info{GitVersion:"v1.2.0", GitCommit:"33adccc", BuildDate:"2023-01-30T12:10:52Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{"v1.2.0"}}
W0201 21:55:40.443570       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0201 21:55:40.456031       1 leaderelection.go:248] attempting to acquire leader lease kube-system/yurt-controller-manager...
I0201 21:55:57.329058       1 leaderelection.go:258] successfully acquired lease kube-system/yurt-controller-manager
I0201 21:55:57.332669       1 event.go:282] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"yurt-controller-manager", UID:"ea177c96-6dfb-41b9-956b-ad8422cad4a8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"6647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubeadm-openyurt-w2_7b81be19-7f2f-4fab-b69e-8eda25b8b2c5 became leader
I0201 21:55:57.346621       1 csrapprover.go:125] v1.CertificateSigningRequest is supported.
I0201 21:55:57.346774       1 controllermanager.go:373] Started "yurtcsrapprover"
I0201 21:55:57.346805       1 csrapprover.go:185] starting the crsapprover
I0201 21:55:57.349832       1 controllermanager.go:373] Started "daemonpodupdater"
I0201 21:55:57.349975       1 daemon_pod_updater_controller.go:215] Starting daemonPodUpdater controller
I0201 21:55:57.354597       1 servicetopology.go:297] v1.EndpointSlice is supported.
I0201 21:55:57.354747       1 controllermanager.go:373] Started "servicetopologycontroller"
I0201 21:55:57.356259       1 servicetopology.go:93] starting the service topology controller
I0201 21:55:57.361047       1 controllermanager.go:373] Started "podbinding"
I0201 21:55:57.362885       1 controllermanager.go:373] Started "poolcoordinatorcertmanager"
I0201 21:55:57.364094       1 controllermanager.go:373] Started "poolcoordinator"
I0201 21:55:57.365155       1 poolcoordinator_cert_manager.go:220] Starting poolcoordinatorCertManager controller
I0201 21:55:57.377058       1 csrapprover.go:174] csr(csr-bm8w5) is not yurt-csr
I0201 21:55:57.377236       1 csrapprover.go:174] csr(csr-ghs7g) is not yurt-csr
I0201 21:55:57.379191       1 csrapprover.go:174] csr(csr-g77xc) is not yurt-csr
I0201 21:55:57.379258       1 csrapprover.go:174] csr(csr-6ft4d) is not yurt-csr
I0201 21:55:57.379277       1 csrapprover.go:174] csr(csr-cdlmq) is not yurt-csr
I0201 21:55:57.379406       1 csrapprover.go:174] csr(csr-mbmnr) is not yurt-csr
I0201 21:55:57.381911       1 poolcoordinator_cert_manager.go:368] CA already exist in secret, reuse it
I0201 21:55:57.393848       1 poolcoordinator_cert_manager.go:313] cert apiserver-etcd-client not change, reuse it
I0201 21:55:57.398824       1 poolcoordinator_cert_manager.go:313] cert pool-coordinator-yurthub-client not change, reuse it
I0201 21:55:57.403009       1 poolcoordinator_cert_manager.go:285] can not load cert apiserver from pool-coordinator-dynamic-certs secret
I0201 21:55:57.406382       1 poolcoordinator_cert_manager.go:285] can not load cert etcd-server from pool-coordinator-dynamic-certs secret
I0201 21:55:57.458295       1 servicetopology.go:99] sync service topology controller succeed
I0201 21:55:57.462459       1 pod_binding_controller.go:274] start pod binding workers
I0201 21:55:57.465703       1 poolcoordinator_controller.go:223] start node taint workers
I0201 21:55:58.417661       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0201 21:55:58.417690       1 poolcoordinator_cert_manager.go:306] cert kubeconfig IP has changed
I0201 21:55:59.432748       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0201 21:55:59.432783       1 poolcoordinator_cert_manager.go:306] cert admin.conf IP has changed
I0201 21:56:00.596307       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0201 21:56:00.625945       1 certificate.go:393] successfully write apiserver cert/key into pool-coordinator-dynamic-certs

Still no 127.0.0.1:

apiVersion: v1
data:
  apiserver.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVCRENDQXV5Z0F3SUJBZ0lJYnlVdEdDVEo0eG93RFFZSktvWklodmNOQVFFTEJRQXdKREVpTUNBR0ExVUUKQXhNWmIzQmxibmwxY25RNmNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2pBZ0Z3MHlNekF5TURFeU1UTXpNVGxhR0E4eQpNVEl6TURFd09ESXhOVFl3TUZvd1VqRWlNQ0FHQTFVRUNoTVpiM0JsYm5sMWNuUTZjRzl2YkMxamIyOXlaR2x1CllYUnZjakVzTUNvR0ExVUVBeE1qYjNCbGJubDFjblE2Y0c5dmJDMWpiMjl5WkdsdVlYUnZjanBoY0dselpYSjIKWlhJd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNsVmhLclErbG9WNmsybVFkOApacTRDL1ZkQXh1WTgvckF0R0xPMkh1Tm1XODNYNVEyQ3cyVXIzNUkrbTkrSkZtK2xQeGErVkttTTVaWmFsTCtHCi9HQ1hsWk5kL0JEMk9kUllteVNvNzVLL3FieVRXejJaRGZnbVJoSTlxcDJqaUpxSG9BcGNaQ1N4ckNLSUkrTzEKZkRzNkp2aUZlaWhRN0xvaiszSDI2Ym5aTkY3MGRGRUNQcjNEOFZhd085UzlaRHc5UWpncGJDajc0Z1REemQ1cwo5Yk92SEJhVzVMMkpHV3V5MlY2T1YyUnpwdXM2SHpqaGl6bE1EUkZMYjBWWldKVyt3QWQ3WGJ5WTNieGRPdUZ5CnNiNllTK3c4dHZ5WTlLdVR3eVRPVmE2SEFaWHpsaFJhMWFKRzBkSFByc3hyK3c0bkZGaG8zdURrdE8rWXorcU0Kd0xrSEFnTUJBQUdqZ2dFSU1JSUJCREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQgpCUVVIQXdFd0h3WURWUjBqQkJnd0ZvQVVpTXU2SlpLc2hCbDhJNDZHekxMRkl4aW9wOEV3Z2JzR0ExVWRFUVNCCnN6Q0JzSUlhY0c5dmJDMWpiMjl5WkdsdVlYUnZjaTFoY0dselpYSjJaWEtDSm5CdmIyd3RZMjl2Y21ScGJtRjAKYjNJdFlYQnBjMlZ5ZG1WeUxtdDFZbVV0YzNsemRHVnRnaXB3YjI5c0xXTnZiM0prYVc1aGRHOXlMV0Z3YVhObApjblpsY2k1cmRXSmxMWE41YzNSbGJTNXpkbU9DT0hCdmIyd3RZMjl2Y21ScGJtRjBiM0l0WVhCcGMyVnlkbVZ5CkxtdDFZbVV0YzNsemRHVnRMbk4yWXk1amJIVnpkR1Z5TG14dlkyRnNod1FLYWh6M01BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUJlMDNFdFVaMklVNWlESkV2Y3k1ajRHVFJkN2ozLzNWVU04aFJvQklMa2VwL3B3K2I2dHk4ago3ekJsUmNNbUFtaEIxQVRBSzNlczNXcUY4RVg4TUZ5dlBxcm82cGZyczlSeTR1R1k0R1U3QXVhVUtGRGczOGdnCjlYS0R4Q2Z1S3RyNDJjNkhrVjdNNitmVjdQU1g1K0poL2ZhRkJCZnUzaDVWSGcreFJ5U0ZXTVJTU0NKSllLVXQKUHBSdGE5TlNLdUlSeVFZWjBMdmwzczVyZ3VKaXBlenZNSGN0aGtQd3MwNzlZVnQydmM2cGE3ZTJkUHBRZTVXYQpTM0swMEVXOThBOUpWMy9DYmNTTTRGdVcvSnFBZ21nc3ZEaEdkZUdyczdOT09peEdyb2lGYjZzemhZWGZrem5JCkVVUjdSS2xZUnZZem5oemh2ejFIanJSWUVsYk41TUFYCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  apiserver.key: ****
  etcd-server.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ3ekNDQXRlZ0F3SUJBZ0lJSU9hYzZ1ZjNPQmd3RFFZSktvWklodmNOQVFFTEJRQXdKREVpTUNBR0ExVUUKQXhNWmIzQmxibmwxY25RNmNHOXZiQzFqYjI5eVpHbHVZWFJ2Y2pBZ0Z3MHlNekF5TURFeU1UTXpNVGxhR0E4eQpNVEl6TURFd09ESXhOVFl3TVZvd1RURWlNQ0FHQTFVRUNoTVpiM0JsYm5sMWNuUTZjRzl2YkMxamIyOXlaR2x1CllYUnZjakVuTUNVR0ExVUVBeE1lYjNCbGJubDFjblE2Y0c5dmJDMWpiMjl5WkdsdVlYUnZjanBsZEdOa01JSUIKSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTI4ZElMZnd4Wkd1Z0FzRGd1cUsxTGNpTQpPVjJEcHFVYlhLd3BYeDA2S291UWVySVhpUFNvMHgxdG0vRzIyVys3aGVvSFdHV0FsUTFZcXpsMGE3bisyb3hNCnJER20wY0VDUDJLWkRQTWw5bE1xSjJPNDRDUTU3cGt3eE04SGJERjVPdWNRVkFxblpuUnNuQWNPQ3BTU3dQd3cKbTRQNEN4TXNuOHE1SnpHMERGY1BVMlZKakt5TGpNVnhMTTdOMUh3R0d4ajVFcHZTdmRJc1ZQaDBpNWtTelcrSApLU2lpT2JTYzA0Sm1WVzBiZEFhL0g4OXV2amhZVkZoTnBOSzhXeTkxc0EyOUJZMnRCeGZMSkNyWHlGSlFub2VEClFTZlg0OFlqbmlkb0t2NHVYOVkwb1NUTExxV09Yb0ZtaklMWnlvUDZoNlFBVXB2dVBKMXBNNElBL1hPM0Z3SUQKQVFBQm80SDVNSUgyTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUZCUWNEQVRBZgpCZ05WSFNNRUdEQVdnQlNJeTdvbGtxeUVHWHdqam9iTXNzVWpHS2lud1RDQnJRWURWUjBSQklHbE1JR2lnaFZ3CmIyOXNMV052YjNKa2FXNWhkRzl5TFdWMFkyU0NJWEJ2YjJ3dFkyOXZjbVJwYm1GMGIzSXRaWFJqWkM1cmRXSmwKTFhONWMzUmxiWUlsY0c5dmJDMWpiMjl5WkdsdVlYUnZjaTFsZEdOa0xtdDFZbVV0YzNsemRHVnRMbk4yWTRJegpjRzl2YkMxamIyOXlaR2x1WVhSdmNpMWxkR05rTG10MVltVXRjM2x6ZEdWdExuTjJZeTVqYkhWemRHVnlMbXh2ClkyRnNod1IvQUFBQmh3UUtiQ0R3TUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBNVF0SitVUkZITUgrNUFqTUIKQzIwRlJidmJmZTVMYkFmVVVVTWxnZytnM3JzY1JwNlgxUmd4WGIxcERERmZSVWxYSEJtZ2REU1Y5QnYydkVvUQp5WHg2azlxWGZIZm5pWVB6V2gvcElnUWVQRC9yQ3U0T3FEd3VyMDlaSFEzSWtnVUdBNkxFQW1hM3lacUg4bmI4CkEzOVZuOXhJVW5QWGU3WllLSGp2Sy91WVh6a0lZUGVLSWJldytmTVRqRjZnL3V4Ty8zdTVJZjc5WFZvNnZBUEsKRzc1Zm9VVGRJK3hMSVVNYzNYWkZmZVpxQ3gxNW5wenFDbmpuRFQwQ1g5S1FkY0ZaL05ZOUdMQVlUNFcvQ3dyQgp6T3Y1eDJHODdrMjltVDZmekM0cktsTUdsZ0NxNXA3QkVRUTRmbzVEd2hvbWdZN2NxQ256U0lUdlRjTE9TLytaCnlFaGgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  etcd-server.key: ****
kind: Secret
metadata:
  creationTimestamp: "2023-02-01T21:56:00Z"
  name: pool-coordinator-dynamic-certs
  namespace: kube-system
  resourceVersion: "6673"
  uid: bb90ae04-f74e-4b64-888f-20c7747b88b0
type: Opaque

@luc99hen
Member

luc99hen commented Feb 2, 2023

I have done some tests. It's weird that the image yurt-controller-manager@v1.2.0 could not create the right etcd-server cert, as @batthebee said. However, yurt-controller-manager@latest is OK. Maybe we should update the image @rambohe-ch.

@batthebee
Contributor Author

Using the "latest" tag for yurt-controller-manager causes:

k logs -n kube-system yurt-controller-manager-68bd8d5bf4-n87zd
yurtcontroller-manager version: projectinfo.Info{GitVersion:"-5f7f1a2", GitCommit:"5f7f1a2", BuildDate:"2023-01-24T02:05:31Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{""}}
W0202 17:26:03.303221       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0202 17:26:03.311249       1 leaderelection.go:248] attempting to acquire leader lease kube-system/yurt-controller-manager...
I0202 17:26:03.423187       1 leaderelection.go:258] successfully acquired lease kube-system/yurt-controller-manager
I0202 17:26:03.423645       1 event.go:282] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"yurt-controller-manager", UID:"032934be-9b33-45da-a46f-7fb603132f5a", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"1504", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubeadm-openyurt-w1_c8734fbe-fd1b-4fa0-993d-e0f11fd1a382 became leader
I0202 17:26:03.447863       1 controllermanager.go:373] Started "webhookmanager"
I0202 17:26:03.450178       1 controllermanager.go:373] Started "poolcoordinatorcertmanager"
I0202 17:26:03.451626       1 controllermanager.go:373] Started "poolcoordinator"
I0202 17:26:03.450491       1 poolcoordinator_cert_manager.go:219] Starting poolcoordinatorCertManager controller
I0202 17:26:03.472793       1 csrapprover.go:125] v1.CertificateSigningRequest is supported.
I0202 17:26:03.472903       1 controllermanager.go:373] Started "yurtcsrapprover"
I0202 17:26:03.477290       1 controllermanager.go:373] Started "daemonpodupdater"
I0202 17:26:03.490294       1 csrapprover.go:185] starting the crsapprover
I0202 17:26:03.490796       1 daemon_pod_updater_controller.go:215] Starting daemonPodUpdater controller
I0202 17:26:03.491301       1 poolcoordinator_cert_manager.go:367] fail to get CA from secret: secrets "pool-coordinator-ca-certs" not found, create new CA
I0202 17:26:03.505174       1 servicetopology.go:297] v1.EndpointSlice is supported.
I0202 17:26:03.505540       1 controllermanager.go:373] Started "servicetopologycontroller"
I0202 17:26:03.506578       1 servicetopology.go:93] starting the service topology controller
I0202 17:26:03.595764       1 csrapprover.go:174] csr(csr-l9xss) is not yurt-csr
I0202 17:26:03.596104       1 csrapprover.go:174] csr(csr-dn7lk) is not yurt-csr
I0202 17:26:03.597914       1 csrapprover.go:174] csr(csr-f6spv) is not yurt-csr
I0202 17:26:03.598382       1 csrapprover.go:174] csr(csr-kmk2d) is not yurt-csr
I0202 17:26:03.598696       1 csrapprover.go:174] csr(csr-wfjxv) is not yurt-csr
I0202 17:26:03.598908       1 csrapprover.go:174] csr(csr-5jfjk) is not yurt-csr
I0202 17:26:03.666165       1 poolcoordinator_controller.go:227] start node taint workers
I0202 17:26:03.713486       1 servicetopology.go:99] sync service topology controller succeed
I0202 17:26:03.828490       1 certificate.go:393] successfully write ca cert/key into pool-coordinator-ca-certs
I0202 17:26:04.063921       1 certificate.go:393] successfully write apiserver-etcd-client cert/key into pool-coordinator-static-certs
I0202 17:26:04.237699       1 webhook.go:99] tls key and cert ok.
I0202 17:26:04.238454       1 poolcoordinator_webhook.go:659] populate nodepool map
I0202 17:26:04.238714       1 poolcoordinator_webhook.go:666] start nodepool maintenance worker
F0202 17:26:04.244908       1 poolcoordinator_webhook.go:595] failed to get validatewebhookconfiguration yurt-controller-manager, validatingwebhookconfigurations.admissionregistration.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:yurt-controller-manager" cannot get resource "validatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope
goroutine 106 [running]:
k8s.io/klog/v2.stacks(0x1)
	/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1026 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x2d2aae0, 0x3, {0x0, 0x0}, 0xc0003ca9a0, 0x0, {0x22d4e3f, 0xc00052ca60}, 0x0, 0x0)
	/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:975 +0x63d
k8s.io/klog/v2.(*loggingT).printf(0x0, 0x0, {0x0, 0x0}, {0x0, 0x0}, {0x1bdd972, 0x31}, {0xc00052ca60, 0x2, ...})
	/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:753 +0x1e5
k8s.io/klog/v2.Fatalf(...)
	/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1514
github.com/openyurtio/openyurt/pkg/webhook.(*PoolCoordinatorWebhook).ensureValidatingConfiguration(0xc000167e00, 0xc00059e460)
	/build/pkg/webhook/poolcoordinator_webhook.go:595 +0x79c
github.com/openyurtio/openyurt/pkg/webhook.(*PoolCoordinatorWebhook).Init(0xc000167e00, 0x1, 0xc000131200)
	/build/pkg/webhook/poolcoordinator_webhook.go:674 +0x3b7
github.com/openyurtio/openyurt/pkg/webhook.(*WebhookManager).Run(0xc0005b23c0, 0x0)
	/build/pkg/webhook/webhook.go:108 +0x76d
created by github.com/openyurtio/openyurt/cmd/yurt-controller-manager/app.startWebhookManager
	/build/cmd/yurt-controller-manager/app/core.go:96 +0xb2

Tested with both the v1.2.0 and the master-branch Helm charts.

@batthebee
Contributor Author

Adding this to the yurt-controller-manager ClusterRole temporarily fixes the problem:

- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  - mutatingwebhookconfigurations
  verbs:
  - get
  - list
  - watch
  - patch
  - create
  - delete
  - update
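
For reference, the same rule can also be applied without editing the chart by patching the ClusterRole in place. This is only a sketch of the temporary workaround above, assuming the ClusterRole is literally named yurt-controller-manager; Helm may revert it on the next upgrade, so the real fix belongs in the chart template:

# append the missing rule to the existing ClusterRole (temporary workaround)
kubectl patch clusterrole yurt-controller-manager --type=json -p='[
  {"op": "add", "path": "/rules/-", "value": {
    "apiGroups": ["admissionregistration.k8s.io"],
    "resources": ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"],
    "verbs": ["get", "list", "watch", "patch", "create", "delete", "update"]
  }}
]'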

@batthebee
Contributor Author

batthebee commented Feb 2, 2023

With the latest image there are still problems:

k logs -n kube-system yurt-controller-manager-68bd8d5bf4-87hgp
yurtcontroller-manager version: projectinfo.Info{GitVersion:"-5f7f1a2", GitCommit:"5f7f1a2", BuildDate:"2023-01-25T02:05:25Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{""}}
W0202 19:51:28.472792       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0202 19:51:28.491652       1 leaderelection.go:248] attempting to acquire leader lease kube-system/yurt-controller-manager...
E0202 19:51:28.503968       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: can not cache for yurt-controller-manager get leases: /apis/coordination.k8s.io/v1/namespaces/kube-system/leases/yurt-controller-manager?timeout=10s
E0202 19:51:32.885599       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: can not cache for yurt-controller-manager get leases: /apis/coordination.k8s.io/v1/namespaces/kube-system/leases/yurt-controller-manager?timeout=10s
E0202 19:51:35.581132       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: can not cache for yurt-controller-manager get leases: /apis/coordination.k8s.io/v1/namespaces/kube-system/leases/yurt-controller-manager?timeout=10s
E0202 19:51:37.976188       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: can not cache for yurt-controller-manager get leases: /apis/coordination.k8s.io/v1/namespaces/kube-system/leases/yurt-controller-manager?timeout=10s
E0202 19:51:41.060548       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: can not cache for yurt-controller-manager get leases: /apis/coordination.k8s.io/v1/namespaces/kube-system/leases/yurt-controller-manager?timeout=10s
E0202 19:51:44.315333       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: can not cache for yurt-controller-manager get leases: /apis/coordination.k8s.io/v1/namespaces/kube-system/leases/yurt-controller-manager?timeout=10s
E0202 19:51:46.540470       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:51:49.278509       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:51:51.701447       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:51:54.902678       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:51:58.328947       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:01.884111       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:04.845366       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:08.043270       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:10.285588       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:13.058170       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:16.359104       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:19.827403       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:23.590931       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:26.912766       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:30.797597       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:33.407730       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:35.863103       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:39.910155       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:43.206484       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:46.781587       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:51.178169       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:55.125942       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:52:58.206025       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:53:02.339002       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
E0202 19:53:06.011651       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: Unauthorized
I0202 19:53:25.711437       1 leaderelection.go:258] successfully acquired lease kube-system/yurt-controller-manager
I0202 19:53:25.713495       1 event.go:282] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"yurt-controller-manager", UID:"99a06dd0-81fb-4843-ac68-b50c419dadbe", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"13737", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubeadm-openyurt-w2_a549ab4a-9469-4258-a9c9-37901670e4d2 became leader
I0202 19:53:25.723931       1 controllermanager.go:373] Started "webhookmanager"
I0202 19:53:25.729562       1 controllermanager.go:373] Started "poolcoordinatorcertmanager"
I0202 19:53:25.730450       1 controllermanager.go:373] Started "poolcoordinator"
I0202 19:53:25.735710       1 poolcoordinator_cert_manager.go:219] Starting poolcoordinatorCertManager controller
I0202 19:53:25.738438       1 csrapprover.go:125] v1.CertificateSigningRequest is supported.
I0202 19:53:25.738486       1 controllermanager.go:373] Started "yurtcsrapprover"
I0202 19:53:25.740498       1 controllermanager.go:373] Started "daemonpodupdater"
I0202 19:53:25.741936       1 csrapprover.go:185] starting the crsapprover
I0202 19:53:25.741970       1 daemon_pod_updater_controller.go:215] Starting daemonPodUpdater controller
I0202 19:53:25.743030       1 poolcoordinator_cert_manager.go:362] CA already exist in secret, reuse it
I0202 19:53:25.745117       1 servicetopology.go:297] v1.EndpointSlice is supported.
I0202 19:53:25.745190       1 controllermanager.go:373] Started "servicetopologycontroller"
I0202 19:53:25.745423       1 servicetopology.go:93] starting the service topology controller
I0202 19:53:25.749868       1 poolcoordinator_cert_manager.go:312] cert apiserver-etcd-client not change, reuse it
I0202 19:53:25.771799       1 poolcoordinator_cert_manager.go:312] cert pool-coordinator-yurthub-client not change, reuse it
I0202 19:53:25.787392       1 csrapprover.go:174] csr(csr-fszws) is not yurt-csr
I0202 19:53:25.787427       1 csrapprover.go:174] csr(csr-slnjr) is not yurt-csr
I0202 19:53:25.787694       1 csrapprover.go:174] csr(csr-lklbr) is not yurt-csr
I0202 19:53:25.787812       1 csrapprover.go:174] csr(csr-m8qv6) is not yurt-csr
I0202 19:53:25.787919       1 csrapprover.go:174] csr(csr-hj4gp) is not yurt-csr
I0202 19:53:25.787994       1 csrapprover.go:174] csr(csr-mqgd7) is not yurt-csr
I0202 19:53:25.838704       1 poolcoordinator_controller.go:227] start node taint workers
I0202 19:53:25.846168       1 servicetopology.go:99] sync service topology controller succeed
I0202 19:53:26.557731       1 webhook.go:99] tls key and cert ok.
I0202 19:53:26.557859       1 poolcoordinator_webhook.go:659] populate nodepool map
I0202 19:53:26.557894       1 poolcoordinator_webhook.go:666] start nodepool maintenance worker
I0202 19:53:26.567058       1 poolcoordinator_webhook.go:598] validatewebhookconfiguration yurt-controller-manager has already existed, skip create
I0202 19:53:26.571382       1 poolcoordinator_webhook.go:649] mutatingwebhookconfiguration yurt-controller-manager has already existed, skip create
I0202 19:53:26.571444       1 webhook.go:114] Listening on port 9443 ...
I0202 19:53:26.798413       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0202 19:53:26.798448       1 poolcoordinator_cert_manager.go:305] cert apiserver IP has changed
I0202 19:53:27.811917       1 util.go:62] pool-coordinator-etcd service is ready for poolcoordinator_cert_manager
I0202 19:53:27.811963       1 poolcoordinator_cert_manager.go:305] cert etcd-server IP has changed
I0202 19:53:28.822188       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0202 19:53:28.822236       1 poolcoordinator_cert_manager.go:305] cert kubeconfig IP has changed
I0202 19:53:29.830171       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0202 19:53:29.830203       1 poolcoordinator_cert_manager.go:305] cert admin.conf IP has changed
I0202 19:53:31.062229       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0202 19:53:31.095219       1 certificate.go:393] successfully write apiserver cert/key into pool-coordinator-dynamic-certs
I0202 19:53:32.167997       1 util.go:62] pool-coordinator-etcd service is ready for poolcoordinator_cert_manager
I0202 19:53:32.196108       1 certificate.go:393] successfully write etcd-server cert/key into pool-coordinator-dynamic-certs
I0202 19:53:33.414289       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0202 19:53:33.440205       1 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-monitoring-kubeconfig
I0202 19:53:34.667303       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0202 19:53:34.693217       1 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-static-certs
W0202 19:53:34.693488       1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/tmp/yurt-controller-manager_poolcoordinator-apiserver-client-current.pem", ("", "") or ("/tmp", "/tmp"), will regenerate it
I0202 19:53:34.693518       1 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0202 19:53:34.693561       1 certificate_manager.go:446] kubernetes.io/kube-apiserver-client: Rotating certificates
I0202 19:53:34.701774       1 csrapprover.go:168] non-approved and non-denied csr, enqueue: csr-wlxjl
I0202 19:53:34.714923       1 csrapprover.go:282] successfully approve yurt-csr(csr-wlxjl)
I0202 19:53:35.752944       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-02 19:48:34 +0000 UTC, rotation deadline is 2023-10-20 01:52:19.354243943 +0000 UTC
I0202 19:53:35.752997       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 6221h58m43.601251164s for next certificate rotation
I0202 19:53:36.753832       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-02 19:48:34 +0000 UTC, rotation deadline is 2023-11-26 01:48:57.67911553 +0000 UTC
I0202 19:53:36.753886       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 7109h55m20.925234826s for next certificate rotation
I0202 19:53:39.693714       1 certificate.go:309] yurt-controller-manager_poolcoordinator certificate signed successfully
I0202 19:53:39.733631       1 certificate.go:357] successfully write apiserver-kubelet-client cert/key pair into pool-coordinator-static-certs
I0202 19:53:39.747338       1 certificate.go:393] successfully write ca cert/key into pool-coordinator-static-certs
I0202 19:53:39.761211       1 certificate.go:393] successfully write ca cert/key into pool-coordinator-yurthub-certs
I0202 19:53:39.855333       1 certificate.go:438] successfully write key pair into secret pool-coordinator-static-certs
k delete po -n kube-system pool-coordinator-edge-gcgh4-6f6599575c-lfbrb
Error from server (InternalError): Internal error occurred: failed calling webhook "vpoolcoordinator.openyurt.io": Post "https://yurt-controller-manager-webhook.kube-system.svc:443/pool-coordinator-webhook-validate?timeout=10s": service "yurt-controller-manager-webhook" not found
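
The "service not found" part of this error suggests the yurt-controller-manager-webhook Service referenced by the webhook configuration does not exist in kube-system, even though the manager reports listening on port 9443 above. A minimal sketch of what that Service would look like, for diagnosis only; the selector below is an assumption and would need to match whatever labels the chart actually puts on the yurt-controller-manager pods:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: yurt-controller-manager-webhook
  namespace: kube-system
spec:
  selector:
    # assumption: adjust to the real pod labels of yurt-controller-manager
    app: yurt-controller-manager
  ports:
    - port: 443        # port used in the webhook clientConfig URL above
      targetPort: 9443 # port the webhook server reports listening on
EOF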

etcd

WARNING: 2023/02/02 19:57:09 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-02-02T19:58:35.924Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41876","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:36.924Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36860","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:36.931Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36862","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:37.939Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36876","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:38.857Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36884","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:39.722Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36886","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:41.864Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36892","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:42.139Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36896","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:45.984Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36900","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:46.044Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36902","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:51.805Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40224","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:58:52.102Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40232","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2023-02-02T19:59:21.426Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:43366","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/02/02 19:59:21 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
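
The "incompatible key usage" rejection suggests the client certificate presented to etcd is missing the client-auth extended key usage. A diagnostic sketch for checking this, assuming the apiserver-etcd-client pair written above into pool-coordinator-static-certs is stored under an apiserver-etcd-client.crt data key (the key name inside the secret is a guess):

# assumption: the client cert lives under the 'apiserver-etcd-client.crt' key of the secret
kubectl get secret -n kube-system pool-coordinator-static-certs \
  -o jsonpath='{.data.apiserver-etcd-client\.crt}' | base64 -d \
  | openssl x509 -noout -text | grep -A1 'Extended Key Usage'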

yurt-hub seems to work:

yurthub version: projectinfo.Info{GitVersion:"-5f7f1a2", GitCommit:"5f7f1a2", BuildDate:"2023-01-22T02:00:12Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{""}}
I0202 19:54:02.731565       1 start.go:64] FLAG: --access-server-through-hub="true"
I0202 19:54:02.731655       1 start.go:64] FLAG: --add_dir_header="false"
I0202 19:54:02.731673       1 start.go:64] FLAG: --alsologtostderr="false"
I0202 19:54:02.731689       1 start.go:64] FLAG: --bind-address="127.0.0.1"
I0202 19:54:02.731707       1 start.go:64] FLAG: --bind-proxy-address="127.0.0.1"
I0202 19:54:02.731723       1 start.go:64] FLAG: --coordinator-server-addr="https://pool-coordinator-apiserver:443"
I0202 19:54:02.731740       1 start.go:64] FLAG: --coordinator-storage-addr="https://pool-coordinator-etcd:2379"
I0202 19:54:02.731757       1 start.go:64] FLAG: --coordinator-storage-prefix="/registry"
I0202 19:54:02.731773       1 start.go:64] FLAG: --disabled-resource-filters="[]"
I0202 19:54:02.731793       1 start.go:64] FLAG: --discovery-token-ca-cert-hash="[]"
I0202 19:54:02.731833       1 start.go:64] FLAG: --discovery-token-unsafe-skip-ca-verification="true"
I0202 19:54:02.731850       1 start.go:64] FLAG: --disk-cache-path="/etc/kubernetes/cache/"
I0202 19:54:02.731866       1 start.go:64] FLAG: --dummy-if-ip=""
I0202 19:54:02.731881       1 start.go:64] FLAG: --dummy-if-name="yurthub-dummy0"
I0202 19:54:02.731909       1 start.go:64] FLAG: --enable-coordinator="false"
I0202 19:54:02.731930       1 start.go:64] FLAG: --enable-dummy-if="true"
I0202 19:54:02.731946       1 start.go:64] FLAG: --enable-iptables="true"
I0202 19:54:02.731961       1 start.go:64] FLAG: --enable-node-pool="true"
I0202 19:54:02.731976       1 start.go:64] FLAG: --enable-resource-filter="true"
I0202 19:54:02.731992       1 start.go:64] FLAG: --gc-frequency="120"
I0202 19:54:02.732029       1 start.go:64] FLAG: --heartbeat-failed-retry="3"
I0202 19:54:02.732045       1 start.go:64] FLAG: --heartbeat-healthy-threshold="2"
I0202 19:54:02.732060       1 start.go:64] FLAG: --heartbeat-interval-seconds="10"
I0202 19:54:02.732076       1 start.go:64] FLAG: --heartbeat-timeout-seconds="2"
I0202 19:54:02.732091       1 start.go:64] FLAG: --help="false"
I0202 19:54:02.732106       1 start.go:64] FLAG: --hub-cert-organizations="[]"
I0202 19:54:02.732124       1 start.go:64] FLAG: --join-token="zyiuya.arh4fu3nmt9jehk6"
I0202 19:54:02.732141       1 start.go:64] FLAG: --kubelet-health-grace-period="40s"
I0202 19:54:02.732159       1 start.go:64] FLAG: --lb-mode="rr"
I0202 19:54:02.732175       1 start.go:64] FLAG: --leader-elect="true"
I0202 19:54:02.732190       1 start.go:64] FLAG: --leader-elect-lease-duration="15s"
I0202 19:54:02.732206       1 start.go:64] FLAG: --leader-elect-renew-deadline="10s"
I0202 19:54:02.732221       1 start.go:64] FLAG: --leader-elect-resource-lock="leases"
I0202 19:54:02.732237       1 start.go:64] FLAG: --leader-elect-resource-name="yurthub"
I0202 19:54:02.732252       1 start.go:64] FLAG: --leader-elect-resource-namespace="kube-system"
I0202 19:54:02.732268       1 start.go:64] FLAG: --leader-elect-retry-period="2s"
I0202 19:54:02.732283       1 start.go:64] FLAG: --log-flush-frequency="5s"
I0202 19:54:02.732299       1 start.go:64] FLAG: --log_backtrace_at=":0"
I0202 19:54:02.732319       1 start.go:64] FLAG: --log_dir=""
I0202 19:54:02.732335       1 start.go:64] FLAG: --log_file=""
I0202 19:54:02.732350       1 start.go:64] FLAG: --log_file_max_size="1800"
I0202 19:54:02.732366       1 start.go:64] FLAG: --logtostderr="true"
I0202 19:54:02.732382       1 start.go:64] FLAG: --max-requests-in-flight="250"
I0202 19:54:02.732399       1 start.go:64] FLAG: --min-request-timeout="30m0s"
I0202 19:54:02.732416       1 start.go:64] FLAG: --node-name="mylittlefutro"
I0202 19:54:02.732432       1 start.go:64] FLAG: --nodepool-name=""
I0202 19:54:02.732447       1 start.go:64] FLAG: --one_output="false"
I0202 19:54:02.732462       1 start.go:64] FLAG: --profiling="true"
I0202 19:54:02.732477       1 start.go:64] FLAG: --proxy-port="10261"
I0202 19:54:02.732493       1 start.go:64] FLAG: --proxy-secure-port="10268"
I0202 19:54:02.732509       1 start.go:64] FLAG: --root-dir="/var/lib/yurthub"
I0202 19:54:02.732525       1 start.go:64] FLAG: --serve-port="10267"
I0202 19:54:02.732541       1 start.go:64] FLAG: --server-addr="https://143.42.26.120:6443"
I0202 19:54:02.732557       1 start.go:64] FLAG: --skip_headers="false"
I0202 19:54:02.732573       1 start.go:64] FLAG: --skip_log_headers="false"
I0202 19:54:02.732588       1 start.go:64] FLAG: --stderrthreshold="2"
I0202 19:54:02.732603       1 start.go:64] FLAG: --v="2"
I0202 19:54:02.732619       1 start.go:64] FLAG: --version="false"
I0202 19:54:02.732634       1 start.go:64] FLAG: --vmodule=""
I0202 19:54:02.732650       1 start.go:64] FLAG: --working-mode="edge"
I0202 19:54:02.733609       1 options.go:248] dummy ip not set, will use 169.254.2.1 as default
I0202 19:54:02.736330       1 config.go:226] yurthub would connect remote servers: https://143.42.26.120:6443
I0202 19:54:02.737415       1 storage.go:86] yurthub disk storage will run in enhancement mode
I0202 19:54:02.746210       1 restmapper.go:101] reset DynamicRESTMapper to map[/v1, Resource=event:/v1, Kind=Event /v1, Resource=events:/v1, Kind=Event apps.openyurt.io/v1alpha1, Resource=nodepool:apps.openyurt.io/v1alpha1, Kind=NodePool apps.openyurt.io/v1alpha1, Resource=nodepools:apps.openyurt.io/v1alpha1, Kind=NodePool]
I0202 19:54:02.760722       1 filter.go:93] Filter servicetopology registered successfully
I0202 19:54:02.760763       1 filter.go:93] Filter masterservice registered successfully
I0202 19:54:02.760789       1 filter.go:93] Filter discardcloudservice registered successfully
I0202 19:54:02.760814       1 filter.go:93] Filter inclusterconfig registered successfully
I0202 19:54:02.762443       1 filter.go:124] prepare local disk storage to sync node(mylittlefutro) for edge working mode
I0202 19:54:02.762497       1 filter.go:73] Filter servicetopology initialize successfully
I0202 19:54:02.762534       1 filter.go:73] Filter masterservice initialize successfully
I0202 19:54:02.762566       1 filter.go:73] Filter discardcloudservice initialize successfully
I0202 19:54:02.762599       1 filter.go:73] Filter inclusterconfig initialize successfully
I0202 19:54:02.762756       1 approver.go:180] current filter setting: map[coredns/endpoints/list:servicetopology coredns/endpoints/watch:servicetopology coredns/endpointslices/list:servicetopology coredns/endpointslices/watch:servicetopology kube-proxy/endpoints/list:servicetopology kube-proxy/endpoints/watch:servicetopology kube-proxy/endpointslices/list:servicetopology kube-proxy/endpointslices/watch:servicetopology kube-proxy/services/list:discardcloudservice kube-proxy/services/watch:discardcloudservice kubelet/configmaps/get:inclusterconfig kubelet/configmaps/list:inclusterconfig kubelet/configmaps/watch:inclusterconfig kubelet/services/list:masterservice kubelet/services/watch:masterservice nginx-ingress-controller/endpoints/list:servicetopology nginx-ingress-controller/endpoints/watch:servicetopology nginx-ingress-controller/endpointslices/list:servicetopology nginx-ingress-controller/endpointslices/watch:servicetopology] after init
I0202 19:54:02.766620       1 token.go:160] apiServer name https://143.42.26.120:6443 not changed
I0202 19:54:02.766992       1 certificate_store.go:130] Loading cert/key pair from "/var/lib/yurthub/pki/yurthub-current.pem".
I0202 19:54:02.770492       1 certificate_store.go:130] Loading cert/key pair from "/var/lib/yurthub/pki/yurthub-server-current.pem".
I0202 19:54:02.771109       1 token.go:199] /var/lib/yurthub/bootstrap-hub.conf file already exists, so reuse it
I0202 19:54:02.772231       1 token.go:214] /var/lib/yurthub/yurthub.conf file already exists, so reuse it
I0202 19:54:02.772277       1 token.go:230] /var/lib/yurthub/pki/ca.crt file already exists, so reuse it
I0202 19:54:02.772294       1 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0202 19:54:02.772326       1 certificate_manager.go:318] kubernetes.io/kubelet-serving: Certificate rotation is enabled
I0202 19:54:02.772377       1 config.go:186] create dummy network interface yurthub-dummy0(169.254.2.1) and init iptables manager
I0202 19:54:02.773172       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-02 19:41:25 +0000 UTC, rotation deadline is 2023-10-30 04:37:55.078244378 +0000 UTC
I0202 19:54:02.773175       1 certificate_manager.go:590] kubernetes.io/kubelet-serving: Certificate expiration is 2024-02-02 19:41:30 +0000 UTC, rotation deadline is 2023-11-06 21:02:23.966652869 +0000 UTC
I0202 19:54:02.773299       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 6464h43m52.304954044s for next certificate rotation
I0202 19:54:02.773304       1 certificate_manager.go:324] kubernetes.io/kubelet-serving: Waiting 6649h8m21.193355835s for next certificate rotation
I0202 19:54:02.821387       1 config.go:403] server cert path is: /var/lib/yurthub/pki/yurthub-server-current.pem, ca path is: /var/lib/yurthub/pki/ca.crt
I0202 19:54:02.822769       1 dynamic_cafile_content.go:117] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/var/lib/yurthub/pki/ca.crt"
I0202 19:54:02.823171       1 dynamic_serving_content.go:110] "Loaded a new cert/key pair" name="serving-cert::/var/lib/yurthub/pki/yurthub-server-current.pem::/var/lib/yurthub/pki/yurthub-server-current.pem"
I0202 19:54:02.823200       1 start.go:74] yurthub cfg: &config.YurtHubConfiguration{LBMode:"rr", RemoteServers:[]*url.URL{(*url.URL)(0xc0004034d0)}, GCFrequency:120, NodeName:"mylittlefutro", HeartbeatFailedRetry:3, HeartbeatHealthyThreshold:2, HeartbeatTimeoutSeconds:2, HeartbeatIntervalSeconds:10, MaxRequestInFlight:250, EnableProfiling:true, StorageWrapper:(*cachemanager.storageWrapper)(0xc0000b3600), SerializerManager:(*serializer.SerializerManager)(0xc0000b3640), RESTMapperManager:(*meta.RESTMapperManager)(0xc0000b36c0), SharedFactory:(*informers.sharedInformerFactory)(0xc000095a90), YurtSharedFactory:(*externalversions.sharedInformerFactory)(0xc000095ae0), WorkingMode:"edge", KubeletHealthGracePeriod:40000000000, FilterManager:(*manager.Manager)(0xc0005cf038), CoordinatorServer:(*url.URL)(nil), MinRequestTimeout:1800000000000, TenantNs:"", NetworkMgr:(*network.NetworkManager)(0xc00004c280), CertManager:(*token.yurtHubCertManager)(0xc000443440), YurtHubServerServing:(*server.DeprecatedInsecureServingInfo)(0xc0005c6ae0), YurtHubProxyServerServing:(*server.DeprecatedInsecureServingInfo)(0xc0005c6b00), YurtHubDummyProxyServerServing:(*server.DeprecatedInsecureServingInfo)(0xc0005c6b20), YurtHubSecureProxyServerServing:(*server.SecureServingInfo)(0xc00038ec00), YurtHubProxyServerAddr:"127.0.0.1:10261", ProxiedClient:(*kubernetes.Clientset)(0xc00017c420), DiskCachePath:"/etc/kubernetes/cache/", CoordinatorPKIDir:"/var/lib/yurthub/poolcoordinator", EnableCoordinator:false, CoordinatorServerURL:(*url.URL)(nil), CoordinatorStoragePrefix:"/registry", CoordinatorStorageAddr:"https://pool-coordinator-etcd:2379", CoordinatorClient:kubernetes.Interface(nil), LeaderElection:config.LeaderElectionConfiguration{LeaderElect:true, LeaseDuration:v1.Duration{Duration:15000000000}, RenewDeadline:v1.Duration{Duration:10000000000}, RetryPeriod:v1.Duration{Duration:2000000000}, ResourceLock:"leases", ResourceName:"yurthub", ResourceNamespace:"kube-system"}}
I0202 19:54:02.823340       1 start.go:90] 1. new transport manager
I0202 19:54:02.823364       1 transport.go:67] use /var/lib/yurthub/pki/ca.crt ca cert file to access remote server
I0202 19:54:02.823650       1 start.go:97] 2. prepare cloud kube clients
I0202 19:54:02.824038       1 start.go:106] 3. create health checkers for remote servers and pool coordinator
I0202 19:54:02.839051       1 connrotation.go:151] create a connection from 192.168.88.248:54284 to 143.42.26.120:6443, total 1 connections in transport manager dialer
I0202 19:54:02.902047       1 prober.go:132] healthy status of remote server https://143.42.26.120:6443 in init phase is healthy
I0202 19:54:02.902101       1 start.go:119] 4. new restConfig manager
I0202 19:54:02.902117       1 start.go:128] 5. new cache manager with storage wrapper and serializer manager
I0202 19:54:02.902157       1 cache_agent.go:54] init cache agents to map[coredns:{} flanneld:{} kube-proxy:{} kubelet:{} yurthub:{} yurttunnel-agent:{}]
I0202 19:54:02.902204       1 start.go:136] 6. new gc manager for node mylittlefutro, and gc frequency is a random time between 120 min and 360 min
I0202 19:54:02.902456       1 gc.go:107] list pod keys from storage, total: 4
I0202 19:54:02.903486       1 config.go:64] re-fix hub rest config host successfully with server https://143.42.26.120:6443
I0202 19:54:02.989636       1 gc.go:146] list all of pod that on the node: total: 4
I0202 19:54:02.989677       1 gc.go:156] it's dangerous to gc all cache pods, so skip gc
I0202 19:54:02.989702       1 start.go:147] 7. new tenant sa manager
I0202 19:54:02.989722       1 tenant.go:66] parse tenant ns: 
I0202 19:54:02.989815       1 gc.go:76] start gc events after waiting 113.52µs from previous gc
I0202 19:54:02.990562       1 start.go:174] 8. new reverse proxy handler for remote servers
I0202 19:54:02.991076       1 proxy.go:175] tenant ns is empty, no need to substitute 
I0202 19:54:02.991101       1 start.go:194] 9. new yurthub server and begin to serve
I0202 19:54:02.991478       1 deprecated_insecure_serving.go:56] Serving insecurely on 127.0.0.1:10267
I0202 19:54:02.991604       1 deprecated_insecure_serving.go:56] Serving insecurely on 127.0.0.1:10261
I0202 19:54:02.991621       1 deprecated_insecure_serving.go:56] Serving insecurely on 169.254.2.1:10261
I0202 19:54:02.991636       1 secure_serving.go:57] Forcing use of http/1.1 only
I0202 19:54:02.991733       1 config.go:64] re-fix hub rest config host successfully with server https://143.42.26.120:6443
I0202 19:54:02.992131       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/var/lib/yurthub/pki/ca.crt" certDetail="\"kubernetes\" [] validServingFor=[kubernetes] issuer=\"<self>\" (2023-02-02 18:38:05 +0000 UTC to 2033-01-30 18:38:05 +0000 UTC (now=2023-02-02 19:54:02.992090455 +0000 UTC))"
I0202 19:54:02.992360       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/lib/yurthub/pki/yurthub-server-current.pem::/var/lib/yurthub/pki/yurthub-server-current.pem" certDetail="\"system:node:mylittlefutro\" [serving] groups=[system:nodes] validServingFor=[169.254.2.1,127.0.0.1] issuer=\"kubernetes\" (2023-02-02 19:41:30 +0000 UTC to 2024-02-02 19:41:30 +0000 UTC (now=2023-02-02 19:54:02.992330833 +0000 UTC))"
I0202 19:54:02.992410       1 secure_serving.go:200] Serving securely on 169.254.2.1:10268
I0202 19:54:02.992446       1 util.go:293] start proxying: get /apis/apps.openyurt.io/v1alpha1/nodepools?limit=500&resourceVersion=0, in flight requests: 1
I0202 19:54:02.992446       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dyurt-hub-cfg&limit=500&resourceVersion=0, in flight requests: 2
I0202 19:54:02.992489       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/yurthub/pki/ca.crt"
I0202 19:54:02.992546       1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/yurthub/pki/yurthub-server-current.pem::/var/lib/yurthub/pki/yurthub-server-current.pem"
I0202 19:54:02.992590       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0202 19:54:02.992670       1 util.go:293] start proxying: get /api/v1/services?limit=500&resourceVersion=0, in flight requests: 3
E0202 19:54:02.994920       1 gc.go:181] could not list keys for kubelet events, specified key is not found
E0202 19:54:02.994953       1 gc.go:181] could not list keys for kube-proxy events, specified key is not found
I0202 19:54:03.007501       1 util.go:252] yurthub list configmaps: /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dyurt-hub-cfg&limit=500&resourceVersion=0 with status code 200, spent 14.981945ms
I0202 19:54:03.009464       1 approver.go:180] current filter setting: map[coredns/endpoints/list:servicetopology coredns/endpoints/watch:servicetopology coredns/endpointslices/list:servicetopology coredns/endpointslices/watch:servicetopology kube-proxy/endpoints/list:servicetopology kube-proxy/endpoints/watch:servicetopology kube-proxy/endpointslices/list:servicetopology kube-proxy/endpointslices/watch:servicetopology kube-proxy/services/list:discardcloudservice kube-proxy/services/watch:discardcloudservice kubelet/configmaps/get:inclusterconfig kubelet/configmaps/list:inclusterconfig kubelet/configmaps/watch:inclusterconfig kubelet/services/list:masterservice kubelet/services/watch:masterservice nginx-ingress-controller/endpoints/list:servicetopology nginx-ingress-controller/endpoints/watch:servicetopology nginx-ingress-controller/endpointslices/list:servicetopology nginx-ingress-controller/endpointslices/watch:servicetopology] after add
I0202 19:54:03.009747       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dyurt-hub-cfg&resourceVersion=13380&timeout=7m42s&timeoutSeconds=462&watch=true, in flight requests: 3
I0202 19:54:03.108314       1 util.go:252] yurthub list nodepools: /apis/apps.openyurt.io/v1alpha1/nodepools?limit=500&resourceVersion=0 with status code 200, spent 115.783546ms
I0202 19:54:03.108461       1 serializer.go:200] schema.GroupVersionResource{Group:"apps.openyurt.io", Version:"v1alpha1", Resource:"nodepools"} is not found in client-go runtime scheme
I0202 19:54:03.108853       1 util.go:252] yurthub list services: /api/v1/services?limit=500&resourceVersion=0 with status code 200, spent 116.126256ms
W0202 19:54:03.109014       1 warnings.go:70] apps.openyurt.io/v1alpha1 NodePool is deprecated in v1.0.0+, unavailable in v1.2.0+; use apps.openyurt.io/v1beta1 NodePool
I0202 19:54:03.112607       1 util.go:293] start proxying: get /apis/apps.openyurt.io/v1alpha1/nodepools?allowWatchBookmarks=true&resourceVersion=13835&timeout=7m24s&timeoutSeconds=444&watch=true, in flight requests: 2
I0202 19:54:03.114269       1 util.go:293] start proxying: get /api/v1/services?allowWatchBookmarks=true&resourceVersion=13070&timeout=9m22s&timeoutSeconds=562&watch=true, in flight requests: 3
W0202 19:54:03.126203       1 warnings.go:70] apps.openyurt.io/v1alpha1 NodePool is deprecated in v1.0.0+, unavailable in v1.2.0+; use apps.openyurt.io/v1beta1 NodePool
I0202 19:54:03.352867       1 util.go:293] start proxying: get /apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0, in flight requests: 4
I0202 19:54:03.372044       1 util.go:252] kubelet list csidrivers: /apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0 with status code 200, spent 19.008331ms
I0202 19:54:03.374034       1 util.go:293] start proxying: get /apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=13070&timeout=5m20s&timeoutSeconds=320&watch=true, in flight requests: 4
I0202 19:54:03.464881       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/pods/yurt-hub-mylittlefutro, in flight requests: 5
I0202 19:54:03.480721       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-mylittlefutro with status code 404, spent 15.742288ms
I0202 19:54:03.642173       1 util.go:293] start proxying: get /apis/storage.k8s.io/v1/csinodes/mylittlefutro, in flight requests: 5
I0202 19:54:03.659996       1 util.go:252] kubelet get csinodes: /apis/storage.k8s.io/v1/csinodes/mylittlefutro with status code 200, spent 17.688806ms
I0202 19:54:03.661223       1 util.go:293] start proxying: get /apis/storage.k8s.io/v1/csinodes/mylittlefutro, in flight requests: 5
I0202 19:54:03.666381       1 util.go:293] start proxying: get /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/mylittlefutro?timeout=10s, in flight requests: 6
I0202 19:54:03.667173       1 util.go:252] kubelet get leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/mylittlefutro?timeout=10s with status code 200, spent 642.673µs
I0202 19:54:03.668106       1 util.go:293] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/mylittlefutro?timeout=10s, in flight requests: 6
I0202 19:54:03.668272       1 util.go:252] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/mylittlefutro?timeout=10s with status code 200, spent 85.558µs
I0202 19:54:03.676880       1 util.go:252] kubelet get csinodes: /apis/storage.k8s.io/v1/csinodes/mylittlefutro with status code 200, spent 15.472938ms
I0202 19:54:03.998817       1 util.go:293] start proxying: post /api/v1/nodes, in flight requests: 5
I0202 19:54:04.022981       1 util.go:252] kubelet create nodes: /api/v1/nodes with status code 409, spent 23.882568ms
I0202 19:54:04.027045       1 util.go:293] start proxying: get /api/v1/nodes/mylittlefutro, in flight requests: 5
I0202 19:54:04.046489       1 util.go:252] kubelet get nodes: /api/v1/nodes/mylittlefutro with status code 200, spent 19.257917ms
I0202 19:54:04.052692       1 util.go:293] start proxying: get /api/v1/nodes/mylittlefutro?resourceVersion=0&timeout=10s, in flight requests: 5
I0202 19:54:04.065221       1 util.go:252] kubelet get nodes: /api/v1/nodes/mylittlefutro?resourceVersion=0&timeout=10s with status code 200, spent 12.429931ms
I0202 19:54:04.105628       1 util.go:293] start proxying: patch /api/v1/nodes/mylittlefutro/status?timeout=10s, in flight requests: 5
I0202 19:54:04.127191       1 util.go:252] kubelet patch nodes: /api/v1/nodes/mylittlefutro/status?timeout=10s with status code 200, spent 21.488865ms
I0202 19:54:04.765405       1 util.go:293] start proxying: get /api/v1/nodes?fieldSelector=metadata.name%3Dmylittlefutro&limit=500&resourceVersion=0, in flight requests: 5
I0202 19:54:04.781895       1 util.go:252] kubelet list nodes: /api/v1/nodes?fieldSelector=metadata.name%3Dmylittlefutro&limit=500&resourceVersion=0 with status code 200, spent 16.290545ms
I0202 19:54:04.783349       1 util.go:293] start proxying: get /api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmylittlefutro&resourceVersion=13880&timeout=9m10s&timeoutSeconds=550&watch=true, in flight requests: 5
I0202 19:54:04.797337       1 util.go:293] start proxying: get /api/v1/nodes/mylittlefutro?resourceVersion=0&timeout=10s, in flight requests: 6
I0202 19:54:04.810646       1 util.go:252] kubelet get nodes: /api/v1/nodes/mylittlefutro?resourceVersion=0&timeout=10s with status code 200, spent 13.233973ms
I0202 19:54:04.820024       1 util.go:293] start proxying: get /api/v1/services?limit=500&resourceVersion=0, in flight requests: 6
I0202 19:54:04.822598       1 util.go:293] start proxying: post /api/v1/namespaces/default/events, in flight requests: 7
I0202 19:54:04.836099       1 handler.go:76] mutate master service with ClusterIP:Port=169.254.2.1:10268
I0202 19:54:04.839322       1 util.go:252] kubelet list services: /api/v1/services?limit=500&resourceVersion=0 with status code 200, spent 19.171121ms
I0202 19:54:04.840356       1 util.go:293] start proxying: get /api/v1/services?allowWatchBookmarks=true&resourceVersion=13070&timeout=6m32s&timeoutSeconds=392&watch=true, in flight requests: 7
I0202 19:54:04.841115       1 util.go:252] kubelet create events: /api/v1/namespaces/default/events with status code 201, spent 18.443503ms
I0202 19:54:04.844602       1 util.go:293] start proxying: post /api/v1/namespaces/default/events, in flight requests: 7
I0202 19:54:04.859371       1 util.go:252] kubelet create events: /api/v1/namespaces/default/events with status code 201, spent 14.700746ms
I0202 19:54:04.859895       1 util.go:293] start proxying: post /api/v1/namespaces/default/events, in flight requests: 7
I0202 19:54:04.875283       1 util.go:252] kubelet create events: /api/v1/namespaces/default/events with status code 201, spent 15.323885ms
I0202 19:54:04.875996       1 util.go:293] start proxying: post /api/v1/namespaces/default/events, in flight requests: 7
I0202 19:54:04.892409       1 util.go:252] kubelet create events: /api/v1/namespaces/default/events with status code 201, spent 16.338328ms
I0202 19:54:04.894368       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851, in flight requests: 7
I0202 19:54:04.911707       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851 with status code 200, spent 17.230631ms
I0202 19:54:04.913370       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83, in flight requests: 7
I0202 19:54:04.932692       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83 with status code 200, spent 19.225499ms
I0202 19:54:04.934051       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529, in flight requests: 7
I0202 19:54:04.953888       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529 with status code 200, spent 19.711682ms
I0202 19:54:04.955057       1 util.go:293] start proxying: post /api/v1/namespaces/default/events, in flight requests: 7
I0202 19:54:04.970264       1 util.go:252] kubelet create events: /api/v1/namespaces/default/events with status code 201, spent 15.124385ms
I0202 19:54:04.971621       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851, in flight requests: 7
I0202 19:54:04.991290       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851 with status code 200, spent 19.430363ms
I0202 19:54:04.992583       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83, in flight requests: 7
I0202 19:54:05.011275       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83 with status code 200, spent 18.626521ms
I0202 19:54:05.022789       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529, in flight requests: 7
I0202 19:54:05.041676       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529 with status code 200, spent 18.794045ms
I0202 19:54:05.147081       1 util.go:293] start proxying: get /apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0, in flight requests: 7
I0202 19:54:05.160991       1 util.go:252] kubelet list runtimeclasses: /apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0 with status code 200, spent 13.734204ms
I0202 19:54:05.161954       1 util.go:293] start proxying: get /apis/node.k8s.io/v1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=13070&timeout=7m26s&timeoutSeconds=446&watch=true, in flight requests: 7
I0202 19:54:05.223759       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851, in flight requests: 8
I0202 19:54:05.244144       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851 with status code 200, spent 20.262726ms
I0202 19:54:05.423334       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83, in flight requests: 8
I0202 19:54:05.447084       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83 with status code 200, spent 23.617283ms
I0202 19:54:05.621071       1 util.go:293] start proxying: get /api/v1/pods?fieldSelector=spec.nodeName%3Dmylittlefutro&limit=500&resourceVersion=0, in flight requests: 8
I0202 19:54:05.623142       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529, in flight requests: 9
I0202 19:54:05.637253       1 util.go:252] kubelet list pods: /api/v1/pods?fieldSelector=spec.nodeName%3Dmylittlefutro&limit=500&resourceVersion=0 with status code 200, spent 16.056401ms
I0202 19:54:05.639789       1 util.go:293] start proxying: get /api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dmylittlefutro&resourceVersion=13818&timeoutSeconds=568&watch=true, in flight requests: 9
I0202 19:54:05.645456       1 util.go:293] start proxying: get /api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0, in flight requests: 10
I0202 19:54:05.647111       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529 with status code 200, spent 23.918611ms
I0202 19:54:05.649021       1 util.go:293] start proxying: get /api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&limit=500&resourceVersion=0, in flight requests: 10
I0202 19:54:05.649452       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&limit=500&resourceVersion=0, in flight requests: 11
I0202 19:54:05.649756       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/pods/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb, in flight requests: 12
I0202 19:54:05.650086       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dpool-coordinator-static-certs&limit=500&resourceVersion=0, in flight requests: 13
I0202 19:54:05.650416       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dpool-coordinator-dynamic-certs&limit=500&resourceVersion=0, in flight requests: 14
I0202 19:54:05.650745       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0, in flight requests: 15
I0202 19:54:05.650848       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&limit=500&resourceVersion=0, in flight requests: 16
I0202 19:54:05.657723       1 util.go:252] kubelet list configmaps: /api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 with status code 200, spent 12.164299ms
I0202 19:54:05.658945       1 util.go:293] start proxying: get /api/v1/namespaces/kube-flannel/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=13380&timeout=8m48s&timeoutSeconds=528&watch=true, in flight requests: 16
I0202 19:54:05.660631       1 util.go:252] kubelet list configmaps: /api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&limit=500&resourceVersion=0 with status code 200, spent 11.502914ms
I0202 19:54:05.661580       1 util.go:293] start proxying: get /api/v1/namespaces/kube-flannel/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=13380&timeout=5m59s&timeoutSeconds=359&watch=true, in flight requests: 16
I0202 19:54:05.662169       1 util.go:252] kubelet list configmaps: /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&limit=500&resourceVersion=0 with status code 200, spent 12.667576ms
I0202 19:54:05.663665       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcoredns&resourceVersion=13380&timeout=7m32s&timeoutSeconds=452&watch=true, in flight requests: 16
I0202 19:54:05.663980       1 handler.go:79] kubeconfig in configmap(kube-system/kube-proxy) has been commented, new config.conf: 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  #kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: ""
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
udpIdleTimeout: 0s
winkernel:
  enableDSR: false
  networkName: ""
  sourceVip: ""
I0202 19:54:05.664145       1 util.go:252] kubelet list secrets: /api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dpool-coordinator-dynamic-certs&limit=500&resourceVersion=0 with status code 200, spent 13.226948ms
I0202 19:54:05.664204       1 util.go:252] kubelet list configmaps: /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&limit=500&resourceVersion=0 with status code 200, spent 12.850166ms
I0202 19:54:05.664500       1 util.go:252] kubelet list secrets: /api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dpool-coordinator-static-certs&limit=500&resourceVersion=0 with status code 200, spent 14.35072ms
I0202 19:54:05.664657       1 util.go:252] kubelet list configmaps: /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 with status code 200, spent 13.497003ms
I0202 19:54:05.665514       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-system/pods/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb with status code 200, spent 15.723411ms
I0202 19:54:05.666226       1 storage.go:569] key(kubelet/pods.v1.core) storage is pending, skip to store key(kubelet/pods.v1.core/kube-system/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb)
I0202 19:54:05.666249       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-system/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb}) is under processing
I0202 19:54:05.666990       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=13380&timeout=7m43s&timeoutSeconds=463&watch=true, in flight requests: 12
I0202 19:54:05.667365       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpool-coordinator-dynamic-certs&resourceVersion=13804&timeout=5m17s&timeoutSeconds=317&watch=true, in flight requests: 13
I0202 19:54:05.667659       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpool-coordinator-static-certs&resourceVersion=13804&timeout=8m21s&timeoutSeconds=501&watch=true, in flight requests: 14
I0202 19:54:05.667921       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=13380&timeout=9m17s&timeoutSeconds=557&watch=true, in flight requests: 15
I0202 19:54:05.669490       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-system/pods/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb/status, in flight requests: 16
I0202 19:54:05.693258       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-system/pods/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb/status with status code 200, spent 23.708001ms
I0202 19:54:05.693942       1 storage.go:562] key(kubelet/pods.v1.core/kube-system/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb) storage is pending, just skip it
I0202 19:54:05.693964       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-system/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb}) is under processing
I0202 19:54:05.699329       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-system/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb) is MODIFIED
I0202 19:54:05.821813       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/pods/kube-proxy-kzqvq, in flight requests: 16
I0202 19:54:05.821891       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851, in flight requests: 17
I0202 19:54:05.838396       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-system/pods/kube-proxy-kzqvq with status code 200, spent 16.495016ms
I0202 19:54:05.840854       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851 with status code 200, spent 18.925726ms
I0202 19:54:06.022404       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83, in flight requests: 17
I0202 19:54:06.022404       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/serviceaccounts/coredns/token, in flight requests: 17
I0202 19:54:06.043373       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83 with status code 200, spent 20.835703ms
I0202 19:54:06.045296       1 util.go:252] kubelet create serviceaccounts: /api/v1/namespaces/kube-system/serviceaccounts/coredns/token with status code 201, spent 22.632562ms
I0202 19:54:06.222446       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529, in flight requests: 17
I0202 19:54:06.222445       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/serviceaccounts/default/token, in flight requests: 16
I0202 19:54:06.244307       1 util.go:252] kubelet create serviceaccounts: /api/v1/namespaces/kube-system/serviceaccounts/default/token with status code 201, spent 21.525972ms
I0202 19:54:06.245066       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529 with status code 200, spent 22.479519ms
I0202 19:54:06.422020       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851, in flight requests: 17
I0202 19:54:06.422020       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token, in flight requests: 17
I0202 19:54:06.442466       1 util.go:252] kubelet create serviceaccounts: /api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token with status code 201, spent 20.267497ms
I0202 19:54:06.443859       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851 with status code 200, spent 21.75051ms
I0202 19:54:06.622485       1 util.go:293] start proxying: post /api/v1/namespaces/kube-flannel/serviceaccounts/flannel/token, in flight requests: 16
I0202 19:54:06.622837       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83, in flight requests: 17
I0202 19:54:06.644220       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83 with status code 200, spent 21.32803ms
I0202 19:54:06.647322       1 util.go:252] kubelet create serviceaccounts: /api/v1/namespaces/kube-flannel/serviceaccounts/flannel/token with status code 201, spent 24.764826ms
I0202 19:54:06.821353       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-system/pods/kube-proxy-kzqvq/status, in flight requests: 16
I0202 19:54:06.822249       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529, in flight requests: 17
I0202 19:54:06.849385       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529 with status code 200, spent 27.078852ms
I0202 19:54:06.858465       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-system/pods/kube-proxy-kzqvq/status with status code 200, spent 37.047125ms
I0202 19:54:06.859007       1 storage.go:562] key(kubelet/pods.v1.core/kube-system/kube-proxy-kzqvq) storage is pending, just skip it
I0202 19:54:06.859023       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-system/kube-proxy-kzqvq}) is under processing
I0202 19:54:06.863728       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-system/kube-proxy-kzqvq) is MODIFIED
I0202 19:54:07.021027       1 util.go:293] start proxying: get /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx, in flight requests: 16
I0202 19:54:07.021985       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 17
I0202 19:54:07.037180       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx with status code 200, spent 16.027305ms
I0202 19:54:07.040067       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 18.034212ms
I0202 19:54:07.230334       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx/status, in flight requests: 17
I0202 19:54:07.230334       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 16
I0202 19:54:07.248931       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 18.442915ms
I0202 19:54:07.253888       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx/status with status code 200, spent 23.437221ms
I0202 19:54:07.254757       1 storage.go:562] key(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) storage is pending, just skip it
I0202 19:54:07.254781       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx}) is under processing
I0202 19:54:07.258376       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) is MODIFIED
I0202 19:54:07.421613       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/pods/coredns-z8bgt, in flight requests: 16
I0202 19:54:07.426091       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 17
I0202 19:54:07.436754       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-system/pods/coredns-z8bgt with status code 200, spent 15.079895ms
I0202 19:54:07.443728       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 17.568063ms
I0202 19:54:07.622158       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-system/pods/coredns-z8bgt/status, in flight requests: 16
I0202 19:54:07.622206       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851, in flight requests: 17
I0202 19:54:07.646316       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851 with status code 200, spent 24.036549ms
I0202 19:54:07.646383       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-system/pods/coredns-z8bgt/status with status code 200, spent 24.126556ms
I0202 19:54:07.647705       1 storage.go:562] key(kubelet/pods.v1.core/kube-system/coredns-z8bgt) storage is pending, just skip it
I0202 19:54:07.647759       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-system/coredns-z8bgt}) is under processing
I0202 19:54:07.681203       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-system/coredns-z8bgt) is MODIFIED
I0202 19:54:07.821658       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/pods/kube-proxy-kzqvq, in flight requests: 16
I0202 19:54:07.822587       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83, in flight requests: 17
I0202 19:54:07.837668       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-system/pods/kube-proxy-kzqvq with status code 200, spent 15.921525ms
I0202 19:54:07.841322       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83 with status code 200, spent 18.601204ms
I0202 19:54:08.021558       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-system/pods/kube-proxy-kzqvq/status, in flight requests: 16
I0202 19:54:08.022980       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529, in flight requests: 17
I0202 19:54:08.047042       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-system/pods/kube-proxy-kzqvq/status with status code 200, spent 25.411415ms
I0202 19:54:08.047042       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c380529 with status code 200, spent 23.998974ms
I0202 19:54:08.048178       1 storage.go:562] key(kubelet/pods.v1.core/kube-system/kube-proxy-kzqvq) storage is pending, just skip it
I0202 19:54:08.048219       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-system/kube-proxy-kzqvq}) is under processing
I0202 19:54:08.062464       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-system/kube-proxy-kzqvq) is MODIFIED
I0202 19:54:08.221535       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/pods/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb, in flight requests: 16
I0202 19:54:08.222502       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 17
I0202 19:54:08.241470       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-system/pods/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb with status code 200, spent 19.81814ms
I0202 19:54:08.242095       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 19.513818ms
I0202 19:54:08.421145       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-system/pods/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb/status, in flight requests: 16
I0202 19:54:08.421902       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851, in flight requests: 17
I0202 19:54:08.442717       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c376851 with status code 200, spent 20.762896ms
I0202 19:54:08.444357       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-system/pods/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb/status with status code 200, spent 23.137074ms
I0202 19:54:08.445123       1 storage.go:562] key(kubelet/pods.v1.core/kube-system/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb) storage is pending, just skip it
I0202 19:54:08.445145       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-system/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb}) is under processing
I0202 19:54:08.458161       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-system/pool-coordinator-edge-gcgh4-6f6599575c-lfbrb) is MODIFIED
I0202 19:54:08.622627       1 util.go:293] start proxying: patch /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83, in flight requests: 16
I0202 19:54:08.642997       1 util.go:252] kubelet patch events: /api/v1/namespaces/default/events/mylittlefutro.174019fe2c37cc83 with status code 200, spent 20.261047ms
I0202 19:54:08.822396       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 16
I0202 19:54:08.839591       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 17.127252ms
I0202 19:54:09.022680       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 16
I0202 19:54:09.043243       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 20.478639ms
I0202 19:54:09.167464       1 util.go:293] start proxying: get /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx, in flight requests: 16
I0202 19:54:09.184443       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx with status code 200, spent 16.894951ms
I0202 19:54:09.187271       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx/status, in flight requests: 16
I0202 19:54:09.207603       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx/status with status code 200, spent 20.269342ms
I0202 19:54:09.208942       1 storage.go:562] key(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) storage is pending, just skip it
I0202 19:54:09.208985       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx}) is under processing
I0202 19:54:09.214246       1 storage.go:562] key(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) storage is pending, just skip it
I0202 19:54:09.214283       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx}) is under processing
I0202 19:54:09.214308       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) is MODIFIED
I0202 19:54:09.222774       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 16
I0202 19:54:09.239775       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 16.937391ms
I0202 19:54:09.422267       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 16
I0202 19:54:09.439656       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 17.315355ms
I0202 19:54:09.622498       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 16
I0202 19:54:09.641794       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 19.220815ms
I0202 19:54:09.822318       1 util.go:293] start proxying: post /api/v1/namespaces/kube-flannel/events, in flight requests: 16
I0202 19:54:09.840382       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-flannel/events with status code 201, spent 18.003851ms
I0202 19:54:10.022519       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 16
I0202 19:54:10.039559       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 16.967882ms
I0202 19:54:10.061036       1 util.go:293] start proxying: get /api/v1/nodes/mylittlefutro, in flight requests: 16
I0202 19:54:10.074508       1 connrotation.go:151] create a connection from 192.168.88.248:43934 to 143.42.26.120:6443, total 2 connections in transport manager dialer
I0202 19:54:10.121117       1 util.go:252] kube-proxy get nodes: /api/v1/nodes/mylittlefutro with status code 200, spent 60.015411ms
I0202 19:54:10.217267       1 util.go:293] start proxying: get /apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0, in flight requests: 16
I0202 19:54:10.221981       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 17
I0202 19:54:10.230881       1 util.go:293] start proxying: get /api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0, in flight requests: 18
I0202 19:54:10.236970       1 util.go:252] kube-proxy list endpointslices: /apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0 with status code 200, spent 19.415208ms
I0202 19:54:10.241871       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 19.743684ms
I0202 19:54:10.247280       1 util.go:252] kube-proxy list services: /api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0 with status code 200, spent 16.059857ms
I0202 19:54:10.250039       1 util.go:293] start proxying: get /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx, in flight requests: 17
I0202 19:54:10.253174       1 util.go:293] start proxying: get /apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=13922&timeout=5m22s&timeoutSeconds=322&watch=true, in flight requests: 17
I0202 19:54:10.272222       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx with status code 200, spent 19.427144ms
I0202 19:54:10.274078       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx/status, in flight requests: 17
I0202 19:54:10.284047       1 util.go:293] start proxying: post /apis/events.k8s.io/v1/namespaces/default/events, in flight requests: 18
I0202 19:54:10.292712       1 util.go:293] start proxying: get /api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=13070&timeout=5m14s&timeoutSeconds=314&watch=true, in flight requests: 19
I0202 19:54:10.295042       1 storage.go:562] key(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) storage is pending, just skip it
I0202 19:54:10.295076       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx}) is under processing
I0202 19:54:10.295098       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) is MODIFIED
I0202 19:54:10.295365       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx/status with status code 200, spent 21.237665ms
I0202 19:54:10.296072       1 storage.go:562] key(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) storage is pending, just skip it
I0202 19:54:10.296092       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx}) is under processing
I0202 19:54:10.311872       1 util.go:252] kube-proxy create events: /apis/events.k8s.io/v1/namespaces/default/events with status code 201, spent 27.762795ms
I0202 19:54:10.422577       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 18
I0202 19:54:10.450357       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 27.639867ms
I0202 19:54:10.622379       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 18
I0202 19:54:10.638709       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 16.268632ms
I0202 19:54:10.822418       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 18
I0202 19:54:10.839748       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 17.261534ms
I0202 19:54:10.931601       1 util.go:293] start proxying: get /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx, in flight requests: 18
I0202 19:54:10.968745       1 util.go:252] flanneld get pods: /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx with status code 200, spent 37.061107ms
I0202 19:54:10.986007       1 util.go:293] start proxying: get /api/v1/nodes?limit=500&resourceVersion=0, in flight requests: 18
I0202 19:54:11.012215       1 util.go:252] flanneld list nodes: /api/v1/nodes?limit=500&resourceVersion=0 with status code 200, spent 26.09828ms
I0202 19:54:11.018540       1 util.go:293] start proxying: get /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=13891&timeout=8m1s&timeoutSeconds=481&watch=true, in flight requests: 18
I0202 19:54:11.022281       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 19
I0202 19:54:11.040077       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 17.722363ms
I0202 19:54:11.222317       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 19
I0202 19:54:11.242012       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 19.629246ms
I0202 19:54:11.277317       1 util.go:293] start proxying: get /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx, in flight requests: 19
I0202 19:54:11.294235       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx with status code 200, spent 16.841728ms
I0202 19:54:11.295943       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx/status, in flight requests: 19
I0202 19:54:11.322360       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-pcvpx/status with status code 200, spent 26.358695ms
I0202 19:54:11.323015       1 storage.go:562] key(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) storage is pending, just skip it
I0202 19:54:11.323034       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx}) is under processing
I0202 19:54:11.337622       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-flannel/kube-flannel-ds-pcvpx) is MODIFIED
I0202 19:54:11.422040       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 19
I0202 19:54:11.438805       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 16.694467ms
I0202 19:54:11.622832       1 util.go:293] start proxying: post /api/v1/namespaces/kube-flannel/events, in flight requests: 19
I0202 19:54:11.642605       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-flannel/events with status code 201, spent 19.706781ms
I0202 19:54:11.822064       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-system/events/coredns-z8bgt.174019ff71a8489f, in flight requests: 19
I0202 19:54:11.840918       1 util.go:252] kubelet patch events: /api/v1/namespaces/kube-system/events/coredns-z8bgt.174019ff71a8489f with status code 200, spent 18.792213ms
I0202 19:54:12.018967       1 util.go:293] start proxying: patch /api/v1/nodes/mylittlefutro/status, in flight requests: 19
I0202 19:54:12.023010       1 util.go:293] start proxying: post /api/v1/namespaces/kube-flannel/events, in flight requests: 20
I0202 19:54:12.042477       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-flannel/events with status code 201, spent 19.40543ms
I0202 19:54:12.045065       1 util.go:252] flanneld patch nodes: /api/v1/nodes/mylittlefutro/status with status code 200, spent 26.031313ms
I0202 19:54:12.124093       1 util.go:293] start proxying: patch /api/v1/nodes/mylittlefutro/status, in flight requests: 19
I0202 19:54:12.151217       1 util.go:252] flanneld patch nodes: /api/v1/nodes/mylittlefutro/status with status code 200, spent 27.049722ms
I0202 19:54:12.152356       1 storage.go:562] key(flanneld/nodes.v1.core/mylittlefutro) storage is pending, just skip it
I0202 19:54:12.152383       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) flanneld/nodes.v1.core/mylittlefutro}) is under processing
I0202 19:54:12.222682       1 util.go:293] start proxying: post /api/v1/namespaces/kube-flannel/events, in flight requests: 19
I0202 19:54:12.240427       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-flannel/events with status code 201, spent 17.631598ms
I0202 19:54:12.422849       1 util.go:293] start proxying: post /api/v1/namespaces/kube-flannel/events, in flight requests: 19
I0202 19:54:12.442049       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-flannel/events with status code 201, spent 19.130292ms
I0202 19:54:12.622660       1 util.go:293] start proxying: post /api/v1/namespaces/kube-flannel/events, in flight requests: 19
I0202 19:54:12.641046       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-flannel/events with status code 201, spent 18.319019ms
I0202 19:54:12.822863       1 util.go:293] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 19
I0202 19:54:12.846323       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 23.393432ms
I0202 19:54:13.023138       1 util.go:293] start proxying: post /api/v1/namespaces/kube-flannel/events, in flight requests: 19
I0202 19:54:13.040960       1 util.go:252] kubelet create events: /api/v1/namespaces/kube-flannel/events with status code 201, spent 17.755874ms
I0202 19:54:13.222467       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-system/events/coredns-z8bgt.174019ff71a8489f, in flight requests: 19
I0202 19:54:13.242959       1 util.go:252] kubelet patch events: /api/v1/namespaces/kube-system/events/coredns-z8bgt.174019ff71a8489f with status code 200, spent 20.414714ms
I0202 19:54:13.362143       1 util.go:293] start proxying: get /api/v1/namespaces/kube-system/pods/coredns-z8bgt, in flight requests: 19
I0202 19:54:13.379825       1 util.go:252] kubelet get pods: /api/v1/namespaces/kube-system/pods/coredns-z8bgt with status code 200, spent 17.606711ms
I0202 19:54:13.381375       1 util.go:293] start proxying: patch /api/v1/namespaces/kube-system/pods/coredns-z8bgt/status, in flight requests: 19
I0202 19:54:13.403883       1 util.go:252] kubelet patch pods: /api/v1/namespaces/kube-system/pods/coredns-z8bgt/status with status code 200, spent 22.425896ms
I0202 19:54:13.405003       1 storage.go:562] key(kubelet/pods.v1.core/kube-system/coredns-z8bgt) storage is pending, just skip it
I0202 19:54:13.405043       1 cache_manager.go:650] skip to cache watch event because key({%!s(bool=false) kubelet/pods.v1.core/kube-system/coredns-z8bgt}) is under processing
I0202 19:54:13.407497       1 cache_manager.go:439] pod(kubelet/pods.v1.core/kube-system/coredns-z8bgt) is MODIFIED

For all my tests, I spin up a fresh kubeadm multi-control-plane cluster.

@rambohe-ch
Member

Using "latest" tag for yurt-controller-manager causes:

@batthebee would you be able to check the creation time of the latest image? I am afraid that you have used an old latest version.

@batthebee
Contributor Author

batthebee commented Feb 3, 2023

@rambohe-ch you are right, sorry. It was an old image. I have now changed the imagePullPolicy to Always.

The problem still exists, but I think I found the cause.

After the initial setup, there is a valid certificate containing 127.0.0.1 in the pool-coordinator-dynamic-certs secret. But when I delete all the pods on my workers and edge devices (except yurt-hub-*), or restart the node, a new certificate is created in this secret without the 127.0.0.1 entry.
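
To verify this, the IP SANs of the certificates stored in the secret can be dumped before and after a restart, for example with a small one-off helper like the following (a rough client-go sketch; the program and names are made up and not part of OpenYurt):

// checksans.go: one-off check of the IP SANs stored in the
// pool-coordinator-dynamic-certs secret (hypothetical helper, not part of OpenYurt).
package main

import (
    "context"
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Use $KUBECONFIG if set, otherwise fall back to the in-cluster config.
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    sec, err := cs.CoreV1().Secrets("kube-system").Get(context.TODO(), "pool-coordinator-dynamic-certs", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, key := range []string{"apiserver.crt", "etcd-server.crt"} {
        block, _ := pem.Decode(sec.Data[key]) // Data is already base64-decoded by client-go
        if block == nil {
            fmt.Printf("%s: no PEM data found\n", key)
            continue
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Printf("%s: %v\n", key, err)
            continue
        }
        fmt.Printf("%s IP SANs: %v, DNS SANs: %v\n", key, cert.IPAddresses, cert.DNSNames)
    }
}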

@batthebee
Contributor Author

I found this in the code:

	{
		CertName:     "apiserver",
		SecretName:   PoolcoordinatorDynamicSecertName,
		IsKubeConfig: false,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		CommonName:   PoolcoordinatorAPIServerCN,
		Organization: []string{PoolcoordinatorOrg},
		certInit: func(i client.Interface, c <-chan struct{}) ([]net.IP, []string, error) {
			return waitUntilSVCReady(i, PoolcoordinatorAPIServerSVC, c)
		},
	},
	{
		CertName:     "etcd-server",
		SecretName:   PoolcoordinatorDynamicSecertName,
		IsKubeConfig: false,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPs: []net.IP{
			net.ParseIP("127.0.0.1"),
		},
		CommonName:   PoolcoordinatorETCDCN,
		Organization: []string{PoolcoordinatorOrg},
		certInit: func(i client.Interface, c <-chan struct{}) ([]net.IP, []string, error) {
			return waitUntilSVCReady(i, PoolcoordinatorETCDSVC, c)
		},
	},

Does the apiserver also need the 127.0.0.1? I ask because of these apiserver logs:

docker logs f95904f95f35
I0203 15:34:11.845994       1 server.go:553] external host was not specified, using 192.168.88.248
I0203 15:34:11.846649       1 server.go:161] Version: v1.22.17
I0203 15:34:13.240119       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0203 15:34:13.244202       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0203 15:34:13.244242       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0203 15:34:13.246282       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0203 15:34:13.246307       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0203 15:34:13.256778       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:12379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for 10.101.163.92, 10.101.163.92, not 127.0.0.1". Reconnecting...
W0203 15:34:14.246632       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:12379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for 10.101.163.92, 10.101.163.92, not 127.0.0.1". Reconnecting...

@batthebee
Contributor Author

batthebee commented Feb 3, 2023

Additional info:

The recreation of the certificates without 127.0.0.1 is caused by the yurt-controller-manager. It happens when the pod is restarted: the secret is then rewritten multiple times (triggered by yurt-csr?!), which results in an etcd certificate without the 127.0.0.1 entry.

Here are the logs:

k logs -n kube-system yurt-controller-manager-8454b957d4-fwxqw
yurtcontroller-manager version: projectinfo.Info{GitVersion:"-33adccc", GitCommit:"33adccc", BuildDate:"2023-02-03T02:07:05Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{""}}
W0203 22:06:35.840660       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0203 22:06:35.849663       1 leaderelection.go:248] attempting to acquire leader lease kube-system/yurt-controller-manager...
I0203 22:06:51.971816       1 leaderelection.go:258] successfully acquired lease kube-system/yurt-controller-manager
I0203 22:06:51.971893       1 event.go:282] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"yurt-controller-manager", UID:"5856a3b3-6ea3-45c4-8f9f-24f1a6fd7169", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"8208", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubeadm-openyurt-w1_8d11cbf5-6d24-4f7a-a38a-5e69cdaea447 became leader
I0203 22:06:51.994742       1 controllermanager.go:373] Started "podbinding"
I0203 22:06:51.995959       1 controllermanager.go:373] Started "poolcoordinatorcertmanager"
I0203 22:06:51.997041       1 poolcoordinator_cert_manager.go:220] Starting poolcoordinatorCertManager controller
I0203 22:06:51.998841       1 controllermanager.go:373] Started "poolcoordinator"
I0203 22:06:52.000549       1 poolcoordinator_cert_manager.go:368] CA already exist in secret, reuse it
I0203 22:06:52.004921       1 poolcoordinator_cert_manager.go:313] cert apiserver-etcd-client not change, reuse it
I0203 22:06:52.010324       1 csrapprover.go:125] v1.CertificateSigningRequest is supported.
I0203 22:06:52.010625       1 controllermanager.go:373] Started "yurtcsrapprover"
I0203 22:06:52.012071       1 controllermanager.go:373] Started "daemonpodupdater"
I0203 22:06:52.017407       1 csrapprover.go:185] starting the crsapprover
I0203 22:06:52.017515       1 daemon_pod_updater_controller.go:215] Starting daemonPodUpdater controller
I0203 22:06:52.017619       1 poolcoordinator_cert_manager.go:313] cert pool-coordinator-yurthub-client not change, reuse it
I0203 22:06:52.020998       1 servicetopology.go:297] v1.EndpointSlice is supported.
I0203 22:06:52.023664       1 controllermanager.go:373] Started "servicetopologycontroller"
I0203 22:06:52.024558       1 servicetopology.go:93] starting the service topology controller
I0203 22:06:52.101032       1 csrapprover.go:174] csr(csr-5qcps) is not yurt-csr
I0203 22:06:52.101951       1 csrapprover.go:174] csr(csr-fzxpk) is not yurt-csr
I0203 22:06:52.103017       1 csrapprover.go:174] csr(csr-xbxmz) is not yurt-csr
I0203 22:06:52.103325       1 csrapprover.go:174] csr(csr-5blwj) is not yurt-csr
I0203 22:06:52.103610       1 csrapprover.go:174] csr(csr-s2fq6) is not yurt-csr
I0203 22:06:52.103820       1 csrapprover.go:174] csr(csr-jb5jh) is not yurt-csr
I0203 22:06:52.105957       1 poolcoordinator_controller.go:223] start node taint workers
I0203 22:06:52.124698       1 servicetopology.go:99] sync service topology controller succeed
I0203 22:06:52.196203       1 pod_binding_controller.go:274] start pod binding workers
I0203 22:06:53.037041       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0203 22:06:53.037106       1 poolcoordinator_cert_manager.go:306] cert apiserver IP has changed
I0203 22:06:54.049706       1 util.go:62] pool-coordinator-etcd service is ready for poolcoordinator_cert_manager
I0203 22:06:54.049776       1 poolcoordinator_cert_manager.go:306] cert etcd-server IP has changed
I0203 22:06:55.066680       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0203 22:06:55.066739       1 poolcoordinator_cert_manager.go:306] cert kubeconfig IP has changed
I0203 22:06:56.081050       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0203 22:06:56.081098       1 poolcoordinator_cert_manager.go:306] cert admin.conf IP has changed
I0203 22:06:57.377994       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0203 22:06:57.412114       1 certificate.go:393] successfully write apiserver cert/key into pool-coordinator-dynamic-certs
I0203 22:06:59.294491       1 util.go:62] pool-coordinator-etcd service is ready for poolcoordinator_cert_manager
I0203 22:06:59.332505       1 certificate.go:393] successfully write etcd-server cert/key into pool-coordinator-dynamic-certs
I0203 22:07:00.762652       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0203 22:07:00.798470       1 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-monitoring-kubeconfig
I0203 22:07:02.156613       1 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0203 22:07:02.201252       1 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-static-certs
W0203 22:07:02.202020       1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/tmp/yurt-controller-manager_poolcoordinator-apiserver-client-current.pem", ("", "") or ("/tmp", "/tmp"), will regenerate it
I0203 22:07:02.202076       1 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0203 22:07:02.202649       1 certificate_manager.go:446] kubernetes.io/kube-apiserver-client: Rotating certificates
I0203 22:07:02.212895       1 csrapprover.go:168] non-approved and non-denied csr, enqueue: csr-464tg
I0203 22:07:02.226709       1 csrapprover.go:282] successfully approve yurt-csr(csr-464tg)
I0203 22:07:03.240025       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-03 22:02:02 +0000 UTC, rotation deadline is 2023-12-24 22:22:45.832097149 +0000 UTC
I0203 22:07:03.240202       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 7776h15m42.591900711s for next certificate rotation
I0203 22:07:04.240536       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-03 22:02:02 +0000 UTC, rotation deadline is 2023-12-12 02:51:12.296544265 +0000 UTC
I0203 22:07:04.240618       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 7468h44m8.055931505s for next certificate rotation
I0203 22:07:07.202320       1 certificate.go:309] yurt-controller-manager_poolcoordinator certificate signed successfully
I0203 22:07:07.243095       1 certificate.go:357] successfully write apiserver-kubelet-client cert/key pair into pool-coordinator-static-certs
W0203 22:07:07.243414       1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/tmp/yurthub-current.pem", ("", "") or ("/tmp", "/tmp"), will regenerate it
I0203 22:07:07.243461       1 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0203 22:07:07.243585       1 certificate_manager.go:446] kubernetes.io/kube-apiserver-client: Rotating certificates
I0203 22:07:07.254080       1 csrapprover.go:168] non-approved and non-denied csr, enqueue: csr-x4mq2
I0203 22:07:07.265343       1 csrapprover.go:282] successfully approve yurt-csr(csr-x4mq2)
I0203 22:07:08.274533       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-03 22:02:07 +0000 UTC, rotation deadline is 2023-12-15 22:30:56.22852082 +0000 UTC
I0203 22:07:08.274611       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 7560h23m47.953917393s for next certificate rotation
I0203 22:07:09.275418       1 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-03 22:02:07 +0000 UTC, rotation deadline is 2023-11-02 17:08:08.556922623 +0000 UTC
I0203 22:07:09.275579       1 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 6523h0m59.28134985s for next certificate rotation
I0203 22:07:12.243769       1 certificate.go:309] yurt-controller-manager_poolcoordinator certificate signed successfully
I0203 22:07:12.276826       1 certificate.go:357] successfully write node-lease-proxy-client cert/key pair into pool-coordinator-yurthub-certs
I0203 22:07:12.297217       1 certificate.go:393] successfully write ca cert/key into pool-coordinator-static-certs
I0203 22:07:12.312313       1 certificate.go:393] successfully write ca cert/key into pool-coordinator-yurthub-certs
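
One way to confirm how often the secret gets rewritten during such a restart, and whether 127.0.0.1 survives each rewrite, is to watch the secret from outside and print the etcd-server IP SANs on every update. A rough client-go sketch (hypothetical helper, not part of OpenYurt):

// watchcerts.go: hypothetical sketch that watches the pool-coordinator-dynamic-certs
// secret and prints the etcd-server IP SANs on every update.
package main

import (
    "context"
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    w, err := cs.CoreV1().Secrets("kube-system").Watch(context.TODO(), metav1.ListOptions{
        FieldSelector: "metadata.name=pool-coordinator-dynamic-certs",
    })
    if err != nil {
        panic(err)
    }
    for ev := range w.ResultChan() {
        sec, ok := ev.Object.(*corev1.Secret)
        if !ok {
            continue
        }
        block, _ := pem.Decode(sec.Data["etcd-server.crt"])
        if block == nil {
            fmt.Printf("%s: etcd-server.crt missing or not PEM\n", ev.Type)
            continue
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Printf("%s: cannot parse etcd-server.crt: %v\n", ev.Type, err)
            continue
        }
        fmt.Printf("%s: etcd-server.crt IP SANs: %v\n", ev.Type, cert.IPAddresses)
    }
}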

@luc99hen
Member

luc99hen commented Feb 4, 2023

It's weird that it was right at first and crashed after restarting. We haven't met this problem before :<.

I suggest adding a log line here to find out the contents of the added secret item for the etcd-server cert. You can use make docker-build-yurt-controller-manager to build a new image and observe its behavior.

@batthebee
Contributor Author

@Congrool @luc99hen

I'm working on the latest commit:

git log
commit a36f3bebdfccca1c30ae37283f9331e89a26641f (HEAD -> master, origin/master, origin/HEAD)
Author: Frank Zhao <[email protected]>
Date:   Fri Feb 3 10:55:29 2023 +0800

    [Doc] Add contribution leaderboard badge (#1184)
    
    * Update README.md
    
    * Update README.zh.md

I added some debug output to the secret client's AddData method:

@@ -72,6 +75,24 @@ func NewSecretClient(clientSet client.Interface, ns, name string) (*SecretClient
 func (c *SecretClient) AddData(key string, val []byte) error {
 
        patchBytes, _ := json.Marshal(map[string]interface{}{"data": map[string][]byte{key: val}})
+
+       if key == "etcd-server.crt" || key == "apiserver.crt" {
+               block, rest := pem.Decode(val)
+               if block == nil || len(rest) > 0 {
+                       return fmt.Errorf("failed to decode PEM block containing the public key")
+               }
+               cert, err := x509.ParseCertificate(block.Bytes)
+               if err != nil {
+                       return fmt.Errorf("failed to parse certificate: %v", err)
+               }
+
+               result, err := certinfo.CertificateText(cert)
+               if err != nil {
+                       return err
+               }
+               klog.Infof("key: %s cert: %s", key, result)
+       }
+
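
For reference, here is the same dump logic as a standalone helper together with the imports it needs (a sketch; it assumes the certinfo package used above is github.com/grantae/certinfo):

// dumpcert.go: standalone version of the debug helper above (hypothetical;
// it prints the full text of a PEM-encoded certificate, including its SANs).
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"

    "github.com/grantae/certinfo"
)

// dumpCert decodes a PEM-encoded certificate and returns its textual form.
func dumpCert(val []byte) (string, error) {
    block, rest := pem.Decode(val)
    if block == nil || len(rest) > 0 {
        return "", fmt.Errorf("failed to decode PEM block containing the certificate")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return "", fmt.Errorf("failed to parse certificate: %v", err)
    }
    return certinfo.CertificateText(cert)
}

func main() {
    if len(os.Args) != 2 {
        fmt.Fprintln(os.Stderr, "usage: dumpcert <cert.pem>")
        os.Exit(1)
    }
    pemBytes, err := os.ReadFile(os.Args[1])
    if err != nil {
        panic(err)
    }
    text, err := dumpCert(pemBytes)
    if err != nil {
        panic(err)
    }
    fmt.Println(text)
}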

There are no pool-coordinator secrets:

kubectl get secrets -n kube-system 
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-99r62              kubernetes.io/service-account-token   3      25h
bootstrap-signer-token-b99vd                     kubernetes.io/service-account-token   3      25h
certificate-controller-token-xk7qb               kubernetes.io/service-account-token   3      25h
clusterrole-aggregation-controller-token-pq24t   kubernetes.io/service-account-token   3      25h
coredns-token-57mcd                              kubernetes.io/service-account-token   3      25h
cronjob-controller-token-9727d                   kubernetes.io/service-account-token   3      25h
daemon-set-controller-token-bwrxh                kubernetes.io/service-account-token   3      25h
default-token-rpp96                              kubernetes.io/service-account-token   3      25h
deployment-controller-token-mlc7h                kubernetes.io/service-account-token   3      25h
disruption-controller-token-zdffk                kubernetes.io/service-account-token   3      25h
endpoint-controller-token-m7n7n                  kubernetes.io/service-account-token   3      25h
endpointslice-controller-token-6ddjq             kubernetes.io/service-account-token   3      25h
endpointslicemirroring-controller-token-c5sln    kubernetes.io/service-account-token   3      25h
ephemeral-volume-controller-token-7pgwb          kubernetes.io/service-account-token   3      25h
expand-controller-token-kdj9j                    kubernetes.io/service-account-token   3      25h
generic-garbage-collector-token-7dmx9            kubernetes.io/service-account-token   3      25h
horizontal-pod-autoscaler-token-vgjtt            kubernetes.io/service-account-token   3      25h
job-controller-token-256k6                       kubernetes.io/service-account-token   3      25h
kube-proxy-token-4j8z9                           kubernetes.io/service-account-token   3      25h
namespace-controller-token-q7hrg                 kubernetes.io/service-account-token   3      25h
node-controller-token-5rj64                      kubernetes.io/service-account-token   3      25h
persistent-volume-binder-token-55m74             kubernetes.io/service-account-token   3      25h
pod-garbage-collector-token-w7tjr                kubernetes.io/service-account-token   3      25h
pv-protection-controller-token-b6zss             kubernetes.io/service-account-token   3      25h
pvc-protection-controller-token-q4qcs            kubernetes.io/service-account-token   3      25h
raven-agent-account-token-7phjb                  kubernetes.io/service-account-token   3      24h
raven-agent-secret                               Opaque                                1      24h
raven-controller-manager-token-dwpxr             kubernetes.io/service-account-token   3      24h
raven-webhook-certs                              Opaque                                6      24h
replicaset-controller-token-6smqg                kubernetes.io/service-account-token   3      25h
replication-controller-token-dnldm               kubernetes.io/service-account-token   3      25h
resourcequota-controller-token-phvnz             kubernetes.io/service-account-token   3      25h
root-ca-cert-publisher-token-nqmg4               kubernetes.io/service-account-token   3      25h
service-account-controller-token-44m6v           kubernetes.io/service-account-token   3      25h
service-controller-token-475q2                   kubernetes.io/service-account-token   3      25h
sh.helm.release.v1.app.v1                        helm.sh/release.v1                    1      93m
sh.helm.release.v1.yurt-app-manager.v1           helm.sh/release.v1                    1      24h
statefulset-controller-token-5djcd               kubernetes.io/service-account-token   3      25h
token-cleaner-token-t4bql                        kubernetes.io/service-account-token   3      25h
ttl-after-finished-controller-token-8zjdd        kubernetes.io/service-account-token   3      25h
ttl-controller-token-n2rp2                       kubernetes.io/service-account-token   3      25h
yurt-app-manager                                 Opaque                                0      24h
yurt-app-manager-admission                       Opaque                                3      24h
yurt-app-manager-token-22r4r                     kubernetes.io/service-account-token   3      24h
yurt-controller-manager-token-dvbd9              kubernetes.io/service-account-token   3      93m

Starting the yurt-controller-manager:

./app # dlv exec --headless --api-version=2 --listen :2345 ./yurt-controller-manager
API server listening at: [::]:2345
2023-02-04T22:43:38Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
yurtcontroller-manager version: projectinfo.Info{GitVersion:"v0.0.0", GitCommit:"unknown", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.18.10", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{"unknown"}}
W0204 22:44:31.862834    7560 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0204 22:44:31.867145    7560 leaderelection.go:248] attempting to acquire leader lease kube-system/yurt-controller-manager...
I0204 22:44:48.375530    7560 leaderelection.go:258] successfully acquired lease kube-system/yurt-controller-manager
I0204 22:44:48.375859    7560 event.go:282] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"yurt-controller-manager", UID:"5856a3b3-6ea3-45c4-8f9f-24f1a6fd7169", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"351065", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubeadm-openyurt-w2_18a79c84-7ebd-4821-9f87-573a06bafd02 became leader
I0204 22:44:48.404132    7560 servicetopology.go:297] v1.EndpointSlice is supported.
I0204 22:44:48.404714    7560 controllermanager.go:373] Started "servicetopologycontroller"
I0204 22:44:48.404860    7560 servicetopology.go:93] starting the service topology controller
I0204 22:44:48.406486    7560 controllermanager.go:373] Started "podbinding"
I0204 22:44:48.407597    7560 controllermanager.go:373] Started "poolcoordinatorcertmanager"
I0204 22:44:48.407824    7560 poolcoordinator_cert_manager.go:220] Starting poolcoordinatorCertManager controller
I0204 22:44:48.409220    7560 controllermanager.go:373] Started "poolcoordinator"
I0204 22:44:48.413655    7560 poolcoordinator_cert_manager.go:373] fail to get CA from secret: secrets "pool-coordinator-ca-certs" not found, create new CA
I0204 22:44:48.419778    7560 csrapprover.go:125] v1.CertificateSigningRequest is supported.
I0204 22:44:48.419855    7560 controllermanager.go:373] Started "yurtcsrapprover"
I0204 22:44:48.421390    7560 controllermanager.go:373] Started "daemonpodupdater"
I0204 22:44:48.422292    7560 csrapprover.go:185] starting the crsapprover
I0204 22:44:48.422485    7560 daemon_pod_updater_controller.go:215] Starting daemonPodUpdater controller
I0204 22:44:48.506460    7560 servicetopology.go:99] sync service topology controller succeed
I0204 22:44:48.507644    7560 pod_binding_controller.go:274] start pod binding workers
I0204 22:44:48.509854    7560 poolcoordinator_controller.go:223] start node taint workers
I0204 22:44:49.017727    7560 certificate.go:393] successfully write ca cert/key into pool-coordinator-ca-certs
I0204 22:44:49.500175    7560 certificate.go:393] successfully write apiserver-etcd-client cert/key into pool-coordinator-static-certs
I0204 22:44:49.856814    7560 certificate.go:393] successfully write pool-coordinator-yurthub-client cert/key into pool-coordinator-yurthub-certs
I0204 22:44:51.065831    7560 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0204 22:44:51.084126    7560 secret.go:93] key: apiserver.crt cert: Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2784851986256578629 (0x26a5c82070dd4845)
    Signature Algorithm: SHA256-RSA
        Issuer: CN=openyurt:pool-coordinator
        Validity
            Not Before: Feb 4 22:44:48 2023 UTC
            Not After : Jan 11 22:44:51 2123 UTC
        Subject: O=openyurt:pool-coordinator,CN=openyurt:pool-coordinator:apiserver
        Subject Public Key Info:
            Public Key Algorithm: RSA
                Public-Key: (2048 bit)
                Modulus:
                    ad:dd:8b:e4:2c:4a:a4:8a:72:f4:83:6a:97:99:a1:
                    c0:d1:e2:8a:aa:d3:21:d7:09:21:37:d1:7e:f8:76:
                    9c:05:b7:c7:d3:b2:74:97:62:0b:f0:b4:e4:f4:bd:
                    7c:34:74:5a:2d:df:06:68:62:4b:36:09:93:44:43:
                    28:38:d8:09:dd:e9:c4:ed:77:f4:5f:6d:17:97:8e:
                    46:35:e9:d6:0b:22:a2:21:28:ce:a1:a4:65:8c:a7:
                    6b:d9:a9:44:64:da:63:83:df:de:e2:1b:39:8e:03:
                    b8:3c:f9:01:52:20:fd:56:86:2e:a9:b3:ca:c5:95:
                    ca:86:c4:8c:20:9f:70:fa:cb:bb:3f:e6:6d:46:6f:
                    56:00:5c:2a:5f:8a:f9:97:9e:86:b7:29:c5:3b:73:
                    12:29:33:37:ed:d1:ad:11:61:48:b1:df:34:ed:22:
                    b3:e7:58:f8:cb:0c:7d:6f:63:4e:46:53:0e:d3:eb:
                    25:b5:21:e8:1c:d1:e4:76:9a:83:c0:c4:e0:46:9b:
                    82:c3:87:d3:9d:4d:3f:a4:f3:67:d8:c9:5e:c2:0b:
                    0d:55:f5:f5:49:ab:ff:d2:ad:fc:11:f5:58:fe:53:
                    36:c8:05:6f:0e:26:ee:36:83:7a:04:84:1e:95:0d:
                    06:a8:82:5b:62:2d:e1:ad:6c:8d:a5:ed:a5:5d:41:
                    a5
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Authority Key Identifier:
                keyid:08:D7:84:B8:FF:B7:CA:C1:3F:20:48:B6:9E:33:A4:96:5D:3C:D8:1B
            X509v3 Subject Alternative Name:
                DNS:pool-coordinator-apiserver, DNS:pool-coordinator-apiserver.kube-system, DNS:pool-coordinator-apiserver.kube-system.svc, DNS:pool-coordinator-apiserver.kube-system.svc.cluster.local
                IP Address:10.101.224.203

    Signature Algorithm: SHA256-RSA
         2c:d3:a5:66:e9:1a:9d:a9:aa:83:ef:c5:2d:96:69:5c:bb:8a:
         99:07:22:ae:7d:37:51:d7:e1:69:9d:ba:61:44:64:86:8f:b2:
         df:65:26:2f:30:bd:ac:c0:c7:42:21:5b:d2:76:9c:1b:13:34:
         e1:af:15:25:1d:b0:6b:d6:7d:ec:d7:2c:76:d9:04:63:df:f4:
         73:09:7c:ce:c0:17:d4:67:5a:27:41:a2:e1:6c:25:67:28:68:
         66:de:24:08:fa:9e:f2:09:e5:2c:65:55:52:31:e7:a4:64:ab:
         a1:ef:30:f2:b0:fe:80:ad:60:35:c0:1b:09:e4:88:90:fa:b9:
         90:41:0c:dd:cd:4d:98:53:81:2a:83:fa:88:c8:3d:2c:69:9c:
         b1:b5:fa:46:eb:3f:6a:33:27:17:df:ec:57:56:4e:d8:4c:28:
         c0:cb:a3:cd:a1:01:26:9b:54:d8:c0:44:51:af:64:fc:fe:98:
         b7:aa:71:e3:35:91:38:5e:32:f1:b3:83:6f:b9:37:92:c8:e3:
         17:50:96:53:f8:b3:db:b2:79:eb:8d:45:b0:d0:78:be:54:22:
         50:6f:a4:f4:6d:d8:dd:ff:4a:08:96:d6:ea:e4:61:75:0a:50:
         1b:6f:2c:35:45:1f:9f:73:26:d4:97:c3:73:8c:51:d8:3f:35:
         64:3b:cc:1d
I0204 22:44:51.091060    7560 certificate.go:393] successfully write apiserver cert/key into pool-coordinator-dynamic-certs
I0204 22:44:52.251213    7560 util.go:62] pool-coordinator-etcd service is ready for poolcoordinator_cert_manager
I0204 22:44:52.278363    7560 secret.go:93] key: etcd-server.crt cert: Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2433871316534301105 (0x21c6d911330a11b1)
    Signature Algorithm: SHA256-RSA
        Issuer: CN=openyurt:pool-coordinator
        Validity
            Not Before: Feb 4 22:44:48 2023 UTC
            Not After : Jan 11 22:44:52 2123 UTC
        Subject: O=openyurt:pool-coordinator,CN=openyurt:pool-coordinator:etcd
        Subject Public Key Info:
            Public Key Algorithm: RSA
                Public-Key: (2048 bit)
                Modulus:
                    ae:61:05:95:0c:7b:c0:4e:4a:92:e1:15:15:63:b5:
                    0b:bb:be:ec:b2:c8:b3:a0:54:f0:b6:d5:07:7f:1e:
                    25:c3:44:be:96:7a:94:c7:21:bb:38:d4:bc:24:84:
                    e9:04:50:06:e7:1b:2f:0e:5c:b5:d3:a0:90:7f:26:
                    2b:9d:cc:6b:8c:d5:4f:a1:1d:51:a5:b0:4a:da:1c:
                    c3:9f:9a:96:69:62:1f:5c:12:85:7d:7d:28:97:3f:
                    93:c7:85:08:45:97:9e:ca:45:ca:1b:e5:cd:3d:13:
                    4d:36:51:67:8a:08:b4:6e:f0:33:e4:b5:62:44:57:
                    42:d6:ed:3d:ac:db:38:0d:b2:79:93:b1:c7:50:69:
                    dc:69:36:ba:f6:31:13:9c:f1:b8:4c:f9:42:8b:29:
                    61:5e:64:a3:d8:7a:b4:57:d0:cb:26:45:18:a4:fd:
                    35:b8:ca:1e:ec:86:7a:1a:36:59:f2:19:5d:9f:01:
                    3f:49:ef:84:dd:35:ea:93:5d:7a:18:2b:cd:11:57:
                    ba:71:24:ca:e1:25:26:e1:b0:8e:82:93:35:cb:c1:
                    a3:fa:97:14:85:b5:70:3f:bd:dd:f9:0c:82:01:e4:
                    34:2c:de:70:ba:b6:c5:82:e6:62:64:92:2d:45:81:
                    3b:53:13:ce:4a:20:79:4c:c6:3c:2c:31:96:8f:9b:
                    f9
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Authority Key Identifier:
                keyid:08:D7:84:B8:FF:B7:CA:C1:3F:20:48:B6:9E:33:A4:96:5D:3C:D8:1B
            X509v3 Subject Alternative Name:
                DNS:pool-coordinator-etcd, DNS:pool-coordinator-etcd.kube-system, DNS:pool-coordinator-etcd.kube-system.svc, DNS:pool-coordinator-etcd.kube-system.svc.cluster.local
                IP Address:127.0.0.1, IP Address:10.104.163.9

    Signature Algorithm: SHA256-RSA
         81:8d:9f:4f:ca:3c:ca:2c:6c:44:02:33:01:60:97:88:0f:84:
         53:c0:aa:3e:38:ba:91:f5:61:15:18:25:79:ef:55:e6:62:77:
         4f:d4:88:72:4f:ba:37:32:3a:72:38:e7:0e:27:2b:a5:43:2a:
         a6:5f:2c:aa:dd:f6:92:7a:3e:7d:06:b4:52:b6:57:7e:dd:da:
         5e:d1:18:6b:7a:e3:b2:41:2b:ee:56:e3:dc:c1:0f:f7:91:01:
         05:5d:ba:62:29:51:a3:6e:28:77:37:72:72:8f:61:b2:b1:91:
         ed:bd:b0:10:1c:61:66:aa:b6:d2:c6:b4:af:33:6a:c2:40:d9:
         8a:ab:82:40:0a:f9:20:c1:13:52:e5:93:b4:c0:00:91:63:a1:
         8d:1a:ca:96:a9:59:4d:50:a4:1d:04:f9:5d:e3:63:d9:cf:a6:
         35:b6:4c:2f:23:d5:eb:3e:29:70:13:b8:0e:85:0f:5f:af:14:
         f6:11:bf:78:f5:9f:ec:a6:5d:7f:03:51:ac:01:1c:27:ba:6a:
         1f:4e:e0:6e:46:f5:9b:51:5d:c5:f5:8f:84:a9:3e:42:fb:6c:
         0b:ad:7a:83:aa:a4:07:e5:c6:09:f1:cb:ee:ac:7a:81:b4:a7:
         04:89:38:9a:b9:04:b8:be:b1:11:50:db:93:0a:d7:d8:f6:5a:
         21:a7:7a:fd
I0204 22:44:52.295483    7560 certificate.go:393] successfully write etcd-server cert/key into pool-coordinator-dynamic-certs
I0204 22:44:54.018871    7560 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0204 22:44:54.052376    7560 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-monitoring-kubeconfig
I0204 22:44:55.357655    7560 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0204 22:44:55.391382    7560 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-static-certs
I0204 22:44:55.392520    7560 certificate_store.go:130] Loading cert/key pair from "/tmp/yurt-controller-manager_poolcoordinator-apiserver-client-current.pem".
I0204 22:44:55.393591    7560 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0204 22:44:55.393719    7560 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-04 21:12:28 +0000 UTC, rotation deadline is 2023-11-08 18:57:39.387706491 +0000 UTC
I0204 22:44:55.393878    7560 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 6644h12m43.993853775s for next certificate rotation
I0204 22:45:00.394367    7560 certificate.go:309] yurt-controller-manager_poolcoordinator certificate signed successfully
I0204 22:45:00.433799    7560 certificate.go:357] successfully write apiserver-kubelet-client cert/key pair into pool-coordinator-static-certs
I0204 22:45:00.433969    7560 certificate_store.go:130] Loading cert/key pair from "/tmp/yurthub-current.pem".
I0204 22:45:00.434435    7560 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0204 22:45:00.434548    7560 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-04 21:12:33 +0000 UTC, rotation deadline is 2023-11-04 00:12:35.047809798 +0000 UTC
I0204 22:45:00.434574    7560 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 6529h27m34.613238197s for next certificate rotation
I0204 22:45:05.434991    7560 certificate.go:309] yurt-controller-manager_poolcoordinator certificate signed successfully
I0204 22:45:05.467128    7560 certificate.go:357] successfully write node-lease-proxy-client cert/key pair into pool-coordinator-yurthub-certs
I0204 22:45:05.486931    7560 certificate.go:393] successfully write ca cert/key into pool-coordinator-static-certs
I0204 22:45:05.505430    7560 certificate.go:393] successfully write ca cert/key into pool-coordinator-yurthub-certs
I0204 22:45:05.771682    7560 certificate.go:438] successfully write key pair into secret pool-coordinator-static-certs

The etcd-server cert is written correctly with 127.0.0.1; the apiserver cert has no 127.0.0.1.

Restarting the yurt-controller-manager without deleting secrets:

dlv exec --headless --api-version=2 --listen :2345 ./yurt-controller-manager
API server listening at: [::]:2345
2023-02-04T22:47:40Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
yurtcontroller-manager version: projectinfo.Info{GitVersion:"v0.0.0", GitCommit:"unknown", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.18.10", Compiler:"gc", Platform:"linux/amd64", AllVersions:[]string{"unknown"}}
W0204 22:47:45.616158    7577 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0204 22:47:45.619349    7577 leaderelection.go:248] attempting to acquire leader lease kube-system/yurt-controller-manager...
I0204 22:48:03.607982    7577 leaderelection.go:258] successfully acquired lease kube-system/yurt-controller-manager
I0204 22:48:03.610358    7577 event.go:282] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"yurt-controller-manager", UID:"5856a3b3-6ea3-45c4-8f9f-24f1a6fd7169", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"351778", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubeadm-openyurt-w2_7856d1ab-61ac-43bc-9884-ec808ce0d4f1 became leader
I0204 22:48:03.622589    7577 controllermanager.go:373] Started "poolcoordinator"
I0204 22:48:03.629944    7577 csrapprover.go:125] v1.CertificateSigningRequest is supported.
I0204 22:48:03.630144    7577 controllermanager.go:373] Started "yurtcsrapprover"
I0204 22:48:03.630180    7577 csrapprover.go:185] starting the crsapprover
I0204 22:48:03.635860    7577 controllermanager.go:373] Started "daemonpodupdater"
I0204 22:48:03.636389    7577 daemon_pod_updater_controller.go:215] Starting daemonPodUpdater controller
I0204 22:48:03.645372    7577 servicetopology.go:297] v1.EndpointSlice is supported.
I0204 22:48:03.645655    7577 controllermanager.go:373] Started "servicetopologycontroller"
I0204 22:48:03.647753    7577 controllermanager.go:373] Started "podbinding"
I0204 22:48:03.650716    7577 controllermanager.go:373] Started "poolcoordinatorcertmanager"
I0204 22:48:03.652294    7577 servicetopology.go:93] starting the service topology controller
I0204 22:48:03.653363    7577 poolcoordinator_cert_manager.go:220] Starting poolcoordinatorCertManager controller
I0204 22:48:03.661452    7577 poolcoordinator_cert_manager.go:368] CA already exist in secret, reuse it
I0204 22:48:03.671038    7577 poolcoordinator_cert_manager.go:313] cert apiserver-etcd-client not change, reuse it
I0204 22:48:03.691097    7577 poolcoordinator_cert_manager.go:313] cert pool-coordinator-yurthub-client not change, reuse it
I0204 22:48:03.723135    7577 poolcoordinator_controller.go:223] start node taint workers
I0204 22:48:03.753374    7577 servicetopology.go:99] sync service topology controller succeed
I0204 22:48:03.756648    7577 pod_binding_controller.go:274] start pod binding workers
I0204 22:48:04.714925    7577 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0204 22:48:04.715055    7577 poolcoordinator_cert_manager.go:306] cert apiserver IP has changed
I0204 22:48:05.727070    7577 util.go:62] pool-coordinator-etcd service is ready for poolcoordinator_cert_manager
I0204 22:48:05.727133    7577 poolcoordinator_cert_manager.go:306] cert etcd-server IP has changed
I0204 22:48:06.742740    7577 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0204 22:48:06.742793    7577 poolcoordinator_cert_manager.go:306] cert kubeconfig IP has changed
I0204 22:48:07.756134    7577 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0204 22:48:07.756207    7577 poolcoordinator_cert_manager.go:306] cert admin.conf IP has changed
I0204 22:48:08.906255    7577 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0204 22:48:08.937513    7577 secret.go:93] key: apiserver.crt cert: Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 3257740584800733765 (0x2d35d1cd376bca45)
    Signature Algorithm: SHA256-RSA
        Issuer: CN=openyurt:pool-coordinator
        Validity
            Not Before: Feb 4 22:44:48 2023 UTC
            Not After : Jan 11 22:48:08 2123 UTC
        Subject: O=openyurt:pool-coordinator,CN=openyurt:pool-coordinator:apiserver
        Subject Public Key Info:
            Public Key Algorithm: RSA
                Public-Key: (2048 bit)
                Modulus:
                    a7:9f:c9:49:ca:08:0d:56:5f:6b:2e:70:20:31:78:
                    92:d0:a5:6d:2d:a7:6b:3b:b9:4f:95:07:4d:0c:fc:
                    98:ea:d6:bd:18:08:e9:5c:dc:48:7b:aa:50:27:69:
                    ab:15:d3:ec:fc:52:51:ea:da:51:fe:e7:18:26:d9:
                    23:b8:3d:3c:1c:76:bc:d3:0e:e8:07:33:19:7d:40:
                    9b:9c:cb:7b:6e:56:05:ac:ff:ef:9d:f9:70:5f:70:
                    e0:a7:46:fa:de:ae:b2:1e:76:8b:ec:d3:d3:c5:ca:
                    80:27:72:32:d6:ec:00:e6:5f:3c:fa:be:71:15:2e:
                    a0:cc:71:73:99:7b:bc:ab:bb:3c:f7:1c:4e:c0:03:
                    2b:4f:d6:ca:a2:e8:2c:18:5d:43:c1:20:74:a3:6c:
                    1f:28:d1:70:42:b5:70:bb:d4:14:b7:ac:8c:d7:66:
                    71:87:d9:fc:d7:b3:6a:1f:ca:53:53:ea:cf:71:d2:
                    be:61:cd:37:30:4a:85:68:79:0c:91:ff:ec:b7:a9:
                    5b:ee:b1:5e:94:4e:3a:e5:fd:3d:4b:fb:77:06:65:
                    8c:39:bf:d6:4c:ee:90:fd:50:00:57:2d:5a:8c:bc:
                    2b:f0:68:6f:b9:08:6e:bc:39:61:7d:9a:a6:d2:48:
                    bf:8f:c7:cb:b7:9f:6d:71:84:fb:b0:45:d2:b8:f0:
                    e1
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Authority Key Identifier:
                keyid:08:D7:84:B8:FF:B7:CA:C1:3F:20:48:B6:9E:33:A4:96:5D:3C:D8:1B
            X509v3 Subject Alternative Name:
                DNS:pool-coordinator-apiserver, DNS:pool-coordinator-apiserver.kube-system, DNS:pool-coordinator-apiserver.kube-system.svc, DNS:pool-coordinator-apiserver.kube-system.svc.cluster.local, DNS:pool-coordinator-apiserver, DNS:pool-coordinator-apiserver.kube-system, DNS:pool-coordinator-apiserver.kube-system.svc, DNS:pool-coordinator-apiserver.kube-system.svc.cluster.local
                IP Address:10.101.224.203, IP Address:10.101.224.203

    Signature Algorithm: SHA256-RSA
         59:88:89:f8:ca:9a:07:76:6a:63:16:fb:84:82:3c:9c:64:d9:
         d1:d1:ba:67:b7:61:ab:c4:45:2e:86:28:78:c4:c9:06:0c:9b:
         1a:f0:77:c5:90:f5:43:22:2f:9f:05:57:fe:0a:08:0b:3f:5e:
         ea:14:ec:55:0e:34:5b:2b:2e:96:20:89:80:4b:75:88:16:60:
         8a:3c:2a:9e:fa:a3:21:89:c0:fb:ab:23:a5:f5:f2:b2:02:51:
         b5:d2:55:18:94:c6:cc:93:59:38:a1:65:e6:8a:00:c1:ca:bb:
         06:1c:f5:2e:2c:59:68:5c:21:d6:22:a2:9d:1e:8a:3b:3a:24:
         ee:c0:01:a3:1c:6a:f0:9c:44:77:95:4f:0c:60:d2:3b:c4:33:
         2f:d9:0c:49:64:98:d8:ad:77:20:a5:96:9c:c1:ef:80:bb:d2:
         d8:a3:f5:7c:14:f8:9f:8d:f5:5f:e5:6f:12:2e:ff:49:48:fb:
         7c:8f:bd:b6:ac:a9:40:aa:f3:ba:fe:f5:c2:3c:85:53:46:60:
         52:39:ed:80:71:79:aa:c3:17:0a:73:9c:66:b5:15:49:93:bd:
         4c:e9:6a:db:1b:ab:ae:31:37:9f:ad:f9:13:bc:42:c9:5c:91:
         ef:a0:22:4e:14:0b:5a:07:77:96:65:45:c5:2a:ec:c9:e2:ff:
         d6:88:d3:99
I0204 22:48:08.944735    7577 certificate.go:393] successfully write apiserver cert/key into pool-coordinator-dynamic-certs
I0204 22:48:10.648453    7577 util.go:62] pool-coordinator-etcd service is ready for poolcoordinator_cert_manager
I0204 22:48:10.676573    7577 secret.go:93] key: etcd-server.crt cert: Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 3791363278308242980 (0x349da0d06a05f624)
    Signature Algorithm: SHA256-RSA
        Issuer: CN=openyurt:pool-coordinator
        Validity
            Not Before: Feb 4 22:44:48 2023 UTC
            Not After : Jan 11 22:48:10 2123 UTC
        Subject: O=openyurt:pool-coordinator,CN=openyurt:pool-coordinator:etcd
        Subject Public Key Info:
            Public Key Algorithm: RSA
                Public-Key: (2048 bit)
                Modulus:
                    f1:40:57:51:76:65:74:bc:27:4c:b8:7f:53:0c:b4:
                    d8:ac:1f:e8:f5:b5:cd:24:7a:b4:2f:30:9a:98:d0:
                    16:61:2a:61:57:64:ed:32:b1:1e:9a:36:5c:6a:54:
                    b1:2d:5c:a2:46:77:af:1f:3a:08:78:d2:67:58:2b:
                    bc:ab:6b:6b:06:86:fb:b7:56:0d:b3:ee:00:21:c6:
                    19:79:cc:eb:25:1d:ad:2f:f5:cd:5c:73:c0:01:e6:
                    c7:eb:b5:89:90:a3:ca:83:61:99:d9:47:26:61:2f:
                    23:64:fc:4a:a2:a0:65:21:a4:7d:fd:92:0b:6a:22:
                    6a:f8:17:1a:11:4b:ad:38:b0:55:32:f2:3a:d2:4a:
                    0b:67:db:0a:fb:9a:48:27:aa:b8:b2:56:14:53:eb:
                    8a:71:dc:bc:12:42:f8:41:be:c6:79:50:7c:10:13:
                    80:5a:57:55:cc:7b:e9:07:e8:de:25:51:05:f0:b7:
                    39:ce:e4:00:bf:24:65:f7:bc:25:38:dc:94:2e:ff:
                    a7:23:76:98:fd:97:70:0a:44:a6:1c:50:22:86:38:
                    18:b0:c9:b5:c6:9d:fd:f9:f6:17:e0:ea:6f:34:18:
                    80:8e:5f:bc:c2:28:ac:48:a7:c9:4e:b5:d3:ca:c2:
                    b4:a4:dd:d2:83:82:46:2e:02:a9:cb:68:09:bd:f6:
                    b1
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Authority Key Identifier:
                keyid:08:D7:84:B8:FF:B7:CA:C1:3F:20:48:B6:9E:33:A4:96:5D:3C:D8:1B
            X509v3 Subject Alternative Name:
                DNS:pool-coordinator-etcd, DNS:pool-coordinator-etcd.kube-system, DNS:pool-coordinator-etcd.kube-system.svc, DNS:pool-coordinator-etcd.kube-system.svc.cluster.local, DNS:pool-coordinator-etcd, DNS:pool-coordinator-etcd.kube-system, DNS:pool-coordinator-etcd.kube-system.svc, DNS:pool-coordinator-etcd.kube-system.svc.cluster.local
                IP Address:10.104.163.9, IP Address:10.104.163.9

    Signature Algorithm: SHA256-RSA
         10:0a:29:a3:a8:be:e8:ee:26:3b:de:a4:8f:ef:ae:4c:ab:bd:
         6b:28:ba:bd:e1:51:94:93:44:b3:fd:78:05:12:61:c8:7a:d5:
         4e:85:ec:aa:77:d2:18:1a:5a:c3:f0:8d:48:bd:da:88:55:eb:
         27:fc:21:ed:fb:cb:db:e0:65:41:ee:6c:94:d9:06:5f:d7:26:
         78:21:8b:59:76:de:e5:2f:d4:63:64:a9:26:0b:32:32:6d:8e:
         d6:1a:fd:6a:05:8b:5c:93:06:a6:9e:7e:20:15:70:0d:43:53:
         fa:6d:6e:4e:9c:76:f4:c0:f6:f9:12:3b:53:77:6a:55:b8:50:
         62:78:ad:50:91:ad:36:0b:82:75:81:bd:65:f0:f9:0f:18:c1:
         e5:b4:72:10:e3:e7:ca:0b:6d:90:1a:35:b9:48:22:ab:df:b6:
         63:78:6f:e0:2b:44:17:d8:ec:52:d8:c2:e3:fc:a5:64:7b:ee:
         19:25:db:29:c6:9f:e3:9c:18:5b:cd:1d:60:4b:58:6f:89:53:
         cb:35:38:6c:9c:93:1f:cd:ea:3e:33:8f:f7:e8:a0:9c:17:7d:
         86:37:e4:6e:6b:77:e4:84:0e:a4:47:9f:45:48:de:f7:c2:a4:
         82:f3:57:e5:02:4f:de:9b:76:e9:ac:9d:7f:91:78:04:1b:c9:
         e1:80:7f:4a
I0204 22:48:10.683574    7577 certificate.go:393] successfully write etcd-server cert/key into pool-coordinator-dynamic-certs
I0204 22:48:12.021338    7577 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0204 22:48:12.048432    7577 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-monitoring-kubeconfig
I0204 22:48:13.630659    7577 util.go:62] pool-coordinator-apiserver service is ready for poolcoordinator_cert_manager
I0204 22:48:13.660289    7577 certificate.go:408] successfully write kubeconfig into secret pool-coordinator-static-certs
I0204 22:48:13.660591    7577 certificate_store.go:130] Loading cert/key pair from "/tmp/yurt-controller-manager_poolcoordinator-apiserver-client-current.pem".
I0204 22:48:13.661255    7577 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0204 22:48:13.661652    7577 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-04 21:12:28 +0000 UTC, rotation deadline is 2023-12-29 23:54:13.863540016 +0000 UTC
I0204 22:48:13.661695    7577 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 7873h6m0.201847809s for next certificate rotation
I0204 22:48:18.662105    7577 certificate.go:309] yurt-controller-manager_poolcoordinator certificate signed successfully
I0204 22:48:18.690990    7577 certificate.go:357] successfully write apiserver-kubelet-client cert/key pair into pool-coordinator-static-certs
I0204 22:48:18.691246    7577 certificate_store.go:130] Loading cert/key pair from "/tmp/yurthub-current.pem".
I0204 22:48:18.692040    7577 certificate_manager.go:318] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0204 22:48:18.692151    7577 certificate_manager.go:590] kubernetes.io/kube-apiserver-client: Certificate expiration is 2024-02-04 21:12:33 +0000 UTC, rotation deadline is 2023-12-12 10:54:20.658717316 +0000 UTC
I0204 22:48:18.692192    7577 certificate_manager.go:324] kubernetes.io/kube-apiserver-client: Waiting 7452h6m1.966527681s for next certificate rotation
I0204 22:48:23.692682    7577 certificate.go:309] yurt-controller-manager_poolcoordinator certificate signed successfully
I0204 22:48:23.724260    7577 certificate.go:357] successfully write node-lease-proxy-client cert/key pair into pool-coordinator-yurthub-certs
I0204 22:48:23.747356    7577 certificate.go:393] successfully write ca cert/key into pool-coordinator-static-certs
I0204 22:48:23.762315    7577 certificate.go:393] successfully write ca cert/key into pool-coordinator-yurthub-certs
I0204 22:48:23.959629    7577 certificate.go:438] successfully write key pair into secret pool-coordinator-static-certs

As you can see, both certificates get their service IP duplicated in the SANs (10.101.224.203 for the apiserver cert, 10.104.163.9 for the etcd-server cert), and the etcd-server cert no longer contains 127.0.0.1.
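For illustration, here is a minimal sketch of what the SAN-building step should do; this is a hypothetical helper, not the actual yurt-controller-manager code. It de-duplicates the service IP and keeps 127.0.0.1 on the etcd-server cert, since the local kube-apiserver reaches etcd over the loopback address:

package main

import (
	"fmt"
	"net"
)

// buildSANIPs is a hypothetical helper: it de-duplicates the service IPs and,
// when asked, always includes 127.0.0.1 so the loopback connection to etcd
// can be verified against the server cert.
func buildSANIPs(serviceIPs []net.IP, includeLoopback bool) []net.IP {
	seen := map[string]bool{}
	var out []net.IP
	add := func(ip net.IP) {
		if ip == nil || seen[ip.String()] {
			return
		}
		seen[ip.String()] = true
		out = append(out, ip)
	}
	if includeLoopback {
		add(net.ParseIP("127.0.0.1"))
	}
	for _, ip := range serviceIPs {
		add(ip)
	}
	return out
}

func main() {
	// The service IP from the etcd-server dump above, passed twice on purpose
	// to show that the duplicate collapses to a single SAN entry.
	svc := net.ParseIP("10.104.163.9")
	fmt.Println(buildSANIPs([]net.IP{svc, svc}, true)) // [127.0.0.1 10.104.163.9]
}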

@batthebee
Contributor Author

@luc99hen @Congrool @rambohe-ch I have found the error and fixed it. I'll do a PR later today.

@batthebee
Contributor Author

batthebee commented Feb 5, 2023

There are still some errors in the etcd logs, but I currently see nothing that does not work:

WARNING: 2023/02/05 22:46:14 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-02-05T22:48:15.373Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:51700","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/02/05 22:48:15 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"info","ts":"2023-02-05T22:49:10.417Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":296}
{"level":"info","ts":"2023-02-05T22:49:10.418Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":296,"took":"341.367µs"}
{"level":"warn","ts":"2023-02-05T22:50:27.633Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:49858","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/02/05 22:50:27 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...

@AndyEWang
Contributor

There are still some errors in the etcd logs, but I currently see nothing that does not work:

WARNING: 2023/02/05 22:46:14 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-02-05T22:48:15.373Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:51700","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/02/05 22:48:15 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"info","ts":"2023-02-05T22:49:10.417Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":296}
{"level":"info","ts":"2023-02-05T22:49:10.418Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":296,"took":"341.367µs"}
{"level":"warn","ts":"2023-02-05T22:50:27.633Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:49858","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/02/05 22:50:27 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...

Any progress on this issue?

@rambohe-ch
Member

There are still some errors in the etcd logs, but I currently see nothing that does not work:

WARNING: 2023/02/05 22:46:14 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"warn","ts":"2023-02-05T22:48:15.373Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:51700","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/02/05 22:48:15 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
{"level":"info","ts":"2023-02-05T22:49:10.417Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":296}
{"level":"info","ts":"2023-02-05T22:49:10.418Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":296,"took":"341.367µs"}
{"level":"warn","ts":"2023-02-05T22:50:27.633Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:49858","server-name":"","error":"tls: failed to verify client certificate: x509: certificate specifies an incompatible key usage"}
WARNING: 2023/02/05 22:50:27 [core] grpc: addrConn.createTransport failed to connect to {0.0.0.0:12379 0.0.0.0:12379 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...

Any progress on this issue?

@batthebee It looks like #1187 has solved the bug in this issue, do you have any comments?

and we are waiting on some other improvements to pool-coordinator from @Congrool and @LaurenceLiZhixin, and we will release OpenYurt v1.2.1 as soon as possible.

@AndyEWang
Contributor

@batthebee @rambohe-ch
Just FYI, I added {x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth} to the etcd-server cert, and the pool-coordinator etcd finally stops complaining about "bad certificate".
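A minimal sketch of that change (illustrative only: the cert is self-signed here instead of being signed by the pool-coordinator CA, and the subject/SAN values just mirror the cert dump above) is to issue the etcd-server certificate with both server and client extended key usages:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject: pkix.Name{
			Organization: []string{"openyurt:pool-coordinator"},
			CommonName:   "openyurt:pool-coordinator:etcd",
		},
		NotBefore: time.Now(),
		NotAfter:  time.Now().AddDate(100, 0, 0),
		KeyUsage:  x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		// The relevant change: server AND client auth on the etcd-server cert.
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("10.104.163.9")},
	}
	// Self-signed for illustration; the real cert is signed by the pool-coordinator CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("ExtKeyUsage:", cert.ExtKeyUsage) // [1 2] == ServerAuth, ClientAuth
}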

@luc99hen
Member

@batthebee @rambohe-ch
Just FYI, I added {x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth} to the etcd-server cert, and the pool-coordinator etcd finally stops complaining about "bad certificate".

Would you like to create a PR to fix this?

@rambohe-ch
Member

@batthebee @rambohe-ch Just FYI, I added {x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth} to the etcd-server cert, and the pool-coordinator etcd finally stops complaining about "bad certificate".

@AndyEWang Since it is a server certificate, why do you need to add x509.ExtKeyUsageClientAuth to its usage?

@AndyEWang
Contributor

I just got the solution from this comment and am not sure whether the latest etcd has improved this case.
etcd-io/etcd#9785 (comment)

@batthebee
Contributor Author

batthebee commented Feb 14, 2023

@batthebee It looks like #1187 has solved the bug in this issue, do you have any comments?

and we are waiting on some other improvements to pool-coordinator from @Congrool and @LaurenceLiZhixin, and we will release OpenYurt v1.2.1 as soon as possible.

No, everything seems to be running smoothly at the moment. I am looking forward to 1.2.1 :-) From my side, we can close the issue.

@Congrool
Member

Congrool commented Feb 15, 2023

and we are waiting some other improvements of pool-coordinator from @Congrool and @LaurenceLiZhixin , and we will release OpenYurt v1.2.1 as soon as possible.

@rambohe-ch From my point of view, we've solved the known bugs. Other issues, such as nodepool logs/exec support and CRD cache support, are new features and are better left for v1.3. So I think v1.2.1 is ready for release.

@LaurenceLiZhixin What do you think?

@rambohe-ch
Member

@batthebee Thanks for your contribution, OpenYurt v1.2.1 has been released. You can give it a try and feel free to open issues if you come across any problems.

@cuiHL

cuiHL commented Apr 30, 2024

@batthebee It looks like #1187 has solved the bug in this issue, do you have any comments?
and we are waiting on some other improvements to pool-coordinator from @Congrool and @LaurenceLiZhixin, and we will release OpenYurt v1.2.1 as soon as possible.

No, everything seems to be running smoothly at the moment. I am looking forward to 1.2.1 :-) From my side, we can close the issue.

I had the same problem in version 1.4.
(image attachment)
