kube 1.25.2 init occur unable to create ConfigMap error #2767

Closed
xiedeacc opened this issue Oct 10, 2022 · 15 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@xiedeacc

kubeadm init --v=5
Log:

I1010 06:01:48.121143    6913 initconfiguration.go:116] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I1010 06:01:48.121338    6913 interface.go:432] Looking for default routes with IPv4 addresses
I1010 06:01:48.121346    6913 interface.go:437] Default route transits interface "enp1s0"
I1010 06:01:48.121472    6913 interface.go:209] Interface enp1s0 is up
I1010 06:01:48.121549    6913 interface.go:257] Interface "enp1s0" has 4 addresses :[192.168.101.24/24 2408:8256:3083:6cd5:c028:be3:9e02:b/128 2408:8256:3083:6cd5:5054:ff:fece:fdda/64 fe80::5054:ff:fece:fdda/64].
I1010 06:01:48.121564    6913 interface.go:224] Checking addr  192.168.101.24/24.
I1010 06:01:48.121637    6913 interface.go:231] IP found 192.168.101.24
I1010 06:01:48.121646    6913 interface.go:263] Found valid IPv4 address 192.168.101.24 for interface "enp1s0".
I1010 06:01:48.121688    6913 interface.go:443] Found active IP 192.168.101.24
I1010 06:01:48.121757    6913 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I1010 06:01:48.124722    6913 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
[init] Using Kubernetes version: v1.25.2
[preflight] Running pre-flight checks
I1010 06:01:48.688851    6913 checks.go:568] validating Kubernetes and kubeadm version
I1010 06:01:48.688962    6913 checks.go:168] validating if the firewall is enabled and active
I1010 06:01:48.699036    6913 checks.go:203] validating availability of port 6443
I1010 06:01:48.699292    6913 checks.go:203] validating availability of port 10259
I1010 06:01:48.699421    6913 checks.go:203] validating availability of port 10257
I1010 06:01:48.699538    6913 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1010 06:01:48.699624    6913 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1010 06:01:48.699705    6913 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1010 06:01:48.699778    6913 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1010 06:01:48.699852    6913 checks.go:430] validating if the connectivity type is via proxy or direct
I1010 06:01:48.699938    6913 checks.go:469] validating http connectivity to first IP address in the CIDR
I1010 06:01:48.700022    6913 checks.go:469] validating http connectivity to first IP address in the CIDR
I1010 06:01:48.700034    6913 checks.go:104] validating the container runtime
I1010 06:01:48.726814    6913 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1010 06:01:48.726876    6913 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1010 06:01:48.726907    6913 checks.go:644] validating whether swap is enabled or not
I1010 06:01:48.726941    6913 checks.go:370] validating the presence of executable crictl
I1010 06:01:48.726973    6913 checks.go:370] validating the presence of executable conntrack
I1010 06:01:48.727020    6913 checks.go:370] validating the presence of executable ip
I1010 06:01:48.727046    6913 checks.go:370] validating the presence of executable iptables
I1010 06:01:48.727072    6913 checks.go:370] validating the presence of executable mount
I1010 06:01:48.727096    6913 checks.go:370] validating the presence of executable nsenter
I1010 06:01:48.727118    6913 checks.go:370] validating the presence of executable ebtables
I1010 06:01:48.727141    6913 checks.go:370] validating the presence of executable ethtool
I1010 06:01:48.727163    6913 checks.go:370] validating the presence of executable socat
I1010 06:01:48.727193    6913 checks.go:370] validating the presence of executable tc
I1010 06:01:48.727216    6913 checks.go:370] validating the presence of executable touch
I1010 06:01:48.727239    6913 checks.go:516] running all checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
I1010 06:01:48.742605    6913 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I1010 06:01:48.742629    6913 checks.go:610] validating kubelet version
I1010 06:01:48.802895    6913 checks.go:130] validating if the "kubelet" service is enabled and active
I1010 06:01:48.812587    6913 checks.go:203] validating availability of port 10250
I1010 06:01:48.812636    6913 checks.go:203] validating availability of port 2379
I1010 06:01:48.812654    6913 checks.go:203] validating availability of port 2380
I1010 06:01:48.812671    6913 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1010 06:01:48.812768    6913 checks.go:832] using image pull policy: IfNotPresent
I1010 06:01:48.828506    6913 checks.go:841] image exists: registry.k8s.io/kube-apiserver:v1.25.2
I1010 06:01:48.841280    6913 checks.go:841] image exists: registry.k8s.io/kube-controller-manager:v1.25.2
I1010 06:01:48.853644    6913 checks.go:841] image exists: registry.k8s.io/kube-scheduler:v1.25.2
I1010 06:01:48.867678    6913 checks.go:841] image exists: registry.k8s.io/kube-proxy:v1.25.2
I1010 06:01:48.878844    6913 checks.go:841] image exists: registry.k8s.io/pause:3.8
I1010 06:01:48.890840    6913 checks.go:841] image exists: registry.k8s.io/etcd:3.5.4-0
I1010 06:01:48.902110    6913 checks.go:841] image exists: registry.k8s.io/coredns/coredns:v1.9.3
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1010 06:01:48.902161    6913 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1010 06:01:49.022164    6913 certs.go:522] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local tiger1] and IPs [10.96.0.1 192.168.101.24]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1010 06:01:49.200931    6913 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1010 06:01:49.249346    6913 certs.go:522] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1010 06:01:49.366162    6913 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1010 06:01:49.525710    6913 certs.go:522] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost tiger1] and IPs [192.168.101.24 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost tiger1] and IPs [192.168.101.24 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1010 06:01:49.890875    6913 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1010 06:01:49.962365    6913 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1010 06:01:50.082573    6913 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1010 06:01:50.251566    6913 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1010 06:01:50.307282    6913 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I1010 06:01:50.450717    6913 kubelet.go:66] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1010 06:01:50.758460    6913 manifests.go:99] [control-plane] getting StaticPodSpecs
I1010 06:01:50.758592    6913 certs.go:522] validating certificate period for CA certificate
I1010 06:01:50.758633    6913 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1010 06:01:50.758639    6913 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I1010 06:01:50.758642    6913 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1010 06:01:50.758645    6913 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I1010 06:01:50.758650    6913 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I1010 06:01:50.760293    6913 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1010 06:01:50.760305    6913 manifests.go:99] [control-plane] getting StaticPodSpecs
I1010 06:01:50.760412    6913 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1010 06:01:50.760420    6913 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I1010 06:01:50.760423    6913 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1010 06:01:50.760426    6913 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1010 06:01:50.760431    6913 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1010 06:01:50.760434    6913 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I1010 06:01:50.760439    6913 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I1010 06:01:50.760788    6913 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1010 06:01:50.760799    6913 manifests.go:99] [control-plane] getting StaticPodSpecs
I1010 06:01:50.760892    6913 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1010 06:01:50.761100    6913 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1010 06:01:50.761393    6913 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1010 06:01:50.761402    6913 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.515357 seconds
I1010 06:02:05.277743    6913 uploadconfig.go:110] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1010 06:02:05.297328    6913 uploadconfig.go:124] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1010 06:02:05.505486    6913 uploadconfig.go:129] [upload-config] Preserving the CRISocket information for the control-plane node
I1010 06:02:05.505519    6913 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "tiger1" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node tiger1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node tiger1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 1gg6tz.hovkboe4jpmzskmq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1010 06:02:06.856216    6913 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I1010 06:02:06.856739    6913 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I1010 06:02:06.857002    6913 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I1010 06:02:06.864011    6913 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1010 06:02:07.031242    6913 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1010 06:02:07.032099    6913 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
rpc error: code = Unknown desc = malformed header: missing HTTP content-type
unable to create ConfigMap
k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient.CreateOrUpdateConfigMap
        cmd/kubeadm/app/util/apiclient/idempotency.go:48
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon
        cmd/kubeadm/app/phases/addons/dns/dns.go:188
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.coreDNSAddon
        cmd/kubeadm/app/phases/addons/dns/dns.go:159
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.EnsureDNSAddon
        cmd/kubeadm/app/phases/addons/dns/dns.go:102
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runCoreDNSAddon
        cmd/kubeadm/app/cmd/phases/init/addons.go:112
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:154
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1594
error execution phase addon/coredns
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:154
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1594
@neolit123
Copy link
Member

neolit123 commented Oct 10, 2022

this has been reported before as a Linux host issue where HTTPS traffic is not allowed (I think).

it's not a kubeadm bug per se.

try asking on the support forums, or search this issue tracker to find the old ticket.

/support
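For anyone wanting to test the blocked-HTTPS/proxy theory above, a minimal sketch of two read-only checks (the IP and port are taken from the log in this thread; substitute your own apiserver address):

```shell
# 1. A proxy configured via environment variables can sit between kubeadm
#    and the apiserver and mangle responses:
env | grep -i '_proxy' || echo "no proxy variables set"

# 2. Probe the apiserver's TLS endpoint directly; any HTTP status code in the
#    output (even 401/403) shows HTTPS traffic reaches it:
curl -sk --max-time 3 -o /dev/null -w '%{http_code}\n' \
  https://192.168.101.24:6443/healthz \
  || echo "could not reach the apiserver over HTTPS"
```

If the proxy variables are set, exempting the apiserver IP via `no_proxy` is a common fix for this class of failure.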

@github-actions

Hello, @xiedeacc 🤖 👋

You seem to have troubles using Kubernetes and kubeadm.
Note that our issue trackers should not be used for providing support to users.
There are special channels for that purpose.

Please see:

@github-actions github-actions bot added the kind/support Categorizes issue or PR as a support question. label Oct 10, 2022
@xiedeacc
Author

@neolit123 it's not the problem you mentioned; in fact, traffic works fine. More detailed logs show:

I1010 06:51:47.781177   23019 round_trippers.go:473]     User-Agent: kubeadm/v1.25.2 (linux/amd64) kubernetes/5835544
I1010 06:51:47.783370   23019 round_trippers.go:574] Response Status: 201 Created in 2 milliseconds
I1010 06:51:47.783793   23019 round_trippers.go:463] POST https://192.168.101.24:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s
I1010 06:51:47.783875   23019 round_trippers.go:469] Request Headers:
I1010 06:51:47.783937   23019 round_trippers.go:473]     User-Agent: kubeadm/v1.25.2 (linux/amd64) kubernetes/5835544
I1010 06:51:47.783994   23019 round_trippers.go:473]     Content-Type: application/json
I1010 06:51:47.784054   23019 round_trippers.go:473]     Accept: application/json, */*
I1010 06:51:47.856039   23019 round_trippers.go:574] Response Status: 201 Created in 71 milliseconds
I1010 06:51:47.864478   23019 round_trippers.go:463] POST https://192.168.101.24:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s
I1010 06:51:47.864585   23019 round_trippers.go:469] Request Headers:
I1010 06:51:47.864673   23019 round_trippers.go:473]     Accept: application/json, */*
I1010 06:51:47.864740   23019 round_trippers.go:473]     Content-Type: application/json
I1010 06:51:47.864803   23019 round_trippers.go:473]     User-Agent: kubeadm/v1.25.2 (linux/amd64) kubernetes/5835544
I1010 06:51:47.957180   23019 round_trippers.go:574] Response Status: 201 Created in 92 milliseconds
I1010 06:51:47.957931   23019 round_trippers.go:463] POST https://192.168.101.24:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s
I1010 06:51:47.958026   23019 round_trippers.go:469] Request Headers:
I1010 06:51:47.958091   23019 round_trippers.go:473]     Accept: application/json, */*
I1010 06:51:47.958152   23019 round_trippers.go:473]     Content-Type: application/json
I1010 06:51:47.958207   23019 round_trippers.go:473]     User-Agent: kubeadm/v1.25.2 (linux/amd64) kubernetes/5835544
I1010 06:51:48.032745   23019 round_trippers.go:574] Response Status: 500 Internal Server Error in 74 milliseconds
rpc error: code = Unknown desc = malformed header: missing HTTP content-type
unable to create serviceaccount
k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient.CreateOrUpdateServiceAccount
        cmd/kubeadm/app/util/apiclient/idempotency.go:141
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon
        cmd/kubeadm/app/phases/addons/dns/dns.go:235
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.coreDNSAddon
        cmd/kubeadm/app/phases/addons/dns/dns.go:159
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.EnsureDNSAddon
        cmd/kubeadm/app/phases/addons/dns/dns.go:102
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runCoreDNSAddon
        cmd/kubeadm/app/cmd/phases/init/addons.go:112
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:154
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1594
error execution phase addon/coredns
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:154
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1594

@xiedeacc
Copy link
Author

so this issue should be reopened @neolit123

@neolit123
Member

it's not a kubeadm bug. it can also be a problem with a load balancer, if you have one in front of the apiserver.

k8s constructs a normal Go client for connections, so any Go app would fail on this networking setup, not only kubeadm or kubectl.

@xiedeacc
Author

I don't have a load balancer

@xiedeacc
Author

from the log, you can see that some requests do get responses

@xiedeacc
Author

in fact I run init in a KVM VM, and the apiserver uses the VM's own IP address, so I really don't think it's the network.

@neolit123
Member

see #2701 (comment)

@xiedeacc
Author

ok, I destroyed this VM and recreated a new one

@xiedeacc
Author

kubeadm init succeeds on a physical machine but still fails on a KVM instance. I guess it may be a bridge config problem on the host OS; I asked about this at https://serverfault.com/questions/1112707/kvm-instance-network-sometimes-fail-when-use-bridge-how-to-fix-this
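A hedged aside on the bridged-KVM theory: the bridge-netfilter prerequisites that kubeadm's preflight validated inside the guest (the checks.go:329 lines in the log) can also matter on the host carrying the bridge. A read-only sketch to inspect them, using standard Linux tooling:

```shell
# Is the br_netfilter module loaded? Without it, bridged traffic bypasses iptables.
lsmod | grep -w br_netfilter || echo "br_netfilter module not loaded"

# The sysctls kubeadm's preflight checks in the guest:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward 2>/dev/null \
  || echo "bridge sysctls not available (is br_netfilter loaded?)"
```

If the module is missing on the host, `modprobe br_netfilter` plus persisting it in `/etc/modules-load.d/` is the usual remedy.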

@ydp

ydp commented Dec 9, 2022

Hi, I hit a similar problem where the kubelet got connection-refused errors during kubeadm init. It turned out my containerd config was incorrect; I found the resolution here: https://stackoverflow.com/questions/70849989/kube-apiserver-docker-shutting-down-got-signal-terminated/74695838#74695838
my config before

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

and after

version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

hope this helps someone like me
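A runnable sketch of the edit described above, exercised on a throwaway sample file (on a real node the file is /etc/containerd/config.toml, editing it needs root, and containerd must be restarted afterwards):

```shell
# Stand-in for /etc/containerd/config.toml so the steps can be tried safely.
CONFIG=./config.toml.sample

# Minimal config resembling the "after" snippet in this thread, but with the
# broken setting still in place.
cat > "$CONFIG" <<'EOF'
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
EOF

# Switch containerd to the systemd cgroup driver, matching the kubelet
# (the log above shows kubeadm defaulting cgroupDriver to "systemd").
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONFIG"
grep 'SystemdCgroup' "$CONFIG"   # now shows SystemdCgroup = true

# On the real node, the follow-up would be:
#   systemctl restart containerd && kubeadm reset -f && kubeadm init ...
```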

@drriguz

drriguz commented Feb 24, 2023

Today I got the same issue; it turned out that I forgot to set SystemdCgroup = true in /etc/containerd/config.toml.
Also remember to restart the containerd service after fixing it, then reset and restart the installation:

sudo systemctl restart containerd
sudo kubeadm reset
kubeadm init...

Hope it helps.

@trueembark

trueembark commented Jul 6, 2023


I was installing 1.27.3 on Ubuntu 22.04 and was getting frustrated. After spending 5 days making gradual progress by eliminating errors one by one, @ydp's containerd fix above was the last hurdle to clear; it resolved my issue. Thanks a lot.
