
master node not ready Coredns Pending. #1795

Closed
Inv0k-er opened this issue Sep 20, 2019 · 16 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@Inv0k-er

Inv0k-er commented Sep 20, 2019

Hello. I installed Kubernetes, but when I ran kubectl get nodes I saw that the master node is NotReady:

NAME STATUS ROLES AGE VERSION
master NotReady master 56m v1.16.0
slave03 NotReady <none> 52m v1.16.0
slave40 NotReady <none> 51m v1.16.0

When I listed the pods in all namespaces, I saw:

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-4qvm9 0/1 Pending 0 31m
kube-system coredns-5644d7b6d9-xljsx 0/1 Pending 0 32m
kube-system etcd-pr02 1/1 Running 0 56m
kube-system kube-apiserver-pr02 1/1 Running 0 56m
kube-system kube-controller-manager-pr02 1/1 Running 0 56m
kube-system kube-flannel-ds-amd64-9dg46 1/1 Running 0 46m
kube-system kube-flannel-ds-amd64-hrpcw 1/1 Running 0 46m
kube-system kube-flannel-ds-amd64-zlfwl 1/1 Running 0 46m
kube-system kube-proxy-2jrhf 1/1 Running 0 53m
kube-system kube-proxy-4dknd 1/1 Running 0 52m
kube-system kube-proxy-qmvmp 1/1 Running 0 57m
kube-system kube-scheduler-pr02 1/1 Running 0 56m

kubectl describe pod/coredns-5644d7b6d9-4qvm9 -n kube-system
Name: coredns-5644d7b6d9-4qvm9
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node:
Labels: k8s-app=kube-dns
pod-template-hash=5644d7b6d9
Annotations:
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/coredns-5644d7b6d9
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.6.2
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-rrfvs (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-rrfvs:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-rrfvs
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.

kubectl describe pod/coredns-5644d7b6d9-xljsx -n kube-system
Name: coredns-5644d7b6d9-xljsx
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node:
Labels: k8s-app=kube-dns
pod-template-hash=5644d7b6d9
Annotations:
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/coredns-5644d7b6d9
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.6.2
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-rrfvs (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-rrfvs:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-rrfvs
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.

kubectl describe nodes master
Name: master
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=master
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"ee:a0:65:2f:dd:ea"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 178.88.161.57
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 20 Sep 2019 23:14:10 +0600
Taints: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Sat, 21 Sep 2019 00:13:38 +0600 Fri, 20 Sep 2019 23:34:16 +0600 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 21 Sep 2019 00:13:38 +0600 Fri, 20 Sep 2019 23:34:16 +0600 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 21 Sep 2019 00:13:38 +0600 Fri, 20 Sep 2019 23:34:16 +0600 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sat, 21 Sep 2019 00:13:38 +0600 Fri, 20 Sep 2019 23:34:16 +0600 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 10.2.10.7
Hostname: master
Capacity:
cpu: 4
ephemeral-storage: 27245572Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7972976Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 25109519114
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7870576Ki
pods: 110
System Info:
Machine ID: 5259f055333f4db7868d10a708ef7900
System UUID: 8E1512E4-9C1A-40CF-8E5E-80AD7177FAC2
Boot ID: a223fed7-71d5-4905-9760-260101ba5052
Kernel Version: 3.10.0-957.1.3.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.16.0
Kube-Proxy Version: v1.16.0
PodCIDR: 172.17.0.0/24
PodCIDRs: 172.17.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


kube-system etcd-master 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59m
kube-system kube-apiserver-master 250m (6%) 0 (0%) 0 (0%) 0 (0%) 59m
kube-system kube-controller-manager-master 200m (5%) 0 (0%) 0 (0%) 0 (0%) 58m
kube-system kube-flannel-ds-amd64-hrpcw 100m (2%) 100m (2%) 50Mi (0%) 50Mi (0%) 49m
kube-system kube-proxy-qmvmp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59m
kube-system kube-scheduler-master 100m (2%) 0 (0%) 0 (0%) 0 (0%) 58m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 650m (16%) 100m (2%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message


Normal Starting 60m kubelet, master Starting kubelet.
Normal NodeAllocatableEnforced 60m kubelet, master Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 60m (x8 over 60m) kubelet, master Node pr02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 60m (x8 over 60m) kubelet, master Node pr02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 60m (x7 over 60m) kubelet, master Node pr02 status is now: NodeHasSufficientPID
Normal Starting 59m kube-proxy, master Starting kube-proxy.
Normal Starting 40m kubelet, master Starting kubelet.
Normal NodeHasSufficientMemory 40m (x8 over 40m) kubelet, master Node pr02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 40m (x8 over 40m) kubelet, master Node pr02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 40m (x7 over 40m) kubelet, master Node pr02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 40m kubelet, master Updated Node Allocatable limit across pods
Normal Starting 40m kube-proxy, master Starting kube-proxy.
Normal Starting 19m kubelet, master Starting kubelet.
Normal NodeHasSufficientMemory 19m kubelet, master Node pr02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 19m kubelet, master Node pr02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 19m kubelet, master Node pr02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 19m kubelet, master Updated Node Allocatable limit across pods
Normal Starting 17m kubelet, master Starting kubelet.
Normal NodeHasSufficientMemory 17m kubelet, master Node pr02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 17m kubelet, master Node pr02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 17m kubelet, master Node pr02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 17m kubelet, master Updated Node Allocatable limit across pods

kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    virtual server
  • OS (e.g. from /etc/os-release):
    Centos 7.3 x86_64
  • Kernel (e.g. uname -a):
    Linux pr02 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Others:

What happened?

CoreDNS pods stuck in Pending
Master node NotReady


@trinvh

trinvh commented Sep 20, 2019

Same issue. Running journalctl -fu kubelet shows:

Sep 20 11:44:26 ip-10-0-23-4 kubelet[1577]: W0920 11:44:26.620886    1577 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Sep 20 11:44:26 ip-10-0-23-4 kubelet[1577]: E0920 11:44:26.998299    1577 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
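
That warning points at an empty /etc/cni/net.d, i.e. no CNI config has been written yet. A quick way to confirm on the affected node (illustrative commands, not from this thread):

# Any CNI config written by the network add-on should show up here
ls -l /etc/cni/net.d/
# The plugin binaries from the kubernetes-cni package should be here
ls /opt/cni/bin/
# Watch the kubelet pick the config up once a file appears
journalctl -fu kubelet | grep -i cni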

@pytimer

pytimer commented Sep 21, 2019

@trinvh You should check your node labels. I think your CoreDNS nodeSelector beta.kubernetes.io/os=linux may be wrong; you should update beta.kubernetes.io/os to kubernetes.io/os, because this label has been deprecated.

Refer to https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-os-deprecated
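
If the nodeSelector really is the culprit, one way to change it is a JSON patch against the CoreDNS Deployment (a minimal sketch; coredns is the Deployment name kubeadm creates by default):

# Show the current nodeSelector
kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.nodeSelector}'

# Swap the deprecated label for the current one
kubectl -n kube-system patch deployment coredns --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/nodeSelector","value":{"kubernetes.io/os":"linux"}}]'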

neolit123 added the kind/support label on Sep 21, 2019
@neolit123
Member

neolit123 commented Sep 21, 2019

@Inv0k-er @trinvh

  1. did you have the kubernetes-cni DEB/RPM package installed?
    it should be installed as a dependency of the kubeadm DEB/RPM package.

  2. have you tried another CNI plugin instead of flannel?

@Inv0k-er
Author

Same issue. Running journalctl -fu kubelet shows:

Sep 20 11:44:26 ip-10-0-23-4 kubelet[1577]: W0920 11:44:26.620886    1577 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Sep 20 11:44:26 ip-10-0-23-4 kubelet[1577]: E0920 11:44:26.998299    1577 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

journalctl -f says:
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

and
Error validating CNI config &{cbr0 false [0xc00097f780 0xc00097f820]

@Inv0k-er
Author

@trinvh You should check your node labels. I think your CoreDNS nodeSelector beta.kubernetes.io/os=linux may be wrong; you should update beta.kubernetes.io/os to kubernetes.io/os, because this label has been deprecated.

Refer to https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-os-deprecated

Thank you for your answer; I will try your recipe today.
But what do you think: if I reinstall the OS with CentOS 7.7 (right now I have 7.3), will that help me or not?

@Inv0k-er
Author

@Inv0k-er @trinvh

1. did you have the `kubernetes-cni` DEB/RPM package installed?
   it should be installed as a dependency of the `kubeadm` DEB/RPM package.

2. have you tried another CNI plugin instead of flannel?

Thank you for your answer.

  1. Yes, I have the CNI package; it is kubernetes-cni-0.7.5-0.x86_64.
  2. No, I have not tried another plugin, because this is my first step with Kubernetes and I don't know how to do it. Could you send me a link, please?

@pytimer

pytimer commented Sep 23, 2019

@Inv0k-er The node pod CIDR is 172.17.0.0/24. What is the network in your flannel ConfigMap? Is it also 172.17.0.0/24?
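
For reference, one way to compare the two (a sketch; kube-flannel-cfg is the ConfigMap name used by the stock flannel manifest):

# What flannel thinks the pod network is
kubectl -n kube-system get configmap kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'

# What the nodes were actually allocated
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'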

@eddiesimeon

It seems 1.16.0 now validates the CNI version in the CNI config.
The following PR fixes the issue -> flannel-io/flannel#1181

Quick fix:
update /etc/cni/net.d/10-flannel.conflist to include the following, just as it is in the PR:
cniVersion: "0.2.0"
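
For illustration, a patched /etc/cni/net.d/10-flannel.conflist would look roughly like this (a sketch based on the stock flannel config; only the cniVersion field is new):

{
  "name": "cbr0",
  "cniVersion": "0.2.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

After editing the file, restarting the kubelet (systemctl restart kubelet) should make it re-read the CNI config.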

@Inv0k-er
Author

Inv0k-er commented Sep 23, 2019

@Inv0k-er The node pod CIDR is 172.17.0.0/24. What is the network in your flannel ConfigMap? Is it also 172.17.0.0/24?

No, I use 172.17.0.1/16.
Right now I want to reinstall the OS, because I have 7.3 and want to update to 7.7. Maybe that will help.

But the server has 2 Ethernet adapters:
First ens - 192.168.9.130
Second ens - 10.2.10.7
Default gateway - 192.168.9.1

@Inv0k-er
Author

cniVersion: "0.2.0"

I tried it. It did not help me.

@pytimer

pytimer commented Sep 23, 2019

@Inv0k-er Have a look at flannel-io/flannel#1178; maybe it can help you.

@neolit123
Member

This is not a kubeadm bug, so I will close the issue.

If flannel is not working for you, try the steps here for another CNI plugin:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

/close

@k8s-ci-robot
Contributor

@neolit123: Closing this issue.

In response to this:

This is not a kubeadm bug, so I will close the issue.

If flannel is not working for you, try the steps here for another CNI plugin:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@neolit123
Member

The flannel repository needed a fix.
The kubeadm guide for installing flannel was just updated; see:
https://github.com/kubernetes/website/pull/16575/files
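
In practice the updated guide comes down to re-applying the fixed flannel manifest, roughly (illustrative; see the linked PR for the exact pinned manifest URL):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml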

@persunde

persunde commented Apr 7, 2020

For others who might have the same issue:
If you start up Kubernetes and the CoreDNS pods are not starting and stay stuck in Pending, you probably need to add a pod network add-on (CNI plugin).
All I needed to do was follow step 3 below, and the cluster became Ready.

  1. Run sudo kubeadm init --config kubeadm-config.yaml --upload-certs on this node.
  2. Write the output join commands that are returned to a text file for later use.
  3. Apply the CNI plugin of your choice. The given example is for Weave Net:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Source: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#set-up-the-first-control-plane-node
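
A quick sanity check after applying the network add-on (illustrative):

# CoreDNS should leave Pending and become Running once the add-on is up
kubectl -n kube-system get pods -l k8s-app=kube-dns
# The nodes should flip from NotReady to Ready
kubectl get nodes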

@ajithgudem

I was scratching my head over this issue for quite some time and figured out it was an upstream flannel change. Your step 3 worked like a charm.
