Master node not ready, CoreDNS Pending #1795
Same issue. Ran:
@trinvh you should check your node labels; I think maybe your coredns
journalctl -f says:
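For anyone following along, the label check suggested above and the kubelet log inspection can be done with standard commands. A minimal sketch, assuming a systemd-managed kubelet (the unit name may differ on your distro):

```bash
# List every node with its labels (e.g. kubernetes.io/os=linux, role labels)
kubectl get nodes --show-labels

# Follow the kubelet logs on the affected node (assumes kubelet runs as a systemd unit)
journalctl -u kubelet -f
```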
Thank you for your answer; today I will try your recipe.
Thank you for your answer.
@Inv0k-er The node pod CIDR is
Seems 1.16.0 now validates the CNI version in the CNI config. Quick fix:
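The snippet behind that "quick fix" is elided above; what the flannel fix amounted to was declaring an explicit cniVersion in the CNI config. A minimal sketch of patching it directly on a node — the file name and contents follow the stock kube-flannel.yml ConfigMap, and flannel may rewrite this file, so the durable fix is editing the manifest itself:

```bash
# Kubernetes 1.16 bundles a CNI loader that rejects configs without a version,
# so the flannel conflist needs an explicit "cniVersion" field.
cat <<'EOF' > /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.2.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
EOF

# Restart the kubelet so it re-reads /etc/cni/net.d
systemctl restart kubelet
```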
No. I use 172.17.0.1/16, but the server has two Ethernet adapters.
I tried it. It did not help me.
@Inv0k-er You can see this issue: flannel-io/flannel#1178. Maybe it can help you.
This is not a kubeadm bug, so I will close the issue. If you use another CNI plugin instead of flannel, try the equivalent steps here for that plugin: /close
@neolit123: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The flannel repository needed a fix.
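Concretely, re-applying a flannel manifest that already carries the cniVersion field picks up that fix. A sketch using the URL the flannel project published at the time of this thread (verify against the current repo before running):

```bash
# Re-apply the fixed flannel manifest so its ConfigMap carries "cniVersion"
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```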
For others who might have the same issue:
I was scratching my head over this issue for quite some time and figured it was an upstream flannel change; your step 3 worked like a charm.
Hello. I installed Kubernetes, but when I ran kubectl get nodes I saw the master node is not ready:
NAME STATUS ROLES AGE VERSION
master NotReady master 56m v1.16.0
slave03 NotReady <none> 52m v1.16.0
slave40 NotReady <none> 51m v1.16.0
When I listed the pods in all namespaces I saw:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-4qvm9 0/1 Pending 0 31m
kube-system coredns-5644d7b6d9-xljsx 0/1 Pending 0 32m
kube-system etcd-pr02 1/1 Running 0 56m
kube-system kube-apiserver-pr02 1/1 Running 0 56m
kube-system kube-controller-manager-pr02 1/1 Running 0 56m
kube-system kube-flannel-ds-amd64-9dg46 1/1 Running 0 46m
kube-system kube-flannel-ds-amd64-hrpcw 1/1 Running 0 46m
kube-system kube-flannel-ds-amd64-zlfwl 1/1 Running 0 46m
kube-system kube-proxy-2jrhf 1/1 Running 0 53m
kube-system kube-proxy-4dknd 1/1 Running 0 52m
kube-system kube-proxy-qmvmp 1/1 Running 0 57m
kube-system kube-scheduler-pr02 1/1 Running 0 56m
kubectl describe pod/coredns-5644d7b6d9-4qvm9 -n kube-system
Name: coredns-5644d7b6d9-4qvm9
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=5644d7b6d9
Annotations:
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/coredns-5644d7b6d9
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.6.2
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-rrfvs (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-rrfvs:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-rrfvs
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
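The FailedScheduling message traces back to node taints: until the network plugin reports ready, every node keeps a node.kubernetes.io/not-ready:NoSchedule taint, which CoreDNS does not tolerate. A quick way to inspect the taints (a sketch; the grep just filters the describe output):

```bash
# List each node's name and taints; expect node.kubernetes.io/not-ready
# on every node until the CNI comes up
kubectl describe nodes | grep -E '^(Name|Taints):'
```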
kubectl describe pod/coredns-5644d7b6d9-xljsx -n kube-system
Name: coredns-5644d7b6d9-xljsx
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=5644d7b6d9
Annotations:
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/coredns-5644d7b6d9
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.6.2
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-rrfvs (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-rrfvs:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-rrfvs
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
kubectl describe nodes master
Name: master
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=master
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"ee:a0:65:2f:dd:ea"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 178.88.161.57
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 20 Sep 2019 23:14:10 +0600
Taints: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Sat, 21 Sep 2019 00:13:38 +0600 Fri, 20 Sep 2019 23:34:16 +0600 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 21 Sep 2019 00:13:38 +0600 Fri, 20 Sep 2019 23:34:16 +0600 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 21 Sep 2019 00:13:38 +0600 Fri, 20 Sep 2019 23:34:16 +0600 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sat, 21 Sep 2019 00:13:38 +0600 Fri, 20 Sep 2019 23:34:16 +0600 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 10.2.10.7
Hostname: master
Capacity:
cpu: 4
ephemeral-storage: 27245572Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7972976Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 25109519114
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7870576Ki
pods: 110
System Info:
Machine ID: 5259f055333f4db7868d10a708ef7900
System UUID: 8E1512E4-9C1A-40CF-8E5E-80AD7177FAC2
Boot ID: a223fed7-71d5-4905-9760-260101ba5052
Kernel Version: 3.10.0-957.1.3.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.16.0
Kube-Proxy Version: v1.16.0
PodCIDR: 172.17.0.0/24
PodCIDRs: 172.17.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
kube-system etcd-master 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59m
kube-system kube-apiserver-master 250m (6%) 0 (0%) 0 (0%) 0 (0%) 59m
kube-system kube-controller-manager-master 200m (5%) 0 (0%) 0 (0%) 0 (0%) 58m
kube-system kube-flannel-ds-amd64-hrpcw 100m (2%) 100m (2%) 50Mi (0%) 50Mi (0%) 49m
kube-system kube-proxy-qmvmp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59m
kube-system kube-scheduler-master 100m (2%) 0 (0%) 0 (0%) 0 (0%) 58m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 650m (16%) 100m (2%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
Normal Starting 60m kubelet, master Starting kubelet.
Normal NodeAllocatableEnforced 60m kubelet, master Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 60m (x8 over 60m) kubelet, master Node pr02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 60m (x8 over 60m) kubelet, master Node pr02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 60m (x7 over 60m) kubelet, master Node pr02 status is now: NodeHasSufficientPID
Normal Starting 59m kube-proxy, master Starting kube-proxy.
Normal Starting 40m kubelet, master Starting kubelet.
Normal NodeHasSufficientMemory 40m (x8 over 40m) kubelet, master Node pr02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 40m (x8 over 40m) kubelet, master Node pr02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 40m (x7 over 40m) kubelet, master Node pr02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 40m kubelet, master Updated Node Allocatable limit across pods
Normal Starting 40m kube-proxy, master Starting kube-proxy.
Normal Starting 19m kubelet, master Starting kubelet.
Normal NodeHasSufficientMemory 19m kubelet, master Node pr02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 19m kubelet, master Node pr02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 19m kubelet, master Node pr02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 19m kubelet, master Updated Node Allocatable limit across pods
Normal Starting 17m kubelet, master Starting kubelet.
Normal NodeHasSufficientMemory 17m kubelet, master Node pr02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 17m kubelet, master Node pr02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 17m kubelet, master Node pr02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 17m kubelet, master Updated Node Allocatable limit across pods
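The Ready=False condition above ("cni config uninitialized") is the root symptom: the kubelet cannot find a valid CNI config. A quick check on the node, assuming the default CNI config path:

```bash
# The kubelet reads CNI configs from /etc/cni/net.d by default; an empty
# directory, or a config the bundled CNI refuses to validate, yields
# "cni config uninitialized" in the node's Ready condition
ls -l /etc/cni/net.d/
cat /etc/cni/net.d/*.conflist
```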
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: virtual server
OS (e.g. from /etc/os-release): CentOS 7.3 x86_64
Kernel (e.g. uname -a): Linux pr02 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
What happened?
CoreDNS pods are stuck in Pending.
The master node never becomes Ready.