coredns is always pending after apply flannel.yml #1906
Comments
/triage support
Try another CNI plugin, please. If it still does not work: are you getting any logs from the coredns pods?
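A pod that has never been scheduled has no container to produce logs, so kubectl logs returns nothing; the scheduler's reasoning shows up in the pod events instead. A minimal way to check, assuming the stock k8s-app=kube-dns label on the coredns pods:

kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system describe pods -l k8s-app=kube-dns   # the Events section shows why scheduling fails
kubectl -n kube-system logs -l k8s-app=kube-dns            # empty while the pods are still Pending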
I don't want an alternative, I just want a workaround for this. I don't have any logs from the pods. This is what I get:
[root@localhost ~]# kubectl describe pod coredns-58cc8c89f4-5kdx5 -n kube-system
Name: coredns-58cc8c89f4-5kdx5
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=58cc8c89f4
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/coredns-58cc8c89f4
Containers:
coredns:
Image: registry.aliyuncs.com/google_containers/coredns:1.6.2
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-zwfww (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-zwfww:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-zwfww
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
and here is the full pod spec:
[root@localhost ~]# kubectl get pod coredns-58cc8c89f4-2z55k -o yaml -n kube-system
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2019-11-11T13:15:48Z"
generateName: coredns-58cc8c89f4-
labels:
k8s-app: kube-dns
pod-template-hash: 58cc8c89f4
name: coredns-58cc8c89f4-2z55k
namespace: kube-system
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: coredns-58cc8c89f4
uid: 15d09292-516a-4bb4-be61-a61501d3b4dd
resourceVersion: "360"
selfLink: /api/v1/namespaces/kube-system/pods/coredns-58cc8c89f4-2z55k
uid: 52aa274b-961f-4d34-a021-633b45418015
spec:
containers:
- args:
- -conf
- /etc/coredns/Corefile
image: registry.aliyuncs.com/google_containers/coredns:1.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 8181
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/coredns
name: config-volume
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: coredns-token-zwfww
readOnly: true
dnsPolicy: Default
enableServiceLinks: true
nodeSelector:
beta.kubernetes.io/os: linux
priority: 2000000000
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: coredns
serviceAccountName: coredns
terminationGracePeriodSeconds: 30
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- configMap:
defaultMode: 420
items:
- key: Corefile
path: Corefile
name: coredns
name: config-volume
- name: coredns-token-zwfww
secret:
defaultMode: 420
secretName: coredns-token-zwfww
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-11-11T13:15:48Z"
message: '0/1 nodes are available: 1 node(s) had taints that the pod didn''t tolerate.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: Burstable
but I got nothing from kubectl logs coredns-58cc8c89f4-2z55k -n kube-system for either pod.
[root@localhost ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-58cc8c89f4-2z55k 0/1 Pending 0 12h
kube-system coredns-58cc8c89f4-5kdx5 0/1 Pending 0 12h
kube-system etcd-k8s-master 1/1 Running 0 12h
kube-system kube-apiserver-k8s-master 1/1 Running 0 12h
kube-system kube-controller-manager-k8s-master 1/1 Running 0 12h
kube-system kube-flannel-ds-amd64-lkclv 0/1 Init:ImagePullBackOff 0 12h
kube-system kube-proxy-cz2hd 1/1 Running 0 12h
kube-system kube-scheduler-k8s-master 1/1 Running 0 12h
I wonder what is going on.
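The FailedScheduling event above says the single node carries a taint that the pod does not tolerate. A node stays NotReady, and keeps the node.kubernetes.io/not-ready taint, until a CNI config exists in /etc/cni/net.d, which in turn depends on the flannel pod starting. A quick check, assuming the control-plane node is named k8s-master as in the pod list above:

kubectl get nodes -o wide                            # is the node Ready or NotReady?
kubectl describe node k8s-master | grep -A3 Taints   # which taints are currently on the node?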
I got more information about my situation:
[root@localhost ~]# journalctl -fu kubelet
-- Logs begin at Sun 2019-11-10 14:25:12 EST. --
Nov 11 20:38:42 k8s-master kubelet[27869]: E1111 20:38:42.896019 27869 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 11 20:38:46 k8s-master kubelet[27869]: W1111 20:38:46.496271 27869 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Nov 11 20:38:47 k8s-master kubelet[27869]: E1111 20:38:47.897181 27869 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 11 20:38:48 k8s-master kubelet[27869]: E1111 20:38:48.199334 27869 pod_workers.go:191] Error syncing pod 0a620b01-49cd-498f-8f3f-70a6ed042484 ("kube-flannel-ds-amd64-lkclv_kube-system(0a620b01-49cd-498f-8f3f-70a6ed042484)"), skipping: failed to "StartContainer" for "install-cni" with ImagePullBackOff: "Back-off pulling image \"quay.io/coreos/flannel:v0.11.0-amd64\""
Nov 11 20:38:51 k8s-master kubelet[27869]: W1111 20:38:51.496883 27869 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Nov 11 20:38:52 k8s-master kubelet[27869]: E1111 20:38:52.899501 27869 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 11 20:38:56 k8s-master kubelet[27869]: W1111 20:38:56.497534 27869 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Nov 11 20:38:57 k8s-master kubelet[27869]: E1111 20:38:57.903069 27869 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 11 20:39:01 k8s-master kubelet[27869]: E1111 20:39:01.200085 27869 pod_workers.go:191] Error syncing pod 0a620b01-49cd-498f-8f3f-70a6ed042484 ("kube-flannel-ds-amd64-lkclv_kube-system(0a620b01-49cd-498f-8f3f-70a6ed042484)"), skipping: failed to "StartContainer" for "install-cni" with ImagePullBackOff: "Back-off pulling image \"quay.io/coreos/flannel:v0.11.0-amd64\""
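The kubelet log points at the same root cause: the install-cni container cannot pull quay.io/coreos/flannel:v0.11.0-amd64, so no CNI config is ever written. One way to confirm this is to try pulling the exact image from the log directly on the node:

docker pull quay.io/coreos/flannel:v0.11.0-amd64   # fails if the node cannot reach quay.io
docker images | grep flannel                       # shows whether the image is already present locally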
My flannel config file uses this data:
cni-conf.json: |
{
"cniVersion": "0.3.1",
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
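This config only takes effect once the flannel pod actually runs: in the stock kube-flannel.yml, an install-cni init container copies cni-conf.json onto the host (typically as /etc/cni/net.d/10-flannel.conflist, though the exact filename depends on the manifest), and only then does the kubelet's "no networks found in /etc/cni/net.d" warning stop. A quick check on the node:

ls /etc/cni/net.d
cat /etc/cni/net.d/10-flannel.conflist   # assuming the stock target filename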
Looks like the flannel image cannot be pulled. I'm sorry, but this is not a kubeadm issue.
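If the node cannot reach quay.io (plausible here, given that the other images come from the registry.aliyuncs.com mirror), one possible workaround is to pull the image on a machine that can reach quay.io, ship it to the node with docker save/load, and let the DaemonSet recreate its pod. A sketch, assuming the stock app=flannel label from kube-flannel.yml:

# on a machine that can reach quay.io
docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker save quay.io/coreos/flannel:v0.11.0-amd64 -o flannel-v0.11.0-amd64.tar
# copy the tarball to the node, then on the node:
docker load -i flannel-v0.11.0-amd64.tar
# recreate the flannel pod so it starts with the now-local image
kubectl -n kube-system delete pod -l app=flannel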
What keywords did you search in kubeadm issues before filing this one?
I have viewed issue #1178; it doesn't work.
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version):
Environment:
- Kubernetes version (use kubectl version):
- Cloud provider or hardware configuration: private computer
- OS: CentOS-7
- Kernel (use uname -a):
What happened?
CoreDNS is Pending after applying flannel.yml and modifying the CNI version.
What you expected to happen?
CoreDNS is Running.
How to reproduce it (as minimally and precisely as possible)?
Run kubeadm init with this init.yaml, then apply flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
# after finding issue #1178, I modified the CNI version and re-applied
kubectl apply -f kube-flannel.yml
Anything else we need to know?
I reset kubeadm several times, ran rm -rf $HOME/.kube after each reset, and then ran kubeadm init --config=xx.yaml again. I haven't joined any node into the cluster yet, and my node info
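For completeness, the reset-and-reinit sequence described above is roughly the following sketch (xx.yaml stands in for the init config referenced earlier):

# on the control-plane node
kubeadm reset -f
rm -rf $HOME/.kube
kubeadm init --config=xx.yaml
# point kubectl at the new cluster, as printed by kubeadm init
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# re-apply the (modified) flannel manifest
kubectl apply -f kube-flannel.yml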