Readiness probe failing due to no cluster leader #251
@HeshamAboElMagd the readiness probes are failing because Consul can't elect a leader.
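A quick way to confirm the missing leader from outside the pod (a sketch; the pod name assumes the default release name `consul`, and `curl` is assumed to be available in the image):

```shell
# Ask a server whether a raft leader exists; an empty "" response means no leader
kubectl exec consul-consul-server-0 -- curl -s http://127.0.0.1:8500/v1/status/leader

# The agent's own view of leadership
kubectl exec consul-consul-server-0 -- consul info | grep leader
```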
@lkysow Thanks. Please find the logs below:
```
Events:
  Warning  FailedScheduling  76s (x5750 over 5d23h)  default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
```

Hey @lkysow, sorry for the late reply. The above are the events from the server-2 pod, and I only get that error with that pod (even non-Consul pods don't report it). Is there anything you'd recommend I look into in order to get it running?
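For context, with the chart's default anti-affinity the server pods typically need one node each, so 3 servers cannot be placed on 2 schedulable nodes. A sketch of how to inspect what is blocking scheduling (the release/StatefulSet name `consul-consul-server` is an assumption based on the thread):

```shell
# Which nodes carry taints the pod doesn't tolerate (usually the master)
kubectl describe nodes | grep -i -A1 taints

# The anti-affinity rule rendered into the server StatefulSet
kubectl get statefulset consul-consul-server -o yaml | grep -A10 podAntiAffinity
```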
@HeshamAboElMagd @lkysow This may be related to issues #264 and #265. The easiest way to check is to output the server StatefulSet to YAML and confirm whether retry-join is present in the container command. I have a PR, #266, that resolves it if that is the case.
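A minimal sketch of that check (release and StatefulSet names are assumptions based on the thread):

```shell
# If nothing is printed, -retry-join is missing from the server command
kubectl get statefulset consul-consul-server -o yaml | grep retry-join
```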
Hi @s3than. I did change the values.yaml file, making the server replicas and updatePartition values strings, but unfortunately I'm still facing the readiness problem:

```
Events:
  Normal  Scheduled  5m49s  default-scheduler  Successfully assigned multicloud/consul-consul-server-0 to k8stest-node2
```
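For reference, the same string-typed values can also be forced from the command line instead of editing values.yaml; a sketch, with the release name and chart path as assumptions:

```shell
# --set-string forces the values to be treated as strings rather than integers
helm upgrade consul ./consul-helm \
  --set-string server.replicas=3 \
  --set-string server.updatePartition=0
```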
What environment are you running your cluster in? I noticed this. Are you running it in minikube?
That comment from me was related to this closed issue: #169.
@lkysow I was mistaken in running 3 replicas of the Consul server even though I only have two workers. Now I have two servers:

```
k get pod | grep consul
```

The result of `consul operator raft list-peers` (see the sketch after the logs for running these checks via kubectl):

```
Node  ID  Address  State  Voter  RaftProtocol
```
Logs:

```
==> Log data will now stream in as it occurs:
==> Consul agent running!
==> Newer Consul version available: 1.6.1 (currently running: 1.6.0)
```
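A sketch of gathering the outputs above through kubectl (the pod name assumes the default release name `consul`):

```shell
kubectl get pod | grep consul
kubectl exec consul-consul-server-0 -- consul operator raft list-peers
kubectl logs consul-consul-server-0 | tail -n 20
```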
I'll configure a cluster with 1 master and 2 workers and get back to you.
@HeshamAboElMagd are you still having problems then? It looks like the servers are up?
Awesome! 🎉
@lkysow I need to have this issue reopened. I'm still getting readiness probe failures on the server and the client:

Server:
```
Controlled By:  StatefulSet/consul-consul-server
Containers:
  consul:
    Container ID:  docker://a30ee732808461baae47884154aa981e7d572570d67ab5583c502103a103fa6b
    Image:         consul:1.6.0
    Image ID:      docker-pullable://consul@sha256:63e1a07260418ba05be08b6dc53f4a3bb95aa231cd53922f7b5b5ee5fd77ef3f
    Ports:         8500/TCP, 8301/TCP, 8302/TCP, 8300/TCP, 8600/TCP, 8600/UDP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/UDP
```
Client:
```
Controlled By:  DaemonSet/consul-consul
Containers:
  consul:
    Container ID:  docker://fa0792b9918aa615ef364a129eb44b66fe45ef993aff247e9d9f9763ce988f84
    Image:         consul:1.6.0
    Image ID:      docker-pullable://consul@sha256:63e1a07260418ba05be08b6dc53f4a3bb95aa231cd53922f7b5b5ee5fd77ef3f
    Ports:         8500/TCP, 8502/TCP, 8301/TCP, 8302/TCP, 8300/TCP, 8600/TCP, 8600/UDP
    Host Ports:    8500/TCP, 8502/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/UDP
```
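Since the client binds host ports 8500 and 8502, one way to double-check for collisions on the node is sketched below (it assumes shell access to the worker VM; the label selector is an assumption):

```shell
# Anything already listening on the client's host ports on this node?
ss -ltnp | grep -E ':8500|:8502'

# Readiness probe results for the client pods
kubectl describe pods -l app=consul,component=client | grep -A3 Readiness
```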
There are no port conflicts on my VM. Why can't the pods get their ports exposed, and why do they keep logging these errors:
Originally posted by @HeshamAboElMagd in #169 (comment)