kubectl drain ignores terminating pods when the master node reboots #1532
Labels: kind/bug, triage/accepted
What happened:
While I was evicting pods from and updating one of the worker nodes, a master node happened to be rebooting at the same time. The evicted pod had to wait for terminationGracePeriodSeconds before the eviction could complete. Then something strange happened: the eviction reported success, and the pod that was still terminating was ignored.
kubectl/pkg/drain/drain.go, line 420 at b359351:
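For context, a condensed sketch of the logic around that line (paraphrased from the polling loop that waits for evicted pods to disappear; the wrapper function and its name are mine, but the condition matches the quoted source):

```go
package drain

import (
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// podFinished decides whether an evicted pod can be considered gone.
// getPodFn wraps CoreV1().Pods(namespace).Get(...).
func podFinished(pod corev1.Pod, getPodFn func(namespace, name string) (*corev1.Pod, error)) (bool, error) {
	p, err := getPodFn(pod.Namespace, pod.Name)
	// Gone if NotFound, or if a pod with the same name but a different UID
	// has taken its place. Trap: client-go's typed Get returns a non-nil
	// empty *corev1.Pod even when err != nil, so a transient apiserver
	// outage also lands in this branch.
	if apierrors.IsNotFound(err) || (p != nil && p.ObjectMeta.UID != pod.ObjectMeta.UID) {
		return true, nil
	} else if err != nil {
		return false, err
	}
	// The pod still exists with the same UID; keep waiting.
	return false, nil
}
```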
I found that because the master node was rebooting at that moment, the Get call for the pod failed. client-go's typed clients return a non-nil empty Pod together with the error, so the condition

p != nil && p.ObjectMeta.UID != pod.ObjectMeta.UID

was satisfied (an empty UID never matches the original pod's UID), and the || short-circuited before the err != nil branch was reached, so the still-terminating pod was treated as already deleted.

What you expected to happen:
The check here should be stricter; for example:
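A minimal sketch of one possible fix (my suggestion, untested; it reuses the imports and helper shape from the sketch above), which only trusts the UID comparison when the Get call actually succeeded:

```go
// podFinishedStrict is a stricter variant of podFinished: a failed Get
// (for example, while the apiserver is rebooting) is no longer mistaken
// for a deleted pod.
func podFinishedStrict(pod corev1.Pod, getPodFn func(namespace, name string) (*corev1.Pod, error)) (bool, error) {
	p, err := getPodFn(pod.Namespace, pod.Name)
	if apierrors.IsNotFound(err) || (err == nil && p != nil && p.ObjectMeta.UID != pod.ObjectMeta.UID) {
		// Definitely gone: deleted, or replaced by a new pod with the same name.
		return true, nil
	} else if err != nil {
		// Transient lookup failures now surface as errors instead of "evicted".
		return false, err
	}
	// The pod still exists with the same UID; keep waiting.
	return false, nil
}
```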
Maybe there is a better way to fix it.
How to reproduce it (as minimally and precisely as possible):
Drain a worker node that has a pod which takes a long time to terminate (a long terminationGracePeriodSeconds), then reboot the master node while the drain is waiting. For example:
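A rough reproduction sketch (node names, grace period, and pod details are placeholders):

```console
# Run a pod on <worker> that terminates slowly, e.g. with
# terminationGracePeriodSeconds: 300 and a preStop hook that sleeps.

# Start the drain; it now waits for that pod to finish terminating.
kubectl drain <worker> --ignore-daemonsets

# While the drain is waiting, reboot the master (control-plane) node so
# the apiserver becomes temporarily unreachable.

# Observed: kubectl drain reports the eviction as successful even though
# the pod on <worker> is still terminating.
```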