What happened:
yurtctl/yurtadm use a Deployment with 1 replica to deploy yurt-controller-manager. Under such circumstances, if the node where yurt-controller-manager is deployed shuts down, the node won't turn into the NotReady state, so yurt-controller-manager won't be rescheduled, which may cause issues.
What you expected to happen:
As the yurt-controller-manager is a replacement for the NodeLifecycleController in kube-controller-manager, which is usually
deployed as a static pod on each control plane node,
maybe the yurt-controller-manager should follow the same deployment approach (DaemonSet or static pod)?
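For illustration, a DaemonSet pinned to control plane nodes might look roughly like the sketch below. The labels, taint keys, and image tag are assumptions for the sake of the example, not the actual OpenYurt manifest:

```yaml
# Hypothetical sketch: run yurt-controller-manager on every control plane
# node via a DaemonSet. Label, taint, and image names are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: yurt-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: yurt-controller-manager
  template:
    metadata:
      labels:
        app: yurt-controller-manager
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""  # schedule only on control plane nodes
      tolerations:
      - key: node-role.kubernetes.io/control-plane  # tolerate the control plane taint
        operator: Exists
        effect: NoSchedule
      containers:
      - name: yurt-controller-manager
        image: openyurt/yurt-controller-manager:latest  # placeholder image tag
```

This would give one replica per control plane node, so losing a single node would not leave the cluster without a running yurt-controller-manager, similar to how kube-controller-manager achieves availability as a static pod on each control plane node.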
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
OpenYurt version:
Kubernetes version (use kubectl version):
OS (e.g: cat /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Others:
others
/kind question