I've been able to scale nodes up/down using the `kubectl scale` command. However, when scaling down the number of CP replicas, I noticed:

- It throws an error if the operation would result in an even number of replicas (sketch below).
- If I scale back to a single replica, it borks the cluster.
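For reference, a minimal sketch of the kind of command involved, assuming an OCNE 2.0 cluster managed through Cluster API where the control plane is exposed as a KubeadmControlPlane object; the resource and namespace names here are hypothetical:

```sh
# Hypothetical names: control plane "mycluster-control-plane" in namespace "mycluster".
# Scaling from 3 replicas down to 1 (an odd count) is accepted,
# though per the report above the cluster broke afterwards.
kubectl scale kubeadmcontrolplane mycluster-control-plane -n mycluster --replicas=1

# Scaling to an even count (e.g. 2) is the case reported as rejected,
# since a managed etcd cluster needs an odd number of members for quorum.
kubectl scale kubeadmcontrolplane mycluster-control-plane -n mycluster --replicas=2
```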
For comparison, I used `kubectl edit` to scale the number of worker nodes and control plane nodes up and down. I was able to go from 3 CP nodes to 1 CP node, seemingly without error. It didn't let me edit the config to have two CP nodes, though.
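A similar sketch for the `kubectl edit` path, again with hypothetical names; assuming a Cluster API-based setup, the worker count lives on a MachineDeployment and the control plane count on the KubeadmControlPlane:

```sh
# Workers: change spec.replicas on the MachineDeployment (hypothetical name).
kubectl edit machinedeployment mycluster-workers -n mycluster

# Control plane: change spec.replicas on the KubeadmControlPlane.
# Per the report above, editing 3 -> 1 was accepted, while 3 -> 2 was not.
kubectl edit kubeadmcontrolplane mycluster-control-plane -n mycluster
```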
It is expected that it doesn't let you have two control plane nodes. An even number of members would cause a "split brain" issue: etcd needs a majority to elect a leader, and with two members a partition or single failure leaves neither side with that majority. When you hit this problem, were you using the OCI provider on OCNE 2.0? And were you running a managed or self-managed setup for your cluster?
Attachments: kubectl-scale.txt, kubectl-edit-workers.txt, kubectl-edit-cp.txt