🏃[KCP] combine health checks of scale up and down #2849
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: sedefsavas. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
```go
	r.recorder.Eventf(kcp, corev1.EventTypeWarning, "ControlPlaneUnhealthy",
		"Waiting for control plane to pass control plane health check before removing a control plane machine: %v", err)
	return ctrl.Result{}, &capierrors.RequeueAfterError{RequeueAfter: healthCheckFailedRequeueAfter}
}
```
These checks (in both scale up and scale down) also gated the upgrade workflow, while the new general check is currently only triggered during normal scale up/scale down operations.
Good catch! My new assumption is that as long as the control plane is initialized, we want to run these health checks. Moved it before the upgrade.
Force-pushed from 1319186 to 0682a9d.
```go
@@ -320,3 +329,24 @@ func (r *KubeadmControlPlaneReconciler) ClusterToKubeadmControlPlane(o handler.M

	return nil
}

func (r *KubeadmControlPlaneReconciler) generalHealthCheck(ctx context.Context, cluster *clusterv1.Cluster, kcp *controlplanev1.KubeadmControlPlane, controlPlane *internal.ControlPlane) (ctrl.Result, error) {
```
Suggested change:

```diff
-func (r *KubeadmControlPlaneReconciler) generalHealthCheck(ctx context.Context, cluster *clusterv1.Cluster, kcp *controlplanev1.KubeadmControlPlane, controlPlane *internal.ControlPlane) (ctrl.Result, error) {
+func (r *KubeadmControlPlaneReconciler) checkHealth(ctx context.Context, cluster *clusterv1.Cluster, kcp *controlplanev1.KubeadmControlPlane, controlPlane *internal.ControlPlane) (ctrl.Result, error) {
```
```go
	numMachines := len(ownedMachines)
	// If the control plane is initialized, wait for health checks to pass to continue.
	if numMachines > 0 {
		result, err := r.generalHealthCheck(ctx, cluster, kcp, controlPlane)
		if err != nil {
			return result, err
		}
	}
```
How would this work when we can do remediation? Let's say a Machine isn't responding, if the health check fails, we won't create a new one?
Closing this PR due to the concerns raised about returning early before a possible remediation.
What this PR does / why we need it:
This PR moves the control plane and etcd health checks from scale up/down to reconcile.
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged): Related to #2818 and #2753
/kind cleanup
/area control-plane