Refactor deployment healthcheck #2750
Conversation
@tejal29 I've only reviewed the UX, not the code.
It looks like it's doing the job which is awesome!
It would be even better if the output could be streamlined and refined.
Here's what I see:
```
Deploy complete in 378.581679ms
Waiting for deployments to stabilize
deployment/leeroy-app is pending due to Waiting for deployment "leeroy-app" rollout to finish: 1 old replicas are pending termination...
deployment/leeroy-web is pending due to Waiting for deployment "leeroy-web" rollout to finish: 1 old replicas are pending termination...
deployment/leeroy-app is ready. [1/2 deployment(s) still pending]
deployment/leeroy-web is ready.
```
Here's what I'd love to see:
```
Deploy complete in 378.581679ms
Waiting for deployments to stabilize...
- Waiting for deployment "leeroy-app" rollout to finish: 1 old replicas are pending termination...
- Waiting for deployment "leeroy-web" rollout to finish: 1 old replicas are pending termination...
- deployment/leeroy-app is ready. [1/2 deployment(s) still pending]
- deployment/leeroy-web is ready.
Deployments stabilized in Xs
```
Another thing that it doesn't display, and that I think is useful to the users, is pods that fail to start. This is something I see with `kubectl get -w pods`.
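As a rough illustration of that suggestion (not skaffold's actual implementation), a client-go watch along these lines could surface containers stuck in a waiting state; the function name and output format here are assumptions:

```go
package status

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportFailingPods watches pods in a namespace and prints containers that
// are stuck waiting (e.g. ImagePullBackOff, CrashLoopBackOff), similar to
// what `kubectl get -w pods` reveals.
func reportFailingPods(ctx context.Context, client kubernetes.Interface, ns string) error {
	watcher, err := client.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer watcher.Stop()
	for event := range watcher.ResultChan() {
		pod, ok := event.Object.(*v1.Pod)
		if !ok {
			continue
		}
		for _, cs := range pod.Status.ContainerStatuses {
			// ContainerCreating is normal startup; most other waiting
			// reasons mean the pod will never become ready.
			if w := cs.State.Waiting; w != nil && w.Reason != "ContainerCreating" {
				fmt.Printf("pod/%s container %q failed to start: %s\n", pod.Name, cs.Name, w.Reason)
			}
		}
	}
	return nil
}
```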
@dgageot Thanks! The pod check is upcoming in another PR, but I will show you how it looks. Regarding the "UI changes", let me fix it. Thanks!
Force-pushed from 61c7f19 to a843292
Force-pushed from a843292 to dccc68f
@dgageot This is how it looks on my branch
```go
if err != nil {
	reason, details := parseKubectlError(err.Error())
	d.UpdateStatus(details, reason, err)
	// Only non-connectivity errors should stop the status check;
	// see the review comment below.
	if reason != KubectlConnection {
```
We would want to keep polling for rollout status if there was a connectivity issue.
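A minimal sketch of that behavior, reusing the snippet's `parseKubectlError`/`UpdateStatus`/`KubectlConnection` and assuming hypothetical `rolloutStatus`, `pollInterval`, and `Deployment` names:

```go
import (
	"context"
	"strings"
	"time"
)

// pollDeploymentStatus is a sketch, not the PR's code: connectivity errors
// are treated as transient and polling continues, while any other error
// marks the deployment failed and stops the loop.
func pollDeploymentStatus(ctx context.Context, d *Deployment) {
	for {
		select {
		case <-ctx.Done():
			return
		case <-time.After(pollInterval):
			status, err := rolloutStatus(ctx, d)
			if err != nil {
				reason, details := parseKubectlError(err.Error())
				d.UpdateStatus(details, reason, err)
				if reason != KubectlConnection {
					return // permanent failure: stop checking this deployment
				}
				continue // transient connection issue: keep polling
			}
			d.UpdateStatus(status, "", nil)
			if strings.Contains(status, "successfully rolled out") {
				return // deployment has stabilized
			}
		}
	}
}
```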
Another round of nits. This is looking really good, I just have to look again at it with fresh eyes.
Is there a way to break this PR into two pieces? I don't have ideas on how to do that, but I must admit I got lost during the review.
@dgageot Actually, I can.
Once those two are merged in, I will update this PR. This PR will be relatively shorter then.
In this PR:
- Refactor the health check to add a `Resource` interface, which implements the details of whether a resource is healthy or not (a sketch follows this list).
- Print a health check summary after every resource check returns, indicating whether the current resource is ready or in error, and how many more are remaining.
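A rough sketch of what that `Resource` interface might look like; the method names are assumptions, not the PR's actual signatures:

```go
import "context"

// Resource abstracts per-resource health checking (sketch; names illustrative).
type Resource interface {
	// Name identifies the resource, e.g. "deployment/leeroy-app".
	Name() string
	// CheckStatus runs one health check and records the latest observed status.
	CheckStatus(ctx context.Context)
	// Status returns the last recorded status message and any error.
	Status() (string, error)
	// IsDone reports whether the check has finished, either ready or failed.
	IsDone() bool
}
```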
In case the deployment is in error, the message shows why (e.g. pending `replicas`), and the user will see the current status of the deployment as long as `statusCheckDeadlineSeconds` is not reached.

Sample full output: