
Machine failures not propagated up to MachineDeployment.Status #5635

Closed · Tracked by #10852
jdef opened this issue Nov 10, 2021 · 14 comments
Labels

  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@jdef (Contributor) commented Nov 10, 2021

What steps did you take and what happened:
MD.Status.Phase may be MachineDeploymentPhaseFailed: the MD controller code indicates the phase is derived from MachineSet.Status.Failure{Message,Reason}, but those fields do not appear to be set when a Machine experiences a failure (i.e., when Machine.Status.Failure{Message,Reason} is set). So although the controller logic exists to set MD.Status.Phase for MS failures, the MS controller does not seem to record Machine failures as MS failures.
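
To illustrate the gap, here is a minimal sketch (not the actual CAPI reconciler code; the helper name is hypothetical) of the propagation that appears to be missing, assuming the v1beta1 types:

```go
// Hypothetical sketch: the MachineSet controller surfacing a terminal Machine
// failure into MachineSet.Status, so the MachineDeployment controller can in
// turn derive MachineDeploymentPhaseFailed from it.
package sketch

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	capierrors "sigs.k8s.io/cluster-api/errors"
)

// propagateMachineFailures (made-up name) copies the first terminal Machine
// failure into the owning MachineSet's status. A Machine is terminally failed
// when Machine.Status.Failure{Reason,Message} is set.
func propagateMachineFailures(ms *clusterv1.MachineSet, machines []*clusterv1.Machine) {
	for _, m := range machines {
		if m.Status.FailureReason == nil && m.Status.FailureMessage == nil {
			continue
		}
		if m.Status.FailureReason != nil {
			// The Machine and MachineSet failure-reason types are distinct
			// string-based types, so convert.
			reason := capierrors.MachineSetStatusError(*m.Status.FailureReason)
			ms.Status.FailureReason = &reason
		}
		ms.Status.FailureMessage = m.Status.FailureMessage
		return // report only the first failed Machine; aggregating is a design choice
	}
}
```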

What did you expect to happen:
A Machine failure would bubble up and be reported, somehow, in MD status.

Anything else you would like to add:
Slack Thread: https://kubernetes.slack.com/archives/C8TSNPY4T/p1636128024376500

CAPI folks proposed bubbling this information up via MD.Status.Conditions, which seems reasonable to me, as long as I can deduce that an MD-owned Machine experienced a terminal failure without walking the entire MD/MS/Machine resource graph.
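
For illustration, a minimal sketch of what that could look like, assuming the v1beta1 conditions utility; the condition type and reason below are made-up names, not part of the CAPI API:

```go
// Hypothetical sketch: the MachineDeployment controller marks a single
// condition on the MD when any owned Machine has failed terminally, so a
// consumer can detect it without walking the MD/MS/Machine graph.
package sketch

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

// Made-up condition type, not part of the CAPI API.
const machinesHealthyCondition clusterv1.ConditionType = "MachinesHealthy"

func markFailedMachines(md *clusterv1.MachineDeployment, failed []*clusterv1.Machine) {
	if len(failed) == 0 {
		conditions.MarkTrue(md, machinesHealthyCondition)
		return
	}
	conditions.MarkFalse(md, machinesHealthyCondition, "MachineTerminalFailure",
		clusterv1.ConditionSeverityError, "%d owned Machine(s) report a terminal failure", len(failed))
}
```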

Environment:

  • Cluster-api version: all, AFAICT
  • Minikube/KIND version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

/kind bug

k8s-ci-robot added the kind/bug label Nov 10, 2021
@jdef (Contributor, Author) commented Nov 10, 2021

@vincepri (Member)

/milestone v1.1

k8s-ci-robot added this to the v1.1 milestone Nov 10, 2021
@srm09 (Contributor) commented Nov 11, 2021

/assign

fabriziopandini modified the milestones: v1.1 → v1.2 Feb 3, 2022
@enxebre (Member) commented Feb 11, 2022

/assign

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label May 12, 2022
@enxebre (Member) commented May 12, 2022

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label May 12, 2022
@srm09 (Contributor) commented May 16, 2022

/unassign

fabriziopandini added the triage/accepted label Jul 29, 2022
fabriziopandini removed this from the v1.2 milestone Jul 29, 2022
fabriziopandini removed the triage/accepted label Jul 29, 2022
@fabriziopandini (Member)

/triage accepted
/help
/unassign @enxebre

We still need to figure out whether to maintain failureReason/failureMessage or to merge everything into conditions and benefit from a single mechanism for bubbling up problems.

@k8s-ci-robot (Contributor)

@fabriziopandini:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/triage accepted
/help
/unassign @enxebre

We still need to figure out whether to maintain failureReason/failureMessage or to merge everything into conditions and benefit from a single mechanism for bubbling up problems.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the triage/accepted and help wanted labels Oct 3, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Feb 8, 2023
@vaibhav2107

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label Sep 27, 2023
fabriziopandini added the lifecycle/stale label Nov 2, 2023
@fabriziopandini (Member)

/priority important-longterm

k8s-ci-robot added the priority/important-longterm label Apr 12, 2024
@sbueringer (Member)

We are going to remove the concept of terminal failures (described in #10897).

As part of this proposal, we are also suggesting bubbling errors up from Machines to MachineDeployments.

/close

@k8s-ci-robot (Contributor)

@sbueringer: Closing this issue.

In response to this:

We are going to remove the concept of terminal failures (described in #10897).

As part of this proposal, we are also suggesting bubbling errors up from Machines to MachineDeployments.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
