Machine failures not propagated up to MachineDeployment.Status #5635
/milestone v1.1
/assign
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/unassign
/triage accepted
We still need to figure out whether to keep failureReason/failureMessage or to merge everything into conditions and benefit from a single mechanism for bubbling up problems.
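For illustration, folding a failureReason/failureMessage pair into a single condition could look like the Go sketch below. It uses the generic metav1.Condition helper from apimachinery rather than CAPI's own conditions package, and the condition type "MachinesReady" is made up for this example, not a confirmed CAPI name:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Status fields as they exist today (simplified stand-in values).
	failureReason := "InsufficientResources"
	failureMessage := "no available machines in zone us-east-1a"

	// The same information expressed as a single condition, so that one
	// mechanism can carry both failures and other status signals.
	var conds []metav1.Condition
	meta.SetStatusCondition(&conds, metav1.Condition{
		Type:    "MachinesReady", // illustrative name only
		Status:  metav1.ConditionFalse,
		Reason:  failureReason,
		Message: failureMessage,
	})

	fmt.Printf("%+v\n", conds[0])
}
```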
/lifecycle stale
/remove-lifecycle stale
/priority important-longterm
We are going to remove the concept of terminal failures (described in #10897). As part of that proposal, we are also suggesting bubbling up errors from Machines to MachineDeployments.
/close
@sbueringer: Closing this issue.
What steps did you take and what happened:
MD.Status.Phase may be MachineDeploymentPhaseFailed: the MD controller code derives it from MachineSet.Status.Failure{Reason,Message}, but those fields do not appear to be set when a Machine experiences a failure (i.e., when Machine.Status.Failure{Reason,Message} is set). So although the controller logic exists to set MD.Status.Phase for MS failures, the MS controller does not seem to record Machine failures as MS failures.
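To make the gap concrete, here is a minimal Go sketch of the step that appears to be missing from the MachineSet controller, using simplified stand-in types rather than the real CAPI types; propagateMachineFailures is a hypothetical helper, not actual CAPI code:

```go
package main

import "fmt"

// Simplified stand-ins for the CAPI status types; the real fields are
// optional pointers on Machine.Status and MachineSet.Status.
type MachineStatus struct {
	FailureReason  *string
	FailureMessage *string
}

type MachineSetStatus struct {
	FailureReason  *string
	FailureMessage *string
}

// propagateMachineFailures copies the first terminal Machine failure into
// MachineSet.Status, so the MachineDeployment controller (which already
// reads these fields to set MachineDeploymentPhaseFailed) can surface it.
func propagateMachineFailures(ms *MachineSetStatus, machines []MachineStatus) {
	for _, m := range machines {
		if m.FailureReason != nil || m.FailureMessage != nil {
			ms.FailureReason = m.FailureReason
			ms.FailureMessage = m.FailureMessage
			return
		}
	}
}

func main() {
	reason := "CreateError"
	msg := "cloud provider rejected the instance request"
	machines := []MachineStatus{{}, {FailureReason: &reason, FailureMessage: &msg}}

	var ms MachineSetStatus
	propagateMachineFailures(&ms, machines)
	fmt.Println(*ms.FailureReason, "-", *ms.FailureMessage)
}
```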
What did you expect to happen:
A Machine failure would bubble up and be reported, somehow, in MD status.
Anything else you would like to add:
Slack Thread: https://kubernetes.slack.com/archives/C8TSNPY4T/p1636128024376500
CAPI folks proposed bubbling up this information via MD.Status.Conditions, which seems reasonable to me, as long as I can deduce that an MD-owned Machine experienced a terminal failure without walking the entire MD/MS/Machine resource graph.
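Assuming such a condition existed, a consumer could detect the failure from the MachineDeployment object alone. A sketch using the generic metav1 condition helpers; the condition type "MachinesReady" is an assumed name for illustration, not a confirmed CAPI condition:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hasTerminalMachineFailure inspects only the MachineDeployment's own
// conditions; no MD -> MS -> Machine graph walk is needed.
func hasTerminalMachineFailure(conds []metav1.Condition) (string, bool) {
	c := meta.FindStatusCondition(conds, "MachinesReady")
	if c != nil && c.Status == metav1.ConditionFalse {
		return c.Message, true
	}
	return "", false
}

func main() {
	conds := []metav1.Condition{{
		Type:    "MachinesReady",
		Status:  metav1.ConditionFalse,
		Reason:  "CreateError",
		Message: "a Machine owned by this MachineDeployment failed: cloud provider error",
	}}
	if msg, failed := hasTerminalMachineFailure(conds); failed {
		fmt.Println("MachineDeployment has a failed Machine:", msg)
	}
}
```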
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):

/kind bug