Decide how to handle CAPD containers being stopped #5026
Comments
+1 to surface a condition, but I would defer remediation to the user, because stopping a container might be useful for testing remediation, easing memory pressure, etc.

/milestone v0.4
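The suggestion above, surfacing a condition rather than auto-remediating, could look roughly like the following sketch. The condition type, status values, and reasons here are illustrative, not CAPD's actual API; a real provider would set conditions on its machine objects through the Cluster API conditions helpers.

```go
package main

import "fmt"

// Condition is a hypothetical, simplified stand-in for a machine condition.
type Condition struct {
	Type   string
	Status string // "True" or "False"
	Reason string
}

// containerCondition maps a Docker container state (as reported by
// `docker inspect -f '{{.State.Status}}'`; valid states are created,
// running, paused, restarting, removing, exited, dead) to a condition
// that a controller could surface instead of remediating automatically.
func containerCondition(state string) Condition {
	if state == "running" {
		return Condition{Type: "ContainerRunning", Status: "True", Reason: "Running"}
	}
	return Condition{Type: "ContainerRunning", Status: "False", Reason: "ContainerStopped"}
}

func main() {
	for _, s := range []string{"running", "exited"} {
		c := containerCondition(s)
		fmt.Printf("state=%s -> %s=%s (%s)\n", s, c.Type, c.Status, c.Reason)
	}
}
```

With a condition like this in place, the decision to delete and recreate the machine stays with the user (or with a MachineHealthCheck they opt into), matching the comment above.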
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
/close
This is not a problem for CAPD usage in E2E tests/quick start. We can re-open if there is demand for other use cases.
@fabriziopandini: Closing this issue.
Detailed Description
If we deploy a cluster using CAPD and then stop one of the containers after deployment, no remediation of any kind happens; depending on which container was stopped, the only symptom may be some logged errors about failed connections.
We get a list of the containers present, but it doesn't appear that we check the container status to see whether a container has exited.
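A minimal sketch of the missing check, filtering a container listing by state; the container names are made up, and a plain struct stands in for whatever type the container runtime client actually returns:

```go
package main

import "fmt"

// Container mirrors the minimal fields a provider might read when listing
// the containers backing its machines (fields here are illustrative).
type Container struct {
	Name  string
	State string // Docker states: created, running, paused, restarting, removing, exited, dead
}

// exitedContainers returns the names of containers that are no longer
// running, i.e. the status check this issue notes is currently absent.
func exitedContainers(containers []Container) []string {
	var stopped []string
	for _, c := range containers {
		if c.State != "running" {
			stopped = append(stopped, c.Name)
		}
	}
	return stopped
}

func main() {
	containers := []Container{
		{Name: "test-control-plane", State: "running"},
		{Name: "test-worker-0", State: "exited"},
	}
	fmt.Println(exitedContainers(containers))
}
```

A reconciler that ran this check could then surface the result (for example as a failed condition) rather than silently logging connection errors.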
We should determine the appropriate way to react to these scenarios so that things behave as a user would expect.
Anything else you would like to add:
See discussion in #5021 for more context.
/kind feature