Allow user to opt-in to Node draining during Cluster delete #9692
Comments
This issue is currently awaiting triage. If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
/cc @erkanerol
On a related note, there is an ongoing conversation in Kubernetes slack with CAPVCD maintainers about whether the InfraMachine controller should impose the condition described above.
I think it's already doable if you implement the MachineDeletionPhaseHook https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20200602-machine-deletion-phase-hooks.md#changes-to-machine-controller
Thanks for this helpful context! I amended my feature request to explain why I think Cluster API should provide a solution that doesn't require the user to implement and deploy a webhook.
This seems a reasonable ask, I'd like to see an option in the Cluster object under a field we could call
Sounds reasonable. Shouldn't be necessary to implement a hook for this case.
I think there are two obvious deletion strategies. Feel free to suggest others, or alternate names for these.
Today, we support the Forced strategy only. I think we could support the Graceful strategy, but this support is limited by how drain works (see the sketch after this comment). Notably, drain does not evict Pods managed by DaemonSets. Let's say a CAPVCD cluster runs a DaemonSet that mounts VCD CSI-backed persistent volumes. If we use the Graceful strategy to delete the cluster, CAPI will drain each Node. However, after a Node is drained, the DaemonSet Pods continue to run, and their volumes remain mounted.
In summary, I think supporting a Graceful strategy may make sense, but it won't address the problem that motivated this idea. I'm fine with waiting to see what the community thinks. /priority awaiting-more-evidence
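To illustrate that limitation, here is a minimal Go sketch of a drain using kubectl's drain helper library (`k8s.io/kubectl/pkg/drain`); the function name and kubeconfig handling are illustrative, not taken from Cluster API. The point is that `IgnoreAllDaemonSets` must be set for the drain to succeed, which is precisely why DaemonSet Pods, and anything they mount, survive a drain.

```go
// Minimal drain sketch using kubectl's drain helper library.
// Assumption: the function name and kubeconfig handling are illustrative.
package example

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/kubectl/pkg/drain"
)

func drainNode(kubeconfigPath, nodeName string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}

	helper := &drain.Helper{
		Ctx:                 context.TODO(),
		Client:              client,
		Force:               true, // evict Pods without a managing controller
		IgnoreAllDaemonSets: true, // required for drain to succeed; DaemonSet Pods keep running
		GracePeriodSeconds:  -1,   // honor each Pod's terminationGracePeriodSeconds
		Timeout:             5 * time.Minute,
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}

	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
		return err
	}
	// DaemonSet-managed Pods are filtered out of the eviction list, so anything
	// they mount (e.g. CSI-backed volumes) stays attached after the drain completes.
	return drain.RunNodeDrain(helper, nodeName)
}
```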
/triage accepted IMO there are a couple of things to unwind (and eventually split into subtasks), but the discussion is interesting:
triage-party: /remove-triage accepted
@fabriziopandini: Those labels are not set on the issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/help
@sbueringer: Guidelines: Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed.
hands up
Thank you!! Feel free to assign yourself the issue.
Also, I understand the Graceful strategy also assumes there are no PDBs in the data plane? Otherwise deletion is perpetually blocked.
Just to mention it: once we have fixed propagation of timeouts across the board, even for objects that are being deleted (#10753, plus e.g. Cluster topology + KCP), it might be possible to just set NodeDrainTimeout + NodeVolumeDetachTimeout to a very low value when deleting the Cluster, and we would end up with something very similar to "skip drain".
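To make that concrete, here is a minimal Go sketch assuming the `v1beta1` API types; the function name is mine, and the same fields can of course be set directly in YAML instead.

```go
// Sketch: shrink the drain/volume-detach timeouts on a MachineDeployment
// before deleting the Cluster, approximating "skip drain".
package example

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

func shortenDeletionTimeouts(md *clusterv1.MachineDeployment) {
	second := metav1.Duration{Duration: time.Second}
	// With a ~1s budget, drain and volume detach give up almost immediately,
	// so Machine deletion proceeds much like it does when drain is skipped.
	md.Spec.Template.Spec.NodeDrainTimeout = &second
	md.Spec.Template.Spec.NodeVolumeDetachTimeout = &second
}
```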
/assign |
What would you like to be added (User Story)?
As an operator, I would like to opt in to Node draining during Cluster delete, so that I can delete Clusters managed by Infrastructure Providers that impose additional conditions on deleting underlying VMs, e.g. detaching all secondary storage (which in turn can require all Pods using this storage to be evicted).
Detailed Description
When it reconciles a Machine resource marked to be deleted, the Machine controller attempts to drain the corresponding Node. However, if the Cluster resource is marked for deletion, it does not perform the drain. This behavior was added in #2746 in order to decrease the time it takes to delete a cluster.
As part of reconciling a Machine marked for deletion, the Machine controller marks the referenced InfraMachine resource for deletion. The infrastructure provider's InfraMachine controller reconciles that delete. The InfraMachine controller can refuse to delete the underlying infrastructure until some condition is met. That condition could be that the corresponding Node must be drained.
For example, if I create a cluster with the VMware Cloud Director infrastructure provider (CAPVCD), and use secondary storage ("VCD Named Disk Volumes"), then the delete cannot proceed until I drain all Nodes with Pods that use this storage. Cluster API does not drain the Nodes.
CAPVCD requires the Node corresponding to the InfraMachine to be drained, because it requires all secondary storage to be detached from the VM underlying the InfraMachine. Detaching this storage is the responsibility of the VCD CSI driver, which refuses to do so until all volumes using this storage can be unmounted, which in turn means all Pods using these volumes must be evicted from the Node.
Because Cluster API does not drain Nodes during Cluster delete, the volumes are not unmounted and CAPVCD refuses to delete the underlying VMs. The Cluster delete remains blocked until the Nodes are drained manually.
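For clarity, here is a simplified sketch of the decision described above. It is illustrative only, not the actual Machine controller code; the function name is mine.

```go
// Illustrative sketch only (not the real controller code): during Machine
// deletion, drain is skipped when the owning Cluster itself is being deleted.
package example

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

func shouldDrainNode(cluster *clusterv1.Cluster, machine *clusterv1.Machine) bool {
	// The optimization from #2746: when the whole Cluster is going away,
	// skip draining individual Nodes to speed up deletion.
	if !cluster.DeletionTimestamp.IsZero() {
		return false
	}
	// Existing per-Machine opt-out via annotation.
	if _, ok := machine.Annotations["machine.cluster.x-k8s.io/exclude-node-draining"]; ok {
		return false
	}
	return true
}
```

An opt-in as requested in this issue would essentially make the first check conditional, e.g. via an annotation or API field on the Cluster.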
Anything else you would like to add?
As @lubronzhan helpfully points out below, draining on Cluster delete is possible by implementing and deploying a PreDrainDeleteHook.
It is my understanding that the Machine controller skips drain on Cluster delete as an optimization, not for correctness. For some infrastructure providers, draining is required for correctness. Given that, I think Cluster API itself should allow users to disable this optimization in favor of correctness, and I think requiring users to implement and deploy a webhook is too high a bar. I would prefer a simpler solution, e.g., an annotation, or an API field.
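For reference, a sketch of the hook-based workaround mentioned above, using the annotation key format from the machine-deletion-phase-hooks proposal; the hook name, owner value, and helper functions are hypothetical.

```go
// Sketch of the pre-drain deletion hook workaround. The annotation key format
// comes from the machine-deletion-phase-hooks proposal; the hook name
// ("drain-on-cluster-delete") and owner value are hypothetical.
package example

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

const preDrainHook = "pre-drain.delete.hook.machine.cluster.x-k8s.io/drain-on-cluster-delete"

// addPreDrainHook pauses Machine deletion before the drain step. The hook
// owner is expected to drain the Node itself and then remove the annotation
// so that deletion can proceed.
func addPreDrainHook(machine *clusterv1.Machine) {
	if machine.Annotations == nil {
		machine.Annotations = map[string]string{}
	}
	machine.Annotations[preDrainHook] = "my-drain-controller"
}

// removePreDrainHook lets the Machine controller continue with deletion.
func removePreDrainHook(machine *clusterv1.Machine) {
	delete(machine.Annotations, preDrainHook)
}
```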
Label(s) to be applied
/kind feature
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.