Cleanup loadbalancers created by ccm #842
Comments
/assign @seanschneeweiss /cc @iamemilio
I assume you can provide some input on this :)
@sbueringer: GitHub didn't allow me to assign the following users: seanschneeweiss. Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
/assign
This is an interesting feature. I would like a flag to choose whether to delete or not, because OpenStack cloud provider users not only use load balancers, but also the Cinder CSI driver, an ingress controller, etc. They (and I) may be confused if only load balancers get deleted, even knowing that load balancers are handled by the CCM and the other resources by other controllers.
if you check https://github.com/kubernetes/cloud-provider-openstack
I think if there's a reasonably safe way to identify which load balancers have been created by the CCM of the cluster we just deleted, it's fine to delete them. I think I wouldn't touch disks etc., as there is a chance we accidentally delete valuable data that way. Load balancers can be re-created if necessary. The AWS e2e test verifies that the LBs are deleted and the disks survive the cluster deletion. The alternative is that you always have to delete the load balancers via the CCM first and then trigger the cluster deletion, or that the cluster deletion gets stuck in the middle and someone has to manually clean up the load balancers in OpenStack. If we think the regular case is that users want to manually clean up the load balancers, we can make this opt-in instead of opt-out.
I've opened PR #844 with the currently used implementation. This is a draft, to be extended with e2e tests and further filters to prevent unwanted deletion of other objects.
/unassign
Should there be an additional selection criterion for orphaned load balancers other than the project ID? How can we identify the load balancers of the cluster that is to be deleted?
Does this just delete all the load balancers in the OpenStack project? What happens if there is more than one cluster in a project? Is the issue here that the network/router that we created can't be deleted because there are load balancers attached to it? In that case, it makes sense to me that the condition should be:
That means that:
That seems like sensible behaviour to me?
I am still looking into how to identify all the load balancers CAPO deployed. @jichenjc any comments?
@hidekazuna see this: the clusterName comes from
A trick we have used in our deployments is to apply a tag to resources created by CAPO. That might simplify your problem.
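A minimal sketch of that tag-based selection with gophercloud, purely illustrative: it assumes the load balancers were tagged at creation time, that `lbClient` is an already-authenticated Octavia service client, and that the gophercloud load balancer results expose the Octavia `tags` field.

```go
package cleanup

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

// loadBalancersWithTag lists the Octavia load balancers visible to lbClient
// and keeps only those carrying the given tag. The tag is assumed to have
// been applied when the load balancer was created.
func loadBalancersWithTag(lbClient *gophercloud.ServiceClient, tag string) ([]loadbalancers.LoadBalancer, error) {
	allPages, err := loadbalancers.List(lbClient, loadbalancers.ListOpts{}).AllPages()
	if err != nil {
		return nil, err
	}
	all, err := loadbalancers.ExtractLoadBalancers(allPages)
	if err != nil {
		return nil, err
	}
	var matched []loadbalancers.LoadBalancer
	for _, lb := range all {
		for _, t := range lb.Tags {
			if t == tag {
				matched = append(matched, lb)
				break
			}
		}
	}
	return matched, nil
}
```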
Agree, creating a tag is much simpler and should be considered a common approach, but this looks more like an OCCM question.
@jichenjc Thanks for the information.
This might become a big question: the cloud provider supports PV, PVC and other resource creation through Cinder, Manila, etc. I am not sure whether the CAPI cluster deletion action here will include those as well? I haven't tried, e.g., whether this will cascade or prevent cluster deletion.
CAPI cluster deletion does not delete service load balancers (it does delete the API server load balancer) or Cinder volumes provisioned for PVCs. One option for CAPI clusters would be to have CAPI clean up LoadBalancer services and PVCs before deleting a cluster.
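A minimal sketch of that clean-up option using client-go, purely illustrative and not something CAPI or CAPO do today: delete every Service of type LoadBalancer in the workload cluster while the CCM is still running, so it can release the corresponding Octavia load balancers. PVCs could be drained the same way, but as noted further down that risks deleting valuable data.

```go
package cleanup

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteLoadBalancerServices removes every Service of type LoadBalancer so
// that the still-running cloud provider releases the backing load balancers
// before the cluster itself is torn down.
func deleteLoadBalancerServices(ctx context.Context, client kubernetes.Interface) error {
	svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, svc := range svcs.Items {
		if svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
			continue
		}
		if err := client.CoreV1().Services(svc.Namespace).Delete(ctx, svc.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```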
Yes, I understand. So, as you indicated in PR #990, we should ask the CAPI community about the desired handling of resources created outside of the CAPI lifecycle.
I want to deploy k8s clusters in the same network and the same project for testing purposes. I do not want LBs to be deleted based on the network or the project.
Deleting by name
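For reference, the OCCM derives Service load balancer names from the cluster name configured in the cloud provider, using the default pattern `kube_service_<cluster-name>_<namespace>_<service-name>`. A minimal sketch of a name-based filter over an already-listed set of load balancers, assuming that default convention:

```go
package cleanup

import (
	"fmt"
	"strings"

	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

// serviceLoadBalancersFor filters an already-listed set of Octavia load
// balancers down to those whose name starts with the default OCCM prefix
// "kube_service_<cluster-name>_" for the given cluster.
func serviceLoadBalancersFor(all []loadbalancers.LoadBalancer, clusterName string) []loadbalancers.LoadBalancer {
	prefix := fmt.Sprintf("kube_service_%s_", clusterName)
	var matched []loadbalancers.LoadBalancer
	for _, lb := range all {
		if strings.HasPrefix(lb.Name, prefix) {
			matched = append(matched, lb)
		}
	}
	return matched
}
```

This only works if the cluster name from the OCCM cloud config is known to CAPO and operators have not overridden the naming, which is part of the brittleness concern raised below.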
I guess that deleting PVs is out of scope. PVs might contain important data and they should probably never be deleted automatically or as part of a garbage collection. Load balancers can be recreated at any time. In addition, orphaned load balancers created by the OCCM are somewhat expected, and they block network deletion.
For CAPO it would be good to rely on CAPI to clean up the LoadBalancer services. I don't know if there is any provider where the load balancer is independent of the cluster network itself... I'll have to ask the CAPI developers.
I still think that this seems like reasonable behaviour, right? If CAPO created the network for a cluster and wants to remove it when the cluster is deleted, then it should be assumed that it is OK to remove anything blocking that deletion, e.g. load balancers. If CAPO did not create the network, then it doesn't need to delete the network as part of cluster deletion and we don't need to worry about it.
Just worried we are over-complicating things, and making ourselves brittle, by assuming naming conventions from the CCM...
@mkjpryor Sounds reasonable, indeed. How can we identify whether CAPO created the network? Any suggestions?
We already have a gate that decides whether we should delete the network or not. If we are deleting the network, we should delete the load balancers attached to it first.
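A minimal sketch of that ordering with gophercloud, purely illustrative: cascade-delete every load balancer whose VIP sits on the cluster network before the network itself is deleted. It assumes `lbClient` is an Octavia service client and that the load balancer results carry the VIP network ID.

```go
package cleanup

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

// deleteLoadBalancersOnNetwork cascade-deletes every load balancer whose VIP
// port is on the given network, so that the network itself can be deleted
// afterwards. Cascade deletion also removes listeners, pools and members.
func deleteLoadBalancersOnNetwork(lbClient *gophercloud.ServiceClient, networkID string) error {
	allPages, err := loadbalancers.List(lbClient, loadbalancers.ListOpts{}).AllPages()
	if err != nil {
		return err
	}
	all, err := loadbalancers.ExtractLoadBalancers(allPages)
	if err != nil {
		return err
	}
	for _, lb := range all {
		if lb.VipNetworkID != networkID {
			continue
		}
		if err := loadbalancers.Delete(lbClient, lb.ID, loadbalancers.DeleteOpts{Cascade: true}).ExtractErr(); err != nil {
			return err
		}
	}
	return nil
}
```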
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
/lifecycle stale
/remove-lifecycle stale
Time is a bit scarce at the moment, but I hope to work on this very soon. If someone would like to take over, please do not hesitate to do so.
I would like something like that, but I'm not sure how much extra logic and code such a feature would add. A different approach, which doesn't require support from CAPO, is for users to use the
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
/lifecycle stale
xref: CAPA has a similar feature: https://cluster-api-aws.sigs.k8s.io/topics/external-resource-gc.html
For CAPO and CAPVCD, we use our own cleaner operator for this issue. I wish we had a flag in CAPO like CAPA's.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/kind feature
Currently, we only delete the kube-apiserver load balancer created by Cluster API. Usually, there are additional load balancers created by the CCM. I would also like to delete those on cluster deletion. They cannot be deleted by the CCM, because the CCM is not running anymore. The cluster deletion gets blocked because the network etc. cannot be deleted if there are orphaned CCM LBs left.
Notes: