AWS NLBs linger after they're orphaned #1718
/milestone v0.5.x
@vincepri: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
It is probably safe to assume that we will need an additional elbv2 service to handle the cleanup of the NLB resources.
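A minimal sketch of what such an elbv2 cleanup pass could look like, assuming aws-sdk-go v1 and assuming the cloud provider tags its load balancers with the usual `kubernetes.io/cluster/<name>` ownership tag; the `deleteOrphanedNLBs` helper and the hard-coded cluster name are illustrations, not CAPA's actual implementation:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// deleteOrphanedNLBs removes every v2 load balancer (NLB/ALB) that carries
// the cloud provider's ownership tag for the given cluster. Hypothetical
// helper for illustration only.
func deleteOrphanedNLBs(svc *elbv2.ELBV2, clusterName string) error {
	// Note: only the first page is read here; a real pass would paginate.
	out, err := svc.DescribeLoadBalancers(&elbv2.DescribeLoadBalancersInput{})
	if err != nil {
		return err
	}
	ownershipTag := "kubernetes.io/cluster/" + clusterName

	for _, lb := range out.LoadBalancers {
		tags, err := svc.DescribeTags(&elbv2.DescribeTagsInput{
			ResourceArns: []*string{lb.LoadBalancerArn},
		})
		if err != nil {
			return err
		}
		for _, desc := range tags.TagDescriptions {
			for _, tag := range desc.Tags {
				if aws.StringValue(tag.Key) != ownershipTag {
					continue
				}
				fmt.Printf("deleting %s\n", aws.StringValue(lb.LoadBalancerArn))
				if _, err := svc.DeleteLoadBalancer(&elbv2.DeleteLoadBalancerInput{
					LoadBalancerArn: lb.LoadBalancerArn,
				}); err != nil {
					return err
				}
			}
		}
	}
	return nil
}

func main() {
	sess := session.Must(session.NewSession())
	if err := deleteOrphanedNLBs(elbv2.New(sess), "my-cluster"); err != nil {
		panic(err)
	}
}
```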
The issue is that you essentially need to drain the workload cluster of Services so that the cloud provider tears down the NLBs. It's not specifically a CAPA problem IMHO; the alternative is to add logic to CAPA to handle resources created by the cloud provider integration. Worth adding to the agenda for the meeting, as I'm a bit wary of crossing responsibility boundaries here.
@randomvariable that is a good point. We could remove the current Classic ELB cleanup we do in favor of moving the core functionality into core Cluster API, where we could delete all Services with Type=LoadBalancer prior to deletion of a given Cluster. That would then cover any similar issues that arise with other infrastructure providers as well.
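A rough sketch of that drain step against the workload cluster, using client-go; the `drainLoadBalancerServices` helper is a hypothetical name, and where exactly it would hook into the Cluster deletion flow is an open question:

```go
package drain

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// drainLoadBalancerServices deletes every Service of type LoadBalancer in
// the workload cluster so the cloud provider tears down the backing
// ELBs/NLBs before the cluster infrastructure itself is deleted.
// Hypothetical helper for illustration only.
func drainLoadBalancerServices(ctx context.Context, cs kubernetes.Interface) error {
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, svc := range svcs.Items {
		if svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
			continue
		}
		if err := cs.CoreV1().Services(svc.Namespace).Delete(ctx, svc.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```

Deleting the Services only starts the teardown; the deletion flow would still have to wait until the cloud provider has actually released the load balancers before removing the cluster's infrastructure.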
That could be good. Did suggest to @nckturner, @justinsb, @andrewsykim that we could use the test framework to set up CI for the cloud provider repo. Having CAPI take care of auto-deleting Services of type LoadBalancer would be a neat trick.
We forgot to discuss this in yesterday's meeting. I'll file an issue against Cluster API and add it to the agenda.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/lifecycle frozen
Closing in favor of kubernetes-sigs/cluster-api#3075
@sedefsavas - this issue is impacting some customers, so I'm going to reopen with this plan in mind:
/reopen |
@richardcase: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/priority critical-urgent
/triage accepted
Just tested this scenario and it does occur: the cluster deletion gets stuck, and the load balancer still exists.
Looks like we only clean up the CCM-created Classic ELBs but not the NLBs.
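For context, Classic ELBs and NLBs live behind different AWS APIs, so a cleanup pass written only against the legacy elb client never sees NLBs. A small illustration, assuming aws-sdk-go v1:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

func main() {
	sess := session.Must(session.NewSession())

	// Classic ELBs are returned only by the legacy ELB API...
	classic, err := elb.New(sess).DescribeLoadBalancers(&elb.DescribeLoadBalancersInput{})
	if err != nil {
		panic(err)
	}

	// ...while NLBs (and ALBs) are returned only by the ELBV2 API.
	v2, err := elbv2.New(sess).DescribeLoadBalancers(&elbv2.DescribeLoadBalancersInput{})
	if err != nil {
		panic(err)
	}

	fmt.Printf("classic: %d, v2: %d\n",
		len(classic.LoadBalancerDescriptions), len(v2.LoadBalancers))
}
```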
@sedefsavas - spot on :)
/milestone v1.5.0
/kind bug
What steps did you take and what happened:
Workload clusters are able to create Network Load Balancers instead of Classic ELBs by adding the annotation
service.beta.kubernetes.io/aws-load-balancer-type: nlb
to a Service (read more). We need to identify those load balancers by tag and delete them as well when the associated workload cluster is destroyed. Referenced here in a conversation on the topic.
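For reference, a Service that provisions an NLB rather than a Classic ELB looks roughly like this when built with client-go types; the name, selector, and ports below are placeholders:

```go
package repro

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// nlbService builds a Service that the AWS cloud provider fulfils with an
// NLB instead of a Classic ELB, via the aws-load-balancer-type annotation.
func nlbService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "demo",
			Namespace: "default",
			Annotations: map[string]string{
				"service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "demo"},
			Ports: []corev1.ServicePort{
				{Port: 80, TargetPort: intstr.FromInt(8080)},
			},
		},
	}
}
```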
What did you expect to happen:
NLBs are deleted alongside the cluster.
Environment:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):