
Add an annotation for the Load Balancer service instead of the deprecated field LoadBalancerIP #371

Closed
yang-wang11 opened this issue Sep 26, 2022 · 13 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@yang-wang11

yang-wang11 commented Sep 26, 2022

As of Kubernetes 1.24, the field LoadBalancerIP is deprecated; see here.
We need to introduce a new load balancer annotation for the Service to avoid the Service becoming unavailable in the future.

#371 (comment)

@yang-wang11
Author

yang-wang11 commented Sep 27, 2022

The EnsureLoadBalancer method of the LoadBalancer interface still uses the field LoadBalancerIP instead of a service annotation.

I took a quick look at the code. From my perspective, both the InternalLoadBalancer and the ExternalLoadBalancer paths should be considered; see the code below.

case cloud.SchemeInternal:
status, err = g.ensureInternalLoadBalancer(clusterName, clusterID, svc, existingFwdRule, nodes)
default:
status, err = g.ensureExternalLoadBalancer(clusterName, clusterID, svc, existingFwdRule, nodes)
}

InternalLoadBalancer related code

ipToUse := ilbIPToUse(svc, existingFwdRule, subnetworkURL)

ExternalLoadBalancer related code

requestedIP := apiService.Spec.LoadBalancerIP
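
For illustration only, here is a minimal sketch of how both paths could prefer a Service annotation over the deprecated field. The annotation name example.com/load-balancer-ip is hypothetical; the real name would still need to be agreed on by the project owners.

import v1 "k8s.io/api/core/v1"

// annotationLoadBalancerIP is a hypothetical annotation name used only for
// this sketch; the actual name requires agreement among the project owners.
const annotationLoadBalancerIP = "example.com/load-balancer-ip"

// requestedLoadBalancerIP prefers the annotation and falls back to the
// deprecated Spec.LoadBalancerIP field for backwards compatibility.
func requestedLoadBalancerIP(svc *v1.Service) string {
	if ip := svc.Annotations[annotationLoadBalancerIP]; ip != "" {
		return ip
	}
	return svc.Spec.LoadBalancerIP
}

A Service annotated with example.com/load-balancer-ip: 203.0.113.10 would then keep its requested address even without setting spec.loadBalancerIP.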

@zetaab
Member

zetaab commented Oct 13, 2022

We need to introduce a new load balancer annotation for the Service to avoid the Service becoming unavailable in the future.

What is this LB annotation called? Is there information/documentation somewhere? We have a similar issue in cloud-provider-openstack.

@yang-wang11
Author

@zetaab the annotation name is defined by the cloud provider owners themselves; below is the reference for AWS.
https://github.com/kubernetes/cloud-provider-aws/blob/eb83663f3edd341f6923e54907bc7d1a50e526a6/pkg/providers/v1/aws.go#L228

I noticed that Azure already fixed this issue in the PR below; I hope it helps.
kubernetes-sigs/cloud-provider-azure#2428
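
For reference, providers expose this behaviour as exported annotation constants in their cloud-provider package and read them in the load balancer reconcile path. The snippet below only illustrates that pattern with two well-known AWS annotations; see the linked files for the authoritative definitions.

// Pattern used by existing cloud providers: Service annotations are declared
// as package-level constants and looked up on the Service object.
const (
	// Requests an internal (private) load balancer.
	ServiceAnnotationLoadBalancerInternal = "service.beta.kubernetes.io/aws-load-balancer-internal"
	// Selects the load balancer implementation (e.g. "nlb").
	ServiceAnnotationLoadBalancerType = "service.beta.kubernetes.io/aws-load-balancer-type"
)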

@aojea
Member

aojea commented Nov 18, 2022

I should have commented here before going to the PR.
#394 (comment)

Let me clarify first to avoid unnecessary panic and confusion: the field service.Spec.LoadBalancerIP is not going to be removed until there is no risk of breaking clients. Moving to annotations as soon as possible is encouraged, of course.

The idea is to move most of the load balancer functionality to https://github.com/kubernetes/ingress-gce, and the annotation names are something that requires discussion between the projects.

@yang-wang11
Author

@aojea
I am confused by the comment below. If I understand Lars's words correctly, he expresses a completely different idea from yours.

the field service.Spec.LoadBalancerIP is not going to be removed until there is no risk of breaking clients

[screenshot of the referenced comment]

@aojea
Member

aojea commented Nov 18, 2022

scroll down

[screenshot of the rest of the referenced comment]

See API changes

https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-changes

Elimination of resources or fields requires following the API deprecation policy.

See the rules

https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api

Fields of REST resources
As with whole REST resources, an individual field which was present in API v1 must exist and function until API v1 is removed. Unlike whole resources, the v2 APIs may choose a different representation for the field, as long as it can be round-tripped. For example a v1 field named "magnitude" which was deprecated might be named "deprecatedMagnitude" in API v2. When v1 is eventually removed, the deprecated field can be removed from v2.

@aojea
Member

aojea commented Nov 18, 2022

Cloud provider code depends on cloud provider internals. The PRs you linked were done by people from those organizations (AWS, or Microsoft for Azure) because they had already reached a consensus on which annotation they want to support. That is no different here: as I explained elsewhere, defining an annotation creates a contract with users and needs to be carefully planned.

@aojea
Member

aojea commented Nov 18, 2022

However, this issue is legit and we should not forget about it. Thanks for reporting it.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 20, 2023
@hobti01

hobti01 commented Apr 13, 2023

The idea is to move most of the loadbalancer functionality to https://github.com/kubernetes/ingress-gce...

@aojea Does this mean that enhancements to the Load Balancer functionality in GKE should be requested within ingress-gce or here?

Based on the comments in ingress-gce, that codebase is focused on L7 ingress.

@aojea
Member

aojea commented Apr 14, 2023

@aojea Does this mean that enhancements to the Load Balancer functionality in GKE should be requested within ingress-gce or here?

We removed the confusing part from the docs in kubernetes/kubernetes#117051. This field is not going to be removed, so I'm going to close this to avoid generating more confusion.

/close

Based on the comments in ingress-gce, that codebase is focused on L7 ingress.

😉 kubernetes/ingress-gce#1825

@k8s-ci-robot
Contributor

@aojea: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
