
KEP ipv4-ipv6-dual-stack; added service.spec.loadBalancerIPs #1992

Closed
wants to merge 3 commits

Conversation

uablrek

@uablrek uablrek commented Sep 18, 2020

service.spec.loadBalancerIPs

The current service.spec.loadBalancerIP is single-stack (a string), but for dual-stack it must be possible to define load-balancer addresses for both families, so a new field is introduced:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  ipFamilyPolicy: PreferDualStack
  loadBalancerIPs:
  - 10.0.0.1
  - "1000::1"

Since there was a discussion at the sig/network meeting on Thu Sep 17 about validation, I include the text from the PR here for further comments:

While the spec.loadBalancerIPs array has become necessary for dual-stack, the field has a broader use. Kubernetes does not use this field; it is a hint to the cloud-provider that may or may not be obeyed. For instance, it is a valid case for the user to request 3 IPv6 load-balancer addresses and no IPv4, or to specify both IPv4 and IPv6 addresses in a single-stack service and let the cloud-provider decide which one to use depending on the main family.

Therefore Kubernetes will not do the same validation on spec.loadBalancerIPs as for its own dual-stack fields: "two addresses, one of each type". The only validation for spec.loadBalancerIPs is that the strings are valid IP addresses.
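For example, under that minimal validation a manifest like the following, a hypothetical request for three IPv6 load-balancer addresses and no IPv4, would be accepted; whether the request is honored is entirely up to the cloud-provider:

```yaml
# Hypothetical illustration: three IPv6 load-balancer addresses and no IPv4.
# Kubernetes would only check that each string is a valid IP address.
apiVersion: v1
kind: Service
metadata:
  name: my-v6-service
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  loadBalancerIPs:
  - "1000::1"
  - "1000::2"
  - "1000::3"
```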

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Sep 18, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: uablrek
To complete the pull request process, please assign dcbw
You can assign the PR to them by writing /assign @dcbw in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory size/M Denotes a PR that changes 30-99 lines, ignoring generated files. sig/network Categorizes an issue or PR as relevant to SIG Network. labels Sep 18, 2020
@uablrek
Author

uablrek commented Sep 18, 2020

/assign @thockin

@uablrek
Author

uablrek commented Sep 18, 2020

/cc @khenidak

on the main family.

Therefore kubernetes will not do the same validation on `spec.loadBalancerIPs` as for
it's own dual-stack fields: "two addresses, one of each type". The only validation
Contributor

Suggested change
it's own dual-stack fields: "two addresses, one of each type". The only validation
its own dual-stack fields: "two addresses, one of each type". The only validation

that may or may not be obeyed. For instance it is a valid case for the user to request
3 IPv6 load-balancer addresses and no IPv4, or to specify both IPv4 and IPv6 addresses
in a single-stack service and let the cloud-provider decide which one to use depending
on the main family.
Contributor

Is it assumed that regardless of the number of requested IPs, that the cloud provider will assign at most one IPv4 IP and one IPv6 IP?

Is there really a use case for requesting multiple load balancer IPs like this? How is it that the user knows that 1.2.3.4 and 5.6.7.8 are suitable load balancers, but 9.10.11.12 is not? Why does the cloud provider not know this? It seems like there's secretly some additional information here (eg, topology) and it might be nicer to expose that directly.

This also seems like something that Gateway, etc, might be able to handle better than Service does, in which case maybe it would be better to leave this unsolved in Service?

Author
@uablrek uablrek Sep 21, 2020

Is it assumed that regardless of the number of requested IPs, that the cloud provider will assign at most one IPv4 IP and one IPv6 IP?

No, it may differ between cloud-providers, I suppose. I have already got queries about multiple load-balancer addresses for one service, so I suppose someone has a use-case (I didn't ask). It was easily solved by defining multiple services, so there are other ways.

I don't know how "any number" of loadBalancerIPs may be used, but I am sure somebody will have a use-case, and IMHO a K8s-imposed constraint is unnecessary. This is a user/cloud-provider contract; I think K8s should keep out of it.

But I understand the argument that usage may diverge in an undesirable way between cloud-providers if too much freedom is given. I am fine with a two-address/ipv4/ipv6 constraint, so I will not argue.

For me it's more important to get loadBalancerIPs accepted so I can start implementing it in K8s (if nobody else wants to) and in metallb. We have a use-case for ipv4/ipv6 addresses.

Author

This also seems like something that Gateway, etc, might be able to handle better than Service does, in which case maybe it would be better to leave this unsolved in Service?

I don't understand this. Do you suggest not adding loadBalancerIPs? If so: I think this is mostly needed for private clouds where there is no K8s-aware cloud infrastructure with its own config setup, but the next step is K8s-unaware routers (GWs). The possibility to specify external addresses (loadBalancerIPs) is a requirement. Now, this could be solved with annotations, but then again almost everything can, and it's never pretty.

Author
@uablrek uablrek Sep 21, 2020

But for any cloud-provider currently supporting the loadBalancerIP field, a loadBalancerIPs field for dual-stack services must be a wanted feature.

Contributor

I don't know how "any number" of loadBalancerIPs may be used, but I am sure somebody will have a use-case, and IMHO a K8s-imposed constraint is unnecessary. This is a user/cloud-provider contract; I think K8s should keep out of it.

But I understand the argument that usage may diverge in an undesirable way between cloud-providers if too much freedom is given. I am fine with a two-address/ipv4/ipv6 constraint, so I will not argue.

I'm not objecting to either way, but I think the semantics need to be well-defined. We don't want loadBalancerIPs: [ 1.2.3.4, 5.6.7.8 ] to mean "I want both 1.2.3.4 and 5.6.7.8" on some cloud providers, but "I want 1.2.3.4, or if I can't have that, then I want 5.6.7.8 instead" on others.

I don't understand this. Do you suggest not adding loadBalancerIPs? If so: I think this is mostly needed for private clouds where there is no K8s-aware cloud infrastructure with its own config setup, but the next step is K8s-unaware routers (GWs).

OK... it still feels a bit weird to have this essentially only be there for the dumb-private-cloud case. It seems like it would be more k8s-like to match Services to load balancers based on labels or something...

Though I guess it's not really any lower-level than externalIPs

Author

The "dumb-private-cloud case" would be all installations without a surrounding K8s aware environment with some cloud-control-manager. Basically all K8s installations not bought or sold and including all "metallb" users. While always a minority in number of installed cluster I don't think those should be dismissed so easily.

But the general use-case must be;
Any installations now using spec.loadBalancerIP in some way must have the possibility to use the same function for dual-stack services.

With this in mind, a constraint of two addresses, one of each family is fine.

Contributor

Yes, I wasn't saying we shouldn't support that use case; I was just saying that it would be nicer if there were some way to support it well without having to add new fields that are only used for that one use case.

Member

I'm not sure how I feel about this loosening. Why?

Author

The status.loadBalancer.ingress field has always been an array. I consider making loadBalancerIP singular to have been an oversight; it should have been an array long before dual-stack, for symmetry. IMHO loadBalancerIPs is not a dual-stack thing (but has become important due to dual-stack) and should not have the constraints that apply to the real dual-stack fields such as clusterIPs. Not even the "loadBalancerIP=loadBalancerIPs[0]" rule, nor the upgrade/downgrade complexity of those fields. loadBalancerIPs should be ignored by K8s, precisely as loadBalancerIP is today.

But as said, this is my opinion and I am fine with the dual-stack constraints for loadBalancerIPs. However, I would like this to be decided before I update the PR: should a loadBalancerIPs field be added? Should it have the same constraints as clusterIPs?

If multiple status.loadBalancer.ingress addresses are specified, K8s will set up rules in iptables/ipvs for those addresses. This works already.

@aojea
Member

aojea commented Sep 22, 2020

I'm in favor of this because:

  • this field was not validated before; can we break someone by adding validation now? I think some people were already using it to deploy a NAT64 load balancer (requesting an IPv6 LB address for an IPv4 service)
  • the field validation seems to be a load-balancer provider responsibility only; you can also request invalid IP addresses now.
	// Only applies to Service Type: LoadBalancer
	// LoadBalancer will get created with the IP specified in this field.
	// This feature depends on whether the underlying cloud-provider supports specifying
	// the loadBalancerIP when a load balancer is created.
	// This field will be ignored if the cloud-provider does not support the feature.
	// +optional
	LoadBalancerIP string
  • I could not come up with any cons against doing it 😄

@uablrek
Author

uablrek commented Sep 24, 2020

Here is an example of an annotation work-around: metallb/metallb#727 (comment)

port: 80
targetPort: 9376
ipFamilyPolicy: PreferDualStack
loadBalancerIPs:
Contributor
@khenidak khenidak Oct 2, 2020

Can I ask you to lay out the transformation rules for singular->plural? Specifically:

  1. Confirm what will happen to existing services.
  2. What happens in core version?
  3. How will the fields be validated (and mutated)?
  4. Can a user mutate an LB type service (to ClusterIP or other types)? If so, what happens to the field values (who resets what)? And how would a cloud provider know about the old reset values (if applicable)?

Author

  1. Existing services are single-stack. loadBalancerIP -> loadBalancerIPs[0] (if set)
  2. What is "core version"?
  3. That's the question. I wanted the same validation (or lack thereof) as for loadBalancerIP: only address validation. The other alternative is 1-2 addresses; if 2, then one of each family.
  4. This question must exist in exactly the same way for the existing loadBalancerIP. I assume the same rules for loadBalancerIPs. I suspect the field is left as-is.

Author

Probably a check should be added that a user does not specify both loadBalancerIP and loadBalancerIPs in the service manifest; at the very least they must not contradict each other, i.e. loadBalancerIP != loadBalancerIPs[0] should be rejected.

Member

So we THINK we have the protocol for pluralizing a field pinned down. This could be a good test of it.

  1. add the plural field to the type, leave the old field
    1a) document that when specified, plural[0] must be the same as singular
  2. add logic to "normalize" the two fields in the context of a create vs update
    2a) see NormalizeClusterIPs() as an example
  3. call this normalization from defaultOnReadService()
  4. call this normalization from PrepareForCreate() and PrepareForUpdate()
  5. add validation that singular and plural[0] are in sync

PTAL at kubernetes/kubernetes#95894 which makes default-on-read a thing for ClusterIPs.
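A rough sketch of what that protocol would mean for a user-facing manifest, assuming loadBalancerIPs follows the ClusterIP/ClusterIPs precedent (the normalization behavior here is an assumption based on the steps above, not something decided in this PR):

```yaml
# Sketch only: the user sets just the old singular field ...
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.0.1
# ... and normalization on create/update/read keeps the plural in sync,
# with plural[0] equal to the singular (validation rejects a mismatch):
#   loadBalancerIP: 10.0.0.1
#   loadBalancerIPs:
#   - 10.0.0.1
```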

address to be specified. A new multi value `spec.loadBalancerIPs` is added;

```yaml
apiVersion: v1
Contributor

This example needs a bit more work. If it is PreferDualStack on a dual-stack cluster then it will have both ClusterIP and ClusterIPs set. This is important for LBs that forward LB->ServiceIP or LB-IP.

Author

I can't see the connection between clusterIP(s) and the load-balancer IP. No external load-balancer should balance to clusterIPs; it should balance either to the node-port or to the status.loadBalancer.ingress addresses.

port: 80
targetPort: 9376
ipFamilyPolicy: PreferDualStack
loadBalancerIPs:
Contributor

Add existing loadBalancerIP
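A sketch of what that suggestion would look like in the KEP example, assuming the existing singular field is kept and must equal the first entry of the plural:

```yaml
# Sketch: the example fragment with the existing loadBalancerIP shown
# alongside the new loadBalancerIPs (assuming loadBalancerIP == loadBalancerIPs[0]).
  ipFamilyPolicy: PreferDualStack
  loadBalancerIP: 10.0.0.1
  loadBalancerIPs:
  - 10.0.0.1
  - "1000::1"
```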

@uablrek
Author

uablrek commented Oct 16, 2020

I saw the comment from the sig/network meeting that this PR needs an update for review comments. I could not attend this time myself 😞

I cannot, however, see any consensus about validation and how to handle upgrade/rollback. Before I update the PR I would like that to be settled.

My opinion is that K8s should validate that the addresses are valid IP addresses and nothing else. No automatic fill-in/removal of loadBalancerIPs at all. As @aojea pointed out, this is consistent with the contract for loadBalancerIP, which may be used to configure NAT64 with an IPv6 address in an IPv4 cluster, for instance. But...

Maybe loadBalancerIPs (and loadBalancerIP) should be replaced with annotations?

@danwinship expressed concerns about different interpretations of loadBalancerIPs (and loadBalancerIP) between cloud-providers and that the external address could be specified in a better place than the service manifest. For me that means in the cloud-provider setup, some place outside K8s, which makes sense.

I know annotations are not usually popular, but this may be one case where they are better. This is already a corner case because the normal configuration is already outside K8s, so there is no risk of an "explosion" of annotations (like for Ingress). And an annotation is specific, so the interpretation is defined by the cloud-provider.

If that can be agreed, I will withdraw my loadBalancerIPs proposal and replace it with a comment about annotations for load-balancer IPs for dual-stack. Probably with the example from metallb: metallb/metallb#727 (comment).
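For illustration, such an installation-specific annotation could take a shape like the following; the annotation key here is hypothetical, and the concrete name and format would be defined and documented by each implementation:

```yaml
# Hypothetical annotation-based alternative. The key is made up for
# illustration; an implementation such as metallb would define its own.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    example.org/load-balancer-ips: "10.0.0.1,1000::1"
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
```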

@danwinship
Contributor

@danwinship expressed concerns about different interpretations of loadBalancerIPs (and loadBalancerIP) between cloud-providers and that the external address could be specified in a better place than the service manifest.

There were two separate points there:

  1. It was not initially clear to me that it made sense to allow the user to specify load balancer IPs explicitly at all.
  2. But if we're going to allow the user to specify load balancer IPs, we should make it work the same way everywhere, so that samples/docs/etc don't end up having to say stuff like "if you're using MetalLB, then add loadBalancerIPs like this, but if you're using vSphere, then add loadBalancerIPs like this" or whatever.

But I think you addressed my comments on the first point, so I don't think there's any reason to go with annotations. (Unless there's no good way to address the second point and we really do need different behavior between MetalLB and vSphere, etc, in which case maybe annotations would be better.)

@uablrek
Author

uablrek commented Oct 19, 2020

I first thought loadBalancerIPs was natural because status.loadBalancer.ingress is an array. Kind of a "symmetry" reason. But considering the interpretations, I have reconsidered. IMO it should be a free contract between the cloud-provider and the user:

We don't want loadBalancerIPs: [ 1.2.3.4, 5.6.7.8 ] to mean "I want both 1.2.3.4 and 5.6.7.8" on some cloud providers, but "I want 1.2.3.4, or if I can't have that, then I want 5.6.7.8 instead" on others.

I think both are valid use-cases. And considering that, annotations are probably better.

If loadBalancerIPs is introduced with the constraints:

  • Max 2 addresses
  • If 2 addresses are specified, there must be one IPv4 and one IPv6

then I would like to avoid any automatic settings on upgrade. The loadBalancerIP fields are still never used by K8s, and a cloud-provider will use loadBalancerIP as before (if it's used at all) and move to loadBalancerIPs for dual-stack services in due time. On a downgrade (dual-stack feature-gate=off), perhaps loadBalancerIPs may be nil'ed.

Either way I think the KEP must be updated. Users and cloud-providers that are now using loadBalancerIP would like to know what happens for a dual-stack service of type:LoadBalancer.

@thockin
Member

thockin commented Oct 26, 2020

So now that dual-stack is merged, I have bandwidth to think about this.

I agree that it's probably not ideal that this is part of the API. But it is. So pluralizing it seems appropriate. I'll comment more on the file.

@thockin
Member

thockin commented Oct 27, 2020 via email

@aojea
Member

aojea commented Oct 27, 2020

Since Lars seems OK with one or the other option, it would be nice to have some feedback from cloud providers; they are the main stakeholders here... if they end up not using the plural it is just not worth the effort IMHO

@uablrek
Author

uablrek commented Oct 31, 2020

From #1992 (comment):

The status.loadBalancer.ingress field has always been an array. I consider making loadBalancerIP singular to have been an oversight; it should have been an array long before dual-stack, for symmetry. IMHO loadBalancerIPs is not a dual-stack thing (but has become important due to dual-stack) and should not have the constraints that apply to the real dual-stack fields such as clusterIPs. Not even the "loadBalancerIP=loadBalancerIPs[0]" rule, nor the upgrade/downgrade complexity of those fields. loadBalancerIPs should be ignored by K8s, precisely as loadBalancerIP is today.

If I phrase it this way: if loadBalancerIP had been pluralized from the start (since status.loadBalancer.ingress is an array), would we then put additional constraints on it when dual-stack is introduced?

spec the expectations when someone uses the old field

The "someones" here are not normal end-users; they are cloud-providers. I think it is reasonable to expect a little more from them. The users of loadBalancerIP today (only metallb users?) will have to read up on dual-stack services and create new ones; enforcing loadBalancerIP=loadBalancerIPs[0] will not make anything clearer for them, I think.

But the important question to get an answer to first is the one @aojea wrote above (#1992 (comment)):

it would be nice to have some feedback from cloud providers; they are the main stakeholders here... if they end up not using the plural it is just not worth the effort IMHO

@thockin
Member

thockin commented Nov 3, 2020

If someone had tried to pluralize it from the start, I would have said "that implies that you know something about the implementation, which really would suggest this should be an annotation instead of a field", and probably shot down the whole proposal. As a singular, it seems pretty clear (to me) that this means "use this LB IP for my singular LB, if possible".

So if we are to pluralize this, I think the plural means the same thing as plural clusterIPs. That's what I think is least surprising, anyway...

@deitch

deitch commented Jun 21, 2021

I am trying to get my head around this. So this will not be adopted? I see what @thockin is saying, that having everything else be dual-stack capable but spec.loadBalancerIP be single-stack is quite off. I also get your point, though, that maybe that should be in an annotation.

You do reference metallb; metallb actually does look at spec.loadBalancerIP, see here, where it says:

MetalLB respects the spec.loadBalancerIP parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter.

I manage the Equinix Metal CCM, and we have been building on that as a fairly accepted standard way to specify "I would like this IP, please" when creating a Service of type=LoadBalancer. To my mind, that is why it does make some sense for it to live inside the spec.
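For reference, the existing single-stack pattern those implementations build on looks roughly like this; the address is illustrative:

```yaml
# Illustrative single-stack example of the existing field that MetalLB (and
# CCMs building on the same convention) honor: request a specific LB address.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  loadBalancerIP: 192.0.2.10
```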

@uablrek
Author

uablrek commented Jun 22, 2021

You do reference metallb; metallb actually does look at spec.loadBalancerIP,

Yes, I know. And more to the point, I refer to a PR that actually implements the same for dual-stack but with an annotation; please see metallb/metallb#727 (comment)

@deitch

deitch commented Jun 22, 2021

Yup, I saw. Do you think it will make it in?

I will say, the part that concerns me a bit about the annotation - besides @thockin 's point above about it already existing so it should be made to match, not conflict - is that Spec.LoadBalancerIP has become a semi-universal way to say in the spec, "this is the IP I want" (or, as we are heading to, "these are the IPs I want"). As soon as it becomes an implementation-specific annotation, it becomes "the metallb-way of specifying IPs" and "the kube-vip way of specifying IPs" and "the ELB way of specifying IPs" etc. There is something very nice (and appropriate, IMO, but I understand the other side) about someone being able to say, "I am creating a service, I want it to have these IPs, let the backend figure it out."

@uablrek
Author

uablrek commented Jun 22, 2021

The problem, as I understand it, is that it was too loosely specified in the first place. Before K8s v1.18 (I think) the string was not even checked; you could write a comma-separated list of addresses in spec.loadBalancerIP. I did, and it worked. And I have heard that some IPv4 installations write an IPv6 address in spec.loadBalancerIP to configure NAT64.
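A purely illustrative sketch of that NAT64 hint; the well-known NAT64 prefix 64:ff9b::/96 and the embedded address are assumptions, not taken from any specific installation:

```yaml
# Illustrative only: an IPv6 address in the singular loadBalancerIP field of
# a Service on an IPv4-only cluster, used as a NAT64 hint. Kubernetes itself
# ignores the value; only the load-balancer implementation interprets it.
spec:
  type: LoadBalancer
  loadBalancerIP: "64:ff9b::192.0.2.10"
```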

That spec.loadBalancerIP is already interpreted differently in different installations makes it more like a "wildcard" annotation than a well-defined K8s function. You can't trust it to mean the same thing when you move your app to a new environment. In that respect I think an annotation is more suitable. It is installation-specific.

The proposed spec.loadBalancerIPs seems even more uncertain than spec.loadBalancerIP when reading the comments in this PR.

@uablrek
Author

uablrek commented Jun 22, 2021

About the metallb PR, I don't know. I like my first implementation better since it does not mess with the existing (and very well tested) code. But the maintainers of metallb wanted to interleave the dual-stack code with the old code. IMHO it becomes messy and could easily add some bugs. I made half of it and asked for comments, and I am still waiting.

We have resorted to using 2 services, one IPv4 and one IPv6, for external access. From an outside client's perspective it doesn't matter.

@deitch

deitch commented Jun 22, 2021

That spec.loadBalancerIP is already interpreted differently in different installations makes it more like a "wildcard" annotation than a well-defined K8s function. You can't trust it to mean the same thing when you move your app to a new environment. In that respect I think an annotation is more suitable. It is installation-specific.

The proposed spec.loadBalancerIPs seems even more uncertain than spec.loadBalancerIP when reading the comments in this PR.

That makes a lot of sense. You kind of almost want to have a list in Spec that looks like this (I prefer YAML over the Go struct):

spec:
  type: LoadBalancer
  ports:
  - nodePort: 31868
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  cluster:
  - family: IPv4
    ip: 192.168.150.149
  - family: IPv6
    ip: "2001::8a2e:0123:4567"
  loadBalancer:
  - family: IPv4
    ip: 99.88.77.66
  - family: IPv6
    ip: "1999::8a2e:0123:4567"

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 20, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 20, 2021
@flokli

flokli commented Oct 20, 2021

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 20, 2021
@thockin
Member

thockin commented Dec 10, 2021

Do we have new thinking or information to re-evaluate this proposal?

@uablrek
Author

uablrek commented Dec 11, 2021

Do we have new thinking or information to re-evaluate this proposal?

I haven't. I am still in favor of dropping this in favor of annotations, please see #1992 (comment).

I think few cloud-providers (none?) will expose an external address defined by the user in this way. What remains is metallb and custom implementations, and they are implementation-specific, so an annotation is appropriate IMO.

@uablrek
Author

uablrek commented Dec 11, 2021

I think few cloud-providers (none?) will expose an external address defined by the user in this way.

At least not without confirming that the address is legitimate via some cloud configuration interface. And then spec.loadBalancerIPs would serve no purpose, since it must be configured in the cloud anyway.

@thockin
Member

thockin commented Dec 17, 2021

Perhaps we should deprecate this field - not remove it, but document that its meaning is implementation specific. Then, if we really have demand, we can add a net new field (or fields) with clearer semantics.

What do you think?

@uablrek
Author

uablrek commented Dec 18, 2021

Perhaps we should deprecate this field - not remove it, but document that its meaning is implementation specific. Then, if we really have demand, we can add a net new field (or fields) with clearer semantics.

What do you think?

I think that is the best way, yes.

@thockin
Member

thockin commented Dec 21, 2021

Do you want to make a PR for that, or shall I?

@uablrek
Author

uablrek commented Dec 22, 2021

Do you want to make a PR for that, or shall I?

I can do it; I will get time next week, between Christmas and New Year. But please point me to an example of some other API field that has been deprecated so I know what should be done. Also, is there some documentation repo(s) that needs a PR?

@thockin
Member

thockin commented Dec 22, 2021 via email

@uablrek
Author

uablrek commented Dec 23, 2021

Here is how it's done on Azure: https://docs.microsoft.com/en-us/azure/aks/static-ip#create-a-service-using-the-static-ip-address

You have to define the lb-address in the cloud with az network public-ip create ..., you must have an Azure annotation in the service manifest, and you must specify loadBalancerIP.

On Azure an annotation might look like this:

metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
    service.beta.kubernetes.io/azure-load-balancer-ips: myAKSPublicIP,myAKSPublicIP-ipv6
  name: azure-load-balancer
...

Since the public IPs have names, it is probably a better user experience to use them instead of raw IP addresses.
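Putting the pieces from that page together, a sketch of the documented single-stack Azure flow might look as follows; the IP address below is a placeholder for the static address created with az network public-ip create, and the manifest is illustrative rather than copied from the document:

```yaml
# Sketch of the Azure flow from the linked doc: the static public IP is
# created in Azure first, then referenced from the Service. The annotation
# tells the cloud-provider which resource group holds the static IP.
apiVersion: v1
kind: Service
metadata:
  name: azure-load-balancer
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.25   # placeholder for the pre-created static IP
  ports:
    - port: 80
```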
