
Support for Services/Ingresses with externally managed endpoints (e.g. type=NodePort/HostPort) #588

Closed
whereisaaron opened this issue Jun 6, 2018 · 9 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@whereisaaron

Services and Ingresses with type=NodePort (and sometimes type=HostPort) often sit behind externally managed endpoints providing NAT, reverse proxying, load balancing, CDNs, or combinations of these, which eventually route traffic to the Service/Ingress. external-dns can't be expected to work out the external IP addresses or domain name that will get traffic to a NodePort/HostPort service.

I propose that external-dns support annotations letting users supply the correct external IP address list or domain name to use as the target of the DNS record, i.e. specify either a target-hostname or a target-ips annotation.

e.g. create myapp.example.com CNAME mycdn.domain.name for Service

apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com.
    external-dns.alpha.kubernetes.io/target-hostname: mycdn.domain.name.
...

e.g. create myapp.example.com A 10.20.30.40 and myapp.example.com A 10.20.30.41 for Service

apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com.
    external-dns.alpha.kubernetes.io/target-ips: "10.20.30.40, 10.20.30.41"
...

e.g. create myapp.example.com CNAME mycdn.domain.name for Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: "nginx"
    external-dns.alpha.kubernetes.io/target-hostname: mycdn.domain.name.
spec:
  rules:
  - host: myapp.example.com
...

e.g. create myapp.example.com CNAME mycdn.domain.name and
myapp-alias.example.com CNAME mycdn.domain.name for Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: "nginx"
    external-dns.alpha.kubernetes.io/target-hostname: mycdn.domain.name.
spec:
  rules:
  - host: myapp.example.com
  - host: myapp-alias.example.com
...

e.g. create
myapp.example.com A 10.20.30.40
myapp.example.com A 10.20.30.41
myapp-alias.example.com A 10.20.30.40
myapp-alias.example.com A 10.20.30.41
for Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: "nginx"
    external-dns.alpha.kubernetes.io/target-ips: "10.20.30.40, 10.20.30.41"
spec:
  rules:
  - host: myapp.example.com
  - host: myapp-alias.example.com
...
@montyz

montyz commented Aug 23, 2018

I could really use this. In order to support IPv6 on AWS I need to use an ALB, which is not supported as a type LoadBalancer. So I set the ALB up manually and expose ingress as a NodePort. It would be great if I could specify my ALB name in an annotation and have external-dns create the CNAME pointing from the ingress hostname to my ALB name.
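Under the proposed annotation, that setup might look like the following sketch (the target-hostname annotation is from this proposal and does not exist in external-dns yet; the ALB DNS name is illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Hypothetical annotation from this proposal: point the CNAME at the
    # manually provisioned ALB instead of a cluster-assigned endpoint.
    external-dns.alpha.kubernetes.io/target-hostname: my-alb-123456789.us-east-1.elb.amazonaws.com.
spec:
  rules:
  - host: myapp.example.com
```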

@Raffo
Contributor

Raffo commented Aug 24, 2018

@whereisaaron I'm not sure about this; it seems to push ExternalDNS a bit beyond what it was designed to do. While I understand why you want to do this in general, I'm not sure it should be ExternalDNS that does it (e.g. Route53 can be managed via CloudFormation on AWS).
That said, if we decide to move forward with this, I wouldn't use annotations, but rather a CRD (still to be defined) to make it very clear what is happening.
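For reference, a CRD-based approach might look something like this sketch (the group, version, kind, and field names are purely illustrative; no such resource exists at this point in the discussion):

```yaml
apiVersion: externaldns.k8s.io/v1alpha1  # illustrative group/version
kind: DNSEndpoint                        # illustrative kind
metadata:
  name: myapp
spec:
  endpoints:
  - dnsName: myapp.example.com
    recordType: CNAME
    targets:
    - mycdn.domain.name
  - dnsName: myapp-alias.example.com
    recordType: A
    targets:
    - 10.20.30.40
    - 10.20.30.41
```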

@whereisaaron
Author

Thanks @Raffo. I agree it extends the scope of external-dns, but I think it still fits the mission. Otherwise we'll end up with two different DNS controllers for different use cases of provisioning external DNS records for ingress. I'd rather external-dns be the definitive solution for external DNS for ingress (e.g. paired with cert-manager for certificates and optionally nginx-ingress) than have fragmented point solutions.

This is the missing link for us for k8s deployments. Everything else for ingress, like load balancers, ingress controller configuration, or TLS certificates, can be configured from k8s resources. Right now, after doing helm install, we have to follow up with a second action against e.g. the Route53 API. We'd like the domain names to be a first-class component of the deployment (small 'd') configuration.

Yes, CRDs would be the best, stable, long-term approach. I suggested annotations to be consistent with the other annotations I saw used for external-dns, and because annotations are still handy: it is easier to inject them into existing Helm charts.
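As an illustration of that last point, many charts already expose an annotations map in their values, so the proposed annotations could be injected without modifying the chart itself (the service.annotations key is a common chart convention, not guaranteed for every chart):

```yaml
# values.yaml override passed to: helm install -f values.yaml ...
service:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com.
    external-dns.alpha.kubernetes.io/target-hostname: mycdn.domain.name.
```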

@Raffo
Contributor

Raffo commented Aug 28, 2018

I see, but I'd still go with a CRD, as it seems hacky to adopt annotations on Services/Ingresses for this particular use case. I don't have a strong opinion though. WDYT @linki @ideahitme @njuettner?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 25, 2019
@whereisaaron
Author

whereisaaron commented May 6, 2019

Ref #21 #555 #892

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 5, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

lou-lan pushed a commit to lou-lan/external-dns that referenced this issue May 11, 2022