external-dns doesn't find the service #403

Closed
nrobert13 opened this issue Nov 28, 2017 · 23 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@nrobert13
Contributor

Hey,

I'm trying to get external-dns working with the Infoblox provider. I tried the example from the tutorial, but external-dns doesn't create any records.

This is how my service looks:

$ kubectl -n ingress describe service nginx-ingress-service
Name:			nginx-ingress-service
Namespace:		ingress
Labels:			k8s-svc=nginx-ingress-service
Annotations:		external-dns.alpha.kubernetes.io/hostname=prod.k8s.vcdcc.example.info
Selector:		pod=nginx-ingress-lb
Type:			LoadBalancer
IP:			10.233.12.109
Port:			http	80/TCP
NodePort:		http	31742/TCP
Endpoints:		10.68.69.75:80,10.68.74.204:80,10.68.76.75:80 + 2 more...
Port:			https	443/TCP
NodePort:		https	32204/TCP
Endpoints:		10.68.69.75:443,10.68.74.204:443,10.68.76.75:443 + 2 more...
Session Affinity:	None
Events:			<none>

When I run external-dns with the following flags (plus environment variables):

$docker run registry.opensource.zalan.do/teapot/external-dns --kubeconfig="/root/config" --source=service --domain-filter=prod.k8s.vcdcc.example.info --provider=infoblox --txt-owner-id=ext-dns-k8s-prod --log-level=debug

I only get the following output, and nothing shows up in Infoblox.

INFO[0000] config: &{Master: KubeConfig:/root/config Sources:[service] Namespace: AnnotationFilter: FQDNTemplate: Compatibility: PublishInternal:false Provider:infoblox GoogleProject: DomainFilter:[prod.k8s.vcdcc.travian.info] AWSZoneType: AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: CloudflareProxied:false InfobloxGridHost:infoblox.example.info InfobloxWapiPort:443 InfobloxWapiUsername:<user> InfobloxWapiPassword:<pwd> InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InMemoryZones:[] Policy:sync Registry:txt TXTOwnerID:ext-dns-k8s-prod TXTPrefix: Interval:1m0s Once:false DryRun:false LogFormat:text MetricsAddress::7979 LogLevel:debug} 
INFO[0000] Connected to cluster at https://master01.prod.k8s.vcdcc.example.info:6443 
DEBU[0002] No endpoints could be generated from service default/kubernetes 
DEBU[0002] No endpoints could be generated from service ingress/default-http-backend 
DEBU[0002] No endpoints could be generated from service ingress/nginx-ingress-service 
DEBU[0002] No endpoints could be generated from service kube-system/heapster 
DEBU[0002] No endpoints could be generated from service kube-system/kube-controller-manager-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kube-dns 
DEBU[0002] No endpoints could be generated from service kube-system/kube-dns-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kube-scheduler-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kubelet 
DEBU[0002] No endpoints could be generated from service kube-system/kubernetes-dashboard 
DEBU[0002] No endpoints could be generated from service kube-system/monitoring-grafana 
DEBU[0002] No endpoints could be generated from service kube-system/monitoring-influxdb 
DEBU[0002] No endpoints could be generated from service monitoring/alertmanager-main 
DEBU[0002] No endpoints could be generated from service monitoring/alertmanager-operated 
DEBU[0002] No endpoints could be generated from service monitoring/grafana 
DEBU[0002] No endpoints could be generated from service monitoring/kube-state-metrics 
DEBU[0002] No endpoints could be generated from service monitoring/node-exporter 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-k8s 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-operated 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-operator 

Thanks,
Robert

@hjacobs
Contributor

hjacobs commented Nov 28, 2017

Can you check what the status field of your LoadBalancer service says? External DNS will only create a record for your LoadBalancer service if the status field is populated (e.g. by Kubernetes creating the ELB).
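
For reference, one way to inspect that field directly (using the service name and namespace from the output above) is a kubectl jsonpath query; an empty result means there is nothing for ExternalDNS to publish:

$ kubectl -n ingress get service nginx-ingress-service -o jsonpath='{.status.loadBalancer.ingress}'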

@nrobert13
Contributor Author

Not sure what you mean by the status field of the LoadBalancer service. Maybe this?

$  kubectl -n ingress get service nginx-ingress-service
NAME                    CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress-service   10.233.12.109   <pending>     80:31742/TCP,443:32204/TCP   15h

I'm on premises, so I guess this applies only to public cloud providers, right? I'm not aware of a way to have Kubernetes create an ELB on premises.

@hjacobs
Contributor

hjacobs commented Nov 29, 2017

@nrobert13 OK, that's why it's not working: ExternalDNS only creates DNS records pointing to the service's "External IP / Load Balancer", and in your case that field is empty.

Example of a properly filled status field (on AWS):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  creationTimestamp: 2017-01-18T16:03:17Z
  generation: 2
  name: myapp
  namespace: default
  resourceVersion: "41632270"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myapp
  uid: a003f57b-dd97-1234-8ee7-06af11f8e77b
spec:
  rules:
  - host: myapp.example.org
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - hostname: aws-1234-lb-123znwf3n9dgs-1728323123.eu-central-1.elb.amazonaws.com
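
Since the question here is about a Service rather than an Ingress, the equivalent populated field on a Service would look like this (a sketch with a made-up ELB hostname, not taken from this thread):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-service
  namespace: ingress
spec:
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: a1234567890abcdef-1234567890.eu-central-1.elb.amazonaws.com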

@nrobert13
Contributor Author

@evaldasou mentioned in this issue that he could get the endpoints into DNS with a ClusterIP service and the --publish-internal-services flag. I'm using the following flags, but still can't get the Endpoints into my Infoblox with external-dns.

--source=service --publish-internal-services --domain-filter=prod.k8s.vcdcc.example.info --provider=infoblox --txt-owner-id=ext-dns-k8s-prod --log-level=debug

@khrisrichardson
Contributor

@nrobert13 are you seeing the same log messages as before now that you've set --publish-internal-services?

@nrobert13
Contributor Author

@khrisrichardson, yes, the same messages. I expected external-dns to create records for the Endpoints in my ingress/nginx-ingress-service service:

$ kubectl -n ingress describe service nginx-ingress-service 
Name:			nginx-ingress-service
Namespace:		ingress
Labels:			k8s-svc=nginx-ingress-service
Annotations:		<none>
Selector:		pod=nginx-ingress-lb
Type:			ClusterIP
IP:			None
Port:			http	80/TCP
Endpoints:		10.68.69.75:80,10.68.74.204:80,10.68.76.75:80 + 2 more...
Port:			https	443/TCP
Endpoints:		10.68.69.75:443,10.68.74.204:443,10.68.76.75:443 + 2 more...
Session Affinity:	None
Events:			<none>

but instead I'm still getting the following:

No endpoints could be generated from service ingress/nginx-ingress-service 

running external-dns with the following flags:

--source=service --publish-internal-services --domain-filter=prod.k8s.vcdcc.example.info --provider=infoblox --txt-owner-id=ext-dns-k8s-prod --log-level=debug

@khrisrichardson
Contributor

The reason I asked whether it was the same set of messages is that I was wondering whether your domain filter was too restrictive. Does prod.k8s.vcdcc.example.info refer to an actual (sub)zone in Infoblox? If not, you should try editing that string to be the longest string it has in common with an actual (sub)zone. If your hostnames didn't match the domain filter, however, you would have seen a different log message.
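
As a hypothetical illustration (zone names assumed, not confirmed against Infoblox here): if the authoritative zone were k8s.vcdcc.example.info rather than prod.k8s.vcdcc.example.info, the filter would need to be relaxed to that common suffix:

--domain-filter=k8s.vcdcc.example.info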

@nrobert13
Contributor Author

Does prod.k8s.vcdcc.example.info refer to an actual (sub)zone in Infoblox?

Yes!

@linki
Member

linki commented Dec 12, 2017

@nrobert13 Your last example has neither a ClusterIP nor any annotations. Can you double-check?
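
For reference, a minimal sketch of a ClusterIP service carrying the hostname annotation (names reused from this thread; the exact spec is an assumption, not something the reporter posted):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-service
  namespace: ingress
  labels:
    k8s-svc: nginx-ingress-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: prod.k8s.vcdcc.example.info
spec:
  type: ClusterIP
  selector:
    pod: nginx-ingress-lb
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443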

@nrobert13
Contributor Author

Hey, sorry for the delay.

I thought external-dns would use the . for the records.
Anyway, I've gotten closer with the annotation, but it's still not working as expected.

$ kubectl -n ingress describe service nginx-ingress-service 
Name:              nginx-ingress-service
Namespace:         ingress
Labels:            k8s-svc=nginx-ingress-service
Annotations:       external-dns.alpha.kubernetes.io/hostname=prod.k8s.vcdcc.example.info
Selector:          pod=nginx-ingress-lb
Type:              ClusterIP
IP:                None
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.68.69.75:80,10.68.74.204:80,10.68.76.75:80 + 2 more...
Port:              https  443/TCP
TargetPort:        443/TCP
Endpoints:         10.68.69.75:443,10.68.74.204:443,10.68.76.75:443 + 2 more...
Session Affinity:  None
Events:            <none>

Now external-dns tries to create records, but I'm not sure how. It tries to concatenate something in front of the annotation value I use, but it seems to be an empty string. Should the annotation contain only the (sub)domain I want to add the record to, or the FQDN? Either way, the output below doesn't look good.

DEBU[0006] Endpoints generated from service: ingress/nginx-ingress-service: [.prod.k8s.vcdcc.example.info 0 IN A 10.64.59.164 .prod.k8s.vcdcc.example.info 0 IN A 10.64.59.162 .prod.k8s.vcdcc.example.info 0 IN A 10.64.59.166 .prod.k8s.vcdcc.example.info 0 IN A 10.64.59.163 .prod.k8s.vcdcc.example.info 0 IN A 10.64.59.160]

INFO[0007] Would create A record named '.prod.k8s.vcdcc.example.info' to '10.64.59.164' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 
INFO[0007] Would create A record named '.prod.k8s.vcdcc.example.info' to '10.64.59.162' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 
INFO[0007] Would create A record named '.prod.k8s.vcdcc.example.info' to '10.64.59.166' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 
INFO[0007] Would create A record named '.prod.k8s.vcdcc.example.info' to '10.64.59.163' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 
INFO[0007] Would create A record named '.prod.k8s.vcdcc.example.info' to '10.64.59.160' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 
INFO[0007] Would create TXT record named '.prod.k8s.vcdcc.example.info' to '"heritage=external-dns,external-dns/owner=ext-dns-k8s-prod"' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 
INFO[0007] Would create TXT record named '.prod.k8s.vcdcc.example.info' to '"heritage=external-dns,external-dns/owner=ext-dns-k8s-prod"' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 
INFO[0007] Would create TXT record named '.prod.k8s.vcdcc.example.info' to '"heritage=external-dns,external-dns/owner=ext-dns-k8s-prod"' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 
INFO[0007] Would create TXT record named '.prod.k8s.vcdcc.example.info' to '"heritage=external-dns,external-dns/owner=ext-dns-k8s-prod"' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 
INFO[0007] Would create TXT record named '.prod.k8s.vcdcc.example.info' to '"heritage=external-dns,external-dns/owner=ext-dns-k8s-prod"' for Infoblox DNS zone 'prod.k8s.vcdcc.example.info'. 

@linki
Member

linki commented Jan 2, 2018

@nrobert13 That looks odd. The hostname annotation is used as is. Please double-check that you don't include the leading dot in the annotation value. Then make sure the line with

DEBU[0006] Endpoints generated from service: ingress/nginx-ingress-service: [LIST OF ENDPOINTS]

looks correct, i.e. [prod.k8s.vcdcc.example.info 0 IN A 10.64.59.164 ...] (no leading dots).

However, having multiple target IPs for the same hostname is not yet supported in ExternalDNS. For now you would have to limit the number of endpoints to one for ExternalDNS to work.

It looks like you're trying to deploy nginx-ingress-controller without the --publish-service flag (but I might be wrong). You can make ExternalDNS work with nginx-ingress-controller by using this flag. Follow this tutorial for guidance.
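
For context, --publish-service is a flag on the nginx ingress controller itself, not on ExternalDNS. A rough sketch of the relevant container args (namespace and service name assumed to match this thread; adjust to your deployment):

args:
- /nginx-ingress-controller
- --default-backend-service=ingress/default-http-backend
- --publish-service=ingress/nginx-ingress-service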

@nrobert13
Contributor Author

I'm trying this with v0.4.8 now, but I still see the same issue.

$ kubectl -n kube-system describe services dashboard-external-dns 
Name:              dashboard-external-dns
Namespace:         kube-system
Labels:            k8s-svc=dashboard-external-dns
Annotations:       external-dns.alpha.kubernetes.io/hostname=dashboard.dev.k8s.vcdcc.example.info
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                None
Port:              http  8443/TCP
TargetPort:        8443/TCP
Endpoints:         10.68.99.168:8443
Session Affinity:  None
Events:            <none>

and getting this log:

DEBU[0002] Unable to associate dashboard-external-dns headless service with a Cluster IP 
DEBU[0002] Generating matching endpoint .dashboard.dev.k8s.vcdcc.example.info with HostIP 10.64.58.35 
DEBU[0002] Endpoints generated from service: kube-system/dashboard-external-dns: [.dashboard.dev.k8s.vcdcc.example.info 0 IN A 10.64.58.35] 

INFO[0003] Would create A record named '.dashboard.dev.k8s.vcdcc.example.info' to '10.64.58.35' for Infoblox DNS zone 'dev.k8s.vcdcc.example.info'. 
INFO[0003] Would create TXT record named '.dashboard.dev.k8s.vcdcc.example.info' to '"heritage=external-dns,external-dns/owner=ext-dns-k8s-dev"' for Infoblox DNS zone 'dev.k8s.vcdcc.example.info'. 

running with:

registry.opensource.zalan.do/teapot/external-dns:v0.4.8 --kubeconfig="/root/.kube/config" --source=service --publish-internal-services --domain-filter=dev.k8s.vcdcc.example.info --provider=infoblox --txt-owner-id=ext-dns-k8s-dev --no-infoblox-ssl-verify --log-level=debug --dry-run

In infoblox there's a subzone: dev.k8s.vcdcc.example.info

I see two problems:

  1. it tries to create the records with a dot in front
  2. it tries to create the records with the IP of the node on which the pod is running instead of the pod IP (using Calico); the service resource doesn't contain the node IP at all, so I'm not sure where it's coming from ...

@linki
Member

linki commented Mar 15, 2018

Yeah, that seems broken. Regarding your problems:

  1. This is a bug in your version when the Pod has no spec.hostname set. Use v0.5.0-alpha.1 to mitigate it (in your case it will omit the leading dot).
  2. This is actually in the code: it takes the Pod's HostIP, which seems wrong to me. The Pod IP should be used instead; in the case of hostNetwork: true it would equal the node IP anyway. I created an issue: Headless Service: Targets contain NodeIP instead of PodIP #496.
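
For illustration only, a hypothetical pod spec showing the two fields discussed above; spec.hostname is what gets prefixed to the annotation value for headless services, and hostNetwork is the case where the pod IP coincides with the node IP:

apiVersion: v1
kind: Pod
metadata:
  name: dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
spec:
  hostname: dashboard     # without this, older external-dns versions produced the leading dot in the record name
  hostNetwork: false      # if true, the pod IP equals the node IP
  containers:
  - name: dashboard
    image: example/dashboard:1.0   # placeholder image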

@nrobert13
Contributor Author

nrobert13 commented Mar 15, 2018

Thanks for the quick reply.

Indeed, the version you indicated fixes the first problem:

 Endpoints generated from service: kube-system/dashboard-external-dns: [dashboard.dev.k8s.vcdcc.example.info 0 IN A 10.64.58.35] 

Would it be possible to override spec.hostname with the hostname annotation?

Use case: creating records for pods created dynamically (e.g. by operators), where the admin cannot control spec.hostname but would still like a meaningful DNS record for them.

@channprj

channprj commented Jun 7, 2018

I set up a cluster on bare-metal machines, but it doesn't work.
The external IP is still pending, and nothing changes in the DNS records... Need help! 😭

@igoratencompass

igoratencompass commented Oct 31, 2018

I'm seeing the same issue in AWS:

time="2018-10-31T01:40:55Z" level=debug msg="No endpoints could be generated from service namespace/service-name"

This is despite having the external-dns.alpha.kubernetes.io/hostname annotation on the service of type LoadBalancer. I can also see:

status:
  loadBalancer:
    ingress:
    - hostname: ad30xxxxx.eu-west-1.elb.amazonaws.com

in the service status. I'm using the registry.opensource.zalan.do/teapot/external-dns:v0.5.6 image. The DNS for Ingress works fine with this image. Kubernetes is 1.10.8 deployed via kops.

@igoratencompass

I also noticed that the annotation is missing from the kubectl.kubernetes.io/last-applied-configuration:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"service-name"},"name":"service-name","namespace":"namespace"},"spec":{"ports":[{"name":"service-name","port":80,"protocol":"TCP","targetPort":5000}],"selector":{"app":"service-name"},"sessionAffinity":"None","type":"LoadBalancer"}}
...

Is this normal?
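
For what it's worth, kubectl.kubernetes.io/last-applied-configuration only records the manifest from the most recent kubectl apply, so an annotation added through another path (for example kubectl annotate, or a controller) would not appear there. A sketch of declaring it in the applied manifest itself (hostname value is a placeholder, not taken from this thread):

apiVersion: v1
kind: Service
metadata:
  name: service-name
  namespace: namespace
  labels:
    app: service-name
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.org
spec:
  type: LoadBalancer
  selector:
    app: service-name
  ports:
  - name: service-name
    port: 80
    protocol: TCP
    targetPort: 5000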

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 26, 2019
@igoratencompass

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 27, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 26, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 25, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
