
Add DNS entry for Endpoint IP (if not using type loadbalancer) #187

Closed
evaldasou opened this issue May 3, 2017 · 18 comments
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.
size/M: Denotes a PR that changes 30-99 lines, ignoring generated files.

Comments

@evaldasou

Hey Guys,

Thanks for a great tool.
However, is it possible to get DNS entries updated with internal IPs, or with endpoint IPs?
I do not want to expose the service to the internet, so type LoadBalancer is not ideal for this.

Thanks!

@hjacobs (Contributor) commented May 3, 2017

AFAIK this should "just work" for Service and Ingress as long as the Kubernetes field status.loadBalancer.ingress is properly populated: External DNS only treats hostnames in a special way (no A record is possible; there is also a check for the AWS ELB hosted zone), while all other IPs are used as-is. Even a local IP like 127.0.0.1 should work (but would not make sense).

There is no special check for service type "LoadBalancer" as you can see in https://github.com/kubernetes-incubator/external-dns/blob/master/source/service.go .
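
For illustration, the field in question looks roughly like this on a Service object (a minimal sketch; the name, hostname, and addresses below are made up):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-service.example.org
spec:
  type: LoadBalancer
  ports:
  - port: 80
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10   # a plain IP is used as-is (A record)
    # - hostname: foo.eu-west-1.elb.amazonaws.com   # a hostname gets the special handling instead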

Maybe you can describe your use case in more detail? I'm not entirely sure what you want to achieve.

@evaldasou (Author) commented May 3, 2017

Hey @hjacobs, thanks for the quick response!

I deploy my service like this :

kubectl run nginx --image=nginx --replicas=1 --port=80
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
kubectl annotate service nginx "external-dns.alpha.kubernetes.io/hostname=nginx1.test.net" 

However, this creates a LoadBalancer with an external IP address.
I would like to expose the deployment without using --type=LoadBalancer ... and get the endpoint IP populated in my DNS zone, like this:

kubectl expose deployment nginx --port=80 --target-port=80

kubectl describe service nginx
Name:                   nginx
Namespace:              default
Labels:                 run=nginx
Annotations:            <none>
Selector:               run=nginx
Type:                   ClusterIP
IP:                     10.111.253.237
Port:                   <unset> 80/TCP
**Endpoints:              10.108.2.78:80**

I want this IP, 10.108.2.78, in my DNS zone configuration :)

@hjacobs (Contributor) commented May 3, 2017

@evaldasou hmm, why do you want to expose the internal endpoint IPs in public DNS? Also, why are you talking about endpoint IPs and not the ClusterIP of the service (10.111.253.237 in your example)? The service might have an "unlimited" number of endpoints. Would you expect load balancing on the DNS side for all those IPs (DNS round robin)? FYI: inside the cluster you get a DNS entry for the ClusterIP "out of the box" via kube-dns (you can just do "curl http://nginx/" from some pod).
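
For example, that out-of-the-box resolution can be checked from a throwaway pod (a minimal sketch, reusing the nginx service from the earlier example):

kubectl run -it --rm dns-test --image=busybox --restart=Never -- wget -qO- http://nginx.default.svc.cluster.local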

I still don't get your use case, maybe you can elaborate...

@evaldasou (Author)

Sure.

So first, the ClusterIP is only reachable from within the cluster... If I could connect to it from outside of the cluster, that would be all good! I want to access my resources via DNS names from an internal-only/VPN network.

I agree that endpoint IPs make no sense when there are multiple endpoints, but at least they are reachable from outside of the cluster (unlike the ClusterIP).

So I want my services to be reachable only via internal IPs (not via the internet); it could be the cluster IP or an endpoint IP.
As far as I know, LoadBalancer does not work with internal IPs, and I also cannot limit access to a LoadBalancer via firewall rules. I'm using Google Cloud Platform, and it's possible to configure firewall rules for instances, not load balancers.

Thanks!

@jrnt30 (Contributor) commented May 4, 2017

We actually have exactly the same situation. We run PriTunl via a LoadBalancer service, but we want to expose the other services via the VPN connection, not through an ELB that we need to manage and deal with.

@evaldasou What we are currently in the process of doing is standing up an internal ELB that fronts the nginx-ingress controller and publishing the Services as Ingress objects. This keeps the DNS records internal and never exposes them to the world. We then publish these to an internal DNS hosted zone that is resolvable via the PriTunl VPN connection running in the VPC itself. I see you're on GCE, so I'm not sure if that would help, as I'm not all that familiar with running/exposing a service like the nginx-ingress controller on GCE without exposing it.
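
On AWS, the internal ELB part of a setup like this is typically requested via a Service annotation (a sketch; the service name and selector are hypothetical, while the aws-load-balancer-internal annotation is the standard in-tree one):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer   # provisions an internal (not internet-facing) ELB
  selector:
    app: nginx-ingress
  ports:
  - port: 80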

@linki (Member) commented May 4, 2017

@evaldasou did you have a look at Headless Services? KubeDNS will serve A records for each pod belonging to a headless service.

In your example above this would lead to something like this, I believe:

$ dig @kubednsIP nginx.default.svc.cluster.local
10.108.2.78     <== pod IP
...
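
A headless Service is just a ClusterIP Service with clusterIP set to None (a minimal sketch matching the nginx example above):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None   # headless: kube-dns returns the pod IPs directly
  selector:
    run: nginx
  ports:
  - port: 80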

@jrnt30 (Contributor) commented May 4, 2017

That works, but it can be handy to have an abstraction over that which is "nicer" for end users, remains consistent, and abstracts over the different namespaces. Our end users want something like redis.dev.vpn and redis.stg.vpn, but we want the flexibility of potentially deploying stg and dev in the same k8s cluster in different namespaces, or in completely different clusters.

jrnt30 pushed a commit to jrnt30/external-dns that referenced this issue May 5, 2017
- First pass at addressing kubernetes-sigs#187 by allowing services with type ClusterIP to be directly supported
@linki (Member) commented May 5, 2017

I see, that makes sense. I created an issue as well.

@linki linki added the kind/feature Categorizes issue or PR as related to a new feature. label May 5, 2017
@evaldasou (Author)

Thanks a lot guys! I really appreciate your time and effort! Looks promising! 👍

@jrnt30 (Contributor) commented May 5, 2017

@linki One thing I guess I should have explicitly mentioned with the Headless Service comment is that external-dns currently doesn't support this, as such a service doesn't have svc.Status.LoadBalancer.Ingress populated.

I started some work to support the ClusterIP service type as well, which I personally think is more useful than relying on the PodSpec IP, but perhaps I'm missing another use case.

I started some work on this @ master...jrnt30:clusterip-sources

@linki linki added size/medium help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Jun 12, 2017
hjacobs pushed a commit that referenced this issue Aug 17, 2017
* ClusterIP service support

- First pass at addressing #187 by allowing services with type ClusterIP to be directly supported

* Getting existing tests to pass

* Adjusting formatting for gofmt/govet

* Adding in guard logic around publishing of ClusterIP sources

* Addressing PR feedback

* Adding in CHANGELOG entry

* Adding in Headless service test
@evaldasou (Author) commented Aug 23, 2017

Hey @jrnt30!
It's a great fix that you have added ClusterIP support - I have tested it and it works!
However, the ClusterIP is reachable only inside the Kubernetes cluster! Why not add endpoint IPs too,
so we could reach Kubernetes resources directly by DNS name from outside of the cluster? :)

@jrnt30 (Contributor) commented Aug 23, 2017

I'm glad to see that it's working for you as well. A little context, a few questions, and a direct answer to your question.

Context:
We run our VPN directly in the cluster itself and expose the VPN server as a LoadBalancer. When our users VPN in, since the VPN server is sitting in the cluster and we have it configured to "own" that CIDR block and domain for the associated hosted zone, our users are able to use the external-dns managed entries to resolve and access those "internal" services.

We went this route due to some limitations we saw with the Ingress controller's ability to map arbitrary protocols/ports and a few other things I can't recall immediately.

Questions:
I'm unfamiliar with some alternate deployment techniques, but aren't the endpoints you see via kubectl get endpoints <svcname> similarly "private" and unroutable? In my case, these IPs are the in-cluster IPs of the various pods and would be unreachable if not "inside" the K8s cluster itself (as our VPN server is).
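
For reference, that command's output is shaped roughly like this (hypothetical values, based on the nginx example earlier in the thread):

$ kubectl get endpoints nginx
NAME      ENDPOINTS        AGE
nginx     10.108.2.78:80   1d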

Can you provide a bit more information about what you are attempting to expose and what IPs the endpoint vs. service actually exposes?

Answer

  • Currently there is no support for multiple targets for a single DNS entry. In the future, after multiple-target support is introduced, it could be possible to expose the endpoints if those services used host networking (net = Host) or something along those lines.

@evaldasou (Author) commented Aug 30, 2017

Thanks @jrnt30!
Actually, all is good with ClusterIP - I have configured it to work by changing the routing configuration.
However, I want to ask: from my testing, --publish-internal-services works only with "ClusterIP" services.
Can we make it work with type "NodePort" too?

@jrnt30 (Contributor) commented Aug 30, 2017

That too will require the multiple-target support; however, we could create an issue to cover some of those cases.

@evaldasou (Author) commented Aug 30, 2017

@jrnt30, NodePort can work with a single target too; for example, it looks like this on my service:

root:evaldas# kubectl describe svc dev-http
Name:			dev-http
Namespace:		default
Labels:			<none>
Annotations:		external-dns.alpha.kubernetes.io/hostname=dev.evaldas.net.
Selector:		app=nifi
Type:			NodePort
**IP:			10.111.240.79**
Port:			nifi-http	80/TCP
NodePort:		nifi-http	31846/TCP
Endpoints:		10.108.3.65:80,10.108.4.41:80
Session Affinity:	None
Events:			<none>

The IP here is the same as a ClusterIP and could be exposed in this case.

@nrobert13 (Contributor)

@evaldasou, how did you get it to work with Endpoints?
I've created a headless ClusterIP service as follows:

$ kubectl -n ingress describe service nginx-ingress-service 
Name:			nginx-ingress-service
Namespace:		ingress
Labels:			k8s-svc=nginx-ingress-service
Annotations:		<none>
Selector:		pod=nginx-ingress-lb
Type:			ClusterIP
IP:			None
Port:			http	80/TCP
Endpoints:		10.68.69.75:80,10.68.74.204:80,10.68.76.75:80 + 2 more...
Port:			https	443/TCP
Endpoints:		10.68.69.75:443,10.68.74.204:443,10.68.76.75:443 + 2 more...
Session Affinity:	None
Events:			<none>

running external-dns with the following flags:
--source=service --publish-internal-services --domain-filter=prod.k8s.vcdcc.example.info --provider=infoblox --txt-owner-id=ext-dns-k8s-prod --log-level=debug

but external-dns doesn't find anything to export (I marked the service in question with stars):

DEBU[0002] No endpoints could be generated from service default/kubernetes 
DEBU[0002] No endpoints could be generated from service ingress/default-http-backend 
**DEBU[0002] No endpoints could be generated from service ingress/nginx-ingress-service** 
DEBU[0002] No endpoints could be generated from service kube-system/heapster 
DEBU[0002] No endpoints could be generated from service kube-system/kube-controller-manager-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kube-dns 
DEBU[0002] No endpoints could be generated from service kube-system/kube-dns-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kube-scheduler-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kubelet 
DEBU[0002] No endpoints could be generated from service kube-system/kubernetes-dashboard 
DEBU[0002] No endpoints could be generated from service kube-system/monitoring-grafana 
DEBU[0002] No endpoints could be generated from service kube-system/monitoring-influxdb 
DEBU[0002] No endpoints could be generated from service monitoring/alertmanager-main 
DEBU[0002] No endpoints could be generated from service monitoring/alertmanager-operated 
DEBU[0002] No endpoints could be generated from service monitoring/grafana 
DEBU[0002] No endpoints could be generated from service monitoring/kube-state-metrics 
DEBU[0002] No endpoints could be generated from service monitoring/node-exporter 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-k8s 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-operated 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-operator 

@linki linki added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/medium labels Jan 2, 2018
ffledgling pushed a commit to ffledgling/external-dns that referenced this issue Jan 18, 2018
* ClusterIP service support
@ekoome commented Jul 11, 2018

I would like to update external-dns with a node's PUBLIC IP, as a deployment needs to use host networking and uses the host's external IP. As suggested above, how do I set status.loadBalancer.ingress with the external IP so that it can be picked up by external-dns?
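
For context, the host-networking part of such a deployment lives in the pod spec (a minimal sketch; all names are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-host-networked-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-host-networked-app
  template:
    metadata:
      labels:
        app: my-host-networked-app
    spec:
      hostNetwork: true   # pod shares the node's network namespace and IP
      containers:
      - name: app
        image: nginx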

@rhangelxs

+1 for external-dns being able to use the node's public IP (ephemeral or static, in GCE terms).
