
Can't access telepresence dns on 127.0.0.53 from within kubernetes cluster #3732

Open
billytrend-cohere opened this issue Nov 22, 2024 · 8 comments


@billytrend-cohere
Contributor

billytrend-cohere commented Nov 22, 2024

I'm working in Codespaces. I have a Kubernetes cluster within the codespace that needs to be able to access Telepresence. Right now, I can connect to Telepresence IPs from within Kubernetes, but I cannot resolve domain names.

This is because Telepresence DNS appears to resolve on 127.0.0.53. Unfortunately, when I configure Kubernetes to use 127.0.0.53 as its DNS server, the request just loops, because 127.0.0.53 within Kubernetes doesn't point to the host.
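Roughly, the loop looks like this (the service name below is just a placeholder, and the commands are only a sketch):

```bash
# On the host, DNS (including Telepresence's cluster domains) is served via
# the systemd-resolved stub on the loopback address:
cat /etc/resolv.conf          # shows "nameserver 127.0.0.53"

# Inside the local cluster, pointing a pod (or CoreDNS) at 127.0.0.53 loops,
# because that address resolves to the pod itself, not to the host:
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup some-remote-service 127.0.0.53   # times out instead of resolving
```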

Some ideas I had to resolve this issue:

  • Should we run Telepresence within a Docker container in Kubernetes and use that container's IP as the DNS server? Would other pods in that cluster then be able to access the network interface that Telepresence creates?
  • Would we be able to run the Telepresence DNS on a non-loopback IP on the host so it can be accessed directly from within the Kubernetes cluster?

Many thanks in advance

@thallgren
Member

Why would you try to access telepresence from within the cluster? It's usually the other way around.

@billytrend-cohere
Contributor Author

Interesting. We have been using it successfully to connect our local dev cluster to our prod cluster. This works fine on macOS but not on Ubuntu, because of apparent differences in how the networking works.

@thallgren
Member

I still don't understand what it is you're doing here. Are you running the Telepresence command-line interface from within a pod in order to give that pod access to another cluster?

@billytrend-cohere
Contributor Author

No, we're running on the host.

The use case that has worked for us so far for local dev is to

  1. run Telepresence in a macOS terminal to connect to the remote cluster
  2. start our local dev Kubernetes cluster
  3. services running in the local cluster are then able to access the remote cluster (see the sketch below)
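
Concretely, something like this (kind is just what we use for the local cluster, and the service name is a placeholder):

```bash
# 1. Connect the host to the remote (prod) cluster
telepresence connect

# 2. Start the local dev cluster
kind create cluster --name local-dev

# 3. Pods in the local cluster can then resolve and reach remote services
#    through the host's Telepresence networking
kubectl --context kind-local-dev run probe --rm -it --image=busybox \
  --restart=Never -- nslookup my-remote-service.my-namespace
```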

The issue we're having is that when running on Linux in a codespace, the host network does not appear to be shared in the same way. From what I understand, the main problem is the DNS running on the loopback address on Linux.

My main idea to investigate is setting up a DNS proxy on the host that is available on a non-loopback IP, so that the local cluster can use that proxy.
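
As a rough sketch of what I mean (the Docker bridge IP 172.17.0.1 is an assumption, and dnsmasq is just one option):

```bash
# Run a DNS proxy on a non-loopback address the local cluster can reach,
# forwarding to the loopback stub that Telepresence populates
sudo dnsmasq --no-daemon --bind-interfaces \
  --listen-address=172.17.0.1 \
  --server=127.0.0.53

# Then point the local cluster's CoreDNS at that address, e.g. change the
# "forward" line in the coredns ConfigMap to "forward . 172.17.0.1"
kubectl -n kube-system edit configmap coredns
```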

Let me know what you think of that setup, or if this is wildly outside of Telepresence's expected use case; very grateful for your help so far.

@thallgren
Member

thallgren commented Nov 29, 2024

Here's one idea; not sure if it's feasible, though. If you connect to your remote cluster with `telepresence connect --docker`, then Telepresence will start a containerized daemon. This daemon will have direct access to the cluster resources. If you could then start your local dev cluster using `--network container:<name of Telepresence daemon>`, your local dev cluster would share that network.
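
Roughly (the daemon container name below is just an example; check docker ps for the actual one):

```bash
# Start a containerized daemon instead of modifying the host network
telepresence connect --docker

# Find the daemon container's name
docker ps --format '{{.Names}}'

# Anything started with --network container:<daemon name> shares its network
# namespace, and therefore its view of the remote cluster's DNS and IPs
docker run --rm -it --network container:tp-my-context busybox \
  nslookup my-remote-service.my-namespace
```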

The advantage of this setup is that it will work regardless of what OS you run on the host, and it will not affect the host network at all (no need for root, /dev/net/tun, or the NET_ADMIN capability).

@billytrend-cohere
Contributor Author

Interesting, thanks, I'll try this!

@thallgren
Member

@billytrend-cohere can you please try the new 2.21.0 release and check if things have improved? We've done some work on getting everything to work smoothly with Codespaces.

@billytrend-cohere
Contributor Author

oo exciting, will try
