
Add host.docker.internal to certSAN list #566

Closed

Ilyes512 opened this issue May 26, 2019 · 8 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@Ilyes512

What would you like to be added:
I would like to add host.docker.internal to the kubeadm certSAN list.

This is useful for both macOS and Windows.

I think I found the place that would need to be changed:
See SourceGraph

Why is this needed:
So I can connect to the Kind cluster from within another container (same host, but not inside the cluster).

At this point you get the following error if you change https://localhost:<port> to https://host.docker.internal:<port>:

Unable to connect to the server: x509: certificate is valid for traefik-control-plane, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, not host.docker.internal

There are currently two ways to get around the above error:

  1. Use the global kubectl flag --insecure-skip-tls-verify.
  2. Use a kind config with a jsonpatch that adds the host to the list of certSANs (a usage sketch follows the config below):
# kind-config.yml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatchesJson6902:
  - group: kubeadm.k8s.io
    version: v1beta1
    kind: ClusterConfiguration
    patch: |
      - op: add
        path: /apiServer/certSANs/-
        value: host.docker.internal
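A minimal sketch of how the patched config could be used; the config file name, the kind get kubeconfig-path lookup, and the bitnami/kubectl image are illustrative assumptions, not something this thread prescribes:

# create the cluster with the extra certSAN
kind create cluster --config kind-config.yml
KUBECONFIG="$(kind get kubeconfig-path)"
# rewrite the server address so a sibling container can use the host's forwarded port
sed 's|https://localhost:|https://host.docker.internal:|' "$KUBECONFIG" > docker-kubeconfig
# run kubectl from another container on the same host, outside the cluster
docker run --rm -v "$PWD/docker-kubeconfig:/kubeconfig" bitnami/kubectl --kubeconfig /kubeconfig get nodes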

@BenTheElder asked me to create an issue for the above, see our Slack chat: https://kubernetes.slack.com/archives/CEKK1KTN2/p1558816949030500

@Ilyes512 Ilyes512 added the kind/feature Categorizes issue or PR as related to a new feature. label May 26, 2019
@aojea
Contributor

aojea commented May 27, 2019

@Ilyes512 bear in mind that you can connect directly to the cluster API from the new container, i.e. by obtaining the container/node IP address, so you can avoid using host.docker.internal.

There is a patch, #478, that allows you to get the internal kubeconfig; once it is merged you can use that instead of replacing localhost with host.docker.internal.
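As a sketch of that direct route (the node container name kind-control-plane and the unauthenticated /version probe are assumptions):

# find the control-plane container's IP on the Docker bridge
NODE_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-control-plane)
# the API server listens on 6443 inside the node; -k skips TLS verification for this probe
curl -k "https://${NODE_IP}:6443/version"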

@tao12345666333
Member

@Ilyes512 bear in mind that you can connect directly to the cluster API from the new container, i.e. by obtaining the container/node IP address, so you can avoid using host.docker.internal.

+1, or you can add a new container directly to the control-plane's network stack by passing --network to docker run

@Ilyes512
Author

@aojea:
The --internal flag looks good and would indeed work for my purpose.

@tao12345666333:

you can add a new container directly to the control-plane's network stack by pass --network to docker run

How would I do this? I know I can create a new network and add both the control-plane and the container to it, so I can connect to the control-plane by hostname. I'm not sure how to do it without creating a network, though.

@aojea
Contributor

aojea commented May 27, 2019

@Ilyes512 the kubeconfig that you have on your host is the same one the nodes have, but with the internal IP address and port replaced by localhost and the forwarded port.

You can obtain it with docker exec kind-control-plane sh -c 'cat /etc/kubernetes/admin.conf' and use it in other containers.
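A sketch of that workflow; the bitnami/kubectl image is an assumption, and it relies on the node's internal IP being reachable from sibling containers on the default Docker bridge:

# dump the node-internal kubeconfig to the host
docker exec kind-control-plane sh -c 'cat /etc/kubernetes/admin.conf' > admin.conf
# its server field points at the node's internal IP, which containers on the
# same Docker bridge can reach directly
docker run --rm -v "$PWD/admin.conf:/kubeconfig" bitnami/kubectl --kubeconfig /kubeconfig get nodes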

@tao12345666333
Member

How would I do this? I know I can create a new network and add both the control-plane and the container to it, so I can connect to the control-plane by hostname. I'm not sure how to do it without creating a network, though.

just like:

(MoeLove) ➜  ~ docker run --rm -d redis
b6a9a6076bd9e9c70818eb5690dadc9de39fb082d26bcf38cd834e73e8dc6639
(MoeLove) ➜  ~ docker ps -l
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
b6a9a6076bd9        redis               "docker-entrypoint.s…"   5 seconds ago       Up 3 seconds        6379/tcp            practical_blackburn
(MoeLove) ➜  ~ docker run --rm -it --network container:b6a9a6076bd9 redis sh 
# redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> 
# hostname
b6a9a6076bd9
# 

--network container:<Your Control Plane Container ID>
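Applied to kind, a sketch might look like this (the container name and kubectl image are assumptions; inside the shared network namespace the node-internal kubeconfig's server address is reachable as-is):

# grab the node-internal kubeconfig, then share the control-plane's network stack
docker exec kind-control-plane sh -c 'cat /etc/kubernetes/admin.conf' > admin.conf
docker run --rm -it --network container:kind-control-plane \
  -v "$PWD/admin.conf:/kubeconfig" bitnami/kubectl --kubeconfig /kubeconfig get nodes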

@Ilyes512
Author

Should I close this? docker exec kind-control-plane sh -c 'cat /etc/kubernetes/admin.conf' will do the trick until the --internal flag is added.

@aojea
Contributor

aojea commented May 30, 2019

@BenTheElder I think that you fixed this with #573

@BenTheElder
Member

#478 adds an equivalent to #566 (comment) 😅
