Add host.docker.internal to certSAN list #566
Comments
@Ilyes512 bear in mind that you can connect directly to the cluster API from the new container, i.e. by obtaining the container/node IP address so you can avoid using host.docker.internal. There is one patch that allows you to get the …
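A minimal sketch of that direct-connection approach, assuming the default control-plane container name kind-control-plane and the kubeadm default API server port 6443:

```sh
# Look up the control-plane container's IP on its Docker network
# (the node name "kind-control-plane" is the kind default).
NODE_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-control-plane)

# Reach the API server on the in-cluster port instead of host.docker.internal.
curl -k "https://${NODE_IP}:6443/version"
```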
+1, or you can add a new container directly to the control-plane's network stack by passing …
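A sketch of that shared-network-stack variant; Docker's --network container:&lt;name&gt; flag attaches a new container to an existing container's network namespace (the curl image is just an example client):

```sh
# Run a client inside the control-plane's network namespace, so the
# API server is reachable on localhost:6443 from within that container.
docker run --rm --network container:kind-control-plane \
  curlimages/curl -k https://localhost:6443/version
```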
@aojea:
How would I do this? I know I can create a new network and add both the control-plane and the container to it, so I can connect to the control-plane by hostname. I'm not sure how I could do that without this extra step, though.
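For reference, the extra step described above might look like this (the network name is hypothetical):

```sh
# Put the control-plane and the client container on a shared user-defined
# network; such networks provide DNS resolution by container name.
docker network create kind-clients
docker network connect kind-clients kind-control-plane
docker run --rm --network kind-clients \
  curlimages/curl -k https://kind-control-plane:6443/version
```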
@Ilyes512 the kubeconfig that you have on your host is the same one the nodes have, but with the internal IP address and port replaced by localhost and the forwarded port. You can obtain it with …
just like:
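A plausible sketch of that command, assuming the default kind-control-plane node name and the standard kubeadm admin kubeconfig path:

```sh
# Copy the in-cluster admin kubeconfig off the node; it points at the node's
# internal IP, so it is usable from a container that can reach that IP.
docker exec kind-control-plane cat /etc/kubernetes/admin.conf > node-kubeconfig.yaml
kubectl --kubeconfig node-kubeconfig.yaml get nodes
```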
Should I close this?
@BenTheElder I think that you fixed this with #573
#478 adds an equivalent to #566 (comment) 😅
What would you like to be added:
I would like to add host.docker.internal to the kubeadm certSAN list. This is useful for both macOS and Windows.
I think I found the place that would need to be changed: see SourceGraph.
Why is this needed:
So I can connect to the Kind cluster from within another container (same host, but not inside the cluster).
At this point you get a warning if you change https://localhost:&lt;port&gt; to https://host.docker.internal:&lt;port&gt;. There are at this point two ways to get around the above error; one is --insecure-skip-tls-verify.
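A rough illustration of that workaround; &lt;port&gt; stands for whatever host port kind forwarded for the API server:

```sh
# Point kubectl at host.docker.internal and skip certificate verification,
# since host.docker.internal is not in the server certificate's SANs yet.
kubectl --server=https://host.docker.internal:<port> \
  --insecure-skip-tls-verify get nodes
```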
@BenTheElder asked me to create an issue for the above; see our Slack chat: https://kubernetes.slack.com/archives/CEKK1KTN2/p1558816949030500
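For reference, a sketch of how a user could already inject the SAN themselves through kubeadmConfigPatches in the kind config; the apiVersion and patch-merge behavior vary between kind/kubeadm releases, so treat this as an approximation rather than the project's recommended fix:

```sh
# Create a cluster whose API server certificate also covers host.docker.internal.
# Note: depending on the release, this patch may replace rather than extend
# kind's default certSANs list.
cat <<'EOF' > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  apiServer:
    certSANs:
    - host.docker.internal
EOF
kind create cluster --config kind-config.yaml
```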