
How can I configure an external kubectl to control the Vagrant Kubernetes cluster? #62

Closed
stevanikof opened this issue Jul 24, 2018 · 3 comments

Comments


stevanikof commented Jul 24, 2018

I deployed a master + 3 node K8s cluster (on macOS).
From the master node I copied the ~/.kube/config file and placed it in ~/.kube/config on my Mac.
Using Homebrew, I installed the kubectl package; this setup has worked before against Minikube.

I am not able to communicate with the cluster.

The master appears to have port 6443 mapped on the host-only adapter, but I can't connect (or netcat) to it.

AmedeeBulle (Member) commented:

Binding the port is not enough; you also need to tweak the K8s config file a bit.
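For reference, the manual tweak amounts to making sure the cluster's `server` entry in the copied kubeconfig points at an address/port that is actually reachable from the host. A hedged sketch of the relevant excerpt (the field names are standard kubeconfig; the address and cluster name here are assumptions based on this thread, not taken from the project):

```yaml
# Hypothetical excerpt of a kubeconfig copied from the master
# (~/.kube/config). Inside the VM the server address typically
# points at a VM-internal IP; from the host it must be changed
# to the reachable forwarded address, assumed here to be
# https://127.0.0.1:6443.
clusters:
- cluster:
    certificate-authority-data: <unchanged base64 CA data>
    server: https://127.0.0.1:6443
  name: kubernetes
```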

The easiest approach is to set MANAGE_FROM_HOST=true (in your .env.local or Vagrantfile) at provision time:

# To manage your cluster from the vagrant host, set the following variable
# to true
MANAGE_FROM_HOST=true

The kubeconfig file is then copied to admin.conf on the host during provisioning, so it should work out of the box:

pvanhaes@bommel $ vagrant status
Current machine states:

master                    running (virtualbox)
worker1                   running (virtualbox)
worker2                   running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
pvanhaes@bommel $ export KUBECONFIG=$(pwd)/admin.conf
pvanhaes@bommel $ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6443
KubeDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
pvanhaes@bommel $ kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
master.vagrant.vm    Ready     master    12m       v1.9.1+2.1.8.el7
worker1.vagrant.vm   Ready     <none>    7m        v1.9.1+2.1.8.el7
worker2.vagrant.vm   Ready     <none>    2m        v1.9.1+2.1.8.el7

gvenzl (Member) commented Nov 8, 2018

Hey @AmedeeBulle, do you think we can close this as "assumed answered"?

AmedeeBulle (Member) commented:
Yes, we can assume this is solved.
It can be reopened if further investigation is required.
