
Cannot connect to tiller using helm v2.9.0: Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp [::1]:8080: connect: connection refused #3985

Closed
DonMartin76 opened this issue Apr 27, 2018 · 13 comments · Fixed by #3990


@DonMartin76

I have a fresh 1.9.6 Azure AKS cluster (one node). I ran helm init --upgrade --force-upgrade against it to get it to 2.9.0, admittedly without checking whether there was a tiller on it before. I have set KUBECONFIG to the configuration file of this cluster and made sure I can connect to it. Everything works, but helm does not; or rather, helm version works, but apparently nothing else does.

I have tried this with helm 2.9.0 (on macOS) and 2.8.2 (on Linux/in Docker), and both behave the same, so this may be an issue with AKS, with helm, or with both. The error messages come from tiller, which is why I am filing the issue here:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-09T21:51:54Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

$ helm version
Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}

$ helm ls
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp [::1]:8080: connect: connection refused

Checking logs:

$ kubectl logs -n kube-system tiller-deploy-f7bd48bf-tfk8p 
[main] 2018/04/27 14:26:58 Starting Tiller v2.9.0 (tls=false)
[main] 2018/04/27 14:26:58 GRPC listening on :44134
[main] 2018/04/27 14:26:58 Probes listening on :44135
[main] 2018/04/27 14:26:58 Storage driver is ConfigMap
[main] 2018/04/27 14:26:58 Max history per release is 0
[storage] 2018/04/27 14:27:36 listing all releases with filter
[storage/driver] 2018/04/27 14:27:36 list: failed to list: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp [::1]:8080: connect: connection refused

And the env vars:

$ kubectl exec -n kube-system tiller-deploy-f7bd48bf-tfk8p -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=tiller-deploy-f7bd48bf-tfk8p
TILLER_NAMESPACE=kube-system
TILLER_HISTORY_MAX=0
KUBERNETES_DASHBOARD_SERVICE_HOST=10.0.88.104
KUBERNETES_DASHBOARD_SERVICE_PORT=80
KUBERNETES_DASHBOARD_PORT=tcp://10.0.88.104:80
KUBE_DNS_PORT_53_UDP_ADDR=10.0.0.10
KUBERNETES_SERVICE_PORT_HTTPS=443
HEAPSTER_SERVICE_HOST=10.0.52.123
HEAPSTER_PORT_80_TCP_ADDR=10.0.52.123
KUBE_DNS_PORT=udp://10.0.0.10:53
KUBE_DNS_PORT_53_TCP=tcp://10.0.0.10:53
TILLER_DEPLOY_PORT_44134_TCP=tcp://10.0.239.195:44134
KUBE_DNS_SERVICE_PORT=53
KUBE_DNS_PORT_53_UDP=udp://10.0.0.10:53
KUBERNETES_DASHBOARD_PORT_80_TCP_ADDR=10.0.88.104
KUBERNETES_SERVICE_PORT=443
HEAPSTER_SERVICE_PORT=80
HEAPSTER_PORT_80_TCP_PORT=80
KUBE_DNS_SERVICE_HOST=10.0.0.10
KUBE_DNS_PORT_53_TCP_PROTO=tcp
TILLER_DEPLOY_PORT_44134_TCP_ADDR=10.0.239.195
TILLER_DEPLOY_PORT_44134_TCP_PORT=44134
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
HEAPSTER_PORT_80_TCP_PROTO=tcp
KUBE_DNS_SERVICE_PORT_DNS=53
KUBE_DNS_PORT_53_UDP_PORT=53
KUBE_DNS_PORT_53_TCP_ADDR=10.0.0.10
TILLER_DEPLOY_PORT_44134_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
HEAPSTER_PORT=tcp://10.0.52.123:80
HEAPSTER_PORT_80_TCP=tcp://10.0.52.123:80
KUBE_DNS_PORT_53_TCP_PORT=53
KUBERNETES_DASHBOARD_PORT_80_TCP=tcp://10.0.88.104:80
KUBERNETES_DASHBOARD_PORT_80_TCP_PORT=80
TILLER_DEPLOY_PORT=tcp://10.0.239.195:44134
TILLER_DEPLOY_SERVICE_PORT_TILLER=44134
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBE_DNS_SERVICE_PORT_DNS_TCP=53
KUBERNETES_DASHBOARD_PORT_80_TCP_PROTO=tcp
TILLER_DEPLOY_SERVICE_PORT=44134
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
KUBE_DNS_PORT_53_UDP_PROTO=udp
TILLER_DEPLOY_SERVICE_HOST=10.0.239.195
HOME=/tmp

Ideas? This felt like a very vanilla use case.
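
For context: the localhost:8080 in the errors above is the telltale sign. When no service-account token is mounted into the pod, the in-cluster configuration cannot be built and the Kubernetes client falls back to clientcmd's default cluster address. A minimal Go sketch of that fallback using client-go (illustrative only, not Tiller's exact code path):

package main

import (
    "fmt"

    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // In-cluster config requires the token that automountServiceAccountToken
    // normally places under /var/run/secrets/kubernetes.io/serviceaccount.
    cfg, err := rest.InClusterConfig()
    if err != nil {
        // No token and no kubeconfig: clientcmd falls back to its default
        // cluster, which is http://localhost:8080.
        cfg, err = clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
            clientcmd.NewDefaultClientConfigLoadingRules(),
            &clientcmd.ConfigOverrides{ClusterDefaults: clientcmd.DefaultCluster},
        ).ClientConfig()
        if err != nil {
            panic(err)
        }
    }
    fmt.Println(cfg.Host) // http://localhost:8080 in this failure mode
}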

@DonMartin76
Author

Okay, it seems to be this: #2464

This one-liner by @johnhamelink made it work for me as well:

kubectl -n kube-system patch deployment tiller-deploy -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'

Is this an issue with AKS, or with Helm? Looking at what helm init --dry-run --debug produces, it certainly looks like an issue with Helm, as automountServiceAccountToken: false is in the deployment YAML.
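
For reference, the relevant fragment of that helm init --dry-run --debug output looks roughly like this (trimmed to the fields that matter; everything else omitted):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tiller-deploy
  namespace: kube-system
spec:
  template:
    spec:
      automountServiceAccountToken: false
      containers:
      - name: tiller
        image: gcr.io/kubernetes-helm/tiller:v2.9.0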

@bacongobbler
Member

bacongobbler commented Apr 27, 2018

According to the code, if no service account is provided to helm init, it does not auto-mount the service account token: https://github.com/kubernetes/helm/blob/4d519a741dd43f8f0c175228dee9e9df951a7c5c/cmd/helm/installer/install.go#L179
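
Paraphrasing the linked line (names approximate, not verbatim from install.go):

    // helm init only mounts the token when a --service-account was given:
    automount := opts.ServiceAccount != ""
    d.Spec.Template.Spec.AutomountServiceAccountToken = &automount

which would explain why a bare helm init produces a Tiller that cannot reach the API server.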

Can you try again with v2.8.2 and see if that works as described? #3784 was a recent 2.9 feature, so this could be a regression.

EDIT: just saw that you did try this with 2.8.2. Hmm. Not sure what the fix is here then :(

@johnhamelink

@DonMartin76 For me the issue was to do with how Terraform spins up AKS clusters: hashicorp/terraform-provider-kubernetes#38

@djzager

djzager commented Apr 27, 2018

@bacongobbler your comment is an interesting one. If you look at my comment in the referenced issue, our Travis CI jobs started running into this earlier today (we are installing the latest released version of helm). But I haven't been able to reproduce it locally with helm version 2.8.1.

@Chili-Man

Chili-Man commented Apr 27, 2018

We're running into this same issue on a kops-launched Kubernetes (1.9.7) cluster on AWS, but it was working fine with helm 2.8.2.

Can confirm that running:

kubectl -n kube-system patch deployment tiller-deploy -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'

fixes the issue for helm 2.9.0.

@bacongobbler
Member

another workaround for the time being:

helm init --service-account default
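
Presumably this works because, per the install logic quoted above, supplying any service account makes Tiller's pod mount the token. For a Tiller that is already installed, the upgrade form should do the same (assuming the flags combine the way they do elsewhere in this thread):

helm init --upgrade --service-account default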

@bacongobbler
Member

If you would be so kind as to test #3990, that would be appreciated. Seems like there was a regression in 2.9.

@bacongobbler
Member

alternative PR: #3991

@DonMartin76
Author

DonMartin76 commented Apr 28, 2018

> If you would be so kind as to test #3990, that would be appreciated. Seems like there was a regression in 2.9.

@bacongobbler I will try to do that, but I haven't yet built helm myself, so I can't promise it will work out directly - or are there binaries somewhere?

@bacongobbler
Member

@DonMartin76 because the patch has been merged into master, you can test it by following the "From Canary" section in the docs. :)
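
For anyone else testing: per the Helm v2 install docs of that era, the canary route goes roughly like this (tarball URL and --canary-image flag as documented then; swap linux-amd64 for your platform):

curl -LO https://kubernetes-helm.storage.googleapis.com/helm-canary-linux-amd64.tar.gz
tar -zxvf helm-canary-linux-amd64.tar.gz
./linux-amd64/helm init --upgrade --canary-image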

@huangyuqi

The issue can be fixed just by executing:
kubectl -n kube-system patch deployment tiller-deploy -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'

@bacongobbler
Member

It's also been fixed in 2.9.1. :)

@gdville

gdville commented Feb 19, 2022

error: failed to create serviceaccount: Post "http://localhost:8080/api/v1/namespaces/kube-system/serviceaccounts?fieldManager=kubectl-create": dial tcp 127.0.0.1:8080: connect: connection refused
