
Document how to create dashboard & other key components #202

Closed
justinsb opened this issue Jul 25, 2016 · 4 comments
@justinsb
Member

No description provided.

justinsb added the P0 label Jul 25, 2016
justinsb modified the milestone: 1.3.0 Jul 29, 2016
@azell

azell commented Aug 2, 2016

I installed a fresh k8s 1.3.4 cluster on AWS and found that the dashboard was installed automatically. However, the dashboard pod has errors:

Starting HTTP server on port 9090
Creating API server client for https://100.64.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: the server has asked for the client to provide credentials

I can hit the master over HTTPS, though the /ui URL shows:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}

Maybe a bad node?
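
A minimal sketch of how to inspect the dashboard from kubectl (the pod name is a placeholder; the service name is taken from the 503 message above):

# Find the dashboard pod and read its logs
kubectl --namespace=kube-system get pods | grep dashboard
kubectl --namespace=kube-system logs <dashboard-pod-name>
# The 503 means the service has no ready endpoints; describe it to confirm
kubectl --namespace=kube-system describe svc kubernetes-dashboard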

@azell

azell commented Aug 3, 2016

This error might disappear after I rm -fr $HOME/.terraform.d/. However, daemonsets that were running on a destroyed cluster (newrelic-agent) were automatically running again when I re-created the cluster. Is kops caching that info somewhere?

After running terraform to create the cluster:

kubectl --namespace=kube-system get po
NAME                                                   READY     STATUS    RESTARTS   AGE
etcd-server-events-ip-172-20-83-15.ec2.internal        1/1       Running   1          19m
etcd-server-ip-172-20-83-15.ec2.internal               1/1       Running   1          18m
kope-aws-ip-172-20-70-54.ec2.internal                  1/1       Running   0          9s
kope-aws-ip-172-20-83-15.ec2.internal                  1/1       Running   1          18m
kube-apiserver-ip-172-20-70-54.ec2.internal            1/1       Running   3          18s
kube-apiserver-ip-172-20-83-15.ec2.internal            1/1       Running   3          17m
kube-controller-manager-ip-172-20-83-15.ec2.internal   1/1       Running   1          18m
kube-dns-v14-rr0ca                                     2/3       Running   2          17m
kube-proxy-ip-172-20-83-15.ec2.internal                1/1       Running   1          18m
kube-proxy-ip-172-20-88-200.ec2.internal               1/1       Running   0          17m
kube-proxy-ip-172-20-88-201.ec2.internal               1/1       Running   0          17m
kube-proxy-ip-172-20-88-202.ec2.internal               1/1       Running   0          17m
kube-scheduler-ip-172-20-70-54.ec2.internal            1/1       Running   0          9s
kube-scheduler-ip-172-20-83-15.ec2.internal            1/1       Running   1          17m
newrelic-agent-1chhy                                   1/1       Running   0          18m
newrelic-agent-9jc8s                                   1/1       Running   0          18m
newrelic-agent-hlwvy                                   1/1       Running   0          18m
newrelic-agent-sql9y                                   1/1       Running   1          18m

I did not install the newrelic-agent daemonset, though it did exist on a previous (destroyed) cluster with the same name and S3 bucket.
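
A rough sketch for checking whether anything from the old cluster survived, assuming kops tags its AWS resources with a KubernetesCluster tag and keeps state in an S3 state store (the tag key, bucket, and cluster name are assumptions/placeholders):

# Inspect the kops state store for the cluster
aws s3 ls s3://<state-store-bucket>/<cluster-name>/ --recursive
# Look for leftover EBS volumes (e.g. etcd disks) tagged with the old cluster name
aws ec2 describe-volumes --filters Name=tag:KubernetesCluster,Values=<cluster-name>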

@justinsb
Member Author

justinsb commented Aug 3, 2016

I've written up how to install addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md
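
As a rough example of the pattern that doc describes (the manifest path below is a placeholder, not a real file):

# Add-ons are plain manifests applied with kubectl, e.g.:
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/<addon>/<version>.yaml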

@azell I think the problem is that the disks are persisting: if you use the terraform workaround, you still need to kops delete cluster to delete the disks. With terraform 0.7 released, I'm going to see if we can get rid of the workaround, or document the need to kops delete cluster: #256

Going to close this issue in favor of #256. If problems persist with the dashboard, please do open an issue for that as well (or ping me directly!)
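
For reference, a minimal sketch of that cleanup when using the terraform workaround (the cluster name and state store are placeholders):

# Tear down the terraform-managed resources
terraform destroy
# Also remove the persisted disks and kops state for the cluster
kops delete cluster --name <cluster-name> --state s3://<state-store> --yes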

justinsb closed this as completed Aug 3, 2016
@azell

azell commented Aug 3, 2016

Great, will use kops delete cluster.
