Migrating to 1.8 with RBAC is incompatible #4163
Comments
Do the authorization errors persist in the log after the API server has completed startup and /healthz returns a 200? Some denials during server startup are normal as the authorization cache fills.
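To make that check concrete, here is a minimal sketch of querying /healthz in verbose mode and filtering for failing checks. The local insecure port 8080 and the sample output below are assumptions for illustration, not taken from this cluster; verbose /healthz prints one `[+]`/`[-]` line per check.

```shell
# From the master node (port/host are assumptions; adjust for your setup):
#   curl -s 'http://127.0.0.1:8080/healthz?verbose'
#
# Hypothetical sample of verbose output; the grep pulls out failing checks:
healthz_output='[+]ping ok
[+]etcd ok
[-]poststarthook/bootstrap-controller failed: reason withheld
healthz check failed'
printf '%s\n' "$healthz_output" | grep '^\[-\]'
```

A healthy server returns a plain `ok` from /healthz; the verbose form names each individual check, which is what narrows the failure down.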
It continues and the cluster is inoperable.
After upgrading, what does this show?
I'm not familiar with what kops does on upgrade with the add-on bindings.
@liggitt I manually ran the above yaml and it didn't help. The api-server is unavailable after the upgrade, so any of the kubectl commands fail.
kubelet permissions should not affect API server availability. I'm not sure how to debug further if the API server is unreachable. Do you have more apiserver logs that might be illuminating? @chrislovecnm any ideas of what else might be at play here?
@naveensrinivasan was RBAC already configured and working when the cluster was on
@liggitt not sure about the addons behaviour, but if performing an upgrade from
@KashifSaadat RBAC was already configured and working when the cluster was
Here are the log files: https://gist.github.com/naveensrinivasan/80eb10aa3bd2259139b48a6a78100357 I don't know exactly when I grabbed them. This is from the master, and I grabbed all the logs.
I am hitting the same issue after the upgrade, following a different installation method, and I am sure
Following this, the API server never comes up and the kube control plane is down.
@naveensrinivasan what does apiserver /healthz show while the API server is crashlooping in that state? Do you have the full apiserver manifest used, including all flags? Seeing this, which makes me suspect issues writing to etcd:
@mqasimsarfraz what is the output of a superuser in the
@liggitt Where can I find that output? Also, the following is what I can find related to /healthz in the API server logs:
formatted better, that shows:
the details for the failed hooks are available at these URLs:
Can't find anything useful from the URLs:
All of those point to etcd write errors/hangs. Did the etcd setup change during the upgrade? What are the flags passed to the apiserver?
Ahan, interesting. No, I haven't changed it, but let me try to check.
I noticed that etcd is not set up for etcd3, btw. Check, but I think you are still running etcd2.
You have storageBackend: etcd3, but you are not setting the etcd version in the manifest as required.
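For reference, a kops cluster spec that opts into etcd3 also needs the etcd version pinned on each etcd cluster. A hedged sketch of the relevant manifest section (the instance group name and the exact version number are placeholders, not taken from this thread):

```yaml
etcdClusters:
- name: main
  version: 3.0.17          # assumed version; must be set when using etcd3
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a   # placeholder instance group
- name: events
  version: 3.0.17
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a
```

If `version` is omitted while `storageBackend: etcd3` is set, the apiserver can end up pointed at a v3 backend that the running etcd does not actually serve, which matches the etcd write hangs seen above.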
@liggitt thanks for the pointer, for me it was
Thanks again!
I have it running as
@naveensrinivasan and is your etcd cluster an etcd3 cluster? What version is it running?
@liggitt It was running
Did you migrate the etcd data from the etcd2 to etcd3 stores? You cannot simply upgrade the etcd binary and switch to etcd3 mode. If you didn't do a migration, you should continue to run Kubernetes in etcd2 mode as long as you have v2 data (even against an etcd3 server).
Nope, I didn't migrate. I was trying to use etcd2 in
@chrislovecnm Would
You can continue to use etcd2 (or etcd3 in etcd2 mode) against 1.8 and 1.9.
How do you use etcd2 in etcd3?
Run etcd3 binaries and start the kube-apiserver with --storage-backend=etcd2. Kubernetes will continue to use the v2 API (which etcd3 still supports) and will have access to your old v2 data via it.
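Concretely, that flag lives in the kube-apiserver manifest. A minimal sketch of the relevant fragment (the binary path and the etcd endpoint are illustrative assumptions, not taken from this cluster):

```yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - /usr/local/bin/kube-apiserver          # path is an assumption
    - --etcd-servers=http://127.0.0.1:4001   # endpoint is an assumption
    - --storage-backend=etcd2                # stay on the v2 API and v2 data
```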
Thanks, I don't know if
Yes, remove the etcd3 line in your manifest. Or edit your cluster.
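For the `kops edit cluster` route, a hedged sketch of the spec change (the placement under `kubeAPIServer` is an assumption based on where the `storageBackend: etcd3` line mentioned earlier in this thread would live):

```yaml
spec:
  kubeAPIServer:
    storageBackend: etcd2   # or delete this line to fall back to the default
```

After editing, a `kops update cluster` (with `--yes` once reviewed) would roll the change out.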
I think the issue was I was using the
Thanks for submitting an issue! Please fill in as much of the template below as you can.

------------- BUG REPORT TEMPLATE --------------------

1. What kops version are you running? The command kops version will display this information.
   Version 1.8.0 (git-4876009bd)
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
   v1.7.7
3. What cloud provider are you using?
   aws
4. What commands did you run? What is the simplest way to reproduce this issue?
   kops update cluster
5. What happened after the commands executed?
6. What did you expect to happen?
   Upgrade the cluster to v1.8.6
7. Please provide your cluster manifest. Execute kops get --name my.example.com -oyaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else we need to know?
   v1.7.7 to v1.8.6 with RBAC turned on. kops master to upgrade. kops version: Version 1.8.0 (git-4876009bd).

We did run this yaml before migrating and it still didn't help.