Setting CoreDNS version to 1.8.3 did not update its RBAC #11450
Comments
Fixed with #11459.
@dntosas Is there a workaround? We see coredns not coming up when upgrading from 1.18 to 1.19.
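A possible manual workaround, sketched under the assumption that the failure is the EndpointSlice permission the CoreDNS 1.8.x manifests require (the ClusterRole name `system:coredns` comes from the upstream CoreDNS manifest and may differ in a kops-managed cluster): add the missing rule by hand, for example via `kubectl edit clusterrole system:coredns`.

```yaml
# Assumed stopgap, not a fix confirmed in this thread: append this rule to
# the coredns ClusterRole. CoreDNS 1.8.x watches EndpointSlices and cannot
# serve cluster DNS without this permission.
- apiGroups: ["discovery.k8s.io"]
  resources: ["endpointslices"]
  verbs: ["list", "watch"]
```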
@dntosas We used the latest version available via brew:
I'll try to get the error message.
We had a custom coredns config that didn't include …
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
@olemarkus: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
1. What `kops` version are you running? The command `kops version` will display this information.

kops 1.19.2 built from the release-1.19 branch
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a `kops` flag.

kubernetes 1.19.10
3. What cloud provider are you using?
aws
4. What commands did you run? What is the simplest way to reproduce this issue?
kops replace -f $my_cluster_config.yaml
kops update cluster
kops update cluster --yes
the config part that I think matters:
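A minimal sketch of what such a fragment could look like, assuming the image was pinned through the cluster spec's `kubeDNS` block (the values here are assumptions, not the reporter's actual config):

```yaml
# Sketch of the assumed kops cluster-spec fragment pinning CoreDNS 1.8.3.
spec:
  kubeDNS:
    provider: CoreDNS
    coreDNSImage: coredns/coredns:1.8.3
```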
but the problem is that the coredns `ClusterRole` was not updated. I looked at the upstream config and it was different from what was in my kube cluster, which was still on the 1.6.3 style I had before.
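A sketch of the likely difference, assuming it is the rule upstream added when CoreDNS 1.8.x started reading EndpointSlices (these are not the reporter's actual manifests):

```yaml
# Upstream-style CoreDNS 1.8.x ClusterRole rules.
rules:
  - apiGroups: [""]
    resources: ["endpoints", "services", "pods", "namespaces"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  # Present in the 1.8.x manifests, absent from a 1.6.3-style role:
  - apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
    verbs: ["list", "watch"]
```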
How is the coredns release managed? Could this `ClusterRole` be handled by kops?

5. What happened after the commands executed?
Kubernetes 1.19 was rolled out and most of the pods worked, but eventually coredns became inoperable and I lost DNS.
6. What did you expect to happen?
Setting the coredns version should have synced all the parameters.
7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else we need to know?