fix kubeadm upgrade regression #70893
Conversation
@fabriziopandini: GitHub didn't allow me to request PR reviews from the following users: rdodev. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: fabriziopandini. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@rdodev fyi
/LGTM
@rdodev: changing LGTM is restricted to assignees, and only kubernetes/kubernetes repo collaborators may be assigned issues. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
}
etcdContainer := etcdPod.Spec.Containers[0]
for _, arg := range etcdContainer.Command {
	if arg == "--listen-client-urls=https://127.0.0.1:2379" {
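The check being discussed can be sketched as a standalone helper. This is a hypothetical, simplified version of the logic in the diff above: it operates on a plain `[]string` command line instead of a `corev1.Pod`, and the function name is illustrative, not kubeadm's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// isLegacyEtcdManifest reports whether an etcd static-pod command line looks
// like one generated by kubeadm v1.12, which configured etcd to listen for
// client connections on localhost only. Detection relies on the exact flag
// value kubeadm v1.12 emitted, as in the diff above.
func isLegacyEtcdManifest(command []string) bool {
	for _, arg := range command {
		if strings.TrimSpace(arg) == "--listen-client-urls=https://127.0.0.1:2379" {
			return true
		}
	}
	return false
}

func main() {
	v112 := []string{"etcd", "--listen-client-urls=https://127.0.0.1:2379"}
	v113 := []string{"etcd", "--listen-client-urls=https://127.0.0.1:2379,https://10.0.0.1:2379"}
	fmt.Println(isLegacyEtcdManifest(v112)) // true
	fmt.Println(isLegacyEtcdManifest(v113)) // false
}
```

As the reviewers note, matching on an exact flag string is fragile; it works here only because kubeadm fully owns the generated manifests.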
I guess there is no other way to determine if these pod manifests were created by kubeadm 1.12?
I'm not a fan of this solution, but considering it should go away next cycle I think it can be accepted (the same hack is used elsewhere in kubeadm itself).
Nevertheless, if there are any better ideas...
We probably should just merge on Monday unless there is further feedback.
Works for me.
In the future it may be handy to add a comment at the beginning of the generated YAML files, stating that they are autogenerated and which kubeadm version generated them. This would not only provide a starting point for issues like this, but also make users less tempted to tamper with the generated YAMLs directly, rather than editing the kubeadm config and rerunning it.
Very good point @rosti, please file an issue for it!
For 1.12 -> 1.13 there is no way.
For later versions an annotation seems OK, as long as we need it and/or want to maintain it.
Annotating which pods are created with what kubeadm version seems like a nice-to-have to me.
kind: Pod
metadata:
  annotations:
    kubeadmVersion: <version-from-client>

`version-from-client` should be the same as the output of `kubeadm version -o=short`
and be parsed using: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/version/version.go#L111
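The proposal above can be sketched in stdlib Go. Note the comment suggests apimachinery's `version.ParseSemantic` for the real implementation; the hand-rolled parsing below is only a dependency-free stand-in, and the annotation key and function name are taken from the hypothetical example above, not from kubeadm.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseKubeadmVersion extracts major/minor from an annotation value such as
// "v1.12.3" (the format of `kubeadm version -o=short`). A real implementation
// would use k8s.io/apimachinery/pkg/util/version instead.
func parseKubeadmVersion(v string) (major, minor int, err error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, 0, fmt.Errorf("unexpected version %q", v)
	}
	if major, err = strconv.Atoi(parts[0]); err != nil {
		return 0, 0, err
	}
	if minor, err = strconv.Atoi(parts[1]); err != nil {
		return 0, 0, err
	}
	return major, minor, nil
}

func main() {
	// Hypothetical pod annotations as proposed in the comment above.
	annotations := map[string]string{"kubeadmVersion": "v1.12.3"}
	major, minor, err := parseKubeadmVersion(annotations["kubeadmVersion"])
	if err != nil {
		panic(err)
	}
	// A manifest annotated with a v1.12.x version would be treated as legacy.
	fmt.Println(major == 1 && minor <= 12) // true
}
```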
issue created:
kubernetes/kubeadm#1231
/lgtm
/hold cancel
@fabriziopandini @neolit123 - this PR has
/hold
/lgtm cancel
We obviously missed that, but the odd part is that I didn't see CI failures.
Force-pushed from 8c84370 to 7f1b2a6
@dims @neolit123 Thanks for the hint!
/lgtm
/hold cancel
What type of PR is this?
/kind bug
What this PR does / why we need it:
To fix kubeadm v1.13 error when upgrading clusters created with kubeadm v1.12.
Which issue(s) this PR fixes:
Fixes kubernetes/kubeadm#1224
Special notes for your reviewer:
The error happens when connecting to etcd; this regression was introduced by #69486.
kubeadm v1.12 clusters are deployed with etcd listening on localhost only, while kubeadm v1.13 assumes etcd is listening both on localhost and on the advertise address.
This PR makes kubeadm v1.13 support both cases.
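The fallback described here can be sketched as follows. The function name and signature are hypothetical, not kubeadm's actual API; the point is only the endpoint selection: localhost for v1.12-style manifests, the advertise address otherwise.

```go
package main

import "fmt"

// etcdClientEndpoints sketches the fallback this PR introduces: if the etcd
// static pod was generated by kubeadm v1.12 (localhost-only listener), dial
// 127.0.0.1; otherwise use the node's advertise address, as v1.13 assumes.
func etcdClientEndpoints(legacyManifest bool, advertiseAddress string) []string {
	if legacyManifest {
		return []string{"https://127.0.0.1:2379"}
	}
	return []string{fmt.Sprintf("https://%s:2379", advertiseAddress)}
}

func main() {
	fmt.Println(etcdClientEndpoints(true, "10.0.0.1"))  // [https://127.0.0.1:2379]
	fmt.Println(etcdClientEndpoints(false, "10.0.0.1")) // [https://10.0.0.1:2379]
}
```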
Does this PR introduce a user-facing change?:
/sig cluster-lifecycle
/priority critical-urgent
/cc @timothysc
/cc @rdodev
/cc @neolit123
@kubernetes/sig-cluster-lifecycle-pr-reviews