Rolling Upgrade 1.5 -> 1.6 Issues & Evaluation #2674
Possibly a duplicate discussion?

I believe they are related; I mentioned that one at the top, but I wanted to find a way to condense several kops-related issues and outline my findings.
I also attempted to add a 1.6 IG to a 1.5 cluster. The initialization task fails with:

And the kubelet fails to launch with the error:
Current cluster version: 1.5.2 (masters + nodes). Aside: given the backwards compatibility of Kubernetes, it seems like the safest way to upgrade a cluster (or at least what I'm attempting) is:
Closing, as others have discussed this and it's not particularly actionable.
I wanted to provide an analysis of a variety of upgrade issues people have been seeing, which I believe are directly related to:
- K8s dropping attributes: kubernetes/kubernetes#46073
- DNS RS issues: #2594
- Weave issues: #2366
Process:
Currently, with the kops 1.5 -> 1.6 rolling upgrade, my understanding is that:
Impact:
The `last-applied-configuration` annotation is at the heart of this. As Justin noted and [we see as well](https://github.com/kubernetes/kops/issues/2605#issuecomment-305650544):

- There are no `taints` on the 1st and 2nd masters, but since the 3rd master comes up with a 1.6 cluster, it does have them.
- Any DaemonSet updates (such as weave-net) are missing their `tolerations` section; since the `tolerations` were stripped, the 3rd master will never have the DaemonSet pod placed on it.

Potential Solution/Hack
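To make the mechanism concrete before discussing fixes, here is a minimal sketch of the field-stripping behavior described above. This is illustrative only, not real kubectl or API server code: the field names and the `KNOWN_FIELDS_1_5` set are invented simplifications of the idea that a pre-1.6 API server does not recognize `tolerations` as a first-class field and silently drops it at admission.

```python
# Illustrative sketch (NOT real apiserver code): a pre-1.6 server has no
# first-class `tolerations` field, so it drops the field at admission.
KNOWN_FIELDS_1_5 = {"image", "command"}  # assumption: 1.5 schema lacks `tolerations`

def apiserver_admit(manifest: dict, known_fields: set) -> dict:
    """Drop any field the (older) server does not recognize."""
    return {k: v for k, v in manifest.items() if k in known_fields}

def tolerates(pod_spec: dict, node_is_tainted: bool) -> bool:
    """A tainted node only accepts pods that carry a toleration (simplified)."""
    return not node_is_tainted or bool(pod_spec.get("tolerations"))

# 1.6-style DaemonSet pod spec with first-class tolerations:
manifest_1_6 = {"image": "weave:2.0", "tolerations": [{"operator": "Exists"}]}

# Applied against the still-1.5 API server, the field is stripped:
stored = apiserver_admit(manifest_1_6, KNOWN_FIELDS_1_5)

print("tolerations" in stored)        # False -- the field was dropped
print(tolerates(stored, True))        # False -- pod never lands on the tainted 1.6 master
print(tolerates(manifest_1_6, True))  # True  -- the intended behavior
```

The same shape of failure applies to the `taints` on the masters: fields that only exist in the newer schema get lost whenever the older server, or an apply against a stale `last-applied-configuration`, handles the object.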
I can see this happening with essentially every major change where features graduate into "real" Kubernetes-supported attributes, moving from annotations to attributes.

Due to the volume attachment for etcd, etc., I don't know that we could simply bring up a 4th node as the "new" master and force an election until that takes place; however, as a hack it may be possible to:
A more comprehensive solution, which would be broader than kops, might be to update the `kubectl.kubernetes.io/last-applied-configuration` annotation for the affected resources (the `taints` on the masters or the `tolerations` on the DaemonSets).
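One way such an annotation fix could look is sketched below. The helper name and the manifest shape are invented for illustration, and plain dicts stand in for a real client; in practice this would be a patch against the live object via the API. The idea is to re-insert the stripped `tolerations` into both the live spec and the `kubectl.kubernetes.io/last-applied-configuration` annotation, so a later `kubectl apply` does not merge them away again.

```python
import json

LAST_APPLIED = "kubectl.kubernetes.io/last-applied-configuration"

def restore_tolerations(live_obj: dict, tolerations: list) -> dict:
    """Hypothetical helper: patch tolerations back into both the pod spec
    and the last-applied-configuration annotation of a DaemonSet dict."""
    spec = (live_obj.setdefault("spec", {})
                    .setdefault("template", {})
                    .setdefault("spec", {}))
    spec["tolerations"] = tolerations

    # The annotation stores the previously-applied manifest as a JSON string;
    # rewrite it so the next three-way merge sees tolerations as intended.
    annotations = live_obj.setdefault("metadata", {}).setdefault("annotations", {})
    last_applied = json.loads(annotations.get(LAST_APPLIED, "{}"))
    last_spec = (last_applied.setdefault("spec", {})
                             .setdefault("template", {})
                             .setdefault("spec", {}))
    last_spec["tolerations"] = tolerations
    annotations[LAST_APPLIED] = json.dumps(last_applied)
    return live_obj

# A weave-net-style DaemonSet whose tolerations were stripped on 1.5:
ds = {
    "metadata": {"annotations": {
        LAST_APPLIED: json.dumps({"spec": {"template": {"spec": {}}}})}},
    "spec": {"template": {"spec": {"containers": [{"image": "weave:2.0"}]}}},
}

fixed = restore_tolerations(ds, [{"operator": "Exists", "effect": "NoSchedule"}])
print(fixed["spec"]["template"]["spec"]["tolerations"])
print(json.loads(fixed["metadata"]["annotations"][LAST_APPLIED])
      ["spec"]["template"]["spec"]["tolerations"])
```

Alternatively, once the API server itself is on 1.6, re-applying a full manifest that includes the `tolerations` should repopulate both the field and the annotation in one step.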