document how to upgrade a non-selfhosted kubeadm cluster #278
@mikedanese do you think you would be able to write up some notes on this please? I'm happy to turn rough notes into a docs PR if that helps.

Don't we need to rig some form of `--upgrade` basic-manifest drop into kubeadm? Otherwise it's a hodge-podge of downloading YAMLs. So here is how I upgraded. (updated)

@pipejakob ^ Were your tests actually testing 1.7.0? Either way, I think we can stub out a quick upgrade by copying `init` and trimming it down to what's listed above.
How did you generate new manifests?

Ran my hijacked […]

Hmm, we might be able to make that cleaner. What if we had a `kubeadm reset --partial` which didn't delete the certs or etcd data?

Why not just copy the reset cmd, trim it down, and rename it to update/upgrade?

What's the plan here @timothysc @mikedanese @pipejakob?
So here are my thoughts: […]
Not sure I follow the question. Are you asking if we have any automated e2e tests for 1.7, or if I was using 1.7 when I tested a self-hosted kubeadm initialization (which I mentioned in Slack)? For the former, we have CI tests that always check out […]. For the latter, I was using the latest stable Debian packages, so I believe 1.6.4.

The thing that I don't particularly like about doing upgrades via a partial reset + init is that users have to remember all of the init flags they used the first time around if they want their cluster to remain the same (but upgraded). As a user, I would much prefer a UX in line with @lukemarsden's original proposal of […]. Of course, with two days until code freeze, beggars can't be choosers, but I am interested in potentially prototyping in-place manifest changes to swap the version strings and modify control plane CLI args as needed. @timothysc I don't want to step on any toes -- were your comments on the issue just comments, or were you also looking to code/own this?

In my opinion, having a […]

I'm jumping on this today. Please let me know if there are any parallel efforts. My first goal is to run through the kubeadm code and attempt manual upgrades, to make sure I understand all of the moving parts before chiming in on which of the different approaches seems the most palatable. At first glance, if we're going with an approach of "just run `kubeadm init` again," then it seems like we need to wipe out not only the pod manifests but also the other kubeconfig files in […].

/assign

(Though I suppose if you specify an alternate […])
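A manual equivalent of that partial cleanup might look like the following sketch (my reconstruction, assuming the default /etc/kubernetes layout of kubeadm 1.6; it deliberately keeps the certificates and etcd data that a full `kubeadm reset` would wipe):

```bash
# Hypothetical "partial reset": clear generated manifests and kubeconfig files,
# but keep certificates and etcd data so the cluster identity survives.
sudo rm -f /etc/kubernetes/manifests/*.yaml   # static control plane pod manifests
sudo rm -f /etc/kubernetes/*.conf             # admin.conf, kubelet.conf, etc.
# Intentionally preserved:
#   /etc/kubernetes/pki  (CA and component certificates)
#   /var/lib/etcd        (etcd data)
```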
As long as the apiserver address/port don't change, I've so far had good luck with this approach (which doesn't require using a separate, clean host to generate manifests that need to be manually copied over):
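Going by the commit messages quoted later in this thread, the elided command was presumably along these lines (the version string is an illustrative placeholder):

```bash
# Re-run init in place against the existing cluster state, pinning the target
# version. Preflight checks must be skipped because the node is already
# provisioned from the original installation.
sudo kubeadm init --kubernetes-version v1.7.0 --skip-preflight-checks
```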
The […], which is expected, since the master node already has the taint from the original initialization. If we make the […]. However, even after getting this error and exiting with a return code of 1, all of the control plane pods are healthy and fully upgraded to the new version. My other nodes are intact and […]
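For reference, a quick way to inspect the taint in question (assuming the default master taint that kubeadm 1.6 applies during init):

```bash
# Show the taints on the master node; kubeadm 1.6 adds
# node-role.kubernetes.io/master:NoSchedule during `kubeadm init`.
kubectl describe node <master-node-name> | grep -i taints
```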
Mmm, @pipejakob, why would you change --api-server-advertise/--apiserver-bind-port (edited, because @pipejakob made another comment ;) ) if you're upgrading? If we go through all the possible things that can change, we might as well just say: deploy a new cluster, move your jobs, done. Maybe that raises a point, especially with major changes like RBAC in 1.6: what is an upgrade?

So I have done exactly what @timothysc did. Prior to 1.6, I built a 1.6.alpha control plane + underlay and just replaced them on top of a running cluster, then pointed the manifests at a newer image and restarted the kubelet. This was easy, as the static manifests for the control plane did not change much. But an add-on like weave, which wasn't updated for RBAC, failed of course. So, depending on how you see an upgrade, it's either kinda easy (the 1.6.4 => 1.6.5 case, probably) or harder than you think, depending on the new release (1.6.x => 1.7.0, with possibly new features or config changes crippling running deployments).
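A minimal sketch of that manual technique, assuming the default static manifest directory and a systemd-managed kubelet (the image tags are illustrative):

```bash
# Bump the control plane image tags in the static pod manifests,
# then restart kubelet so the changes are picked up promptly.
cd /etc/kubernetes/manifests
sudo sed -i 's/v1.6.4/v1.6.5/g' kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
sudo systemctl restart kubelet
```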
@coeki My point isn't that I expect cluster owners to want to change it, but quite the opposite. My point is that they may have initialized their cluster long enough ago that they have forgotten exactly what flags they used in order to get running in the first place. They might have had a special need to not use the default IP address detected, or to use a different port, etc. Now that they're ready to upgrade, if we go the route of asking them to just […]. My stretch goal of trying to offer a […]. But we'll see if there's enough time to throw that together before the code freeze. The solution of […]
Actually, I can't think of a great reason for removing the original manifests before upgrading. There may be an edge case we want to avoid where kubelet wouldn't notice that the manifests had changed without a full removal and recreation, but I don't think that's the case (do we support OSes where you could potentially disable mtime on the hosting filesystem?). I'd like to believe that kubelet is already very resilient to cases where static manifests are changed in-place. With a minor patch to tolerate duplicate taints, I'm seeing completely successful upgrades and downgrades between 1.6.x and 1.7.x by just doing:
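Presumably the same command in both directions, going by the PR descriptions quoted below (version strings illustrative):

```bash
# Upgrade the control plane in place:
sudo kubeadm init --kubernetes-version v1.7.0 --skip-preflight-checks
# Downgrade uses the same mechanism with an older version string:
sudo kubeadm init --kubernetes-version v1.6.4 --skip-preflight-checks
```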
It's possible that there could be a race condition between kubelet restarting the static pods and kubeadm's check of the control plane health, and kubeadm might erroneously report success when actually the manifest changes haven't taken effect yet. It might be useful to double-check the versions when kubeadm checks the control plane health, if it doesn't already. Our upgrade documentation will very likely have other sanity-checking steps after the upgrade anyway, like running […].

Also, @coeki, I forgot to address this point, but I should update the issue description: this isn't the long-term plan for kubeadm upgrades. There's a separate design doc and effort around the proper way to support upgrades, but its implementation slipped for this release cycle. We've been discussing in the SIG Cluster Lifecycle meetings that now that kubeadm is considered Beta as of 1.6, it needs to have some sort of documented ability to upgrade kubeadm clusters to 1.7 when it gets released. So, this issue is capturing the effort of at least having a bandaid to allow users to upgrade from 1.6.x to 1.7.x until better upgrade support is available. Hope that clears some things up.
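Illustrative examples of such sanity checks (my sketch; the thread doesn't prescribe specific commands):

```bash
# Confirm client and server versions after the upgrade.
kubectl version --short
# Nodes report their kubelet versions; control plane pods should be Running
# on the new images.
kubectl get nodes
kubectl get pods -n kube-system -o wide
```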
@pipejakob OK, I see: reusing `kubeadm init`, having no manifests whatsoever, but generating new ones by specifying a newer version to kubeadm. Probably in most cases the manifests won't change much and just get overwritten, none the wiser (well, possibly a […]). But we need to back up the old ones first (so no […]). What I was probably trying to say is that we need to define an upgrade strategy rather than just a way, because you probably also need a path to revert to the old version if things break. This tripped us up on multiple levels during the 1.6.0 release. So we need to retain older versions (images, debs/rpms, and manifests). Then your plan seems a way to go. I'll do some tests.
That's a good point about backups. I haven't thoroughly tested, but I believe a prestep of:
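(The exact commands were elided; a minimal reconstruction, assuming only the static pod manifests need saving, might be:)

```bash
# Back up the current control plane manifests before re-running init.
sudo cp -a /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
```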
and a rollback plan of:
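(likewise elided; a matching sketch:)

```bash
# Restore the saved manifests; kubelet will recreate the old control plane.
sudo rm -rf /etc/kubernetes/manifests
sudo mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests
sudo systemctl restart kubelet
```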
might be all we need (although a more inclusive copy of all of […])
My PR to not duplicate taints is LGTMed and sitting in the submit-queue. I'm going to put together a quick-and-dirty draft of the upgrade steps so that anyone interested in helping can test them out.
@pipejakob Another concern of mine is this line. Have you tested that it works? Basically, I had time to make the certs and kubeconfig phases idempotent, but not the rest. The fourth phase (apiconfig) right now will be better with your PR, but still will […]. We might be able to bugfix those […]
Sadly I'm out-of-pocket atm, so I'm not going to have time to really dig in for a couple of days, but I can review the instructions a little later.
@luxas Good catch, you're totally right. I had been doing full testing, but didn't pay attention to the exit code, and the last line printed had seemed innocent enough: […]

I had originally interpreted it as "I'm skipping this step because it already exists," but the process returns 1, so it's clearly an error (and should probably indicate so in the message). This is after the manifests get updated, though, so all of the other signs of the cluster being upgraded and healthy were green. Expect a few more PRs from me to make these remaining steps idempotent. I think there's a good argument that this could be considered bug fixing and shouldn't be affected by the code freeze.
@pipejakob kubernetes/kubernetes#46819 is a step forward, but doesn't solve all the problems. Since the bootstrap configmap is created before the addons are applied, the addons weren't updated. We still have to make the following lines idempotent: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/init.go#L250-L269 (and kubernetes/kubernetes#46819 is indeed a step in that direction). I mentioned this in the #sig-api-machinery Slack channel, and we might be able to make the addons phase upgrade the manifests in case they already exist.
@luxas Totally agreed that this doesn't fully fix everything yet. I'm still working on a few more PRs to address the other steps that aren't idempotent yet (writing unit tests is taking the longest, because most of this code completely lacks them). Does the plan to make all of these steps idempotent sound solid? Do any of the steps (besides addons, like you mentioned) need the behavior of CreateOrUpdate instead of just CreateIfNotExists? The fact that everything except addons seemed to be successfully upgraded even when […]
@pipejakob I'll have three PRs up for review soon... stay tuned
I have things locally in one dirty tree, now working on cleaning up, writing some small tests and sending PRs 👍
This helps enable a graceful upgrade/downgrade process between 1.6.x and 1.7.x kubeadm clusters (although no guarantees outside of that range) by doing:

    $ kubeadm init --kubernetes-version <version> --skip-preflight-checks

Without this change, the command fails with an error that the node taint is duplicated. This is part of kubernetes/kubeadm#278
Automatic merge from submit-queue

kubeadm: don't duplicate master taint if it already exists.

**What this PR does / why we need it**: This helps enable a graceful upgrade/downgrade process between 1.6.x and 1.7.x kubeadm clusters (although no guarantees outside of that range) by doing:

    $ kubeadm init --kubernetes-version <version> --skip-preflight-checks

Without this change, the command fails with an error that the node taint is duplicated. This is part of kubernetes/kubeadm#278

**Release note**:

```release-note
NONE
```

Fixes: kubernetes/kubeadm#288
Ignore errors for duplicates when creating service accounts. kubernetes/kubeadm#278
Automatic merge from submit-queue (batch tested with PRs 46787, 46876, 46621, 46907, 46819)

kubeadm: Only create bootstrap configmap if not exists.

**What this PR does / why we need it**: The fact that this method was not idempotent was breaking kubeadm upgrades. kubernetes/kubeadm#278

**Release note**:

```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46897, 46899, 46864, 46854, 46875)

kubeadm: Idempotent service account creation.

**What this PR does / why we need it**: During `kubeadm init`, ignore errors for duplicates when creating service accounts. kubernetes/kubeadm#278

Fixes: kubernetes/kubeadm#288

**Release note**:

```release-note
NONE
```
In-place upgrades are supported between 1.6 and 1.7 releases. Fixes kubernetes/kubeadm#278
In-place upgrades are supported between 1.6 and 1.7 releases. Rollback instructions to come in a separate commit. Fixes kubernetes/kubeadm#278
Fixed by kubernetes/website#3999.
Since kubeadm was promoted to Beta in 1.6, which implies forwards-compatibility, we need a plan for cluster owners to be able to upgrade their 1.6.x kubeadm clusters to 1.7.x (and ideally, to other 1.6.x releases) for the 1.7.0 release.
There is a separate effort to support a `kubeadm upgrade` subcommand that relies on self-hosted clusters, but self-hosting was not the default in 1.6, and implementation of that subcommand has slipped this release cycle anyway. We need steps that users can follow to upgrade their existing non-selfhosted clusters, which may involve new code changes or tooling, or may just be a document detailing the manual steps. This doesn't need to be the long-term plan for upgrades, just enough to cover us for the 1.7 release.