document how to upgrade a non-selfhosted kubeadm cluster #278

Closed
mikedanese opened this issue May 25, 2017 · 28 comments
Assignees: luxas, pipejakob
Labels: priority/critical-urgent (Highest priority. Must be actively worked on as someone's top priority right now.)
Milestone: v1.7

Comments

@mikedanese (Member) commented May 25, 2017

Since kubeadm was promoted to Beta in 1.6, which implies forwards-compatibility, we need a plan for cluster owners to be able to upgrade their 1.6.x kubeadm clusters to 1.7.x (and ideally, to other 1.6.x releases) for the 1.7.0 release.

There is a separate effort for supporting a kubeadm upgrade subcommand that relies on self-hosted clusters, but self-hosting was not the default in 1.6, and implementation for the subcommand has slipped this release cycle anyway. We need steps that users can follow to upgrade their existing non-selfhosted clusters, which may involve new code changes or tooling, or may just be a document detailing the manual steps.

This doesn't need to be the long-term plan for upgrades, but just enough to cover us for the 1.7 release.

mikedanese added this to the v1.7 milestone May 25, 2017
timothysc added the priority/critical-urgent label May 25, 2017
@lukemarsden commented May 25, 2017

@mikedanese do you think you would be able to write up some notes on this please? I'm happy to turn rough notes into a docs PR if that helps.

@timothysc (Member) commented May 26, 2017

Don't we need to rig up some form of --upgrade that does a basic manifest drop from kubeadm? Otherwise it's a hodge-podge of downloading YAMLs.

So here is how I upgraded. (updated)

  1. yum/apt-get update kubelet kubernetes-cni... (rebuilt from source)
  2. Ran kubeadm init --config=<see below> on a clean machine to generate new manifests for v1.7.0-alpha.4:

kind: MasterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha1
kubernetesVersion: v1.7.0-alpha.4

  3. systemctl restart kubelet (worked fine, no issues)
  4. Copied the new manifests over to the old cluster - the system updated properly
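Roughly, those steps translate into the commands below. This is only a sketch: the package-manager invocation, the config file name, and the scp step are illustrative, and steps 2 and 4 assume the manifests are generated on a separate clean machine as described above.

$ yum update -y kubelet kubectl kubernetes-cni                                    # step 1 (or apt-get on Debian/Ubuntu)
$ kubeadm init --config=kubeadm-v1.7.yaml                                         # step 2, on a clean machine, with the MasterConfiguration above saved as kubeadm-v1.7.yaml
$ systemctl restart kubelet                                                       # step 3, on the existing master
$ scp /etc/kubernetes/manifests/*.yaml <old-master>:/etc/kubernetes/manifests/    # step 4, copy the new manifests onto the old master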

@pipejakob ^ Were your tests actually testing 1.7.0?

Either way, I think we can stub out a quick upgrade by copying init and trimming it down to what's listed above.

/cc @jbeda @luxas

@mikedanese (Member, Author)

Copied over the new manifests

How did you generate new manifests?

@timothysc (Member)

How did you generate new manifests?

ran my hijacked kubeadm init on a clean machine.

@mikedanese (Member, Author)

Hmm, we might be able to make that cleaner. What if we had a kubeadm reset --partial which didn't delete the certs or etcd data?

@timothysc (Member)

Why not just copy the reset cmd, trim it down, and rename it to update/upgrade?

@luxas (Member) commented May 29, 2017

What's the plan here @timothysc @mikedanese @pipejakob?

@timothysc (Member) commented May 30, 2017

So here are my thoughts:

  1. Create upgrade instructions that include the yum update.
  2. Add a command, or sub-command as @mikedanese mentioned above, which simply backs up the existing manifests and drops in the latest (roughly sketched below).
  3. Outline pointers for where to do other upgrades, e.g. the actual CNI plugins themselves.
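Until such a command exists, step 2 could look something like this when done by hand (a sketch; the backup path is just an example):

$ cp -a /etc/kubernetes/manifests /etc/kubernetes/manifests.bak-1.6   # back up the existing static pod manifests
# ...then drop the freshly generated v1.7 manifests into /etc/kubernetes/manifests/ (from kubeadm init on a clean machine, or from a future kubeadm subcommand)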

@pipejakob (Contributor)

@pipejakob ^ Were your tests actually testing 1.7.0?

Not sure I follow the question. Are you asking if we have any automated e2e tests for 1.7, or if I was using 1.7 when I tested a self-hosted kubeadm initialization (which I mentioned in Slack)?

For the former, we have CI tests that always check out master for kubeadm and use latest for the control plane images (which now would be v1.7.0-alpha.4).

For the latter, I was using the latest stable Debian packages, so I believe 1.6.4.

@pipejakob (Contributor)

The thing that I don't particularly like about doing upgrades via a partial reset + init is that users have to remember all of the init flags they used the first time around if they want their cluster to remain the same (but upgraded). As a user, I would much prefer the UX in line with @lukemarsden's original proposal of kubeadm upgrade that retains all of the existing configuration.

Of course, with two days until code freeze, beggars can't be choosers, but I am interested in potentially prototyping in-place manifest changes to swap the version strings and modify control plane CLI args as needed.

@timothysc I don't want to step on any toes -- were your comments on the issue just comments, or were you also looking to code/own this?

@fabriziopandini (Member)

In my opinion, having a kubeadm init --only-manifests (or kubeadm upgrade manifests, or something similar) would avoid the need for a new machine and a reset, and would simplify the whole upgrade procedure.
(This could be a quick PR, included in the 1.7 timeframe, that supports this procedure.)

@pipejakob (Contributor)

I'm jumping on this today. Please let me know if there are any parallel efforts.

My first goal is to run through the kubeadm code and attempt manual upgrades to make sure I understand all of the moving parts before chiming in on which of the different approaches seems the most palatable. At first glance, if we're going with an approach of "just run kubeadm init again," then it seems like we need to wipe out not only the pod manifests but also the kubeconfig files in /etc/kubernetes (admin.conf, controller-manager.conf, kubelet.conf, scheduler.conf), because they embed the IP address and port to use when connecting to the API server, both of which can change if someone runs kubeadm init with different flags than they did originally.
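A quick way to see that embedding, assuming the default /etc/kubernetes paths:

$ grep "server:" /etc/kubernetes/*.conf    # each kubeconfig pins the apiserver address and port it was generated with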

/assign

@pipejakob (Contributor)

(Though I suppose if you specify an alternate --apiserver-advertise-address or --apiserver-bind-port that you'll need to re-run kubeadm join on the nodes, too, in order to update their kubeconfigs. This is why I'm hoping to avoid having to kubeadm init again at all and be able to ignore the usecase of changing flags.)

@pipejakob (Contributor)

As long as the apiserver address/port don't change, I've so far had good luck with this approach (which doesn't require using a separate, clean host to generate manifests that need to be manually copied over):

$ <upgrade kubelet/kubeadm/kubectl via OS packages>
$ rm /etc/kubernetes/manifests/*
$ kubeadm init --kubernetes-version v1.7.0-alpha.4 --skip-preflight-checks

The --skip-preflight-checks is definitely required to get past directories already existing and ports already being in use, and the init ultimately fails with the message:

failed to update master node - [Node "upgrade-master" is invalid: metadata.taints[1]: Duplicate value: api.Taint{Key:"node-role.kubernetes.io/master", Value:"", Effect:"NoSchedule", TimeAdded:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}}: taints must be unique by key and effect pair]

which is expected, since the master node already has the taint from the original initialization. If we make the apiconfigphase.UpdateMasterRoleLabelsAndTaints() method tolerate and ignore duplicates, that may be enough for it to exit completely cleanly.

However, even after getting this error and exiting with a return code of 1, all of the control plane pods are healthy and fully upgraded to the new version. My other nodes are intact and Ready, and existing pods look undisrupted.

@coeki commented May 31, 2017

Mmm, @pipejakob, why would you change --apiserver-advertise-address/--apiserver-bind-port if you're upgrading? (Edited, because @pipejakob made another comment ;) ) If we go through all the possible things that can change, we might as well just say: deploy a new cluster, move your jobs, and done.

Maybe that raises a point, especially with major changes like RBAC in 1.6: what is an upgrade?

  • The control plane, as in etcd plus the Kubernetes (sorry again) core components (kube-apiserver, controller-manager, scheduler, kube-dns, kube-proxy, and whatever I missed), plus the underlay packages (kubelet, kubernetes-cni, kubeadm, docker)?
  • OR the control plane + underlay as above, and everything else (the workload, so to say) running as before? Because if a major change like RBAC happens, a lot of what runs on a cluster with older configs etc. breaks.

So I have done exactly what @timothysc did: prior to 1.6, I built a 1.6.alpha control plane + underlay and just replaced them on top of a running cluster, then pointed the manifests at a newer image and restarted the kubelet. This was easy, as the static manifests for the control plane did not change much. But an add-on like Weave, which wasn't updated for RBAC, failed of course.

So, depending on how you see an upgrade, it's either kinda easy (the 1.6.4 => 1.6.5 case, probably) or harder than you think, depending on the new release (1.6.x => 1.7.0, with possibly new features or config changes that cripple running deployments).

@pipejakob (Contributor)

@coeki My point isn't that I expect cluster owners to want to change it, but quite the opposite. My point is that they may have initialized their cluster long enough ago that they have forgotten exactly what flags they used in order to get running in the first place. They might have had a special need to not use the default IP address detected, or to use a different port, etc. Now that they're ready to upgrade, if we go the route of asking them to just kubeadm init again, then they need to remember all of the same initialization flags they used the first time in order to make sure they're consistent. If they forgot that their original initialization used --apiserver-bind-port 8888, and they just try to do kubeadm init as part of the upgrade and forget to specify this flag again, we will generate an apiserver manifest that isn't compatible with all of the existing kubeconfigs set up to communicate with it before.

My stretch goal of trying to offer a kubeadm upgrade experience is so that cluster owners don't have to remember or specify any of the flags they used the first time around when they initialized their cluster, they just run kubeadm upgrade with the target version to upgrade to, and everything just works (because it minimally patches the existing manifest files instead of wiping them out entirely).
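As a very rough illustration of what "minimally patches the existing manifest files" could mean (this is not the real kubeadm upgrade implementation, just a sketch of the version-string swap; changed CLI flags would need more careful handling):

$ OLD=v1.6.4 NEW=v1.7.0
$ sed -i "s/${OLD}/${NEW}/g" /etc/kubernetes/manifests/*.yaml   # bump the image tags in place; the kubelet picks up the changed static manifests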

But, we'll see if there's enough time to throw that together before the code freeze. The solution of rm /etc/kubernetes/manifests/*; kubeadm init --skip-preflight-checks --kubernetes-version <version> seems like a very promising backup plan, regardless of whether or not we decide to bundle the rm step into a flag for kubeadm init or kubeadm reset.

@pipejakob (Contributor)

Actually, I can't think of a great reason for removing the original manifests before upgrading. There may be an edge case we want to avoid where kubelet wouldn't notice that the manifests had changed without a full removal and recreation, but I don't think that's the case (do we support OSes where you could potentially disable mtime on the hosting filesystem?). I'd like to believe that kubelet is already very resilient to cases where static manifests are changed in-place.

With a minor patch to tolerate duplicate taints, I'm seeing completely successful upgrades and downgrades between 1.6.x and 1.7.x by just doing:

$ kubeadm init --skip-preflight-checks --kubernetes-version <version>

It's possible that there could be a race condition between kubelet restarting the static pods and kubeadm's check of the control plane health, and kubeadm might erroneously report success when actually the manifest changes haven't taken effect yet. It might be useful to double check the versions when kubeadm checks the control plane health, if it doesn't already. Our upgrade documentation will very likely have other sanity-checking steps after the upgrade anyway, like running kubectl get nodes to check the node versions and kubectl version to check the API server version.
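A minimal post-upgrade sanity check could be along these lines:

$ kubectl get nodes                 # node versions should reflect the upgraded kubelets
$ kubectl version                   # the server version should match the target control plane version
$ kubectl get pods -n kube-system   # control plane and addon pods should all be Running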

Also, @coeki I forgot to address this point, but I should update the issue description: this isn't the long-term plan for kubeadm upgrades. There's a separate design doc and efforts around the proper way to support upgrades, but its implementation slipped for this release cycle. We've been discussing in the SIG Cluster Lifecycle meetings that now that kubeadm is considered Beta as of 1.6, it needs to have some sort of documented ability to upgrade kubeadm clusters to 1.7 when it gets released. So, this issue is capturing the efforts of at least having a bandaid to allow users to upgrade from 1.6.x to 1.7.x until better upgrade support is available. Hope that clears some things up.

@coeki commented May 31, 2017

@pipejakob ok, I see: reusing kubeadm init with no manifests whatsoever, but generating new ones by specifying a newer version. Probably in most cases the manifests won't change much and just get overwritten, none the wiser (well, possibly with a systemctl restart of the kubelet).

But we need to back up the old ones first (so no rm, but cp to /tmp/kube-backup or something).

What I was probably trying to say is that we need to define an upgrade strategy, rather than just a procedure, because you probably also need a path to revert to the old version if things break. This tripped us up on multiple levels during the 1.6.0 release.

So we need to retain the older versions (images, debs/rpms, and manifests).

Then your plan seems like the way to go. I'll do some tests.

@pipejakob (Contributor)

That's a good point about backups. I haven't tested thoroughly, but I believe a pre-step of:

cp /etc/kubernetes/manifests/* <backup-path>/manifests-1.6/

and a rollback plan of:

cp <backup-path>/manifests-1.6/* /etc/kubernetes/manifests/

might be all we need (although a more inclusive copy of all of /etc/kubernetes is probably a good idea).
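Along the lines of that parenthetical, a slightly more inclusive pre-step and rollback might be (paths illustrative):

$ cp -a /etc/kubernetes /etc/kubernetes.bak-1.6                         # pre-step: back up everything kubeadm wrote, not just the manifests
$ cp -a /etc/kubernetes.bak-1.6/manifests/. /etc/kubernetes/manifests/  # rollback: restore the old manifests (and other files from the backup if needed)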

@pipejakob (Contributor)

My PR to not duplicate taints is LGTMed and sitting in the submit-queue. I'm going to put together a quick-and-dirty draft of the upgrade steps so that anyone interested in helping can test them out.

@luxas (Member) commented Jun 1, 2017

@pipejakob Another concern of mine is this line. Have you tested that it works?

Basically I had time to make the certs and kubeconfig phases idempotent, but not the rest.
It seems like it's not a big deal that the third phase (control plane manifests) overwrites the files, so that's OK for now.

The fourth phase (apiconfig) will be better with your PR, but it will still .Create() RBAC rules etc. (which I suppose will fail if they already exist). The same goes for the kube-dns and kube-proxy .Create()s.

We might still be able to bugfix those .Create() calls to behave more like Apply this cycle, though...

@timothysc (Member)

Sadly, I'm out of pocket at the moment, so I'm not going to have time to really dig in for a couple of days, but I can review the instructions a little later.

@pipejakob (Contributor)

@luxas Good catch, you're totally right. I had been doing full testing, but didn't pay attention to the exit code, and the last line printed had seemed innocent enough:

configmaps "cluster-info" already exists

I had originally interpreted it as "I'm skipping this step because it already exists," but the process returns 1, so it's clearly an error (and should probably indicate so in the message). This is after the manifests get updated, though, so all of the other signs of the cluster being upgraded and healthy were green.

Expect a few more PRs from me to make these remaining steps idempotent. I think there's a good argument that this could be considered bug fixing and shouldn't be affected by the code freeze.

@luxas (Member) commented Jun 2, 2017

@pipejakob kubernetes/kubernetes#46819 is a step forward, but doesn't solve all the problems. Since the bootstrap configmap is created before the addons are applied, the addons weren't updated.

We still have to make the following lines idempotent: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/init.go#L250-L269 (and kubernetes/kubernetes#46819 is indeed a step in that direction). I mentioned this in the #sig-api-machinery Slack channel, and we might be able to make the addons phase upgrade the manifests in case they already exist.

@pipejakob (Contributor)

@luxas Totally agreed that this doesn't fully fix everything yet. I'm still working on a few more PRs to address the other steps that aren't idempotent yet (writing unit tests is taking the longest because most of this code completely lacks them).

Does the plan to make all of these steps idempotent sound solid? Do any of the steps (besides addons like you mentioned) need to have the behavior of CreateOrUpdate instead of just CreateIfNotExists? The fact that everything except addons seemed to be successfully upgraded even when kubeadm init exited early makes me think that the CreateIfNotExists behavior is fine for all of these objects (at least for upgrades from 1.6 to 1.7), but I don't know if I'm overlooking any subtleties. Once I have kubeadm init exiting cleanly, and a few steps written up for the upgrade plan, I also want to get a candidate build into the hands of others to help test and see if anything else was overlooked.

@luxas (Member) commented Jun 2, 2017

@pipejakob I'll have three PRs up for review soon... stay tuned

  1. Fix CSR groupapprover regression when deploying a v1.6.x control plane
  2. Enable the Node Authorizer on v1.7 upgrade
  3. Make the remaining operations idempotent

I have things locally in one dirty tree, now working on cleaning up, writing some small tests and sending PRs 👍

luxas self-assigned this Jun 2, 2017
pipejakob added a commit to pipejakob/kubernetes that referenced this issue Jun 5, 2017
This helps enable a graceful upgrade/downgrade process between 1.6.x and
1.7.x kubeadm clusters (although no guarantees outside of that range) by
doing:

  $ kubeadm init --kubernetes-version <version> --skip-preflight-checks

Without this change, the command fails with an error that the node taint
is duplicated.

This is part of kubernetes/kubeadm#278
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Jun 6, 2017
Automatic merge from submit-queue

kubeadm: don't duplicate master taint if it already exists.

**What this PR does / why we need it**:
This helps enable a graceful upgrade/downgrade process between 1.6.x and 1.7.x kubeadm clusters (although no guarantees outside of that range) by doing:

    $ kubeadm init --kubernetes-version <version> --skip-preflight-checks

Without this change, the command fails with an error that the node taint is duplicated.

This is part of kubernetes/kubeadm#278

**Release note**:

```release-note
NONE
```
Fixes: kubernetes/kubeadm#288
pipejakob added a commit to pipejakob/kubernetes that referenced this issue Jun 6, 2017
Ignore errors for duplicates when creating service accounts.

kubernetes/kubeadm#278
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Jun 6, 2017
Automatic merge from submit-queue (batch tested with PRs 46787, 46876, 46621, 46907, 46819)

kubeadm: Only create bootstrap configmap if not exists.

**What this PR does / why we need it**:
The fact that this method was not idempotent was breaking kubeadm upgrades.

kubernetes/kubeadm#278

**Release note**:

```release-note
NONE
```
mrIncompetent pushed a commit to kubermatic/kubernetes that referenced this issue Jun 6, 2017
peebs pushed a commit to coreos/kubernetes that referenced this issue Jun 6, 2017
Automatic merge from submit-queue (batch tested with PRs 46897, 46899, 46864, 46854, 46875)

kubeadm: Idempotent service account creation.

**What this PR does / why we need it**:
During `kubeadm init`, ignore errors for duplicates when creating service accounts.

kubernetes/kubeadm#278

Fixes: kubernetes/kubeadm#288

**Release note**:

```release-note
NONE
```
@lukemarsden

kubernetes/website#3999

pipejakob added a commit to pipejakob/kubernetes.github.io that referenced this issue Jun 20, 2017
In-place upgrades are supported between 1.6 and 1.7 releases.

Fixes kubernetes/kubeadm#278
pipejakob added a commit to pipejakob/kubernetes.github.io that referenced this issue Jun 22, 2017
In-place upgrades are supported between 1.6 and 1.7 releases. Rollback
instructions to come in a separate commit.

Fixes kubernetes/kubeadm#278
@roberthbailey (Contributor)

Fixed by kubernetes/website#3999.

dchen1107 pushed a commit to kubernetes/website that referenced this issue Jun 30, 2017
* Minor fixes in the Deployment doc

Signed-off-by: Michail Kargakis <[email protected]>

* add NodeRestriction to admission-controllers (#3842)

* Admins Can Configure Zones in Storage Class

The PR #38505 (kubernetes/kubernetes#38505) added zones optional parameter to Storage Class for AWS and GCE provisioners.

That's why documentation needs to be updated accordingly.

* document custom resource definitions

* add host paths to psp (#3971)

* add host paths to psp

* add italics

* Update ConfigMap doc to explain TTL-based cache updates (#3989)

* Update ConfigMap doc to explain TTL-based cache updates

* swap word order

Change "When a ConfigMap being already consumed..." to "When a ConfigMap already being consumed..."

* Update NetworkPolicy docs for v1

* StorageOS Volume plugin

* Update GPU docs

* docs: HPA autoscaling/v2alpha1 status conditions

This commit documents the new status conditions feature for HPA
autoscaling/v2alpha1.  It demonstrates how to get the status conditions
using `kubectl describe`, and how to interpret them.

* Update description about NodeRestriction

kubelet nodes can also create mirror pods for their own static pods.

* adding storage as a supported resource to node allocatable

Signed-off-by: Vishnu kannan <[email protected]>

* Add documentation for podpreset opt-out annotation

This adds the annotation for having the podpreset admission controller
to skip (opt-out) manipulating the pod spec.

Also, the annotation format for what presets have acted on a pod has
been modified to add a prefix of "podpreset-". The new naming makes it such
that there is no chance of collision with the newly introduced opt-out
annotation (or future ones yet to be added).

Opt-out annotation PR:
kubernetes/kubernetes#44965

* Update PDB documentation to explain new field (#3885)

* update-docs-pdb

* Addressed erictune@'s comments

* Fix title and add a TOC to the logging concept page

* Patch #4118 for typos

* Describe setting coredns server in nameserver resolv chain

* Address comments in PR #3997.

Comment is in
https://github.com/kubernetes/kubernetes.github.io/pull/3997/files/f6eb59c67e28efc298c87b1ef49a96bc6adacd1e#diff-7a14981f3dd8eb203f897ce6c11d9828

* Update task for DaemonSet history and rollback (#4098)

* Update task for DaemonSet history and rollback

Also remove mentions of templateGeneration field because it's deprecated

* Address comments

* removed lt and gt as operators (#4152)

* removed lt and gt as operators

* replace lt and gt for node-affinity

* updated based on bsalamat review

* Initial draft of upgrade guide for kubeadm clusters.

In-place upgrades are supported between 1.6 and 1.7 releases. Rollback
instructions to come in a separate commit.

Fixes kubernetes/kubeadm#278

* Add local volume documentation (#4050)

* Add local volume documentation

* Add PV local volume example

* Patch PR #3999

* Add documentation for Stackdriver event exporter

* Add documentation about controller metrics

* Federation: Add task for setting up placement policies (#4075)

* Add task for setting up placement policies

* Update version of management sidecar in policy engine deployment

* Address @nikhiljindal's comments

- Lower case filenames
- Comments in policy
- Typo fixes
- Removed type LoadBalancer from OPA Service

* Add example that sets cluster selector

Per-@nikhiljindal's suggestion

* Fix wording and templating per @chenopis

* PodDisruptionBudget documentation Improvements (#4140)

* Changes from #3885

Title: Update PDB documentation to explain new field
Author: foxish

* Added Placeholder Disruptions Concept Guide

New file: docs/concepts/workloads/pods/disruptions.md
Intended contents: concept for Pod Disruption Budget,
 cross reference to Eviction and Preemption docs.
Linked from: concepts > workloads > pods

* Added placeholder Configuring PDB Task

New file: docs/tasks/run-application/configure-pdb.md
Intended contents: task for writing a Pod Disruption Budget.
Linked from: tasks > configuring-applications > configure pdb.

* Add refs to the "drain a node" task.

* Refactor PDB docs.

Move the "Requesting an eviction" section from:
docs/tasks/administer-cluster/configure-pod-disruption-budget.md
-- which is going away -- to:
docs/tasks/administer-cluster/safely-drain-node.md

The move is verbatim, except for an introductory sentence.

Also added assignees.

* Refactor of PDB docs

Moved the section:
Specifying a PodDisruptionBudget
from:
docs/tasks/administer-cluster/configure-pod-disruption-budget.md
to:
docs/tasks/run-application/configure-pdb.md
because that former file is going away.
Move is verbatim.

* Explain how Eviction tools should handle failures

* Refactor PDB docs

Move text from:
docs/tasks/administer-cluster/configure-pod-disruption-budget.md
to:
docs/concepts/workloads/pods/disruptions.md

Delete the now empty:
docs/tasks/administer-cluster/configure-pod-disruption-budget.md

Added a redirects_from section to the new doc, containing the path
of the now-deleted doc, plus all the redirects from the deleted
doc.

* Expand PDB Concept guide

Building on a little content from the old task,
greatly expanded the Disruptions concept
guide, including an abstract example.

* Update creating a pdb Task.

* Address review comments.

* Fixed for all cody-clark's review comments

* Address review comments from mml

* Address review comments from maisem

* Fix missing backtick

* Api and Kubectl reference docs updates for 1.7 (#4193)

* Fix includes groups

* Generated kubectl docs for 1.7

* Generated references docs for 1.7 api

* Document node authorization mode

* API Aggregator (#4173)

* API Aggregator

* Additional bullet points

* incorporated feedback for apiserver-aggregation.md

* split setup-api-aggregator.md into two docs and address feedback

* fix link

* addressed docs feedback

* incorporate feedback

* integrate feedback

* Add documentation for DNS stub domains (#4063)

* Add documentation for DNS stub domains

* add additional prereq

* fix image path

* review feedback

* minor grammar and style nits

* documentation for using hostAliases to manage hosts file (#4080)

* documentation for using hostAliases to manage hosts file

* add to table of contents

* review comments

* update the right command to see hosts file

* reformat doc based on suggestion and change some wording

* Fix typo for #4080

* Patch PR #4063

* Fix wording in placement policy task introduction

* Add update to statefulset concepts and basic tutorial (#4174)

* Add update to statefulset concepts and basic tutorial

* Address tech comments.

* Update ESIPP docs for new added API fields

* Custom resource docs

* update audit document with advanced audit features added in 1.7

* kubeadm v1.7 documentation updates (#4018)

* v1.7 updates for kubeadm

* Address review comments

* Address Luke's comments

* Encrypting secrets at rest and cluster security guide

* Edits for Custom DNS Documentation (#4207)

* reorganize custom dns doc

* format fixes

* Update version numbers to 1.7

* Patch PR #4140 (#4215)

* Patch PR #4140

* fix link and typos

* Update PR template

* Update TLS bootstrapping with 1.7 features

This includes documenting the new CSR approver built into the
controller manager and the kubelet alpha features for certificate
rotation.

Since the CSR approver changed over the 1.7 release cycle we need
to call out the migration steps for those using the alpha feature.
This document as a whole could probably use some updates, but the
main focus of this PR is just to get these features minimally
documented before the release.

* Federated ClusterSelector

formatting updates from review

* complete PR #4181 (#4223)

* complete PR #4181

* fix security link

* Extensible admission controller (#4092)

* extensible-admission-controllers

* Update extensible-admission-controllers.md

* more on initializers

* fixes

* Expand external admission webhooks documentation

* wrap at 80 chars

* more

* add reference

* Use correct apigroup for network policy

* Docs changes to PR #4092 (#4224)

* Docs changes to PR #4092

* address feedback

* add doc for --as-group in cli

Add doc for this pr:
kubernetes/kubernetes#43696
jesscodez pushed a commit to kubernetes/website that referenced this issue Sep 22, 2017
In-place upgrades are supported between 1.6 and 1.7 releases. Rollback
instructions to come in a separate commit.

Fixes kubernetes/kubeadm#278
jesscodez pushed a commit to kubernetes/website that referenced this issue Sep 22, 2017