diff --git a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
index 3bc02f47edb1f..11f932a4f12e7 100644
--- a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
+++ b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
@@ -857,7 +857,7 @@ kube-apiserver [flags]
--storage-backend string |
- | The storage backend for persistence. Options: 'etcd3' (default), 'etcd2'. |
+ | The storage backend for persistence. Options: 'etcd3' (default). |
diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md
index 5b1cf1d68229a..5438cd2a7dce1 100644
--- a/content/en/docs/reference/using-api/api-concepts.md
+++ b/content/en/docs/reference/using-api/api-concepts.md
@@ -86,7 +86,7 @@ For example:
}
...
-A given Kubernetes server will only preserve a historical list of changes for a limited time. Older clusters using etcd2 preserve a maximum of 1000 changes. Newer clusters using etcd3 preserve changes in the last 5 minutes by default. When the requested watch operations fail because the historical version of that resource is not available, clients must handle the case by recognizing the status code `410 Gone`, clearing their local cache, performing a list operation, and starting the watch from the `resourceVersion` returned by that new list operation. Most client libraries offer some form of standard tool for this logic. (In Go this is called a `Reflector` and is located in the `k8s.io/client-go/cache` package.)
+A given Kubernetes server will only preserve a historical list of changes for a limited time. Clusters using etcd3 preserve changes from the last 5 minutes by default. When the requested watch operations fail because the historical version of that resource is not available, clients must handle the case by recognizing the status code `410 Gone`, clearing their local cache, performing a list operation, and starting the watch from the `resourceVersion` returned by that new list operation. Most client libraries offer some form of standard tool for this logic. (In Go this is called a `Reflector` and is located in the `k8s.io/client-go/cache` package.)
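+
+The sketch below illustrates that recovery sequence with plain HTTP calls. It is illustrative only; it assumes the API is reachable through `kubectl proxy` on `localhost:8001`, that the namespace is `test`, and that `jq` is installed.
+
+```shell
+# A watch from a resourceVersion that is no longer available fails with 410 Gone.
+curl -s "http://localhost:8001/api/v1/namespaces/test/pods?watch=1&resourceVersion=10245"
+
+# Clear any local cache, then list to obtain a fresh resourceVersion.
+rv=$(curl -s "http://localhost:8001/api/v1/namespaces/test/pods" | jq -r '.metadata.resourceVersion')
+
+# Restart the watch from the resourceVersion returned by that list.
+curl -s "http://localhost:8001/api/v1/namespaces/test/pods?watch=1&resourceVersion=${rv}"
+```
+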
## Retrieving large results sets in chunks
diff --git a/content/en/docs/setup/custom-cloud/master.yaml b/content/en/docs/setup/custom-cloud/master.yaml
deleted file mode 100644
index 5b7df1bd77d70..0000000000000
--- a/content/en/docs/setup/custom-cloud/master.yaml
+++ /dev/null
@@ -1,142 +0,0 @@
-#cloud-config
-
----
-write-files:
- - path: /etc/conf.d/nfs
- permissions: '0644'
- content: |
- OPTS_RPC_MOUNTD=""
- - path: /opt/bin/wupiao
- permissions: '0755'
- content: |
- #!/bin/bash
- # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen
- [ -n "$1" ] && \
- until curl -o /dev/null -sIf http://${1}; do \
- sleep 1 && echo .;
- done;
- exit $?
-
-hostname: master
-coreos:
- etcd2:
- name: master
- listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
- advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
- initial-cluster-token: k8s_etcd
- listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001
- initial-advertise-peer-urls: http://$private_ipv4:2380
- initial-cluster: master=http://$private_ipv4:2380
- initial-cluster-state: new
- fleet:
- metadata: "role=master"
- units:
- - name: etcd2.service
- command: start
- - name: generate-serviceaccount-key.service
- command: start
- content: |
- [Unit]
- Description=Generate service-account key file
-
- [Service]
- ExecStartPre=-/usr/bin/mkdir -p /opt/bin
- ExecStart=/bin/openssl genrsa -out /opt/bin/kube-serviceaccount.key 2048 2>/dev/null
- RemainAfterExit=yes
- Type=oneshot
- - name: setup-network-environment.service
- command: start
- content: |
- [Unit]
- Description=Setup Network Environment
- Documentation=https://github.com/kelseyhightower/setup-network-environment
- Requires=network-online.target
- After=network-online.target
-
- [Service]
- ExecStartPre=-/usr/bin/mkdir -p /opt/bin
- ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
- ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
- ExecStart=/opt/bin/setup-network-environment
- RemainAfterExit=yes
- Type=oneshot
- - name: fleet.service
- command: start
- - name: flanneld.service
- command: start
- drop-ins:
- - name: 50-network-config.conf
- content: |
- [Unit]
- Requires=etcd2.service
- [Service]
- ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
- - name: docker.service
- command: start
- - name: kube-apiserver.service
- command: start
- content: |
- [Unit]
- Description=Kubernetes API Server
- Documentation=https://github.com/kubernetes/kubernetes
- Requires=setup-network-environment.service etcd2.service generate-serviceaccount-key.service
- After=setup-network-environment.service etcd2.service generate-serviceaccount-key.service
-
- [Service]
- EnvironmentFile=/etc/network-environment
- ExecStartPre=-/usr/bin/mkdir -p /opt/bin
- ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-apiserver
- ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
- ExecStartPre=/opt/bin/wupiao 127.0.0.1:2379/v2/machines
- ExecStart=/opt/bin/kube-apiserver \
- --service-account-key-file=/opt/bin/kube-serviceaccount.key \
- --service-account-lookup=false \
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
- --runtime-config=api/v1 \
- --allow-privileged=true \
- --insecure-bind-address=0.0.0.0 \
- --insecure-port=8080 \
- --kubelet-https=true \
- --secure-port=6443 \
- --service-cluster-ip-range=10.100.0.0/16 \
- --etcd-servers=http://127.0.0.1:2379 \
- --public-address-override=${DEFAULT_IPV4} \
- --logtostderr=true
- Restart=always
- RestartSec=10
- - name: kube-controller-manager.service
- command: start
- content: |
- [Unit]
- Description=Kubernetes Controller Manager
- Documentation=https://github.com/kubernetes/kubernetes
- Requires=kube-apiserver.service
- After=kube-apiserver.service
-
- [Service]
- ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-controller-manager
- ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
- ExecStart=/opt/bin/kube-controller-manager \
- --service-account-private-key-file=/opt/bin/kube-serviceaccount.key \
- --master=127.0.0.1:8080 \
- --logtostderr=true
- Restart=always
- RestartSec=10
- - name: kube-scheduler.service
- command: start
- content: |
- [Unit]
- Description=Kubernetes Scheduler
- Documentation=https://github.com/kubernetes/kubernetes
- Requires=kube-apiserver.service
- After=kube-apiserver.service
-
- [Service]
- ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-scheduler -z /opt/bin/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-scheduler
- ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
- ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
- Restart=always
- RestartSec=10
- update:
- group: alpha
- reboot-strategy: off
diff --git a/content/en/docs/setup/custom-cloud/node.yaml b/content/en/docs/setup/custom-cloud/node.yaml
deleted file mode 100644
index 9f5caff49bc3e..0000000000000
--- a/content/en/docs/setup/custom-cloud/node.yaml
+++ /dev/null
@@ -1,92 +0,0 @@
-#cloud-config
-write-files:
- - path: /opt/bin/wupiao
- permissions: '0755'
- content: |
- #!/bin/bash
- # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen
- [ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \
- --silent --head --fail \
- http://${1}:${2}; do sleep 1 && echo -n .; done;
- exit $?
-coreos:
- etcd2:
- listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
- advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
- initial-cluster: master=http://:2380
- proxy: on
- fleet:
- metadata: "role=node"
- units:
- - name: etcd2.service
- command: start
- - name: fleet.service
- command: start
- - name: flanneld.service
- command: start
- - name: docker.service
- command: start
- - name: setup-network-environment.service
- command: start
- content: |
- [Unit]
- Description=Setup Network Environment
- Documentation=https://github.com/kelseyhightower/setup-network-environment
- Requires=network-online.target
- After=network-online.target
-
- [Service]
- ExecStartPre=-/usr/bin/mkdir -p /opt/bin
- ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
- ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
- ExecStart=/opt/bin/setup-network-environment
- RemainAfterExit=yes
- Type=oneshot
- - name: kube-proxy.service
- command: start
- content: |
- [Unit]
- Description=Kubernetes Proxy
- Documentation=https://github.com/kubernetes/kubernetes
- Requires=setup-network-environment.service
- After=setup-network-environment.service
-
- [Service]
- ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-proxy -z /opt/bin/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-proxy
- ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
- # wait for kubernetes master to be up and ready
- ExecStartPre=/opt/bin/wupiao 8080
- ExecStart=/opt/bin/kube-proxy \
- --master=:8080 \
- --logtostderr=true
- Restart=always
- RestartSec=10
- - name: kube-kubelet.service
- command: start
- content: |
- [Unit]
- Description=Kubernetes Kubelet
- Documentation=https://github.com/kubernetes/kubernetes
- Requires=setup-network-environment.service
- After=setup-network-environment.service
-
- [Service]
- EnvironmentFile=/etc/network-environment
- ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kubelet
- ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
- # wait for kubernetes master to be up and ready
- ExecStartPre=/opt/bin/wupiao 8080
- ExecStart=/opt/bin/kubelet \
- --address=0.0.0.0 \
- --port=10250 \
- --hostname-override=${DEFAULT_IPV4} \
- --api-servers=:8080 \
- --allow-privileged=true \
- --logtostderr=true \
- --healthz-bind-address=0.0.0.0 \
- --healthz-port=10248
- Restart=always
- RestartSec=10
- update:
- group: alpha
- reboot-strategy: off
diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
index bd0f59d9483a4..c821ceaa48fe6 100644
--- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
+++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
@@ -201,223 +201,29 @@ If the majority of etcd members have permanently failed, the etcd cluster is con
## Upgrading and rolling back etcd clusters
-### Important assumptions
-
-The upgrade procedure described in this document assumes that either:
-
-1. The etcd cluster has only a single node.
-2. The etcd cluster has multiple nodes.
-
- In this case, the upgrade procedure requires shutting down the
- etcd cluster. During the time the etcd cluster is shut down, the Kubernetes API Server will be read only.
-
-{{< warning >}}
-**Warning**: Deviations from the assumptions are untested by continuous
-integration, and deviations might create undesirable consequences. Additional information about operating an etcd cluster is available [from the etcd maintainers](https://github.com/coreos/etcd/tree/master/Documentation).
-{{< /warning >}}
-
-### Background
-
-As of Kubernetes version 1.5.1, we are still using etcd from the 2.2.1 release with
-the v2 API. Also, we have no pre-existing process for updating etcd, as we have
-never updated etcd by either minor or major version.
-
-Note that we need to migrate both the etcd versions that we are using (from 2.2.1
-to at least 3.0.x) as well as the version of the etcd API that Kubernetes talks to. The etcd 3.0.x
-binaries support both the v2 and v3 API.
-
-This document describes how to do this migration. If you want to skip the
-background and cut right to the procedure, see [Upgrade
-Procedure](#upgrade-procedure).
-
-### etcd upgrade requirements
-
-There are requirements on how an etcd cluster upgrade can be performed. The primary considerations are:
-- Upgrade between one minor release at a time
-- Rollback supported through additional tooling
-
-#### One minor release at a time
-
-Upgrade only one minor release at a time. For example, we cannot upgrade directly from 2.1.x to 2.3.x.
-Within patch releases it is possible to upgrade and downgrade between arbitrary versions. Starting a cluster for
-any intermediate minor release, waiting until the cluster is healthy, and then
-shutting down the cluster will perform the migration. For example, to upgrade from version 2.1.x to 2.3.y,
-it is enough to start etcd in 2.2.z version, wait until it is healthy, stop it, and then start the
-2.3.y version.
-
-#### Rollback via additional tooling
-
-Versions 3.0+ of etcd do not support general rollback. That is,
-after migrating from M.N to M.N+1, there is no way to go back to M.N.
-The etcd team has provided a [custom rollback tool](https://git.k8s.io/kubernetes/cluster/images/etcd/rollback)
-but the rollback tool has these limitations:
-
-* This custom rollback tool is not part of the etcd repo and does not receive the same
- testing as the rest of etcd. We are testing it in a couple of end-to-end tests.
- There is only community support here.
-
-* The rollback can be done only from the 3.0.x version (that is using the v3 API) to the
- 2.2.1 version (that is using the v2 API).
-
-* The tool only works if the data is stored in `application/json` format.
-
-* Rollback doesn’t preserve resource versions of objects stored in etcd.
-
-{{< warning >}}
-**Warning**: If the data is not kept in `application/json` format (see [Upgrade
-Procedure](#upgrade-procedure)), you will lose the option to roll back to etcd
-2.2.
-{{< /warning >}}
-
-The last bullet means that any component or user that has some logic
-depending on resource versions may require restart after etcd rollback. This
-includes that all clients using the watch API, which depends on
-resource versions. Since both the kubelet and kube-proxy use the watch API, a
-rollback might require restarting all Kubernetes components on all nodes.
-
-{{< note >}}
-**Note**: At the time of writing, both Kubelet and KubeProxy are using “resource
-version” only for watching (i.e. are not using resource versions for anything
-else). And both are using reflector and/or informer frameworks for watching
-(i.e. they don’t send watch requests themselves). Both those frameworks if they
-can’t renew watch, they will start from “current version” by doing “list + watch
-from the resource version returned by list”. That means that if the apiserver
-will be down for the period of rollback, all of node components should basically
-restart their watches and start from “now” when apiserver is back. And it will
-be back with new resource version. That would mean that restarting node
-components is not needed. But the assumptions here may not hold forever.
-{{< /note >}}
-
-{{% /capture %}}
-
-{{% capture discussion %}}
-
-### Design
-
-This section describes how we are going to do the migration, given the
-[etcd upgrade requirements](#etcd-upgrade-requirements).
-
-Note that because the code changes in Kubernetes code needed
-to support the etcd v3 API are local and straightforward, we do not
-focus on them at all. We focus only on the upgrade/rollback here.
-
-### New etcd Docker image
-
-We decided to completely change the content of the etcd image and the way it works.
-So far, the Docker image for etcd in version X has contained only the etcd and
-etcdctl binaries.
-
-Going forward, the Docker image for etcd in version X will contain multiple
-versions of etcd. For example, the 3.0.17 image will contain the 2.2.1, 2.3.7, and
-3.0.17 binaries of etcd and etcdctl. This will allow running etcd in multiple
-different versions using the same Docker image.
-
-Additionally, the image will contain a custom script, written by the Kubernetes team,
-for doing migration between versions. The image will also contain the rollback tool
-provided by the etcd team.
-
-### Migration script
-The migration script that will be part of the etcd Docker image is a bash
-script that works as follows:
-
-1. Detect which version of etcd we were previously running.
- For that purpose, we have added a dedicated file, `version.txt`, that
- holds that information and is stored in the etcd-data-specific directory,
- next to the etcd data. If the file doesn’t exist, we default it to version 2.2.1.
-1. If we are in version 2.2.1 and are supposed to upgrade, backup
- data.
-1. Based on the detected previous etcd version and the desired one
- (communicated via environment variable), do the upgrade steps as
- needed. This means that for every minor etcd release greater than the detected one and
- less than or equal to the desired one:
- 1. Start etcd in that version.
- 1. Wait until it is healthy. Healthy means that you can write some data to it.
- 1. Stop this etcd. Note that this etcd will not listen on the default
- etcd port. It is hard coded to listen on ports that the API server is not
- configured to connect to, which means that API server won’t be able to connect
- to it. Assuming no other client goes out of its way to try to
- connect and write to this obscure port, no new data will be written during
- this period.
-1. If the desired API version is v3 and the detected version is v2, do the offline
- migration from the v2 to v3 data format. For that we use two tools:
- * ./etcdctl migrate: This is the official tool for migration provided by the etcd team.
- * A custom script that is attaching TTLs to events in the etcd. Note that etcdctl
- migrate doesn’t support TTLs.
-1. After every successful step, update contents of the version file.
- This will protect us from the situation where something crashes in the
- meantime ,and the version file gets completely unsynchronized with the
- real data. Note that it is safe if the script crashes after the step is
- done and before the file is updated. This will only result in redoing one
- step in the next try.
-
-All the previous steps are for the case where the detected version is less than or
-equal to the desired version. In the opposite case, that is for a rollback, the
-script works as follows:
-
-1. Verify that the detected version is 3.0.x with the v3 API, and the
- desired version is 2.2.1 with the v2 API. We don’t support any other rollback.
-1. If so, we run the custom tool provided by etcd team to do the offline
- rollback. This tool reads the v3 formatted data and writes it back to disk
- in v2 format.
-1. Finally update the contents of the version file.
-
-### Upgrade procedure
-Simply modify the command line in the etcd manifest to:
-
-1. Run the migration script. If the previously run version is already in the
- desired version, this will be no-op.
-1. Start etcd in the desired version.
-
-Starting in Kubernetes version 1.6, this has been done in the manifests for new
-Google Compute Engine clusters. You should also specify these environment
-variables. In particular, you must keep `STORAGE_MEDIA_TYPE` set to
-`application/json` if you wish to preserve the option to roll back.
-
-```
-TARGET_STORAGE=etcd3
-ETCD_IMAGE=3.0.17
-TARGET_VERSION=3.0.17
-STORAGE_MEDIA_TYPE=application/json
-```
-
-To roll back, use these:
-
-```
-TARGET_STORAGE=etcd2
-ETCD_IMAGE=3.0.17
-TARGET_VERSION=2.2.1
-STORAGE_MEDIA_TYPE=application/json
-```
-
-{{< note >}}
-**Note:** This procedure upgrades from 2.x to 3.x. Version `3.0.17` is not recommended for running in production (see [prerequisites](#prerequisites) for minimum recommended etcd versions).
-{{< /note >}}
-
-## Notes for etcd Version 2.2.1
-
-### Default configuration
-
-The default setup scripts use kubelet's file-based static pods feature to run etcd in a
-[pod](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/manifests/etcd.manifest). This manifest should only
-be run on master VMs. The default location that kubelet scans for manifests is
-`/etc/kubernetes/manifests/`.
-
-### Kubernetes's usage of etcd
-
-By default, Kubernetes objects are stored under the `/registry` key in etcd.
-This path can be prefixed by using the [kube-apiserver](/docs/admin/kube-apiserver) flag
-`--etcd-prefix="/foo"`.
-
-`etcd` is the only place that Kubernetes keeps state.
-
-### Troubleshooting
-
-To test whether `etcd` is running correctly, you can try writing a value to a
-test key. On your master VM (or somewhere with firewalls configured such that
-you can talk to your cluster's etcd), try:
-
-```shell
-curl -X PUT "http://${host}:${port}/v2/keys/_test"
-```
+As of Kubernetes v1.13.0, etcd2 is no longer supported as a storage backend for
+new or existing Kubernetes clusters. The timeline for Kubernetes support for
+etcd2 and etcd3 is as follows:
+
+- Kubernetes v1.0: etcd2 only
+- Kubernetes v1.5.1: etcd3 support added, new clusters still default to etcd2
+- Kubernetes v1.6.0: new clusters created with `kube-up.sh` default to etcd3,
+ and `kube-apiserver` defaults to etcd3
+- Kubernetes v1.9.0: deprecation of etcd2 storage backend announced
+- Kubernetes v1.13.0: etcd2 storage backend removed, `kube-apiserver` will
+ refuse to start with `--storage-backend=etcd2`, with the
+ message `etcd2 is no longer a supported storage backend`
+
+Before upgrading a v1.12.x kube-apiserver using `--storage-backend=etcd2` to
+v1.13.x, etcd v2 data MUST be migrated to the v3 storage backend, and
+kube-apiserver invocations changed to use `--storage-backend=etcd3`.
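+
+As a rough sketch only, and not a complete procedure, this conversion involves
+an offline migration of the etcd data followed by the flag change. The data
+directory and etcd endpoint below are hypothetical placeholders:
+
+```shell
+# Offline conversion of v2-format data to the v3 store, using etcd's own
+# `etcdctl migrate` tool (etcd 3.x), run against a stopped member's data directory.
+ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd
+
+# Then restart kube-apiserver pointing at the same etcd servers, but with the
+# etcd3 storage backend (all other flags unchanged).
+kube-apiserver --storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379
+```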
+
+The process for migrating from etcd2 to etcd3 is highly dependent on how the
+etcd cluster was deployed and configured, as well as how the Kubernetes
+cluster was deployed and configured. We recommend that you consult your cluster
+provider's documentation to see if there is a predefined solution.
+
+If your cluster was created via `kube-up.sh` and is still using etcd2 as its
+storage backend, consult the [Kubernetes v1.12 etcd cluster upgrade docs](https://v1-12.docs.kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#upgrading-and-rolling-back-etcd-clusters).
{{% /capture %}}