+
+
diff --git a/v1.1/app-admin-detailed.md b/v1.1/app-admin-detailed.md
new file mode 100644
index 0000000000000..ec46284dddc83
--- /dev/null
+++ b/v1.1/app-admin-detailed.md
@@ -0,0 +1,17 @@
+---
+layout: docwithnav
+title: "Application Administration: Detailed Walkthrough"
+---
+
+## {{ page.title }} ##
+
+The detailed walkthrough covers all the in-depth details and tasks for administering your applications in Kubernetes.
+
+
+Table of Contents:
+
+
+
diff --git a/v1.1/basicstutorials.md b/v1.1/basicstutorials.md
new file mode 100644
index 0000000000000..d33e3885c038d
--- /dev/null
+++ b/v1.1/basicstutorials.md
@@ -0,0 +1,17 @@
+---
+layout: docwithnav
+title: "Quick Walkthrough: Kubernetes Basics"
+---
+
+## {{ page.title }} ##
+
+Use this quick walkthrough of Kubernetes to learn about the basic application administration tasks.
+
+
+Table of Contents:
+
+
+
diff --git a/v1.1/deploy-clusters.md b/v1.1/deploy-clusters.md
new file mode 100644
index 0000000000000..f6eb48fefba7e
--- /dev/null
+++ b/v1.1/deploy-clusters.md
@@ -0,0 +1,17 @@
+---
+layout: docwithnav
+title: "Examples: Deploying Clusters"
+---
+
+## {{ page.title }} ##
+
+Use the following examples to learn how to deploy your application into a Kubernetes cluster.
+
+
+Table of Contents:
+
+
+
diff --git a/v1.1/docs/README.md b/v1.1/docs/README.md
new file mode 100644
index 0000000000000..df574178325d5
--- /dev/null
+++ b/v1.1/docs/README.md
@@ -0,0 +1,49 @@
+---
+layout: docwithnav
+title: "Kubernetes Documentation: releases.k8s.io/release-1.1"
+---
+
+
+
+
+
+# Kubernetes Documentation: releases.k8s.io/release-1.1
+
+* The [User's guide](user-guide/README.html) is for anyone who wants to run programs and
+ services on an existing Kubernetes cluster.
+
+* The [Cluster Admin's guide](admin/README.html) is for anyone setting up
+ a Kubernetes cluster or administering it.
+
+* The [Developer guide](devel/README.html) is for anyone wanting to write
+ programs that access the Kubernetes API, write plugins or extensions, or
+ modify the core code of Kubernetes.
+
+* The [Kubectl Command Line Interface](user-guide/kubectl/kubectl.html) is a detailed reference on
+ the `kubectl` CLI.
+
+* The [API object documentation](http://kubernetes.io/third_party/swagger-ui/)
+ is a detailed description of all fields found in core API objects.
+
+* An overview of the [Design of Kubernetes](design/)
+
+* There are example files and walkthroughs in the [examples](../examples/)
+ folder.
+
+* If something went wrong, see the [troubleshooting](troubleshooting.html) document for how to debug.
+You should also check the [known issues](user-guide/known-issues.html) for the release you're using.
+
+* To report a security issue, see [Reporting a Security Issue](reporting-security-issues.html).
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/README.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/README.md b/v1.1/docs/admin/README.md
new file mode 100644
index 0000000000000..def2322d6edb8
--- /dev/null
+++ b/v1.1/docs/admin/README.md
@@ -0,0 +1,58 @@
+---
+layout: docwithnav
+title: "Kubernetes Cluster Admin Guide"
+---
+
+
+
+
+
+# Kubernetes Cluster Admin Guide
+
+The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
+It assumes some familiarity with concepts in the [User Guide](../user-guide/README.html).
+
+## Admin Guide Table of Contents
+
+[Introduction](introduction.html)
+
+1. [Components of a cluster](cluster-components.html)
+ 1. [Cluster Management](cluster-management.html)
+ 1. Administrating Master Components
+ 1. [The kube-apiserver binary](kube-apiserver.html)
+ 1. [Authorization](authorization.html)
+ 1. [Authentication](authentication.html)
+ 1. [Accessing the api](accessing-the-api.html)
+ 1. [Admission Controllers](admission-controllers.html)
+ 1. [Administrating Service Accounts](service-accounts-admin.html)
+ 1. [Resource Quotas](resource-quota.html)
+ 1. [The kube-scheduler binary](kube-scheduler.html)
+ 1. [The kube-controller-manager binary](kube-controller-manager.html)
+ 1. [Administrating Kubernetes Nodes](node.html)
+ 1. [The kubelet binary](kubelet.html)
+ 1. [Garbage Collection](garbage-collection.html)
+ 1. [The kube-proxy binary](kube-proxy.html)
+ 1. Administrating Addons
+ 1. [DNS](dns.html)
+ 1. [Networking](networking.html)
+ 1. [OVS Networking](ovs-networking.html)
+ 1. Example Configurations
+ 1. [Multiple Clusters](multi-cluster.html)
+ 1. [High Availability Clusters](high-availability.html)
+ 1. [Large Clusters](cluster-large.html)
+ 1. [Getting started from scratch](../getting-started-guides/scratch.html)
+ 1. [Kubernetes's use of salt](salt.html)
+ 1. [Troubleshooting](cluster-troubleshooting.html)
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/README.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/accessing-the-api.md b/v1.1/docs/admin/accessing-the-api.md
new file mode 100644
index 0000000000000..b4113652db0c1
--- /dev/null
+++ b/v1.1/docs/admin/accessing-the-api.md
@@ -0,0 +1,91 @@
+---
+layout: docwithnav
+title: "Configuring APIserver ports"
+---
+
+
+
+
+
+# Configuring APIserver ports
+
+This document describes what ports the Kubernetes apiserver
+may serve on and how to reach them. The audience is
+cluster administrators who want to customize their cluster
+or understand the details.
+
+Most questions about accessing the cluster are covered
+in [Accessing the cluster](../user-guide/accessing-the-cluster.html).
+
+
+## Ports and IPs Served On
+
+The Kubernetes API is served by the Kubernetes apiserver process. Typically,
+there is one of these running on a single kubernetes-master node.
+
+By default the Kubernetes APIserver serves HTTP on 2 ports:
+ 1. Localhost Port
+ - serves HTTP
+ - default is port 8080, change with `--insecure-port` flag.
+     - default IP is localhost, change with `--insecure-bind-address` flag.
+     - no authentication or authorization checks are performed over HTTP
+     - protected by the need to have host access
+ 2. Secure Port
+ - default is port 6443, change with `--secure-port` flag.
+ - default IP is first non-localhost network interface, change with `--bind-address` flag.
+ - serves HTTPS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
+ - uses token-file or client-certificate based [authentication](authentication.html).
+ - uses policy-based [authorization](authorization.html).
+ 3. Removed: ReadOnly Port
+ - For security reasons, this had to be removed. Use the [service account](../user-guide/service-accounts.html) feature instead.
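+
+As a rough sketch, the defaults above correspond to apiserver flags along these lines (the
+port numbers are the documented defaults; the bind address and certificate paths are
+placeholders for your own values):
+
+{% highlight console %}
+{% raw %}
+kube-apiserver \
+  --insecure-bind-address=127.0.0.1 --insecure-port=8080 \
+  --bind-address=0.0.0.0 --secure-port=6443 \
+  --tls-cert-file=/srv/kubernetes/server.cert \
+  --tls-private-key-file=/srv/kubernetes/server.key
+{% endraw %}
+{% endhighlight %}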
+
+## Proxies and Firewall rules
+
+Additionally, in some configurations there is a proxy (nginx) running
+on the same machine as the apiserver process. The proxy serves HTTPS protected
+by Basic Auth on port 443, and proxies to the apiserver on localhost:8080. In
+these configurations the secure port is typically set to 6443.
+
+A firewall rule is typically configured to allow external HTTPS access to port 443.
+
+The above are defaults and reflect how Kubernetes is deployed to Google Compute Engine using
+kube-up.sh. Other cloud providers may vary.
+
+## Use Cases vs IP:Ports
+
+There are three differently configured serving ports because there are a
+variety of use cases:
+ 1. Clients outside of a Kubernetes cluster, such as a human running `kubectl`
+    on a desktop machine. Currently, these access the Localhost Port via a proxy (nginx)
+    running on the `kubernetes-master` machine. The proxy can use cert-based or
+    token-based authentication.
+ 2. Processes running in Containers on Kubernetes that need to read from
+ the apiserver. Currently, these can use a [service account](../user-guide/service-accounts.html).
+ 3. Scheduler and Controller-manager processes, which need to do read-write
+ API operations. Currently, these have to run on the same host as the
+ apiserver and use the Localhost Port. In the future, these will be
+ switched to using service accounts to avoid the need to be co-located.
+ 4. Kubelets, which need to do read-write API operations and are necessarily
+    on different machines than the apiserver. Kubelets use the Secure Port
+    to get their pods, to find the services that a pod can see, and to
+ write events. Credentials are distributed to kubelets at cluster
+ setup time. Kubelet and kube-proxy can use cert-based authentication or token-based
+ authentication.
+
+## Expected changes
+
+ - Policy will limit the actions kubelets can do via the authed port.
+ - Scheduler and Controller-manager will use the Secure Port too. They
+ will then be able to run on different machines than the apiserver.
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/accessing-the-api.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/admission-controllers.md b/v1.1/docs/admin/admission-controllers.md
new file mode 100644
index 0000000000000..9e5b9aa21f050
--- /dev/null
+++ b/v1.1/docs/admin/admission-controllers.md
@@ -0,0 +1,177 @@
+---
+layout: docwithnav
+title: "Admission Controllers"
+---
+
+
+
+
+
+# Admission Controllers
+
+**Table of Contents**
+
+
+- [Admission Controllers](#admission-controllers)
+ - [What are they?](#what-are-they)
+ - [Why do I need them?](#why-do-i-need-them)
+ - [How do I turn on an admission control plug-in?](#how-do-i-turn-on-an-admission-control-plug-in)
+ - [What does each plug-in do?](#what-does-each-plug-in-do)
+ - [AlwaysAdmit](#alwaysadmit)
+ - [AlwaysDeny](#alwaysdeny)
+ - [DenyExecOnPrivileged (deprecated)](#denyexeconprivileged-deprecated)
+ - [DenyEscalatingExec](#denyescalatingexec)
+ - [ServiceAccount](#serviceaccount)
+ - [SecurityContextDeny](#securitycontextdeny)
+ - [ResourceQuota](#resourcequota)
+ - [LimitRanger](#limitranger)
+ - [InitialResources (experimental)](#initialresources-experimental)
+ - [NamespaceExists (deprecated)](#namespaceexists-deprecated)
+ - [NamespaceAutoProvision (deprecated)](#namespaceautoprovision-deprecated)
+ - [NamespaceLifecycle](#namespacelifecycle)
+ - [Is there a recommended set of plug-ins to use?](#is-there-a-recommended-set-of-plug-ins-to-use)
+
+
+
+## What are they?
+
+An admission control plug-in is a piece of code that intercepts requests to the Kubernetes
+API server prior to persistence of the object, but after the request is authenticated
+and authorized. The plug-in code is in the API server process
+and must be compiled into the binary in order to be used at this time.
+
+Each admission control plug-in is run in sequence before a request is accepted into the cluster. If
+any of the plug-ins in the sequence reject the request, the entire request is rejected immediately
+and an error is returned to the end-user.
+
+Admission control plug-ins may mutate the incoming object in some cases to apply system configured
+defaults. In addition, admission control plug-ins may mutate related resources as part of request
+processing to do things like increment quota usage.
+
+## Why do I need them?
+
+Many advanced features in Kubernetes require an admission control plug-in to be enabled in order
+to properly support the feature. As a result, a Kubernetes API server that is not properly
+configured with the right set of admission control plug-ins is an incomplete server and will not
+support all the features you expect.
+
+## How do I turn on an admission control plug-in?
+
+The Kubernetes API server supports a flag, `admission-control`, that takes a comma-delimited,
+ordered list of admission control choices to invoke prior to modifying objects in the cluster.
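+
+For example, a minimal (illustrative, not recommended) invocation might enable just a couple of
+the plug-ins described below; the recommended set for Kubernetes 1.0 is shown at the end of this
+document:
+
+{% highlight console %}
+{% raw %}
+kube-apiserver --admission-control=NamespaceLifecycle,ServiceAccount ...
+{% endraw %}
+{% endhighlight %}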
+
+## What does each plug-in do?
+
+### AlwaysAdmit
+
+Use this plug-in by itself to pass through all requests.
+
+### AlwaysDeny
+
+Rejects all requests. Used for testing.
+
+### DenyExecOnPrivileged (deprecated)
+
+This plug-in will intercept all requests to exec a command in a pod if that pod has a privileged container.
+
+If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec
+commands in those containers, we strongly encourage enabling this plug-in.
+
+This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec).
+
+### DenyEscalatingExec
+
+This plug-in will deny exec and attach commands to pods that run with escalated privileges that
+allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and
+have access to the host PID namespace.
+
+If your cluster supports containers that run with escalated privileges, and you want to
+restrict the ability of end-users to exec commands in those containers, we strongly encourage
+enabling this plug-in.
+
+### ServiceAccount
+
+This plug-in implements automation for [serviceAccounts](../user-guide/service-accounts.html).
+We strongly recommend using this plug-in if you intend to make use of Kubernetes `ServiceAccount` objects.
+
+### SecurityContextDeny
+
+This plug-in will deny any pod with a [SecurityContext](../user-guide/security-context.html) that defines options that were not available on the `Container`.
+
+### ResourceQuota
+
+This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
+enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
+objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.
+
+See the [resourceQuota design doc](../design/admission_control_resource_quota.html) and the [example of Resource Quota](resourcequota/) for more details.
+
+It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
+so that quota is not prematurely incremented only for the request to be rejected later in admission control.
+
+### LimitRanger
+
+This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
+enumerated in the `LimitRange` object in a `Namespace`. If you are using `LimitRange` objects in
+your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. LimitRanger can also
+be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
+applies a 0.1 CPU requirement to all Pods in the `default` namespace.
+
+See the [limitRange design doc](../design/admission_control_limit_range.html) and the [example of Limit Range](limitrange/) for more details.
+
+### InitialResources (experimental)
+
+This plug-in observes pod creation requests. If a container omits compute resource requests and limits,
+then the plug-in auto-populates a compute resource request based on historical usage of containers running the same image.
+If there is not enough data to make a decision, the request is left unchanged.
+When the plug-in sets a compute resource request, it annotates the pod with information on what compute resources it auto-populated.
+
+See the [InitialResources proposal](../proposals/initial-resources.html) for more details.
+
+### NamespaceExists (deprecated)
+
+This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
+and reject the request if the `Namespace` was not previously created. We strongly recommend running
+this plug-in to ensure integrity of your data.
+
+The functionality of this admission controller has been merged into `NamespaceLifecycle`.
+
+### NamespaceAutoProvision (deprecated)
+
+This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
+and create a new `Namespace` if one does not already exist.
+
+We strongly recommend `NamespaceLifecycle` over `NamespaceAutoProvision`.
+
+### NamespaceLifecycle
+
+This plug-in enforces that a `Namespace` that is undergoing termination cannot have new objects created in it,
+and ensures that requests in a non-existent `Namespace` are rejected.
+
+A `Namespace` deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that
+namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in.
+
+## Is there a recommended set of plug-ins to use?
+
+Yes.
+
+For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins (order matters):
+
+```
+{% raw %}
+--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
+{% endraw %}
+```
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/admission-controllers.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/authentication.md b/v1.1/docs/admin/authentication.md
new file mode 100644
index 0000000000000..196463e0b1403
--- /dev/null
+++ b/v1.1/docs/admin/authentication.md
@@ -0,0 +1,146 @@
+---
+layout: docwithnav
+title: "Authentication Plugins"
+---
+
+
+
+
+
+# Authentication Plugins
+
+Kubernetes uses client certificates, tokens, or http basic auth to authenticate users for API calls.
+
+**Client certificate authentication** is enabled by passing the `--client-ca-file=SOMEFILE`
+option to apiserver. The referenced file must contain one or more certificate authorities
+to use to validate client certificates presented to the apiserver. If a client certificate
+is presented and verified, the common name of the subject is used as the user name for the
+request.
+
+**Token File** is enabled by passing the `--token-auth-file=SOMEFILE` option
+to apiserver. Currently, tokens last indefinitely, and the token list cannot
+be changed without restarting apiserver.
+
+The token file format is implemented in `plugin/pkg/auth/authenticator/token/tokenfile/...`
+and is a csv file with 3 columns: token, user name, user uid.
+
+When using token authentication from an http client the apiserver expects an `Authorization`
+header with a value of `Bearer SOMETOKEN`.
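+
+As an illustration (the token, user name, and uid below are made-up values, and the host is a
+placeholder), a token file entry and a matching request might look like:
+
+{% highlight console %}
+{% raw %}
+# known_tokens.csv -- columns are token,user name,user uid
+31ada4fd-adec-460c-809a-9e56ceb75269,alice,1
+
+# The client then presents the token in the Authorization header:
+curl --cacert ca.crt -H "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269" \
+  https://<apiserver-host>:6443/api/v1/namespaces
+{% endraw %}
+{% endhighlight %}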
+
+**OpenID Connect ID Token** is enabled by passing the following options to the apiserver:
+- `--oidc-issuer-url` (required) tells the apiserver where to connect to the OpenID provider. Only HTTPS scheme will be accepted.
+- `--oidc-client-id` (required) is used by apiserver to verify the audience of the token.
+A valid [ID token](http://openid.net/specs/openid-connect-core-1_0.html#IDToken) MUST have this
+client-id in its `aud` claims.
+- `--oidc-ca-file` (optional) is used by apiserver to establish and verify the secure connection
+to the OpenID provider.
+- `--oidc-username-claim` (optional, experimental) specifies which OpenID claim to use as the user name. By default, `sub`
+will be used, which should be unique and immutable under the issuer's domain. A cluster administrator can
+choose other claims, such as `email`, to use as the user name, but uniqueness and immutability are not guaranteed.
+
+Please note that this flag is still experimental until we settle more on how to handle the mapping of the OpenID user to the Kubernetes user. Thus further changes are possible.
+
+Currently, the ID token will be obtained by some third-party app. This means the app and apiserver
+MUST share the `--oidc-client-id`.
+
+Like **Token File**, when using token authentication from an http client the apiserver expects
+an `Authorization` header with a value of `Bearer SOMETOKEN`.
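+
+Putting the flags together, an apiserver configured for OpenID Connect might be started with
+something like the following sketch (the issuer URL, client ID, CA path, and claim are
+placeholders for your provider's values):
+
+{% highlight console %}
+{% raw %}
+kube-apiserver \
+  --oidc-issuer-url=https://accounts.example.com \
+  --oidc-client-id=kubernetes \
+  --oidc-ca-file=/srv/kubernetes/openid-ca.crt \
+  --oidc-username-claim=email \
+  ...
+{% endraw %}
+{% endhighlight %}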
+
+**Basic authentication** is enabled by passing the `--basic-auth-file=SOMEFILE`
+option to apiserver. Currently, the basic auth credentials last indefinitely,
+and the password cannot be changed without restarting apiserver. Note that basic
+authentication is currently supported for convenience while we finish making the
+more secure modes described above easier to use.
+
+The basic auth file format is implemented in `plugin/pkg/auth/authenticator/password/passwordfile/...`
+and is a csv file with 3 columns: password, user name, user id.
+
+When using basic authentication from an http client, the apiserver expects an `Authorization` header
+with a value of `Basic BASE64ENCODED(USER:PASSWORD)`.
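+
+For example (made-up credentials), the base64 value can be computed with standard tools:
+
+{% highlight console %}
+{% raw %}
+$ echo -n 'alice:mypassword' | base64
+YWxpY2U6bXlwYXNzd29yZA==
+$ curl --cacert ca.crt -H "Authorization: Basic YWxpY2U6bXlwYXNzd29yZA==" \
+    https://<apiserver-host>:6443/api/v1/namespaces
+{% endraw %}
+{% endhighlight %}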
+
+**Keystone authentication** is enabled by passing the `--experimental-keystone-url=`
+option to the apiserver during startup. The plugin is implemented in
+`plugin/pkg/auth/authenticator/request/keystone/keystone.go`.
+For details on how to use keystone to manage projects and users, refer to the
+[Keystone documentation](http://docs.openstack.org/developer/keystone/). Please note that
+this plugin is still experimental, which means it is subject to change.
+Please refer to the [discussion](https://github.com/kubernetes/kubernetes/pull/11798#issuecomment-129655212)
+and the [blueprint](https://github.com/kubernetes/kubernetes/issues/11626) for more details
+
+## Plugin Development
+
+We plan for the Kubernetes API server to issue tokens
+after the user has been (re)authenticated by a *bedrock* authentication
+provider external to Kubernetes. We plan to make it easy to develop modules
+that interface between Kubernetes and a bedrock authentication provider (e.g.
+github.com, google.com, enterprise directory, kerberos, etc.)
+
+## APPENDIX
+
+### Creating Certificates
+
+When using client certificate authentication, you can generate certificates manually or
+by using an existing deployment script.
+
+**Deployment script** is implemented at
+`cluster/saltbase/salt/generate-cert/make-ca-cert.sh`.
+Execute this script with two parameters: the first is the IP address of the apiserver, the second is
+a list of subject alternative names in the form `IP:<ip-address>` or `DNS:<dns-name>`.
+The script will generate three files: ca.crt, server.crt, and server.key.
+Finally, add these parameters
+`--client-ca-file=/srv/kubernetes/ca.crt`
+`--tls-cert-file=/srv/kubernetes/server.cert`
+`--tls-private-key-file=/srv/kubernetes/server.key`
+into apiserver start parameters.
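+
+A hypothetical invocation, following the parameter description above (substitute your own
+master IP and DNS names; the values and SAN formatting shown are examples only):
+
+{% highlight console %}
+{% raw %}
+cluster/saltbase/salt/generate-cert/make-ca-cert.sh \
+  10.240.0.2 "IP:10.240.0.2,DNS:kubernetes,DNS:kubernetes.default"
+{% endraw %}
+{% endhighlight %}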
+
+**easyrsa** can be used to manually generate certificates for your cluster.
+
+1. Download, unpack, and initialize the patched version of easyrsa3.
+
+ curl -L -O https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
+ tar xzf easy-rsa.tar.gz
+ cd easy-rsa-master/easyrsa3
+ ./easyrsa init-pki
+1. Generate a CA (`--batch` sets automatic mode; `--req-cn` sets the default CN to use):
+
+ ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
+1. Generate server certificate and key.
+ (build-server-full [filename]: Generate a keypair and sign locally for a client or server)
+
+ ./easyrsa --subject-alt-name="IP:${MASTER_IP}" build-server-full kubernetes-master nopass
+1. Copy `pki/ca.crt`, `pki/issued/kubernetes-master.crt`, and
+ `pki/private/kubernetes-master.key` to your directory.
+1. Remember to fill in the parameters
+ `--client-ca-file=/yourdirectory/ca.crt`
+ `--tls-cert-file=/yourdirectory/server.cert`
+ `--tls-private-key-file=/yourdirectory/server.key`
+   and add these to the apiserver start parameters.
+
+**openssl** can also be used to manually generate certificates for your cluster.
+
+1. Generate a 2048-bit ca.key:
+   `openssl genrsa -out ca.key 2048`
+1. Using ca.key, generate ca.crt (`-days` sets the certificate validity period):
+   `openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt`
+1. Generate a 2048-bit server.key:
+   `openssl genrsa -out server.key 2048`
+1. Using server.key, generate a server.csr:
+   `openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr`
+1. Using ca.key, ca.crt, and server.csr, generate the server.crt:
+   `openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt
+   -days 10000`
+1. View the certificate:
+   `openssl x509 -noout -text -in ./server.crt`
+   Finally, do not forget to fill in the same parameters and add them to the apiserver start parameters.
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/authentication.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/authorization.md b/v1.1/docs/admin/authorization.md
new file mode 100644
index 0000000000000..e0378b772c587
--- /dev/null
+++ b/v1.1/docs/admin/authorization.md
@@ -0,0 +1,159 @@
+---
+layout: docwithnav
+title: "Authorization Plugins"
+---
+
+
+
+
+
+# Authorization Plugins
+
+
+In Kubernetes, authorization happens as a separate step from authentication.
+See the [authentication documentation](authentication.html) for an
+overview of authentication.
+
+Authorization applies to all HTTP accesses on the main (secure) apiserver port.
+
+The authorization check for any request compares attributes of the context of
+the request (such as user, resource, and namespace) with access
+policies. An API call must be allowed by some policy in order to proceed.
+
+The following implementations are available, and are selected by flag:
+ - `--authorization-mode=AlwaysDeny`
+ - `--authorization-mode=AlwaysAllow`
+ - `--authorization-mode=ABAC`
+
+`AlwaysDeny` blocks all requests (used in tests).
+`AlwaysAllow` allows all requests; use if you don't need authorization.
+`ABAC` allows for user-configured authorization policy. ABAC stands for Attribute-Based Access Control.
+
+## ABAC Mode
+
+### Request Attributes
+
+A request has 5 attributes that can be considered for authorization:
+ - user (the user-string which a user was authenticated as).
+ - group (the list of group names the authenticated user is a member of).
+ - whether the request is readonly (GETs are readonly).
+ - what resource is being accessed.
+ - applies only to the API endpoints, such as
+ `/api/v1/namespaces/default/pods`. For miscellaneous endpoints, like `/version`, the
+ resource is the empty string.
+ - the namespace of the object being accessed, or the empty string if the
+ endpoint does not support namespaced objects.
+
+We anticipate adding more attributes to allow finer grained access control and
+to assist in policy management.
+
+### Policy File Format
+
+For mode `ABAC`, also specify `--authorization-policy-file=SOME_FILENAME`.
+
+The file format is [one JSON object per line](http://jsonlines.org/). There should be no enclosing list or map, just
+one map per line.
+
+Each line is a "policy object". A policy object is a map with the following properties:
+ - `user`, type string; the user-string from `--token-auth-file`. If you specify `user`, it must match the username of the authenticated user.
+ - `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user.
+ - `readonly`, type boolean, when true, means that the policy only applies to GET
+ operations.
+ - `resource`, type string; a resource from an URL, such as `pods`.
+ - `namespace`, type string; a namespace string.
+
+An unset property is the same as a property set to the zero value for its type (e.g. empty string, 0, false).
+However, unset should be preferred for readability.
+
+In the future, policies may be expressed in a JSON format, and managed via a REST
+interface.
+
+### Authorization Algorithm
+
+A request has attributes which correspond to the properties of a policy object.
+
+When a request is received, the attributes are determined. Unknown attributes
+are set to the zero value of their type (e.g. empty string, 0, false).
+
+An unset property will match any value of the corresponding
+attribute. An unset attribute will match any value of the corresponding property.
+
+The tuple of attributes is checked for a match against every policy in the policy file.
+If at least one line matches the request attributes, then the request is authorized (but may fail later validation).
+
+To permit any user to do something, write a policy with the user property unset.
+A policy with an unset namespace applies regardless of namespace.
+
+### Examples
+
+ 1. Alice can do anything: `{"user":"alice"}`
+ 2. Kubelet can read any pods: `{"user":"kubelet", "resource": "pods", "readonly": true}`
+ 3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
+ 4. Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "namespace": "projectCaribou"}`
+
+[Complete file example](http://releases.k8s.io/release-1.1/pkg/auth/authorizer/abac/example_policy_file.jsonl)
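+
+To use such a policy file, the apiserver is started with the ABAC flags described above; a rough
+sketch (the policy file path is a placeholder, and other required flags are omitted):
+
+{% highlight console %}
+{% raw %}
+kube-apiserver \
+  --authorization-mode=ABAC \
+  --authorization-policy-file=/srv/kubernetes/abac-policy.jsonl \
+  ...
+{% endraw %}
+{% endhighlight %}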
+
+### A quick note on service accounts
+
+A service account automatically generates a user. The user's name is generated according to the naming convention:
+
+```
+{% raw %}
+system:serviceaccount:<namespace>:<serviceaccountname>
+{% endraw %}
+```
+
+Creating a new namespace also causes a new service account to be created, of this form:
+
+```
+{% raw %}
+system:serviceaccount:<namespace>:default
+{% endraw %}
+```
+
+For example, if you wanted to grant the default service account in the kube-system namespace full privilege to the API, you would add this line to your policy file:
+
+{% highlight json %}
+{% raw %}
+{"user":"system:serviceaccount:kube-system:default"}
+{% endraw %}
+{% endhighlight %}
+
+The apiserver will need to be restarted to pick up the new policy lines.
+
+## Plugin Development
+
+Other implementations can be developed fairly easily.
+The APIserver calls the Authorizer interface:
+
+{% highlight go %}
+{% raw %}
+type Authorizer interface {
+ Authorize(a Attributes) error
+}
+{% endraw %}
+{% endhighlight %}
+
+to determine whether or not to allow each API action.
+
+An authorization plugin is a module that implements this interface.
+Authorization plugin code goes in `pkg/auth/authorizer/$MODULENAME`.
+
+An authorization module can be completely implemented in go, or can call out
+to a remote authorization service. Authorization modules can implement
+their own caching to reduce the cost of repeated authorization calls with the
+same or similar arguments. Developers should then consider the interaction between
+caching and revocation of permissions.
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/authorization.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/cluster-components.md b/v1.1/docs/admin/cluster-components.md
new file mode 100644
index 0000000000000..48c4148e1a0e4
--- /dev/null
+++ b/v1.1/docs/admin/cluster-components.md
@@ -0,0 +1,136 @@
+---
+layout: docwithnav
+title: "Kubernetes Cluster Admin Guide: Cluster Components"
+---
+
+
+
+
+
+# Kubernetes Cluster Admin Guide: Cluster Components
+
+This document outlines the various binary components that need to run to
+deliver a functioning Kubernetes cluster.
+
+## Master Components
+
+Master components are those that provide the cluster's control plane. For
+example, master components are responsible for making global decisions about the
+cluster (e.g., scheduling), and detecting and responding to cluster events
+(e.g., starting up a new pod when a replication controller's 'replicas' field is
+unsatisfied).
+
+Master components could in theory be run on any node in the cluster. However,
+for simplicity, current setup scripts typically start all master components on
+the same VM, and do not run user containers on this VM. See
+[high-availability.md](high-availability.html) for an example multi-master-VM setup.
+
+Even in the future, when Kubernetes is fully self-hosting, it will probably be
+wise to only allow master components to schedule on a subset of nodes, to limit
+co-running with user-run pods, reducing the possible scope of a
+node-compromising security exploit.
+
+### kube-apiserver
+
+[kube-apiserver](kube-apiserver.html) exposes the Kubernetes API; it is the front-end for the
+Kubernetes control plane. It is designed to scale horizontally (i.e., one scales
+it by running more of them; see [high-availability.md](high-availability.html)).
+
+### etcd
+
+[etcd](etcd.html) is used as Kubernetes' backing store. All cluster data is stored here.
+Proper administration of a Kubernetes cluster includes a backup plan for etcd's
+data.
+
+### kube-controller-manager
+
+[kube-controller-manager](kube-controller-manager.html) is a binary that runs controllers, which are the
+background threads that handle routine tasks in the cluster. Logically, each
+controller is a separate process, but to reduce the number of moving pieces in
+the system, they are all compiled into a single binary and run in a single
+process.
+
+These controllers include:
+
+* Node Controller
+ * Responsible for noticing & responding when nodes go down.
+* Replication Controller
+ * Responsible for maintaining the correct number of pods for every replication
+ controller object in the system.
+* Endpoints Controller
+  * Populates the Endpoints object (i.e., joins Services & Pods).
+* Service Account & Token Controllers
+ * Create default accounts and API access tokens for new namespaces.
+* ... and others.
+
+### kube-scheduler
+
+[kube-scheduler](kube-scheduler.html) watches newly created pods that have no node assigned, and
+selects a node for them to run on.
+
+### addons
+
+Addons are pods and services that implement cluster features. They don't run on
+the master VM, but currently the default setup scripts that make the API calls
+to create these pods and services do run on the master VM. See:
+[kube-master-addons](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
+
+Addon objects are created in the "kube-system" namespace.
+
+Example addons are:
+* [DNS](http://releases.k8s.io/release-1.1/cluster/addons/dns/) provides cluster local DNS.
+* [kube-ui](http://releases.k8s.io/release-1.1/cluster/addons/kube-ui/) provides a graphical UI for the
+ cluster.
+* [fluentd-elasticsearch](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/) provides
+ log storage. Also see the [gcp version](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-gcp/).
+* [cluster-monitoring](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/) provides
+ monitoring for the cluster.
+
+## Node components
+
+Node components run on every node, maintaining running pods and providing them
+the Kubernetes runtime environment.
+
+### kubelet
+
+[kubelet](kubelet.html) is the primary node agent. It:
+* Watches for pods that have been assigned to its node (either by apiserver
+ or via local configuration file) and:
+ * Mounts the pod's required volumes
+ * Downloads the pod's secrets
+  * Runs the pod's containers via docker (or, experimentally, rkt).
+ * Periodically executes any requested container liveness probes.
+ * Reports the status of the pod back to the rest of the system, by creating a
+ "mirror pod" if necessary.
+* Reports the status of the node back to the rest of the system.
+
+### kube-proxy
+
+[kube-proxy](kube-proxy.html) enables the Kubernetes service abstraction by maintaining
+network rules on the host and performing connection forwarding.
+
+### docker
+
+`docker` is of course used for actually running containers.
+
+### rkt
+
+`rkt` is supported experimentally as an alternative to docker.
+
+### supervisord
+
+`supervisord` is a lightweight process babysitting system for keeping kubelet and docker
+running.
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-components.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/cluster-large.md b/v1.1/docs/admin/cluster-large.md
new file mode 100644
index 0000000000000..73ece08b56663
--- /dev/null
+++ b/v1.1/docs/admin/cluster-large.md
@@ -0,0 +1,86 @@
+---
+layout: docwithnav
+title: "Kubernetes Large Cluster"
+---
+
+
+
+
+
+# Kubernetes Large Cluster
+
+## Support
+
+At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and 1-2 containers per pod.
+
+## Setup
+
+A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
+
+Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/release-1.1/cluster/gce/config-default.sh)).
+
+Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
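+
+As a sketch, assuming the platform's `config-default.sh` takes the value from the environment
+(otherwise, edit the file directly), the node count can be set before running the cluster
+turn-up script:
+
+{% highlight console %}
+{% raw %}
+# Example only: request a 50-node cluster
+export NUM_MINIONS=50
+cluster/kube-up.sh
+{% endraw %}
+{% endhighlight %}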
+
+When setting up a large Kubernetes cluster, the following issues must be considered.
+
+### Quota Issues
+
+To avoid running into cloud provider quota issues, when creating a cluster with many nodes, consider:
+* Increasing the quota for things like CPU, IPs, etc.
+ * In [GCE, for example,](https://cloud.google.com/compute/docs/resource-quotas) you'll want to increase the quota for:
+ * CPUs
+ * VM instances
+ * Total persistent disk reserved
+ * In-use IP addresses
+ * Firewall Rules
+ * Forwarding rules
+ * Routes
+ * Target pools
+* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
+
+### Addon Resources
+
+To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/release-1.1/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
+
+For example:
+
+{% highlight yaml %}
+{% raw %}
+containers:
+ - image: gcr.io/google_containers/heapster:v0.15.0
+ name: heapster
+ resources:
+ limits:
+ cpu: 100m
+ memory: 200Mi
+{% endraw %}
+{% endhighlight %}
+
+These limits, however, are based on data collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
+
+To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
+* Scale memory and CPU limits for each of the following addons, if used, along with the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
+ * Heapster ([GCM/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
+ * [InfluxDB and Grafana](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
+ * [skydns, kube2sky, and dns etcd](http://releases.k8s.io/release-1.1/cluster/addons/dns/skydns-rc.yaml.in)
+ * [Kibana](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
+* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
+ * [elasticsearch](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
+* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
+ * [FluentD with ElasticSearch Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
+ * [FluentD with GCP Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
+
+For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](../user-guide/compute-resources.html#troubleshooting).
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-large.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/cluster-management.md b/v1.1/docs/admin/cluster-management.md
new file mode 100644
index 0000000000000..6de4e221ad14a
--- /dev/null
+++ b/v1.1/docs/admin/cluster-management.md
@@ -0,0 +1,221 @@
+---
+layout: docwithnav
+title: "Cluster Management"
+---
+
+
+
+
+
+# Cluster Management
+
+This document describes several topics related to the lifecycle of a cluster: creating a new cluster,
+upgrading your cluster's
+master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
+running cluster.
+
+## Creating and configuring a Cluster
+
+To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](../../docs/getting-started-guides/README.html) depending on your environment.
+
+## Upgrading a cluster
+
+The current state of cluster upgrades is provider dependent.
+
+### Master Upgrades
+
+Both Google Container Engine (GKE) and
+Compute Engine Open Source (GCE-OSS) support node upgrades via a [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/).
+Managed Instance Group upgrades sequentially delete and recreate each virtual machine, while maintaining the same
+Persistent Disk (PD) to ensure that data is retained across the upgrade.
+
+In contrast, the `kube-push.sh` process used on [other platforms](#other-platforms) attempts to upgrade the binaries in
+place, without recreating the virtual machines.
+
+### Node Upgrades
+
+Node upgrades for GKE and GCE-OSS again use a Managed Instance Group; each node is sequentially destroyed and then recreated with new software. Any Pods that are running
+on that node need to be controlled by a Replication Controller, or manually re-created after the roll out.
+
+For other platforms, `kube-push.sh` is again used, performing an in-place binary upgrade on existing machines.
+
+### Upgrading Google Container Engine (GKE)
+
+Google Container Engine automatically updates master components (e.g. `kube-apiserver`, `kube-scheduler`) to the latest
+version. It also handles upgrading the operating system and other components that the master runs on.
+
+The node upgrade process is user-initiated and is described in the [GKE documentation.](https://cloud.google.com/container-engine/docs/clusters/upgrade)
+
+### Upgrading open source Google Compute Engine clusters
+
+Upgrades on open source Google Compute Engine (GCE) clusters are controlled by the ```cluster/gce/upgrade.sh``` script.
+
+Get its usage by running `cluster/gce/upgrade.sh -h`.
+
+For example, to upgrade just your master to a specific version (v1.0.2):
+
+{% highlight console %}
+{% raw %}
+cluster/gce/upgrade.sh -M v1.0.2
+{% endraw %}
+{% endhighlight %}
+
+Alternatively, to upgrade your entire cluster to the latest stable release:
+
+{% highlight console %}
+{% raw %}
+cluster/gce/upgrade.sh release/stable
+{% endraw %}
+{% endhighlight %}
+
+### Other platforms
+
+The `cluster/kube-push.sh` script will do a rudimentary update. This process is still quite experimental; we
+recommend testing the upgrade on an experimental cluster before performing the update on a production cluster.
+
+## Resizing a cluster
+
+If your cluster runs short on resources you can easily add more machines to it if your cluster is running in [Node self-registration mode](node.html#self-registration-of-nodes).
+If you're using GCE or GKE, you can do this by resizing the Instance Group managing your Nodes. This can be accomplished by modifying the number of instances on the `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using the gcloud CLI:
+
+```
+{% raw %}
+gcloud compute instance-groups managed --zone compute-zone resize my-cluster-minion-group --new-size 42
+{% endraw %}
+```
+
+The Instance Group will take care of putting the appropriate image on new machines and starting them, while the Kubelet will register its Node with the API server to make it available for scheduling. If you scale the instance group down, the system will randomly choose Nodes to kill.
+
+In other environments you may need to configure the machine yourself and tell the Kubelet on which machine the API server is running.
+
+
+### Horizontal auto-scaling of nodes (GCE)
+
+If you are using GCE, you can configure your cluster so that the number of nodes will be automatically scaled based on their CPU and memory utilization.
+Before setting up the cluster by ```kube-up.sh```, you can set ```KUBE_ENABLE_NODE_AUTOSCALER``` environment variable to ```true``` and export it.
+The script will create an autoscaler for the instance group managing your nodes.
+
+The autoscaler will try to maintain the average CPU and memory utilization of nodes within the cluster close to the target value.
+The target value can be configured by ```KUBE_TARGET_NODE_UTILIZATION``` environment variable (default: 0.7) for ``kube-up.sh`` when creating the cluster.
+The node utilization is the total node's CPU/memory usage (OS + k8s + user load) divided by the node's capacity.
+If the desired numbers of nodes in the cluster resulting from CPU utilization and memory utilization are different,
+the autoscaler will choose the bigger number.
+The number of nodes in the cluster set by the autoscaler will be limited from ```KUBE_AUTOSCALER_MIN_NODES``` (default: 1)
+to ```KUBE_AUTOSCALER_MAX_NODES``` (default: the initial number of nodes in the cluster).
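+
+A sketch of turning this on at cluster creation time (the values are examples only):
+
+{% highlight console %}
+{% raw %}
+# Export before running kube-up.sh
+export KUBE_ENABLE_NODE_AUTOSCALER=true
+export KUBE_TARGET_NODE_UTILIZATION=0.8
+export KUBE_AUTOSCALER_MIN_NODES=3
+export KUBE_AUTOSCALER_MAX_NODES=10
+cluster/kube-up.sh
+{% endraw %}
+{% endhighlight %}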
+
+The autoscaler is implemented as a Compute Engine Autoscaler.
+The initial values of the autoscaler parameters set by ``kube-up.sh`` and some more advanced options can be tweaked on
+`Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com)
+or using gcloud CLI:
+
+```
+{% raw %}
+gcloud preview autoscaler --zone compute-zone
+{% endraw %}
+```
+
+Note that autoscaling will work properly only if node metrics are accessible in Google Cloud Monitoring.
+To make the metrics accessible, you need to create your cluster with ```KUBE_ENABLE_CLUSTER_MONITORING```
+equal to ```google``` or ```googleinfluxdb``` (```googleinfluxdb``` is the default value).
+
+## Maintenance on a Node
+
+If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardware repair, etc.), and the downtime is
+brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer,
+then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding
+replication controller, then a new copy of the pod will be started on a different node. So, in the case where all
+pods are replicated, upgrades can be done without special coordination, assuming that not all nodes will go down at the same time.
+
+If you want more control over the upgrading process, you may use the following workflow:
+
+Mark the node to be rebooted as unschedulable:
+
+{% highlight console %}
+{% raw %}
+kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
+{% endraw %}
+{% endhighlight %}
+
+This keeps new pods from landing on the node while you are trying to get them off.
+
+Get the pods off the machine, via any of the following strategies:
+ * Wait for finite-duration pods to complete.
+ * Delete pods with:
+
+{% highlight console %}
+{% raw %}
+kubectl delete pods $PODNAME
+{% endraw %}
+{% endhighlight %}
+
+For pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
+
+For pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
+
+Perform maintenance work on the node.
+
+Make the node schedulable again:
+
+{% highlight console %}
+{% raw %}
+kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'
+{% endraw %}
+{% endhighlight %}
+
+If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
+be created automatically when you create a new VM instance (if you're using a cloud provider that supports
+node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](node.html) for more details.
+
+## Advanced Topics
+
+### Upgrading to a different API version
+
+When a new API version is released, you may need to upgrade a cluster to support the new API version (e.g. switching from 'v1' to 'v2' when 'v2' is launched).
+
+This is an infrequent event, but it requires careful management. There is a sequence of steps to upgrade to a new API version.
+
+ 1. Turn on the new api version.
+ 1. Upgrade the cluster's storage to use the new version.
+ 1. Upgrade all config files. Identify users of the old API version endpoints.
+ 1. Update existing objects in the storage to new version by running `cluster/update-storage-objects.sh`.
+ 1. Turn off the old API version.
+
+### Turn on or off an API version for your cluster
+
+Specific API versions can be turned on or off by passing the `--runtime-config=api/<version>` flag while bringing up the API server. For example: to turn off v1 API, pass `--runtime-config=api/v1=false`.
+runtime-config also supports 2 special keys: api/all and api/legacy to control all and legacy APIs respectively.
+For example, for turning off all api versions except v1, pass `--runtime-config=api/all=false,api/v1=true`.
+For the purposes of these flags, _legacy_ APIs are those APIs which have been explicitly deprecated (e.g. `v1beta3`).
+
+### Switching your cluster's storage API version
+
+The objects that are stored to disk for a cluster's internal representation of the Kubernetes resources active in the cluster are written using a particular version of the API.
+When the supported API changes, these objects may need to be rewritten in the newer API. Failure to do this will eventually result in resources that are no longer decodable or usable
+by the kubernetes API server.
+
+The `KUBE_API_VERSIONS` environment variable for the `kube-apiserver` binary controls the API versions that are supported in the cluster. The first version in the list is used as the cluster's storage version. Hence, to set a specific version as the storage version, bring it to the front of the list of versions in the value of `KUBE_API_VERSIONS`. You need to restart the `kube-apiserver` binary
+for changes to this variable to take effect.
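+
+For example, a sketch assuming the variable takes a comma-separated list of versions, with the
+storage version listed first:
+
+{% highlight console %}
+{% raw %}
+# Example only: make v1 the storage version while still serving extensions/v1beta1
+KUBE_API_VERSIONS=v1,extensions/v1beta1 kube-apiserver ...
+{% endraw %}
+{% endhighlight %}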
+
+### Switching your config files to a new API version
+
+You can use the `kube-version-change` utility to convert config files between different API versions.
+
+{% highlight console %}
+{% raw %}
+$ hack/build-go.sh cmd/kube-version-change
+$ _output/local/go/bin/kube-version-change -i myPod.v1beta3.yaml -o myPod.v1.yaml
+{% endraw %}
+{% endhighlight %}
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-management.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/cluster-troubleshooting.md b/v1.1/docs/admin/cluster-troubleshooting.md
new file mode 100644
index 0000000000000..b2cec3288085c
--- /dev/null
+++ b/v1.1/docs/admin/cluster-troubleshooting.md
@@ -0,0 +1,132 @@
+---
+layout: docwithnav
+title: "Cluster Troubleshooting"
+---
+
+
+
+
+
+# Cluster Troubleshooting
+
+This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
+problem you are experiencing. See
+the [application troubleshooting guide](../user-guide/application-troubleshooting.html) for tips on application debugging.
+You may also visit [troubleshooting document](../troubleshooting.html) for more information.
+
+## Listing your cluster
+
+The first thing to debug in your cluster is whether your nodes are all registered correctly.
+
+Run
+
+{% highlight sh %}
+{% raw %}
+kubectl get nodes
+{% endraw %}
+{% endhighlight %}
+
+And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.
+
+## Looking at logs
+
+For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
+of the relevant log files. (note that on systemd-based systems, you may need to use `journalctl` instead)
+
+### Master
+
+ * /var/log/kube-apiserver.log - API Server, responsible for serving the API
+ * /var/log/kube-scheduler.log - Scheduler, responsible for making scheduling decisions
+ * /var/log/kube-controller-manager.log - Controller that manages replication controllers
+
+### Worker Nodes
+
+ * /var/log/kubelet.log - Kubelet, responsible for running containers on the node
+ * /var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing
+
+## A general overview of cluster failure modes
+
+This is an incomplete list of things that could go wrong, and how to adjust your cluster setup to mitigate the problems.
+
+Root causes:
+ - VM(s) shutdown
+ - Network partition within cluster, or between cluster and users
+ - Crashes in Kubernetes software
+ - Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
+ - Operator error, e.g. misconfigured Kubernetes software or application software
+
+Specific scenarios:
+ - Apiserver VM shutdown or apiserver crashing
+ - Results
+     - unable to stop, update, or start new pods, services, or replication controllers
+ - existing pods and services should continue to work normally, unless they depend on the Kubernetes API
+ - Apiserver backing storage lost
+ - Results
+ - apiserver should fail to come up
+ - kubelets will not be able to reach it but will continue to run the same pods and provide the same service proxying
+ - manual recovery or recreation of apiserver state necessary before apiserver is restarted
+ - Supporting services (node controller, replication controller manager, scheduler, etc) VM shutdown or crashes
+ - currently those are colocated with the apiserver, and their unavailability has similar consequences as apiserver
+ - in future, these will be replicated as well and may not be co-located
+ - they do not have their own persistent state
+ - Individual node (VM or physical machine) shuts down
+ - Results
+ - pods on that Node stop running
+ - Network partition
+ - Results
+ - partition A thinks the nodes in partition B are down; partition B thinks the apiserver is down. (Assuming the master VM ends up in partition A.)
+ - Kubelet software fault
+ - Results
+ - crashing kubelet cannot start new pods on the node
+ - kubelet might delete the pods or not
+ - node marked unhealthy
+ - replication controllers start new pods elsewhere
+ - Cluster operator error
+ - Results
+ - loss of pods, services, etc
+    - loss of apiserver backing store
+ - users unable to read API
+ - etc.
+
+Mitigations:
+- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
+ - Mitigates: Apiserver VM shutdown or apiserver crashing
+ - Mitigates: Supporting services VM shutdown or crashes
+
+- Action: Use IaaS provider's reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
+ - Mitigates: Apiserver backing storage lost
+
+- Action: Use (experimental) [high-availability](high-availability.html) configuration
+ - Mitigates: Master VM shutdown or master components (scheduler, API server, controller-managing) crashing
+ - Will tolerate one or more simultaneous node or component failures
+ - Mitigates: Apiserver backing storage (i.e., etcd's data directory) lost
+ - Assuming you used clustered etcd.
+
+- Action: Snapshot apiserver PDs/EBS-volumes periodically
+ - Mitigates: Apiserver backing storage lost
+ - Mitigates: Some cases of operator error
+ - Mitigates: Some cases of Kubernetes software fault
+
+- Action: use replication controller and services in front of pods
+ - Mitigates: Node shutdown
+ - Mitigates: Kubelet software fault
+
+- Action: applications (containers) designed to tolerate unexpected restarts
+ - Mitigates: Node shutdown
+ - Mitigates: Kubelet software fault
+
+- Action: [Multiple independent clusters](multi-cluster.html) (and avoid making risky changes to all clusters at once)
+ - Mitigates: Everything listed above.
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-troubleshooting.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/daemon.yaml b/v1.1/docs/admin/daemon.yaml
new file mode 100644
index 0000000000000..c5cd14a5921ec
--- /dev/null
+++ b/v1.1/docs/admin/daemon.yaml
@@ -0,0 +1,18 @@
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+ name: prometheus-node-exporter
+spec:
+ template:
+ metadata:
+ name: prometheus-node-exporter
+ labels:
+ daemon: prom-node-exp
+ spec:
+ containers:
+ - name: c
+ image: prom/prometheus
+ ports:
+ - containerPort: 9090
+ hostPort: 9090
+ name: serverport
diff --git a/v1.1/docs/admin/daemons.md b/v1.1/docs/admin/daemons.md
new file mode 100644
index 0000000000000..02ae58103de1d
--- /dev/null
+++ b/v1.1/docs/admin/daemons.md
@@ -0,0 +1,210 @@
+---
+layout: docwithnav
+title: "Daemon Sets"
+---
+
+
+
+
+
+# Daemon Sets
+
+**Table of Contents**
+
+
+- [Daemon Sets](#daemon-sets)
+ - [What is a _Daemon Set_?](#what-is-a-daemon-set)
+ - [Writing a DaemonSet Spec](#writing-a-daemonset-spec)
+ - [Required Fields](#required-fields)
+ - [Pod Template](#pod-template)
+ - [Pod Selector](#pod-selector)
+ - [Running Pods on Only Some Nodes](#running-pods-on-only-some-nodes)
+ - [How Daemon Pods are Scheduled](#how-daemon-pods-are-scheduled)
+ - [Communicating with DaemonSet Pods](#communicating-with-daemonset-pods)
+ - [Updating a DaemonSet](#updating-a-daemonset)
+ - [Alternatives to Daemon Set](#alternatives-to-daemon-set)
+ - [Init Scripts](#init-scripts)
+ - [Bare Pods](#bare-pods)
+ - [Static Pods](#static-pods)
+ - [Replication Controller](#replication-controller)
+ - [Caveats](#caveats)
+
+
+
+## What is a _Daemon Set_?
+
+A _Daemon Set_ ensures that all (or some) nodes run a copy of a pod. As nodes are added to the
+cluster, pods are added to them. As nodes are removed from the cluster, those pods are garbage
+collected. Deleting a Daemon Set will clean up the pods it created.
+
+Some typical uses of a Daemon Set are:
+
+- running a cluster storage daemon, such as `glusterd`, `ceph`, on each node.
+- running a logs collection daemon on every node, such as `fluentd` or `logstash`.
+- running a node monitoring daemon on every node, such as [Prometheus Node Exporter](
+ https://github.com/prometheus/node_exporter), `collectd`, New Relic agent, or Ganglia `gmond`.
+
+In a simple case, one Daemon Set, covering all nodes, would be used for each type of daemon.
+A more complex setup might use multiple DaemonSets for a single type of daemon, but with
+different flags and/or different memory and CPU requests for different hardware types.
+
+## Writing a DaemonSet Spec
+
+### Required Fields
+
+As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and `metadata` fields. For
+general information about working with config files, see [here](../user-guide/simple-yaml.html),
+[here](../user-guide/configuring-containers.html), and [here](../user-guide/working-with-resources.html).
+
+A DaemonSet also needs a [`.spec`](../devel/api-conventions.html#spec-and-status) section.
+
+### Pod Template
+
+The `.spec.template` is the only required field of the `.spec`.
+
+The `.spec.template` is a [pod template](../user-guide/replication-controller.html#pod-template).
+It has exactly the same schema as a [pod](../user-guide/pods.html), except
+it is nested and does not have an `apiVersion` or `kind`.
+
+In addition to required fields for a pod, a pod template in a DaemonSet has to specify appropriate
+labels (see [pod selector](#pod-selector)).
+
+A pod template in a DaemonSet must have a [`RestartPolicy`](../user-guide/pod-states.html)
+ equal to `Always`, or be unspecified, which defaults to `Always`.
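+
+A minimal DaemonSet manifest is sketched below. It mirrors the `daemon.yaml` example that accompanies this guide; the container name, image, labels, and ports are illustrative:
+
+{% highlight yaml %}
+{% raw %}
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+  name: prometheus-node-exporter
+spec:
+  template:
+    metadata:
+      labels:
+        daemon: prom-node-exp
+    spec:
+      containers:
+      - name: c
+        image: prom/prometheus
+        ports:
+        - containerPort: 9090
+          hostPort: 9090
+{% endraw %}
+{% endhighlight %}
+
+You can create it with `kubectl create -f daemon.yaml`.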
+
+### Pod Selector
+
+The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
+a [ReplicationController](../user-guide/replication-controller.html) or
+[Job](../user-guide/jobs.html).
+
+If the `.spec.selector` is specified, it must match the `.spec.template.metadata.labels`. If not
+specified, they default to being equal. Config with these two not matching will be rejected by the API.
+
+Also, you should not normally create any pods whose labels match this selector, either directly, via
+another DaemonSet, or via another controller such as a ReplicationController. Otherwise, the DaemonSet
+controller will think that those pods were created by it. Kubernetes will not stop you from doing
+this. One case where you might want to do this is to manually create a pod with different values on
+a node for testing.
+
+### Running Pods on Only Some Nodes
+
+If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
+create pods on nodes which match that [node
+selector](../user-guide/node-selection/README.html).
+
+If you do not specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
+create pods on all nodes.
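+
+For example, to restrict the daemon pods to nodes carrying a particular label, the pod template might include a node selector like the following sketch (the label key and value are illustrative):
+
+{% highlight yaml %}
+{% raw %}
+spec:
+  template:
+    spec:
+      nodeSelector:
+        disktype: ssd
+{% endraw %}
+{% endhighlight %}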
+
+## How Daemon Pods are Scheduled
+
+Normally, the machine that a pod runs on is selected by the Kubernetes scheduler. However, pods
+created by the DaemonSet controller have the machine already selected (`.spec.nodeName` is specified
+when the pod is created, so it is ignored by the scheduler). Therefore:
+
+ - the [`unschedulable`](node.html#manual-node-administration) field of a node is not respected
+   by the DaemonSet controller.
+ - the DaemonSet controller can create pods even when the scheduler has not been started, which can
+   help with cluster bootstrapping.
+
+## Communicating with DaemonSet Pods
+
+Some possible patterns for communicating with pods in a DaemonSet are:
+
+- **Push**: Pods in the Daemon Set are configured to send updates to another service, such
+ as a stats database. They do not have clients.
+- **NodeIP and Known Port**: Pods in the Daemon Set use a `hostPort`, so that the pods are reachable
+  via the node IPs. Clients know the list of node IPs somehow, and know the port by convention.
+- **DNS**: Create a [headless service](../user-guide/services.html#headless-services) with the same pod selector,
+  and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from
+  DNS (see the headless service sketch after this list).
+- **Service**: Create a service with the same pod selector, and use the service to reach a
+ daemon on a random node. (No way to reach specific node.)
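+
+For the **DNS** pattern, a headless service might look like the following sketch; the service name, selector label, and port are assumptions that match the `daemon.yaml` example:
+
+{% highlight yaml %}
+{% raw %}
+apiVersion: v1
+kind: Service
+metadata:
+  name: prom-node-exp
+spec:
+  clusterIP: None        # headless: DNS returns the individual pod IPs
+  selector:
+    daemon: prom-node-exp
+  ports:
+  - port: 9090
+    targetPort: 9090
+{% endraw %}
+{% endhighlight %}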
+
+## Updating a DaemonSet
+
+If node labels are changed, the DaemonSet will promptly add pods to newly matching nodes and delete
+pods from newly not-matching nodes.
+
+You can modify the pods that a DaemonSet creates. However, pods do not allow all
+fields to be updated. Also, the DaemonSet controller will use the original template the next
+time a node (even with the same name) is created.
+
+
+You can delete a DaemonSet. If you specify `--cascade=false` with `kubectl`, then the pods
+will be left on the nodes. You can then create a new DaemonSet with a different template.
+The new DaemonSet with the different template will recognize all the existing pods as having
+matching labels. It will not modify or delete them despite a mismatch in the pod template.
+You will need to force new pod creation by deleting the pod or deleting the node.
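+
+For example, a non-cascading delete might look like this (the DaemonSet name matches the `daemon.yaml` example):
+
+{% highlight sh %}
+{% raw %}
+# Delete the DaemonSet object but leave its pods running on the nodes
+kubectl delete daemonset prometheus-node-exporter --cascade=false
+{% endraw %}
+{% endhighlight %}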
+
+You cannot update a DaemonSet.
+
+Support for updating DaemonSets and controlled updating of nodes is planned.
+
+## Alternatives to Daemon Set
+
+### Init Scripts
+
+It is certainly possible to run daemon processes by directly starting them on a node (e.g. using
+`init`, `upstartd`, or `systemd`). This is perfectly fine. However, there are several advantages to
+running such processes via a DaemonSet:
+
+- Ability to monitor and manage logs for daemons in the same way as applications.
+- Same config language and tools (e.g. pod templates, `kubectl`) for daemons and applications.
+- Future versions of Kubernetes will likely support integration between DaemonSet-created
+ pods and node upgrade workflows.
+- Running daemons in containers with resource limits increases isolation between daemons and app
+  containers. However, this can also be accomplished by running the daemons in a container but not in a pod
+  (e.g. started directly via Docker).
+
+### Bare Pods
+
+It is possible to create pods directly which specify a particular node to run on. However,
+a Daemon Set replaces pods that are deleted or terminated for any reason, such as in the case of
+node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, you should
+use a Daemon Set rather than creating individual pods.
+
+### Static Pods
+
+It is possible to create pods by writing a file to a certain directory watched by Kubelet. These
+are called [static pods](static-pods.html).
+Unlike DaemonSet, static pods cannot be managed with kubectl
+or other Kubernetes API clients. Static pods do not depend on the apiserver, making them useful
+in cluster bootstrapping cases. Also, static pods may be deprecated in the future.
+
+### Replication Controller
+
+Daemon Sets are similar to [Replication Controllers](../user-guide/replication-controller.html) in that
+they both create pods, and those pods have processes which are not expected to terminate (e.g. web servers,
+storage servers).
+
+Use a replication controller for stateless services, like frontends, where scaling up and down the
+number of replicas and rolling out updates are more important than controlling exactly which host
+the pod runs on. Use a Daemon Set when it is important that a copy of a pod always runs on
+all or certain hosts, and when it needs to start before other pods.
+
+## Caveats
+
+DaemonSet objects are in the [`extensions` API Group](../api.html#api-groups).
+DaemonSet is not enabled by default. Enable it by setting
+`--runtime-config=extensions/v1beta1/daemonsets=true` on the API server. On GCE, this can be
+done by exporting `ENABLE_DAEMONSETS=true` before running the `kube-up.sh` script.
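+
+For example (a sketch; this assumes you are launching the cluster from a Kubernetes release checkout):
+
+{% highlight sh %}
+{% raw %}
+# Enable the DaemonSet API before bringing the cluster up on GCE
+export ENABLE_DAEMONSETS=true
+cluster/kube-up.sh
+{% endraw %}
+{% endhighlight %}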
+
+DaemonSet objects effectively have [API version `v1alpha1`](../api.html#api-versioning).
+Alpha objects may change or even be discontinued in future software releases.
+However, due to a known issue, they will appear as API version `v1beta1` if enabled.
+
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/daemons.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/dns.md b/v1.1/docs/admin/dns.md
new file mode 100644
index 0000000000000..e58fb8f7b5015
--- /dev/null
+++ b/v1.1/docs/admin/dns.md
@@ -0,0 +1,60 @@
+---
+layout: docwithnav
+title: "DNS Integration with Kubernetes"
+---
+
+
+
+
+
+# DNS Integration with Kubernetes
+
+As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/release-1.1/cluster/addons/README.md).
+If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
+configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
+
+Every Service defined in the cluster (including the DNS server itself) will be
+assigned a DNS name. By default, a client Pod's DNS search list will
+include the Pod's own namespace and the cluster's default domain. This is best
+illustrated by example:
+
+Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
+in namespace `bar` can look up this service by simply doing a DNS query for
+`foo`. A Pod running in namespace `quux` can look up this service by doing a
+DNS query for `foo.bar`.
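+
+A quick way to check this from inside the cluster is to run a DNS lookup from a pod. This is a sketch; the `busybox` pod name is an assumption (any pod with `nslookup` available will do):
+
+{% highlight sh %}
+{% raw %}
+# From a pod running in namespace "bar"
+kubectl --namespace=bar exec busybox -- nslookup foo
+
+# From a pod running in namespace "quux"
+kubectl --namespace=quux exec busybox -- nslookup foo.bar
+{% endraw %}
+{% endhighlight %}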
+
+The cluster DNS server ([SkyDNS](https://github.com/skynetservices/skydns))
+supports forward lookups (A records) and service lookups (SRV records).
+
+## How it Works
+
+The running DNS pod holds 3 containers - skydns, etcd (a private instance which skydns uses),
+and a Kubernetes-to-skydns bridge called kube2sky. The kube2sky process
+watches the Kubernetes master for changes in Services, and then writes the
+information to etcd, which skydns reads. This etcd instance is not linked to
+any other etcd clusters that might exist, including the Kubernetes master.
+
+## Issues
+
+The skydns service is reachable directly from Kubernetes nodes (outside
+of any container) and DNS resolution works if the skydns service is targeted
+explicitly. However, nodes are not configured to use the cluster DNS service or
+to search the cluster's DNS domain by default. This may be resolved at a later
+time.
+
+## For more information
+
+See [the docs for the DNS cluster addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md).
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/dns.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/etcd.md b/v1.1/docs/admin/etcd.md
new file mode 100644
index 0000000000000..490afa83fa35a
--- /dev/null
+++ b/v1.1/docs/admin/etcd.md
@@ -0,0 +1,69 @@
+---
+layout: docwithnav
+title: "etcd"
+---
+
+
+
+
+
+# etcd
+
+[etcd](https://coreos.com/etcd/docs/2.0.12/) is a highly-available key value
+store which Kubernetes uses for persistent storage of all of its REST API
+objects.
+
+## Configuration: high-level goals
+
+Access Control: give *only* kube-apiserver read/write access to etcd. You do not
+want apiserver's etcd exposed to every node in your cluster (or worse, to the
+internet at large), because access to etcd is equivalent to root in your
+cluster.
+
+Data Reliability: for reasonable safety, either etcd needs to be run as a
+[cluster](high-availability.html#clustering-etcd) (multiple machines each running
+etcd) or etcd's data directory should be located on durable storage (e.g., GCE's
+persistent disk). In either case, if high availability is required--as it might
+be in a production cluster--the data directory ought to be [backed up
+periodically](https://coreos.com/etcd/docs/2.0.12/admin_guide.html#disaster-recovery),
+to reduce downtime in case of corruption.
+
+## Default configuration
+
+The default setup scripts use kubelet's file-based static pods feature to run etcd in a
+[pod](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
+be run on master VMs. The default location that kubelet scans for manifests is
+`/etc/kubernetes/manifests/`.
+
+## Kubernetes's usage of etcd
+
+By default, Kubernetes objects are stored under the `/registry` key in etcd.
+This path can be prefixed by using the [kube-apiserver](kube-apiserver.html) flag
+`--etcd-prefix="/foo"`.
+
+`etcd` is the only place that Kubernetes keeps state.
+
+## Troubleshooting
+
+To test whether `etcd` is running correctly, you can try writing a value to a
+test key. On your master VM (or somewhere with firewalls configured such that
+you can talk to your cluster's etcd), try:
+
+{% highlight sh %}
+{% raw %}
+curl -fs -X PUT "http://${host}:${port}/v2/keys/_test" -d value="test"
+{% endraw %}
+{% endhighlight %}
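+
+If the write succeeds, you should be able to read the key back (same `${host}` and `${port}` placeholders as above):
+
+{% highlight sh %}
+{% raw %}
+curl -fs "http://${host}:${port}/v2/keys/_test"
+{% endraw %}
+{% endhighlight %}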
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/etcd.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/garbage-collection.md b/v1.1/docs/admin/garbage-collection.md
new file mode 100644
index 0000000000000..354f29d44c1f7
--- /dev/null
+++ b/v1.1/docs/admin/garbage-collection.md
@@ -0,0 +1,93 @@
+---
+layout: docwithnav
+title: "Garbage Collection"
+---
+
+
+
+
+
+# Garbage Collection
+
+- [Introduction](#introduction)
+- [Image Collection](#image-collection)
+- [Container Collection](#container-collection)
+- [User Configuration](#user-configuration)
+
+### Introduction
+
+Garbage collection is managed automatically by the kubelet; it mainly covers unreferenced
+images and dead containers. The kubelet applies container garbage collection every minute
+and image garbage collection every 5 minutes.
+We generally do not recommend external garbage collection tools, since they can break the
+behavior of the kubelet by removing containers that the kubelet relies on as tombstones.
+That said, external garbage collectors that only target Docker's resource-leaking issues
+can still be useful.
+
+### Image Collection
+
+Kubernetes manages the lifecycle of all images through the imageManager, with the cooperation
+of cAdvisor.
+The image garbage collection policy takes two factors into consideration:
+`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
+will trigger garbage collection, which attempts to delete unused images until the low
+threshold is met. Least recently used images are deleted first.
+
+### Container Collection
+
+The container garbage collection policy considers three user-defined variables. `MinAge` is the
+minimum age at which a container can be garbage collected (zero means no limit). `MaxPerPodContainer`
+is the maximum number of dead containers any single pod (UID, container name) pair is allowed to have
+(a negative value means no limit). `MaxContainers` is the maximum total number of dead containers
+(again, a negative value means no limit).
+
+The kubelet removes containers that are unidentified or that fall outside the bounds set by the three
+flags mentioned above. Generally, the oldest containers are removed first. Because both
+`MaxPerPodContainer` and `MaxContainers` are taken into account, they can conflict: keeping the
+maximum number of containers per pod can exceed the global limit on dead containers. In that case,
+the kubelet relaxes `MaxPerPodContainer`; in the worst case it is downgraded to 1 container per pod,
+and then the oldest containers are evicted.
+
+When the kubelet removes dead containers, all files inside those containers are cleaned up as well.
+Containers that are not managed by the kubelet are skipped.
+
+### User Configuration
+
+Users can tune image garbage collection with the following two kubelet flags:
+
+1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
+Default is 90%.
+2. `image-gc-low-threshold`, the percent of disk usage to which image garbage collection attempts
+to free. Default is 80%.
+
+Users can also customize the container garbage collection policy via the following three flags:
+
+1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
+garbage collected. Default is 1 minute.
+2. `maximum-dead-containers-per-container`, maximum number of old instances to retain
+per container. Default is 2.
+3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
+Default is 100.
+
+Note that when customizing these flags, we highly recommend setting `maximum-dead-containers-per-container`
+large enough to retain at least 2 dead containers per expected container. A generous value for
+`maximum-dead-containers` is also important, for a similar reason.
+See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
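+
+As a sketch, these settings are passed to the kubelet as command-line flags; the values below simply restate the documented defaults and are not recommendations:
+
+{% highlight sh %}
+{% raw %}
+kubelet \
+  --image-gc-high-threshold=90 \
+  --image-gc-low-threshold=80 \
+  --minimum-container-ttl-duration=1m \
+  --maximum-dead-containers-per-container=2 \
+  --maximum-dead-containers=100
+{% endraw %}
+{% endhighlight %}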
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/garbage-collection.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/high-availability.md b/v1.1/docs/admin/high-availability.md
new file mode 100644
index 0000000000000..7f826b268312f
--- /dev/null
+++ b/v1.1/docs/admin/high-availability.md
@@ -0,0 +1,280 @@
+---
+layout: docwithnav
+title: "High Availability Kubernetes Clusters"
+---
+
+
+
+
+
+# High Availability Kubernetes Clusters
+
+**Table of Contents**
+
+
+- [High Availability Kubernetes Clusters](#high-availability-kubernetes-clusters)
+ - [Introduction](#introduction)
+ - [Overview](#overview)
+ - [Initial set-up](#initial-set-up)
+ - [Reliable nodes](#reliable-nodes)
+ - [Establishing a redundant, reliable data storage layer](#establishing-a-redundant-reliable-data-storage-layer)
+ - [Clustering etcd](#clustering-etcd)
+ - [Validating your cluster](#validating-your-cluster)
+ - [Even more reliable storage](#even-more-reliable-storage)
+ - [Replicated API Servers](#replicated-api-servers)
+ - [Installing configuration files](#installing-configuration-files)
+ - [Starting the API Server](#starting-the-api-server)
+ - [Load balancing](#load-balancing)
+ - [Master elected components](#master-elected-components)
+ - [Installing configuration files](#installing-configuration-files)
+ - [Running the podmaster](#running-the-podmaster)
+ - [Conclusion](#conclusion)
+ - [Vagrant up!](#vagrant-up)
+
+
+
+## Introduction
+
+This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
+Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such as
+the simple [Docker based single node cluster instructions](../../docs/getting-started-guides/docker.html),
+or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes.
+
+Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. We will
+be working to add this continuous testing, but for now the single-node master installations are more heavily tested.
+
+## Overview
+
+Setting up a truly reliable, highly available distributed system requires a number of steps; it is akin to
+wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
+of these steps in detail, but a summary is given here to help guide and orient the user.
+
+The steps involved are as follows:
+ * [Creating the reliable constituent nodes that collectively form our HA master implementation.](#reliable-nodes)
+ * [Setting up a redundant, reliable storage layer with clustered etcd.](#establishing-a-redundant-reliable-data-storage-layer)
+ * [Starting replicated, load balanced Kubernetes API servers](#replicated-api-servers)
+ * [Setting up master-elected Kubernetes scheduler and controller-manager daemons](#master-elected-components)
+
+Here's what the system should look like when it's finished:
+![High availability Kubernetes diagram](high-availability/ha.png)
+
+Ready? Let's get started.
+
+## Initial set-up
+
+The remainder of this guide assumes that you are setting up a 3-node clustered master, where each machine is running some flavor of Linux.
+Examples in the guide are given for Debian distributions, but they should be easily adaptable to other distributions.
+Likewise, this set up should work whether you are running in a public or private cloud provider, or if you are running
+on bare metal.
+
+The easiest way to implement an HA Kubernetes cluster is to start with an existing single-master cluster. The
+instructions at [https://get.k8s.io](https://get.k8s.io)
+describe easy installation for single-master clusters on a variety of platforms.
+
+## Reliable nodes
+
+On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is
+to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use
+the `kubelet` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
+establish resource limits, and introspect the resource usage of each daemon. Of course, we also need something to monitor the kubelet
+itself (insert "who watches the watcher" jokes here). For Debian systems, we choose monit, but there are a number of alternate
+choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run `systemctl enable kubelet`.
+
+If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
+`which kubelet` to determine if the binary is in fact installed. If it is not installed,
+you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
+[kubelet init file](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
+scripts.
+
+If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
+[high-availability/monit-docker](high-availability/monit-docker) configs.
+
+On systemd systems, run `systemctl enable kubelet` and `systemctl enable docker`.
+
+
+## Establishing a redundant, reliable data storage layer
+
+The central foundation of a highly available solution is a redundant, reliable storage layer. The number one rule of high-availability is
+to protect the data. Whatever else happens, whatever catches on fire, if you have the data, you can rebuild. If you lose the data, you're
+done.
+
+Clustered etcd already replicates your storage to all master instances in your cluster. This means that to lose data, all three nodes would need
+to have their physical (or virtual) disks fail at the same time. The probability that this occurs is relatively low, so for many people
+running a replicated etcd cluster is likely reliable enough. You can add additional reliability by increasing the
+size of the cluster from three to five nodes. If that is still insufficient, you can add
+[even more redundancy to your storage layer](#even-more-reliable-storage).
+
+### Clustering etcd
+
+The full details of clustering etcd are beyond the scope of this document; many details are given on the
+[etcd clustering page](https://github.com/coreos/etcd/blob/master/Documentation/clustering.md). This example walks through
+a simple cluster set-up, using etcd's built-in discovery to build our cluster.
+
+First, hit the etcd discovery service to create a new token:
+
+{% highlight sh %}
+{% raw %}
+curl https://discovery.etcd.io/new?size=3
+{% endraw %}
+{% endhighlight %}
+
+On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`.
+
+The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd`
+server from the definition of the pod specified in `etcd.yaml`.
+
+Note that in `etcd.yaml` you should substitute the token URL you got above for `${DISCOVERY_TOKEN}` on all three machines,
+and you should substitute a different name (e.g. `node-1`) for `${NODE_NAME}` and the correct IP address
+for `${NODE_IP}` on each machine.
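+
+One way to do the substitution is with `sed` (a sketch only; the discovery URL, node name, and IP below are placeholders you must replace with your own values):
+
+{% highlight sh %}
+{% raw %}
+sed -i \
+  -e 's|${DISCOVERY_TOKEN}|https://discovery.etcd.io/<your-token>|g' \
+  -e 's|${NODE_NAME}|node-1|g' \
+  -e 's|${NODE_IP}|10.240.0.2|g' \
+  /etc/kubernetes/manifests/etcd.yaml
+{% endraw %}
+{% endhighlight %}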
+
+
+#### Validating your cluster
+
+Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
+
+{% highlight sh %}
+{% raw %}
+etcdctl member list
+{% endraw %}
+{% endhighlight %}
+
+and
+
+{% highlight sh %}
+{% raw %}
+etcdctl cluster-health
+{% endraw %}
+{% endhighlight %}
+
+You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo`
+on a different node.
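+
+For example (a sketch; run each command on a different master):
+
+{% highlight sh %}
+{% raw %}
+# On the first master
+etcdctl set foo bar
+
+# On a different master; should print "bar"
+etcdctl get foo
+{% endraw %}
+{% endhighlight %}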
+
+### Even more reliable storage
+
+Of course, if you are interested in increased data reliability, there are further options that make the place where etcd
+stores its data even more reliable than regular disks (belts *and* suspenders, ftw!).
+
+If you use a cloud provider, then they usually provide this
+for you, for example [Persistent Disk](https://cloud.google.com/compute/docs/disks/persistent-disks) on the Google Cloud Platform. These
+are block-device persistent storage that can be mounted onto your virtual machine. Other cloud providers provide similar solutions.
+
+If you are running on physical machines, you can also use network attached redundant storage using an iSCSI or NFS interface.
+Alternatively, you can run a clustered file system like Gluster or Ceph. Finally, you can also run a RAID array on each physical machine.
+
+Regardless of how you choose to implement it, if you choose to use one of these options, you should make sure that your storage is mounted
+to each machine. If your storage is shared between the three masters in your cluster, you should create a different directory on the storage
+for each node. Throughout these instructions, we assume that this storage is mounted to your machine at `/var/etcd/data`.
+
+
+## Replicated API Servers
+
+Once you have replicated etcd set up correctly, you can install the apiserver, again using the kubelet.
+
+### Installing configuration files
+
+First you need to create the initial log file, so that Docker mounts a file instead of a directory:
+
+{% highlight sh %}
+{% raw %}
+touch /var/log/kube-apiserver.log
+{% endraw %}
+{% endhighlight %}
+
+Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:
+ * basic_auth.csv - basic auth user and password
+ * ca.crt - Certificate Authority cert
+ * known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
+ * kubecfg.crt - Client certificate, public key
+ * kubecfg.key - Client certificate, private key
+ * server.cert - Server certificate, public key
+ * server.key - Server certificate, private key
+
+The easiest way to create this directory may be to copy it from the master node of a working cluster, or you can generate these files yourself.
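+
+For example, copying from an existing master might look like this (a sketch; the hostname is a placeholder and root SSH access is assumed):
+
+{% highlight sh %}
+{% raw %}
+scp -r root@existing-master:/srv/kubernetes /srv/
+{% endraw %}
+{% endhighlight %}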
+
+### Starting the API Server
+
+Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node.
+
+The kubelet monitors this directory, and will automatically create an instance of the `kube-apiserver` container using the pod definition specified
+in the file.
+
+### Load balancing
+
+At this point, you should have 3 apiservers all working correctly. If you set up a network load balancer, you should
+be able to access your cluster via that load balancer, and see traffic balancing between the apiserver instances. Setting
+up a load balancer will depend on the specifics of your platform; for example, instructions for the Google Cloud
+Platform can be found [here](https://cloud.google.com/compute/docs/load-balancing/).
+
+Note, if you are using authentication, you may need to regenerate your certificate to include the IP address of the balancer,
+in addition to the IP addresses of the individual nodes.
+
+For pods that you deploy into the cluster, the `kubernetes` service/dns name should provide a load balanced endpoint for the master automatically.
+
+For external users of the API (e.g. the `kubectl` command line interface, continuous build pipelines, or other clients) you will want to configure
+them to talk to the external load balancer's IP address.
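+
+For example, `kubectl` can be pointed at the load balancer by updating its cluster entry (a sketch; the cluster name and address are placeholders):
+
+{% highlight sh %}
+{% raw %}
+kubectl config set-cluster my-ha-cluster \
+  --server=https://<load-balancer-ip> \
+  --certificate-authority=/srv/kubernetes/ca.crt
+{% endraw %}
+{% endhighlight %}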
+
+## Master elected components
+
+So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
+cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
+instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
+master election. On each of the three apiserver nodes, we run a small utility application named `podmaster`. Its job is to implement a master
+election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler); if it
+loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
+
+In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](../proposals/high-availability.html).
+
+### Installing configuration files
+
+First, create empty log files on each node, so that Docker will mount the files rather than creating new directories:
+
+{% highlight sh %}
+{% raw %}
+touch /var/log/kube-scheduler.log
+touch /var/log/kube-controller-manager.log
+{% endraw %}
+{% endhighlight %}
+
+Next, set up the descriptions of the scheduler and controller manager pods on each node
+by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/`
+directory.
+
+### Running the podmaster
+
+Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/`.
+
+As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in `podmaster.yaml`.
+
+Now you will have one instance of the scheduler process running on a single master node, and likewise one
+controller-manager process running on a single (possibly different) master node. If either of these processes fails,
+the kubelet will restart it. If any of these nodes fail, the process will move to a different instance of a master
+node.
+
+## Conclusion
+
+At this point, you are done (yeah!) with the master components, but you still need to add worker nodes (boo!).
+
+If you have an existing cluster, this is as simple as reconfiguring your kubelets to talk to the load-balanced endpoint, and
+restarting the kubelets on each node.
+
+If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
+set the kubelet's `--api-servers` flag to your replicated endpoint.
+
+## Vagrant up!
+
+We indeed have an initial proof of concept tester for this, which is available [here](https://releases.k8s.io/release-1.1/examples/high-availability).
+
+It implements the major concepts of the podmaster HA implementation (with a few minor reductions for simplicity), alongside a quick smoke test using k8petstore.
+
+
+
+
+
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/high-availability.md?pixel)]()
+
+
diff --git a/v1.1/docs/admin/high-availability/default-kubelet b/v1.1/docs/admin/high-availability/default-kubelet
new file mode 100644
index 0000000000000..41ee5301510a3
--- /dev/null
+++ b/v1.1/docs/admin/high-availability/default-kubelet
@@ -0,0 +1,8 @@
+# This should be the IP address of the load balancer for all masters
+MASTER_IP=
+# This should be the internal service IP address reserved for DNS
+DNS_IP=
+
+DAEMON_ARGS="$DAEMON_ARGS --api-servers=https://${MASTER_IP} --enable-debugging-handlers=true --cloud-provider=
+gce --config=/etc/kubernetes/manifests --allow-privileged=False --v=2 --cluster-dns=${DNS_IP} --cluster-domain=c
+luster.local --configure-cbr0=true --cgroup-root=/ --system-container=/system "
diff --git a/v1.1/docs/admin/high-availability/etcd.yaml b/v1.1/docs/admin/high-availability/etcd.yaml
new file mode 100644
index 0000000000000..fc9fe67e7546b
--- /dev/null
+++ b/v1.1/docs/admin/high-availability/etcd.yaml
@@ -0,0 +1,87 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: etcd-server
+spec:
+ hostNetwork: true
+ containers:
+ - image: gcr.io/google_containers/etcd:2.0.9
+ name: etcd-container
+ command:
+ - /usr/local/bin/etcd
+ - --name
+ - ${NODE_NAME}
+ - --initial-advertise-peer-urls
+ - http://${NODE_IP}:2380
+ - --listen-peer-urls
+ - http://${NODE_IP}:2380
+ - --advertise-client-urls
+ - http://${NODE_IP}:4001
+ - --listen-client-urls
+ - http://127.0.0.1:4001
+ - --data-dir
+ - /var/etcd/data
+ - --discovery
+ - ${DISCOVERY_TOKEN}
+ ports:
+ - containerPort: 2380
+ hostPort: 2380
+ name: serverport
+ - containerPort: 4001
+ hostPort: 4001
+ name: clientport
+ volumeMounts:
+ - mountPath: /var/etcd
+ name: varetcd
+ - mountPath: /etc/ssl
+ name: etcssl
+ readOnly: true
+ - mountPath: /usr/share/ssl
+ name: usrsharessl
+ readOnly: true
+ - mountPath: /var/ssl
+ name: varssl
+ readOnly: true
+ - mountPath: /usr/ssl
+ name: usrssl
+ readOnly: true
+ - mountPath: /usr/lib/ssl
+ name: usrlibssl
+ readOnly: true
+ - mountPath: /usr/local/openssl
+ name: usrlocalopenssl
+ readOnly: true
+ - mountPath: /etc/openssl
+ name: etcopenssl
+ readOnly: true
+ - mountPath: /etc/pki/tls
+ name: etcpkitls
+ readOnly: true
+ volumes:
+ - hostPath:
+ path: /var/etcd/data
+ name: varetcd
+ - hostPath:
+ path: /etc/ssl
+ name: etcssl
+ - hostPath:
+ path: /usr/share/ssl
+ name: usrsharessl
+ - hostPath:
+ path: /var/ssl
+ name: varssl
+ - hostPath:
+ path: /usr/ssl
+ name: usrssl
+ - hostPath:
+ path: /usr/lib/ssl
+ name: usrlibssl
+ - hostPath:
+ path: /usr/local/openssl
+ name: usrlocalopenssl
+ - hostPath:
+ path: /etc/openssl
+ name: etcopenssl
+ - hostPath:
+ path: /etc/pki/tls
+ name: etcpkitls
diff --git a/v1.1/docs/admin/high-availability/ha.png b/v1.1/docs/admin/high-availability/ha.png
new file mode 100644
index 0000000000000000000000000000000000000000..a005de69d7fc860a25fc871968bbfa8fe32d63eb
GIT binary patch
literal 38814
zcmd?RcTiJZ7e0FE0s{I17P_J!B8osjiUd)?LO|&~DAGa)A+%6L#0J;^2~7opl+b$z
z3sOQ6k^rHK5Fm62EtGqLzTfxFy}y||_wVauV&%Aqnb~5GGdL*musJ#dIenhr1k6&
zubAsqi);F&Xy;|Lsfdoo;b24n!%H|d
z$IAV8EUP(tBYj9`;eqE?Th~M%U*O_i!U#8f&Dy$2ykN+y@P|Gj9o=U7}&Ecc0QYZXgR8i
zTR@%Zy&)z0YCXJ6+(QOw1vb=43`|Ar;k(N_R_28o(gU_kh8Ex^Gb3$QN9nJ{IKm4k
zX`{eG2ZK)b2Pl)Zs#(1-q3SL4}(2dO-=8~mXy6W~%=!9^S=@;Jc*;q=yIhMcS%
zdXw0LY-zo)j`Sq5fG=Y>rZLd7^Q8Jb`1?Qu{j+}&ybk^^1~7pi@CUOE{X;)i2f+{c
zgHaYZ6bg2z0kG14@3*D@roZt2tu5GazinuOU}#%g8`g(Hfc34F$hgXz^%4jJ&3qgY
z5<#SayfU7@UFb4YR#paxtKTE_l$GEfe0-H{i*RyESafx+p6B6t{^AAJtk~WxDk|!f
z+TRuufF5^5M{e%9oa}7G0HcSUT?Qm%H;A%2-SgHrMLSxku+spZ`8KaOI=n#d>PE
zYTpC|1ZGh=M|`Uo7Y#q>