Downloadable examples for “Run applications” section (kubernetes#14147)
* Move examples ahead of commands that use them

In support of kubernetes#12740

The aim is to adopt a consistent style around providing downloadable
examples for use with kubectl, etc.

* Tweak wording for stateful app pod example

* Adopt formatting conventions for code blocks

* Move ReplicationController sample YAML to examples

In aid of kubernetes#12740

* Move PodDisruptionBudget sample YAML to examples

In aid of kubernetes#12740

* Update test schema for new examples

* Use Unicode ellipsis in example

Aim here is to make the elision more obvious
sftim authored and k8s-ci-robot committed Jun 11, 2019
1 parent b3e3332 commit 7b44b7a
Showing 8 changed files with 111 additions and 98 deletions.
26 changes: 3 additions & 23 deletions content/en/docs/tasks/run-application/configure-pdb.md
@@ -151,31 +151,11 @@ You can find examples of pod disruption budgets defined below. They match pods w

Example PDB Using minAvailable:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
minAvailable: 2
selector:
matchLabels:
app: zookeeper
```
{{< codenew file="policy/zookeeper-pod-disruption-budget-minavailable.yaml" >}}

Example PDB Using maxUnavailable (Kubernetes 1.7 or higher):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
maxUnavailable: 1
selector:
matchLabels:
app: zookeeper
```
{{< codenew file="policy/zookeeper-pod-disruption-budget-maxunavailable.yaml" >}}

For example, if the above `zk-pdb` object selects the pods of a StatefulSet of size 3, both
specifications have the exact same meaning. The use of `maxUnavailable` is recommended as it
@@ -227,7 +207,7 @@ metadata:
creationTimestamp: 2017-08-28T02:38:26Z
generation: 1
name: zk-pdb
...
…
status:
currentHealthy: 3
desiredHealthy: 3
content/en/docs/tasks/run-application/rolling-update-replication-controller.md
@@ -37,8 +37,12 @@ A rolling update works by:

Rolling updates are initiated with the `kubectl rolling-update` command:

kubectl rolling-update NAME \
([NEW_NAME] --image=IMAGE | -f FILE)
```shell
kubectl rolling-update NAME NEW_NAME --image=IMAGE:TAG

# or read the configuration from a file
kubectl rolling-update NAME -f FILE
```

{{% /capture %}}

@@ -50,7 +54,9 @@ Rolling updates are initiated with the `kubectl rolling-update` command:
To initiate a rolling update using a configuration file, pass the new file to
`kubectl rolling-update`:

kubectl rolling-update NAME -f FILE
```shell
kubectl rolling-update NAME -f FILE
```

The configuration file must:

@@ -65,25 +71,29 @@ Replication controller configuration files are described in

### Examples

// Update pods of frontend-v1 using new replication controller data in frontend-v2.json.
kubectl rolling-update frontend-v1 -f frontend-v2.json
```shell
# Update pods of frontend-v1 using new replication controller data in frontend-v2.json.
kubectl rolling-update frontend-v1 -f frontend-v2.json

// Update pods of frontend-v1 using JSON data passed into stdin.
cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -
# Update pods of frontend-v1 using JSON data passed into stdin.
cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -
```
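
While an update runs, you can watch the old and new replication controllers being scaled in opposite directions. A minimal sketch, assuming the `frontend-v1`/`frontend-v2` names from the examples above:

```shell
# frontend-v1 scales down one pod at a time while frontend-v2 scales up
kubectl get rc --watch
```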

## Updating the container image

To update only the container image, pass a new image name and tag with the
`--image` flag and (optionally) a new controller name:

kubectl rolling-update NAME [NEW_NAME] --image=IMAGE:TAG
```shell
kubectl rolling-update NAME NEW_NAME --image=IMAGE:TAG
```

The `--image` flag is only supported for single-container pods. Specifying
`--image` with multi-container pods returns an error.

If no `NEW_NAME` is specified, a new replication controller is created with
a temporary name. Once the rollout is complete, the old controller is deleted,
and the new controller is updated to use the original name.
If you didn't specify a new name, this creates a new replication controller
with a temporary name. Once the rollout is complete, the old controller is
deleted, and the new controller is updated to use the original name.

The update will fail if `IMAGE:TAG` is identical to the
current value. For this reason, we recommend the use of versioned tags as
@@ -94,11 +104,13 @@ Moreover, the use of `:latest` is not recommended, see

### Examples

// Update the pods of frontend-v1 to frontend-v2
kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2
```shell
# Update the pods of frontend-v1 to frontend-v2
kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2

// Update the pods of frontend, keeping the replication controller name
kubectl rolling-update frontend --image=image:v2
# Update the pods of frontend, keeping the replication controller name
kubectl rolling-update frontend --image=image:v2
```

## Required and optional fields

@@ -143,24 +155,7 @@ from the [`kubectl` reference](/docs/reference/generated/kubectl/kubectl-command

Let's say you were running version 1.7.9 of nginx:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 5
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
```
{{< codenew file="controllers/replication-nginx-1.7.9.yaml" >}}

To update to version 1.9.1, you can use [`kubectl rolling-update --image`](https://git.k8s.io/community/contributors/design-proposals/cli/simple-rolling-update.md) to specify the new image:
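
The exact command sits in a collapsed part of this diff; as a rough sketch based on the `NAME --image=IMAGE:TAG` form above (assuming the `my-nginx` controller from the example manifest), it might look like:

```shell
# Sketch only: keep the controller name and swap the image tag
kubectl rolling-update my-nginx --image=nginx:1.9.1
```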

@@ -218,34 +213,13 @@ This is one example where the immutability of containers is a huge asset.

If you need to update more than just the image (e.g., command arguments, environment variables), you can create a new replication controller, with a new name and distinguishing label value, such as:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx-v4
spec:
replicas: 5
selector:
app: nginx
deployment: v4
template:
metadata:
labels:
app: nginx
deployment: v4
spec:
containers:
- name: nginx
image: nginx:1.9.2
args: ["nginx", "-T"]
ports:
- containerPort: 80
```
{{< codenew file="controllers/replication-nginx-1.9.2.yaml" >}}

and roll it out:

```shell
kubectl rolling-update my-nginx -f ./nginx-rc.yaml
# Assuming you named the file "my-nginx.yaml"
kubectl rolling-update my-nginx -f ./my-nginx.yaml
```
```
Created my-nginx-v4
content/en/docs/tasks/run-application/run-replicated-stateful-application.md
@@ -59,12 +59,12 @@ and a StatefulSet.

Create the ConfigMap from the following YAML configuration file:

{{< codenew file="application/mysql/mysql-configmap.yaml" >}}

```shell
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml
```

{{< codenew file="application/mysql/mysql-configmap.yaml" >}}

This ConfigMap provides `my.cnf` overrides that let you independently control
configuration on the MySQL master and slaves.
In this case, you want the master to be able to serve replication logs to slaves
@@ -79,12 +79,12 @@ based on information provided by the StatefulSet controller.

Create the Services from the following YAML configuration file:

{{< codenew file="application/mysql/mysql-services.yaml" >}}

```shell
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml
```

{{< codenew file="application/mysql/mysql-services.yaml" >}}

The Headless Service provides a home for the DNS entries that the StatefulSet
controller creates for each Pod that's part of the set.
Because the Headless Service is named `mysql`, the Pods are accessible by
@@ -105,12 +105,12 @@ writes.

Finally, create the StatefulSet from the following YAML configuration file:

{{< codenew file="application/mysql/mysql-statefulset.yaml" >}}

```shell
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-statefulset.yaml
```

{{< codenew file="application/mysql/mysql-statefulset.yaml" >}}

You can watch the startup progress by running:

```shell
@@ -141,8 +141,8 @@ ordinal index.
It waits until each Pod reports being Ready before starting the next one.

In addition, the controller assigns each Pod a unique, stable name of the form
`<statefulset-name>-<ordinal-index>`.
In this case, that results in Pods named `mysql-0`, `mysql-1`, and `mysql-2`.
`<statefulset-name>-<ordinal-index>`, which results in Pods named `mysql-0`,
`mysql-1`, and `mysql-2`.
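
Because these names are stable, you can address individual Pods directly. A small sketch of checking them once the StatefulSet is up, assuming the `mysql` StatefulSet from this page:

```shell
# Each Pod keeps the same name for its lifetime, so these names are predictable
kubectl get pods mysql-0 mysql-1 mysql-2
```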

The Pod template in the above StatefulSet manifest takes advantage of these
properties to perform orderly startup of MySQL replication.
16 changes: 16 additions & 0 deletions content/en/examples/controllers/replication-nginx-1.7.9.yaml
@@ -0,0 +1,16 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 5
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
21 changes: 21 additions & 0 deletions content/en/examples/controllers/replication-nginx-1.9.2.yaml
@@ -0,0 +1,21 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx-v4
spec:
replicas: 5
selector:
app: nginx
deployment: v4
template:
metadata:
labels:
app: nginx
deployment: v4
spec:
containers:
- name: nginx
image: nginx:1.9.2
args: ["nginx", "-T"]
ports:
- containerPort: 80
24 changes: 14 additions & 10 deletions content/en/examples/examples_test.go
@@ -415,13 +415,15 @@ func TestExampleObjectSchemas(t *testing.T) {
"configmap-multikeys": {&api.ConfigMap{}},
},
"controllers": {
"daemonset": {&apps.DaemonSet{}},
"frontend": {&apps.ReplicaSet{}},
"hpa-rs": {&autoscaling.HorizontalPodAutoscaler{}},
"job": {&batch.Job{}},
"replicaset": {&apps.ReplicaSet{}},
"replication": {&api.ReplicationController{}},
"nginx-deployment": {&apps.Deployment{}},
"daemonset": {&apps.DaemonSet{}},
"frontend": {&apps.ReplicaSet{}},
"hpa-rs": {&autoscaling.HorizontalPodAutoscaler{}},
"job": {&batch.Job{}},
"replicaset": {&apps.ReplicaSet{}},
"replication": {&api.ReplicationController{}},
"replication-nginx-1.7.9": {&api.ReplicationController{}},
"replication-nginx-1.9.2": {&api.ReplicationController{}},
"nginx-deployment": {&apps.Deployment{}},
},
"debug": {
"counter-pod": {&api.Pod{}},
@@ -523,9 +525,11 @@ func TestExampleObjectSchemas(t *testing.T) {
"redis": {&api.Pod{}},
},
"policy": {
"privileged-psp": {&policy.PodSecurityPolicy{}},
"restricted-psp": {&policy.PodSecurityPolicy{}},
"example-psp": {&policy.PodSecurityPolicy{}},
"privileged-psp": {&policy.PodSecurityPolicy{}},
"restricted-psp": {&policy.PodSecurityPolicy{}},
"example-psp": {&policy.PodSecurityPolicy{}},
"zookeeper-pod-disruption-budget-maxunavailable": {&policy.PodDisruptionBudget{}},
"zookeeper-pod-disruption-budget-minunavailable": {&policy.PodDisruptionBudget{}},
},
"service": {
"nginx-service": {&api.Service{}},
Expand Down
9 changes: 9 additions & 0 deletions content/en/examples/policy/zookeeper-pod-disruption-budget-maxunavailable.yaml
@@ -0,0 +1,9 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
maxUnavailable: 1
selector:
matchLabels:
app: zookeeper
9 changes: 9 additions & 0 deletions content/en/examples/policy/zookeeper-pod-disruption-budget-minavailable.yaml
@@ -0,0 +1,9 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
minAvailable: 2
selector:
matchLabels:
app: zookeeper
