Merge branch 'master' into dev-1.15
makoscafee authored Jun 19, 2019
2 parents f04d0cf + 0e1e971 commit 455f312
Showing 33 changed files with 463 additions and 307 deletions.
4 changes: 4 additions & 0 deletions content/en/docs/concepts/cluster-administration/addons.md
@@ -44,6 +44,10 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a dashboard web interface for Kubernetes.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.

## Infrastructure

* [KubeVirt](https://kubevirt.io/user-guide/docs/latest/administration/intro.html#cluster-side-add-on-deployment) is an add-on to run virtual machines on Kubernetes. Usually run on bare-metal clusters.

## Legacy Add-ons

There are several other add-ons documented in the deprecated [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) directory.
@@ -212,10 +212,10 @@ You can check node capacities and amounts allocated with the
`kubectl describe nodes` command. For example:
```shell
kubectl describe nodes e2e-test-minion-group-4lw4
kubectl describe nodes e2e-test-node-pool-4lw4
```
```
Name: e2e-test-minion-group-4lw4
Name: e2e-test-node-pool-4lw4
[ ... lines removed for clarity ...]
Capacity:
cpu: 2
237 changes: 59 additions & 178 deletions content/en/docs/concepts/overview/what-is-kubernetes.md

Large diffs are not rendered by default.

@@ -144,7 +144,7 @@ kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
@@ -160,7 +160,7 @@ kind: Service
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
@@ -170,4 +170,4 @@ metadata:

With the MySQL `StatefulSet` and `Service`, you'll notice that information about both MySQL and WordPress, the broader application, is included.

{{% /capture %}}
{{% /capture %}}
15 changes: 15 additions & 0 deletions content/en/docs/concepts/overview/working-with-objects/names.md
@@ -26,6 +26,21 @@ See the [identifiers design doc](https://git.k8s.io/community/contributors/desig

By convention, the names of Kubernetes resources should be at most 253 characters long and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions.

For example, here is a configuration file for a Pod named `nginx-demo` with a container named `nginx`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
```
## UIDs
{{< glossary_definition term_id="uid" length="all" >}}
6 changes: 4 additions & 2 deletions content/en/docs/concepts/policy/pod-security-policy.md
@@ -485,8 +485,10 @@ spec:
minimum value of the first range as the default. Validates against all ranges.
- *MustRunAsNonRoot* - Requires that the pod be submitted with a non-zero
`runAsUser` or have the `USER` directive defined (using a numeric UID) in the
image. No default provided. Setting `allowPrivilegeEscalation=false` is strongly
recommended with this strategy.
image. Pods which have specified neither `runAsNonRoot` nor `runAsUser` settings
will be mutated to set `runAsNonRoot=true`, thus requiring a defined non-zero
numeric `USER` directive in the container. No default provided. Setting
`allowPrivilegeEscalation=false` is strongly recommended with this strategy.
- *RunAsAny* - No default provided. Allows any `runAsUser` to be specified.
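
For instance, here is a minimal sketch of a `PodSecurityPolicy` that uses the *MustRunAsNonRoot* strategy described above together with the recommended `allowPrivilegeEscalation: false`; the policy name and the other rule values are illustrative, not taken from this page:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: non-root-example        # hypothetical policy name
spec:
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot      # reject pods that cannot resolve to a non-zero numeric UID
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```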

**RunAsGroup** - Controls which primary group ID the containers are run with.
2 changes: 0 additions & 2 deletions content/en/docs/concepts/policy/resource-quotas.md
@@ -66,10 +66,8 @@ The following resource types are supported:

| Resource Name | Description |
| --------------------- | ----------------------------------------------------------- |
| `cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. |
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
| `memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
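
As a sketch of how these resource names are used, here is a hypothetical `ResourceQuota`; the object name, namespace, and the specific amounts are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources     # hypothetical name
  namespace: myspace          # hypothetical namespace
spec:
  hard:
    requests.cpu: "1"         # sum of CPU requests across all non-terminal pods
    requests.memory: 1Gi      # sum of memory requests
    limits.cpu: "2"           # sum of CPU limits
    limits.memory: 2Gi        # sum of memory limits
```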

20 changes: 10 additions & 10 deletions content/en/docs/concepts/services-networking/dns-pod-service.md
@@ -41,11 +41,11 @@ For more up-to-date specification, see
### A records

"Normal" (not headless) Services are assigned a DNS A record for a name of the
form `my-svc.my-namespace.svc.cluster.local`. This resolves to the cluster IP
form `my-svc.my-namespace.svc.cluster-domain.example`. This resolves to the cluster IP
of the Service.

"Headless" (without a cluster IP) Services are also assigned a DNS A record for
a name of the form `my-svc.my-namespace.svc.cluster.local`. Unlike normal
a name of the form `my-svc.my-namespace.svc.cluster-domain.example`. Unlike normal
Services, this resolves to the set of IPs of the pods selected by the Service.
Clients are expected to consume the set or else use standard round-robin
selection from the set.
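
For illustration, here is a minimal sketch of a normal and a headless Service; the names, namespace, and selector are illustrative, and the DNS names in the comments assume the cluster domain used on this page:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: my-namespace
spec:
  selector:
    app: my-app
  ports:
  - port: 80
# A record: my-svc.my-namespace.svc.cluster-domain.example -> cluster IP
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
  namespace: my-namespace
spec:
  clusterIP: None             # headless: the same name resolves to the selected Pod IPs
  selector:
    app: my-app
  ports:
  - port: 80
```
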
@@ -55,12 +55,12 @@ selection from the set.
SRV Records are created for named ports that are part of normal or [Headless
Services](/docs/concepts/services-networking/service/#headless-services).
For each named port, the SRV record would have the form
`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local`.
`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example`.
For a regular service, this resolves to the port number and the domain name:
`my-svc.my-namespace.svc.cluster.local`.
`my-svc.my-namespace.svc.cluster-domain.example`.
For a headless service, this resolves to multiple answers, one for each pod
that is backing the service, and contains the port number and the domain name of the pod
of the form `auto-generated-name.my-svc.my-namespace.svc.cluster.local`.
of the form `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example`.
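
As a sketch, here is a Service with a named port and, in the comment, the SRV name it would map to; the names are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: my-namespace
spec:
  selector:
    app: my-app
  ports:
  - name: my-port-name        # the named port that gets an SRV record
    protocol: TCP
    port: 8080
# SRV record: _my-port-name._tcp.my-svc.my-namespace.svc.cluster-domain.example
```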

## Pods

@@ -76,7 +76,7 @@ the hostname of the pod. For example, given a Pod with `hostname` set to
The Pod spec also has an optional `subdomain` field which can be used to specify
its subdomain. For example, a Pod with `hostname` set to "`foo`", and `subdomain`
set to "`bar`", in namespace "`my-namespace`", will have the fully qualified
domain name (FQDN) "`foo.bar.my-namespace.svc.cluster.local`".
domain name (FQDN) "`foo.bar.my-namespace.svc.cluster-domain.example`".

Example:

@@ -133,7 +133,7 @@ record for the Pod's fully qualified hostname.
For example, given a Pod with the hostname set to "`busybox-1`" and the subdomain set to
"`default-subdomain`", and a headless Service named "`default-subdomain`" in
the same namespace, the pod will see its own FQDN as
"`busybox-1.default-subdomain.my-namespace.svc.cluster.local`". DNS serves an
"`busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`". DNS serves an
A record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and
"`busybox2`" can have their distinct A records.

@@ -143,7 +143,7 @@ along with its IP.
{{< note >}}
Because A records are not created for Pod names, `hostname` is required for the Pod's A
record to be created. A Pod with no `hostname` but with `subdomain` will only create the
A record for the headless service (`default-subdomain.my-namespace.svc.cluster.local`),
A record for the headless service (`default-subdomain.my-namespace.svc.cluster-domain.example`),
pointing to the Pod's IP address. Also, a Pod needs to become ready in order to have a
record unless `publishNotReadyAddresses=True` is set on the Service.
{{< /note >}}
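
A minimal sketch of the pairing described above — a headless Service named after the subdomain plus a Pod with `hostname` and `subdomain` set; the namespace, labels, and image are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain     # must match the Pods' subdomain
  namespace: my-namespace
spec:
  clusterIP: None             # headless
  selector:
    name: busybox
  ports:
  - port: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  namespace: my-namespace
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - name: busybox
    image: busybox:1.28       # illustrative image
    command: ["sleep", "3600"]
# FQDN seen by the Pod:
# busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example
```
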
@@ -234,7 +234,7 @@ in its `/etc/resolv.conf` file:

```
nameserver 1.2.3.4
search ns1.svc.cluster.local my.dns.search.suffix
search ns1.svc.cluster-domain.example my.dns.search.suffix
options ndots:2 edns0
```
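
Here is a sketch of the kind of Pod `dnsConfig` stanza that could produce a resolv.conf like the one above; the Pod name and image are illustrative, and it assumes `dnsPolicy: "None"` so that only the custom settings apply:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example           # illustrative name
spec:
  containers:
  - name: test
    image: nginx              # illustrative image
  dnsPolicy: "None"           # ignore cluster DNS defaults and use dnsConfig only
  dnsConfig:
    nameservers:
    - 1.2.3.4
    searches:
    - ns1.svc.cluster-domain.example
    - my.dns.search.suffix
    options:
    - name: ndots
      value: "2"
    - name: edns0
```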

@@ -246,7 +246,7 @@ kubectl exec -it dns-example -- cat /etc/resolv.conf
The output is similar to this:
```shell
nameserver fd00:79:30::a
search default.svc.cluster.local svc.cluster.local cluster.local
search default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example
options ndots:5
```

@@ -55,7 +55,7 @@ within a cluster. When you create an ingress, you should annotate each ingress w
[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)
to indicate which ingress controller should be used if more than one exists within your cluster.

If you do not define a class, your cloud provider may use a default ingress provider.
If you do not define a class, your cloud provider may use a default ingress controller.

Ideally, all ingress controllers should fulfill this specification, but the various ingress
controllers operate slightly differently.
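
For example, here is a sketch of an Ingress carrying a class annotation; the class value `nginx`, the host, and the backend are illustrative, and the apiVersion depends on your cluster version:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress                     # illustrative name
  annotations:
    kubernetes.io/ingress.class: "nginx"    # which controller should act on this Ingress
spec:
  rules:
  - host: example.local                     # illustrative host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service      # illustrative Service
          servicePort: 80
```
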
@@ -230,12 +230,13 @@ allows you to still view the logs of completed pods to check for errors, warning
The job object also remains after it is completed so that you can view its status. It is up to the user to delete
old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`). When you delete the job using `kubectl`, all the pods it created are deleted too.

By default, a Job will run uninterrupted unless a Pod fails, at which point the Job defers to the
`.spec.backoffLimit` described above. Another way to terminate a Job is by setting an active deadline.
Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds.
By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`) or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the
`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached, the Job will be marked as failed and any running Pods will be terminated.

Another way to terminate a Job is by setting an active deadline.
Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds.
The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created.
Once a Job reaches `activeDeadlineSeconds`, all of its Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`.
Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`.

Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached.
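
As a sketch, here is a Job that combines both mechanisms; the name, image, and the specific values are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout          # illustrative name
spec:
  backoffLimit: 5                # fail the Job after 5 retried Pods
  activeDeadlineSeconds: 100     # takes precedence over backoffLimit once the deadline passes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl              # illustrative image
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```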

5 changes: 3 additions & 2 deletions content/en/docs/reference/glossary/minikube.md
@@ -2,7 +2,7 @@
title: Minikube
id: minikube
date: 2018-04-12
full_link: /docs/getting-started-guides/minikube/
full_link: /docs/setup/learning-environment/minikube/
short_description: >
A tool for running Kubernetes locally.
@@ -16,4 +16,5 @@ tags:
<!--more-->

Minikube runs a single-node cluster inside a VM on your computer.

You can use Minikube to
[try Kubernetes in a learning environment](/docs/setup/learning-environment/).
6 changes: 4 additions & 2 deletions content/en/docs/reference/issues-security/security.md
@@ -33,7 +33,9 @@ You may encrypt your email to this list using the GPG keys of the [Product Secur

- You think you discovered a potential security vulnerability in Kubernetes
- You are unsure how a vulnerability affects Kubernetes
- You think you discovered a vulnerability in another project that Kubernetes depends on (e.g. docker, rkt, etcd)
- You think you discovered a vulnerability in another project that Kubernetes depends on
- For projects with their own vulnerability reporting and disclosure process, please report it directly there


### When Should I NOT Report a Vulnerability?

Expand All @@ -51,5 +53,5 @@ As the security issue moves from triage, to identified fix, to release planning

## Public Disclosure Timing

A public disclosure date is negotiated by the Kubernetes Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. As a basic default, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Product Security Committee holds the final say when setting a disclosure date.
A public disclosure date is negotiated by the Kubernetes Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a vulnerability with a straightforward mitigation, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Product Security Committee holds the final say when setting a disclosure date.
{{% /capture %}}
3 changes: 3 additions & 0 deletions content/en/docs/reference/kubectl/cheatsheet.md
@@ -182,6 +182,9 @@ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name
# Also uses "jq"
for item in $( kubectl get pod --output=name); do printf "Labels for %s\n" "$item" | grep --color -E '[^/]+$' && kubectl get "$item" --output=json | jq -r -S '.metadata.labels | to_entries | .[] | " \(.key)=\(.value)"' 2>/dev/null; printf "\n"; done

# Alternatively, show all the labels attached to pods
kubectl get pods --show-labels

# Check which nodes are ready
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
22 changes: 9 additions & 13 deletions content/en/docs/reference/kubectl/conventions.md
@@ -37,19 +37,15 @@ For `kubectl run` to satisfy infrastructure as code:

You can create the following resources using `kubectl run` with the `--generator` flag:

| Resource | kubectl command |
|---------------------------------|---------------------------------------------------|
| Pod | `kubectl run --generator=run-pod/v1` |
| Replication controller | `kubectl run --generator=run/v1` |
| Deployment | `kubectl run --generator=extensions/v1beta1` |
| -for an endpoint (default) | `kubectl run --generator=deployment/v1beta1` |
| Deployment | `kubectl run --generator=apps/v1beta1` |
| -for an endpoint (recommended) | `kubectl run --generator=deployment/apps.v1beta1` |
| Job | `kubectl run --generator=job/v1` |
| CronJob | `kubectl run --generator=batch/v1beta1` |
| -for an endpoint (default) | `kubectl run --generator=cronjob/v1beta1` |
| CronJob | `kubectl run --generator=batch/v2alpha1` |
| -for an endpoint (deprecated) | `kubectl run --generator=cronjob/v2alpha1` |
| Resource                        | API group          | kubectl command                                    |
|---------------------------------|--------------------|---------------------------------------------------|
| Pod | v1 | `kubectl run --generator=run-pod/v1` |
| Replication controller | v1 | `kubectl run --generator=run/v1` |
| Deployment (deprecated) | extensions/v1beta1 | `kubectl run --generator=deployment/v1beta1` |
| Deployment (deprecated) | apps/v1beta1 | `kubectl run --generator=deployment/apps.v1beta1` |
| Job (deprecated) | batch/v1 | `kubectl run --generator=job/v1` |
| CronJob (default) | batch/v1beta1 | `kubectl run --generator=cronjob/v1beta1` |
| CronJob (deprecated) | batch/v2alpha1 | `kubectl run --generator=cronjob/v2alpha1` |

If you do not specify a generator flag, other flags prompt you to use a specific generator. The following table lists the flags that force you to use specific generators, depending on the version of the cluster:

@@ -183,7 +183,7 @@ add-apt-repository ppa:projectatomic/ppa
apt-get update

# Install CRI-O
apt-get install cri-o-1.11
apt-get install cri-o-1.13

{{< /tab >}}
{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}}
@@ -178,6 +178,8 @@ The following file is an Ingress resource that sends traffic to your Service via

1. Add the following line to the bottom of the `/etc/hosts` file.

{{< note >}}If you are running Minikube locally, use `minikube ip` to get the external IP. The IP address displayed within the ingress list will be the internal IP.{{< /note >}}

```
172.17.0.15 hello-world.info
```
@@ -85,7 +85,7 @@ If your cluster runs short on resources you can easily add more machines to it i
If you're using GCE or Google Kubernetes Engine, you can do this by resizing the instance group that manages your Nodes, either by modifying the number of instances on the `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or by using the gcloud CLI:

```shell
gcloud compute instance-groups managed resize kubernetes-minion-group --size=42 --zone=$ZONE
gcloud compute instance-groups managed resize kubernetes-node-pool --size=42 --zone=$ZONE
```

The instance group takes care of putting the appropriate image on the new machines and starting them, while the kubelet registers its Node with the API server to make it available for scheduling. If you scale the instance group down, the system randomly chooses Nodes to kill.
@@ -316,8 +316,9 @@ without compromising the minimum required capacity for running your workloads.
{{< tabs name="k8s_kubelet_and_kubectl" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
# replace x in 1.14.x-00 with the latest patch version
apt-get update
apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 && \
apt-mark hold kubelet kubectl
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# replace x in 1.14.x-0 with the latest patch version
29 changes: 17 additions & 12 deletions content/en/docs/tasks/administer-cluster/namespaces.md
@@ -83,18 +83,23 @@ See the [design doc](https://git.k8s.io/community/contributors/design-proposals/

1. Create a new YAML file called `my-namespace.yaml` with the contents:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>
```
Then run:
```shell
kubectl create -f ./my-namespace.yaml
```
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>
```
Then run:
```
kubectl create -f ./my-namespace.yaml
```

2. Alternatively, you can create a namespace using the following command:

```
kubectl create namespace <insert-namespace-name-here>
```

Note that the name of your namespace must be a DNS-compatible label.
