fix 404 urls in content/en/blog/ (#17682)
tanjunchen authored and k8s-ci-robot committed Nov 21, 2019
1 parent dc5c882 commit cfd7793
Showing 18 changed files with 25 additions and 27 deletions.
@@ -92,7 +92,7 @@ Finally, we pointed our browser to http://$BOOKINFO\_URL/productpage


## HTTP request routing
-Existing container orchestration platforms like Kubernetes, Mesos, and other microservice frameworks allow operators to control when a particular set of pods/VMs should receive traffic (e.g., by adding/removing specific labels). Unlike existing techniques, Istio decouples traffic flow and infrastructure scaling. This allows Istio to provide a variety of traffic management features that reside outside the application code, including dynamic HTTP [request routing](https://istio.io/docs/concepts/traffic-management/request-routing.html) for A/B testing, canary releases, gradual rollouts, [failure recovery](https://istio.io/docs/concepts/traffic-management/handling-failures.html) using timeouts, retries, circuit breakers, and [fault injection](https://istio.io/docs/concepts/traffic-management/fault-injection.html) to test compatibility of failure recovery policies across services.
+Existing container orchestration platforms like Kubernetes, Mesos, and other microservice frameworks allow operators to control when a particular set of pods/VMs should receive traffic (e.g., by adding/removing specific labels). Unlike existing techniques, Istio decouples traffic flow and infrastructure scaling. This allows Istio to provide a variety of traffic management features that reside outside the application code, including dynamic HTTP [request routing](https://istio.io/docs/concepts/traffic-management/#routing-rules) for A/B testing, canary releases, gradual rollouts, [failure recovery](https://istio.io/docs/concepts/traffic-management/#network-resilience-and-testing) using timeouts, retries, circuit breakers, and [fault injection](https://istio.io/docs/concepts/traffic-management/fault-injection.html) to test compatibility of failure recovery policies across services.

To demonstrate, we’ll deploy v2 of the **reviews** service and use Istio to make it visible only for a specific test user. We can create a Kubernetes deployment, reviews-v2, with [this YAML file](https://raw.githubusercontent.com/istio/istio/master/samples/kubernetes-blog/bookinfo-reviews-v2.yaml):
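That linked manifest is not inlined in this diff. Once reviews-v2 is running, a routing rule confines it to the test user. As a hedged sketch in Istio's later VirtualService API (the 2017 post used the earlier route-rule format; the subset names here follow the bookinfo sample and are assumptions):

```yaml
# Sketch of a user-scoped routing rule in Istio's later VirtualService API.
# Subset names follow the bookinfo sample and are assumed here.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason        # only the test user is routed to v2
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1            # everyone else keeps getting v1
```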

@@ -12,7 +12,7 @@ _Editor's note: Today's post is a joint post from the deep learning team at Baid

Two open source communities—PaddlePaddle, the deep learning framework originated in Baidu, and Kubernetes®, the most famous containerized application scheduler—are announcing the Elastic Deep Learning (EDL) feature in PaddlePaddle’s new release codenamed Fluid.

-Fluid EDL includes a [Kubernetes controller](https://github.com/kubernetes/community/blob/master/contributors/devel/controllers.md), [_PaddlePaddle auto-scaler_](https://github.com/PaddlePaddle/cloud/tree/develop/doc/autoscale), which changes the number of processes of distributed jobs according to the idle hardware resource in the cluster, and a new fault-tolerable architecture as described in the [PaddlePaddle design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/cluster_train/README.md).
+Fluid EDL includes a [Kubernetes controller](https://github.com/kubernetes/community/blob/master/contributors/devel/controllers.md), [_PaddlePaddle auto-scaler_](https://github.com/PaddlePaddle/cloud/tree/develop/doc/edl/experiment#auto-scaling-experiment), which changes the number of processes of distributed jobs according to the idle hardware resource in the cluster, and a new fault-tolerable architecture as described in the [PaddlePaddle design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/cluster_train/README.md).
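To make the moving parts concrete, here is a purely hypothetical sketch of what an elastic training job could look like under such a controller; the kind, field names, and values are illustrative assumptions, not the project's published API:

```yaml
# Hypothetical sketch only: field names are illustrative, not the
# project's published API. It shows the idea that the controller scales
# the trainer process count between a floor and a ceiling based on idle
# cluster resources, while the fault-tolerant architecture lets trainers
# come and go without restarting the job.
apiVersion: paddlepaddle.org/v1
kind: TrainingJob
metadata:
  name: mnist-train
spec:
  faultTolerant: true        # trainers may be added or preempted mid-job
  trainer:
    minInstance: 2           # auto-scaler never shrinks below this floor
    maxInstance: 10          # ...or grows beyond this ceiling
    resources:
      limits:
        cpu: "2"
        memory: 4Gi
```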

Industrial deep learning requires significant computation power. Research labs and companies often build GPU clusters managed by SLURM, MPI, or SGE. These clusters either run a submitted job if it requires less than the idle resource, or pend the job for an unpredictably long time. This approach has its drawbacks: in an example with 99 available nodes and a submitted job that requires 100, the job has to wait without using any of the available nodes. Fluid works with Kubernetes to power elastic deep learning jobs, which often lack optimal resources, by helping to expose potential algorithmic problems as early as possible.

@@ -37,7 +37,7 @@ In the second test, each experiment ran 400 Nginx pods, which has higher priorit
| _Figure 2. Fluid changes PaddlePaddle processes with the change of Nginx processes._ |


-We continue to work on FluidEDL and welcome comments and contributions. Visit the [PaddlePaddle repo](https://github.com/PaddlePaddle/cloud), where you can find the [design doc](https://github.com/PaddlePaddle/cloud/blob/develop/doc/autoscale/README.md), a [simple tutorial](https://github.com/PaddlePaddle/cloud/blob/develop/doc/autoscale/example/autoscale.md), and [experiment details](https://github.com/PaddlePaddle/cloud/tree/develop/doc/autoscale/experiment).
+We continue to work on FluidEDL and welcome comments and contributions. Visit the [PaddlePaddle repo](https://github.com/PaddlePaddle/cloud), where you can find the [design doc](https://github.com/PaddlePaddle/cloud/tree/develop/doc/design), a [simple tutorial](https://github.com/PaddlePaddle/cloud/blob/develop/doc/autoscale/example/autoscale.md), and [experiment details](https://github.com/PaddlePaddle/cloud/tree/develop/doc/edl/experiment).

- Xu Yan (Baidu Research)
- Helin Wang (Baidu Research)
@@ -37,7 +37,7 @@ A `VolumeAttributes` field was added to Kubernetes `CSIPersistentVolumeSource` o

CSI plugin authors must provide their own instructions for deploying their plugin on Kubernetes.

-The Kubernetes-CSI implementation team created a [sample hostpath CSI driver](https://kubernetes-csi.github.io/docs/Example.html). The sample provides a rough idea of what the deployment process for a CSI driver looks like. Production drivers, however, would deploy node components via a DaemonSet and controller components via a StatefulSet rather than a single pod (for example, see the deployment files for the [GCE PD driver](https://github.com/GoogleCloudPlatform/compute-persistent-disk-csi-driver/blob/master/deploy/kubernetes/README.md)).
+The Kubernetes-CSI implementation team created a [sample hostpath CSI driver](https://kubernetes-csi.github.io/docs/example.html). The sample provides a rough idea of what the deployment process for a CSI driver looks like. Production drivers, however, would deploy node components via a DaemonSet and controller components via a StatefulSet rather than a single pod (for example, see the deployment files for the [GCE PD driver](https://github.com/GoogleCloudPlatform/compute-persistent-disk-csi-driver/blob/master/deploy/kubernetes/README.md)).
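For a feel of that pattern, here is a minimal sketch of a node-component DaemonSet. The driver image is hypothetical, and the sidecar image and socket paths are era-typical assumptions, not a definitive deployment:

```yaml
# Minimal sketch of the DaemonSet pattern for a CSI node component.
# "example.com/csi-example-driver" is a hypothetical image; the
# registrar sidecar and host paths are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-example-node
spec:
  selector:
    matchLabels:
      app: csi-example-node
  template:
    metadata:
      labels:
        app: csi-example-node
    spec:
      containers:
      - name: csi-driver
        image: example.com/csi-example-driver:v0.3.0   # hypothetical driver image
        args: ["--endpoint=unix:///csi/csi.sock"]
        volumeMounts:
        - name: plugin-dir
          mountPath: /csi
      - name: driver-registrar
        image: quay.io/k8scsi/driver-registrar:v0.3.0  # registers the socket with kubelet
        args: ["--csi-address=/csi/csi.sock"]
        volumeMounts:
        - name: plugin-dir
          mountPath: /csi
      volumes:
      - name: plugin-dir
        hostPath:
          path: /var/lib/kubelet/plugins/csi-example
          type: DirectoryOrCreate
```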

## How do I use a CSI Volume in my Kubernetes pod?

@@ -167,7 +167,7 @@ Storage vendors can build Kubernetes deployments for their plugins using these components

## Where can I find CSI drivers?

-CSI drivers are developed and maintained by third parties. You can find a non-definitive list of some [sample and production CSI drivers](https://kubernetes-csi.github.io/docs/Drivers.html).
+CSI drivers are developed and maintained by third parties. You can find a non-definitive list of some [sample and production CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html).

## What about FlexVolumes?

@@ -44,7 +44,7 @@ Only 4 tools were in use by more than 10% of those who took the survey with Helm

## Want To See More?

-As the [Application Definition Working Group](https://github.com/kubernetes/community/tree/master/wg-app-def) is working through the data we're putting observations into a [Google Slides Document](http://bit.ly/2qTkuhx). This is a living document that will continue to grow while we look over and discuss the data.
+As the [Application Definition Working Group](https://github.com/kubernetes/community/tree/master/sig-apps) is working through the data we're putting observations into a [Google Slides Document](http://bit.ly/2qTkuhx). This is a living document that will continue to grow while we look over and discuss the data.

There is [a session at KubeCon where the Application Definition Working Group will be meeting](https://kccnceu18.sched.com/event/DxV4) and discussing the survey. This is a session open to anyone in attendance, if you would like to attend.

4 changes: 2 additions & 2 deletions content/en/blog/_posts/2018-05-01-developing-on-kubernetes.md
@@ -19,7 +19,7 @@ As a developer you want to think about where the Kubernetes cluster you’re dev

![Dev Modes](/images/blog/2018-05-01-developing-on-kubernetes/dok-devmodes_preview.png)

-A number of tools support pure offline development including Minikube, Docker for Mac/Windows, Minishift, and the ones we discuss in detail below. Sometimes, for example, in a microservices setup where certain microservices already run in the cluster, a proxied setup (forwarding traffic into and from the cluster) is preferable and Telepresence is an example tool in this category. The live mode essentially means you’re building and/or deploying against a remote cluster and, finally, the pure online mode means both your development environment and the cluster are remote, as this is the case with, for example, [Eclipse Che](https://www.eclipse.org/che/docs/kubernetes-single-user.html) or [Cloud 9](https://github.com/errordeveloper/k9c). Let’s now have a closer look at the basics of offline development: running Kubernetes locally.
+A number of tools support pure offline development including Minikube, Docker for Mac/Windows, Minishift, and the ones we discuss in detail below. Sometimes, for example, in a microservices setup where certain microservices already run in the cluster, a proxied setup (forwarding traffic into and from the cluster) is preferable and Telepresence is an example tool in this category. The live mode essentially means you’re building and/or deploying against a remote cluster and, finally, the pure online mode means both your development environment and the cluster are remote, as this is the case with, for example, [Eclipse Che](https://www.eclipse.org/che/docs/che-7/introduction-to-eclipse-che/) or [Cloud 9](https://github.com/errordeveloper/k9c). Let’s now have a closer look at the basics of offline development: running Kubernetes locally.

[Minikube](/docs/getting-started-guides/minikube/) is a popular choice for those who prefer to run Kubernetes in a local VM. More recently Docker for [Mac](https://docs.docker.com/docker-for-mac/kubernetes/) and [Windows](https://docs.docker.com/docker-for-windows/kubernetes/) started shipping Kubernetes as an experimental package (in the “edge” channel). Some reasons why you may want to prefer using Minikube over the Docker desktop option are:

@@ -99,7 +99,7 @@ Implications:
More info:

* [Squash: A Debugger for Kubernetes Apps](https://www.youtube.com/watch?v=5TrV3qzXlgI)
-* [Getting Started Guide](https://github.com/solo-io/squash/blob/master/docs/getting-started.md)
+* [Getting Started Guide](https://squash.solo.io/overview/)

### Telepresence

@@ -51,7 +51,7 @@ Prow lets us do things like:
* Run CI jobs defined as [Knative Builds](https://github.com/knative/build), Kubernetes Pods, or Jenkins jobs
* Enforce org-wide and per-repo GitHub policies like [branch protection](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/branchprotector) and [GitHub labels](https://github.com/kubernetes/test-infra/tree/master/label_sync)

-Prow was initially developed by the engineering productivity team building Google Kubernetes Engine, and is actively contributed to by multiple members of Kubernetes SIG Testing. Prow has been adopted by several other open source projects, including Istio, JetStack, Knative and OpenShift. [Getting started with Prow](https://github.com/kubernetes/test-infra/blob/master/prow/getting_started.md) takes a Kubernetes cluster and `kubectl apply starter.yaml` (running pods on a Kubernetes cluster).
+Prow was initially developed by the engineering productivity team building Google Kubernetes Engine, and is actively contributed to by multiple members of Kubernetes SIG Testing. Prow has been adopted by several other open source projects, including Istio, JetStack, Knative and OpenShift. [Getting started with Prow](https://github.com/kubernetes/test-infra/tree/master/prow#getting-started) takes a Kubernetes cluster and `kubectl apply starter.yaml` (running pods on a Kubernetes cluster).
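For context, Prow jobs are declared in a central config file. A minimal sketch of a presubmit entry, with the repo, job name, and image as illustrative assumptions:

```yaml
# Rough sketch of a Prow presubmit job entry in config.yaml; the repo,
# job name, and image are illustrative assumptions.
presubmits:
  myorg/myrepo:
  - name: pull-myrepo-unit-test
    always_run: true        # trigger on every PR push
    spec:                   # an ordinary Kubernetes PodSpec
      containers:
      - image: golang:1.11
        command: ["go"]
        args: ["test", "./..."]
```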

Once we had Prow in place, we began to hit other scaling bottlenecks, and so produced additional tooling to support testing at the scale required by Kubernetes, including:

4 changes: 2 additions & 2 deletions content/en/blog/_posts/2018-10-09-volume-snapshot-alpha.md
@@ -39,7 +39,7 @@ As of the publishing of this blog, the following CSI drivers support snapshots:
* [Ceph RBD CSI Driver](https://github.com/ceph/ceph-csi/tree/master/pkg/rbd)
* [Portworx CSI Driver](https://github.com/libopenstorage/openstorage/tree/master/csi)

-Snapshot support for other [drivers](https://kubernetes-csi.github.io/docs/Drivers.html) is pending, and should be available soon. Read the “[Container Storage Interface (CSI) for Kubernetes Goes Beta](https://kubernetes.io/blog/2018/04/10/container-storage-interface-beta/)” blog post to learn more about CSI and how to deploy CSI drivers.
+Snapshot support for other [drivers](https://kubernetes-csi.github.io/docs/drivers.html) is pending, and should be available soon. Read the “[Container Storage Interface (CSI) for Kubernetes Goes Beta](https://kubernetes.io/blog/2018/04/10/container-storage-interface-beta/)” blog post to learn more about CSI and how to deploy CSI drivers.

## Kubernetes Snapshots API

@@ -57,7 +57,7 @@ Similar to the API for managing Kubernetes Persistent Volumes, Kubernetes Volume

It is important to note that unlike the core Kubernetes Persistent Volume objects, these Snapshot objects are defined as [CustomResourceDefinitions (CRDs)](/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions). The Kubernetes project is moving away from having resource types pre-defined in the API server, and is moving towards a model where the API server is independent of the API objects. This allows the API server to be reused for projects other than Kubernetes, and consumers (like Kubernetes) can simply install the resource types they require as CRDs.

-[CSI Drivers](https://kubernetes-csi.github.io/docs/Drivers.html) that support snapshots will automatically install the required CRDs. Kubernetes end users only need to verify that a CSI driver that supports snapshots is deployed on their Kubernetes cluster.
+[CSI Drivers](https://kubernetes-csi.github.io/docs/drivers.html) that support snapshots will automatically install the required CRDs. Kubernetes end users only need to verify that a CSI driver that supports snapshots is deployed on their Kubernetes cluster.
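With the CRDs in place, requesting a snapshot is just a matter of creating an API object. A minimal sketch in the alpha API, with the class and claim names as assumptions:

```yaml
# Sketch of the alpha snapshot API; class and claim names are illustrative.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  snapshotClassName: csi-example-snapclass  # analogous to a StorageClass
  source:
    kind: PersistentVolumeClaim
    name: pvc-demo                          # the volume to snapshot
```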

In addition to these new objects, a new, DataSource field has been added to the `PersistentVolumeClaim` object:
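The post's inline example is collapsed in this view; as a hedged reconstruction of the alpha shape (names illustrative), a claim provisioned from an existing snapshot looks roughly like:

```yaml
# Hedged reconstruction: a claim provisioned from an existing snapshot
# via the new DataSource field (names are illustrative).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore
spec:
  storageClassName: csi-example-sc
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo    # the snapshot created above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```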

2 changes: 1 addition & 1 deletion content/en/blog/_posts/2018-10-10-runtimeclass.md
@@ -42,6 +42,6 @@ RuntimeClass will be under active development at least through 2019, and we’re
## Learn More

- Take it for a spin! As an alpha feature, there are some additional setup steps to use RuntimeClass. Refer to the [RuntimeClass documentation](/docs/concepts/containers/runtime-class/#runtime-class) for how to get it running (a rough manifest sketch follows this list).
-- Check out the [RuntimeClass Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/0014-runtime-class.md) for more nitty-gritty design details.
+- Check out the [RuntimeClass Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md) for more nitty-gritty design details.
- The [Sandbox Isolation Level Decision](https://docs.google.com/document/d/1fe7lQUjYKR0cijRmSbH_y0_l3CYPkwtQa5ViywuNo8Q/preview) documents the thought process that initially went into making RuntimeClass a pod-level choice.
- Join the discussions and help shape the future of RuntimeClass with the [SIG-Node community](https://github.com/kubernetes/community/tree/master/sig-node)
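As a rough illustration for the setup bullet above, an alpha-era RuntimeClass and a pod selecting it might look like this; the handler name and workload image are assumptions:

```yaml
# Sketch of the alpha (v1.12) RuntimeClass shape; the handler name
# "gvisor" is illustrative and must match a handler configured in the
# node's CRI runtime.
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: gvisor
spec:
  runtimeHandler: gvisor
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor     # pod-level selection of the runtime
  containers:
  - name: app
    image: nginx               # illustrative workload
```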
@@ -23,7 +23,7 @@ Most people who have gotten hands-on with Kubernetes have at some point been han

The Container Storage Interface ([CSI](https://github.com/container-storage-interface)) is now GA after being introduced as alpha in v1.9 and beta in v1.10. With CSI, the Kubernetes volume layer becomes truly extensible. This provides an opportunity for third party storage providers to write plugins that interoperate with Kubernetes without having to touch the core code. The [specification itself](https://github.com/container-storage-interface/spec) has also reached a 1.0 status.

-With CSI now stable, plugin authors are developing storage plugins out of core, at their own pace. You can find a list of sample and production drivers in the [CSI Documentation](https://kubernetes-csi.github.io/docs/Drivers.html).
+With CSI now stable, plugin authors are developing storage plugins out of core, at their own pace. You can find a list of sample and production drivers in the [CSI Documentation](https://kubernetes-csi.github.io/docs/drivers.html).

## CoreDNS is Now the Default DNS Server for Kubernetes

2 changes: 1 addition & 1 deletion content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md
@@ -32,7 +32,7 @@ General Availability means different things for different projects. For kubeadm,
We now consider kubeadm to have achieved GA-level maturity in each of these important domains:

* **Stable command-line UX** --- The kubeadm CLI conforms to [#5a GA rule of the Kubernetes Deprecation Policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-flag-or-cli), which states that a command or flag that exists in a GA version must be kept for at least 12 months after deprecation.
-* **Stable underlying implementation** --- kubeadm now creates a new Kubernetes cluster using methods that shouldn't change any time soon. The control plane, for example, is run as a set of static Pods, bootstrap tokens are used for the [`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join/) flow, and [ComponentConfig](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/0014-20180707-componentconfig-api-types-to-staging.md) is used for configuring the [kubelet](/docs/reference/command-line-tools-reference/kubelet/).
+* **Stable underlying implementation** --- kubeadm now creates a new Kubernetes cluster using methods that shouldn't change any time soon. The control plane, for example, is run as a set of static Pods, bootstrap tokens are used for the [`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join/) flow, and [ComponentConfig](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/wgs/0014-20180707-componentconfig-api-types-to-staging.md) is used for configuring the [kubelet](/docs/reference/command-line-tools-reference/kubelet/).
* **Configuration file schema** --- With the new **v1beta1** API version, you can now tune almost every part of the cluster declaratively and thus build a "GitOps" flow around kubeadm-built clusters. In future versions, we plan to graduate the API to version **v1** with minimal changes (and perhaps none). A sketch of such a file follows this list.
* **The "toolbox" interface of kubeadm** --- Also known as **phases**. If you don't want to perform all [`kubeadm init`](/docs/reference/setup-tools/kubeadm/kubeadm-init/) tasks, you can instead apply more fine-grained actions using the `kubeadm init phase` command (for example generating certificates or control plane [Static Pod](/docs/tasks/administer-cluster/static-pod/) manifests).
* **Upgrades between minor versions** --- The [`kubeadm upgrade`](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) command is now fully GA. It handles control plane upgrades for you, which includes upgrades to [etcd](https://etcd.io), the [API Server](/docs/reference/using-api/api-overview/), the [Controller Manager](/docs/reference/command-line-tools-reference/kube-controller-manager/), and the [Scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/). You can seamlessly upgrade your cluster between minor or patch versions (e.g. v1.12.2 -> v1.13.1 or v1.13.1 -> v1.13.3).
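To ground the configuration-file bullet above, here is a minimal sketch of a v1beta1 kubeadm file; the version number, endpoint, and subnet values are illustrative assumptions:

```yaml
# Sketch of a v1beta1 kubeadm configuration file; version, endpoint,
# and subnet values are illustrative.
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
bootstrapTokens:
- token: "abcdef.0123456789abcdef"     # illustrative bootstrap token
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
controlPlaneEndpoint: "10.0.0.10:6443" # illustrative endpoint
networking:
  podSubnet: "192.168.0.0/16"
```

Such a file would typically be fed to kubeadm via `kubeadm init --config <file>`.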