From 4259a6236dd1a08b3c883f6a74f856ddc4f3423c Mon Sep 17 00:00:00 2001 From: tanjunchen <2799194073@qq.com> Date: Wed, 20 Nov 2019 16:53:24 +0800 Subject: [PATCH] fix 404 urls in content/en/blog/ --- .../2017-10-00-Request-Routing-And-Policy-Management.md | 2 +- .../_posts/2017-12-00-Paddle-Paddle-Fluid-Elastic-Learning.md | 4 ++-- .../_posts/2018-04-10-container-storage-interface-beta.md | 4 ++-- .../2018-04-24-kubernetes-application-survey-results-2018.md | 2 +- content/en/blog/_posts/2018-05-01-developing-on-kubernetes.md | 4 ++-- ...kubernetes-testing-ci-automating-contributor-experience.md | 2 +- content/en/blog/_posts/2018-10-09-volume-snapshot-alpha.md | 4 ++-- content/en/blog/_posts/2018-10-10-runtimeclass.md | 2 +- .../_posts/2018-12-03-kubernetes-1-13-release-announcement.md | 2 +- content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md | 2 +- .../_posts/2018-12-11-current-status-and-future-roadmap.md | 4 ++-- .../blog/_posts/2019-01-15-container-storage-interface-ga.md | 2 +- .../en/blog/_posts/2019-01-17-update-volume-snapshot-alpha.md | 2 +- ...28-automate-operations-on-your-cluster-with-operatorhub.md | 4 ++-- content/en/blog/_posts/2019-03-22-e2e-testing-for-everyone.md | 2 +- ...oduction-level-support-for-nodes-and-windows-containers.md | 2 +- ...019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease.md | 4 ++-- content/en/blog/_posts/2019-11-05-grokkin-the-docs.md | 4 +--- 18 files changed, 25 insertions(+), 27 deletions(-) diff --git a/content/en/blog/_posts/2017-10-00-Request-Routing-And-Policy-Management.md b/content/en/blog/_posts/2017-10-00-Request-Routing-And-Policy-Management.md index 97bc9d13e5ae7..0d490ce1bf355 100644 --- a/content/en/blog/_posts/2017-10-00-Request-Routing-And-Policy-Management.md +++ b/content/en/blog/_posts/2017-10-00-Request-Routing-And-Policy-Management.md @@ -92,7 +92,7 @@ Finally, we pointed our browser to [http://$BOOKINFO\_URL/productpage](about:bla ## HTTP request routing -Existing container orchestration 
platforms like Kubernetes, Mesos, and other microservice frameworks allow operators to control when a particular set of pods/VMs should receive traffic (e.g., by adding/removing specific labels). Unlike existing techniques, Istio decouples traffic flow and infrastructure scaling. This allows Istio to provide a variety of traffic management features that reside outside the application code, including dynamic HTTP [request routing](https://istio.io/docs/concepts/traffic-management/request-routing.html) for A/B testing, canary releases, gradual rollouts, [failure recovery](https://istio.io/docs/concepts/traffic-management/handling-failures.html) using timeouts, retries, circuit breakers, and [fault injection](https://istio.io/docs/concepts/traffic-management/fault-injection.html) to test compatibility of failure recovery policies across services. +Existing container orchestration platforms like Kubernetes, Mesos, and other microservice frameworks allow operators to control when a particular set of pods/VMs should receive traffic (e.g., by adding/removing specific labels). Unlike existing techniques, Istio decouples traffic flow and infrastructure scaling. This allows Istio to provide a variety of traffic management features that reside outside the application code, including dynamic HTTP [request routing](https://istio.io/docs/concepts/traffic-management/#routing-rules) for A/B testing, canary releases, gradual rollouts, [failure recovery](https://istio.io/docs/concepts/traffic-management/#network-resilience-and-testing) using timeouts, retries, circuit breakers, and [fault injection](https://istio.io/docs/concepts/traffic-management/fault-injection.html) to test compatibility of failure recovery policies across services. To demonstrate, we’ll deploy v2 of the **reviews** service and use Istio to make it visible only for a specific test user. 
We can create a Kubernetes deployment, reviews-v2, with [this YAML file](https://raw.githubusercontent.com/istio/istio/master/samples/kubernetes-blog/bookinfo-reviews-v2.yaml): diff --git a/content/en/blog/_posts/2017-12-00-Paddle-Paddle-Fluid-Elastic-Learning.md b/content/en/blog/_posts/2017-12-00-Paddle-Paddle-Fluid-Elastic-Learning.md index 4937237fc5ecd..7920c14be120b 100644 --- a/content/en/blog/_posts/2017-12-00-Paddle-Paddle-Fluid-Elastic-Learning.md +++ b/content/en/blog/_posts/2017-12-00-Paddle-Paddle-Fluid-Elastic-Learning.md @@ -12,7 +12,7 @@ _Editor's note: Today's post is a joint post from the deep learning team at Baid Two open source communities—PaddlePaddle, the deep learning framework originated in Baidu, and Kubernetes®, the most famous containerized application scheduler—are announcing the Elastic Deep Learning (EDL) feature in PaddlePaddle’s new release codenamed Fluid. -Fluid EDL includes a [Kubernetes controller](https://github.com/kubernetes/community/blob/master/contributors/devel/controllers.md), [_PaddlePaddle auto-scaler_](https://github.com/PaddlePaddle/cloud/tree/develop/doc/autoscale), which changes the number of processes of distributed jobs according to the idle hardware resource in the cluster, and a new fault-tolerable architecture as described in the [PaddlePaddle design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/cluster_train/README.md). +Fluid EDL includes a [Kubernetes controller](https://github.com/kubernetes/community/blob/master/contributors/devel/controllers.md), [_PaddlePaddle auto-scaler_](https://github.com/PaddlePaddle/cloud/tree/develop/doc/edl/experiment#auto-scaling-experiment), which changes the number of processes of distributed jobs according to the idle hardware resource in the cluster, and a new fault-tolerable architecture as described in the [PaddlePaddle design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/cluster_train/README.md). 
Industrial deep learning requires significant computation power. Research labs and companies often build GPU clusters managed by SLURM, MPI, or SGE. These clusters either run a submitted job if it requires less than the idle resource, or pend the job for an unpredictably long time. This approach has its drawbacks: in an example with 99 available nodes and a submitted job that requires 100, the job has to wait without using any of the available nodes. Fluid works with Kubernetes to power elastic deep learning jobs, which often lack optimal resources, by helping to expose potential algorithmic problems as early as possible. @@ -37,7 +37,7 @@ In the second test, each experiment ran 400 Nginx pods, which has higher priorit | _Figure 2. Fluid changes PaddlePaddle processes with the change of Nginx processes._ | -We continue to work on FluidEDL and welcome comments and contributions. Visit the [PaddlePaddle repo](https://github.com/PaddlePaddle/cloud), where you can find the [design doc](https://github.com/PaddlePaddle/cloud/blob/develop/doc/autoscale/README.md), a [simple tutorial](https://github.com/PaddlePaddle/cloud/blob/develop/doc/autoscale/example/autoscale.md), and [experiment details](https://github.com/PaddlePaddle/cloud/tree/develop/doc/autoscale/experiment). +We continue to work on FluidEDL and welcome comments and contributions. Visit the [PaddlePaddle repo](https://github.com/PaddlePaddle/cloud), where you can find the [design doc](https://github.com/PaddlePaddle/cloud/tree/develop/doc/design), a [simple tutorial](https://github.com/PaddlePaddle/cloud/blob/develop/doc/autoscale/example/autoscale.md), and [experiment details](https://github.com/PaddlePaddle/cloud/tree/develop/doc/edl/experiment). 
- Xu Yan (Baidu Research) - Helin Wang (Baidu Research) diff --git a/content/en/blog/_posts/2018-04-10-container-storage-interface-beta.md b/content/en/blog/_posts/2018-04-10-container-storage-interface-beta.md index 4e3fd89584abc..6e8a39fe7bbcd 100644 --- a/content/en/blog/_posts/2018-04-10-container-storage-interface-beta.md +++ b/content/en/blog/_posts/2018-04-10-container-storage-interface-beta.md @@ -37,7 +37,7 @@ A `VolumeAttributes` field was added to Kubernetes `CSIPersistentVolumeSource` o CSI plugin authors must provide their own instructions for deploying their plugin on Kubernetes. -The Kubernetes-CSI implementation team created a [sample hostpath CSI driver](https://kubernetes-csi.github.io/docs/Example.html). The sample provides a rough idea of what the deployment process for a CSI driver looks like. Production drivers, however, would deploy node components via a DaemonSet and controller components via a StatefulSet rather than a single pod (for example, see the deployment files for the [GCE PD driver](https://github.com/GoogleCloudPlatform/compute-persistent-disk-csi-driver/blob/master/deploy/kubernetes/README.md)). +The Kubernetes-CSI implementation team created a [sample hostpath CSI driver](https://kubernetes-csi.github.io/docs/example.html). The sample provides a rough idea of what the deployment process for a CSI driver looks like. Production drivers, however, would deploy node components via a DaemonSet and controller components via a StatefulSet rather than a single pod (for example, see the deployment files for the [GCE PD driver](https://github.com/GoogleCloudPlatform/compute-persistent-disk-csi-driver/blob/master/deploy/kubernetes/README.md)). ## How do I use a CSI Volume in my Kubernetes pod? @@ -167,7 +167,7 @@ Storage vendors can build Kubernetes deployments for their plugins using these c ## Where can I find CSI drivers? -CSI drivers are developed and maintained by third parties. 
You can find a non-definitive list of some [sample and production CSI drivers](https://kubernetes-csi.github.io/docs/Drivers.html). +CSI drivers are developed and maintained by third parties. You can find a non-definitive list of some [sample and production CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html). ## What about FlexVolumes? diff --git a/content/en/blog/_posts/2018-04-24-kubernetes-application-survey-results-2018.md b/content/en/blog/_posts/2018-04-24-kubernetes-application-survey-results-2018.md index e145f9635e0e1..75918a4c8ab9c 100644 --- a/content/en/blog/_posts/2018-04-24-kubernetes-application-survey-results-2018.md +++ b/content/en/blog/_posts/2018-04-24-kubernetes-application-survey-results-2018.md @@ -44,7 +44,7 @@ Only 4 tools were in use by more than 10% of those who took the survey with Helm ## Want To See More? -As the [Application Definition Working Group](https://github.com/kubernetes/community/tree/master/wg-app-def) is working through the data we're putting observations into a [Google Slides Document](http://bit.ly/2qTkuhx). This is a living document that will continue to grow while we look over and discuss the data. +As the [Application Definition Working Group](https://github.com/kubernetes/community/tree/master/sig-apps) is working through the data we're putting observations into a [Google Slides Document](http://bit.ly/2qTkuhx). This is a living document that will continue to grow while we look over and discuss the data. There is [a session at KubeCon where the Application Definition Working Group will be meeting](https://kccnceu18.sched.com/event/DxV4) and discussing the survey. This is a session open to anyone in attendance, if you would like to attend. 
diff --git a/content/en/blog/_posts/2018-05-01-developing-on-kubernetes.md b/content/en/blog/_posts/2018-05-01-developing-on-kubernetes.md index 3f931b7eeec48..dd1ab516c775d 100644 --- a/content/en/blog/_posts/2018-05-01-developing-on-kubernetes.md +++ b/content/en/blog/_posts/2018-05-01-developing-on-kubernetes.md @@ -19,7 +19,7 @@ As a developer you want to think about where the Kubernetes cluster you’re dev ![Dev Modes](/images/blog/2018-05-01-developing-on-kubernetes/dok-devmodes_preview.png) -A number of tools support pure offline development including Minikube, Docker for Mac/Windows, Minishift, and the ones we discuss in detail below. Sometimes, for example, in a microservices setup where certain microservices already run in the cluster, a proxied setup (forwarding traffic into and from the cluster) is preferable and Telepresence is an example tool in this category. The live mode essentially means you’re building and/or deploying against a remote cluster and, finally, the pure online mode means both your development environment and the cluster are remote, as this is the case with, for example, [Eclipse Che](https://www.eclipse.org/che/docs/kubernetes-single-user.html) or [Cloud 9](https://github.com/errordeveloper/k9c). Let’s now have a closer look at the basics of offline development: running Kubernetes locally. +A number of tools support pure offline development including Minikube, Docker for Mac/Windows, Minishift, and the ones we discuss in detail below. Sometimes, for example, in a microservices setup where certain microservices already run in the cluster, a proxied setup (forwarding traffic into and from the cluster) is preferable and Telepresence is an example tool in this category. 
The live mode essentially means you’re building and/or deploying against a remote cluster and, finally, the pure online mode means both your development environment and the cluster are remote, as this is the case with, for example, [Eclipse Che](https://www.eclipse.org/che/docs/che-7/introduction-to-eclipse-che/) or [Cloud 9](https://github.com/errordeveloper/k9c). Let’s now have a closer look at the basics of offline development: running Kubernetes locally. [Minikube](/docs/getting-started-guides/minikube/) is a popular choice for those who prefer to run Kubernetes in a local VM. More recently Docker for [Mac](https://docs.docker.com/docker-for-mac/kubernetes/) and [Windows](https://docs.docker.com/docker-for-windows/kubernetes/) started shipping Kubernetes as an experimental package (in the “edge” channel). Some reasons why you may want to prefer using Minikube over the Docker desktop option are: @@ -99,7 +99,7 @@ Implications: More info: * [Squash: A Debugger for Kubernetes Apps](https://www.youtube.com/watch?v=5TrV3qzXlgI) -* [Getting Started Guide](https://github.com/solo-io/squash/blob/master/docs/getting-started.md) +* [Getting Started Guide](https://squash.solo.io/overview/) ### Telepresence diff --git a/content/en/blog/_posts/2018-08-29-kubernetes-testing-ci-automating-contributor-experience.md b/content/en/blog/_posts/2018-08-29-kubernetes-testing-ci-automating-contributor-experience.md index c17d76d67b51e..63a2bcc1aa534 100644 --- a/content/en/blog/_posts/2018-08-29-kubernetes-testing-ci-automating-contributor-experience.md +++ b/content/en/blog/_posts/2018-08-29-kubernetes-testing-ci-automating-contributor-experience.md @@ -51,7 +51,7 @@ Prow lets us do things like: * Run CI jobs defined as [Knative Builds](https://github.com/knative/build), Kubernetes Pods, or Jenkins jobs * Enforce org-wide and per-repo GitHub policies like [branch protection](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/branchprotector) and [GitHub 
labels](https://github.com/kubernetes/test-infra/tree/master/label_sync) -Prow was initially developed by the engineering productivity team building Google Kubernetes Engine, and is actively contributed to by multiple members of Kubernetes SIG Testing. Prow has been adopted by several other open source projects, including Istio, JetStack, Knative and OpenShift. [Getting started with Prow](https://github.com/kubernetes/test-infra/blob/master/prow/getting_started.md) takes a Kubernetes cluster and `kubectl apply starter.yaml` (running pods on a Kubernetes cluster). +Prow was initially developed by the engineering productivity team building Google Kubernetes Engine, and is actively contributed to by multiple members of Kubernetes SIG Testing. Prow has been adopted by several other open source projects, including Istio, JetStack, Knative and OpenShift. [Getting started with Prow](https://github.com/kubernetes/test-infra/tree/master/prow#getting-started) takes a Kubernetes cluster and `kubectl apply starter.yaml` (running pods on a Kubernetes cluster). Once we had Prow in place, we began to hit other scaling bottlenecks, and so produced additional tooling to support testing at the scale required by Kubernetes, including: diff --git a/content/en/blog/_posts/2018-10-09-volume-snapshot-alpha.md b/content/en/blog/_posts/2018-10-09-volume-snapshot-alpha.md index d18fc11dcd1b5..8706c6b6f2319 100644 --- a/content/en/blog/_posts/2018-10-09-volume-snapshot-alpha.md +++ b/content/en/blog/_posts/2018-10-09-volume-snapshot-alpha.md @@ -39,7 +39,7 @@ As of the publishing of this blog, the following CSI drivers support snapshots: * [Ceph RBD CSI Driver](https://github.com/ceph/ceph-csi/tree/master/pkg/rbd) * [Portworx CSI Driver](https://github.com/libopenstorage/openstorage/tree/master/csi) -Snapshot support for other [drivers](https://kubernetes-csi.github.io/docs/Drivers.html) is pending, and should be available soon. 
Read the “[Container Storage Interface (CSI) for Kubernetes Goes Beta](https://kubernetes.io/blog/2018/04/10/container-storage-interface-beta/)” blog post to learn more about CSI and how to deploy CSI drivers. +Snapshot support for other [drivers](https://kubernetes-csi.github.io/docs/drivers.html) is pending, and should be available soon. Read the “[Container Storage Interface (CSI) for Kubernetes Goes Beta](https://kubernetes.io/blog/2018/04/10/container-storage-interface-beta/)” blog post to learn more about CSI and how to deploy CSI drivers. ## Kubernetes Snapshots API @@ -57,7 +57,7 @@ Similar to the API for managing Kubernetes Persistent Volumes, Kubernetes Volume It is important to note that unlike the core Kubernetes Persistent Volume objects, these Snapshot objects are defined as [CustomResourceDefinitions (CRDs)](/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions). The Kubernetes project is moving away from having resource types pre-defined in the API server, and is moving towards a model where the API server is independent of the API objects. This allows the API server to be reused for projects other than Kubernetes, and consumers (like Kubernetes) can simply install the resource types they require as CRDs. -[CSI Drivers](https://kubernetes-csi.github.io/docs/Drivers.html) that support snapshots will automatically install the required CRDs. Kubernetes end users only need to verify that a CSI driver that supports snapshots is deployed on their Kubernetes cluster. +[CSI Drivers](https://kubernetes-csi.github.io/docs/drivers.html) that support snapshots will automatically install the required CRDs. Kubernetes end users only need to verify that a CSI driver that supports snapshots is deployed on their Kubernetes cluster. 
In addition to these new objects, a new, DataSource field has been added to the `PersistentVolumeClaim` object: diff --git a/content/en/blog/_posts/2018-10-10-runtimeclass.md b/content/en/blog/_posts/2018-10-10-runtimeclass.md index 29ae6e328585f..6a889ada70005 100644 --- a/content/en/blog/_posts/2018-10-10-runtimeclass.md +++ b/content/en/blog/_posts/2018-10-10-runtimeclass.md @@ -42,6 +42,6 @@ RuntimeClass will be under active development at least through 2019, and we’re ## Learn More - Take it for a spin! As an alpha feature, there are some additional setup steps to use RuntimeClass. Refer to the [RuntimeClass documentation](/docs/concepts/containers/runtime-class/#runtime-class) for how to get it running. -- Check out the [RuntimeClass Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/0014-runtime-class.md) for more nitty-gritty design details. +- Check out the [RuntimeClass Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md) for more nitty-gritty design details. - The [Sandbox Isolation Level Decision](https://docs.google.com/document/d/1fe7lQUjYKR0cijRmSbH_y0_l3CYPkwtQa5ViywuNo8Q/preview) documents the thought process that initially went into making RuntimeClass a pod-level choice. 
- Join the discussions and help shape the future of RuntimeClass with the [SIG-Node community](https://github.com/kubernetes/community/tree/master/sig-node) diff --git a/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md b/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md index c04b6c16331d1..247bfa2c8d0eb 100644 --- a/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md +++ b/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md @@ -23,7 +23,7 @@ Most people who have gotten hands-on with Kubernetes have at some point been han The Container Storage Interface ([CSI](https://github.com/container-storage-interface)) is now GA after being introduced as alpha in v1.9 and beta in v1.10. With CSI, the Kubernetes volume layer becomes truly extensible. This provides an opportunity for third party storage providers to write plugins that interoperate with Kubernetes without having to touch the core code. The [specification itself](https://github.com/container-storage-interface/spec) has also reached a 1.0 status. -With CSI now stable, plugin authors are developing storage plugins out of core, at their own pace. You can find a list of sample and production drivers in the [CSI Documentation](https://kubernetes-csi.github.io/docs/Drivers.html). +With CSI now stable, plugin authors are developing storage plugins out of core, at their own pace. You can find a list of sample and production drivers in the [CSI Documentation](https://kubernetes-csi.github.io/docs/drivers.html). 
## CoreDNS is Now the Default DNS Server for Kubernetes diff --git a/content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md b/content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md index a88999a15d3f9..b2ea7cb71b9a7 100644 --- a/content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md +++ b/content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md @@ -32,7 +32,7 @@ General Availability means different things for different projects. For kubeadm, We now consider kubeadm to have achieved GA-level maturity in each of these important domains: * **Stable command-line UX** --- The kubeadm CLI conforms to [#5a GA rule of the Kubernetes Deprecation Policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-flag-or-cli), which states that a command or flag that exists in a GA version must be kept for at least 12 months after deprecation. - * **Stable underlying implementation** --- kubeadm now creates a new Kubernetes cluster using methods that shouldn't change any time soon. The control plane, for example, is run as a set of static Pods, bootstrap tokens are used for the [`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join/) flow, and [ComponentConfig](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/0014-20180707-componentconfig-api-types-to-staging.md) is used for configuring the [kubelet](/docs/reference/command-line-tools-reference/kubelet/). + * **Stable underlying implementation** --- kubeadm now creates a new Kubernetes cluster using methods that shouldn't change any time soon. The control plane, for example, is run as a set of static Pods, bootstrap tokens are used for the [`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join/) flow, and [ComponentConfig](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/wgs/0014-20180707-componentconfig-api-types-to-staging.md) is used for configuring the [kubelet](/docs/reference/command-line-tools-reference/kubelet/). 
* **Configuration file schema** --- With the new **v1beta1** API version, you can now tune almost every part of the cluster declaratively and thus build a "GitOps" flow around kubeadm-built clusters. In future versions, we plan to graduate the API to version **v1** with minimal changes (and perhaps none). * **The "toolbox" interface of kubeadm** --- Also known as **phases**. If you don't want to perform all [`kubeadm init`](/docs/reference/setup-tools/kubeadm/kubeadm-init/) tasks, you can instead apply more fine-grained actions using the `kubeadm init phase` command (for example generating certificates or control plane [Static Pod](/docs/tasks/administer-cluster/static-pod/) manifests). * **Upgrades between minor versions** --- The [`kubeadm upgrade`](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) command is now fully GA. It handles control plane upgrades for you, which includes upgrades to [etcd](https://etcd.io), the [API Server](/docs/reference/using-api/api-overview/), the [Controller Manager](/docs/reference/command-line-tools-reference/kube-controller-manager/), and the [Scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/). You can seamlessly upgrade your cluster between minor or patch versions (e.g. v1.12.2 -> v1.13.1 or v1.13.1 -> v1.13.3). diff --git a/content/en/blog/_posts/2018-12-11-current-status-and-future-roadmap.md b/content/en/blog/_posts/2018-12-11-current-status-and-future-roadmap.md index fab15f369c2a2..de7149ebe77b9 100644 --- a/content/en/blog/_posts/2018-12-11-current-status-and-future-roadmap.md +++ b/content/en/blog/_posts/2018-12-11-current-status-and-future-roadmap.md @@ -40,11 +40,11 @@ etcd v3.3 continues the theme of stability. Its client is included in [Kubernete etcd v3.4 focuses on improving the operational experience. It adds [Raft pre-vote feature](https://github.com/etcd-io/etcd/pull/9352) to improve the robustness of leadership election. When a node becomes isolated (e.g. 
network partition), this member will start an election requesting votes with increased Raft terms. When a leader receives a vote request with a higher term, it steps down to a follower. With pre-vote, Raft runs an additional election phase to check if the candidate can get enough votes to win an election. The isolated follower's vote request is rejected because it does not contain the latest log entries. -etcd v3.4 adds a [Raft learner](https://etcd.readthedocs.io/en/latest/server-learner.html#server-learner) that joins the cluster as a non-voting member that still receives all the updates from leader. Adding a learner node does not increase the size of quorum and hence improves the cluster availability during membership reconfiguration. It only serves as a standby node until it gets promoted to a voting member. Moreover, to handle unexpected upgrade failures, v3.4 introduces [etcd downgrade](https://groups.google.com/forum/?hl=en#!topic/etcd-dev/Hq6zru44L74) feature. +etcd v3.4 adds a [Raft learner](https://etcd.io/docs/v3.4.0/learning/design-learner/#Raft%20Learner) that joins the cluster as a non-voting member that still receives all the updates from leader. Adding a learner node does not increase the size of quorum and hence improves the cluster availability during membership reconfiguration. It only serves as a standby node until it gets promoted to a voting member. Moreover, to handle unexpected upgrade failures, v3.4 introduces [etcd downgrade](https://groups.google.com/forum/?hl=en#!topic/etcd-dev/Hq6zru44L74) feature. etcd v3 storage uses multi-version concurrency control model to preserve key updates as event history. Kubernetes runs compaction to discard the event history that is no longer needed, and reclaims the storage space. 
etcd v3.4 will improve this storage compact operation, boost backend [concurrency for large read transactions](https://github.com/etcd-io/etcd/pull/9384), and [optimize storage commit interval](https://github.com/etcd-io/etcd/pull/10283) for Kubernetes use-case. -To further improve etcd client load balancer, the v3.4 balancer was rewritten to leverage the newly introduced gRPC load balancing API. By leveraging gPRC, the etcd client load balancer codebase was substantially simplified while retaining feature parity with the v3.3 implementation and improving overall load balancing by round-robining requests across healthy endpoints. See [Client Architecture](https://etcd.readthedocs.io/en/latest/client-architecture.html#client-architecture) for more details. +To further improve etcd client load balancer, the v3.4 balancer was rewritten to leverage the newly introduced gRPC load balancing API. By leveraging gPRC, the etcd client load balancer codebase was substantially simplified while retaining feature parity with the v3.3 implementation and improving overall load balancing by round-robining requests across healthy endpoints. See [Client Architecture](https://etcd.io/docs/v3.4.0/learning/design-client/) for more details. Additionally, etcd maintainers will continue to make improvements to Kubernetes test frameworks: kubemark integration for scalability tests, Kubernetes API server conformance tests with etcd to provide release recommends and version skew policy, specifying conformance testing requirements for each cloud provider, etc. 
diff --git a/content/en/blog/_posts/2019-01-15-container-storage-interface-ga.md b/content/en/blog/_posts/2019-01-15-container-storage-interface-ga.md index 192d928b02c1f..b71f37b36c3d1 100644 --- a/content/en/blog/_posts/2019-01-15-container-storage-interface-ga.md +++ b/content/en/blog/_posts/2019-01-15-container-storage-interface-ga.md @@ -166,7 +166,7 @@ Storage vendors can build Kubernetes deployments for their plugins using these c ## List of CSI Drivers -CSI drivers are developed and maintained by third parties. You can find a non-definitive list of CSI drivers [here](https://kubernetes-csi.github.io/docs/Drivers.html). +CSI drivers are developed and maintained by third parties. You can find a non-definitive list of CSI drivers [here](https://kubernetes-csi.github.io/docs/drivers.html). ## What about in-tree volume plugins? diff --git a/content/en/blog/_posts/2019-01-17-update-volume-snapshot-alpha.md b/content/en/blog/_posts/2019-01-17-update-volume-snapshot-alpha.md index b738263e39da0..89872c3a7e834 100644 --- a/content/en/blog/_posts/2019-01-17-update-volume-snapshot-alpha.md +++ b/content/en/blog/_posts/2019-01-17-update-volume-snapshot-alpha.md @@ -147,7 +147,7 @@ As of the publishing of this blog post, the following CSI drivers support snapsh - [Datera CSI Driver](https://github.com/Datera/datera-csi) - [NexentaStor CSI Driver](https://github.com/Nexenta/nexentastor-csi-driver) -Snapshot support for other [drivers](https://kubernetes-csi.github.io/docs/Drivers.html) is pending, and should be available soon. Read the “Container Storage Interface (CSI) for Kubernetes GA” blog post to learn more about CSI and how to deploy CSI drivers. +Snapshot support for other [drivers](https://kubernetes-csi.github.io/docs/drivers.html) is pending, and should be available soon. Read the “Container Storage Interface (CSI) for Kubernetes GA” blog post to learn more about CSI and how to deploy CSI drivers. ## What’s next? 
diff --git a/content/en/blog/_posts/2019-02-28-automate-operations-on-your-cluster-with-operatorhub.md b/content/en/blog/_posts/2019-02-28-automate-operations-on-your-cluster-with-operatorhub.md index a8f91716e72b5..6f58d9c8262db 100644 --- a/content/en/blog/_posts/2019-02-28-automate-operations-on-your-cluster-with-operatorhub.md +++ b/content/en/blog/_posts/2019-02-28-automate-operations-on-your-cluster-with-operatorhub.md @@ -43,9 +43,9 @@ One way to get started is with the [Operator Framework](https://github.com/opera If you are interested in creating your own Operator, we recommend checking out the Operator Framework to [get started](https://github.com/operator-framework/getting-started). -Operators vary in where they fall along [the capability spectrum](https://github.com/operator-framework/operator-sdk/blob/master/doc/images/operator-maturity-model.png) ranging from basic functionality to having specific operational logic for an application to automate advanced scenarios like backup, restore or tuning. Beyond basic installation, advanced Operators are designed to handle upgrades more seamlessly and react to failures automatically. Currently, Operators on OperatorHub.io span the maturity spectrum, but we anticipate their continuing maturation over time. +Operators vary in where they fall along [the capability spectrum](https://github.com/operator-framework/operator-sdk/blob/master/doc/images/operator-capability-level.png) ranging from basic functionality to having specific operational logic for an application to automate advanced scenarios like backup, restore or tuning. Beyond basic installation, advanced Operators are designed to handle upgrades more seamlessly and react to failures automatically. Currently, Operators on OperatorHub.io span the maturity spectrum, but we anticipate their continuing maturation over time. 
-While Operators on OperatorHub.io don’t need to be implemented using the SDK, they are packaged for deployment through the [Operator Lifecycle Manager](https://github.com/operator-framework/operator-lifecycle-manager) (OLM). The format mainly consists of a YAML manifest referred to as `[ClusterServiceVersion]`(https://github.com/operator-framework/operator-lifecycle-manager/blob/master/Documentation/design/building-your-csv.md) which provides information about the `CustomResourceDefinitions` the Operator owns or requires, which RBAC definition it needs, where the image is stored, etc. This file is usually accompanied by additional YAML files which define the Operators’ own CRDs. This information is processed by OLM at the time a user requests to install an Operator to provide dependency resolution and automation.
+While Operators on OperatorHub.io don’t need to be implemented using the SDK, they are packaged for deployment through the [Operator Lifecycle Manager](https://github.com/operator-framework/operator-lifecycle-manager) (OLM). The format mainly consists of a YAML manifest referred to as [`ClusterServiceVersion`](https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/building-your-csv.md) which provides information about the `CustomResourceDefinitions` the Operator owns or requires, which RBAC definition it needs, where the image is stored, etc. This file is usually accompanied by additional YAML files which define the Operators’ own CRDs. This information is processed by OLM at the time a user requests to install an Operator to provide dependency resolution and automation.
 
 ## What does listing of an Operator on OperatorHub.io mean?
diff --git a/content/en/blog/_posts/2019-03-22-e2e-testing-for-everyone.md b/content/en/blog/_posts/2019-03-22-e2e-testing-for-everyone.md
index 7f3f4c5efcb9c..3dcb320736f45 100644
--- a/content/en/blog/_posts/2019-03-22-e2e-testing-for-everyone.md
+++ b/content/en/blog/_posts/2019-03-22-e2e-testing-for-everyone.md
@@ -107,7 +107,7 @@ manifests.
 But the Kubernetes e2e.test binary is supposed to be usable and
 entirely stand-alone because that simplifies shipping and running
 it. The solution in the Kubernetes build system is to link all files
 under `test/e2e/testing-manifests` into the binary with
-[go-bindata](https://github.com/jteeuwen/go-bindata/go-bindata). The
+[go-bindata](https://github.com/jteeuwen/go-bindata). The
 E2E framework used to have a hard dependency on the output of
 `go-bindata`, now [bindata support is
 optional](https://github.com/kubernetes/kubernetes/pull/69103). When
diff --git a/content/en/blog/_posts/2019-04-01-kubernetes-v1-14-delivers-production-level-support-for-nodes-and-windows-containers.md b/content/en/blog/_posts/2019-04-01-kubernetes-v1-14-delivers-production-level-support-for-nodes-and-windows-containers.md
index 53951452784e8..6dc19382767a8 100644
--- a/content/en/blog/_posts/2019-04-01-kubernetes-v1-14-delivers-production-level-support-for-nodes-and-windows-containers.md
+++ b/content/en/blog/_posts/2019-04-01-kubernetes-v1-14-delivers-production-level-support-for-nodes-and-windows-containers.md
@@ -57,7 +57,7 @@ As a community, our work is not complete. As already mentioned , we still have a
 
 We welcome you to get involved and join our community to share feedback and deployment stories, and contribute to code, docs, and improvements of any kind.
 - Read our getting started and contributor guides, which include links to the community meetings and past recordings, at https://github.com/kubernetes/community/tree/master/sig-windows
-- Explore our documentation at https://kubernetes.io/docs/setup/windows
+- Explore our documentation at https://kubernetes.io/docs/setup/production-environment/windows/
 - Join us on [Slack](https://kubernetes.slack.com/messages/sig-windows) or the [Kubernetes Community Forums](https://discuss.kubernetes.io/c/general-discussions/windows) to chat about Windows containers on Kubernetes.
 
 Thank you and feel free to reach us individually if you have any questions.
diff --git a/content/en/blog/_posts/2019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease.md b/content/en/blog/_posts/2019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease.md
index 32abf697cc29b..73c22f4d797a3 100644
--- a/content/en/blog/_posts/2019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease.md
+++ b/content/en/blog/_posts/2019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease.md
@@ -58,7 +58,7 @@ You can provide reliable extensibility in a project like Kyma only if it is prop
 - Tracing is done with [Jaeger](https://www.jaegertracing.io/)
 - Authentication is supported by [dex](https://github.com/dexidp/dex)
 
-You don't have to integrate these tools: We made sure they all play together well, and are always up to date ( Kyma is already using Istio 1.1). With our custom [Installer](https://github.com/kyma-project/kyma/tree/master/components/installer) and [Helm](https://helm.sh/) charts, we enabled easy installation and easy upgrades to new versions of Kyma.
+You don't have to integrate these tools: We made sure they all play together well, and are always up to date (Kyma is already using Istio 1.1). With our custom [Installer](https://github.com/kyma-project/kyma/blob/master/docs/kyma/04-02-local-installation.md) and [Helm](https://helm.sh/) charts, we enabled easy installation and easy upgrades to new versions of Kyma.
 
 ### Do not rewrite your monoliths
 
@@ -122,5 +122,5 @@ Such an approach gives you a lot of flexibility in adding new functionality. It
 
 ## Contribute and give feedback
 
 Kyma is an open source project, and we would love help it grow. The way that happens is with your help. After reading this post, you already know that we don't want to reinvent the wheel. We stay true to this approach in our work model, which enables community contributors. We work in [Special Interest Groups](
-https://github.com/kyma-project/community/tree/master/sig-and-wg) and have publicly recorded meeting that you can join any time, so we have a setup similar to what you know from Kubernetes itself.
+https://github.com/kyma-project/community/tree/master/contributing) and have publicly recorded meetings that you can join any time, so we have a setup similar to what you know from Kubernetes itself.
 
 Feel free to share also your feedback with us, through [Twitter](https://twitter.com/kymaproject) or [Slack](http://slack.kyma-project.io).
diff --git a/content/en/blog/_posts/2019-11-05-grokkin-the-docs.md b/content/en/blog/_posts/2019-11-05-grokkin-the-docs.md
index e3800cd648025..87de8d4442605 100644
--- a/content/en/blog/_posts/2019-11-05-grokkin-the-docs.md
+++ b/content/en/blog/_posts/2019-11-05-grokkin-the-docs.md
@@ -199,9 +199,7 @@ SIG Docs faces challenges due to lack of technical writers:
 Terms should be identical to what is used in the **Standardized Glossary**. Being consistent reduces confusion. Tracking down and fixing these occurrences is time-consuming but worthwhile for readers.
 
 - **Working with the Steering Committee to create project documentation guidelines**:
-  The [Kubernetes Repository
-  Guidelines](https://github.com/kubernetes/community/blob/master/github-managemen
-  t/kubernetes-repositories.md) don't mention documentation at all. Between a
+  The [Kubernetes Repository Guidelines](https://github.com/kubernetes/community/blob/master/github-management/kubernetes-repositories.md) don't mention documentation at all. Between a
   project's GitHub docs and the Kubernetes docs, some projects have almost
   duplicate content, whereas others have conflicting content. Create clear
   guidelines so projects know to put roadmaps, milestones, and comprehensive