Commit 5d70f53

Merge pull request #130 from hxcGit/translate-docs

Rename UnitedDeployment to YurtAppSet & Update en docs for YurtAppSet and YurtAppDaemonset

rambohe-ch authored Jul 5, 2022
2 parents da4587f + e565154 commit 5d70f53
Showing 19 changed files with 660 additions and 448 deletions.
10 changes: 5 additions & 5 deletions docs/core-concepts/yurt-app-manager.md
@@ -28,7 +28,7 @@ Yurt-App-Manager is the functional component that provides edge unitized management for an OpenYurt cluster,

- Unitized traffic: service topology restricts, through simple configuration, the access scope of a Service's backend endpoints, e.g. they can only be accessed by nodes in the same NodePool, or only by the local node.

-Yurt-App-Manager is a standard Kubernetes extension that works together with Kubernetes. It provides two controllers, NodePool and UnitedDeployment, which deliver operations capabilities for nodes and applications in edge scenarios from the host dimension and the application dimension.
+Yurt-App-Manager is a standard Kubernetes extension that works together with Kubernetes. It provides two controllers, NodePool and YurtAppSet (formerly UnitedDeployment), which deliver operations capabilities for nodes and applications in edge scenarios from the host dimension and the application dimension.

## Edge NodePool Overview

@@ -57,13 +57,13 @@ NodePool provides a higher-dimension abstraction of node division from the node-group perspective, and can

- Multiple Deployments of the same application differ little in configuration apart from characteristics such as name, nodeselectors, and replicas.

-Unitized deployment (UnitedDeployment) is a capability provided by the Yurt-App-Manager component that ships with OpenYurt by default. It is a Kubernetes CRD resource that manages these sub-Deployments in a unified way through a higher-level abstraction: create/update/delete.
+Unitized deployment (YurtAppSet) is a capability provided by the Yurt-App-Manager component that ships with OpenYurt by default. It is a Kubernetes CRD resource that manages these sub-Deployments in a unified way through a higher-level abstraction: create/update/delete.



![img](https://intranetproxy.alipay.com/skylark/lark/0/2022/png/31456432/1641823282158-8e00965d-e17e-4a79-912c-01589f98217e.png)

-The UnitedDeployment controller provides a template to define the application and manages multiple workloads to match the different regions below it. The workload for each region under a UnitedDeployment is called a pool; currently a pool supports two kinds of workload: `StatefulSet` and `Deployment`. The controller creates the sub-workload resource objects according to the pool configuration in the UnitedDeployment, and each of these objects has an expected number of `replicas` Pods. A single UnitedDeployment instance automatically maintains multiple Deployment or StatefulSet resources, while also allowing differentiated configuration such as replicas.
+The YurtAppSet controller provides a template to define the application and manages multiple workloads to match the different regions below it. The workload for each region under a YurtAppSet is called a pool; currently a pool supports two kinds of workload: `StatefulSet` and `Deployment`. The controller creates the sub-workload resource objects according to the pool configuration in the YurtAppSet, and each of these objects has an expected number of `replicas` Pods. A single YurtAppSet instance automatically maintains multiple Deployment or StatefulSet resources, while also allowing differentiated configuration such as replicas.



@@ -77,8 +77,8 @@ The UnitedDeployment controller provides a template to define the application and manages

For more discussion of Yurt-App-Manager, please refer to the community issues and pull requests:

-- issue124: [UnitedDeployment usages](https://github.com/openyurtio/openyurt/issues/124)
-- issue171: [[feature request] the definition of NodePool and UnitedDeployment](https://github.com/openyurtio/openyurt/issues/171)
+- issue124: [YurtAppSet usages](https://github.com/openyurtio/openyurt/issues/124)
+- issue171: [[feature request] the definition of NodePool and YurtAppSet](https://github.com/openyurtio/openyurt/issues/171)

- pull request 173: [[proposal] add nodepool and uniteddployment crd proposal](https://link.zhihu.com/?target=https%3A//github.com/alibaba/openyurt/pull/173)

4 changes: 3 additions & 1 deletion docs/core-concepts/yurthub.md
@@ -16,7 +16,8 @@ OpenYurt supports edge autonomy, which means even under the circumstance of netw

### 2) Traffic Closure

-In native Kubernetes, the endpoints of a service are distributed across the whole cluster. But in OpenYurt we can divide nodes into nodepools and manage them at the granularity of a nodepool. On that basis, we can also manage the resources in each nodepool individually, such as using UnitedDeployment to manage pods in different nodepools.
+In native Kubernetes, the endpoints of a service are distributed across the whole cluster. But in OpenYurt we can divide nodes into nodepools and manage them at the granularity of a nodepool. On that basis, we can also manage the resources in each nodepool individually, such as using YurtAppSet to manage pods in different nodepools.

In edge computing scenarios, resources in one nodepool are often independent of those in other nodepools, and nodes can sometimes only reach the nodes in the same nodepool. To meet this need, `YurtHub` provides the capability of traffic closure, ensuring that a client can only reach the endpoints in its own nodepool and keeping traffic closed at the granularity of the nodepool.
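As an illustration of how this is switched on, service topology is configured by annotating a Service. A minimal sketch (the `openyurt.io/topologyKeys` annotation follows the OpenYurt service-topology docs; the Service name and selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc                     # hypothetical Service
  annotations:
    # limit reachable endpoints to the client's own NodePool
    openyurt.io/topologyKeys: openyurt.io/nodepool
spec:
  selector:
    app: nginx
  ports:
    - port: 80
```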

@@ -299,3 +300,4 @@ Output the version of `YurtHub`.
```

Working mode of `YurtHub`. It can be "edge" which means `YurtHub` is running on an edge node, or "cloud" which means `YurtHub` is running on a cloud node.

10 changes: 5 additions & 5 deletions docs/installation/openyurt-experience-center/kubeconfig.md
@@ -61,27 +61,27 @@ The corresponding NodePool information can be seen in the browser page.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
-kind: UnitedDeployment
+kind: YurtAppSet
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
-  name: ud-test
+  name: yas-test
  namespace: "183xxxxxxxx" # Notice: change this with your own namespace
spec:
  selector:
    matchLabels:
-      app: ud-test
+      app: yas-test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
-          app: ud-test
+          app: yas-test
        namespace: "183xxxxxxxx" # Notice: change this with your own namespace
      spec:
        template:
          metadata:
            labels:
-              app: ud-test
+              app: yas-test
          spec:
            containers:
            - name: nginx
# (remainder of the manifest elided in this diff view)
```
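After applying, the result can be checked from the same kubeconfig. A minimal sketch (`yas` is the short name for YurtAppSet used elsewhere in these docs; `183xxxxxxxx` again stands for your own namespace):

```bash
# list the YurtAppSet in your namespace
kubectl get yas -n "183xxxxxxxx"
# inspect the pool Deployments created from its template
kubectl get deploy -n "183xxxxxxxx"
```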
6 changes: 3 additions & 3 deletions docs/user-manuals/iot/edgex-foundry.md
@@ -138,13 +138,13 @@ edgex-sample-hangzhou true 9 9 9 9
$ kubectl apply -f https://raw.githubusercontent.com/openyurtio/yurt-device-controller/main/config/setup/crd.yaml
```

-Use UnitedDeployment to deploy a yurt-device-controller instance in the hangzhou NodePool
+Use YurtAppSet to deploy a yurt-device-controller instance in the hangzhou NodePool

```powershell
$ export WORKER_NODEPOOL="hangzhou"
$ cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
-kind: UnitedDeployment
+kind: YurtAppSet
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
# (remainder of the manifest elided in this diff view)
```
@@ -262,7 +262,7 @@

```powershell
$ kubectl delete deviceprofile --all
$ kubectl delete deviceservice --all
# 1.2 Delete the deployed yurt-device-controller
-$ kubectl delete uniteddeployment yurt-device-controller
+$ kubectl delete yurtappset yurt-device-controller
$ kubectl delete clusterrolebinding default-cluster-admin
# 1.3 Delete the CRDs for the device, deviceservice, and deviceprofile resources
```
40 changes: 19 additions & 21 deletions docs/user-manuals/workload/yurt-app-daemon.md
@@ -2,22 +2,19 @@
title: YurtAppDaemon
---

## Background

In edge scenarios, edge nodes from the same region are assigned to the same NodePool, and some system components, such as CoreDNS, usually need to be deployed at the NodePool dimension. When creating a NodePool, we want these system components to be created automatically, without any manual operations.

YurtAppDaemon ensures that all or some of the NodePools run replicas based on a Deployment or StatefulSet template. As NodePools are created, the sub-Deployments or sub-StatefulSets are added to the cluster, and their creation and updates are controlled by the YurtAppDaemon controller. They are reclaimed when their NodePool is removed from the cluster, and deleting the YurtAppDaemon CR cleans up the Deployments or StatefulSets it created. YurtAppDaemon behaves like the Kubernetes DaemonSet, except that it creates workloads automatically from the NodePool dimension.



![img](https://intranetproxy.alipay.com/skylark/lark/0/2022/png/31456432/1641999454831-b8f2f9f4-c715-4063-8444-b0af22830092.png)

## Usage

- Create the test1 NodePool (the manifest is elided in this diff view; see the sketch after this block)

```shell
cat <<EOF | kubectl apply -f -
@@ -37,7 +34,7 @@
spec:
EOF
```
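A minimal NodePool manifest typically looks like the sketch below (assuming the standard `apps.openyurt.io/v1alpha1` NodePool API; test2 is analogous):

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: test1
spec:
  type: Edge   # an edge node pool; cloud-side pools use type: Cloud
```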

- Create the test2 NodePool

```shell
cat <<EOF | kubectl apply -f -
@@ -55,7 +52,7 @@
spec:
EOF
```

- Add the nodes to the corresponding NodePool

```shell
kubectl label node cn-beijing.172.23.142.31 apps.openyurt.io/desired-nodepool=test1
@@ -65,7 +62,7 @@
kubectl label node cn-beijing.172.23.142.35 apps.openyurt.io/desired-nodepool=test2
```

- Create the YurtAppDaemon (the manifest is elided in this diff view; see the sketch after this block)

```shell
cat <<EOF | kubectl apply -f -
@@ -106,7 +103,7 @@
spec:
EOF
```
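As a rough sketch of the elided manifest, assuming the CRD follows the same `workloadTemplate` pattern as YurtAppSet plus a `nodepoolSelector` matching NodePool labels (the name and labels here are illustrative, chosen to line up with the labeling steps below):

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppDaemon
metadata:
  name: nginx-daemon                  # illustrative name
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx-daemon
  nodepoolSelector:                   # NodePools carrying this label get a workload
    matchLabels:
      yurtappdaemon.openyurt.io/type: "nginx"
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: nginx-daemon
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx-daemon
        template:
          metadata:
            labels:
              app: nginx-daemon
          spec:
            containers:
              - name: nginx
                image: nginx:1.19.3
```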

- Label the test1 NodePool

```shell
kubectl label np test1 yurtappdaemon.openyurt.io/type=nginx
@@ -119,7 +116,7 @@
kubectl get deployments.apps
# Check the Pod
```

- Label the test2 NodePool

```shell
kubectl label np test2 yurtappdaemon.openyurt.io/type=nginx
@@ -132,7 +129,7 @@
kubectl get deployments.apps
# Check the Pod
```

- Update the YurtAppDaemon

```shell
# Change yurtappdaemon workloadTemplate replicas to 2
@@ -142,7 +139,7 @@
kubectl get deployments.apps
# Check the Pod
```

- Remove the NodePool labels

```shell
# Remove the nodepool test1 label
@@ -160,11 +157,11 @@
kubectl label np test2 yurtappdaemon.openyurt.io/type-
# Check the Pod
```

## Example for deploying coredns

> Using `YurtAppDaemon` + `service topology` to solve DNS resolution problems

- Create the NodePool
```shell
cat <<EOF | kubectl apply -f -
@@ -186,13 +184,13 @@
spec:
EOF
```

- Add a label to the NodePool

```shell
kubectl label np hangzhou yurtappdaemon.openyurt.io/type=coredns
```

- Deploy coredns

```shell
cat <<EOF | kubectl apply -f -
@@ -386,4 +384,4 @@
subjects:
EOF
```
154 changes: 154 additions & 0 deletions docs/user-manuals/workload/yurt-app-set.md
@@ -0,0 +1,154 @@
---
title: YurtAppSet
---




In [the previous article](./node-pool-management.md) we introduced the use of `NodePool`, mainly the creation and management of `NodePool`.
Building on that, we provide the ability to deploy applications in a unitized way based on `NodePool`, which improves the efficiency of users' operations.

In this article, we will show how `yurt-app-manager` helps users manage their workloads. Assume we already have an OpenYurt cluster built on
native Kubernetes with at least two nodes.


### 1) Create YurtAppSet

- Create a `YurtAppSet` from `yurtappset_test.yaml`, applied as shown after the manifest

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppSet
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: yas-test
spec:
  selector:
    matchLabels:
      app: yas-test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: yas-test
      spec:
        template:
          metadata:
            labels:
              app: yas-test
          spec:
            containers:
              - name: nginx
                image: nginx:1.19.3
  topology:
    pools:
      - name: beijing
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - beijing
        replicas: 1
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - hangzhou
        replicas: 2
        tolerations:
          - effect: NoSchedule
            key: apps.openyurt.io/example
            operator: Exists
  revisionHistoryLimit: 5
```
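Then apply it (assuming the manifest above is saved as `yurtappset_test.yaml`):

```shell
kubectl apply -f yurtappset_test.yaml
```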
- Check `YurtAppSet`

```shell
$ kubectl get yas
NAME READY WORKLOADTEMPLATE AGE
yas-test 3 Deployment 43s
```


### 2) Check the deployments created by yurt-app-manager

```shell
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
yas-test-beijing-k5st4 1/1 1 1 54s
yas-test-hangzhou-2jkj5 2/2 2 2 54s
$ kubectl get pod -l app=yas-test
NAME READY STATUS RESTARTS AGE
yas-test-beijing-k5st4-56bc98cc7d-h7h86 1/1 Running 0 72s
yas-test-hangzhou-2jkj5-64588c484b-8mvn8 1/1 Running 0 72s
yas-test-hangzhou-2jkj5-64588c484b-vx85t 1/1 Running 0 72s
```


### 3) Add patch to YurtAppSet

- Add the `patch` field to the file `yurtappset_test.yaml` as follows (lines 36 to 41 of the file):

```shell
$ kubectl get yas yas-test -o yaml
topology:
  pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
          - key: apps.openyurt.io/nodepool
            operator: In
            values:
              - beijing
      replicas: 1
      patch:
        spec:
          template:
            spec:
              containers:
                - name: nginx
                  image: nginx:1.19.0
    - name: hangzhou
      nodeSelectorTerm:
        matchExpressions:
          - key: apps.openyurt.io/nodepool
            operator: In
            values:
              - hangzhou
      replicas: 2
      tolerations:
      ***
```

- This updates the nginx image to version 1.19.0 for the Deployments and Pods in the Beijing NodePool, while keeping the nginx image at version 1.19.3 for the other pools

```shell
$ kubectl get deploy yas-test-beijing-k5st4 -o yaml
containers:
- image: nginx:1.19.0
$ kubectl get deploy yas-test-hangzhou-2jkj5 -o yaml
containers:
- image: nginx:1.19.3
```

- After removing the patch, all Pods created by the YurtAppSet revert to nginx:1.19.3 (one way to remove the patch is sketched after the check below)

```shell
$ kubectl get pod yas-test-beijing-k5st4-974b6958c-t2kfn -o yaml
containers:
- image: nginx:1.19.3
$ kubectl get pod yas-test-hangzhou-2jkj5-64588c484b-8mvn8 -o yaml
containers:
- image: nginx:1.19.3
```
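One way to remove the patch, as a sketch: assuming the patch was added by editing `yurtappset_test.yaml` as in step 3, re-applying the original manifest without the `patch` field lets `kubectl apply`'s three-way merge drop it.

```shell
# re-apply the original manifest (no patch field); client-side apply
# removes fields that were in the last-applied configuration but are
# absent from the new manifest
kubectl apply -f yurtappset_test.yaml
```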

- Conclusion: the `patch` field enables overriding a single attribute per NodePool, which supports per-pool upgrades and differentiated application releases.