Update edge-ingress.md to add enhancement features #104

Merged: 1 commit, May 5, 2022
60 changes: 41 additions & 19 deletions docs/user-manuals/network/edge-ingress.md
YurtIngress operator is responsible for orchestrating multiple ingress controllers to the NodePools on which the edge Ingress feature should be enabled.
Suppose you have created 4 NodePools in your OpenYurt cluster: pool01, pool02, pool03, and pool04, and you want to
enable the edge ingress feature on pool01 and pool03. You can create the YurtIngress CR as below:

1). Create the YurtIngress CR yaml file:

1.1). A simple CR definition using the default configurations:

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtIngress
metadata:
  name: yurtingress-test
spec:
  pools:
    - name: pool01
    - name: pool03
```

The default number of nginx ingress controller replicas per pool is 1.
The default nginx ingress controller image is controller:v0.48.1 from dockerhub.
The default nginx ingress webhook certgen image is kube-webhook-certgen:v0.48.1 from dockerhub.

1.2). If users want to customize the default options, the YurtIngress CR can be defined as below:

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtIngress
metadata:
  name: yurtingress-test
spec:
  ingress_controller_replicas_per_pool: 2
  ingress_controller_image: k8s.gcr.io/ingress-nginx/controller:v0.49.0
  ingress_webhook_certgen_image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v0.49.0
  pools:
    - name: pool01
      ingress_ips:
        - xxx.xxx.xxx.xxx
    - name: pool03
```

"ingress_ips" represents the IPs if users want to expose the nginx ingress controller service through externalIPs for a specified nodepool.

Notes:

a). Users can define different YurtIngress CRs for personalized configurations, for example, setting different ingress controller replicas
for different nodepools.

b). In the spec, "ingress_controller_replicas_per_pool" represents the ingress controller replicas deployed on every pool;
it is intended for HA usage scenarios.

c). In the spec, "pools" represents the list of pools on which you want to enable the ingress feature.
Currently it supports the pool name and the nginx ingress controller service externalIPs.


2). Apply the YurtIngress CR yaml file. Assume the file name is yurtingress-test.yaml:

```bash
#kubectl apply -f yurtingress-test.yaml
yurtingress.apps.openyurt.io/yurtingress-test created
```

Then you can get the YurtIngress CR to check the status:

```bash
#kubectl get ying
NAME               REPLICAS-PER-POOL   READYNUM   NOTREADYNUM   AGE
yurtingress-test   1                   2          0             3m13s
```

When the ingress controller is enabled successfully, a per-pool NodePort service is created to expose the ingress controller service:
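For example, with pool01 and pool03 enabled, the per-pool services look like:

```bash
#kubectl get svc -n ingress-nginx
ingress-nginx   pool01-ingress-nginx-controller   NodePort   192.167.107.123   <none>   80:32255/TCP,443:32275/TCP   53m
ingress-nginx   pool03-ingress-nginx-controller   NodePort   192.167.48.114    <none>   80:30531/TCP,443:30916/TCP   53m
```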

Notes:

a). "ying" is the shortName of YurtIngress resource.

b). When "READYNUM" equals the number of pools defined in the YurtIngress CR, the ingress feature is ready on all the pools in your spec.

d). If the "NOTREADYNUM" is not 0 all the times, you can check the YurtIngress CR for the the status infomation.
c). If the "NOTREADYNUM" is not 0 all the times, you can check the YurtIngress CR for the the status infomation.
Also you can check the corresponding deployments and pods to figure out why the ingress is not ready yet.
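A quick way to get the status details is to dump the CR itself (a sketch; the exact status fields depend on the YurtIngress version you run):

```bash
kubectl get ying yurtingress-test -o yaml
# inspect the status section for the per-pool readiness conditions
```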

d). For every NodePool on which ingress is enabled successfully, a NodePort type service is exposed for users to access the nginx ingress controller.

e). When the ingress controllers are orchestrated to the specified NodePools, an "ingress-nginx" namespace will be created, and all the
namespace-related resources will be created under it.
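To see what has been orchestrated, you can list the resources in that namespace (a sketch; the exact resource names may vary across YurtIngress versions):

```bash
kubectl get all -n ingress-nginx
# expect one ingress controller deployment per enabled pool, e.g.
# pool01-ingress-nginx-controller and pool03-ingress-nginx-controller
```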

---
2. Create and apply ingress rules to access the corresponding services, just as in vanilla K8S

Suppose your app workload is deployed to several NodePools and it exposes a global service.

If you want to access the service provided by pool01:

1). Create the ingress rule yaml file:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
```

Notes:

a). The ingress class decides which NodePool provides the ingress capability, so you need to set the ingress class in the rule to the corresponding NodePool name (pool01 in this example).
b). The ingress CR definition may differ across K8S versions, so you need to ensure the CR definition matches your cluster's K8S version.

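For reference, a complete rule for this example might look like the following sketch; the backend service name and port are assumptions, the /echo path matches the verification step below, and the kubernetes.io/ingress.class annotation binds the rule to pool01:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  annotations:
    # route this rule to the ingress controller of pool01
    kubernetes.io/ingress.class: pool01
spec:
  rules:
    - http:
        paths:
          - path: /echo
            backend:
              serviceName: myapp-service   # assumed name of the global service
              servicePort: 80
```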

2). Apply the ingress rule yaml file. Assume the file name is ingress-myapp.yaml:

```bash
#kubectl apply -f ingress-myapp.yaml
ingress.extensions/ingress-myapp created
```



After all the steps above are done successfully, you can verify the edge ingress feature through the ingress controller NodePort service:

```bash
#curl xxx:32255/echo
```
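If you exposed the controller through "ingress_ips", the same check can go through the externalIP on the standard HTTP port (a sketch; substitute the IP you configured in the CR):

```bash
curl http://xxx.xxx.xxx.xxx/echo
```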
---
title: 边缘Ingress
---

The specific application scenarios are:
1. Access the services provided inside a NodePool from inside or outside the pool through the edge ingress.
2. Access the nginx ingress controller from outside the NodePool; currently the NodePort Service and externalIPs approaches are supported.

The usage is:
1. Enable the edge Ingress feature on the specified NodePools.
2. Create and apply ingress rules, just as in vanilla K8S, to access the corresponding services.
The YurtIngress operator is responsible for orchestrating the nginx ingress controller to the NodePools where the edge Ingress feature should be enabled.
Suppose there are 4 NodePools in your OpenYurt cluster: pool01, pool02, pool03, and pool04; if you want to enable the edge ingress feature on pool01 and pool03, you can create the YurtIngress CR as follows:

1). Create the YurtIngress CR yaml file:

1.1). A simple YurtIngress CR definition:

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtIngress
metadata:
  name: yurtingress-test
spec:
  pools:
    - name: pool01
    - name: pool03
```

The default number of nginx ingress controller replicas per pool is 1.
The default ingress controller docker image is k8s.gcr.io/ingress-nginx/controller:v0.48.1.
The default docker image for generating the ingress controller webhook certs is k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v0.48.1.

1.2). If users do not want the default configurations and want to customize the NodePools instead, the CR can be defined as follows:

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtIngress
metadata:
  name: yurtingress-test
spec:
  ingress_controller_replicas_per_pool: 2
  ingress_controller_image: k8s.gcr.io/ingress-nginx/controller:v0.49.0
  ingress_webhook_certgen_image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v0.49.0
  pools:
    - name: pool01
      ingress_ips:
        - xxx.xxx.xxx.xxx
    - name: pool03
```

Here, `ingress_controller_replicas_per_pool`/`ingress_controller_image`/`ingress_webhook_certgen_image` let users override the corresponding default configurations, and
`ingress_ips` represents the public IPs through which the nginx ingress controller service is exposed via externalIPs for a specific NodePool.
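Under the hood, the configured IPs are expected to land on the per-pool controller service as externalIPs; a minimal sketch of the resulting Service (field values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pool01-ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalIPs:
    - xxx.xxx.xxx.xxx   # the IP set in ingress_ips for pool01
```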


Notes:

a). Users can define different YurtIngress CRs to customize different NodePools, for example, configuring a different number of ingress controller replicas for each NodePool via different CRs.

b). In the spec, "ingress_controller_replicas_per_pool" represents the number of ingress controller replicas deployed on each NodePool; it is mainly for HA scenarios.

c). In the spec, "pools" represents the list of NodePools on which to enable the ingress feature; currently it supports the pool name and the public IP configuration of the pool's ingress service.

2). Apply the YurtIngress CR yaml file. Assume the CR file name is yurtingress-test.yaml:

```bash
#kubectl apply -f yurtingress-test.yaml
yurtingress.apps.openyurt.io/yurtingress-test created
```

Then you can check the status of the YurtIngress CR:

```bash
#kubectl get ying
NAME               REPLICAS-PER-POOL   READYNUM   NOTREADYNUM   AGE
yurtingress-test   1                   2          0             3m13s
```

After the ingress controller is orchestrated successfully, each NodePool exposes a NodePort type Service:

```bash
#kubectl get svc -n ingress-nginx
ingress-nginx   pool01-ingress-nginx-controller   NodePort   192.167.107.123   <none>   80:32255/TCP,443:32275/TCP   53m
ingress-nginx   pool03-ingress-nginx-controller   NodePort   192.167.48.114    <none>   80:30531/TCP,443:30916/TCP   53m
```


Notes:

a). “ying”是YurtIngress资源的简称

b). When "READYNUM" equals the number of NodePools you deployed, the ingress feature is ready on all the NodePools you defined.

d). 当“NOTREADYNUM”一直不为0时,可以查看“yurtingress-singleton”这个CR的状态了解相关信息,您还可以查看相应的deployment及pod以获取更详细的错误信息,从而找出ingress功能尚未就绪的原因。
c). 当“NOTREADYNUM”一直不为0时,可以查看CR的状态了解相关信息,您还可以查看相应的deployment及pod以获取更详细的错误信息,从而找出ingress功能尚未就绪的原因。

d). For each NodePool where the ingress feature is enabled successfully, a NodePort type service is exposed for users to access the nginx ingress controller.

f). YurtIngress operator会创建一个"ingress-nginx"的namespace,编排nginx ingress controller时,所有跟namespace相关的resource都会被部署在这个namespace下。
e). YurtIngress operator会创建一个"ingress-nginx"的namespace,编排nginx ingress controller时,所有跟namespace相关的resource都会被部署在这个namespace下。

---
2. Create and apply ingress rules to access the corresponding services, just as in vanilla K8S
Suppose your app workload is deployed to several NodePools and it exposes a global service.

When you want to access the service provided by pool01, you can proceed as follows:

1). Create the ingress rule yaml file:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
```

Notes:

a). Which NodePool provides the ingress capability is decided by the ingress class, so you need to set the ingress class in the rule to the corresponding NodePool name (pool01 in this example).

b). The ingress CR definition may differ across K8S versions; you need to ensure the ingress CR definition matches your cluster's K8S version.
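For example, on newer clusters where networking.k8s.io/v1 is the supported Ingress API (GA since K8S 1.19, with extensions/v1beta1 removed in 1.22), an equivalent rule might look like this sketch; the service name and port are assumptions, and ingressClassName replaces the ingress.class annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
spec:
  ingressClassName: pool01   # the NodePool that provides the ingress capability
  rules:
    - http:
        paths:
          - path: /echo
            pathType: Prefix
            backend:
              service:
                name: myapp-service   # assumed name of the global service
                port:
                  number: 80
```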

2). Apply the ingress rule yaml file. Assume the yaml file name is ingress-myapp.yaml:

```bash
#kubectl apply -f ingress-myapp.yaml
ingress.extensions/ingress-myapp created
```