diff --git a/docs/user-manuals/network/edge-ingress.md b/docs/user-manuals/network/edge-ingress.md
index 17ada21438..ad20ae057a 100644
--- a/docs/user-manuals/network/edge-ingress.md
+++ b/docs/user-manuals/network/edge-ingress.md
@@ -3,17 +3,20 @@ title: Edge Ingress
 ---
 
 This document introduces how to access Edge services through Edge Ingress in Cloud Edge scenarios.
+Users can access the Edge services from inside or outside of the NodePools. For access from
+outside of the NodePools, only a NodePort type ingress controller service is supported for now.
 
 Generally, it only needs 2 steps to use the Edge Ingress feature:
-1. Enable the ingress feature on NodePools which provide your desired services.
-2. Create and apply the ingress rule as K8S to access your desired services.
 
-Follow the details below to try the Edge Ingress feature:
+  1. Enable the ingress feature on NodePools which provide your desired services.
+  2. Create and apply the ingress rule as K8S to access your desired services.
+
+Follow the steps below to try the Edge Ingress feature:
 
 ---
 1.Enable the ingress feature on NodePools which provide your desired services
 ---
-YurtIngress operator is responsible for deploying the nginx ingress controller to the corresponding NodePools.
+YurtIngress operator is responsible for orchestrating multiple ingress controllers to the corresponding NodePools.
 Suppose you have created 4 NodePools in your OpenYurt cluster: pool01, pool02, pool03, pool04, and you want to
 enable edge ingress feature on pool01 and pool03, you can create the YurtIngress CR as below:
 
@@ -37,23 +40,23 @@ b). In spec, the "ingress_controller_replicas_per_pool" represents the ingress c
 It is used for the HA usage scenarios.
 
 c). In spec, the "pools" represents the pools list on which you want to enable ingress feature.
-Currently it only supports the pool name, but it can be extended to support pool personalized configruations in future.
+Currently it only supports the pool name, and it can be extended to support pool personalized configurations in the future.
 
 
 2). Apply the YurtIngress CR yaml file
 
-    kubectl apply -f yurtingress-test.yaml
+    #kubectl apply -f yurtingress-test.yaml
     yurtingress.apps.openyurt.io/yurtingress-singleton created
 
 Then you can get the YurtIngress CR to check the status:
 
-    kubectl get ying
+    #kubectl get ying
     NAME                    NGINX-INGRESS-VERSION   REPLICAS-PER-POOL   READYNUM   NOTREADYNUM   AGE
     yurtingress-singleton   0.48.1                  1                   2          0             3m13s
 
 When the ingress controller is enabled successfully, a per-pool NodePort service is created to expose the ingress controller serivce:
 
-    kubectl get svc -A
+    #kubectl get svc -n ingress-nginx
     ingress-nginx   pool01-ingress-nginx-controller   NodePort    192.167.107.123   <none>    80:32255/TCP,443:32275/TCP   53m
     ingress-nginx   pool03-ingress-nginx-controller   NodePort    192.167.48.114    <none>    80:30531/TCP,443:30916/TCP   53m
 
@@ -61,16 +64,18 @@ Notes:
 
 a). "ying" is the shortName of YurtIngress resource.
 
-b). Currently YurtIngress only supports the fixed nginx ingress controller version, we will enhance it in future to support user configurable
-nginx ingress controller images/versions.
+b). Currently YurtIngress only supports a fixed nginx ingress controller version; it can be enhanced to support user-configurable
+nginx ingress controller images/versions in the future.
+
+c). When the "READYNUM" equals the number of pools you defined in the YurtIngress CR, the ingress feature is ready on all the pools in your spec.
 
-c). When the "READYNUM" equals the pool number you defined in the YurtIngress CR, it represents the ingress feature is ready on all the pool you defined.
+d). If the "NOTREADYNUM" stays non-zero, you can check the YurtIngress CR status for more information.
+You can also check the corresponding deployments and pods to figure out why the ingress is not ready yet.
 
-d). If the "NOTREADYNUM" is not 0 all the times, you can use "kubectl describe ying yurtingress-singleton" to check the details for the reasons.
-Also you can check the corresponding deployment (xxx-ingress-nginx-controller, "xxx" represents the pool name) to figure out the reasons why the
-ingress is not ready yet.
+e). For every NodePool on which ingress is enabled successfully, a NodePort type service is exposed for users to access the nginx ingress controller.
 
-e). For every NodePool which ingress is enable successfully, it exposes a NodePort type service for users to access the nginx ingress controller.
+f). When the ingress controllers are orchestrated to the specified NodePools, an "ingress-nginx" namespace will be created, and all the
+namespace-scoped resources will be created under it.
 
 ---
 2.Create and apply the ingress rule as K8S to access your desired services
@@ -78,26 +83,90 @@ e). For every NodePool which ingress is enable successfully, it exposes a NodePo
 When the step 1 above is done, you have successfully deployed the nginx ingress controller to the related NodePools, and the following
 ingress user experience is totally consistent with K8S.
 
-Suppose your app workload is deployed to several NodePools(e.g. pool01 and pool03), and it exposes a global service(e.g. myapp service), and you
-want to access the service provided by pool01:
+Suppose your app workload is deployed to several NodePools, and it exposes a global service, for example:
+
+      apiVersion: apps/v1
+      kind: Deployment
+      metadata:
+        name: pool01-deployment
+        labels:
+          app: echo
+      spec:
+        replicas: 2
+        selector:
+          matchLabels:
+            app: echo
+        template:
+          metadata:
+            labels:
+              app: echo
+          spec:
+            containers:
+            - name: echo-app
+              image: hashicorp/http-echo
+              args:
+                - "-text=echo from nodepool pool01"
+              imagePullPolicy: IfNotPresent
+            nodeSelector:
+              apps.openyurt.io/nodepool: pool01
+      ---
+
+      apiVersion: apps/v1
+      kind: Deployment
+      metadata:
+        name: pool03-deployment
+        labels:
+          app: echo
+      spec:
+        replicas: 2
+        selector:
+          matchLabels:
+            app: echo
+        template:
+          metadata:
+            labels:
+              app: echo
+          spec:
+            containers:
+            - name: echo-app
+              image: hashicorp/http-echo
+              args:
+                - "-text=echo from nodepool pool03"
+              imagePullPolicy: IfNotPresent
+            nodeSelector:
+              apps.openyurt.io/nodepool: pool03
+      ---
+
+      kind: Service
+      apiVersion: v1
+      metadata:
+        name: echo-service
+      spec:
+        selector:
+          app: echo
+        ports:
+          - port: 5678
+
+
+If you want to access the service provided by pool01:
 
 1). Create the ingress rule yaml file: (for example: ingress-myapp.yaml)
 
-    apiVersion: extensions/v1beta1
-    kind: Ingress
-    metadata:
-      name: ingress-myapp
-      annotations:
-        ingress.kubernetes.io/rewrite-target: /
-    spec:
-      ingressclassName: pool01
-      rules:
-      - http:
-          paths:
-            - path: /myapp
-              backend:
-              serviceName: myapp-service
-              servicePort: xxx
+      apiVersion: extensions/v1beta1
+      kind: Ingress
+      metadata:
+        name: ingress-pool01
+        annotations:
+          kubernetes.io/ingress.class: pool01
+          ingress.kubernetes.io/rewrite-target: /
+      spec:
+        rules:
+        - http:
+            paths:
+              - path: /echo
+                backend:
+                  serviceName: echo-service
+                  servicePort: 5678
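+
+Similarly, if you want to access the service provided by pool03, a second rule with the pool03
+ingress class can be applied in the same way (a sketch, assuming the ingress feature was also
+enabled on pool03 in step 1):
+
+      apiVersion: extensions/v1beta1
+      kind: Ingress
+      metadata:
+        name: ingress-pool03
+        annotations:
+          kubernetes.io/ingress.class: pool03
+          ingress.kubernetes.io/rewrite-target: /
+      spec:
+        rules:
+        - http:
+            paths:
+              - path: /echo
+                backend:
+                  serviceName: echo-service
+                  servicePort: 5678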
 
 Notes:
 
@@ -108,14 +177,16 @@ b). The ingress CR definition may be different for different K8S versions, so yo
 
 2). Apply the ingress rule yaml file:
 
-      kubectl apply -f ingress-myapp.yaml
+      #kubectl apply -f ingress-myapp.yaml
       ingress.extensions/ingress-myapp created
 
 
 
 After all the steps above are done successfully, you can verify the edge ingress feature through the ingress controller NodePort service:
 
-      curl xxx:32255/myapp
+      #curl xxx:32255/echo
 
       "xxx" 	represents any NodeIP in NodePool pool01
       "32255" 	represents the NodePort which pool01 nginx ingress controller service exposes
+
+      It should always return "echo from nodepool pool01".
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/network/edge-ingress.md b/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/network/edge-ingress.md
index 3ef557ed8f..fb38068a7f 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/network/edge-ingress.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/network/edge-ingress.md
@@ -2,22 +2,23 @@
 title: 边缘Ingress
 ---
 
-本文档介绍如何在云边协同场景下通过边缘Ingress访问边缘节点池提供的服务。
+本文档介绍如何在云边协同场景下通过边缘Ingress访问指定节点池提供的服务。
 
+具体应用场景为:
+1. 节点池内或节点池外通过边缘ingress访问节点池内提供的服务。
+2. 节点池外访问nginx ingress controller,目前只支持通过NodePort Service的方式。
 
-通常情况下,通过边缘Ingress访问边缘服务只需要两个步骤:
+具体用法为:
+1. 启用指定节点池上的边缘Ingress功能。
+2. 同K8S一样创建并部署ingress规则以访问相应的服务。
 
-1.启用节点池上的边缘Ingress功能。
 
-2.同K8S一样创建并部署ingress规则以访问相应的服务。
-
-
-请按以下详细步骤尝试YurtIngress功能:
+请按以下步骤尝试使用边缘Ingress功能:
 
 ---
-1.启用节点池上的边缘Ingress功能
+1.启用指定节点池上的边缘Ingress功能
 ---
-YurtIngress opeator负责将nginx ingress controller部署到需要启用边缘Ingress功能的节点池中。
+YurtIngress operator负责将nginx ingress controller编排到需要启用边缘Ingress功能的节点池中。
 假设您的OpenYurt集群中有4个节点池:pool01、pool02、pool03、pool04,如果您想要在pool01和pool03上启用边缘ingress功能,可以按如下方式创建YurtIngress CR:
 
 1). 创建YurtIngress CR yaml文件: (比如: yurtingress-test.yaml)
@@ -34,40 +35,42 @@ YurtIngress opeator负责将nginx ingress controller部署到需要启用边缘I
 
 提示:
 
-a). YurtIngress CR是集群级别的单例实例,CR名称必须为“yurtIngress-singleton”
+a). YurtIngress CR是集群级别的单例实例,CR名称必须为“yurtingress-singleton”
 
 b). 在spec中,“ingress_controller_replicas_per_pool”表示部署在每个节点池上的ingress控制器副本数,它主要用于HA高可用场景。
 
-c). 在spec中,“pools”表示要在其上开启ingress功能的节点池列表,目前只支持池名,以后可以扩展为支持节点池个性化配置。
+c). 在spec中,“pools”表示要在其上开启ingress功能的节点池列表,目前只支持节点池名,以后可以扩展为支持节点池个性化配置。
 
 2). 部署YurtIngress CR yaml文件:
 
-    kubectl apply -f yurtingress-test.yaml
+    #kubectl apply -f yurtingress-test.yaml
     yurtingress.apps.openyurt.io/yurtingress-singleton created
 
 然后您可以查看YurtIngress CR的状态:
 
-    kubectl get ying
+    #kubectl get ying
     NAME                    NGINX-INGRESS-VERSION   REPLICAS-PER-POOL   READYNUM   NOTREADYNUM   AGE
     yurtingress-singleton   0.48.1                  1                   2          0             3m13s
 
-成功部署ingress controller后,每个节点池将暴漏一个NodePort类型的Service服务:
+成功编排ingress controller后,每个节点池将暴露一个NodePort类型的Service服务:
 
-    kubectl get svc -A
-    ingress-nginx   pool01-ingress-nginx-controller   NodePort    192.167.107.123   <none>    80:32255/TCP,443:32275/TCP   53m
-    ingress-nginx   pool03-ingress-nginx-controller   NodePort    192.167.48.114    <none>    80:30531/TCP,443:30916/TCP   53m
+    #kubectl get svc -n ingress-nginx
+    ingress-nginx  pool01-ingress-nginx-controller  NodePort  192.167.107.123  <none>   80:32255/TCP,443:32275/TCP  53m
+    ingress-nginx  pool03-ingress-nginx-controller  NodePort  192.167.48.114   <none>   80:30531/TCP,443:30916/TCP  53m
 
 提示:
 
 a). “ying”是YurtIngress资源的简称
 
-b). YurtIngress目前仅支持固定版本的nginx ingress controller,我们将在未来对其进行增强,以支持用户可配置nginx ingress controller映像/版本。
+b). YurtIngress目前仅支持固定版本的nginx ingress controller,我们后续将对其进行增强,以支持用户可配置nginx ingress controller镜像/版本。
 
-c). 当“READYNUM”与您部署的节点池数量一致时,表示ingress功能已在您定义的所有节点池上就绪。
+c). 当“READYNUM”与您部署的节点池数量一致时,表示ingress功能在您定义的所有节点池上已就绪。
 
-d). 当“NOTREADYNUM”一直不为0时,可以使用“kubectl describe ying yurtingress-singleton”来查看原因及详细信息。此外,您还可以检查相应的部署(xxx-ingress-nginx-controller,xxx代表节点池名),以找出ingress功能还未就绪的原因。
+d). 当“NOTREADYNUM”一直不为0时,可以查看“yurtingress-singleton”这个CR的状态了解相关信息,您还可以查看相应的deployment及pod以获取更详细的错误信息,从而找出ingress功能尚未就绪的原因。
 
-e). 对于成功启用ingress功能的每个NodePool,会为用户暴漏一个NodePort类型的服务用来访问nginx ingress controller。
+e). 对于成功启用ingress功能的每个NodePool,会为用户暴露一个NodePort类型的服务用来访问nginx ingress controller。
+
+f). YurtIngress operator会创建一个"ingress-nginx"的namespace,编排nginx ingress controller时,所有跟namespace相关的resource都会被部署在这个namespace下。
 
 ---
 2.同K8S一样创建并部署ingress规则以访问相应的服务
@@ -75,25 +78,89 @@ e). 对于成功启用ingress功能的每个NodePool,会为用户暴漏一个N
 
 当上述步骤1完成后,您已经通过Yurtingress成功的将nginx ingress controller部署到相应的节点池中。接下来的用法就和K8S中使用ingress的体验一致了。
 
-假设您的业务应用被部署到了多个节点池中(例如pool01和pool03),并且它们通过一个全局的service(例如myapp service)对外暴漏,当您想要访问pool01提供的服务时,您可以如下操作:
+假设您的业务应用被部署到了多个节点池中,并且它们通过一个全局的service对外暴露,举个例子:
+
+      apiVersion: apps/v1
+      kind: Deployment
+      metadata:
+        name: pool01-deployment
+        labels:
+          app: echo
+      spec:
+        replicas: 2
+        selector:
+          matchLabels:
+            app: echo
+        template:
+          metadata:
+            labels:
+              app: echo
+          spec:
+            containers:
+            - name: echo-app
+              image: hashicorp/http-echo
+              args:
+                - "-text=echo from nodepool pool01"
+              imagePullPolicy: IfNotPresent
+            nodeSelector:
+              apps.openyurt.io/nodepool: pool01
+      ---
+
+      apiVersion: apps/v1
+      kind: Deployment
+      metadata:
+        name: pool03-deployment
+        labels:
+          app: echo
+      spec:
+        replicas: 2
+        selector:
+          matchLabels:
+            app: echo
+        template:
+          metadata:
+            labels:
+              app: echo
+          spec:
+            containers:
+            - name: echo-app
+              image: hashicorp/http-echo
+              args:
+                - "-text=echo from nodepool pool03"
+              imagePullPolicy: IfNotPresent
+            nodeSelector:
+              apps.openyurt.io/nodepool: pool03
+      ---
+
+      kind: Service
+      apiVersion: v1
+      metadata:
+        name: echo-service
+      spec:
+        selector:
+          app: echo
+        ports:
+          - port: 5678
+
+当您想要访问pool01提供的服务时,您可以如下操作:
 
 1). 创建ingress规则yaml文件: (比如: ingress-myapp.yaml)
 
-    apiVersion: extensions/v1beta1
-    kind: Ingress
-    metadata:
-      name: ingress-myapp
-      annotations:
-        ingress.kubernetes.io/rewrite-target: /
-    spec:
-      ingressclassName: pool01
-      rules:
-      - http:
-          paths:
-            - path: /myapp
-              backend:
-              serviceName: myapp-service
-              servicePort: xxx
+      apiVersion: extensions/v1beta1
+      kind: Ingress
+      metadata:
+        name: ingress-pool01
+        annotations:
+          kubernetes.io/ingress.class: pool01
+          ingress.kubernetes.io/rewrite-target: /
+      spec:
+        rules:
+        - http:
+            paths:
+              - path: /echo
+                backend:
+                  serviceName: echo-service
+                  servicePort: 5678
 
 提示:
 
@@ -103,13 +170,15 @@ b). 不同K8S版本的ingress CR定义可能不同,您需要确保ingress CR
 
 2). 部署ingress规则yaml文件:
 
-      kubectl apply -f ingress-myapp.yaml
+      #kubectl apply -f ingress-myapp.yaml
       ingress.extensions/ingress-myapp created
 
 
 成功完成上述所有步骤后,您就可以通过ingress controller NodePort service验证边缘Ingress功能了:
 
-      curl xxx:32255/myapp
+      #curl xxx:32255/echo
 
       "xxx"       代表节点池pool01中的节点IP地址
-      "32255"     代表对应节点池中ingress controller暴漏的service NodePort
+      "32255"     代表对应节点池中ingress controller暴露的service NodePort
+
+      返回结果应该一直为: “echo from nodepool pool01”。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.5.0/user-manuals/network/edge-ingress.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.5.0/user-manuals/network/edge-ingress.md
deleted file mode 100644
index 3ef557ed8f..0000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/version-v0.5.0/user-manuals/network/edge-ingress.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-title: 边缘Ingress
----
-
-本文档介绍如何在云边协同场景下通过边缘Ingress访问边缘节点池提供的服务。
-
-
-通常情况下,通过边缘Ingress访问边缘服务只需要两个步骤:
-
-1.启用节点池上的边缘Ingress功能。
-
-2.同K8S一样创建并部署ingress规则以访问相应的服务。
-
-
-请按以下详细步骤尝试YurtIngress功能:
-
----
-1.启用节点池上的边缘Ingress功能
----
-YurtIngress opeator负责将nginx ingress controller部署到需要启用边缘Ingress功能的节点池中。
-假设您的OpenYurt集群中有4个节点池:pool01、pool02、pool03、pool04,如果您想要在pool01和pool03上启用边缘ingress功能,可以按如下方式创建YurtIngress CR:
-
-1). 创建YurtIngress CR yaml文件: (比如: yurtingress-test.yaml)
-
-      apiVersion: apps.openyurt.io/v1alpha1
-      kind: YurtIngress
-      metadata:
-        name: yurtingress-singleton
-      spec:
-          ingress_controller_replicas_per_pool: 1
-          pools:
-            - name: pool01
-            - name: pool03
-
-提示:
-
-a). YurtIngress CR是集群级别的单例实例,CR名称必须为“yurtIngress-singleton”
-
-b). 在spec中,“ingress_controller_replicas_per_pool”表示部署在每个节点池上的ingress控制器副本数,它主要用于HA高可用场景。
-
-c). 在spec中,“pools”表示要在其上开启ingress功能的节点池列表,目前只支持池名,以后可以扩展为支持节点池个性化配置。
-
-2). 部署YurtIngress CR yaml文件:
-
-    kubectl apply -f yurtingress-test.yaml
-    yurtingress.apps.openyurt.io/yurtingress-singleton created
-
-然后您可以查看YurtIngress CR的状态:
-
-    kubectl get ying
-    NAME                    NGINX-INGRESS-VERSION   REPLICAS-PER-POOL   READYNUM   NOTREADYNUM   AGE
-    yurtingress-singleton   0.48.1                  1                   2          0             3m13s
-
-成功部署ingress controller后,每个节点池将暴漏一个NodePort类型的Service服务:
-
-    kubectl get svc -A
-    ingress-nginx   pool01-ingress-nginx-controller   NodePort    192.167.107.123   <none>    80:32255/TCP,443:32275/TCP   53m
-    ingress-nginx   pool03-ingress-nginx-controller   NodePort    192.167.48.114    <none>    80:30531/TCP,443:30916/TCP   53m
-
-提示:
-
-a). “ying”是YurtIngress资源的简称
-
-b). YurtIngress目前仅支持固定版本的nginx ingress controller,我们将在未来对其进行增强,以支持用户可配置nginx ingress controller映像/版本。
-
-c). 当“READYNUM”与您部署的节点池数量一致时,表示ingress功能已在您定义的所有节点池上就绪。
-
-d). 当“NOTREADYNUM”一直不为0时,可以使用“kubectl describe ying yurtingress-singleton”来查看原因及详细信息。此外,您还可以检查相应的部署(xxx-ingress-nginx-controller,xxx代表节点池名),以找出ingress功能还未就绪的原因。
-
-e). 对于成功启用ingress功能的每个NodePool,会为用户暴漏一个NodePort类型的服务用来访问nginx ingress controller。
-
----
-2.同K8S一样创建并部署ingress规则以访问相应的服务
----
-
-当上述步骤1完成后,您已经通过Yurtingress成功的将nginx ingress controller部署到相应的节点池中。接下来的用法就和K8S中使用ingress的体验一致了。
-
-假设您的业务应用被部署到了多个节点池中(例如pool01和pool03),并且它们通过一个全局的service(例如myapp service)对外暴漏,当您想要访问pool01提供的服务时,您可以如下操作:
-
-1). 创建ingress规则yaml文件: (比如: ingress-myapp.yaml)
-
-    apiVersion: extensions/v1beta1
-    kind: Ingress
-    metadata:
-      name: ingress-myapp
-      annotations:
-        ingress.kubernetes.io/rewrite-target: /
-    spec:
-      ingressclassName: pool01
-      rules:
-      - http:
-          paths:
-            - path: /myapp
-              backend:
-              serviceName: myapp-service
-              servicePort: xxx
-
-提示:
-
-a). 由哪个节点池提供ingress功能是由ingress class决定的,因此您需要将ingress class定义为您想要访问服务的节点池名称。
-
-b). 不同K8S版本的ingress CR定义可能不同,您需要确保ingress CR的定义与集群K8S版本匹配。
-
-2). 部署ingress规则yaml文件:
-
-      kubectl apply -f ingress-myapp.yaml
-      ingress.extensions/ingress-myapp created
-
-
-成功完成上述所有步骤后,您就可以通过ingress controller NodePort service验证边缘Ingress功能了:
-
-      curl xxx:32255/myapp
-
-      "xxx"       代表节点池pool01中的节点IP地址
-      "32255"     代表对应节点池中ingress controller暴漏的service NodePort
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.6.0/user-manuals/network/edge-ingress.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.6.0/user-manuals/network/edge-ingress.md
index 3ef557ed8f..fb38068a7f 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-v0.6.0/user-manuals/network/edge-ingress.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.6.0/user-manuals/network/edge-ingress.md
@@ -2,22 +2,23 @@
 title: 边缘Ingress
 ---
 
-本文档介绍如何在云边协同场景下通过边缘Ingress访问边缘节点池提供的服务。
+本文档介绍如何在云边协同场景下通过边缘Ingress访问指定节点池提供的服务。
 
+具体应用场景为:
+1. 节点池内或节点池外通过边缘ingress访问节点池内提供的服务。
+2. 节点池外访问nginx ingress controller,目前只支持通过NodePort Service的方式。
 
-通常情况下,通过边缘Ingress访问边缘服务只需要两个步骤:
+具体用法为:
+1. 启用指定节点池上的边缘Ingress功能。
+2. 同K8S一样创建并部署ingress规则以访问相应的服务。
 
-1.启用节点池上的边缘Ingress功能。
 
-2.同K8S一样创建并部署ingress规则以访问相应的服务。
-
-
-请按以下详细步骤尝试YurtIngress功能:
+请按以下步骤尝试使用边缘Ingress功能:
 
 ---
-1.启用节点池上的边缘Ingress功能
+1.启用指定节点池上的边缘Ingress功能
 ---
-YurtIngress opeator负责将nginx ingress controller部署到需要启用边缘Ingress功能的节点池中。
+YurtIngress operator负责将nginx ingress controller编排到需要启用边缘Ingress功能的节点池中。
 假设您的OpenYurt集群中有4个节点池:pool01、pool02、pool03、pool04,如果您想要在pool01和pool03上启用边缘ingress功能,可以按如下方式创建YurtIngress CR:
 
 1). 创建YurtIngress CR yaml文件: (比如: yurtingress-test.yaml)
@@ -34,40 +35,42 @@ YurtIngress opeator负责将nginx ingress controller部署到需要启用边缘I
 
 提示:
 
-a). YurtIngress CR是集群级别的单例实例,CR名称必须为“yurtIngress-singleton”
+a). YurtIngress CR是集群级别的单例实例,CR名称必须为“yurtingress-singleton”
 
 b). 在spec中,“ingress_controller_replicas_per_pool”表示部署在每个节点池上的ingress控制器副本数,它主要用于HA高可用场景。
 
-c). 在spec中,“pools”表示要在其上开启ingress功能的节点池列表,目前只支持池名,以后可以扩展为支持节点池个性化配置。
+c). 在spec中,“pools”表示要在其上开启ingress功能的节点池列表,目前只支持节点池名,以后可以扩展为支持节点池个性化配置。
 
 2). 部署YurtIngress CR yaml文件:
 
-    kubectl apply -f yurtingress-test.yaml
+    #kubectl apply -f yurtingress-test.yaml
     yurtingress.apps.openyurt.io/yurtingress-singleton created
 
 然后您可以查看YurtIngress CR的状态:
 
-    kubectl get ying
+    #kubectl get ying
     NAME                    NGINX-INGRESS-VERSION   REPLICAS-PER-POOL   READYNUM   NOTREADYNUM   AGE
     yurtingress-singleton   0.48.1                  1                   2          0             3m13s
 
-成功部署ingress controller后,每个节点池将暴漏一个NodePort类型的Service服务:
+成功编排ingress controller后,每个节点池将暴露一个NodePort类型的Service服务:
 
-    kubectl get svc -A
-    ingress-nginx   pool01-ingress-nginx-controller   NodePort    192.167.107.123   <none>    80:32255/TCP,443:32275/TCP   53m
-    ingress-nginx   pool03-ingress-nginx-controller   NodePort    192.167.48.114    <none>    80:30531/TCP,443:30916/TCP   53m
+    #kubectl get svc -n ingress-nginx
+    ingress-nginx  pool01-ingress-nginx-controller  NodePort  192.167.107.123  <none>   80:32255/TCP,443:32275/TCP  53m
+    ingress-nginx  pool03-ingress-nginx-controller  NodePort  192.167.48.114   <none>   80:30531/TCP,443:30916/TCP  53m
 
 提示:
 
 a). “ying”是YurtIngress资源的简称
 
-b). YurtIngress目前仅支持固定版本的nginx ingress controller,我们将在未来对其进行增强,以支持用户可配置nginx ingress controller映像/版本。
+b). YurtIngress目前仅支持固定版本的nginx ingress controller,我们后续将对其进行增强,以支持用户可配置nginx ingress controller镜像/版本。
 
-c). 当“READYNUM”与您部署的节点池数量一致时,表示ingress功能已在您定义的所有节点池上就绪。
+c). 当“READYNUM”与您部署的节点池数量一致时,表示ingress功能在您定义的所有节点池上已就绪。
 
-d). 当“NOTREADYNUM”一直不为0时,可以使用“kubectl describe ying yurtingress-singleton”来查看原因及详细信息。此外,您还可以检查相应的部署(xxx-ingress-nginx-controller,xxx代表节点池名),以找出ingress功能还未就绪的原因。
+d). 当“NOTREADYNUM”一直不为0时,可以查看“yurtingress-singleton”这个CR的状态了解相关信息,您还可以查看相应的deployment及pod以获取更详细的错误信息,从而找出ingress功能尚未就绪的原因。
 
-e). 对于成功启用ingress功能的每个NodePool,会为用户暴漏一个NodePort类型的服务用来访问nginx ingress controller。
+e). 对于成功启用ingress功能的每个NodePool,会为用户暴露一个NodePort类型的服务用来访问nginx ingress controller。
+
+f). YurtIngress operator会创建一个"ingress-nginx"的namespace,编排nginx ingress controller时,所有跟namespace相关的resource都会被部署在这个namespace下。
 
 ---
 2.同K8S一样创建并部署ingress规则以访问相应的服务
@@ -75,25 +78,89 @@ e). 对于成功启用ingress功能的每个NodePool,会为用户暴漏一个N
 
 当上述步骤1完成后,您已经通过Yurtingress成功的将nginx ingress controller部署到相应的节点池中。接下来的用法就和K8S中使用ingress的体验一致了。
 
-假设您的业务应用被部署到了多个节点池中(例如pool01和pool03),并且它们通过一个全局的service(例如myapp service)对外暴漏,当您想要访问pool01提供的服务时,您可以如下操作:
+假设您的业务应用被部署到了多个节点池中,并且它们通过一个全局的service对外暴露,举个例子:
+
+      apiVersion: apps/v1
+      kind: Deployment
+      metadata:
+        name: pool01-deployment
+        labels:
+          app: echo
+      spec:
+        replicas: 2
+        selector:
+          matchLabels:
+            app: echo
+        template:
+          metadata:
+            labels:
+              app: echo
+          spec:
+            containers:
+            - name: echo-app
+              image: hashicorp/http-echo
+              args:
+                - "-text=echo from nodepool pool01"
+              imagePullPolicy: IfNotPresent
+            nodeSelector:
+              apps.openyurt.io/nodepool: pool01
+      ---
+
+      apiVersion: apps/v1
+      kind: Deployment
+      metadata:
+        name: pool03-deployment
+        labels:
+          app: echo
+      spec:
+        replicas: 2
+        selector:
+          matchLabels:
+            app: echo
+        template:
+          metadata:
+            labels:
+              app: echo
+          spec:
+            containers:
+            - name: echo-app
+              image: hashicorp/http-echo
+              args:
+                - "-text=echo from nodepool pool03"
+              imagePullPolicy: IfNotPresent
+            nodeSelector:
+              apps.openyurt.io/nodepool: pool03
+      ---
+
+      kind: Service
+      apiVersion: v1
+      metadata:
+        name: echo-service
+      spec:
+        selector:
+          app: echo
+        ports:
+          - port: 5678
+
+当您想要访问pool01提供的服务时,您可以如下操作:
 
 1). 创建ingress规则yaml文件: (比如: ingress-myapp.yaml)
 
-    apiVersion: extensions/v1beta1
-    kind: Ingress
-    metadata:
-      name: ingress-myapp
-      annotations:
-        ingress.kubernetes.io/rewrite-target: /
-    spec:
-      ingressclassName: pool01
-      rules:
-      - http:
-          paths:
-            - path: /myapp
-              backend:
-              serviceName: myapp-service
-              servicePort: xxx
+      apiVersion: extensions/v1beta1
+      kind: Ingress
+      metadata:
+        name: ingress-pool01
+        annotations:
+          kubernetes.io/ingress.class: pool01
+          ingress.kubernetes.io/rewrite-target: /
+      spec:
+        rules:
+        - http:
+            paths:
+              - path: /echo
+                backend:
+                  serviceName: echo-service
+                  servicePort: 5678
 
 提示:
 
@@ -103,13 +170,15 @@ b). 不同K8S版本的ingress CR定义可能不同,您需要确保ingress CR
 
 2). 部署ingress规则yaml文件:
 
-      kubectl apply -f ingress-myapp.yaml
+      #kubectl apply -f ingress-myapp.yaml
       ingress.extensions/ingress-myapp created
 
 
 成功完成上述所有步骤后,您就可以通过ingress controller NodePort service验证边缘Ingress功能了:
 
-      curl xxx:32255/myapp
+      #curl xxx:32255/echo
 
       "xxx"       代表节点池pool01中的节点IP地址
-      "32255"     代表对应节点池中ingress controller暴漏的service NodePort
+      "32255"     代表对应节点池中ingress controller暴露的service NodePort
+
+      返回结果应该一直为: “echo from nodepool pool01”。
diff --git a/versioned_docs/version-v0.5.0/user-manuals/network/edge-ingress.md b/versioned_docs/version-v0.5.0/user-manuals/network/edge-ingress.md
deleted file mode 100644
index cff9d906d7..0000000000
--- a/versioned_docs/version-v0.5.0/user-manuals/network/edge-ingress.md
+++ /dev/null
@@ -1,121 +0,0 @@
----
-title: Edge Ingress
----
-
-This document introduces how to access Edge services through Edge Ingress in Cloud Edge scenarios.
-
-Generally, it only needs 2 steps to use the Edge Ingress feature:
-1. Enable the ingress feature on NodePools which provide your desired services.
-2. Create and apply the ingress rule as K8S to access your desired services.
-
-Follow the details below to try the Edge Ingress feature:
-
----
-1.Enable the ingress feature on NodePools which provide your desired services
----
-YurtIngress operator is responsible for deploying the nginx ingress controller to the corresponding NodePools.
-Suppose you have created 4 NodePools in your OpenYurt cluster: pool01, pool02, pool03, pool04, and you want to
-enable edge ingress feature on pool01 and pool03, you can create the YurtIngress CR as below:
-
-1). Create the YurtIngress CR yaml file: (for example: yurtingress-test.yaml)
-
-      apiVersion: apps.openyurt.io/v1alpha1
-      kind: YurtIngress
-      metadata:
-        name: yurtingress-singleton
-      spec:
-          ingress_controller_replicas_per_pool: 1
-          pools:
-            - name: pool01
-            - name: pool03
-
-Notes:
-
-a). YurtIngress CR is a singleton instance from the cluster level, and the CR name must be "yurtingress-singleton".
-
-b). In spec, the "ingress_controller_replicas_per_pool" represents the ingress controller replicas deployed on every pool,
-    It is used for the HA usage scenarios.
-
-c). In spec, the "pools" represents the pools list on which you want to enable ingress feature.
-    Currently it only supports the pool name, but it can be extended to support pool personalized configruations in future.
-
-
-2). Apply the YurtIngress CR yaml file
-
-    kubectl apply -f yurtingress-test.yaml
-    yurtingress.apps.openyurt.io/yurtingress-singleton created
-
-Then you can get the YurtIngress CR to check the status:
-
-    kubectl get ying
-    NAME                    NGINX-INGRESS-VERSION   REPLICAS-PER-POOL   READYNUM   NOTREADYNUM   AGE
-    yurtingress-singleton   0.48.1                  1                   2          0             3m13s
-
-When the ingress controller is enabled successfully, a per-pool NodePort service is created to expose the ingress controller serivce:
-
-    kubectl get svc -A
-    ingress-nginx   pool01-ingress-nginx-controller   NodePort    192.167.107.123   <none>    80:32255/TCP,443:32275/TCP   53m
-    ingress-nginx   pool03-ingress-nginx-controller   NodePort    192.167.48.114    <none>    80:30531/TCP,443:30916/TCP   53m
-
-Notes:
-
-a). "ying" is the shortName of YurtIngress resource.
-
-b). Currently YurtIngress only supports the fixed nginx ingress controller version, we will enhance it in future to support user configurable
-    nginx ingress controller images/versions.
-
-c). When the "READYNUM" equals the pool number you defined in the YurtIngress CR, it represents the ingress feature is ready on all the pool you defined.
-
-d). If the "NOTREADYNUM" is not 0 all the times, you can use "kubectl describe ying yurtingress-singleton" to check the details for the reasons.
-    Also you can check the corresponding deployment (xxx-ingress-nginx-controller, "xxx" represents the pool name) to figure out the reasons why the
-    ingress is not ready yet.
-
-e). For every NodePool which ingress is enable successfully, it exposes a NodePort type service for users to access the nginx ingress controller.
-
----
-2.Create and apply the ingress rule as K8S to access your desired services
----
-When the step 1 above is done, you have successfully deployed the nginx ingress controller to the related NodePools, and the following
-ingress user experience is totally consistent with K8S.
-
-Suppose your app workload is deployed to several NodePools(e.g. pool01 and pool03), and it exposes a global service(e.g. myapp service), and you
-want to access the service provided by pool01:
-
-1). Create the ingress rule yaml file: (for example: ingress-myapp.yaml)
-
-    apiVersion: extensions/v1beta1
-    kind: Ingress
-    metadata:
-      name: ingress-myapp
-      annotations:
-        ingress.kubernetes.io/rewrite-target: /
-    spec:
-      ingressclassName: pool01
-      rules:
-      - http:
-          paths:
-            - path: /myapp
-              backend:
-              serviceName: myapp-service
-              servicePort: xxx
-
-Notes:
-
-a). Ingress class decides which NodePool to provide the ingress capability, so you need to define the ingress class to your desired NodePool name.
-
-b). The ingress CR definition may be different for different K8S versions, so you need ensure the CR definition matches with your cluster K8S version.
-
-
-2). Apply the ingress rule yaml file:
-
-      kubectl apply -f ingress-myapp.yaml
-      ingress.extensions/ingress-myapp created
-
-
-
-After all the steps above are done successfully, you can verify the edge ingress feature through the ingress controller NodePort service:
-
-      curl xxx:32255/myapp
-
-      "xxx" 	represents any NodeIP in NodePool pool01
-      "32255" 	represents the NodePort which pool01 nginx ingress controller service exposes
diff --git a/versioned_docs/version-v0.6.0/user-manuals/network/edge-ingress.md b/versioned_docs/version-v0.6.0/user-manuals/network/edge-ingress.md
index 17ada21438..ad20ae057a 100644
--- a/versioned_docs/version-v0.6.0/user-manuals/network/edge-ingress.md
+++ b/versioned_docs/version-v0.6.0/user-manuals/network/edge-ingress.md
@@ -3,17 +3,20 @@ title: Edge Ingress
 ---
 
 This document introduces how to access Edge services through Edge Ingress in Cloud Edge scenarios.
+Users can access Edge services from inside or outside the NodePools. For access from outside
+the NodePools, only a NodePort type ingress controller service is supported for now.
 
 Generally, it only needs 2 steps to use the Edge Ingress feature:
-1. Enable the ingress feature on NodePools which provide your desired services.
-2. Create and apply the ingress rule as K8S to access your desired services.
 
-Follow the details below to try the Edge Ingress feature:
+  1. Enable the ingress feature on NodePools which provide your desired services.
+  2. Create and apply the ingress rule as in native K8S to access your desired services.
+
+Follow the steps below to try the Edge Ingress feature:
 
 ---
 1.Enable the ingress feature on NodePools which provide your desired services
 ---
-YurtIngress operator is responsible for deploying the nginx ingress controller to the corresponding NodePools.
+YurtIngress operator is responsible for orchestrating multiple ingress controllers across the corresponding NodePools.
 Suppose you have created 4 NodePools in your OpenYurt cluster: pool01, pool02, pool03, pool04, and you want to
 enable edge ingress feature on pool01 and pool03, you can create the YurtIngress CR as below:
 
@@ -37,23 +40,23 @@ b). In spec, the "ingress_controller_replicas_per_pool" represents the ingress c
 It is used for the HA usage scenarios.
 
 c). In spec, the "pools" represents the pools list on which you want to enable ingress feature.
-Currently it only supports the pool name, but it can be extended to support pool personalized configruations in future.
+Currently it only supports the pool name, and it can be extended to support pool-personalized configurations in the future.
 
 
 2). Apply the YurtIngress CR yaml file
 
-    kubectl apply -f yurtingress-test.yaml
+    #kubectl apply -f yurtingress-test.yaml
     yurtingress.apps.openyurt.io/yurtingress-singleton created
 
 Then you can get the YurtIngress CR to check the status:
 
-    kubectl get ying
+    #kubectl get ying
     NAME                    NGINX-INGRESS-VERSION   REPLICAS-PER-POOL   READYNUM   NOTREADYNUM   AGE
     yurtingress-singleton   0.48.1                  1                   2          0             3m13s
 
-When the ingress controller is enabled successfully, a per-pool NodePort service is created to expose the ingress controller serivce:
+When the ingress controller is enabled successfully, a per-pool NodePort service is created to expose the ingress controller service:
 
-    kubectl get svc -A
+    #kubectl get svc -n ingress-nginx
-    ingress-nginx   pool01-ingress-nginx-controller   NodePort    192.167.107.123   <none>    80:32255/TCP,443:32275/TCP   53m
-    ingress-nginx   pool03-ingress-nginx-controller   NodePort    192.167.48.114    <none>    80:30531/TCP,443:30916/TCP   53m
+    pool01-ingress-nginx-controller   NodePort    192.167.107.123   <none>    80:32255/TCP,443:32275/TCP   53m
+    pool03-ingress-nginx-controller   NodePort    192.167.48.114    <none>    80:30531/TCP,443:30916/TCP   53m
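+
+The NodePort of a pool's ingress controller service can also be retrieved programmatically, for example:
+
+    #kubectl get svc pool01-ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.ports[0].nodePort}'
+    32255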
 
@@ -61,16 +64,18 @@ Notes:
 
 a). "ying" is the shortName of YurtIngress resource.
 
-b). Currently YurtIngress only supports the fixed nginx ingress controller version, we will enhance it in future to support user configurable
-nginx ingress controller images/versions.
+b). Currently YurtIngress only supports a fixed nginx ingress controller version; it can be enhanced to support user-configurable
+nginx ingress controller images/versions in the future.
+
+c). When the "READYNUM" equals the number of pools defined in the YurtIngress CR, the ingress feature is ready on all the pools you specified.
 
-c). When the "READYNUM" equals the pool number you defined in the YurtIngress CR, it represents the ingress feature is ready on all the pool you defined.
+d). If the "NOTREADYNUM" stays non-zero, you can check the YurtIngress CR for the status information.
+You can also check the corresponding deployments and pods to figure out why the ingress is not ready yet.
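+For example (a sketch, assuming ingress is enabled on pool01):
+
+    #kubectl describe ying yurtingress-singleton
+    #kubectl get deployment pool01-ingress-nginx-controller -n ingress-nginx
+    #kubectl get pods -n ingress-nginx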
 
-d). If the "NOTREADYNUM" is not 0 all the times, you can use "kubectl describe ying yurtingress-singleton" to check the details for the reasons.
-Also you can check the corresponding deployment (xxx-ingress-nginx-controller, "xxx" represents the pool name) to figure out the reasons why the
-ingress is not ready yet.
+e). Every NodePool on which ingress is enabled successfully exposes a NodePort type service for users to access the nginx ingress controller.
 
-e). For every NodePool which ingress is enable successfully, it exposes a NodePort type service for users to access the nginx ingress controller.
+f). When the ingress controllers are orchestrated to the specified NodePools, an "ingress-nginx" namespace will be created, and all the
+namespace-related resources will be created under it.
 
 ---
 2.Create and apply the ingress rule as K8S to access your desired services
@@ -78,26 +83,90 @@ e). For every NodePool which ingress is enable successfully, it exposes a NodePo
 When the step 1 above is done, you have successfully deployed the nginx ingress controller to the related NodePools, and the following
 ingress user experience is totally consistent with K8S.
 
-Suppose your app workload is deployed to several NodePools(e.g. pool01 and pool03), and it exposes a global service(e.g. myapp service), and you
-want to access the service provided by pool01:
+Suppose your app workload is deployed to several NodePools and it exposes a global service, for example:
+
+      apiVersion: apps/v1
+      kind: Deployment
+      metadata:
+        name: pool01-deployment
+        labels:
+          app: echo
+      spec:
+        replicas: 2
+        selector:
+          matchLabels:
+            app: echo
+        template:
+          metadata:
+            labels:
+              app: echo
+          spec:
+            containers:
+            - name: echo-app
+              image: hashicorp/http-echo
+              args:
+                - "-text=echo from nodepool pool01"
+              imagePullPolicy: IfNotPresent
+            nodeSelector:
+              apps.openyurt.io/nodepool: pool01
+      ---
+
+      apiVersion: apps/v1
+      kind: Deployment
+      metadata:
+        name: pool03-deployment
+        labels:
+          app: echo
+      spec:
+        replicas: 2
+        selector:
+          matchLabels:
+            app: echo
+        template:
+          metadata:
+            labels:
+              app: echo
+          spec:
+            containers:
+            - name: echo-app
+              image: hashicorp/http-echo
+              args:
+                - "-text=echo from nodepool pool03"
+              imagePullPolicy: IfNotPresent
+            nodeSelector:
+              apps.openyurt.io/nodepool: pool03
+      ---
+
+      kind: Service
+      apiVersion: v1
+      metadata:
+        name: echo-service
+      spec:
+        selector:
+          app: echo
+        ports:
+          - port: 5678
+
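+You can save the manifests above to a file (for example: echo-apps.yaml) and apply them, then check that each pool runs its own replicas:
+
+      #kubectl apply -f echo-apps.yaml
+      #kubectl get pods -l app=echo -o wide
+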
+
+If you want to access the service provided by pool01:
 
 1). Create the ingress rule yaml file: (for example: ingress-myapp.yaml)
 
-    apiVersion: extensions/v1beta1
-    kind: Ingress
-    metadata:
-      name: ingress-myapp
-      annotations:
-        ingress.kubernetes.io/rewrite-target: /
-    spec:
-      ingressclassName: pool01
-      rules:
-      - http:
-          paths:
-            - path: /myapp
-              backend:
-              serviceName: myapp-service
-              servicePort: xxx
+      apiVersion: extensions/v1beta1
+      kind: Ingress
+      metadata:
+        name: ingress-pool01
+        annotations:
+          kubernetes.io/ingress.class: pool01
+          nginx.ingress.kubernetes.io/rewrite-target: /
+      spec:
+        rules:
+        - http:
+            paths:
+              - path: /echo
+                backend:
+                  serviceName: echo-service
+                  servicePort: 5678
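+
+Note that the extensions/v1beta1 Ingress API is removed in newer K8S versions. On such clusters, an equivalent rule (a sketch based on the networking.k8s.io/v1 API) would look like:
+
+      apiVersion: networking.k8s.io/v1
+      kind: Ingress
+      metadata:
+        name: ingress-pool01
+        annotations:
+          kubernetes.io/ingress.class: pool01
+          nginx.ingress.kubernetes.io/rewrite-target: /
+      spec:
+        rules:
+        - http:
+            paths:
+              - path: /echo
+                pathType: Prefix
+                backend:
+                  service:
+                    name: echo-service
+                    port:
+                      number: 5678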
 
 Notes:
 
@@ -108,14 +177,16 @@ b). The ingress CR definition may be different for different K8S versions, so yo
 
 2). Apply the ingress rule yaml file:
 
-      kubectl apply -f ingress-myapp.yaml
+      #kubectl apply -f ingress-myapp.yaml
-      ingress.extensions/ingress-myapp created
+      ingress.extensions/ingress-pool01 created
 
 
 
 After all the steps above are done successfully, you can verify the edge ingress feature through the ingress controller NodePort service:
 
-      curl xxx:32255/myapp
+      #curl xxx:32255/echo
 
       "xxx" 	represents any NodeIP in NodePool pool01
       "32255" 	represents the NodePort which pool01 nginx ingress controller service exposes
+
+      It should return "echo from nodepool pool01" every time.
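+
+The NodeIP and NodePort can also be looked up from the cluster, for example (a sketch, assuming the nodes in pool01 carry the "apps.openyurt.io/nodepool" label):
+
+      NODE_IP=$(kubectl get nodes -l apps.openyurt.io/nodepool=pool01 -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
+      NODE_PORT=$(kubectl get svc pool01-ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.ports[0].nodePort}')
+      curl ${NODE_IP}:${NODE_PORT}/echo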
diff --git a/versioned_sidebars/version-v0.5.0-sidebars.json b/versioned_sidebars/version-v0.5.0-sidebars.json
index c94e3eb8ae..4be91fba98 100644
--- a/versioned_sidebars/version-v0.5.0-sidebars.json
+++ b/versioned_sidebars/version-v0.5.0-sidebars.json
@@ -58,8 +58,7 @@
                 {
                     "Network": [
                         "version-v0.5.0/user-manuals/network/edge-pod-network",
-                        "version-v0.5.0/user-manuals/network/service-topology",
-                        "version-v0.5.0/user-manuals/network/edge-ingress"
+                        "version-v0.5.0/user-manuals/network/service-topology"
                     ]
                 },
                 {