diff --git a/docs/docs/0.7.3/documentation/deployments.md b/docs/docs/0.7.3/documentation/deployments.md
index 99b4472fd..27be7a1f3 100644
--- a/docs/docs/0.7.3/documentation/deployments.md
+++ b/docs/docs/0.7.3/documentation/deployments.md
@@ -21,7 +21,7 @@ This is the simplest deployment mode for Kuma, and the default one.
* **Data planes**: The data planes connect to the control plane regardless of where they are being deployed.
* **Service Connectivity**: Every data plane proxy must be able to connect to every other data plane proxy regardless of where they are being deployed.
-This mode implies that we can deploy Kuma and its data plane proxies in a standalone networking topology mode so that the service connectivity from every data plane proxy can be estabilished directly to every other data plane proxy.
+This mode implies that we can deploy Kuma and its data plane proxies in a standalone networking topology mode so that the service connectivity from every data plane proxy can be established directly to every other data plane proxy.
@@ -29,7 +29,7 @@ This mode implies that we can deploy Kuma and its data plane proxies in a standa
Although standalone mode can support complex multi-cluster or hybrid deployments (Kubernetes + VMs) as long as the networking requirements are satisfied, typically in most use cases our connectivity cannot be flattened out across multiple clusters. Therefore standalone mode is usually a great choice within the context of one cluster (ie: within one Kubernetes cluster or one AWS VPC).
-For those situations where the standalone deployment mode doesn't satistfy our architecture, Kuma provides a [multi-zone mode](#multi-zone-mode) which is more powerful and provides a greater degree of flexibility in more complex environments.
+For those situations where the standalone deployment mode doesn't satisfy our architecture, Kuma provides a [multi-zone mode](#multi-zone-mode) which is more powerful and provides a greater degree of flexibility in more complex environments.
### Usage
@@ -61,16 +61,16 @@ When the mode is not specified, Kuma will always start in `standalone` mode by d
-This is a more advanced deployment mode for Kuma that allow us to support service meshes that are running on many clusters, including hybrid deployments on both Kubernetes and VMs.
+This is a more advanced deployment mode for Kuma that allows us to support service meshes running on many clusters, including hybrid deployments on both Kubernetes and VMs.
* **Control plane**: There is one `global` control plane, and many `remote` control planes. A global control plane only accepts connections from remote control planes.
-* **Data planes**: The data planes connect to the closest `remote` control plane in the same zone. Additionally, we need to start an `ingress` data plane on every zone.
+* **Data planes**: The data planes connect to the closest `remote` control plane in the same zone. Additionally, we need to start an `ingress` data plane in every zone to enable cross-zone communication between data planes in different zones.
-* **Service Connectivity**: Automatically resolved via the built-in DNS resolver that ships with Kuma. When a service wants to consume another service, it will resolve the DNS address of the desired service with Kuma, and Kuma will respond with a Virtual IP address, that corresponds to that service in the Kuma service domain.
+* **Service Connectivity**: Automatically resolved via the built-in DNS resolver that ships with Kuma. When a service wants to consume another service, it will resolve the DNS address of the desired service with Kuma, and Kuma will respond with a Virtual IP address that corresponds to that service in the Kuma service domain.
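+
+For example, assuming a data plane proxy runs a service tagged `kuma.io/service: backend` (a hypothetical name used only for illustration), another service in the mesh could reach it via the `.mesh` DNS zone:
+
+```sh
+$ curl http://backend.mesh:80
+```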
:::tip
-We can support multiple isolated service meshes thanks to Kuma's multi-tenancy support, and workloads from both Kubernetes or any other supported Universal environment can participate in the Service Mesh across different regions, clouds and datacenters while not compromizing the ease of use and still allowing for end-to-end service connectivity.
+We can support multiple isolated service meshes thanks to Kuma's multi-tenancy support, and workloads from Kubernetes or any other supported Universal environment can participate in the Service Mesh across different regions, clouds, and datacenters while not compromising the ease of use and still allowing for end-to-end service connectivity.
:::
When running in multi-zone mode, we introduce the notion of a `global` and `remote` control planes for Kuma:
-* **Global**: this control plane will be used to configure the global Service Mesh [policies](/policies) that we want to apply to our data plane proxies. Data plane proxies **cannot** connect direclty to a global control plane, but can connect to `remote` control planes that are being deployed on each underlying zone that we want to include as part of the Service Mesh (can be a Kubernetes cluster, or a VM based cluster). Only one deployment of the global control plane is required, and it can be scaled horizontally.
+* **Global**: this control plane will be used to configure the global Service Mesh [policies](/policies) that we want to apply to our data plane proxies. Data plane proxies **cannot** connect directly to a global control plane, but can connect to `remote` control planes that are being deployed on each underlying zone that we want to include as part of the Service Mesh (can be a Kubernetes cluster, or a VM based cluster). Only one deployment of the global control plane is required, and it can be scaled horizontally.
* **Remote**: we are going to have as many remote control planes as the number of underlying Kubernetes or VM zones that we want to include in a Kuma [mesh](/docs/latest/policies/mesh/). Remote control planes will accept connections from data planes that are being started in the same underlying zone, and they will themselves connect to the `global` control plane in order to fetch the service mesh policies that have been configured. Remote control plane policy APIs are read-only and **cannot** accept Service Mesh policies to be directly configured on them. They can be scaled horizontally within their zone.
In this deployment, a Kuma cluster is made of one global control plane and as many remote control planes as the number of zones that we want to support:
@@ -81,7 +81,7 @@ In this deployment, a Kuma cluster is made of one global control plane and as ma
-In a multi-zone deployment mode, services will be running on multiple platforms, clouds or Kubernetes clusters (which are identifies as `zones` in Kuma). While all of them will be part of a Kuma mesh by connecting their data plane proxies to the local `remote` control plane in the same zone, implementing service to service connectivity would be tricky since a source service may not know where a destination service is being hosted at (for instance, in another zone).
+In a multi-zone deployment mode, services will be running on multiple platforms, clouds, or Kubernetes clusters (which are identified as `zones` in Kuma). While all of them will be part of a Kuma mesh by connecting their data plane proxies to the local `remote` control plane in the same zone, implementing service to service connectivity would be tricky since a source service may not know where a destination service is hosted (for instance, in another zone).
To implement easy service connectivity, Kuma ships with:
@@ -134,7 +134,7 @@ $ helm install kuma --namespace kuma-system --set controlPlane.mode=global kuma/
:::
::: tab "Universal"
-Running the Global Control Plane setting up the relevant environment variale
+Run the Global Control Plane, setting the relevant environment variable:
```sh
$ KUMA_MODE=global kuma-cp run
```
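+
+As a quick sanity check, we can point `kumactl` at the newly started global control plane (a sketch, assuming the default API port `5681` on the same host):
+
+```sh
+$ kumactl config control-planes add --name=global --address=http://localhost:5681
+```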
@@ -144,35 +144,38 @@ $ KUMA_MODE=global kuma-cp run
### Remote control plane
Start the `remote` control planes in each zone that will be part of the multi-zone Kuma deployment.
+To install a `remote` control plane, you need to assign a zone name to each of them and point them to the Global CP.
:::: tabs :options="{ useUrlFragment: false }"
::: tab "Kubernetes"
```sh
-$ kumactl install control-plane --mode=remote --zone=<zone-name> --kds-global-address grpcs://<global-kds-address> | kubectl apply -f -
-$ kumactl install ingress | kubectl apply -f -
+$ kumactl install control-plane \
+ --mode=remote \
+ --zone=<zone-name> \
+ --ingress-enabled \
+ --kds-global-address grpcs://<global-kds-address> | kubectl apply -f -
$ kumactl install dns | kubectl apply -f -
```
-Get the Remote Kuma Ingress Address:
-
-```bash
-$ kubectl get services -n kuma-system
-NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-kuma-system kuma-control-plane ClusterIP 10.105.12.133 5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP 90s
-kuma-system kuma-ingress LoadBalancer 10.105.10.20 34.68.185.18 10001:30991/TCP 29s
-```
-
-In this example this would be `kuma-ingress` at `34.68.185.18:10001`. This will be used as `` below.
-
::: tip
-Kuma DNS installation supports several flavors of Core DNS and Kube DNS. We recommend checking the configuration of the Kubernetes cluster after deploying Kuma remote control plane to ensure evrything is as expected.
+Kuma DNS installation supports several flavors of Core DNS and Kube DNS. We recommend checking the configuration of the Kubernetes cluster after deploying Kuma remote control plane to ensure everything is as expected.
:::
::: tab "Helm"
-To install the Remote Control plane we need to provide the following parameters `controlPlane.mode=remote`,`controlPlane.zone=`, `ingress.enabled=true` and `controlPlane.kdsGlobalAddress=grpcs://`:
+To install the Remote Control plane we need to provide the following parameters:
+ * `controlPlane.mode=remote`
+ * `controlPlane.zone=<zone-name>`
+ * `ingress.enabled=true`
+ * `controlPlane.kdsGlobalAddress=grpcs://<global-kds-address>`
```bash
+$ helm install kuma --namespace kuma-system --set controlPlane.mode=remote,controlPlane.zone=<zone-name>,ingress.enabled=true,controlPlane.kdsGlobalAddress=grpcs://<global-kds-address> kuma/kuma
+$ kumactl install dns | kubectl apply -f -
```
+
+::: tip
+Kuma DNS installation supports several flavors of Core DNS and Kube DNS. We recommend checking the configuration of the Kubernetes cluster after deploying the Kuma remote control plane to ensure everything is as expected.
+
+To install DNS we need to use `kumactl`. It reads the state of the control plane, and therefore cannot be part of the Helm chart. You can track the issue of adding it to Helm [here](https://github.com/kumahq/kuma/issues/1124).
:::
::: tab "Universal"
@@ -180,73 +183,61 @@ Run the `kuma-cp` in `remote` mode.
```sh
$ KUMA_MODE=remote \
-KUMA_MULTICLUSTER_REMOTE_ZONE= \
-KUMA_MULTICLUSTER_REMOTE_GLOBAL_ADDRESS=grpcs:// ./kuma-cp run
+ KUMA_MULTIZONE_REMOTE_ZONE=<zone-name> \
+ KUMA_MULTIZONE_REMOTE_GLOBAL_ADDRESS=grpcs://<global-kds-address> \
+ ./kuma-cp run
```
-Where `` is the name of the zone mathcing one of the Zone resources to be created at the Global CP. `` is the public address as obtained during the Global CP deployment step.
+Where `<zone-name>` is the name of the zone (a matching `Zone` resource will be created automatically at the Global CP), and `<global-kds-address>` is the public address obtained during the Global CP deployment step.
-Add an `ingress` dataplane, so `kuma-cp` can expose its services for cross-cluster communication.
+Add an `ingress` data plane proxy, so `kuma-cp` can expose its services for cross-zone communication.
```bash
$ echo "type: Dataplane
mesh: default
name: ingress-01
networking:
- address: 127.0.0.1
- ingress: {}
+ address: 127.0.0.1 # address that is routable within the cluster
+ ingress:
+ publicAddress: 10.0.0.1 # an address which other clusters can use to consume this ingress
+ publicPort: 10000 # a port which other clusters can use to consume this ingress
inbound:
- port: 10000
tags:
- kuma.io/service: ingress" | kumactl apply -f -
-
-$ kumactl generate dataplane-token --dataplane=ingress-01 > /tmp/cluster1-ingress-token
-$ kuma-dp run --name=ingress-01 --cp-address=http://localhost:15681 --dataplane-token-file=/tmp/cluster1-ingress-token --log-level=debug
+ kuma.io/service: ingress" > ingress-dp.yaml
+$ kumactl generate dataplane-token --type=ingress > /tmp/ingress-token
+$ kuma-dp run \
+ --cp-address=https://localhost:5678 \
+ --dataplane-token-file=/tmp/ingress-token \
+ --dataplane-file=ingress-dp.yaml
```
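+
+To verify that the ingress data plane proxy is up and connected, we can list the data plane proxies known to this remote control plane (a sketch, assuming `kumactl` is configured against the remote control plane):
+
+```sh
+$ kumactl inspect dataplanes
+```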
-Adding more dataplanes can be done locally by following the Use Kuma section in the [installation page](/install).
+Adding more data plane proxies can be done locally by following the Use Kuma section in the [installation page](/install).
:::
::::
+### Verify control plane connectivity
-### Create the Zone resources
-
-:::: tabs :options="{ useUrlFragment: false }"
-::: tab "Kubernetes"
-
-We can now create a Zone resource for each Zone we will add. These can be added at any point in time, before or after the Remote CP is deployed. The format of the resource is as follows:
-```yaml
-$ echo "apiVersion: kuma.io/v1alpha1
-kind: Zone
-mesh: default
-metadata:
- name: zone-1
-spec:
- ingress:
- address: " | kubectl apply -f -
-```
+When a remote control plane connects to the global control plane, the `Zone` resource is created automatically in the global control plane.
+You can verify whether a remote control plane is connected to the global control plane by inspecting the list of zones in the global control plane GUI (`<global-address>:5681/gui/#/zones`) or by running `kumactl get zones`.
-:::
-::: tab "Universal"
+Additionally, if you deployed the remote control plane with Ingress, it should be visible in the Ingress tab of the GUI.
+Cross-zone communication between services is only available if Ingress has a public address and public port.
+Note that on Kubernetes, Kuma automatically tries to pick up the public address and port. Depending on the LB implementation of your Kubernetes provider, you may need to wait a couple of minutes to receive the address.
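+
+On Kubernetes, one way to check whether the public address has been assigned is to inspect the ingress Service (a sketch, assuming the default `kuma-system` namespace and `kuma-ingress` Service name):
+
+```bash
+$ kubectl get service kuma-ingress -n kuma-system
+```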
-We can now create a Zone resource for each Zone we will add. These can be added at any point in time, before or after the Remote CP is deployed. The format of the resource is as follows:
-```yaml
-$ echo "type: Zone
-name: zone-1
-ingress:
- address: " | kumactl apply -f -
-```
+### Enable mTLS
-::::
+Cross-zone communication between services is only possible when mTLS is enabled, because Ingress routes connections using SNI.
+Make sure you [enable mTLS](../policies/mutual-tls.md) and apply [Traffic Permission](../policies/traffic-permissions.md).
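+
+For example, a minimal mTLS configuration with a builtin CA might look like this on Universal (a sketch; see the mTLS policy page above for all available options):
+
+```yaml
+type: Mesh
+name: default
+mtls:
+  enabledBackend: ca-1
+  backends:
+  - name: ca-1
+    type: builtin
+```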
### Using the multi-zone deployment
-To utilize the multi-zonse Kuma deployment follow the steps below
+To utilize the multi-zone Kuma deployment, follow the steps below:
:::: tabs :options="{ useUrlFragment: false }"
::: tab "Kubernetes"
-To figure out the service names that we can use in the applications for cross-cluster communication, we can look at the
-service tag in the deployed dataplanes:
+To figure out the service names that we can use in the applications for cross-zone communication, we can look at the
+service tag in the deployed data plane proxies:
```bash
$ kubectl get dataplanes -n echo-example -o yaml | grep service
@@ -263,7 +254,7 @@ Kuma DNS assigns to services in the `.mesh` DNS zone. Therefore, we have three w
```
The first method still works, but is limited to endpoints implemented within the same Kuma zone (i.e. the same Kubernetes cluster).
-The second option allows to consume a service that is distributed across the Kuma cluster (bound by the same `global` control plane). For
-example the there can be an endpoint running in another Kuma zone in a different data-center.
+The second option allows consuming a service that is distributed across the Kuma cluster (bound by the same `global` control plane). For
+example, there can be an endpoint running in another Kuma zone in a different data-center.
Since most HTTP clients (such as `curl`) will default to port 80, the port can be omitted, like in the third option above.
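+
+For instance, using the service tag from the example above (a sketch):
+
+```bash
+$ curl http://echo-server_echo-example_svc_1010.mesh
+```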
:::
@@ -282,7 +273,6 @@ networking:
servicePort: 1010
tags:
kuma.io/service: echo-server_echo-example_svc_1010
- version: "2"
```
If a multi-zone Universal control plane is used, the service tag has no such limitation.
@@ -292,14 +282,18 @@ And to consume the distributed service from a Universal deployment, where the ap
```yaml
type: Dataplane
mesh: default
-name: backend-02
+name: web-02
networking:
address: 127.0.0.1
+ inbound:
+ - port: 10000
+ servicePort: 10001
+ tags:
+ kuma.io/service: web
outbound:
- port: 20012
tags:
kuma.io/service: echo-server_echo-example_svc_1010
- version: "2"
```
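+
+With the outbound defined above, the application deployed next to `web-02` can consume the distributed `echo-server` service through the outbound port on localhost (a sketch):
+
+```sh
+$ curl http://localhost:20012
+```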
:::
diff --git a/docs/docs/0.7.3/documentation/dps-and-data-model.md b/docs/docs/0.7.3/documentation/dps-and-data-model.md
index 064e6ff0e..00a3ad139 100644
--- a/docs/docs/0.7.3/documentation/dps-and-data-model.md
+++ b/docs/docs/0.7.3/documentation/dps-and-data-model.md
@@ -299,10 +299,30 @@ For an in-depth example on deploying Kuma with [Kong for Kubernetes](https://git
## Ingress
-To implement cross-zone communication when Kuma is deployed in a [multi-zone](/docs/0.7.3/documentation/deployments/#multi-zone-mode) mode, the `Dataplane` model introduces the `Ingress` mode. Such dataplane is not attached to any particular workload, but instead it is bound to that particular zone.
-The specifics of the `Ingress` dataplane are described in the `networking.ingress` dictionary in the YAML resource. For the time being this one is empty, instead it denotes the `Ingress` mode of the dataplane.
+To implement cross-zone communication when Kuma is deployed in a [multi-zone](/docs/0.7.3/documentation/deployments/#multi-zone-mode) mode, the `Dataplane` model introduces the `Ingress` mode. Such a data plane proxy is not attached to any particular workload; instead, it is bound to that particular zone.
+All the requests that are sent from one zone to another will be directed to the proper instance by the Ingress.
+The specifics of the `Ingress` data plane are described in the `networking.ingress` dictionary in the YAML resource.
+Ingress has a regular address and one inbound, just like a regular data plane proxy; this address is routable within the local Ingress zone. It also has the following public coordinates:
+* `networking.ingress.publicAddress` - an IP address or hostname which will be used by data plane proxies from other zones
+* `networking.ingress.publicPort` - a port which will be used by data plane proxies from other zones
-### Universal
+An Ingress that doesn't have this information is not taken into account when generating the Envoy configuration, because it cannot be accessed by data plane proxies from other zones.
+
+:::: tabs :options="{ useUrlFragment: false }"
+::: tab "Kubernetes"
+The recommended way to deploy an `Ingress` data plane proxy in Kubernetes is to use `kumactl`, or the Helm charts as specified in [multi-zone](/docs/0.7.3/documentation/deployments/#remote-control-plane). It works as a separate deployment of a single-container pod.
+
+Kuma will try to resolve `networking.ingress.publicAddress` and `networking.ingress.publicPort` automatically by checking the Service associated with this Ingress.
+
+If the Service type is Load Balancer, Kuma will wait for the public IP to be resolved. It may take a couple of minutes to receive the public IP, depending on the LB implementation of your Kubernetes provider.
+
+If the Service type is Node Port, Kuma will take the External IP of the first Node in the cluster and combine it with the Node Port.
+
+You can provide your own public address and port using the following annotations on the Ingress deployment:
+* `kuma.io/ingress-public-address`
+* `kuma.io/ingress-public-port`
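+
+For example, the annotations could be set on the pod template of the Ingress deployment (a sketch; the Deployment name and values are illustrative):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: kuma-ingress
+  namespace: kuma-system
+spec:
+  # ... replicas, selector, and containers omitted
+  template:
+    metadata:
+      annotations:
+        kuma.io/ingress-public-address: "ingress.example.com"
+        kuma.io/ingress-public-port: "10000"
+```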
+:::
+::: tab "Universal"
In Universal mode the dataplane resource should be deployed as follows:
@@ -311,19 +331,18 @@ type: Dataplane
mesh: default
name: dp-ingress
networking:
- address: 10.0.0.1
- ingress: {}
+ address: 192.168.0.1
+ ingress:
+ publicAddress: 10.0.0.1
+ publicPort: 10000
inbound:
- port: 10001
tags:
kuma.io/service: ingress
```
+::::
-The `networking.address` is and externally accessible IP or one behind a LoadBalancer. The `inbound` port shall be accessible from the other Zones that are about to communicate with the zone that deploys that particular `Ingress` dataplane.
-
-### Kubernetes
-
-The recommended way to deploy an `Ingress` dataplane in Kubernetes is to use `kumactl`, or the Helm charts as specified in [multi-zone](/docs/0.7.3/documentation/deployments/#remote-control-plane). It works as a separate deployment of a single-container pod.
+The Ingress deployment can be scaled horizontally. Many instances can share the same public address and port, because they can be put behind one load balancer.
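+
+On Kubernetes, this can be as simple as scaling the Ingress Deployment (a sketch, assuming the default names used by `kumactl install`):
+
+```sh
+$ kubectl scale deployment kuma-ingress -n kuma-system --replicas=2
+```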
## Direct access to services