chore(*) update ecs examples (#1446) (#1451)
* fix(*) bootstrap pem decode error check

* chore(*) bump default version to 1.0.5

* chore(*) make basic standalone work

* chore(*) make multizone work

* docs(*) update README

* chore(*) split ingress to a separate template

Signed-off-by: Nikolay Nikolaev <[email protected]>
(cherry picked from commit 0362bca)

Co-authored-by: Nikolay Nikolaev <[email protected]>
mergify[bot] and Nikolay Nikolaev authored Jan 20, 2021
1 parent 2b80ba9 commit 89abe00
Showing 7 changed files with 414 additions and 147 deletions.
115 changes: 80 additions & 35 deletions examples/ecs/README.md
@@ -29,7 +29,12 @@ aws cloudformation deploy \
--template-file kuma-cp-standalone.yaml
```

The `kuma-vpc` stack is the default for the `VPCStackName` parameter. Note the `AllowedCidr` parameter and override it accordingly to enable access to the Kuma CP ports.
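
For example, access could be restricted to a single network by overriding that parameter at deploy time. This is only a sketch: the stack and template names match the standalone deployment above, while the CIDR value is a placeholder you would replace with your own range.

```shell
aws cloudformation deploy \
  --capabilities CAPABILITY_IAM \
  --stack-name kuma-cp \
  --template-file kuma-cp-standalone.yaml \
  --parameter-overrides \
    AllowedCidr="203.0.113.0/24"
```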

To remove the `kuma-cp` stack use:
```shell
aws cloudformation delete-stack --stack-name kuma-cp
```
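
Stack deletion is asynchronous, so when scripting a teardown you may want to block until it completes. This uses the standard AWS CLI waiter and is not specific to these templates:

```shell
# Returns once CloudFormation reports the stack as deleted (or fails on error).
aws cloudformation wait stack-delete-complete --stack-name kuma-cp
```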

### Global

@@ -44,7 +49,7 @@ aws cloudformation deploy \

### Remote

Setting up a remote `kuma-cp` is a three-step process. First, deploy the `kuma-cp` itself:

```bash
aws cloudformation deploy \
@@ -53,43 +58,37 @@ aws cloudformation deploy \
--template-file kuma-cp-remote.yaml
```

#### OPTIONAL: Configure `kumactl` to access the API
Find the public IP address of the remote or standalone `kuma-cp` and use it in the command below.

```bash
export PUBLIC_IP=<ip address>
kumactl config control-planes add --name=ecs --address=http://$PUBLIC_IP:5681 --overwrite
```

### Install the Zone Ingress

For cross-zone communication Kuma needs the Ingress DP deployed. Like every other dataplane (see details in the `workload` section below), it needs a dataplane token, which can be generated with:
```shell
ssh root@<kuma-cp-remote-ip> "wget --header='Content-Type: application/json' --post-data='{\"mesh\": \"default\", \"type\": \"ingress\"}' -qO- http://localhost:5681/tokens"
```

Then simply deploy the ingress itself:

```shell
aws cloudformation deploy \
--capabilities CAPABILITY_IAM \
--stack-name ingress \
--template-file remote-ingress.yaml \
--parameter-overrides \
DPToken="<DP_TOKEN_VALUE>"
```
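
As a convenience, the two steps can be chained by capturing the token in a shell variable and passing it straight to the deploy command. This is only a sketch of the same commands shown above; the control-plane host placeholder is still yours to fill in.

```shell
# Capture the ingress dataplane token emitted by the control plane.
export INGRESS_TOKEN=$(ssh root@<kuma-cp-remote-ip> \
  "wget --header='Content-Type: application/json' --post-data='{\"mesh\": \"default\", \"type\": \"ingress\"}' -qO- http://localhost:5681/tokens")

# Deploy the ingress stack with the freshly generated token.
aws cloudformation deploy \
  --capabilities CAPABILITY_IAM \
  --stack-name ingress \
  --template-file remote-ingress.yaml \
  --parameter-overrides \
    DPToken="$INGRESS_TOKEN"
```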

### Install the Kuma DNS

The services within the Kuma mesh are exposed through their names (as defined in the `kuma.io/service` tag) in the `.mesh` DNS zone. In the default workload example that would be `httpbin.mesh`.
Run the following command to create the necessary forwarding rules in Route 53 and leverage the integrated DNS server in `kuma-cp`.

```bash
@@ -106,18 +105,25 @@ Note: We strongly recommend exposing the Kuma-CP instances behind a load balance
### Install the workload

The `workload` template provides a basic example of how `kuma-dp` can be run as a sidecar container alongside an arbitrary, single-port service container.
In order to run the `kuma-dp` container, we have to issue a token. The token can be generated using the Admin API of the Kuma CP.

In this example we'll show the simplest way to generate it, by executing this command alongside the `kuma-cp`:
```bash
ssh root@<kuma-cp-ip> "wget --header='Content-Type: application/json' --post-data='{\"mesh\": \"default\"}' -qO- http://localhost:5681/tokens"
```
The password is `root`; as noted in the beginning, these are sample deployments and it is not advisable to use such credentials in production.

The generated token is valid for all Dataplanes in the `default` mesh. Kuma also allows you to generate tokens based on the Dataplane's name and tags.
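
For example, a name- or tag-scoped token could also be generated with `kumactl` instead of the raw HTTP call. This is a sketch, assuming the control plane you pointed `kumactl` at earlier allows token generation from where you run it (see the security note below); the dataplane name and tag values are illustrative.

```shell
# Token restricted to a single dataplane name in the default mesh.
kumactl generate dataplane-token --mesh=default --name=dp-httpbin-1

# Token restricted by tag, e.g. to any dataplane of the httpbin service.
kumactl generate dataplane-token --mesh=default --tag kuma.io/service=httpbin
```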

Note: Kuma allows a much more advanced and secure way to expose the `/tokens` endpoint. For this it needs an `HTTPS` endpoint configured on port `5682`, as well as a client certificate set up for authentication. The full procedure is available in the Kuma security documentation:
[Data plane proxy authentication](https://kuma.io/docs/1.0.5/documentation/security/#data-plane-proxy-to-control-plane-communication),
[User to control plane communication](https://kuma.io/docs/1.0.5/documentation/security/#user-to-control-plane-communication)

#### Standalone

```bash
aws cloudformation deploy \
--capabilities CAPABILITY_IAM \
@@ -129,6 +135,7 @@ aws cloudformation deploy \
```

#### Remote

```bash
aws cloudformation deploy \
--capabilities CAPABILITY_IAM \
@@ -137,11 +144,49 @@ aws cloudformation deploy \
--parameter-overrides \
DesiredCount=2 \
DPToken="<DP_TOKEN_VALUE>" \
CPAddress="http://zone-1-controlplane.kuma.io:5681"
CPAddress="https://zone-1-controlplane.kuma.io:5678"
```

The `workload` template has a lot of parameters, so it can be customized for many scenarios, with different workload images, service names, ports, etc. Find more information in the template itself.
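
As an illustration, a customized deployment might look like the sketch below. The parameter names are the ones used elsewhere in this README (`WorkloadName`, `WorkloadImage`, `WorkloadManagementPort`, `DPToken`); every value in angle brackets is a placeholder, the stack name is arbitrary, and for a remote zone you would also pass `CPAddress` as in the examples above.

```shell
aws cloudformation deploy \
  --capabilities CAPABILITY_IAM \
  --stack-name workload-custom \
  --template-file workload.yaml \
  --parameter-overrides \
    WorkloadName=<service-name> \
    WorkloadImage=<docker-image> \
    WorkloadManagementPort=<service-port> \
    DPToken="<DP_TOKEN_VALUE>"
```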

## A second zone example
Here is an example of how to run a second workload with the same SSH server in a second zone:

First, create the second zone:

```shell
kumactl generate tls-certificate --type=server --cp-hostname zone-2-controlplane.kuma.io
export KEY=$(cat key.pem)
export CERT=$(cat cert.pem)
aws cloudformation deploy \
--capabilities CAPABILITY_IAM \
--stack-name kuma-cp-remote-2 \
--template-file kuma-cp-remote.yaml \
--parameter-overrides \
ServerCert=$CERT \
ServerKey=$KEY \
Zone=zone-2
```
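
The `<DP_TOKEN_VALUE>` used below has to come from the zone-2 control plane. Here is a sketch reusing the same token-generation approach shown earlier; the host placeholder is yours to fill in:

```shell
ssh root@<kuma-cp-remote-2-ip> "wget --header='Content-Type: application/json' --post-data='{\"mesh\": \"default\"}' -qO- http://localhost:5681/tokens"
```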

Then deploy the workload in the second zone:

```shell
aws cloudformation deploy \
--capabilities CAPABILITY_IAM \
--stack-name workload-2 \
--template-file workload.yaml \
--parameter-overrides \
WorkloadName=ssh \
WorkloadImage=sickp/alpine-sshd:latest \
WorkloadManagementPort=22 \
CPAddress="https://zone-2-controlplane.kuma.io:5678" \
DPToken="<DP_TOKEN_VALUE>"
```

Finally, log in to the new workload container and access the `httpbin.mesh` service:

```shell
ssh root@<workload-2-ip>
wget -qO- httpbin.mesh
```

# Future work

8 changes: 6 additions & 2 deletions examples/ecs/kuma-cp-global.yaml
@@ -1,5 +1,5 @@
AWSTemplateFormatVersion: "2010-09-09"
Description: Kuma Global Control Plane on ECS
Parameters:
  VPCStackName:
    Type: String
@@ -8,7 +8,7 @@ Parameters:
      to locate and reference resources created by that stack.
  Image:
    Type: String
    Default: "kong-docker-kuma-docker.bintray.io/kuma-cp:1.0.5"
    Description: The name of the kuma-cp docker image
  AllowedCidr:
    Type: String
@@ -173,8 +173,12 @@ Resources:
          Essential: true
          Image: !Ref Image
          PortMappings:
            - ContainerPort: 5680
              Protocol: tcp
            - ContainerPort: 5681
              Protocol: tcp
            - ContainerPort: 5682
              Protocol: tcp
            - ContainerPort: 5685
              Protocol: tcp
          User: root:root # needed for UDP port 53 binding
