Add CONSUL_RETRY_JOIN_WAN env and restructure examples (#48)
Resolves #47 by enabling multi-region deployments with automatic configuration of the relevant Consul options. The related environment variables can be overridden and are documented in the README.
tjcelaya authored Jan 5, 2018
1 parent eff97e4 commit e4f9031
Showing 11 changed files with 317 additions and 23 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -1 +1,2 @@
_env
_env*
examples/triton-multi-dc/docker-compose-*.yml
70 changes: 55 additions & 15 deletions README.md
@@ -18,7 +18,7 @@ When run locally for testing, we don't have access to Triton CNS. The `local-com
1. [Get a Joyent account](https://my.joyent.com/landing/signup/) and [add your SSH key](https://docs.joyent.com/public-cloud/getting-started).
1. Install the [Docker Toolbox](https://docs.docker.com/installation/mac/) (including `docker` and `docker-compose`) on your laptop or other environment, as well as the [Joyent Triton CLI](https://www.joyent.com/blog/introducing-the-triton-command-line-tool) (`triton` replaces our old `sdc-*` CLI tools).

Check that everything is configured correctly by running `./setup.sh`. This will check that your environment is setup correctly and will create an `_env` file that includes injecting an environment variable for a service name for Consul in Triton CNS. We'll use this CNS name to bootstrap the cluster.
Check that everything is configured correctly by changing to the `examples/triton` directory and executing `./setup.sh`. This will verify that your environment is set up correctly and will create an `_env` file that injects an environment variable with the Triton CNS service name for Consul. We'll use this CNS name to bootstrap the cluster.
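
For reference, the generated `_env` contains a single Consul bootstrap entry along the lines of the following (the account UUID and data center name are placeholders):

```bash
# Consul bootstrap via Triton CNS
CONSUL=consul.svc.<account-uuid>.us-east-1.cns.joyent.com
```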

```bash
$ docker-compose up -d
@@ -52,6 +52,60 @@ $ docker exec -it consul_consul_3 consul info | grep num_peers

```

### Run it with more than one datacenter!

Within the `examples/triton-multi-dc` directory, execute `./setup-multi-dc.sh`, passing as arguments the Triton profiles that belong to the desired data centers.
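
For example, assuming one Triton profile per data center (the profile names here are illustrative):

```bash
$ ./setup-multi-dc.sh us-east-1 us-sw-1
```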

Since interacting with multiple data centers requires switching between Triton profiles, it's easier to perform the following steps in separate terminals. It is also possible to perform all the steps for a single data center and then change profiles. Additionally, setting `COMPOSE_PROJECT_NAME` to match the profile or data center will help distinguish nodes in the Triton Portal and in `triton instance ls` listings.

One `_env-<PROFILE>` file and one `docker-compose-<PROFILE>.yml` file should be generated for each profile. Execute the following commands, once for each profile/datacenter, within `examples/triton-multi-dc`:

```bash
$ eval "$(TRITON_PROFILE=<PROFILE> triton env -d)"
# The following helps when executing docker-compose multiple times. Alternatively, pass the -f flag to each invocation of docker-compose.
$ export COMPOSE_FILE=docker-compose-<PROFILE>.yml
# The following is not strictly necessary but helps to discern between clusters. Alternatively, pass the -p flag to each invocation of docker-compose.
$ export COMPOSE_PROJECT_NAME=<PROFILE>
$ docker-compose up -d
Creating <PROFILE>_consul_1 ... done
$ docker-compose scale consul=3
```
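
Once the stack is up in every data center, the servers should join the WAN pool automatically via `CONSUL_RETRY_JOIN_WAN`. As a quick check (not part of the setup scripts), you can list WAN members from any node:

```bash
$ docker exec -it <PROFILE>_consul_1 consul members -wan
```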

Note: the `cns.joyent.com` hostnames cannot be resolved from outside the Triton data centers. To reach the Consul web UI from your own machine, change `cns.joyent.com` to `triton.zone` in the hostname.
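
For example, one way to derive an externally resolvable UI address, assuming the generated `_env-<PROFILE>` file defines `CONSUL`:

```bash
$ source _env-<PROFILE>
$ echo "http://${CONSUL/cns.joyent.com/triton.zone}:8500/ui/"
```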

## Environment Variables

- `CONSUL_DEV`: Enable development mode, allowing a node to self-elect as a cluster leader. Consul flag: [`-dev`](https://www.consul.io/docs/agent/options.html#_dev).
    - The following errors will occur if `CONSUL_DEV` is omitted and not enough Consul instances are deployed:
      ```
      [ERR] agent: failed to sync remote state: No cluster leader
      [ERR] agent: failed to sync changes: No cluster leader
      [ERR] agent: Coordinate update error: No cluster leader
      ```
- `CONSUL_DATACENTER_NAME`: Explicitly set the name of the data center in which Consul is running. Consul flag: [`-datacenter`](https://www.consul.io/docs/agent/options.html#datacenter).
    - If this variable is specified, it will be used as-is.
    - If not specified, automatic detection of the datacenter will be attempted. See [issue #23](https://github.com/autopilotpattern/consul/issues/23) for more details.
    - Consul's default of "dc1" will be used if none of the above apply.
- `CONSUL_BIND_ADDR`: Explicitly set the corresponding Consul configuration. This value will be set to `0.0.0.0` if `CONSUL_BIND_ADDR` is not specified and `CONSUL_RETRY_JOIN_WAN` is provided. Be aware of the security implications of binding the server to a public address, and consider setting up encryption or using a VPN to isolate WAN traffic from the public internet.
- `CONSUL_SERF_LAN_BIND`: Explicitly set the corresponding Consul configuration. This value will be set to the server's private address automatically if not specified. Consul flag: [`-serf-lan-bind`](https://www.consul.io/docs/agent/options.html#serf_lan_bind).
- `CONSUL_SERF_WAN_BIND`: Explicitly set the corresponding Consul configuration. This value will be set to the server's public address automatically if not specified. Consul flag: [`-serf-wan-bind`](https://www.consul.io/docs/agent/options.html#serf_wan_bind).
- `CONSUL_ADVERTISE_ADDR`: Explicitly set the corresponding Consul configuration. This value will be set to the server's private address automatically if not specified. Consul flag: [`-advertise-addr`](https://www.consul.io/docs/agent/options.html#advertise_addr).
- `CONSUL_ADVERTISE_ADDR_WAN`: Explicitly set the corresponding Consul configuration. This value will be set to the server's public address automatically if not specified. Consul flag: [`-advertise-addr-wan`](https://www.consul.io/docs/agent/options.html#advertise_addr_wan).
- `CONSUL_RETRY_JOIN_WAN`: Sets the remote datacenter addresses to join. Must be a valid HCL list (i.e. comma-separated, quoted addresses); see the example `_env` entry after this list. Consul flag: [`-retry-join-wan`](https://www.consul.io/docs/agent/options.html#retry_join_wan).
    - The following error will occur if `CONSUL_RETRY_JOIN_WAN` is provided but improperly formatted:
      ```
      ==> Error parsing /etc/consul/consul.hcl: ... unexpected token while parsing list: IDENT
      ```
- Gossip over the WAN requires the following ports to be reachable between data centers, so make sure adequate firewall rules are in place for them (this should happen automatically when using `docker-compose` with Triton):
    - `8300`: Server RPC port (TCP)
    - `8302`: Serf WAN gossip port (TCP + UDP)
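
As a concrete illustration of the `CONSUL_RETRY_JOIN_WAN` format, the entry appended to each `_env` file by `setup-multi-dc.sh` looks roughly like this (the account UUID and data center names are placeholders):

```bash
# Consul multi-DC bootstrap via Triton CNS
CONSUL_RETRY_JOIN_WAN="consul.svc.<account-uuid>.us-east-1.triton.zone","consul.svc.<account-uuid>.us-sw-1.triton.zone"
```
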
## Using this in your own composition

There are two ways to run Consul, and both come into play when deploying ContainerPilot: a cluster of Consul servers and individual Consul client agents.
@@ -82,20 +136,6 @@ services:

In our experience, including a Consul cluster within a project's `docker-compose.yml` can help developers understand and test how a service should be discovered and registered within a wider infrastructure context.

#### Environment Variables

- `CONSUL_DEV`: Enable development mode, allowing a node to self-elect as a cluster leader. Consul flag: [`-dev`](https://www.consul.io/docs/agent/options.html#_dev).
- The following errors will occur if `CONSUL_DEV` is omitted and not enough Consul instances are deployed:
```
[ERR] agent: failed to sync remote state: No cluster leader
[ERR] agent: failed to sync changes: No cluster leader
[ERR] agent: Coordinate update error: No cluster leader
```
- `CONSUL_DATACENTER_NAME`: Explicitly set the name of the data center in which Consul is running. Consul flag: [`-datacenter`](https://www.consul.io/docs/agent/options.html#datacenter).
- If this variable is specified it will be used as-is.
- If not specified, automatic detection of the datacenter will be attempted. See [issue #23](https://github.com/autopilotpattern/consul/issues/23) for more details.
- Consul's default of "dc1" will be used if none of the above apply.

### Clients

ContainerPilot utilizes Consul's [HTTP Agent API](https://www.consul.io/api/agent.html) for a handful of endpoints, such as `UpdateTTL`, `CheckRegister`, `ServiceRegister` and `ServiceDeregister`. Connecting ContainerPilot to Consul can be achieved by running Consul as a client to a cluster (mentioned above). It's easy to run this Consul client agent from ContainerPilot itself.
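
For orientation, these ContainerPilot operations map onto ordinary Consul agent HTTP endpoints. The calls below are hand-written `curl` equivalents against a local client agent, purely for illustration (the service name and check ID are made up):

```bash
# ServiceRegister: register a service with a TTL check
$ curl -X PUT -d '{"Name": "myapp", "Port": 8080, "Check": {"TTL": "10s"}}' \
    http://localhost:8500/v1/agent/service/register

# UpdateTTL: mark the TTL check as passing
$ curl -X PUT http://localhost:8500/v1/agent/check/pass/service:myapp

# ServiceDeregister: remove the service
$ curl -X PUT http://localhost:8500/v1/agent/service/deregister/myapp
```
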
64 changes: 62 additions & 2 deletions bin/consul-manage
@@ -6,8 +6,6 @@ set -eo pipefail
# been told to listen on.
#
preStart() {
_log "Updating consul advertise address"
sed -i "s/CONTAINERPILOT_CONSUL_IP/${CONTAINERPILOT_CONSUL_IP}/" /etc/consul/consul.hcl

if [ -n "$CONSUL_DATACENTER_NAME" ]; then
_log "Updating consul datacenter name (specified: '${CONSUL_DATACENTER_NAME}' )"
@@ -20,6 +18,46 @@ preStart() {
_log "Updating consul datacenter name (default: 'dc1')"
sed -i "s/CONSUL_DATACENTER_NAME/dc1/" /etc/consul/consul.hcl
fi

if [ -n "$CONSUL_RETRY_JOIN_WAN" ]; then
_log "Updating consul retry_join_wan field"
sed -i '/^retry_join_wan/d' /etc/consul/consul.hcl
echo "retry_join_wan = [${CONSUL_RETRY_JOIN_WAN}]" >> /etc/consul/consul.hcl

# translate_wan_addrs allows us to reach remote nodes through their advertise_addr_wan
sed -i '/^translate_wan_addrs/d' /etc/consul/consul.hcl
_log "Updating consul translate_wan_addrs field"
echo "translate_wan_addrs = true" >> /etc/consul/consul.hcl

# only set bind_addr = 0.0.0.0 if none was specified explicitly with CONSUL_BIND_ADDR
if [ -n "$CONSUL_BIND_ADDR" ]; then
updateConfigFromEnvOrDefault 'bind_addr' 'CONSUL_BIND_ADDR' "$CONTAINERPILOT_CONSUL_IP"
else
sed -i '/^bind_addr/d' /etc/consul/consul.hcl
_log "Updating consul field bind_addr to 0.0.0.0 CONSUL_BIND_ADDR was empty and CONSUL_RETRY_JOIN_WAN was not empty"
echo "bind_addr = \"0.0.0.0\"" >> /etc/consul/consul.hcl
fi
else
# if no WAN addresses were provided, set the bind_addr to the private address
updateConfigFromEnvOrDefault 'bind_addr' 'CONSUL_BIND_ADDR' "$CONTAINERPILOT_CONSUL_IP"
fi

IP_ADDRESS=$(hostname -i)

# the -serf-lan-bind flag corresponds to the serf_lan config field
# serf_lan sets the address Consul binds to for Serf LAN gossip within the data center
updateConfigFromEnvOrDefault 'serf_lan' 'CONSUL_SERF_LAN_BIND' "$CONTAINERPILOT_CONSUL_IP"

# the -serf-wan-bind flag corresponds to the serf_wan config field
# if this field is not set, WAN joins will be refused because the bind address will differ
# from the address used to reach the node
updateConfigFromEnvOrDefault 'serf_wan' 'CONSUL_SERF_WAN_BIND' "$IP_ADDRESS"

# advertise_addr tells nodes their private, routable address
updateConfigFromEnvOrDefault 'advertise_addr' 'CONSUL_ADVERTISE_ADDR' "$CONTAINERPILOT_CONSUL_IP"

# advertise_addr_wan tells nodes their public address for WAN communication
updateConfigFromEnvOrDefault 'advertise_addr_wan' 'CONSUL_ADVERTISE_ADDR_WAN' "$IP_ADDRESS"
}
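
# Illustration only: with CONSUL_RETRY_JOIN_WAN set and the other variables left at their
# defaults, preStart leaves /etc/consul/consul.hcl with entries roughly like (addresses
# are hypothetical):
#
#   retry_join_wan = ["consul.svc.<account>.us-sw-1.triton.zone"]
#   translate_wan_addrs = true
#   bind_addr = "0.0.0.0"
#   serf_lan = "10.0.0.5"
#   serf_wan = "165.225.10.20"
#   advertise_addr = "10.0.0.5"
#   advertise_addr_wan = "165.225.10.20"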

#
@@ -44,6 +82,28 @@ _log() {
echo " $(date -u '+%Y-%m-%d %H:%M:%S') containerpilot: $@"
}


#
# Defines $1 in the consul configuration as either an env or a default.
# This basically behaves like ${!name_of_var} and ${var:-default} together
# but separates the indirect reference from the default so it's more obvious
#
# Check if $2 is the name of a defined environment variable and use ${!2} to
# reference it indirectly.
#
# If it is not defined, use $3 as the value
#
updateConfigFromEnvOrDefault() {
_log "Updating consul field $1"
sed -i "/^$1/d" /etc/consul/consul.hcl

if [ -n "${!2}" ]; then
echo "$1 = \"${!2}\"" >> /etc/consul/consul.hcl
else
echo "$1 = \"$3\"" >> /etc/consul/consul.hcl
fi
}
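
# For example (hypothetical values): if CONSUL_ADVERTISE_ADDR is unset and
# CONTAINERPILOT_CONSUL_IP=10.0.0.5, then
#   updateConfigFromEnvOrDefault 'advertise_addr' 'CONSUL_ADVERTISE_ADDR' "$CONTAINERPILOT_CONSUL_IP"
# removes any existing advertise_addr line and appends: advertise_addr = "10.0.0.5"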

# ---------------------------------------------------
# parse arguments

2 changes: 1 addition & 1 deletion etc/consul.hcl
@@ -1,4 +1,4 @@
bind_addr = "CONTAINERPILOT_CONSUL_IP"
bind_addr = "0.0.0.0"
datacenter = "CONSUL_DATACENTER_NAME"
data_dir = "/data"
client_addr = "0.0.0.0"
3 changes: 2 additions & 1 deletion local-compose.yml → examples/compose/docker-compose.yml
@@ -7,8 +7,9 @@ services:
# created user-defined network and internal DNS for the name "consul".
# Nodes will use Docker DNS for the service (passed in via the CONSUL
# env var) to find each other and bootstrap the cluster.
# Note: Unless CONSUL_DEV is set, at least three instances are required for quorum.
consul:
build: .
image: autopilotpattern/consul:${TAG:-latest}
restart: always
mem_limit: 128m
ports:
23 changes: 23 additions & 0 deletions examples/triton-multi-dc/docker-compose-multi-dc.yml.template
@@ -0,0 +1,23 @@
version: '2.1'

services:

# Service definition for a Consul cluster in a single data center.
# Cloned by ./setup-multi-dc.sh once per profile
consul:
image: autopilotpattern/consul:${TAG:-latest}
labels:
- triton.cns.services=consul
- com.docker.swarm.affinities=["container!=~*consul*"]
restart: always
mem_limit: 128m
ports:
- 8300 # Server RPC port
- "8302/tcp" # Serf WAN port
- "8302/udp" # Serf WAN port
- 8500
env_file:
- ENV_FILE_NAME
network_mode: bridge
command: >
/usr/local/bin/containerpilot
160 changes: 160 additions & 0 deletions examples/triton-multi-dc/setup-multi-dc.sh
@@ -0,0 +1,160 @@
#!/bin/bash
set -e -o pipefail

help() {
echo
echo 'Usage: ./setup-multi-dc.sh <triton-profile1> [<triton-profile2> [...]]'
echo
echo 'Generates one _env file and docker-compose.yml file per triton profile, each of which'
echo 'is presumably associated with a different datacenter.'
}

if [ "$#" -lt 1 ]; then
help
exit 1
fi

# ---------------------------------------------------
# Top-level commands

#
# Check for triton profile $1 and output _env file named $2
#
generate_env() {
local triton_profile=$1
local output_file=$2

command -v docker >/dev/null 2>&1 || {
echo
tput rev # reverse
tput bold # bold
echo 'Docker is required, but does not appear to be installed.'
tput sgr0 # clear
echo 'See https://docs.joyent.com/public-cloud/api-access/docker'
exit 1
}
command -v triton >/dev/null 2>&1 || {
echo
tput rev # reverse
tput bold # bold
echo 'Error! Joyent Triton CLI is required, but does not appear to be installed.'
tput sgr0 # clear
echo 'See https://www.joyent.com/blog/introducing-the-triton-command-line-tool'
exit 1
}

# make sure Docker client is pointed to the same place as the Triton client
local docker_user=$(docker info 2>&1 | awk -F": " '/SDCAccount:/{print $2}')
local docker_dc=$(echo $DOCKER_HOST | awk -F"/" '{print $3}' | awk -F'.' '{print $1}')

local triton_user=$(triton profile get $triton_profile | awk -F": " '/account:/{print $2}')
local triton_dc=$(triton profile get $triton_profile | awk -F"/" '/url:/{print $3}' | awk -F'.' '{print $1}')
local triton_account=$(TRITON_PROFILE=$triton_profile triton account get | awk -F": " '/id:/{print $2}')

if [ ! "$docker_user" = "$triton_user" ] || [ ! "$docker_dc" = "$triton_dc" ]; then
echo
tput rev # reverse
tput bold # bold
echo 'Error! The Triton CLI configuration does not match the Docker CLI configuration.'
tput sgr0 # clear
echo
echo "Docker user: ${docker_user}"
echo "Triton user: ${triton_user}"
echo "Docker data center: ${docker_dc}"
echo "Triton data center: ${triton_dc}"
exit 1
fi

local triton_cns_enabled=$(TRITON_PROFILE=$triton_profile triton account get | awk -F": " '/cns/{print $2}')
if [ ! "true" == "$triton_cns_enabled" ]; then
echo
tput rev # reverse
tput bold # bold
echo 'Error! Triton CNS is required and not enabled.'
tput sgr0 # clear
echo
exit 1
fi

# setup environment file
if [ ! -f "$output_file" ]; then
echo '# Consul bootstrap via Triton CNS' >> $output_file
echo CONSUL=consul.svc.${triton_account}.${triton_dc}.cns.joyent.com >> $output_file
echo >> $output_file
else
echo "Existing _env file found at $1, exiting"
exit
fi
}


declare -a written
declare -a consul_hostnames

# check that we won't overwrite any _env files first
if [ -f "_env" ]; then
echo "Existing env file found, exiting: _env"
fi

# check the names of _env files we expect to generate
for profile in "$@"
do
if [ -f "_env-$profile" ]; then
echo "Existing env file found, exiting: _env-$profile"
exit 2
fi

if [ -f "_env-$profile" ]; then
echo "Existing env file found, exiting: _env-$profile"
exit 3
fi

if [ -f "docker-compose-$profile.yml" ]; then
echo "Existing docker-compose file found, exiting: docker-compose-$profile.yml"
exit 4
fi
done

# check that the docker-compose.yml template is in the right place
if [ ! -f "docker-compose-multi-dc.yml.template" ]; then
echo "Multi-datacenter docker-compose.yml template is missing!"
exit 5
fi

echo "profiles: $@"

# invoke ./setup.sh once per profile
for profile in "$@"
do
echo "Temporarily switching profile: $profile"
eval "$(TRITON_PROFILE=$profile triton env -d)"
generate_env $profile "_env-$profile"

unset CONSUL
source "_env-$profile"

consul_hostnames+=("\"${CONSUL//cns.joyent.com/triton.zone}\"")

written+=("_env-$profile")
done


# finalize _env and prepare docker-compose.yml files
for profile in "$@"
do
# add the CONSUL_RETRY_JOIN_WAN addresses to each _env
echo '# Consul multi-DC bootstrap via Triton CNS' >> _env-$profile
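# join the collected CNS hostnames with commas to form a valid HCL list, e.g. "host1","host2"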
echo "CONSUL_RETRY_JOIN_WAN=$(IFS=,; echo "${consul_hostnames[*]}")" >> _env-$profile

cp docker-compose-multi-dc.yml.template \
"docker-compose-$profile.yml"

sed -i '' "s/ENV_FILE_NAME/_env-$profile/" "docker-compose-$profile.yml"
done

echo "Wrote: ${written[@]}"
1 change: 1 addition & 0 deletions docker-compose.yml → examples/triton/docker-compose.yml
@@ -9,6 +9,7 @@ services:
image: autopilotpattern/consul:${TAG:-latest}
labels:
- triton.cns.services=consul
- com.docker.swarm.affinities=["container!=~*consul*"]
restart: always
mem_limit: 128m
ports:
1 change: 1 addition & 0 deletions setup.sh → examples/triton/setup.sh
@@ -42,6 +42,7 @@ check() {
# make sure Docker client is pointed to the same place as the Triton client
local docker_user=$(docker info 2>&1 | awk -F": " '/SDCAccount:/{print $2}')
local docker_dc=$(echo $DOCKER_HOST | awk -F"/" '{print $3}' | awk -F'.' '{print $1}')

TRITON_USER=$(triton profile get | awk -F": " '/account:/{print $2}')
TRITON_DC=$(triton profile get | awk -F"/" '/url:/{print $3}' | awk -F'.' '{print $1}')
TRITON_ACCOUNT=$(triton account get | awk -F": " '/id:/{print $2}')