This repository has been archived by the owner on Jul 16, 2024. It is now read-only.

Commit

Updates for running healthchecks out of a sidecar container (#7)
* Changes and updates to allow the Consul clusters to get configs and execute checks from a sidecar container

* Terraform formatting

* Update to a newer version of tflint

* Refresh apk in Circle
tfhartmann authored Feb 16, 2018
1 parent e7ed5dd commit f162ce7
Showing 6 changed files with 216 additions and 45 deletions.
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -15,7 +15,7 @@ jobs:
command: if [ `terraform fmt | wc -c` -ne 0 ]; then echo "Some terraform files need to be formatted, run 'terraform fmt' to fix"; exit 1; fi
- run:
name: "get tflint"
command: apk add wget ; wget https://github.com/wata727/tflint/releases/download/v0.4.2/tflint_linux_amd64.zip ; unzip tflint_linux_amd64.zip
command: apk update ; apk add wget ; wget https://github.com/wata727/tflint/releases/download/v0.5.4/tflint_linux_amd64.zip ; unzip tflint_linux_amd64.zip
- run:
name: "install tflint"
command: mkdir -p /usr/local/tflint/bin ; export PATH=/usr/local/tflint/bin:$PATH ; install tflint /usr/local/tflint/bin
38 changes: 38 additions & 0 deletions .github/CODEOWNERS
@@ -0,0 +1,38 @@
# This is a comment.
# Each line is a file pattern followed by one or more owners.

# These owners will be the default owners for everything in
# the repo. Unless a later match takes precedence,
# @global-owner1 and @global-owner2 will be requested for
# review when someone opens a pull request.
#* @global-owner1 @global-owner2
* @FitnessKeeper/devops

# Order is important; the last matching pattern takes the most
# precedence. When someone opens a pull request that only
# modifies JS files, only @js-owner and not the global
# owner(s) will be requested for a review.
#*.js @js-owner

# You can also use email addresses if you prefer. They'll be
# used to look up users just like we do for commit author
# emails.
#*.go [email protected]

# In this example, @doctocat owns any files in the build/logs
# directory at the root of the repository and any of its
# subdirectories.
#/build/logs/ @doctocat

# The `docs/*` pattern will match files like
# `docs/getting-started.md` but not further nested files like
# `docs/build-app/troubleshooting.md`.
#docs/* [email protected]

# In this example, @octocat owns any file in an apps directory
# anywhere in your repository.
#apps/ @octocat

# In this example, @doctocat owns any file in the `/docs`
# directory in the root of your repository.
#/docs/ @doctocat
12 changes: 11 additions & 1 deletion README.md
@@ -15,12 +15,14 @@ This module
- Deploys registrator
- Deploys oauth2_proxy containers to proxy OAuth requests through to the Consul UI

This module supports two modes. If you pass a single ECS cluster ID in `ecs_cluster_ids`, the module deploys a single service called "consul-$env". If you pass two IDs into the array, two services are created: consul-$env-primary and consul-$env-secondary. This lets you spread Consul across two separate ECS clusters and two separate autoscaling groups, so you can redeploy ECS instances without affecting the stability of the Consul cluster.
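
As a rough, hypothetical sketch of calling the module (the module source path and every value below are placeholders, and several other required variables from the lists that follow are omitted for brevity):

```hcl
# Hypothetical invocation; the source path and all values are placeholders.
module "consul" {
  source         = "github.com/FitnessKeeper/terraform-ecs-consul" # placeholder source
  env            = "staging"
  alb_log_bucket = "my-alb-log-bucket"
  dns_zone       = "example.com"
  join_ec2_tag   = "consul-staging"
  subnets        = ["subnet-aaaa1111", "subnet-bbbb2222"]

  # A single cluster ID deploys one "consul-staging" service; passing a second ID
  # splits the deployment into consul-staging-primary and consul-staging-secondary.
  ecs_cluster_ids = ["${aws_ecs_cluster.primary.id}", "${aws_ecs_cluster.secondary.id}"]
}
```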


----------------------
#### Required
- `alb_log_bucket` - S3 bucket to send ALB logs to
- `dns_zone` - Zone where the Consul UI ALB will be created. This should *not* be consul.tld.com
- `ecs_cluster_ids` - List of ECS cluster ARNs. The list must contain at least one entry and can have up to two elements; any elements beyond the first two are currently ignored.
- `env` - Environment to deploy into; typically dev/staging/prod
- `join_ec2_tag` - EC2 tag value which Consul will search for in order to generate a list of IPs to join. See https://github.com/hashicorp/consul-ec2-auto-join-example for more examples.
- `subnets` - List of subnets used to deploy the Consul ALB
@@ -33,15 +35,23 @@ This module

#### Optional

- `consul_image` - Image to use when deploying Consul
- `consul_memory_reservation` - The soft limit (in MiB) of memory to reserve for the container; defaults to 32
- `cluster_size` - Consul cluster size. This must be at least 3; defaults to 3
- `datacenter_name` - Optional override for the datacenter name
- `enable_script_checks` - Controls whether health checks that execute scripts are enabled on this agent; defaults to false
- `definitions` - List of Consul service and health check definitions (see the sketch after this list)
- `healthcheck_image` - Image to use when deploying the health check agent; defaults to the fitnesskeeper/consul-healthchecks:latest image
- `healthcheck_memory_reservation` - The soft limit (in MiB) of memory to reserve for the container; defaults to 32
- `oauth2_proxy_htpasswd_file` - Path to the htpasswd file; defaults to /conf/htpasswd
- `join_ec2_tag_key` - EC2 tag key which Consul searches to generate a list of IPs to join; defaults to Name
- `raft_multiplier` - An integer multiplier used by Consul servers to scale key Raft timing parameters (https://www.consul.io/docs/guides/performance.html); defaults to 5
- `region` - AWS region; defaults to us-east-1
- `registrator_image` - Image to use when deploying the registrator agent; defaults to the gliderlabs registrator:latest image
- `registrator_memory_reservation` - The soft limit (in MiB) of memory to reserve for the container; defaults to 32
- `oauth2_proxy_provider` - OAuth provider; defaults to github
- `oauth2_proxy_github_team` - List of teams that should have access; defaults to an empty list (allow all)
- `service_minimum_healthy_percent` - The minimum healthy percent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment
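
The task definition in this commit starts the Consul agent with `-config-dir=/consul_check_definitions` and shares that directory with the health-check sidecar, so definition files the sidecar drops there are loaded by the agent. How the fitnesskeeper/consul-healthchecks image turns the `definitions` list into files is not shown in this diff; purely as an illustrative assumption, a standard Consul service-and-check definition file placed in that directory (for example a hypothetical `/consul_check_definitions/web.json`, with placeholder service name, port, and health endpoint) could look like this:

```json
{
  "service": {
    "name": "web",
    "port": 80,
    "check": {
      "id": "web-http",
      "name": "web HTTP health check",
      "http": "http://localhost:80/health",
      "interval": "30s",
      "timeout": "5s"
    }
  }
}
```

Checks that execute scripts rather than HTTP calls would additionally require `enable_script_checks` to be set to true.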

Usage
-----
84 changes: 78 additions & 6 deletions files/consul.json
@@ -2,7 +2,9 @@
{
"name": "consul_cluster-${env}",
"image": "${image}",
"memoryReservation": 512,
"essential": true,
"cpu": 0,
"memoryReservation": ${consul_memory_reservation},
"environment": [
{
"name": "CONSUL_LOCAL_CONFIG",
@@ -22,7 +24,15 @@
}
],
"command": [
"agent", "-server", "-bootstrap-expect=3", "-ui", "-client=0.0.0.0", "-dns-port=53"
"agent", "-server", "-bootstrap-expect=3", "-ui", "-client=0.0.0.0", "-dns-port=53", "-config-dir=/consul_check_definitions"
],
"volumesFrom": [
{
"sourceContainer": "consul-healthchecks-${env}"
}
],
"portMappings": [

],
"logConfiguration": {
"logDriver": "awslogs",
@@ -33,18 +43,70 @@
}
}
},
{
"name": "consul-healthchecks-${env}",
"image": "${healthcheck_image}",
"essential": true,
"cpu": 0,
"memoryReservation": ${healthcheck_memory_reservation},
"environment": [
{
"name": "CHECKS",
"value": "${definitions}"
},
{
"name": "S3_BUCKET",
"value": "${s3_backup_bucket}"
}
],
"mountPoints": [
{
"sourceVolume": "docker-sock",
"containerPath": "/var/run/docker.sock"
},
{
"sourceVolume": "consul-check-definitions",
"containerPath": "/consul_check_definitions"
}
],
"portMappings": [

],
"volumesFrom": [

],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${awslogs_group}",
"awslogs-region": "${awslogs_region}",
"awslogs-stream-prefix": "${awslogs_stream_prefix}"
}
}
},
{
"name": "registrator-${env}",
"image": "gliderlabs/registrator",
"memoryReservation": 256,
"image": "${registrator_image}",
"essential": true,
"cpu": 0,
"memoryReservation": ${registrator_memory_reservation},
"environment": [

],
"command": [
"-retry-attempts=10", "-retry-interval=1000", "consul://localhost:8500"
"-retry-attempts=15", "-retry-interval=1000", "consul://localhost:8500"
],
"mountPoints": [
{
"sourceVolume": "docker-sock",
"containerPath": "/tmp/docker.sock"
}
],
"portMappings": [

],
"volumesFrom": [

],
"logConfiguration": {
"logDriver": "awslogs",
@@ -58,11 +120,21 @@
{
"name": "consul-ui-${env}",
"image": "fitnesskeeper/oauth2_proxy:add-basic-auth",
"essential": true,
"cpu": 0,
"memoryReservation": 128,
"portMappings": [
{
"containerPort": 4180
"containerPort": 4180,
"hostPort": 4180,
"protocol": "tcp"
}
],
"mountPoints": [

],
"volumesFrom": [

],
"environment": [
{
82 changes: 48 additions & 34 deletions main.tf
@@ -14,25 +14,31 @@ data "template_file" "consul" {
template = "${file("${path.module}/files/consul.json")}"

vars {
datacenter = "${coalesce(var.datacenter_name ,data.aws_vpc.vpc.tags["Name"])}"
env = "${var.env}"
enable_script_checks = "${var.enable_script_checks}"
enable_script_checks = "${var.enable_script_checks ? "true" : "false"}"
image = "${var.consul_image}"
join_ec2_tag_key = "${var.join_ec2_tag_key}"
join_ec2_tag = "${var.join_ec2_tag}"
awslogs_group = "consul-${var.env}"
awslogs_stream_prefix = "consul-${var.env}"
awslogs_region = "${var.region}"
sha_htpasswd_hash = "${var.sha_htpasswd_hash}"
oauth2_proxy_htpasswd_file = "${var.oauth2_proxy_htpasswd_file}"
oauth2_proxy_provider = "${var.oauth2_proxy_provider}"
oauth2_proxy_github_org = "${var.oauth2_proxy_github_org}"
oauth2_proxy_github_team = "${join(",", var.oauth2_proxy_github_team)}"
oauth2_proxy_client_id = "${var.oauth2_proxy_client_id}"
oauth2_proxy_client_secret = "${var.oauth2_proxy_client_secret}"
raft_multiplier = "${var.raft_multiplier}"
s3_backup_bucket = "${var.s3_backup_bucket}"
datacenter = "${coalesce(var.datacenter_name ,data.aws_vpc.vpc.tags["Name"])}"
definitions = "${join(" ", var.definitions)}"
env = "${var.env}"
enable_script_checks = "${var.enable_script_checks}"
enable_script_checks = "${var.enable_script_checks ? "true" : "false"}"
image = "${var.consul_image}"
registrator_image = "${var.registrator_image}"
healthcheck_image = "${var.healthcheck_image}"
consul_memory_reservation = "${var.consul_memory_reservation}"
registrator_memory_reservation = "${var.registrator_memory_reservation}"
healthcheck_memory_reservation = "${var.healthcheck_memory_reservation}"
join_ec2_tag_key = "${var.join_ec2_tag_key}"
join_ec2_tag = "${var.join_ec2_tag}"
awslogs_group = "consul-${var.env}"
awslogs_stream_prefix = "consul-${var.env}"
awslogs_region = "${var.region}"
sha_htpasswd_hash = "${var.sha_htpasswd_hash}"
oauth2_proxy_htpasswd_file = "${var.oauth2_proxy_htpasswd_file}"
oauth2_proxy_provider = "${var.oauth2_proxy_provider}"
oauth2_proxy_github_org = "${var.oauth2_proxy_github_org}"
oauth2_proxy_github_team = "${join(",", var.oauth2_proxy_github_team)}"
oauth2_proxy_client_id = "${var.oauth2_proxy_client_id}"
oauth2_proxy_client_secret = "${var.oauth2_proxy_client_secret}"
raft_multiplier = "${var.raft_multiplier}"
s3_backup_bucket = "${var.s3_backup_bucket}"
}
}

@@ -48,6 +54,11 @@ resource "aws_ecs_task_definition" "consul" {
name = "docker-sock"
host_path = "/var/run/docker.sock"
}

volume {
name = "consul-check-definitions"
host_path = "/consul_check_definitions"
}
}

resource "aws_cloudwatch_log_group" "consul" {
@@ -61,11 +72,12 @@ resource "aws_cloudwatch_log_group" "consul" {

# start service
resource "aws_ecs_service" "consul" {
count = "${length(var.ecs_cluster_ids) == 1 ? 1 : 0}"
name = "consul-${var.env}"
cluster = "${var.ecs_cluster_ids[0]}"
task_definition = "${aws_ecs_task_definition.consul.arn}"
desired_count = "${var.cluster_size * 2}" # This is not awesome, it lets new AS groups get added to the cluster before destruction.
count = "${length(var.ecs_cluster_ids) == 1 ? 1 : 0}"
name = "consul-${var.env}"
cluster = "${var.ecs_cluster_ids[0]}"
task_definition = "${aws_ecs_task_definition.consul.arn}"
desired_count = "${var.cluster_size * 2}" # This is not awesome, it lets new AS groups get added to the cluster before destruction.
deployment_minimum_healthy_percent = "${var.service_minimum_healthy_percent}"

placement_constraints {
type = "distinctInstance"
@@ -87,11 +99,12 @@ resource "aws_ecs_service" "consul" {
}

resource "aws_ecs_service" "consul_primary" {
count = "${length(var.ecs_cluster_ids) > 1 ? 1 : 0}"
name = "consul-${var.env}-primary"
cluster = "${var.ecs_cluster_ids[0]}"
task_definition = "${aws_ecs_task_definition.consul.arn}"
desired_count = "${var.cluster_size * 2 }" # This is not awesome, it lets new AS groups get added to the cluster before destruction.
count = "${length(var.ecs_cluster_ids) > 1 ? 1 : 0}"
name = "consul-${var.env}-primary"
cluster = "${var.ecs_cluster_ids[0]}"
task_definition = "${aws_ecs_task_definition.consul.arn}"
desired_count = "${var.cluster_size * 2 }" # This is not awesome, it lets new AS groups get added to the cluster before destruction.
deployment_minimum_healthy_percent = "${var.service_minimum_healthy_percent}"

placement_constraints {
type = "distinctInstance"
@@ -113,11 +126,12 @@ resource "aws_ecs_service" "consul_primary" {
}

resource "aws_ecs_service" "consul_secondary" {
count = "${length(var.ecs_cluster_ids) > 1 ? 1 : 0}"
name = "consul-${var.env}-secondary"
cluster = "${var.ecs_cluster_ids[1]}"
task_definition = "${aws_ecs_task_definition.consul.arn}"
desired_count = "${var.cluster_size * 2 }" # This is not awesome, it lets new AS groups get added to the cluster before destruction.
count = "${length(var.ecs_cluster_ids) > 1 ? 1 : 0}"
name = "consul-${var.env}-secondary"
cluster = "${var.ecs_cluster_ids[1]}"
task_definition = "${aws_ecs_task_definition.consul.arn}"
desired_count = "${var.cluster_size * 2 }" # This is not awesome, it lets new AS groups get added to the cluster before destruction.
deployment_minimum_healthy_percent = "${var.service_minimum_healthy_percent}"

placement_constraints {
type = "distinctInstance"