# Microservice Template (see video)
A scalable, monitored template for cluster infrastructure.
- Provision servers with Packer and Chef
- Deploy and scale with Terraform
- Service discovery with Consul
- Data collection with collectd and statsd
- Analysis and visualization with InfluxDB and Grafana
- Sign up for the Amazon Web Services free tier here. It is extremely important to ensure the region in your AWS console is set to "N. California", because the defaults in later steps are all configured for this region. If you decide to use a different region, you will need to specify it in later steps where noted.
- Create Amazon IAM credentials for deployment. While signed in to the AWS console, visit the user admin page, create a new user, and note their security credentials (access key ID and secret key). Then attach the "Power User" policy to the newly created user.
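If you prefer scripting this step over clicking through the console, a rough equivalent with the AWS CLI looks like the following (the user name here is just an example):

```sh
aws iam create-user --user-name microservice-deployer
aws iam attach-user-policy --user-name microservice-deployer \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
aws iam create-access-key --user-name microservice-deployer  # prints the key ID and secret
```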
- Clone this repo, including its Chef recipe submodules.

```sh
git clone --recursive https://github.com/begriffs/microservice-template.git
# if you've already cloned the repo you can do:
git submodule update --init
```
- Install Packer and Terraform. On a Mac you can install them with Homebrew:

```sh
brew tap homebrew/binary
brew install packer
brew install terraform
```
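Afterwards you can confirm both tools are on your PATH:

```sh
packer version
terraform version
```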
- Create machine images (AMIs) using your credentials. As you run each of the commands below, write down the AMI id it generates; they are of the form `ami-[hash]`, and you will need to remember which command created which AMI. AWS credentials are required and can be provided on the command line or read from the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. Optionally, the region can be specified on the command line; if you specify a region, you must also provide the matching source AMI.
#### Region to source AMI mappings
Region | Source AMI |
---|---|
us-east-1 | ami-b66ed3de |
us-west-1 | ami-4b6f650e |
us-west-2 | ami-b5a7ea85 |
eu-west-1 | ami-607bd917 |
ap-southeast-1 | ami-ac5c7afe |
ap-southeast-2 | ami-63f79559 |
ap-northeast-1 | ami-4985b048 |
sa-east-1 | ami-8737829a |
If no region is specified on the command line, the default region `us-west-1` is used with a source_ami of `ami-4b6f650e`.

```sh
packer build -var 'aws_access_key=xxx' -var 'aws_secret_key=xxx' -var 'region=us-east-1' -var 'source_ami=ami-b66ed3de' consul.json
packer build -var 'aws_access_key=xxx' -var 'aws_secret_key=xxx' statsd.json
packer build -var 'aws_access_key=xxx' -var 'aws_secret_key=xxx' influx.json
packer build -var 'aws_access_key=xxx' -var 'aws_secret_key=xxx' grafana.json
packer build -var 'aws_access_key=xxx' -var 'aws_secret_key=xxx' rabbitmq.json
# for haskell workers (optional)
packer build -var 'aws_access_key=xxx' -var 'aws_secret_key=xxx' halcyon.json
```
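If you would rather keep the keys off the command line, the same builds work with the credentials exported as environment variables, per the note above; a minimal sketch:

```sh
export AWS_ACCESS_KEY_ID=xxx
export AWS_SECRET_ACCESS_KEY=xxx
# region/source_ami vars are still needed when not building in us-west-1
packer build consul.json
```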
- Deploy machine images.

```sh
cp terraform/terraform.tfvars{.example,}
```

Edit `terraform/terraform.tfvars` and fill in the AMI ids created by the previous step, the name of the keypair you created on EC2, and your AWS keys. Optionally provide a region matching the region of your AMIs; a default region of `us-west-1` is used otherwise. If you built a halcyon AMI, you will also need to specify the number of halcyon workers for that AMI to be used.

Now the fun part. Go into the `terraform` directory and run `make`. At the end it will output the public IP address of the monitoring server for the cluster. You can use it to watch server health and resource usage.
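For reference, this step boils down to:

```sh
cd terraform
make  # the final output includes the monitoring server's public IP
```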
The server exposes web interfaces for several services.
Port | Service |
---|---|
80 | Grafana charts |
8500 | Consul server status and key/vals |
8080 | InfluxDB admin panel |
8086 | InfluxDB API used by Grafana |
15672 | RabbitMQ management console |
Influx has been configured with two databases, `metrics` and `grafana`. Cluster data accumulates in the former, and Grafana stores your chart settings in the latter. The Influx user `grafana` (password `grafpass`) has full access to both databases. RabbitMQ is set up with user `guest`, password `guest`.
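As a quick sanity check that the services are up, you can hit a couple of them from your workstation, substituting the monitoring server's public IP; the endpoints below are the standard Consul and RabbitMQ management APIs rather than anything specific to this template:

```sh
MONITOR_IP=x.x.x.x                                            # public IP printed by make
curl -s http://$MONITOR_IP:8500/v1/catalog/nodes              # Consul: nodes in the cluster
curl -s -u guest:guest http://$MONITOR_IP:15672/api/overview  # RabbitMQ management API
curl -sI http://$MONITOR_IP/                                  # Grafana answers on port 80
```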
The cluster exposes StatsD server(s) at `statsd.node.consul`. Your applications should send them lots of events. The StatsD protocol is UDP-based and incurs little application delay. The StatsD server relays everything to InfluxDB, which makes it accessible for graphing.
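For example, a host inside the cluster could emit a test counter with nothing but netcat; 8125 is the conventional StatsD UDP port (check it matches this cluster's configuration) and the metric name is made up:

```sh
echo "myapp.test:1|c" | nc -u -w1 statsd.node.consul 8125
```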