Inspired by https://stackoverflow.com/a/45968410
Copy tf.sh somewhere, /usr/local/bin/tf for instance.
Terraform is installed based on the terraform.version file, via tfswitch.
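As an illustration (the destination path is just the suggestion above, and the last line is only a sanity check that tfswitch is available):

# put the wrapper on your PATH and make it executable
sudo cp tf.sh /usr/local/bin/tf
sudo chmod +x /usr/local/bin/tf

# tfswitch must be installed so the wrapper can honour terraform.version
command -v tfswitch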
Copy tf-prompt.sh somewhere and source it from your bash profile.
The tf_prompt function returns (<workspace>/<stack>). Call $(tf_prompt) somewhere in your prompt variables.
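A minimal sketch of wiring this into a bash prompt (the source path and the PS1 layout are illustrative):

# in ~/.bashrc
source /usr/local/bin/tf-prompt.sh
# single quotes so tf_prompt is re-evaluated each time the prompt is drawn
PS1='$(tf_prompt) \u@\h:\w\$ '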
This script is used to log in with Privileged User Management.
Copy pum-aws.py to the same location as tf.sh.
Pre-requisites:
- python 3
- run pip install -r requirements.txt
.tf/                   Internal configuration. Indicates the root directory.
  .workspace           Current workspace
  .stack               Current stack
envvars/
  <workspace 1>.tfvars Variables for <workspace 1>
  <workspace 2>.tfvars Variables for <workspace 2>
  <...>
stacks/
  <stack 1>/
    backend.tf         Partial backend configuration, only declares the type (S3) and region
  <stack 2>/
    backend.tf
  <...>
state-management/      The directory for the backend of your workspaces and stacks
  <...>
accounts               Mapping between workspace names and AWS accounts
backend.tf             Global terraform backend file. Will be copied or symlinked in each stack
global.tf              Global terraform file. Will be copied or symlinked in each stack
global.tfvars          Global variables, shared by all stacks
You can name your workspaces the way you want (see the example below), but you need to have:
- a mapping in accounts: <workspace>=<AWS account>
- an AWS profile (in .aws/config and .aws/credentials) named <workspace>
- a file in envvars named <workspace>.tfvars
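For example, for a workspace named acc in AWS account 123456789012 (both are placeholders):

echo 'acc=123456789012' >> accounts   # map the workspace to its AWS account
aws configure --profile acc           # create a matching AWS profile
touch envvars/acc.tfvars              # workspace-specific variables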
You must have an S3 bucket named terraform-state-<account id> in each account, and your profile must have:
- read access to the state files of the stacks you consume
- write access to the state files of the stacks you modify
You can use the scripts in state-management to create the buckets and read/write policies.
Policies are not assigned automatically.
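To check that the bucket exists and that your profile can reach it, something like the following works (account id and profile name are placeholders matching the example above):

# exits silently when the bucket is reachable, errors otherwise
aws s3api head-bucket --bucket terraform-state-123456789012 --profile acc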
backend.tf should contain something like:

terraform {
  backend "s3" {
    region  = "eu-west-1"
    encrypt = true
  }
}
Execute the script from a directory inside a root (i.e. one of the parent directories must contain a .tf folder).
Select a stack:
tf stack select network
Select a workspace:
tf workspace select acc
- creating a workspace requires write permissions because the buckets are encrypted with KMS keys
Login with PUM:
Run pum-aws to retrieve the credentials
Do some stuff:
tf plan
tf apply
Use tf help to see supported tf (custom and terraform) commands.
global.tf, global.tfvars and envvars/<workspace>.tfvars are automatically used when running:
apply
destroy
import
plan
refresh
validate
apply always uses -auto-approve=false. In an automation scenario, use ./tf.sh apply -auto-approve=true
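Roughly speaking, running tf plan inside a stack is equivalent to passing the shared files to plain terraform yourself; the -var-file flags and relative paths below are an illustration of that idea, not the wrapper's exact internals:

# rough sketch only — the wrapper adds these files for you
terraform plan \
  -var-file=../../global.tfvars \
  -var-file=../../envvars/acc.tfvars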
Command usage: tf.sh backend <subcommand> <args>
How to create the backend:
- run terraform init && terraform apply locally
- run tf.sh backend init -migrate-state
- if you don't have the local state anymore you'll have to import the resources manually
- check here for more information
- any plan/apply needs to use the following syntax: tf.sh backend <command> (e.g. plan/apply/etc.)
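Putting those steps together, a first-time backend creation followed by a later change could look like:

# first-time creation: apply locally, then migrate the local state to S3
terraform init && terraform apply
tf.sh backend init -migrate-state

# subsequent changes to the backend always go through the wrapper
tf.sh backend plan
tf.sh backend apply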
Switching between workspaces:
- run tf.sh workspace select <workspace>
- run tf.sh backend init
- tf ctx
tf deps will generate a graph in the dot language to show dependencies between stacks.
tf deps status will add a color to each node based on the status returned by plan:
- green: up-to-date
- yellow: changes pending
- red: error
See examples: dependency graph 1, dependency graph 2
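Assuming the graph is printed to stdout and Graphviz is installed, it can be rendered like this (file names are illustrative):

tf deps | dot -Tpng -o deps.png                 # plain dependency graph
tf deps status | dot -Tpng -o deps-status.png   # graph colored by plan status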
The wrapper expects an executable named terraform-0.11 to be on the path.
You can use version 0.12 or 0.14 in this way:
- create a file named terraform.version at the root of your repo, or in a stack to use that version for a single stack
- the file should contain 0.12.0 or 0.14.0 to select the version
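For example (the stack name is the one used earlier and purely illustrative):

echo "0.14.0" > terraform.version                  # pin the whole repository
echo "0.12.0" > stacks/network/terraform.version   # pin a single stack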
If you need to pass variables containing credentials, you can add a file named terraform.tfvars at the root of your repository.
This file should be excluded from source control. It will be loaded during plan/apply/validate/destroy/import/refresh commands.
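A minimal sketch (the variable name is a placeholder, not something the wrapper expects):

echo "terraform.tfvars" >> .gitignore         # keep credentials out of source control
echo 'api_token = "..."' > terraform.tfvars   # placeholder credential variable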
console
env
init
providers
push
workspace
- Show current workspace and stack in bash prompt (in progress, see tf-prompt.sh)
- Bash autocompletion
- Initialize a project.
- Use symlinks if supported, instead of copying global.tf as global.symlink.tf
- ? Find current stack from current directory, to be able to use cd stacks/xxx instead of tf stack select xxx
- the workspace is tied to the AWS account. Can't have multiple workspaces under the same AWS account
- a terraform.version file is required in the root directory or at the stack level
  - the contents of the file need to exactly match a Terraform release version (e.g. 1.5.7)