From 761d3dcbec2ce427f61da561bb088c1a606a439d Mon Sep 17 00:00:00 2001 From: milldr Date: Tue, 29 Aug 2023 10:47:30 -0700 Subject: [PATCH 1/4] added fundamentals from refarch --- content/docs/fundamentals/_category_.json | 10 +- content/docs/fundamentals/atmos.md | 480 ++++++++++++++++++++++ content/docs/fundamentals/concepts.md | 102 +++-- content/docs/fundamentals/geodesic.md | 304 ++++++++++++++ content/docs/fundamentals/leapp.md | 21 + content/docs/fundamentals/stacks.md | 295 +++++++++++++ content/docs/fundamentals/terraform.md | 197 +++++++++ 7 files changed, 1370 insertions(+), 39 deletions(-) create mode 100644 content/docs/fundamentals/atmos.md create mode 100644 content/docs/fundamentals/geodesic.md create mode 100644 content/docs/fundamentals/leapp.md create mode 100644 content/docs/fundamentals/stacks.md create mode 100644 content/docs/fundamentals/terraform.md diff --git a/content/docs/fundamentals/_category_.json b/content/docs/fundamentals/_category_.json index e94ccc14b..6e536cde4 100644 --- a/content/docs/fundamentals/_category_.json +++ b/content/docs/fundamentals/_category_.json @@ -1,8 +1,10 @@ { - "label": "Fundamentals", - "position": 10, + "label": "Tools", + "collapsible": true, + "collapsed": true, + "position": 100, "link": { "type": "generated-index", - "description": "SweetOps fundamentals" + "title": "Tools" } -} +} \ No newline at end of file diff --git a/content/docs/fundamentals/atmos.md b/content/docs/fundamentals/atmos.md new file mode 100644 index 000000000..d2d794e0d --- /dev/null +++ b/content/docs/fundamentals/atmos.md @@ -0,0 +1,480 @@ +--- +title: "Atmos" +confluence: https://cloudposse.atlassian.net/wiki/spaces/REFARCH/pages/1186234624/Atmos +sidebar_position: 110 +custom_edit_url: https://github.com/cloudposse/refarch-scaffold/tree/main/docs/docs/fundamentals/tools/atmos.md +--- + +# Atmos + +`atmos` is both a command-line tool and Golang module for provisioning, managing and orchestrating workflows across various 
toolchains including `terraform` and `helmfile`.

The `atmos` tool is part of the SweetOps toolchain and was built to make DevOps and Cloud automation easier across multiple tools. It has direct support for automating Terraform and Helmfile. By utilizing [Stacks](/reference-architecture/fundamentals/tools/stacks), `atmos` enables you to effortlessly manage your Terraform and Helmfile [Components](/components) from your local machine, in your CI/CD pipelines, or using [Spacelift](/components/library/aws/spacelift/).

## Problem
A modern infrastructure depends on lots of tools like terraform, packer, helmfile, helm, kubectl, docker, etc. All these tools have varying degrees of configuration support, but most are not optimized for defining DRY configurations across dozens or hundreds of environments. Moreover, the configuration format differs between the tools, but usually boils down to some kind of key-value configuration in either JSON or YAML. This lack of configuration consistency poses a problem when we want to make it easy to declaratively define the settings that end-users should care about.

## Solution
We defined a β€œuniversal” configuration format that works for all the tools we use. When using terraform, helmfile, etc., we design our components as reusable building blocks that accept simple declarative parameters and offload all business logic to the tools themselves.

[We designed this configuration schema in YAML](https://learningactors.com/what-is-infrastructure-as-code-automating-your-infrastructure-builds/#:~:text=Infrastructure%20as%20code%20defined,a%20vastly%20larger%20scale.) and added convenient and robust deep-merging strategies that allow configurations to extend other configurations. As part of this, we support the OOP concepts of mixins, inheritance, and multiple inheritance - but all applied to the configuration.
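To build an intuition for what this deep-merging means, here is a minimal Python sketch (purely illustrative; the real implementation is in Go and supports richer merge strategies). The `catalog` and `stack` data below are hypothetical parsed-YAML maps, not real atmos configuration:

```python
# Simplified sketch of deep-merging an imported baseline with a stack override.
# Later sources win for scalars; nested maps are merged recursively.
def deep_merge(base, override):
    """Return base recursively merged with override; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# A hypothetical catalog baseline (imported) and a stack-level override
catalog = {"vars": {"instance_type": "t3.medium", "tags": {"team": "sre"}}}
stack = {"vars": {"instance_type": "m5.large", "tags": {"stage": "dev"}}}

effective = deep_merge(catalog, stack)
# The scalar is overridden while the nested "tags" maps are combined
```

The key property is that nested maps combine rather than replace wholesale, which is what lets a stack override a single variable from an imported baseline without restating the rest.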
We support YAML anchors to clean up complex blocks of configuration, folder structures, environment variables, and all kinds of tool-specific settings.

## Alternatives
There are a number of alternative tools to atmos that accomplish some aspects of what it does.

|**Tool** | **Description** | **Website**|
| ----- | ----- | ----- |
|terragrunt | A thin wrapper for Terraform that provides extra tooling for keeping configurations DRY | [https://github.com/gruntwork-io/terragrunt](https://github.com/gruntwork-io/terragrunt)|
|astro | A tool for managing multiple Terraform executions as a single command | [https://github.com/uber/astro](https://github.com/uber/astro)|
|terraspace | A Terraform framework that provides an opinionated project structure and tooling | [https://github.com/boltops-tools/terraspace](https://github.com/boltops-tools/terraspace)|
|leverage | The Leverage CLI, intended to orchestrate the Leverage Reference Architecture for AWS | [https://github.com/binbashar/leverage](https://github.com/binbashar/leverage)|
|opta | The next generation of Infrastructure-as-Code. Work with high-level constructs instead of getting lost in low-level cloud configuration | [https://github.com/run-x/opta](https://github.com/run-x/opta) [https://docs.opta.dev/](https://docs.opta.dev/)|
|pterradactyl | Pterradactyl is a library developed to abstract Terraform configuration from the Terraform environment setup. | [https://github.com/nike-inc/pterradactyl](https://github.com/nike-inc/pterradactyl)|
|terramate | Terramate is a tool for managing multiple terraform stacks | [https://github.com/mineiros-io/terramate](https://github.com/mineiros-io/terramate)|
|`make` (honorable mention) | Many companies (including Cloud Posse) start by leveraging `make` with a `Makefile` and targets to call terraform. This is a tried and true way, but at the scale at which we help our customers operate, it didn’t work. We know, because we tried it for ~3 years and suffocated under the weight of environment variables and a stink of complexity only a mother could love. | [https://www.gnu.org/software/make/](https://www.gnu.org/software/make/)|

What `atmos` is not:

- An alternative to chef, puppet, or ansible.
Instead, `atmos` is the type of tool that would call these tools. + +- An alternative to CI or CD systems. If anything, those systems will call `atmos`. + +## Design Considerations +- Keep it strictly declarative (no concept of iterators or interpolations) + +- Offload all imperative design to the underlying tools + +- Do not write a programming language in YAML (e.g. CloudFormation) or JSON (e.g. terraform or JSONNET, KSONNET) + +- Do not use any esoteric expressions (e.g. JSONNET) + +- Keep it Simple Stupid (KISS) + +- Ensure compatibility with multiple tools, not just `terraform` + +- Define all configuration in files and not based on filesystem conventions. + +## Usage + +`atmos help` (actually, we still need to implement this 😡 after porting to golang) + +:::info +**IMPORTANT** + +Atmos underwent a complete rewrite from an esoteric task runner framework called `variant2` into native Golang as of version 1.0. The documentation is not updated everywhere. The interface is identical/backward compatible (and enhanced), but some references to `variant2` are inaccurate. You can assume this documentation is for the latest version of atmos. + +::: + +Subcommands are positional arguments passed to the `atmos` command. + +### Subcommand: `version` +Show the current version + +### Subcommand: `describe` +Show the deep-merged configuration for stacks and components. 
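Conceptually, the deep-merged view that `describe` shows layers stack-wide settings underneath per-component settings. A minimal Python sketch of that layering (the structure mimics a stack config; the variable names and values are hypothetical, not real atmos output):

```python
# Hypothetical illustration of resolving a component's effective variables:
# stack-wide vars apply to every component, component vars layer on top.
def resolve_component_vars(stack_config, component):
    resolved = dict(stack_config.get("vars", {}))        # stack-wide vars first
    component_cfg = stack_config["components"]["terraform"][component]
    resolved.update(component_cfg.get("vars", {}))       # component vars win
    return resolved

stack_config = {
    "vars": {"stage": "dev", "region": "us-west-2"},
    "components": {
        "terraform": {
            # hypothetical component-level override
            "eks": {"vars": {"cluster_version": "1.21"}},
        }
    },
}

resolved = resolve_component_vars(stack_config, "eks")
# resolved contains the stack-wide vars plus the eks-specific vars
```

This is why modular components need so little per-stack configuration: most of the inputs arrive from the stack level and only component-specific deltas are spelled out.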
+
### Subcommand: `terraform`
- Supports all built-in [Terraform Subcommands](https://www.terraform.io/docs/cli/commands/index.html) (we essentially pass them through to the `terraform` command)

- `deploy` is equivalent to `atmos terraform apply -auto-approve`

- `generate backend` is used to generate the static `backend.tf.json` file that should be committed to VCS

- `generate varfile` (deprecated command: `write varfile`) β€” This command generates a varfile for a terraform component: `atmos terraform generate varfile <component> -s <stack> -f <file>`

- `clean` deletes any orphaned varfiles or planfiles

### Subcommand: `helmfile`
- Supports all `helmfile` subcommands

- `describe`

- `generate varfile` β€” This command generates a varfile for a helmfile component: `atmos helmfile generate varfile <component> -s <stack> -f <file>`

### Subcommand: `workflow`
This subcommand is temporarily unavailable as a result of a major refactor from variant2 to golang. We will reintroduce the subcommand; it **has not** been _officially_ deprecated.

[https://github.com/cloudposse/atmos](https://github.com/cloudposse/atmos)

**Latest Releases**

[https://github.com/cloudposse/atmos/releases](https://github.com/cloudposse/atmos/releases)

**Open Issues**

[https://github.com/cloudposse/atmos/issues](https://github.com/cloudposse/atmos/issues)

## Examples

### Provision Terraform Component

To provision a Terraform component using the `atmos` CLI, run the following commands in the `geodesic` container shell:

```
atmos terraform plan eks --stack=ue2-dev
atmos terraform apply eks --stack=ue2-dev
```

Where:

- `eks` is the Terraform component to provision (from the `components/terraform` folder) that is defined in the stack. If the component is not defined in the stack, it will error.
+
- `--stack=ue2-dev` is the stack to provision the component into (or in other words, where to read the configuration)

:::info
You can pass _any_ argument supported by `terraform` and it will be passed through to the system call to `terraform`.
e.g., we can pass the `-destroy` flag to `terraform plan` by running `atmos terraform plan -destroy --stack=uw2-dev`

:::

Short versions of the command-line arguments can also be used:

```
atmos terraform plan eks -s ue2-dev
atmos terraform apply eks -s ue2-dev
```

To execute `plan` and `apply` in one step, use the `terraform deploy` command:

```
atmos terraform deploy eks -s ue2-dev
```

### Provision Terraform Component with Planfile

You can use a terraform `planfile` (previously generated with `atmos terraform plan`) in `atmos terraform apply/deploy` commands by running the following:

```
atmos terraform plan test/test-component-override -s tenant1/ue2/dev
atmos terraform apply test/test-component-override -s tenant1-ue2-dev --from-plan
atmos terraform deploy test/test-component-override -s tenant1-ue2-dev --from-plan
```

### Provision Helmfile Component

To provision a helmfile component using the `atmos` CLI, run the following commands in the container shell:

```
atmos helmfile diff nginx-ingress --stack=ue2-dev
atmos helmfile apply nginx-ingress --stack=ue2-dev
```

Where:

- `nginx-ingress` is the helmfile component to provision (from the `components/helmfile` folder)

- `--stack=ue2-dev` is the stack to provision the component into

Short versions of the command-line arguments can be used:

```
atmos helmfile diff nginx-ingress -s ue2-dev
atmos helmfile apply nginx-ingress -s ue2-dev
```

To execute `diff` and `apply` in one step, use the `helmfile deploy` command:

```
atmos helmfile deploy nginx-ingress -s ue2-dev
```

### View Deep-merged CLI Configs

Use the `atmos describe config` command to show the effective CLI configuration.
Use `--format` of `json` or `yaml` to alter the output to structured data.

The deep-merge processes files from these locations:

- system dir (`/usr/local/etc/atmos/atmos.yaml` on Linux, `%LOCALAPPDATA%/atmos/atmos.yaml` on Windows)

- home dir (`~/.atmos/atmos.yaml`)

- `atmos.yaml` in the current directory

Here are some more examples:

```
atmos describe config -help
atmos describe config

atmos describe config --format=json
atmos describe config --format json
atmos describe config -f=json
atmos describe config -f json

atmos describe config --format=yaml
atmos describe config --format yaml
atmos describe config -f=yaml
atmos describe config -f yaml
```

### Example Commands

```
atmos version
atmos describe config

# Describe components and stacks
atmos describe component <component> -s <stack>
atmos describe component <component> --stack <stack>

# Generate
atmos terraform generate backend <component> -s <stack>
atmos terraform write varfile <component> -s <stack> # this command will be changed to `terraform generate varfile`
atmos terraform write varfile <component> -s <stack> -f ./varfile.json # supports output file

# Terraform
# (almost) all native Terraform commands supported
# https://www.terraform.io/docs/cli/commands/index.html
atmos terraform plan <component> -s <stack>
atmos terraform apply <component> -s <stack> -auto-approve
atmos terraform apply <component> -s <stack> --from-plan
atmos terraform deploy <component> -s <stack>
atmos terraform deploy <component> -s <stack> --from-plan
atmos terraform deploy <component> -s <stack> -deploy-run-init=true
atmos terraform workspace <component> -s <stack>
atmos terraform validate <component> -s <stack>
atmos terraform output <component> -s <stack>
atmos terraform graph <component> -s <stack>
atmos terraform show <component> -s <stack>
atmos terraform clean <component> -s <stack>

# Helmfile
# All native helmfile commands supported including [global options]
# https://github.com/roboll/helmfile#cli-reference
atmos helmfile diff <component> -s <stack>
atmos helmfile apply <component> -s <stack>

# Helmfile with [global options]
atmos helmfile diff <component> -s <stack> --global-options "--no-color --namespace=test"
atmos helmfile diff <component> -s <stack> --global-options="--no-color --namespace test"
```

### Workflows

:::danger
**IMPORTANT**
+This is in atmos 0.x and while this functionality has not been deprecated, it also **has not** been ported over to atmos 1.x yet.

:::

Workflows are a way of combining multiple commands into one executable unit of work, kind of like a basic task-runner.

In the CLI, workflows can be defined using two different methods:

- In the configuration file for a stack (see [workflows in dev/us-east-2.yaml](https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/tenant1/dev/us-east-2.yaml) for an example)

- In a separate file (see [workflows.yaml](https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/workflows/workflow1.yaml))

In the first case, we define workflows in the configuration file for the stack (which we specify on the command line). To execute the workflows from [workflows in dev/us-east-2.yaml](https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/tenant1/dev/us-east-2.yaml), run the following commands:

```
atmos workflow deploy-all -s ue2-dev
```

Note that workflows defined in the stack config files can be executed only for the particular stack (environment and stage). It's not possible to provision resources for multiple stacks this way.

In the second case (defining workflows in a separate file), a single workflow can be created to provision resources into different stacks. The stacks for the workflow steps can be specified in the workflow config.

For example, to run `terraform plan` and `helmfile diff` on all terraform and helmfile components in the example, execute the following command:

```
atmos workflow plan-all -f workflows
```
+ +As we can see, in multi-environment workflows, each workflow job specifies the stack it's operating on: + +``` +workflows: + plan-all: + description: Run 'terraform plan' and 'helmfile diff' on all components for all stacks + steps: + - job: terraform plan vpc + stack: ue2-dev + - job: terraform plan eks + stack: ue2-dev + - job: helmfile diff nginx-ingress + stack: ue2-dev + - job: terraform plan vpc + stack: ue2-staging + - job: terraform plan eks + stack: ue2-staging +``` + +You can also define a workflow in a separate file without specifying the stack in the workflow's job config. In this case, the stack needs to be provided on the command line. + +For example, to run the `deploy-all` workflow from the [workflows](https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/workflows/workflow1.yaml) file for the `ue2-dev` stack, execute the following command: + +``` + atmos workflow deploy-all -f workflows -s ue2-dev +``` + +## Recommended Filesystem Layout + +:::info +For an example of what this looks like within [Geodesic](/reference-architecture/fundamentals/tools/geodesic) see the section on β€œFilesystem Layout” + +::: + +Our general recommended filesystem layout looks like this. It can be customized using the CLI Configuration file. 
+
```
# Your infrastructure repository
infrastructure/
   β”‚
   β”‚   # Centralized components configuration
   β”œβ”€β”€ stacks/
   β”‚   β”œβ”€β”€ catalog/
   β”‚   └── $stack.yaml
   β”‚
   β”‚   # Components are broken down by tool
   β”œβ”€β”€ components/
   β”‚   β”œβ”€β”€ terraform/   # root modules in here
   β”‚   β”‚   β”œβ”€β”€ vpc/
   β”‚   β”‚   β”œβ”€β”€ eks/
   β”‚   β”‚   β”œβ”€β”€ rds/
   β”‚   β”‚   β”œβ”€β”€ iam/
   β”‚   β”‚   β”œβ”€β”€ dns/
   β”‚   β”‚   └── sso/
   β”‚   β”‚
   β”‚   └── helmfile/    # helmfiles are organized by chart
   β”‚       β”œβ”€β”€ cert-manager/helmfile.yaml
   β”‚       └── external-dns/helmfile.yaml
   β”‚
   β”‚   # Makefile for building the CLI
   β”œβ”€β”€ Makefile
   β”‚
   β”‚   # Docker image for shipping the CLI and all dependencies
   └── Dockerfile  (optional)

```

## CLI Configuration

Atmos supports a CLI configuration to configure its behavior when working with stacks and components.

In [Geodesic](/reference-architecture/fundamentals/tools/geodesic) we typically put this in `/usr/local/etc/atmos/atmos.yaml` (e.g. in `rootfs/...` in the `infrastructure` repository). Note this file uses the stack config format for consistency, but we do not consider it a stack configuration.

The CLI config is loaded from the following locations (from lowest to highest priority):

- system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)

- home dir (`~/.atmos`)

- current directory (`./`)

- ENV vars

- Command-line arguments

It supports [POSIX-style Globs for file names/paths](https://en.wikipedia.org/wiki/Glob_(programming)) (double-star `**` is supported)

### Environment Variables

Most YAML settings can also be defined as environment variables. This is helpful while doing local development. For example, setting `ATMOS_STACKS_BASE_PATH` to a path in `/localhost` pointing to your local development folder will enable you to iterate rapidly.
+
|**Variable** | **YAML Path** | **Description**|
| ----- | ----- | ----- |
|`ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` | `components.terraform.base_path` | Base path to the terraform components|
|`ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` | `components.terraform.apply_auto_approve` | Pass `-auto-approve` to `terraform apply`|
|`ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` | `components.terraform.deploy_run_init` | Run `terraform init` as part of `deploy`|
|`ATMOS_COMPONENTS_HELMFILE_BASE_PATH` | `components.helmfile.base_path` | Base path to the helmfile components|
|`ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH` | `components.helmfile.kubeconfig_path` | Path where the kubeconfig is written|
|`ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN` | `components.helmfile.helm_aws_profile_pattern` | Pattern for the AWS profile used by helm|
|`ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN` | `components.helmfile.cluster_name_pattern` | Pattern for the EKS cluster name|
|`ATMOS_STACKS_BASE_PATH` | `stacks.base_path` | Base path to the stack configurations|
|`ATMOS_STACKS_INCLUDED_PATHS` | `stacks.included_paths` | Globs of stack config paths to include|
|`ATMOS_STACKS_EXCLUDED_PATHS` | `stacks.excluded_paths` | Globs of stack config paths to exclude|
|`ATMOS_STACKS_NAME_PATTERN` | `stacks.name_pattern` | Pattern used to build the stack name from context variables|
|`ATMOS_LOGS_VERBOSE` | `logs.verbose` | For more verbose output, set this environment variable to `true` to see how the CLI finds the configs and performs merges.|

### Example `atmos.yaml` Configuration File

(see: [https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml#L30](https://github.com/cloudposse/atmos/blob/master/examples/complete/atmos.yaml#L30))

```

components:
  # Settings for all terraform components
  terraform:
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "/atmos_root/components/terraform"
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
    apply_auto_approve: false
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
    deploy_run_init: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or
`--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: false

  # Settings for all helmfile components
  helmfile:
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_BASE_PATH` ENV var, or `--helmfile-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "/atmos_root/components/helmfile"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH` ENV var
    kubeconfig_path: "/dev/shm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN` ENV var
    helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN` ENV var
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"

# Settings for all stacks
stacks:
  # Can also be set using `ATMOS_STACKS_BASE_PATH` ENV var, or `--config-dir` and `--stacks-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "/atmos_root/stacks"
  # Can also be set using `ATMOS_STACKS_INCLUDED_PATHS` ENV var (comma-separated values string)
  included_paths:
    - "**/*"
  # Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
  excluded_paths:
    - "globals/**/*"
    - "catalog/**/*"
    - "**/*globals*"
  # Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
  name_pattern: "{tenant}-{environment}-{stage}"

logs:
  verbose: false
  colors: true
```

## Troubleshooting

:::info
For more verbose output, you can always set the environment variable `ATMOS_LOGS_VERBOSE=true` to see how the CLI finds the configs and performs merges.

:::

### **Error:** `stack name pattern must be provided in 'stacks.name_pattern' config or 'ATMOS_STACKS_NAME_PATTERN' ENV variable`

This means that you are probably missing a section like this in your `atmos.yaml`. See the instructions on CLI Configuration for more details.
+
```
stacks:
  name_pattern: "{tenant}-{environment}-{stage}"
```

### **Error:** `The stack name pattern '{tenant}-{environment}-{stage}' specifies 'tenant', but the stack ue1-prod does not have a tenant defined`

This means that your `name_pattern` declares that a `tenant` is required, but one was not specified. Either specify a `tenant` in the `vars` for the stack configuration, or remove `{tenant}` from the `name_pattern`.

```
stacks:
  name_pattern: "{tenant}-{environment}-{stage}"
```

## How-to Guides

- [How to Upgrade Atmos](/reference-architecture/how-to-guides/upgrades/how-to-upgrade-atmos)
- [How to use Atmos](/reference-architecture/how-to-guides/tutorials/how-to-use-atmos)

## Concepts

- [Stacks](/reference-architecture/fundamentals/tools/stacks)

- [Components](/components)
diff --git a/content/docs/fundamentals/concepts.md b/content/docs/fundamentals/concepts.md
index c3238745b..7906a0417 100644
--- a/content/docs/fundamentals/concepts.md
+++ b/content/docs/fundamentals/concepts.md
@@ -1,28 +1,36 @@
---
title: "Concepts"
-description: "Learn more about the core concepts and domain model that make up the SweetOps methodology."
-sidebar_position: 3
-sidebar_label: "Concepts"
+confluence: https://cloudposse.atlassian.net/wiki/spaces/REFARCH/pages/1186234584/Concepts
+sidebar_position: 100
+custom_edit_url: https://github.com/cloudposse/refarch-scaffold/tree/main/docs/docs/fundamentals/tools/concepts.md
---
-SweetOps is built on top of a number of high-level concepts and terminology that are critical to understanding prior to getting started. In this document, we break down these concepts to help you get a leg up prior to going through your first tutorial.
+import ReactPlayer from 'react-player'
-## Components
+# Concepts
-Components are opinionated, self-contained units of infrastructure as code that solve one, specific problem or use-case.
SweetOps has two flavors of components: + SweetOps is built on top of a number of high-level concepts and terminology that are critical to understanding prior to getting started. In this document, we break down these concepts to help you better understand our conventions as we introduce them. -1. **Terraform:** Stand-alone root modules that implement some piece of your infrastructure. For example, typical components might be an EKS cluster, RDS cluster, EFS filesystem, S3 bucket, DynamoDB table, etc. You can find the [full library of SweetOps Terraform components on GitHub](https://github.com/cloudposse/terraform-aws-components). -1. **Helmfiles**: Stand-alone, applications deployed using `helmfile` to Kubernetes. For example, typical helmfiles might deploy the DataDog agent, cert-manager controller, nginx-ingress controller, etc. Similarly, the [full library of SweetOps Helmfile components is on GitHub](https://github.com/cloudposse/helmfiles). +### Components +[Components](/components) are opinionated, self-contained units of infrastructure as code that solve one, specific problem or use-case. SweetOps has two flavors of components: -One important distinction about components that is worth noting: components are opinionated "root" modules that typically call other child modules. Components are the building-blocks of your infrastructure. This is where you define all the business logic for how to provision some common piece of infrastructure like ECR repos or EKS clusters. Our convention is only stick components in the `components/terraform` directory and to use `modules/` when referring to child modules intended to be called by other components. We do not recommend consuming one terraform component inside of another as that would defeat the purpose; each component is intended to be a loosely coupled unit of IaC with its own lifecycle. +1. **Terraform:** Stand-alone root modules that implement some piece of your infrastructure. 
For example, typical components might be an EKS cluster, RDS cluster, EFS filesystem, S3 bucket, DynamoDB table, etc. You can find the [full library of SweetOps Terraform components on GitHub](https://github.com/cloudposse/terraform-aws-components). We keep these types of components in the `components/terraform/` directory within the infrastructure repository.

2. **Helmfiles**: Stand-alone applications deployed using `helmfile` to Kubernetes. For example, typical helmfiles might deploy the DataDog agent, cert-manager controller, nginx-ingress controller, etc. Similarly, the [full library of SweetOps Helmfile components is on GitHub](https://github.com/cloudposse/helmfiles). We keep these types of components in the `components/helmfile/` directory within the infrastructure repository.

One important distinction about components that is worth noting: components are opinionated β€œroot” modules that typically call other child modules. Components are the building blocks of your infrastructure. This is where you define all the business logic for how to provision some common piece of infrastructure like ECR repos (with the [ecr](/components/library/aws/ecr/) component) or EKS clusters (with the [eks](/components/category/eks/) component). Our convention is to stick components in the `components/terraform` directory and to use a `modules/` subfolder to provide child modules intended to be called by the components.
+
:::caution
We do not recommend consuming one terraform component inside of another as that would defeat the purpose; each component is intended to be a loosely coupled unit of IaC with its own lifecycle. Furthermore, since each component defines its own state backend, Terraform does not support calling one component from another module.

:::

### Stacks
Stacks are a way to express the complete infrastructure needed for an environment using a standard YAML configuration format that has been developed by Cloud Posse. Stacks consist of components and the variable inputs to those components. For example, you configure a stack for each AWS account and then reference the components which comprise that stack. The more modular the components, the easier it is to quickly define a stack without writing any new code.

Here is an example stack defined for a Dev environment in the us-west-2 region:

```
# Filename: stacks/uw2-dev.yaml
import:
  - eks/eks-defaults
@@ -82,43 +90,52 @@ components:
       - "env:uw2-dev"
       - "region:us-west-2"
       - "stage:dev"
+
```
+Great, so what can you do with a stack? Stacks are meant to be a language and tool agnostic way to describe infrastructure, but how to use the stack configuration is up to you. We provide the following ways to utilize stacks today:
+
1. [atmos](https://github.com/cloudposse/atmos): atmos is a command-line tool that enables CLI-driven stack utilization and supports workflows around `terraform`, `helmfile`, and many other commands

2. [terraform-provider-utils](https://github.com/cloudposse/terraform-provider-utils): our terraform provider for consuming stack configurations from within HCL/terraform.

-Great, so what can you do with a stack? Stacks are meant to be a language and tool agnostic way to describe infrastructure, but how to use the stack configuration is up to you. SweetOps provides the following ways to utilize stacks today:

-1.
[atmos](https://github.com/cloudposse/atmos): atmos is a command-line tool that enables CLI-driven stack utilization and supports workflows around `terraform`, `helmfile`, and many other commands.
1. [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html): By using the [terraform-tfe-cloud-infrastructure-automation module](https://github.com/cloudposse/terraform-tfe-cloud-infrastructure-automation) you can provision Terraform Cloud workspaces for each component in your stack using Continuous Delivery and GitOps.
1. [Spacelift](https://spacelift.io/): By using the [terraform-spacelift-cloud-infrastructure-automation module](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation) you can provision Spacelift stacks (our industry loves this word, huh?) for each component in your stack using Continuous Delivery and GitOps.

3. [Spacelift](https://spacelift.io/): By using the [terraform-spacelift-cloud-infrastructure-automation module](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation) you can configure Spacelift to continuously deliver components. Read up on why we [Use Spacelift for GitOps with Terraform](/reference-architecture/reference/adrs/use-spacelift-for-gitops-with-terraform).

### Catalogs
Catalogs in SweetOps are collections of sharable and reusable configurations. Think of the configurations in catalogs as defining archetypes (a very typical example of a certain thing) of configuration (e.g. `s3/public` and `s3/logs` would be two kinds of archetypes of S3 buckets). They are also convenient for managing [Terraform](/reference-architecture/fundamentals/tools/terraform). These are typically YAML configurations that can be imported and provide solid baselines to configure security, monitoring, or other 3rd party tooling. Catalogs enable an organization to codify its best practices of configuration and share them.
We use this pattern both with our public terraform modules as well as with our stack configurations (e.g. in the `stacks/catalog` folder). -Catalogs in SweetOps are collections of sharable and reusable configurations. These are typically YAML configurations that can be imported and provide solid baselines to configure security, monitoring, or other 3rd party tooling. Catalogs enable an organization to codify their best practices of configuration and share them. SweetOps provides many catlaogs to get you started. +SweetOps provides many examples of how to use the catalog pattern to get you started. Today SweetOps provides a couple important catalogs: 1. [DataDog Monitors](https://github.com/cloudposse/terraform-datadog-monitor/tree/master/catalog/monitors): Quickly bootstrap your SRE efforts by utilizing some of these best practice DataDog application monitors. -1. [AWS Config Rules](https://github.com/cloudposse/terraform-aws-config/tree/master/catalog): Quickly bootstrap your AWS compliance efforts by utilizing hundreds of [AWS Config](https://aws.amazon.com/config/) rules that automate security checks against many common services. -1. [AWS Service Control Policies](https://github.com/cloudposse/terraform-aws-service-control-policies/tree/master/catalog): define what permissions in your organization you want to permit or deny in member accounts. -In the future, you're likely to see additional open-source catalogs for OPA rules and tools to make sharing configurations even easier. But it is important to note that how you use catalogs is really up to you to define, and the best catalogs will be specific to your organization. +2. [AWS Config Rules](https://github.com/cloudposse/terraform-aws-config/tree/master/catalog): Quickly bootstrap your AWS compliance efforts by utilizing hundreds of [AWS Config](https://aws.amazon.com/config/) rules that automate security checks against many common services. + +3. 
[AWS Service Control Policies](https://github.com/cloudposse/terraform-aws-service-control-policies/tree/master/catalog): define what permissions in your organization you want to permit or deny in member accounts. -## Primary vs Delegated +In the future, you’re likely to see additional open-source catalogs for OPA rules and tools to make sharing configurations even easier. But it is important to note that how you use catalogs is really up to you to define, and the best catalogs will be specific to your organization. -Primary vs Delegated is a common implementation pattern in SweetOps. This is most easily described when looking at the example of domain and DNS usage in a mutli-account AWS organization: SweetOps takes the approach that the root domain (e.g. `example.com`) is owned by a **primary** AWS account where the apex zone resides. Subdomains on that domain (e.g. `dev.example.com`) are then **delegated** to the other AWS accounts via an `NS` record on the primary hosted zone which points to the delegated hosted zone’s name servers. +### Collections +Collections are groups of stacks. -You can see examples of this pattern in the [dns-primary](https://github.com/cloudposse/terraform-aws-components/tree/master/modules/dns-primary) / [dns-delegated](https://github.com/cloudposse/terraform-aws-components/tree/master/modules/dns-delegated) and [iam-primary-roles](https://github.com/cloudposse/terraform-aws-components/tree/master/modules/iam-primary-roles) / [iam-delegated-roles](https://github.com/cloudposse/terraform-aws-components/tree/master/modules/iam-delegated-roles) components. +### Segments +Segments are interconnected networks. For example, a production segment connects all production-tier stacks, while a non-production segment connects all non-production stacks. -## Docker Based Toolbox (aka Geodesic) +### Primary vs Delegated +Primary vs Delegated is a common implementation pattern in SweetOps. 
This is most easily described when looking at the example of domain and DNS usage in a multi-account AWS organization: SweetOps takes the approach that the root domain (e.g. `example.com`) is owned by a **primary** AWS account where the apex zone resides. Subdomains on that domain (e.g. `dev.example.com`) are then **delegated** to the other AWS accounts via an `NS` record on the primary hosted zone which points to the delegated hosted zone’s name servers. -In the landscape of developing infrastructure, there are dozens of tools that we all need on our personal machines to do our jobs. In SweetOps, instead of having you install each tool individually, we use Docker to package all of these tools into one convenient image that you can use as your infrastructure automation toolbox. We call it [Geodesic](/reference/tools.mdx#geodesic) and we use it as our DevOps automation shell and as the base Docker image for all of our DevOps tooling. +You can see examples of this pattern in the [dns-primary](/components/library/aws/dns-primary/), [dns-delegated](/components/library/aws/dns-delegated/) and [iam-primary-roles](https://github.com/cloudposse/terraform-aws-components/tree/main/deprecated/iam-primary-roles) / [iam-delegated-roles](https://github.com/cloudposse/terraform-aws-components/tree/main/deprecated/iam-delegated-roles) components. -Geodesic is a DevOps Linux Distribution packaged as a Docker image that provides users the ability to utilize `atmos`, `terraform`, `kubectl`, `helmfile`, the AWS CLI, and many other popular tools that compromise the SweetOps methodology without having to invoke a dozen `install` commands to get started. It’s intended to be used as an interactive cloud automation shell, a base image, or in CI/CD workflows to ensure that all systems are running the same set of versioned, easily accessible tools. +### Live vs Model (or Synthetic) +Live represents something that is actively being used. 
It differs from stages like “Production” and “Staging” in the sense that both stages are “live” and in-use. Terms like “Model” and “Synthetic”, by contrast, refer to something similar that is not in use by end-users. For example, a live production vanity domain of `acme.com` might have a synthetic vanity domain of `acme-prod.net`. - +### Docker Based Toolbox (aka Geodesic) +In the landscape of developing infrastructure, there are dozens of tools that we all need on our personal machines to do our jobs. In SweetOps, instead of having you install each tool individually, we use Docker to package all of these tools into one convenient image that you can use as your infrastructure automation toolbox. We call it [Geodesic](/reference-architecture/fundamentals/tools/geodesic) and we use it as our DevOps automation shell and as the base Docker image for all of our DevOps tooling. -## Vendoring +Geodesic is a DevOps Linux Distribution packaged as a Docker image that provides users the ability to utilize `atmos`, `terraform`, `kubectl`, `helmfile`, the AWS CLI, and many other popular tools that comprise the SweetOps methodology without having to invoke a dozen `install` commands to get started. It’s intended to be used as an interactive cloud automation shell, a base image, or in CI/CD workflows to ensure that all systems are running the same set of versioned, easily accessible tools. -Vendoring is a strategy of importing external dependencies into a local source tree or VCS. Many languages (e.g. NodeJS) support the concept. However, there are many other tools which do not address how to do vendoring. +### Vendoring +Vendoring is a strategy of importing external dependencies into a local source tree or VCS. Many languages (e.g. NodeJS, Golang) natively support the concept. However, there are many other tools which do not address how to do vendoring, notably `terraform`. There are a few reasons to do vendoring. 
Sometimes the tools we use do not support importing external sources. Other times, we need to make sure we retain full control over the lifecycle or versioning of some code in case the external dependencies go away. @@ -127,12 +144,27 @@ Our current approach to vendoring of thirdparty software dependencies is to use Example use-cases for Vendoring: 1. Terraform is one situation where it’s needed. While terraform supports child modules pulled from remote sources, components (aka root modules) cannot be pulled from remotes. -1. GitHub Actions do not currently support importing remote workflows. Using `vendir` we can easily import remote workflows. -## Generators +2. GitHub Actions do not currently support importing remote workflows. Using `vendir` we can easily import remote workflows. +### Generators Generators in SweetOps are the pattern of producing code or configuration when existing tools have shortcomings that cannot be addressed through standard IaC. This is best explained through our use-cases for generators today: -1. In order to deploy AWS Config rules to every region enabled in an AWS Account, we need to specify a provider block and consume a compliance child module for each region. Unfortunately, [Terraform does not currently support the ability loop over providers](https://github.com/hashicorp/terraform/issues/19932), which results in needing to manually create these provider blocks for each region that we're targeting. On top of that, not every organization uses the same types of accounts so a hardcoded solution is not easily shared. Therefore, to avoid tedious manual work we use the generator pattern to create the `.tf` files which specify a provider block for each module and the corresponding AWS Config child module. -1. Many tools for AWS work best when profiles have been configured in the AWS Configuration file (`~/.aws/config`). If we’re working with dozens of accounts, keeping this file current on each developer's machine is error prone and tedious. 
Therefore we use a generator to build this configuration based on the accounts enabled. -1. Terraform backends do not support interpolation. Therefore, we define the backend configuration in our YAML stack configuration and use `atmos` as our generator to build the backend configuration files for all components. +1. In order to deploy AWS Config rules to every region enabled in an AWS Account, we need to specify a provider block and consume a compliance child module for each region. Unfortunately, [Terraform does not currently support the ability to loop over providers](https://github.com/hashicorp/terraform/issues/19932), which results in needing to manually create these provider blocks for each region that we’re targeting. On top of that, not every organization uses the same types of accounts so a hardcoded solution is not easily shared. Therefore, to avoid tedious manual work we use the generator pattern to create the `.tf` files which specify a provider block for each module and the corresponding AWS Config child module. + +2. Many tools for AWS work best when profiles have been configured in the AWS Configuration file (`~/.aws/config`). If we’re working with dozens of accounts, keeping this file current on each developer’s machine is error prone and tedious. Therefore we use a generator to build this configuration based on the accounts enabled. + +3. Terraform backends do not support interpolation. Therefore, we define the backend configuration in our YAML stack configuration and use `atmos` as our generator to build the backend configuration files for all components. + +### The 4-Layers of Infrastructure +We believe that infrastructure fundamentally consists of 4 layers. We build infrastructure starting from the bottom layer and work our way up. + + + +Each layer builds on the previous one and our structure is only as solid as our foundation. The tools at each layer vary and augment the underlying layers. 
Every layer has its own SDLC and is free to update independently of the other layers. The 4th and final layer is where your applications are deployed. While we believe in using terraform for layers 1-3, we believe it’s acceptable to introduce another layer of tools to support application developers (e.g. Serverless Framework, CDK, etc.) since we’ve built a solid, consistent foundation. + + diff --git a/content/docs/fundamentals/geodesic.md b/content/docs/fundamentals/geodesic.md new file mode 100644 index 000000000..3f840156a --- /dev/null +++ b/content/docs/fundamentals/geodesic.md @@ -0,0 +1,304 @@ +--- +title: "Geodesic" +confluence: https://cloudposse.atlassian.net/wiki/spaces/REFARCH/pages/1186988067/Geodesic +sidebar_position: 120 +custom_edit_url: https://github.com/cloudposse/refarch-scaffold/tree/main/docs/docs/fundamentals/tools/geodesic.md +--- + +import ReactPlayer from 'react-player' + +# Geodesic + +## Introduction + +In the landscape of developing infrastructure, there are dozens of tools that we all need on our personal machines to do our jobs. In SweetOps, instead of having you install each tool individually, we use Docker to package all of these tools into one convenient image that you can use as your infrastructure automation toolbox. We call it Geodesic and we use it as our DevOps automation shell and as the base Docker image for all of our DevOps tooling. + +Geodesic is a DevOps Linux Distribution packaged as a Docker image that provides users the ability to utilize `atmos`, `terraform`, `kubectl`, `helmfile`, the AWS CLI, and many other popular tools that comprise the SweetOps methodology without having to invoke a dozen `install` commands to get started. It’s intended to be used as an interactive cloud automation shell, a base image, or in CI/CD workflows to ensure that all systems are running the same set of versioned, easily accessible tools. 
+ +These days, the typical software application is distributed as a docker image and run as a container. Why should infrastructure be any different? Since everything we write is "Infrastructure as Code", we believe that it should be treated the same way. This is the "Geodesic Way". Use containers+envs instead of unconventional wrappers, complicated folder structures, and symlink hacks. Geodesic is the container for all your infrastructure automation needs that enables you to truly achieve SweetOps. + +An organization may choose to leverage all of these components or just the parts that make their life easier. We recommend starting by using geodesic as a Docker base image (e.g. `FROM cloudposse/geodesic:...` pinned to a release and base OS) in your projects. + + + +:::caution +**Apple M1 Support** + +TL;DR: Geodesic works on the M1 running as `amd64` (not `arm64`). Docker auto-detects this by default, but otherwise it’s possible to pass `--platform linux/amd64` to `docker` to force the platform. + +Geodesic is composed of a large collection of mostly [third-party open-source tools distributed via our packages repository](https://github.com/cloudposse/packages). As such, **support for the Apple M1 chip is not under Cloud Posse's control**; rather it depends on each tool author updating each tool for the M1 chip. All of the compiled tools that Cloud Posse has authored and are included in Geodesic are compiled for M1 (`darwin_arm64`), and of course, all of the scripts work on M1 if the interpreters (e.g. `bash`, `python`) are compiled for M1. Unfortunately, this is only a small portion of the overall toolkit that is assembled in Geodesic. Therefore we do not advise using Geodesic on the M1 at this time and do not anticipate M1 will be well supported before 2022. Historically, widespread support for a new chip takes several years to establish; we hope we will not have to wait that long given the velocity at which our industry moves. 
+ +::: + +## Use-cases + +Since `geodesic` is at its heart just a dockerized toolbox, it can be used anywhere docker images can be run. It supports both headless and interactive terminals. + +### Use as a Local Development Environment + +Running `geodesic` as a local development environment ensures everyone on the team can get up and running quickly using the same versions of the tools. The only requirement is having Docker installed. + +:::info +**Pro Tip!** +When Geodesic is started using the wrapper script, it mounts the host’s `$HOME` directory as `/localhost` inside the container and creates a symbolic link from `$HOME` to `/localhost` so that files under `$HOME` on the host can be referenced by the exact same absolute path both on the host computer and inside Geodesic. For example, if the host `$HOME` is `/Users/fred`, then `/Users/fred/src/example.sh` will refer to the same file both on the host and from inside the Geodesic shell. This means you can continue editing files using your favorite IDE (e.g. VSCode, IntelliJ, etc.) and interact with your local filesystem within the docker container. + +::: + +### Use as a Remote Development Environment + +Running `geodesic` as a remote development environment is as easy as calling `kubectl run` on the geodesic container. You’ll then be able to interact with the container remotely to debug within a Kubernetes cluster. + +### Use as a Base Image for Automation + +Running `geodesic` as the base image for Spacelift or with GitHub Actions ensures you can use the same exact tooling in an automated fashion. 
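For instance, a GitHub Actions job can run its steps inside the Geodesic image so that CI uses the exact same tool versions as local shells. The snippet below is a hypothetical sketch; pin the image tag and adjust the `atmos` command to match your repository:

```yaml
# .github/workflows/plan.yaml (hypothetical sketch)
jobs:
  plan:
    runs-on: ubuntu-latest
    container:
      image: cloudposse/geodesic:latest-debian # pin to a specific release in practice
    steps:
      - uses: actions/checkout@v3
      - name: Plan a component with atmos
        run: atmos terraform plan vpc -s ue2-dev
```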
+ +## How-to Guides + +- [How to Upgrade or Install Versions of Terraform](/reference-architecture/how-to-guides/upgrades/how-to-upgrade-or-install-versions-of-terraform) +- [How to Keep Everything Up to Date](/reference-architecture/how-to-guides/upgrades/how-to-keep-everything-up-to-date) +- [How to Switch Versions of Terraform](/reference-architecture/how-to-guides/tutorials/how-to-switch-versions-of-terraform) +- [How to run Docker-in-Docker with Geodesic?](/reference-architecture/how-to-guides/tutorials/how-to-run-docker-in-docker-with-geodesic) +- [How to Customize the Geodesic Shell](/reference-architecture/how-to-guides/tutorials/how-to-customize-the-geodesic-shell) +- [How to use Atmos](/reference-architecture/how-to-guides/tutorials/how-to-use-atmos) + +## Alpine, Debian, and CentOS Support + +Starting with Geodesic version 0.138.0, we distribute 2 versions of Geodesic Docker images, one based on Alpine and one based on Debian, tagged `VERSION-BASE_OS`, e.g. `0.138.0-alpine`. + +Prior to this, all Docker images were based on Alpine only and simply tagged `VERSION`. We encourage people to use the Debian version and report any issues by opening a GitHub issue. We will continue to maintain the `latest-alpine` and `latest-debian` Docker tags for those who want to commit to using one base OS or the other but still want automatic updates. + +## Packages + +
+ +Central to `geodesic` is its rich support for the latest version of [the most popular packages](https://github.com/cloudposse/packages/tree/master/vendor) for DevOps. We maintain hundreds of packages that are graciously hosted by Cloudsmith. Our packages are updated nightly as soon as new releases are made available by vendors. As such, we strongly recommend version pinning packages installed via the `Dockerfile`. + +Also unique about our packages is that for `kubectl` and `terraform` we distribute all major versions with `dpkg-alternative` support so they can be concurrently installed without the use of version managers. + +Package repository hosting is graciously provided by [cloudsmith](https://cloudsmith.io/). Cloudsmith is the only fully hosted, cloud-native, universal package management solution that enables your organization to create, store and share packages in any format, to any place, with total confidence. We believe there’s a better way to manage software assets and packages, and they’re making it happen! + +## Filesystem Layout + +Here’s a general filesystem layout for an infrastructure repository leveraging `geodesic` with `atmos` together with stacks and components. Note, individual customer repos will resemble this layout but will not be identical. 
+ +``` +infrastructure/ +├── Dockerfile +├── Makefile +├── README.md +├── components +│ └── terraform/ +│ └── foobar/ +│ ├── README.md +│ ├── backend.tf.json +│ ├── context.tf +│ ├── default.auto.tfvars +│ ├── main.tf +│ ├── modules/ +│ │ ├── baz/ +│ │ │ ├── context.tf +│ │ │ ├── main.tf +│ │ │ ├── outputs.tf +│ │ │ └── variables.tf +│ │ └── bar/ +│ │ ├── context.tf +│ │ ├── main.tf +│ │ ├── outputs.tf +│ │ └── variables.tf +│ ├── outputs.tf +│ ├── providers.tf +│ ├── remote-state.tf +│ ├── variables.tf +│ └── versions.tf +│ +├── docs/ +│ ├── adr/ +│ │ ├── 0001-namespace-abbreviation.md +│ │ ├── 0002-infrastructure-repository-name.md +│ │ ├── 0003-email-addresses-for-aws-accounts.md +│ │ ├── 0004-secure-channel-secrets-sharing.md +│ │ ├── 0005-primary-aws-region.md +│ │ ├── README.md +│ │ └── template.md +│ │ +│ └── cold-start.md +│ +├── rootfs/ +│ ├── etc/ +│ │ ├── aws-config/ +│ │ │ └── aws-config-cicd +│ │ └── motd +│ │ +│ └── usr/ +│ └── local/ +│ ├── bin/ +│ │ ├── aws-accounts +│ │ ├── eks-update-kubeconfig +│ │ ├── spacelift-git-use-https +│ │ ├── spacelift-tf-workspace +│ │ └── spacelift-write-vars +│ └── etc/ +│ └── atmos/ +│ └── atmos.yaml +│ +└── stacks/ + ├── catalog/ + │ ├── account-map.yaml + │ ├── account-settings.yaml + │ ├── account.yaml + │ ├── cloudtrail.yaml + │ ├── dns-delegated.yaml + │ ├── dns-primary.yaml + │ ├── ecr.yaml + │ ├── eks + │ │ ├── alb-controller.yaml + │ │ ├── cert-manager.yaml + │ │ ├── eks.yaml + │ │ ├── external-dns.yaml + │ │ 
├── metrics-server.yaml + │ │ └── ocean-controller.yaml + │ ├── github-runners.yaml + │ ├── iam-delegated-roles.yaml + │ ├── iam-primary-roles.yaml + │ ├── s3 + │ │ ├── alb-access-logs.yaml + │ │ └── s3-defaults.yaml + │ ├── sso.yaml + │ ├── tfstate-backend.yaml + │ ├── transit-gateway.yaml + │ └── vpc.yaml + ├── mgmt-uw2-artifacts.yaml + ├── mgmt-uw2-audit.yaml + ├── mgmt-uw2-automation.yaml + ├── mgmt-uw2-corp.yaml + ├── mgmt-uw2-dns.yaml + ├── mgmt-uw2-globals.yaml + ├── mgmt-uw2-identity.yaml + ├── mgmt-uw2-network.yaml + ├── mgmt-uw2-root.yaml + ├── mgmt-uw2-sandbox.yaml + ├── mgmt-uw2-security.yaml + └── uw2-globals.yaml +``` +GitHub Repository +Dockerfile uses `cloudposse/geodesic` as the base image +Makefile to help build and install wrapper script for `geodesic` +Location for all re-usable component building blocks +Location for all terraform (HCL) components +Example `foobar` component +Every component has a well-maintained `README.md` with usage instructions +Programmatically generated terraform backend created from `atmos terraform backend generate` +Standard context interface for all Cloud Posse modules. +Terraform defaults in HCL (loaded by `terraform` at run-time). Not deep merged. 
+Standard `main.tf` based on HashiCorp best-practices +Example of submodules within a component (aka child modules) +Submodule named `baz/` +Submodules should use the same standard interface with `context.tf` +Submodules should also follow HashiCorp best practices for module layout +Submodules should define variables in `variables.tf` and not modify `context.tf` +Example of another submodule named `bar/` + +Outputs exported by this component in the remote state +Remote state leveraged by the component using the `remote-state` module +Variables used by the component +Version pinning for providers used by the component + +Location for documentation specific to this repository +Home for all Architectural Design Records for your organization + +Index of all READMEs +Markdown template file to create new ADRs + +The `rootfs` pattern overlays this filesystem on `/` (slash) inside the docker image (e.g. `ADD /rootfs /`) +The `/etc/` inside the container +The AWS config used by automation by setting `AWS_CONFIG_PATH` +Message of the Day (MOTD) displayed to `stdout` on interactive shell logins + +The `usr/` tree inside the docker image + +Stick all scripts in `/usr/local/bin` +Script used to generate the `~/.aws/config` for SSO profiles. Modify this to suit your needs. +Helper script to export the `kubeconfig` for EKS using the `aws` CLI + +Atmos CLI configuration. Instructs where to find stack configs and components. + +Location of all stack configurations +Location where to store catalog imports. See our catalog pattern. 
+Catalog entry for [account-map](/components/library/aws/account-map/) +Catalog entry for [account-settings](/components/library/aws/account-settings/) +Catalog entry for [cloudtrail](/components/library/aws/cloudtrail/) +Catalog entry for [dns-delegated](/components/library/aws/dns-delegated/) +Catalog entry for [dns-primary](/components/library/aws/dns-primary/) +Catalog entry for [ecr](/components/library/aws/ecr/) + + + +Global configuration shared by all stacks in `uw2` region (e.g. `import` the `catalog/uw2-globals`) + +## Build and Run Geodesic + +Prerequisites for your host computer: + +- Docker installed + +- `make` installed, preferably GNU Make + +- `git` installed + +- Infrastructure Git repo cloned + +If all goes well, you should be able to build and run the Infrastructure Docker image from your host by executing `make all` from the command line in the root directory of your Git repo clone. If you have issues at this step, contact Cloud Posse or look for help in the Cloud Posse [Geodesic](https://github.com/cloudposse/geodesic/) or [Reference Architecture](https://github.com/cloudposse/reference-architectures) repos. + +At this point (after `make all` concludes successfully) you should be running a `bash` shell inside the Infrastructure Docker container (which we will also call the "Geodesic shell") and your prompt should look something like this: + +``` + ⧉ Infrastructure + ✗ . [none] / ⨠ +``` + +From here forward, any command-line commands are meant to be run from within the Geodesic shell. + +## Troubleshooting + +### Command-line Prompt Ends with a Unicode Placeholder + +If your command-line prompt inside of the `geodesic` shell ends with a funky Unicode placeholder, then chances are the default character we use for the end of the command line prompt ([Unicode Z NOTATION SCHEMA PIPING](https://www.compart.com/en/unicode/U+2A20)) is not present in the font library you are using. 
On the Mac, Terminal (at least) falls back to some other font when the character is missing, so it's not a problem. On other systems, we recommend installing the freely available Noto font from Google, whose mission is to supply workable characters for every defined Unicode code point. On Ubuntu, it is sufficient to install the Noto core fonts, via + +``` +apt install fonts-noto-core +``` + + Another option is to switch to a different command prompt scheme, by adding + +``` +export PROMPT_STYLE="fancy" # or "unicode" or "plain" +``` + + to your Geodesic customizations. See [How to Customize the Geodesic Shell](/reference-architecture/how-to-guides/tutorials/how-to-customize-the-geodesic-shell) for more detail, and also to see how you can completely customize the prompt decorations. + +### Geodesic is Slow on the M1 Mac + +The poor performance of Geodesic on M1 is not specific to Geodesic but rather a known issue with Docker for Mac. + +Things to try: + +- Check (enable) Preferences → General → use gRPC FUSE for file sharing + +- Check (enable) Preferences → Experimental Features → Use the new Virtualization Framework + +- Use VMware Fusion to run a Debian VM, and run Geodesic from within the Debian VM + + +### Files Written to Mounted Linux Home Directory Owned by Root User + +If the Docker daemon runs as `root`, files written to the mounted Linux home directory from inside Geodesic may end up owned by the `root` user, making them inaccessible to the host user. + +The recommended solution for Linux users is to run Docker in ["rootless"](https://docs.docker.com/engine/security/rootless/) +mode. In this mode, the Docker daemon runs as the host user (rather than as root) and files created by the root user in Geodesic +are owned by the host user on the host. Not only does this configuration solve this issue, but it provides much better system security overall. +[Ref](https://github.com/cloudposse/geodesic/issues/594). 
+ diff --git a/content/docs/fundamentals/leapp.md b/content/docs/fundamentals/leapp.md new file mode 100644 index 000000000..518b4e4be --- /dev/null +++ b/content/docs/fundamentals/leapp.md @@ -0,0 +1,21 @@ +--- +title: "Leapp" +confluence: https://cloudposse.atlassian.net/wiki/spaces/REFARCH/pages/1186267399/Leapp +sidebar_position: 150 +custom_edit_url: https://github.com/cloudposse/refarch-scaffold/tree/main/docs/docs/fundamentals/tools/leapp.md +--- + +# Leapp +[https://github.com/Noovolari/leapp](https://github.com/Noovolari/leapp) + +Leapp is a Desktop DevTool that handles the management and security of your cloud credentials for you so you can log into any AWS account with the click of a button using your native OS keychain. + +## How-to Guides + +- [How to Use Leapp to Authenticate with AWS](/reference-architecture/how-to-guides/tutorials/how-to-use-leapp-to-authenticate-with-aws) + +### Reference + +- [https://cloudposse.atlassian.net/l/c/YUPa00cx](https://cloudposse.atlassian.net/l/c/YUPa00cx) ... existing AWS role settings. + + diff --git a/content/docs/fundamentals/stacks.md b/content/docs/fundamentals/stacks.md new file mode 100644 index 000000000..212c22974 --- /dev/null +++ b/content/docs/fundamentals/stacks.md @@ -0,0 +1,295 @@ +--- +title: "Stacks" +confluence: https://cloudposse.atlassian.net/wiki/spaces/REFARCH/pages/1186988164/Stacks +sidebar_position: 140 +custom_edit_url: https://github.com/cloudposse/refarch-scaffold/tree/main/docs/docs/fundamentals/tools/stacks.md +--- + +# Stacks +Stacks are a way to express the complete infrastructure needed for an environment composed of [Components](/components) using a standard YAML configuration. + +## Background + +Stacks are a central SweetOps abstraction layer that is used to instantiate [Components](/components). 
They’re a set of YAML files [that follow a standard schema](https://github.com/cloudposse/atmos/blob/master/docs/schema/stack-config-schema.json) to enable a **fully declarative description of your various environments**. This empowers you with the ability to separate your infrastructure’s environment configuration settings from the business logic behind it (provided via components). + +SweetOps utilizes a custom YAML configuration format for stacks because it’s an easy-to-work-with format that is nicely portable across multiple tools. The stack YAML format is natively supported today via [Atmos](/reference-architecture/fundamentals/tools/atmos), [the terraform-yaml-stack-config module](https://github.com/cloudposse/terraform-yaml-stack-config), and [Spacelift](https://spacelift.io/) via [the terraform-spacelift-cloud-infrastructure-automation module](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation). + +:::note +Stacks define a generic schema for expressing infrastructure. + +::: + +## How-to Guides + +- [How to add or mirror a new region](/reference-architecture/how-to-guides/tutorials/how-to-add-or-mirror-a-new-region) +- [How to Upgrade or Install Versions of Terraform](/reference-architecture/how-to-guides/upgrades/how-to-upgrade-or-install-versions-of-terraform) +- [How to Keep Everything Up to Date](/reference-architecture/how-to-guides/upgrades/how-to-keep-everything-up-to-date) +- [How to Use Terraform Remote State](/reference-architecture/how-to-guides/tutorials/how-to-use-terraform-remote-state) +- [How to Manage Explicit Component Dependencies with Spacelift](/reference-architecture/how-to-guides/tutorials/how-to-manage-explicit-component-dependencies-with-spacelift) +- [How to Switch Versions of Terraform](/reference-architecture/how-to-guides/tutorials/how-to-switch-versions-of-terraform) +- [How to Define Stacks for Multiple 
Regions?](/reference-architecture/how-to-guides/tutorials/how-to-define-stacks-for-multiple-regions) +- [How to Version Pin Components in Stack Configurations](/reference-architecture/how-to-guides/tutorials/how-to-version-pin-components-in-stack-configurations) +- [How to Use Imports and Catalogs in Stacks](/reference-architecture/how-to-guides/tutorials/how-to-use-imports-and-catalogs-in-stacks) + +## Conventions + +We have a number of important conventions around stacks that are worth noting. + +:::info +Make sure you’re already familiar with the core [Concepts](/reference-architecture/fundamentals/tools/concepts). + +::: + +### Stack Files +Stack files can be very numerous in large cloud environments (think many dozens to hundreds of stack files). To enable the proper organization of stack files, SweetOps recommends the following: + +- All stacks should be stored in a `stacks/` folder at the root of your infrastructure repository. + +- Name individual environment stacks following the pattern of `$environment-$stage.yaml` + +- For example, `$environment` might be `ue2` (for `us-east-2`) and `$stage` might be `prod` which would result in `stacks/ue2-prod.yaml` + +- For any **global** resources (as opposed to _regional_ resources), such as Account Settings, IAM roles and policies, DNS zones, or similar, the `environment` for the stack should be `gbl` to connote that it’s not tied to any region. + +- For example, to deploy the `iam-delegated-roles` component (where all resources are global and not associated with an AWS region) to your production account, you should utilize a `stacks/gbl-prod.yaml` stack file. + +### Catalogs +When you have a configuration that you want to share across various stacks, use catalogs. Catalogs are the SweetOps term for shared, reusable configuration. + +By convention, all shared configuration for stacks is put in the `stacks/catalog/` folder, which can then be used in the root `stacks/` stack files via `import`. 
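Putting the naming conventions together, a minimal regional stack file might look like this (account names, imports, and values are illustrative):

```yaml
# stacks/ue2-prod.yaml -- regional resources for prod in us-east-2
import:
  - catalog/ue2/globals
  - catalog/prod/globals

vars:
  environment: ue2
  stage: prod

components:
  terraform:
    vpc:
      vars:
        cidr_block: 10.1.0.0/16
```

Global resources for the same account would live in a parallel `stacks/gbl-prod.yaml` that sets `environment: gbl`.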
These files use the same stack schema. Learn more about [How to Use Imports and Catalogs in Stacks](/reference-architecture/how-to-guides/tutorials/how-to-use-imports-and-catalogs-in-stacks).

There are a few suggested shared catalog configurations that we recommend adopting:

- **Global Catalogs**: For any configuration to share across **all** stacks.
  - For example, you create a `stacks/catalog/globals.yaml` file and utilize `import` wherever you need that catalog.

- **Environment Catalogs**: For any configuration you want to share across `environment` boundaries.
  - For example, to share configuration across `ue2-stage.yaml` and `uw2-stage.yaml` stacks, you create a `stacks/catalog/stage/globals.yaml` file and utilize `import` in both the `ue2-stage.yaml` and `uw2-stage.yaml` stacks to pull in that catalog.

- **Stage Catalogs**: For any configuration that you want to share across `stage` boundaries.
  - For example, to share configuration across `ue2-dev.yaml`, `ue2-stage.yaml`, and `ue2-prod.yaml` stacks, you create a `stacks/catalog/ue2/globals.yaml` file and `import` that catalog in the respective `dev`, `stage`, and `prod` stacks.

- **Base Components**: For any configuration that you want to share across all instances of a component.
  - For example, if you’re using the `eks` component and want to ensure all of your EKS clusters run the same Kubernetes version, you create a `stacks/catalog/component/eks.yaml` file which specifies the `eks` component’s `vars.kubernetes_version`. You can then `import` that base component configuration in any stack file that uses the [eks](/components/category/eks/) component.
  - More information is in the section below.

### Component Inheritance
Using a component catalog, you can define the default values for all instances of a component across your stacks. 
But it is also important to note that you can provide default values for multiple instances of a component in a single stack using the component inheritance pattern via the `component` key / value: + +:::info +**Pro tip**: You can also use our inheritance model for the [polymorphism](https://en.wikipedia.org/wiki/Polymorphism_(computer_science)) of components. + +::: + +``` +# stacks/catalog/component/s3-bucket.yaml +components: + terraform: + s3-bucket: + vars: + enabled: false + user_enabled: false + acl: private + grants: null + versioning_enabled: true + +# stacks/uw2-dev.yaml +import: + - catalog/component/s3-bucket + - catalog/dev/globals + - catalog/uw2/globals + +components: + terraform: + public-images: + component: s3-bucket + vars: + enabled: true + acl: public + name: public-images + + export-data: + component: s3-bucket + vars: + enabled: true + name: export-data + + # ... + +``` +In the above example, we’re able to utilize the default settings provided via the `s3-bucket` base component catalog, while also creating multiple instances of the same component and providing our own overrides. This enables maximum reuse of global component configuration. + +### Terraform Workspace Names +In `atmos` and the accompanying terraform automation modules like [terraform-spacelift-cloud-infrastructure-automation](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation) the terraform [workspaces](https://www.terraform.io/docs/language/state/workspaces.html) will be automatically created when managing components. These workspaces derive their names from the stack name and the component name in question following this pattern: `$env-$stage-$component`. The result is workspace names like `ue2-dev-eks` or `uw2-prod-mq-broker`. + +## Pro Tips + +Here are some tips to help you write great stacks: + +1. Use `{}` for empty maps, but not just a key with an empty value. + +2. 
Use consistent data types for deep merging to work appropriately with imports (e.g. don’t mix maps with lists or scalars). + +3. Use [YAML anchors](https://blog.daemonl.com/2016/02/yaml.html) to DRY up a config within a single file. + +:::caution +**IMPORTANT** +Anchors work only within the scope of a single file boundary and not across multiple imports. + +::: + +## Stack Schema + +[The official JSON Schema document for Stacks can be found here](https://github.com/cloudposse/atmos/blob/master/docs/schema/stack-config-schema.json). The below is a walk-through of a complete example utilizing all capabilities. + +``` +# stacks/ue2-dev.yaml + +# `import` enables shared configuration / settings across different stacks +# The referenced files are deep merged into this stack to support granular configuration capabilities +import: + # Merge the below `stacks/catalog/*.yaml` files into this stack to provide any shared `vars` or `components.*` configuration + - catalog/globals + - catalog/ue2/globals + - catalog/dev/globals + +# `vars` provides shared configuration for all components -- both terraform + helmfile +vars: + # Used to determine the name of the workspace (e.g. the 'dev' in 'ue2-dev') + stage: dev + # Used to determine the name of the workspace (e.g. the 'ue2' in 'ue2-dev') + environment: ue2 + +# Define cross-cutting terraform configuration +terraform: + # `terraform.vars` provides shared configuration for terraform components + vars: {} + + # `backend_type` + `backend` provide configuration for the terraform backend you + # would like to use for all components. This is typically defined in `globals.yaml` + # atmos + our modules support all options that can be configured for a particular backend. 
  # `backend_type` defines which `backend` configuration is enabled
  backend_type: s3 # s3, remote, vault
  backend:
    s3:
      encrypt: true
      bucket: "eg-uw2-root-tfstate"
      key: "terraform.tfstate"
      dynamodb_table: "eg-uw2-root-tfstate-lock"
      role_arn: "arn:aws:iam::999999999999:role/eg-gbl-root-terraform"
      acl: "bucket-owner-full-control"
      region: "us-east-2"
    remote: {}
    vault: {}

# Define cross-cutting helmfile configuration
helmfile:
  # `helmfile.vars` provides shared configuration for helmfile components
  vars:
    account_number: "999999999999"

# Components are all the top-level units that make up this stack
components:

  # All terraform components should be listed under this section.
  terraform:

    # List one or more Terraform components here
    first-component:
      # Provide automation settings for this component
      settings:
        # Provide spacelift specific automation settings for this component
        # (Only relevant if utilizing terraform-spacelift-cloud-infrastructure-automation)
        spacelift:

          # Controls whether or not this workspace should be created
          # NOTE: If set to 'false', you cannot reference this workspace via `triggers` in another workspace!
          workspace_enabled: true

          # Override the version of Terraform for this workspace (defaults to the latest in Spacelift)
          terraform_version: 0.13.4

          # Which git branch triggers this workspace
          branch: develop

          # Controls the `autodeploy` setting within this workspace (defaults to `false`)
          auto_apply: true

          # Add extra 'Run Triggers' to this workspace, beyond the parent workspace, which is created by default
          # These triggers mean this component workspace will be automatically planned if any of these workspaces are applied.
          triggers:
            - ue2-dev-second-component
            - gbl-root-example1

      # Set the Terraform input variable values for this component.
      vars:
        my_input_var: "Hello world! This is a value that needs to be passed to my `first-component` Terraform component." 
+ bool_var: true + number_var: 47 + + # Complex types like maps and lists are supported. + list_var: + - example1 + - example2 + + map_var: + key1: value1 + key2: value2 + + # Every terraform component should be uniquely named and correspond to a folder in the `components/terraform/` directory + second-component: + vars: + my_input_var: "Hello world! This is another example!" + + # You can also define component inheritance in stacks to enable unique workspace names or multiple usages of the same component in one stack. + # In this example, `another-second-component` inherits from the base `second-component` component and overrides the `my_input_var` variable. + another-second-component: + component: second-component + vars: + my_input_var: "Hello world! This is an override." + + # All helmfile components should be listed under this section. + helmfile: + + # Helmfile components should be uniquely named and correspond to a folder in the `components/helmfile/` directory + # Helmfile components also support virtual components + alb-controller: + + # Set the helmfile input variable values for this component. + vars: + installed: true + chart_values: + enableCertManager: true + +# `workflows` enable the ability to define an ordered list of operations that `atmos` will execute. These operations can be any type of component such as terraform or helmfile. 
+# See "Getting started with Atmos" documentation for full details: /tutorials/atmos-getting-started/ +workflows: + + # `workflows` is a map where the key is the name of the workflow that you're defining + deploy-eks-default-helmfiles: + + # `description` should provide useful information about what this workflow does + description: Deploy helmfile charts in the specific order + + # `steps` defines the ordering of the jobs that you want to accomplish + steps: + + # `job` entries defined `atmos` commands that you want to execute as part of the workflow + - job: helmfile sync cert-manager + - job: helmfile sync external-dns + - job: helmfile sync alb-controller + - job: helmfile sync metrics-server + - job: helmfile sync ocean-controller + - job: helmfile sync efs-provisioner + - job: helmfile sync idp-roles + - job: helmfile sync strongdm + - job: helmfile sync reloader + - job: helmfile sync echo-server +``` + + diff --git a/content/docs/fundamentals/terraform.md b/content/docs/fundamentals/terraform.md new file mode 100644 index 000000000..9e4ba67fc --- /dev/null +++ b/content/docs/fundamentals/terraform.md @@ -0,0 +1,197 @@ +--- +title: "Terraform" +confluence: https://cloudposse.atlassian.net/wiki/spaces/REFARCH/pages/1186234654/Terraform +sidebar_position: 130 +custom_edit_url: https://github.com/cloudposse/refarch-scaffold/tree/main/docs/docs/fundamentals/tools/terraform.md +--- + +import ReactPlayer from 'react-player' + +# Terraform + +For the most part, we assume users have a solid grasp of `terraform`. Cloud Posse has adopted a number of conventions for how we work with `terraform` that we document here. Review [our opinionated public β€œbest practices” as it relates to terraform](/reference/best-practices/terraform-best-practices/). + +We use [Atmos](/reference-architecture/fundamentals/tools/atmos) together with [Stacks](/reference-architecture/fundamentals/tools/stacks) to call [Components](/components) that provision infrastructure with `terraform`. 
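For example, a typical invocation looks something like the following (the component name `s3-bucket` and stack name `ue2-dev` are illustrative placeholders, and flags can vary slightly by `atmos` version):

```
# Plan the `s3-bucket` component using the configuration from the `ue2-dev` stack
atmos terraform plan s3-bucket -s ue2-dev

# Apply it once the plan looks correct
atmos terraform apply s3-bucket -s ue2-dev
```

Under the hood, `atmos` deep-merges the stack configuration and hands the resulting variables to plain `terraform`, so the same component can be planned against any stack.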
:::caution
Be aware of [Terraform Environment Variables](https://www.terraform.io/docs/cli/config/environment-variables.html) that can alter the behavior of `terraform` when run outside of what you see in `atmos` or `geodesic`. They are also helpful for changing default behavior, such as by setting `TF_DATA_DIR`.

:::

## How-to Guides

- [How to Upgrade or Install Versions of Terraform](/reference-architecture/how-to-guides/upgrades/how-to-upgrade-or-install-versions-of-terraform)
- [How to Manage Terraform Dependencies in Micro-service Repositories](/reference-architecture/how-to-guides/tutorials/how-to-manage-terraform-dependencies-in-micro-service-repositori)
- [How to Keep Everything Up to Date](/reference-architecture/how-to-guides/upgrades/how-to-keep-everything-up-to-date)
- [How to Use Terraform Remote State](/reference-architecture/how-to-guides/tutorials/how-to-use-terraform-remote-state)
- [How to Switch Versions of Terraform](/reference-architecture/how-to-guides/tutorials/how-to-switch-versions-of-terraform)
- [How to support GovCloud and Other AWS Partitions with Terraform](/reference-architecture/how-to-guides/tutorials/how-to-support-govcloud-and-other-aws-partitions-with-terraform)

## Architectural Design Records

- [Proposed: Use Strict Provider Pinning in Components](/reference-architecture/reference/adrs/proposed-use-strict-provider-pinning-in-components)
- [Use Basic Provider Block for Root-level Components](/reference-architecture/reference/adrs/use-basic-provider-block-for-root-level-components)
- [Use Terraform Provider Block with compatibility for Role ARNs and Profiles](/reference-architecture/reference/adrs/use-terraform-provider-block-with-compatibility-for-role-arns-an)
- [Use Spacelift for GitOps with Terraform](/reference-architecture/reference/adrs/use-spacelift-for-gitops-with-terraform)
- [Use SSM over ASM for 
Infrastructure](/reference-architecture/reference/adrs/use-ssm-over-asm-for-infrastructure)
- [Proposed: Use Defaults for Components](/reference-architecture/reference/adrs/proposed-use-defaults-for-components)

## Conventions

### Mixins

Terraform does not natively support the object-oriented concepts of multiple inheritance or [mixins](https://en.wikipedia.org/wiki/Mixin), but we can simulate them by convention. For our purposes, we define a mixin in terraform as a controlled way of adding functionality to modules. When a mixin file is dropped into a folder of a module, the code in the mixin starts to interact with the code in the module. A module can have as many mixins as needed. Since terraform does not support this directly, we instead use a convention of exporting what we want to reuse.

We currently achieve this using something we call an `export` in our terraform modules, which publishes some reusable terraform code that we copy verbatim into modules as needed. We use this pattern with our `terraform-null-label` module via the `context.tf` file pattern (see below). We also use this pattern in our `terraform-aws-security-group` module with the [`security-group-variables.tf` export](https://github.com/cloudposse/terraform-aws-security-group/blob/main/exports/security-group-variables.tf).

To follow this convention, create an `exports/` folder with the mixin files you wish to export to other modules. Then simply copy them over (e.g. with `curl`). We recommend naming the installed files something like `.mixin.tf` so it’s clear they are external assets.

### Resource Factories

Resource Factories provide a custom declarative interface for defining multiple resources using YAML, with terraform implementing the business logic. Most of our new modules are developed using this pattern so we can decouple the architecture requirements from the implementation. 
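To make the pattern concrete, here is a purely hypothetical input document for a factory-style module (the real schema is defined by each module; see the examples listed below). The module reads the YAML, deep-merges any overrides, and typically iterates over the entries with `for_each` to create one resource per entry:

```
# Hypothetical catalog consumed by a factory-style module.
# Each key under `policies` becomes one policy resource; this schema is
# illustrative only, not the actual interface of any particular module.
policies:
  deny-leaving-org:
    target: "organization"
    statements:
      - sid: "DenyLeaveOrganization"
        effect: "Deny"
        actions:
          - "organizations:LeaveOrganization"
```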
See [https://medium.com/google-cloud/resource-factories-a-descriptive-approach-to-terraform-581b3ebb59c](https://medium.com/google-cloud/resource-factories-a-descriptive-approach-to-terraform-581b3ebb59c) for a related discussion.

To better support this pattern, we implemented native support for deep merging in terraform via our [terraform-provider-utils](https://github.com/cloudposse/terraform-provider-utils) provider, and we implemented the [terraform-yaml-config](https://github.com/cloudposse/terraform-yaml-config) module to standardize how we consume YAML configurations.

Examples of modules using the Resource Factory convention:

- [https://github.com/cloudposse/terraform-aws-service-control-policies](https://github.com/cloudposse/terraform-aws-service-control-policies)

- [https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation)

- [https://github.com/cloudposse/terraform-datadog-platform](https://github.com/cloudposse/terraform-datadog-platform)

- [https://github.com/cloudposse/terraform-opsgenie-incident-management](https://github.com/cloudposse/terraform-opsgenie-incident-management)

- [https://github.com/cloudposse/terraform-aws-config](https://github.com/cloudposse/terraform-aws-config)

### Naming Conventions (and the `terraform-null-label` Module)

Naming things is hard. We’ve made it easier by defining a programmatically consistent naming convention, which we use in everything we provision. It is designed to generate consistent, human-friendly names and tags for resources. We implement this using a terraform module that accepts a number of standardized inputs and produces an output with the fully disambiguated ID. This module establishes the common interface we use in all of our terraform modules in the Cloud Posse ecosystem. 
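Conceptually, the ID generation boils down to joining the non-empty label elements with a delimiter. Here is a rough sketch in Python (the module itself is implemented in HCL; this is an approximation of the behavior, not the actual implementation):

```python
def build_id(namespace=None, environment=None, stage=None, name=None,
             attributes=(), delimiter="-"):
    """Approximate how a null-label style ID is assembled: join the
    non-empty label elements, in a fixed order, with a delimiter."""
    parts = [part for part in (namespace, environment, stage, name) if part]
    parts.extend(attributes)  # attributes are appended at the end
    return delimiter.join(parts)

print(build_id(namespace="eg", environment="ue2", stage="prod", name="eks"))
# -> eg-ue2-prod-eks
```

Every element is optional; omitting one simply drops it (and its delimiter) from the generated ID.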
Use `terraform-null-label` to implement a strict naming convention. We use it in all of our [Components](/components) and export something we call the `context.tf` pattern. + +[https://github.com/cloudposse/terraform-null-label](https://github.com/cloudposse/terraform-null-label) + +Here’s video from our [https://cloudposse.atlassian.net/wiki/spaces/CP/pages/1170014234](https://cloudposse.atlassian.net/wiki/spaces/CP/pages/1170014234) where we talk about it. + + + +There are 6 inputs considered "labels" or "ID elements" (because the labels are used to construct the ID): + +1. `namespace` + +2. `tenant` + +3. `environment` + +4. `stage` + +5. `name` + +6. `attributes` + +This module generates IDs using the following convention by default: `{namespace}-{environment}-{stage}-{name}-{attributes}`. However, it is highly configurable. The delimiter (e.g. `-`) is configurable. Each label item is optional (although you must provide at least one). + +#### Tenants + +`tenants` are a Cloud Posse construct used to describe a collection of accounts within an Organizational Unit (OU). An OU may have multiple tenants, and each tenant may have multiple AWS accounts. For example, the `platform` OU might have two tenants named `dev` and `prod`. The `dev` tenant can contain accounts for the `staging`, `dev`, `qa`, and `sandbox` environments, while the `prod` tenant only has one account for the `prod` environment. + +By separating accounts into these logical groupings, we can organize accounts at a higher level, follow AWS Well-Architected Framework recommendations, and enforce environment boundaries easily. + +### The `context.tf` Mixin Pattern + +Cloud Posse Terraform modules all share a common `context` object that is meant to be passed from module to module. A `context` object is a single object that contains all the input values for `terraform-null-label` and every `cloudposse/terraform-*` module uses it to ensure a common interface to all of our modules. 
By convention, we install this file as `context.tf` which is why we call it the `context.tf` pattern. By default, we always provide an instance of it accessible via `module.this`, which makes it always easy to get your _context._ πŸ™‚ + +Every input value can also be specified individually by name as a standard Terraform variable, and the value of those variables, when set to something other than `null`, will override the value in the context object. In order to allow chaining of these objects, where the context object input to one module is transformed and passed on to the next module, all the variables default to `null` or empty collections. + + + +### Stacks and Components + +We use [Stacks](/reference-architecture/fundamentals/tools/stacks) to define and organize configurations. We place terraform β€œroot” modules in the `components/terraform` directory (e.g. `components/terraform/s3-bucket`). Then we define one or more catalog archetypes for using the component (e.g. `catalog/s3-bucket/logs.yaml` and `catalog/s3-bucket/artifacts`). + +### Atmos CLI + +We predominantly call `terraform` from `atmos`, however, by design all of our infrastructure code runs without any task runners. This is in contrast to tools like `terragrunt` that manipulate the state of infrastructure code at run time. + +See [How to use Atmos](/reference-architecture/how-to-guides/tutorials/how-to-use-atmos) + +## FAQ + +### How to upgrade Terraform? + +See [How to Switch Versions of Terraform](/reference-architecture/how-to-guides/tutorials/how-to-switch-versions-of-terraform) for a more complete guide. 
TL;DR:

- Note the version you want to use

- Check [cloudposse/packages](https://github.com/cloudposse/packages/pulls?q=terraform) for a merged PR that provides the desired version of terraform

- Make sure the version is available in Spacelift by editing an existing stack and checking whether the new version is selectable

- Update Terraform in `Dockerfile`

- Update Terraform in the `.github/workflows/pre-commit.yaml` GitHub Action

- Update Terraform in `components/terraform/spacelift/default.auto.tfvars`

### How to use `context.tf`?

Copy this file from `https://github.com/cloudposse/terraform-null-label/blob/master/exports/context.tf` and then place it in your Terraform module to automatically get Cloud Posse's standard configuration inputs suitable for passing to Cloud Posse modules.

```
curl -sL https://raw.githubusercontent.com/cloudposse/terraform-null-label/master/exports/context.tf -o context.tf
```

Modules should access the whole context as `module.this.context` to get the input variables with nulls for defaults, for example `context = module.this.context`, and access individual variables as `module.this.<variable>`, with final values filled in.

[https://github.com/cloudposse/terraform-null-label/blob/master/exports/context.tf](https://github.com/cloudposse/terraform-null-label/blob/master/exports/context.tf)

For example, when using defaults, `module.this.context.delimiter` will be `null`, and `module.this.delimiter` will be `-` (hyphen).

:::caution
ONLY EDIT THIS FILE IN [http://github.com/cloudposse/terraform-null-label](http://github.com/cloudposse/terraform-null-label). All other instances of this file should be a copy of that one. 
:::

## Learning Resources

If you’re new to terraform, here are a number of resources to check out:

- [https://learn.hashicorp.com/terraform](https://learn.hashicorp.com/terraform) are the official classes produced by HashiCorp

- [https://acloudguru.com/search?s=terraform](https://acloudguru.com/search?s=terraform)

- [https://www.pluralsight.com/courses/terraform-getting-started](https://www.pluralsight.com/courses/terraform-getting-started)

- [https://www.youtube.com/watch?v=wgzgVm7Sqlk](https://www.youtube.com/watch?v=wgzgVm7Sqlk)

## Troubleshooting

### **Prompt**: `Do you want to migrate all workspaces to "s3"?`

If you get this message, it means you have local state (e.g. a `terraform.tfstate` file) which has not been published to the S3 backend. This typically happens when the backend was not defined (e.g. `backend.tf.json`) prior to running `terraform init`.

:::caution
**WARNING**
This will overwrite any state currently in S3 for this component. If you were not expecting the state to be completely new, this prompt is unexpected. Working with any existing component shouldn't involve migrating a workspace and further investigation is warranted.

:::

If the state really is brand new (e.g. you are provisioning this component for the first time and nothing exists for it in S3), it is safe to answer yes and let Terraform publish the local state to the S3 backend. If you expected the component to already have remote state, answer no and investigate before proceeding. 
+ +## Reference + +- [AWS Region Codes](/reference-architecture/reference/aws/aws-region-codes) +- [Structure of Terraform S3 State Backend Bucket](/reference-architecture/reference/structure-of-terraform-s3-state-backend-bucket) + + From 587a0e58d5911a0f3cef0a3986c3dead0c3bdcaf Mon Sep 17 00:00:00 2001 From: milldr Date: Tue, 29 Aug 2023 10:49:02 -0700 Subject: [PATCH 2/4] reset category --- content/docs/fundamentals/_category_.json | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/content/docs/fundamentals/_category_.json b/content/docs/fundamentals/_category_.json index 6e536cde4..e94ccc14b 100644 --- a/content/docs/fundamentals/_category_.json +++ b/content/docs/fundamentals/_category_.json @@ -1,10 +1,8 @@ { - "label": "Tools", - "collapsible": true, - "collapsed": true, - "position": 100, + "label": "Fundamentals", + "position": 10, "link": { "type": "generated-index", - "title": "Tools" + "description": "SweetOps fundamentals" } -} \ No newline at end of file +} From 5f92a6bd5dfb31003a1e01369230a039a66fa470 Mon Sep 17 00:00:00 2001 From: milldr Date: Tue, 29 Aug 2023 10:56:18 -0700 Subject: [PATCH 3/4] formatting concepts --- content/docs/fundamentals/concepts.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/content/docs/fundamentals/concepts.md b/content/docs/fundamentals/concepts.md index 7906a0417..2bb516659 100644 --- a/content/docs/fundamentals/concepts.md +++ b/content/docs/fundamentals/concepts.md @@ -1,8 +1,8 @@ --- title: "Concepts" -confluence: https://cloudposse.atlassian.net/wiki/spaces/REFARCH/pages/1186234584/Concepts -sidebar_position: 100 -custom_edit_url: https://github.com/cloudposse/refarch-scaffold/tree/main/docs/docs/fundamentals/tools/concepts.md +description: "Learn more about the core concepts and domain model that make up the SweetOps methodology." 
+sidebar_position: 3 +custom_edit_url: https://github.com/cloudposse/docs/tree/main/content/docs/fundamentals/concepts.md --- import ReactPlayer from 'react-player' @@ -14,7 +14,7 @@ import ReactPlayer from 'react-player' ### Components [Components](/components) are opinionated, self-contained units of infrastructure as code that solve one, specific problem or use-case. SweetOps has two flavors of components: -1. **Terraform:** Stand-alone root modules that implement some piece of your infrastructure. For example, typical components might be an EKS cluster, RDS cluster, EFS filesystem, S3 bucket, DynamoDB table, etc. You can find the [full library of SweetOps Terraform components on GitHub](https://github.com/cloudposse/terraform-aws-components). We keep these types of components in the `components/terraform/` directory within the infrastructure repository. +1. **Terraform:** Stand-alone root modules that implement some piece of your infrastructure. For example, typical components might be an EKS cluster, RDS cluster, EFS filesystem, S3 bucket, DynamoDB table, etc. You can find the [full library of SweetOps Terraform components here](/components/). We keep these types of components in the `components/terraform/` directory within the infrastructure repository. 2. **Helmfiles**: Stand-alone, applications deployed using `helmfile` to Kubernetes. For example, typical helmfiles might deploy the DataDog agent, cert-manager controller, nginx-ingress controller, etc. Similarly, the [full library of SweetOps Helmfile components is on GitHub](https://github.com/cloudposse/helmfiles). We keep these types of components in the `components/helmfile/` directory within the infrastructure repository. 
@@ -30,7 +30,7 @@ Stacks are a way to express the complete infrastructure needed for an environmen Here is an example stack defined for a Dev environment in the us-west-2 region: -``` +```yaml # Filename: stacks/uw2-dev.yaml import: - eks/eks-defaults @@ -94,9 +94,9 @@ components: ``` Great, so what can you do with a stack? Stacks are meant to be a language and tool agnostic way to describe infrastructure, but how to use the stack configuration is up to you. We provide the following ways to utilize stacks today: -1. [atmos](https://github.com/cloudposse/atmos): atmos is a command-line tool that enables CLI-driven stack utilization and supports workflows around `terraform`, `helmfile`, and many other commands +1. [Atmos](https://atmos.tools): Atmos is a command-line tool that enables CLI-driven stack utilization and supports workflows around `terraform`, `helmfile`, and many other commands -2. [terraform-provider-utils](https://github.com/cloudposse/terraform-provider-utils): is our terraform provider for consuming stack configurations from within HCL/terraform. +2. [`terraform-provider-utils`](https://github.com/cloudposse/terraform-provider-utils): is our Terraform provider for consuming stack configurations from within HCL/Terraform. 3. [Spacelift](https://spacelift.io/): By using the [terraform-spacelift-cloud-infrastructure-automation module](https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation) you can configure Spacelift continuously deliver components. Read up on why we [Use Spacelift for GitOps with Terraform](/reference-architecture/reference/adrs/use-spacelift-for-gitops-with-terraform) . 
From 4cffdd4e923c2f90820cfef132abd5e44b5e9ab5 Mon Sep 17 00:00:00 2001 From: Dan Miller Date: Tue, 29 Aug 2023 16:19:06 -0700 Subject: [PATCH 4/4] Update content/docs/fundamentals/leapp.md Co-authored-by: Benjamin Smith --- content/docs/fundamentals/leapp.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/docs/fundamentals/leapp.md b/content/docs/fundamentals/leapp.md index 518b4e4be..83f12412a 100644 --- a/content/docs/fundamentals/leapp.md +++ b/content/docs/fundamentals/leapp.md @@ -8,7 +8,7 @@ custom_edit_url: https://github.com/cloudposse/refarch-scaffold/tree/main/docs/d # Leapp [https://github.com/Noovolari/leapp](https://github.com/Noovolari/leapp) -Leapp is a Desktop DevTool that handles the management and security of your cloud credentials for you so you can log into any AWS account with the click of a button using your native OS keychain. +Leapp is a Desktop Dev Tool that handles the management and security of your cloud credentials for you so you can log into any AWS account with the click of a button using your native OS keychain. ## How-to Guides