diff --git a/content/docs/fundamentals/introduction.md b/content/docs/fundamentals/introduction.md
index 3899e13d2..159134ca7 100644
--- a/content/docs/fundamentals/introduction.md
+++ b/content/docs/fundamentals/introduction.md
@@ -34,7 +34,7 @@ How does it differentiate from these solutions?
 
 1. It's 100% Open Source: SweetOps [is on GitHub](https://github.com/cloudposse) and is free to use with no strings attached under Apache 2.0.
 1. It's comprehensive: SweetOps is not only about Terraform. It provides patterns and conventions for building cloud native platforms that are security focused, Kubernetes-based, and driven by continuous delivery.
-1. It's community focused: SweetOps has [over 3400 users in Slack](https://sweetops.com/slack/), well-attended weekly office hours, and a [budding community forum](https://ask.sweetops.com/).
+1. It's community focused: SweetOps has [over 7000 users in Slack](https://sweetops.com/slack/), well-attended weekly office hours, and a [budding community forum](https://ask.sweetops.com/).
 
 ## How is this documentation structured?
 
diff --git a/content/docs/intro.md b/content/docs/intro.md
index 5c0eb3876..bd5067d65 100644
--- a/content/docs/intro.md
+++ b/content/docs/intro.md
@@ -12,11 +12,11 @@ Start with getting familiar with the [geodesic](/reference/tools.mdx#geodesic).
 
 Get intimately familiar with docker inheritance and [multi-stage docker builds](/reference/best-practices/docker-best-practices.md#multi-stage-builds). We use this pattern extensively.
 
-Check out our [terraform-aws-components](https://github.com/cloudposse/terraform-aws-components) for reference architectures to easily provision infrastructure
+Check out our [terraform-aws-components](https://github.com/cloudposse/terraform-aws-components) for reference architectures to easily provision infrastructure.
 
 ## Tools
 
-Tons of tools/clis are used as part of our solution. We distribute these tools in a couple of different ways.
+Tons of tools/CLIs are used as part of our solution. We distribute these tools in a couple of different ways:
 
 * Geodesic bundles most of these tools as part of the geodesic base image
 * Our [packages repo](/reference/tools.mdx#packages) provides an embeddable `Makefile` system for installing packages in other contexts (e.g. [`build-harness`](/reference/tools.mdx#build-harness)). This can also be used for local ("native") development contexts.
@@ -27,7 +27,7 @@ Here are some of the most important tools to be aware of:
 - [`aws-vault`](/reference/tools.mdx#aws-vault)
 - [`chamber`](/reference/tools.mdx#chamber)
 - [`terraform`](/reference/tools.mdx#terraform)
 - [`gomplate`](/reference/tools.mdx#gomplate)
-- [Leapp](/reference/tools.mdx#leapp)
+- [`Leapp`](/reference/tools.mdx#leapp)
 
 If using kubernetes, then also review these tools:
 
@@ -42,8 +42,8 @@ Kubernetes is a massive part of our solutions. Our Kubernetes documentation is g
 
 Helm is central to how we deploy all services on kubernetes.
 
-* [helm](/reference/tools.mdx#helm) is essentially the package manager for Kubernetes (like `npm` for Node, `gem` for Ruby, and `rpm` for RHEL)
-* [helm charts](https://helm.sh/docs/topics/charts/) are how kubernetes resources are templatized using Go templates
+* [helm](/reference/tools.mdx#helm) is essentially the package manager for Kubernetes (like `npm` for Node, `gem` for Ruby, and `rpm` for RHEL).
+* [helm charts](https://helm.sh/docs/topics/charts/) are how kubernetes resources are templatized using Go templates.
 * [helmfiles](/reference/tools.mdx#helmfile) are used to define a distribution of helm charts. So if you want to install prometheus, grafana, nginx-ingress, kube-lego, etc, we use a `helmfile.yaml` to define how that's done.
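+
+For instance, a minimal `helmfile.yaml` along these lines (the releases, namespaces, and chart repositories shown are purely illustrative, not pinned recommendations) declares a small set of charts in one place:
+
+```yaml
+repositories:
+  - name: prometheus-community
+    url: https://prometheus-community.github.io/helm-charts
+  - name: grafana
+    url: https://grafana.github.io/helm-charts
+
+releases:
+  - name: prometheus
+    namespace: monitoring
+    chart: prometheus-community/prometheus
+  - name: grafana
+    namespace: monitoring
+    chart: grafana/grafana
+```
+
+Running `helmfile sync` against a file like this installs or upgrades every release it defines, which is what we mean by managing a whole "distribution" of charts declaratively.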
 
 ## Terraform
 
@@ -70,7 +70,7 @@ Review our [glossary](/category/glossary/) if there are any terms that are confu
 
 File issues anywhere you find the documentation lacking by going to our [docs repo](https://github.com/cloudposse/docs).
 
-Join our [Slack Community](https://cloudposse.com/slack/) and speak directly with the maintainers
+Join our [Slack Community](https://cloudposse.com/slack/) and speak directly with the maintainers.
 
 We provide "white glove" DevOps support. [Get in touch](/contact-us.md) with us today!
 
diff --git a/content/docs/reference/best-practices/docker-best-practices.md b/content/docs/reference/best-practices/docker-best-practices.md
index b082d4b97..d28ca7592 100644
--- a/content/docs/reference/best-practices/docker-best-practices.md
+++ b/content/docs/reference/best-practices/docker-best-practices.md
@@ -15,10 +15,10 @@ Try to leverage the same base image in as many of your images as possible for fa
 
 ## Multi-stage Builds
 
-There are two ways to leverage multi-stage builds.
+There are two ways to leverage multi-stage builds:
 
-1. *Build-time Environments* The most common application of multi-stage builds is for using a build-time environment for compiling apps, and then a minimal image (E.g. `alpine` or `scratch`) for distributing the resultant artifacts (e.g. statically-linked go binaries).
-2. *Multiple-Inheritance* We like to think of "multi-stage builds" as a mechanism for "multiple inheritance" as it relates to docker images. While not technically the same thing, using mult-stage images, it's possible `COPY --from=other-image` to keep things very DRY.
+1. *Build-time Environments* The most common application of multi-stage builds is to use a build-time environment for compiling apps, and then a minimal image (e.g. `alpine` or `scratch`) for distributing the resultant artifacts (e.g. statically-linked Go binaries).
+2. *Multiple-Inheritance* We like to think of "multi-stage builds" as a mechanism for "multiple inheritance" as it relates to docker images. While not technically the same thing, multi-stage images make it possible to `COPY --from=other-image`, which keeps things very DRY (see the example below).
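+
+For instance, here is a minimal sketch of the build-time-environment pattern (the Go app, file paths, and image tags are placeholders, not recommendations):
+
+```dockerfile
+# Stage 1: a full toolchain image, used only at build time
+FROM golang:1.21 AS builder
+WORKDIR /src
+COPY . .
+# Build a statically-linked binary so it can run on a minimal base image
+RUN CGO_ENABLED=0 go build -o /app .
+
+# Stage 2: ship only the compiled artifact
+FROM alpine:3.19
+COPY --from=builder /app /usr/local/bin/app
+ENTRYPOINT ["/usr/local/bin/app"]
+```
+
+The same `COPY --from=...` flag also accepts another image reference (e.g. `COPY --from=some-base-image:latest /etc/profile.d /etc/profile.d`, with a hypothetical image and path), which is what enables the "multiple inheritance" style of reuse.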
 
 :::info
 -
diff --git a/content/docs/tutorials/geodesic-getting-started.md b/content/docs/tutorials/geodesic-getting-started.md
index abaa21861..44cedbc13 100644
--- a/content/docs/tutorials/geodesic-getting-started.md
+++ b/content/docs/tutorials/geodesic-getting-started.md
@@ -27,7 +27,7 @@ Before we jump in, it's important to note that Geodesic is built around some adv
 
 Let's talk about a few of the ways that one can run Geodesic. Our toolbox has been built to satisfy many use-cases, and each result in a different pattern of invocation:
 
-1. You can **run standalone** Geodesic as a standard docker container using `docker run`. This enables you to get started quickly, to avoid fiddling with configuration or run one-off commands using some of the built-in tools.
+1. You can **run standalone** Geodesic as a standard docker container using `docker run`. This enables you to get started quickly, avoid fiddling with configuration, or run one-off commands using some of the built-in tools.
    1. Example: `docker run -it --rm --volume $HOME:/localhost cloudposse/geodesic:latest-debian --login` opens a bash login shell (`--login` is our Docker `CMD` here; it's actually just [the arguments passed to the `bash` shell](https://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html) which is our `ENTRYPOINT`) in our Geodesic container.
    1. Example: `docker run -it --rm --volume $HOME:/localhost cloudposse/geodesic:latest-debian -c "terraform version"` executes the `terraform version` command as a one off and outputs the result.
 1. You can **install** Geodesic onto your local machine using what we call the docker-bash pattern (e.g. `docker run ... | bash`). Similar to above, this enables a quickstart process but supports longer lived usage as it creates a callable script on your machine that enables reuse any time you want to start a shell.
@@ -90,9 +90,9 @@ terraform init
 terraform apply -auto-approve
 ```
 
-Sweet, you should see a successful `terraform apply` with some detailed `output` info on the original star wars hero! 😎
+Sweet, you should see a successful `terraform apply` with some detailed `output` data on the original Star Wars hero! 😎
 
-Just to show some simple usage of another tool in the toolbox, how about we pull apart that info and get that hero's name?
+Just to show some simple usage of another tool in the toolbox, how about we parse that data and get that hero's name?
 
 ### 4. Read some data from our Outputs