The Cloud Environment ⛅
This is a suite of modern cloud tooling that wraps seamlessly over your existing shell. It provides:
- Infrastructure-as-code (IaC) tools.
- Authentication tools for Okta and AWS.
- A large collection of Kubernetes and container tools.
Tested on Mac and Linux with both Podman and Docker.
If you are using Docker on Linux, first add your user to the 'docker' group so you can run docker commands directly. Podman users do not need to do this.
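For example, on most Linux distributions (assuming the docker group already exists; log out and back in afterwards for the change to take effect):

```bash
# Add your user to the 'docker' group so docker commands work without sudo.
sudo usermod -aG docker "$USER"
```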
Install the `cloudenv` command:

`sudo curl https://raw.githubusercontent.com/snw35/cloudenv/master/cloudenv -o /usr/local/bin/cloudenv && sudo chmod +x /usr/local/bin/cloudenv`
Run the `cloudenv` command as your own user (not as root). It will pull the latest version of the container image (around 2 GB), start the container, and drop you into the shell:
⛅user@cloudenv-user:~$
Everything should work as you expect. The bash shell contains common utilities (git, curl, ssh, etc.) and all of the installed tools (listed below), with working bash-completion for those that support it. If your session has an ssh-agent running with cached credentials, these will continue to work and be available to git, ssh, and so on.
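To confirm that the forwarded ssh-agent is usable from inside the container, listing its cached keys is a quick check:

```bash
# Inside the cloudenv shell: list the keys held by the forwarded ssh-agent.
ssh-add -l
```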
There may be updates to the `cloudenv` script itself that won't be applied automatically, so re-run the install command above if you experience any issues launching the tool.
The following software is installed and checked for updates weekly:
- AWS CLI
- AWS Connect
- AWS EC2 Instance Connect CLI
- AWS Export Credentials
- AWS IAM Authenticator
- AWS Okta Keyman
- AWS SAM CLI
- AWS Session Manager Plugin
- Cloud Nuke
- Confd
- Cookiecutter
- Datadog CLI
- EKS CLI (Elastic Kubernetes Service CLI)
- FluxCD
- Hashicorp Packer
- Hashicorp Terraform
- HCL Format
- Helm
- K9s
- Kompose
- Kubectl
- Kubectx
- Kubespy
- Okta AWS CLI
- Terraform Docs
- Terragrunt
If something you want is missing, please open an issue or submit a PR; both are welcome!
One instance of cloudenv is run per user, named 'cloudenv-username', and multiple sessions can be run in each instance. The environment inside each instance is separate, e.g. separate environment variables. In summary:
- A user can run multiple sessions of cloudenv.
- Multiple users can run separate instances of cloudenv.
WARNING: Because the home directory is bind-mounted into the container, if multiple users run cloudenv on the same machine, anyone in the docker group will be able to exec into any cloudenv container and access all of that user's files. This tool is meant to be run on e.g. trusted jumpbox hosts or single-user workstations. Keep this in mind when deploying it elsewhere.
If you require other versions of terraform or terragrunt, they can be installed inside the container, e.g. by fetching the binaries with wget (see the sketch after these links):
- https://github.com/gruntwork-io/terragrunt/releases
- https://github.com/hashicorp/terraform/releases
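As a rough sketch, assuming wget is available inside the container and using an illustrative terragrunt version (substitute whichever release you need from the pages above):

```bash
# Inside the cloudenv shell: fetch an alternative terragrunt binary into your
# home directory (which is bind-mounted, so it persists across container restarts).
# The version below is only an example.
TG_VERSION=v0.48.0
mkdir -p "$HOME/bin"
wget -O "$HOME/bin/terragrunt-${TG_VERSION}" \
  "https://github.com/gruntwork-io/terragrunt/releases/download/${TG_VERSION}/terragrunt_linux_amd64"
chmod +x "$HOME/bin/terragrunt-${TG_VERSION}"
export PATH="$HOME/bin:$PATH"

terragrunt-${TG_VERSION} --version
```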
By default, a custom bash shell is run inside the container. You can change this to a plain fish or bash session that will use your host machine's shell configuration. To do this, edit the `cloudenv` script and change the `user_shell` variable to `zsh`, `fish`, or `bash`.
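For example, a minimal edit inside the script could look like this (the surrounding code will differ; `user_shell` is the variable named above):

```bash
# In the cloudenv script: choose the shell started inside the container.
# Valid values per the section above: zsh, fish, or bash.
user_shell=fish
```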
The container is left running in the background after you run the command for the first time. It won't restart itself after a reboot, but will be left in the stopped state. If you'd like to clean it up, you can run: `docker/podman rm -f cloudenv-*`
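For example, assuming the cloudenv-username container naming described earlier:

```bash
# Remove your stopped (or running) cloudenv container.
docker rm -f "cloudenv-$(whoami)"   # Docker
podman rm -f "cloudenv-$(whoami)"   # Podman
```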
If you deal with infrastructure as code, or simply work with AWS and GCP from the command line, then you will have quickly realised:
- There are too many tools.
- Most of them aren't in your package manager.
- Updating them is annoying.
- Installing them on a new machine can take hours.
- Installing them on a colleague's machine can take hours++.
- If you don't use the same versions as all of your colleagues, $BAD_THINGS can happen.
Ironically (or elegantly), cloud-tooling solves its own problem in the form of Docker images that can be used to package all of these tools up, isolate them from your host machine, and make installing and running them simple.
This is fundamentally a Docker container running an interactive shell, though it does some extra things to make the experience seamless and pleasant.
It works in the following way:
- The `cloudenv` script pulls the latest version of the container image and starts it.
- It bind-mounts your home directory into the container, passes your user and group from the host machine in with environment variables, and ensures all permissions match up.
- If the host has an ssh-agent running, it bind-mounts the auth socket into the container. If not, it runs a separate ssh-agent as your user. This lets ssh commands access stored credentials as though they were running on the host.
- It starts a bash session inside the container as your user with a custom shell configuration (`/etc/bashrc`).
- The container runs in the background and can be connected to with multiple sessions (the whole sequence is sketched below).
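As an illustration only, the first run corresponds very roughly to a docker run along these lines; the environment variable names, mount paths, and flags here are simplified assumptions rather than the exact ones in the cloudenv script:

```bash
# Rough, simplified sketch of what the cloudenv script does on first run; the
# real script's flags and variable names differ (the HOST_* names are made up).
docker run -d \
  --name "cloudenv-$(whoami)" \
  -e HOST_USER="$(id -un)"  -e HOST_UID="$(id -u)" \
  -e HOST_GROUP="$(id -gn)" -e HOST_GID="$(id -g)" \
  -v "$HOME:$HOME" \
  -v "$SSH_AUTH_SOCK:$SSH_AUTH_SOCK" -e SSH_AUTH_SOCK="$SSH_AUTH_SOCK" \
  snw35/cloudenv:latest

# Each session then attaches to the same running container:
docker exec -it "cloudenv-$(whoami)" bash
```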
Further information on some of these aspects is below.
Your home directory is bind-mounted into the container. This allows access to your files as well as all of your dot-files and dot-directories, such as `~/.ssh`, which contain all of the configuration for those utilities.
This allows the environment inside the container to behave as closely as possible to the environment on the host, and means that all of the included tools have access to the keys/credentials that they may require.
Bind-mounting your home directory into a container normally creates issues with permissions, as your user on the host will not exist inside it. This is overcome by passing the IDs and names of the host's user and group into the container with environment variables, which the entrypoint script then uses to ensure that everything inside the container matches the host machine.
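A minimal sketch of how an entrypoint can do this on an Alpine-based image; the HOST_* variable names and the use of su-exec are assumptions for illustration, not necessarily what the real entrypoint uses:

```bash
#!/bin/sh
# Hypothetical entrypoint: recreate the host's user and group inside the
# container so that ownership in the bind-mounted home directory lines up.
# HOST_USER/HOST_UID/HOST_GROUP/HOST_GID are assumed names, not the real ones.
set -e

addgroup -g "$HOST_GID" "$HOST_GROUP" 2>/dev/null || true
adduser -D -u "$HOST_UID" -G "$HOST_GROUP" -h "/home/$HOST_USER" "$HOST_USER" 2>/dev/null || true

# Drop privileges and run the requested command as that user.
exec su-exec "$HOST_USER" "$@"
```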
The timezone inside an Alpine container defaults to UTC. Normally this is fine, but when your home directory is bind-mounted into the container in read-write mode, the timestamps on files will be incorrect if anything inside the container modifies them.
An environment variable (`TZ`) is used to set the timezone when the container starts up. The value is set in the cloudenv script and can be changed to match your requirements. Detecting the user's timezone cross-platform is one of those "this shouldn't be this hard" problems that is unfortunately best left out of scope.
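For example, the value in the script can simply be changed to your local zone (the default shown here is an assumption; check the script for the actual one):

```bash
# In the cloudenv script: timezone passed into the container via the TZ variable.
TZ="America/New_York"
```

Inside the container, `date` should then report local time rather than UTC.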
- Image on Docker Hub: https://hub.docker.com/r/snw35/cloudenv
Travis CI automatically runs once per week and builds a new image if any updates are found to either the included software or the container base image.
The cloudenv container stays as minimal as possible while packaging a lot of tools, some of which are large (Hashicorp ones specifically), and providing a lot of functionality. It is possible to provision, manage, and develop production-grade cloud infrastructure with just the contents of this container.
`cloudenv` images are tagged with the ISO-8601 date on which they were first built (example: 2018-08-14). The versions of all bundled software packages inside an image are the latest that were available on that date. You can edit the `cloudenv` script to pin the image to a particular date if you'd like.
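For instance, pinning could look like this; the variable name is a hypothetical placeholder, so check the script for the actual image reference it uses:

```bash
# In the cloudenv script: pin to a date-tagged image instead of 'latest'.
CLOUDENV_IMAGE="snw35/cloudenv:2018-08-14"
```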
The `latest` tag always points to the most recent image. Where backwards compatibility is an issue (such as with terraform), both the old and new versions will be included.
The `cloudenv` script pulls the `latest` tag each time it is run. However, it does not stop or remove running containers, so you will only use an updated image once you stop or remove the current container. This will happen after a reboot, for example.
To build the container locally: `docker build -t snw35/cloudenv:latest .`
To test the locally built image: `./cloudenv`
To run with DEBUG mode on: `export CLOUDENV_DBG=true && ./cloudenv`
Once your changes are tested, open a pull request.